![]()

Background removal is selecting the foreground subject in an image and erasing the background so that the foreground subject can be placed on a new background. This is done with the help of a mask produced in the removal process, as shown below.

![]()

Placing the subject on a new background is called composition. As the human eye is sensitive to discrepancies in images, great care must be taken so that the composition of the extracted subject looks realistic on the new background. This can be tricky for delicate structures like hair and fur. Composition allows us to reuse images by putting them on formal or creative backgrounds.

There are two significant challenges in creating a completely automated system that can do background removal:

1. Detecting the foreground subject without human input: As the foreground subject can belong to a diverse range of objects, such as humans, animals, electronics, clothing, and furniture, the model must identify and handle the subject even if it has never seen that subject before. Handling interactions with known subjects, like a human playing basketball or sitting on a chair, can also be tricky.
2. Predicting an accurate mask: The mask must cover only the subject area. Moreover, many structures like fur and human hair have delicate, semi-transparent, intricate shapes that must be handled with care to ensure precise extraction and realistic composition.

Real Applications of Background Removal

In the next section, we will elaborate on these challenges and show examples that explain them in detail. For now, let's look at some applications of background removal:

1. Reuse photoshoots: Get more out of your existing content.
2. E-commerce: Get your product images ready and make them look more appealing for any e-commerce platform.
3. Media/Creative: Unfold your creative edge and make stunning content.
4. Profile and Passport Pictures: Create stunning profile pictures in just a few clicks. Skip the line and create passport photos anywhere.
5. Logos/Signatures/Graphics: Promote your business or any of your cool projects.

Our goal is to automatically produce a mask that extracts the foreground object from the image. There are two main challenges in doing this:

1. Automatically detecting the foreground subject.
2. Predicting an accurate mask for realistic composition.

While detecting the foreground object is easy for a human, it is not so for a computer. We focus on the second challenge in this post and leave the first for a future blog post.

A common approach to generating a mask is a technique called image segmentation, which assigns a binary value, a YES or a NO, to every pixel in an image to indicate whether that pixel belongs to the subject. This mask can then be used to extract the object, as shown below.

![]()

We can see that image segmentation did the job perfectly here. However, in more complex cases, such as those with delicate hair, image segmentation will not do the trick, as seen below.

![]()

Even though the mask covers the hair pixels, something is not right in the extraction. This is because part of the subject, particularly her hair, is semi-transparent in places and interacts with the lighting and colour of the background to produce the soft tone we see in the original image. In such a case, what we need is a transparency mask, or alpha matte, for proper extraction, as shown below.

![]()

Such a mask can completely separate foreground and background elements, even at places where they interact deeply. The created masks must have a high level of accuracy, as compositions created from background removal are often used for e-commerce or media purposes where humans will be viewing them.

Let's talk about three factors that impact the realism of compositions. Consider the example below, which illustrates how slightly different masks can cause significant visual differences in a composition.

![]()

The top image is the original, which shows a stuffed toy on a bright blue background. Shown below it, in the first column, are two masks generated by two different algorithms. The masks look very similar; however, when the extraction is done using the first, the blue light is still partially visible on the subject's hair, as shown in the second column. In the third column, we can see close-ups of a small patch on the right-hand side of the toy. Both masks cover almost the same hair area, yet their precise values make a big difference: the first algorithm has slightly overestimated its values, letting more of the background light into the extraction. Algorithms must understand the colour and lighting interactions between the foreground and background to circumvent such issues. Even a human would find it difficult and time-consuming to produce such a mask by hand, and that's where automatic background removal comes to the rescue. Let's see how our transparency mask promotes realistic composition.
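The difference between a hard segmentation mask and an alpha matte can be sketched with a few made-up pixel values (a minimal illustration, not the actual pipeline; all numbers here are invented). The blend uses the standard alpha-compositing rule C = αF + (1 − α)B:

```python
# Each "image" is a flat list of grey values in [0, 1] for a 1x4 strip of pixels.
foreground = [0.9, 0.8, 0.7, 0.6]      # subject colours (last two are wispy hair)
new_background = [0.5, 0.5, 0.5, 0.5]  # plain grey backdrop to composite onto

# 1) Image segmentation: a hard YES (1) / NO (0) per pixel.
binary_mask = [1, 1, 1, 0]
hard_cut = [f * m for f, m in zip(foreground, binary_mask)]
# The last hair pixel is dropped entirely; the one before is kept entirely.

# 2) Alpha matte: fractional coverage models semi-transparent hair.
alpha = [1.0, 1.0, 0.6, 0.1]

# Standard alpha compositing: C = alpha * F + (1 - alpha) * B
composite = [a * f + (1 - a) * b
             for a, f, b in zip(alpha, foreground, new_background)]
```

The fractional alpha lets each hair pixel blend smoothly with the new background instead of being kept or discarded wholesale, which is exactly what the hard binary mask cannot express.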
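The effect of an overestimated matte value can also be shown numerically. Below is a toy, single-pixel sketch (every pixel value and alpha estimate is invented for illustration) of why too-high alpha lets original background light survive into the new composition, again using C = αF + (1 − α)B:

```python
# Blue-channel values in [0, 1] for one semi-transparent hair pixel.
hair_blue = 0.2      # the hair's own colour contains little blue
backdrop_blue = 1.0  # the original backdrop is bright blue
true_alpha = 0.5     # the pixel is half hair, half backdrop

# What the camera records at this pixel is already an alpha blend:
observed_blue = true_alpha * hair_blue + (1 - true_alpha) * backdrop_blue

# A naive extraction treats the observed colour as the foreground and
# composites it onto a new grey background using the *estimated* alpha.
new_bg_blue = 0.5

def composite(alpha_est: float) -> float:
    return alpha_est * observed_blue + (1 - alpha_est) * new_bg_blue

good = composite(0.5)  # alpha estimated correctly
over = composite(0.8)  # alpha overestimated

# The overestimated matte keeps more of the observed pixel, so more of the
# original blue backlight leaks into the final composition (over > good).
```

This is the sense in which two masks covering "almost the same hair area" still compose differently: the precise alpha values control how much of the old background's light is carried along.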