Abstract
Image background subtraction is the technique of eliminating the background in an image or video to extract the foreground object for further processing, such as object recognition in surveillance applications or object editing in film production. The problem becomes difficult when the background is cluttered, and it therefore remains an open problem in computer vision. Conventional background-extraction models have long relied only on image-level information to model the background. Recently, however, deep-learning-based methods have succeeded in extracting styles from images. Building on deep neural styles, we propose an image background subtraction technique based on style comparison. Furthermore, we show that the style comparison can be performed pixel-wise, and that it can be applied to spatially adaptive style transfer. As an example, we show that automatic background elimination and background style transformation can be achieved by pixel-wise style comparison with minimal human interaction, i.e., by selecting only a small patch from each of the target and style images.
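The pixel-wise style comparison against a user-selected patch could be sketched roughly as follows. This is only an illustrative assumption, not the paper's actual method: the abstract does not specify the style descriptor, so we use a local Gram matrix over CNN-like feature channels (the common choice in neural style work), and a random array stands in for a real feature map.

```python
import numpy as np

def local_gram(feat, y, x, r):
    """Gram matrix of C-dim features in a (2r+1)^2 window around (y, x)."""
    C, H, W = feat.shape
    y0, y1 = max(0, y - r), min(H, y + r + 1)
    x0, x1 = max(0, x - r), min(W, x + r + 1)
    patch = feat[:, y0:y1, x0:x1].reshape(C, -1)   # C x N window
    return patch @ patch.T / patch.shape[1]        # C x C style descriptor

def style_distance_map(feat, ref_gram, r=2):
    """Frobenius distance of each pixel's local Gram to a reference Gram."""
    C, H, W = feat.shape
    dist = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            dist[y, x] = np.linalg.norm(local_gram(feat, y, x, r) - ref_gram)
    return dist

# Synthetic stand-in for a CNN feature map: two regions of different "style".
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))
feat[:, :, 8:] *= 3.0                       # right half has a different style

# The user selects a small background patch (here, on the left side);
# pixels whose local style matches it are flagged as background.
ref_gram = local_gram(feat, 8, 3, r=2)
dist = style_distance_map(feat, ref_gram)
mask = dist < np.median(dist)               # True = style-similar to the patch
```

In this toy setup, `mask` ends up True over the region sharing the selected patch's statistics and False elsewhere, which is the behavior the abstract's automatic background elimination relies on.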