2013 | OriginalPaper | Book Chapter
Adaptive Background Defogging with Foreground Decremental Preconditioned Conjugate Gradient
Authors: Jacky Shun-Cho Yuk, Kwan-Yee Kenneth Wong
Published in: Computer Vision – ACCV 2012
Publisher: Springer Berlin Heidelberg
The quality of outdoor surveillance videos is often degraded by bad weather such as fog, haze, and snow. Degraded videos not only offer poor visibility, but also complicate vision-based analysis such as foreground/background segmentation. Haze/fog removal, however, has never been an easy task, and is often very time consuming. Most existing methods consider only a single image and use no temporal information from the video. This paper presents a novel adaptive background defogging method. It is observed that most background regions change little between two consecutive video frames. Based on this observation, each video frame is first defogged using a background transmission map generated adaptively by the proposed foreground decremental preconditioned conjugate gradient (FDPCG). It is shown that foreground/background segmentation improves dramatically on such background-defogged frames. With the help of a foreground map, the defogging of foreground regions is then completed by 1) estimating the foreground transmission by fusion, and 2) refining the transmission with the proposed foreground incremental preconditioned conjugate gradient (FIPCG). Experimental results show that the proposed method effectively improves the visual quality of surveillance videos captured in heavy fog and snow, and is considerably more efficient than state-of-the-art image defogging methods.
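The abstract does not give the exact form of the FDPCG/FIPCG solvers, but both are variants of the preconditioned conjugate gradient (PCG) method for solving the symmetric positive-definite linear system that arises in transmission-map refinement. As an illustration only, the sketch below shows a generic PCG solver with a Jacobi (diagonal) preconditioner applied to a toy SPD system standing in for the refinement problem; the function and variable names are hypothetical and do not come from the paper.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=100):
    """Generic preconditioned conjugate gradient for SPD A.

    M_inv is a callable applying the inverse preconditioner to a
    residual vector (here: Jacobi, i.e. elementwise division by diag(A)).
    This is a textbook PCG sketch, not the paper's FDPCG/FIPCG.
    """
    x = np.zeros_like(b) if x0 is None else x0.astype(float).copy()
    r = b - A @ x            # initial residual
    z = M_inv(r)             # preconditioned residual
    p = z.copy()             # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # update search direction
        rz = rz_new
    return x

# Toy SPD system standing in for a transmission-refinement solve.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
jacobi = lambda r: r / np.diag(A)   # Jacobi preconditioner
x = pcg(A, b, jacobi)
```

The paper's "decremental" and "incremental" variants would, per the abstract, restrict or extend the set of pixels the solve operates on (background-only first, then foreground), but those details are not specified here.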