2011 | Original Paper | Book chapter
Modeling Urban Scenes in the Spatial-Temporal Space
Authors: Jiong Xu, Qing Wang, Jie Yang
Published in: Computer Vision – ACCV 2010
Publisher: Springer Berlin Heidelberg
This paper presents a technique to model 3D urban scenes in the spatial-temporal space using a collection of photos that span many years. We propose a middle-level representation, the "building", to characterize significant structural changes in the scene. We first use structure-from-motion techniques to build 3D point clouds, which are a mixture of scenes from different periods of time. We then segment the point clouds into independent buildings with a hierarchical method: coarse clustering on sparse points followed by fine classification on dense points, based on the spatial distance between points and the difference between their visibility vectors. In the fine classification stage, we segment building candidates with a probabilistic model defined jointly over the spatial and temporal dimensions. We employ a z-buffering based method to infer the existence of each building in each image. After recovering the temporal order of the input images, we finally obtain 3D models of these buildings along the time axis. We present experiments on both toy building images captured in our lab and real urban scene images to demonstrate the feasibility of the proposed approach.
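The coarse clustering step described above groups sparse 3D points using two cues: spatial proximity and agreement of visibility vectors (points belonging to buildings from different eras may be close in space but are seen in disjoint sets of photos). The sketch below illustrates one plausible greedy region-growing variant of this idea; the function name, thresholds, and the binary visibility-matrix encoding are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def coarse_cluster(points, visibility, d_max=1.0, v_max=0.5):
    """Greedy coarse clustering of sparse 3D points.

    points:     (N, 3) array of 3D coordinates.
    visibility: (N, M) binary matrix; visibility[i, j] = 1 if point i
                is observed in image j (a "visibility vector" per point).
    Two points grow into the same cluster when they lie within d_max
    of each other AND their visibility vectors disagree on fewer than
    a fraction v_max of the images.  Both thresholds are illustrative.
    """
    n = len(points)
    labels = -np.ones(n, dtype=int)   # -1 means "not yet assigned"
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:                  # region-grow over compatible neighbors
            k = stack.pop()
            d = np.linalg.norm(points - points[k], axis=1)
            v = np.abs(visibility - visibility[k]).mean(axis=1)
            mask = (labels == -1) & (d < d_max) & (v < v_max)
            labels[mask] = next_label
            stack.extend(np.flatnonzero(mask))
        next_label += 1
    return labels

# Toy example: two nearby points seen in the same early photos, one point
# at the same location seen only in a later photo (a rebuilt structure),
# and two distant points from another building.
points = np.array([[0.0, 0, 0], [0.2, 0, 0], [0.1, 0.1, 0],
                   [5.0, 0, 0], [5.2, 0, 0]])
visibility = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1],
                       [0, 0, 1], [0, 0, 1]])
labels = coarse_cluster(points, visibility)
```

Note how the visibility term separates the third point from the first two even though they nearly coincide in space: spatial distance alone cannot distinguish buildings that occupied the same site in different time periods.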