Many models of visual attention have been proposed in the past and have proved useful, e.g. in robotic applications. Recent work has shown that attention is attracted not only by single visual features such as color, orientation, or curvature, but also by complete objects. Symmetry is a property of many man-made and natural objects and has therefore been identified as a candidate for attentional operators. However, few techniques to date exploit symmetry-based saliency, and those that do work mainly on 2D data. Methods that operate on 3D data assume complete object models, which limits their use as bottom-up attentional operators on RGB-D images, which provide only partial views of objects. In this paper, we present a novel local symmetry-based operator that works on 3D data and does not assume any object model. Symmetry saliency maps are estimated at multiple scales to detect objects of various sizes. For evaluation, a Winner-Take-All neural network is used to compute attention points. We evaluate the proposed approach on two datasets and compare it to state-of-the-art methods. Experimental results show that the proposed algorithm outperforms the current state of the art in terms of the quality of fixation points.
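The abstract mentions that attention points are computed from the saliency map with a Winner-Take-All (WTA) network. As a rough illustration of that selection step only (not the paper's actual network), the following sketch repeatedly picks the global maximum of a saliency map and suppresses its neighborhood, a simple inhibition-of-return scheme; the function name, radius, and number of points are illustrative assumptions:

```python
import numpy as np

def winner_take_all(saliency, n_points=3, inhibition_radius=2):
    """Select attention points by repeatedly taking the global
    maximum of the saliency map and suppressing its neighborhood
    (inhibition of return). Simplified stand-in for a WTA network;
    all parameters are illustrative, not from the paper."""
    s = saliency.astype(float).copy()
    points = []
    for _ in range(n_points):
        # winner: location of the current global maximum
        idx = tuple(int(i) for i in np.unravel_index(np.argmax(s), s.shape))
        points.append(idx)
        # inhibit a square neighborhood around the winner
        r0 = max(idx[0] - inhibition_radius, 0)
        c0 = max(idx[1] - inhibition_radius, 0)
        s[r0:idx[0] + inhibition_radius + 1,
          c0:idx[1] + inhibition_radius + 1] = -np.inf
    return points

# toy saliency map with two salient peaks
sal = np.zeros((10, 10))
sal[2, 3] = 1.0
sal[7, 8] = 0.8
pts = winner_take_all(sal, n_points=2, inhibition_radius=2)
print(pts)  # [(2, 3), (7, 8)]
```

In the paper's setting, the saliency maps come from the multi-scale local 3D symmetry operator applied to 2.5D point clouds; here a toy 2D array stands in for such a map.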
Local 3D Symmetry for Visual Saliency in 2.5D Point Clouds. Springer Berlin Heidelberg.