Visual object and place recognition are among the most important problems in computer vision and mobile robotics. Numerous applications have been tackled with a variety of techniques in the literature; however, combining machine learning techniques for learning objects with image descriptors that fully describe the image content is another vital direction in computer vision. Such a system can learn and describe the structural features of objects or places more effectively, which in turn leads to more accurate recognition. This paper introduces a method that uses Naive Bayes to combine Kernel Principal Component Analysis (KPCA) features with Histogram of Oriented Gradients (HOG) features extracted from the visual scene. In this approach, a set of SURF features and a HOG descriptor are extracted from a given image. For each SURF feature, the minimum Euclidean distance to a visual codebook, previously constructed with K-means, is computed, and the result is combined with the HOG features. A Support Vector Machine (SVM) classifier was used for the analysis, and the results indicate that the KPCA-with-HOG method significantly outperforms the bag-of-visual-words (BoW) approach on the Caltech-101 object dataset and the IDOL visual place dataset.
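The core of the pipeline described above — quantising local descriptors against a K-means codebook by minimum Euclidean distance and concatenating the result with a global HOG vector — can be sketched as follows. This is a minimal illustration with NumPy only; the function names are hypothetical, the SURF/HOG extraction and the KPCA and Naive Bayes steps of the paper are omitted, and the inputs are assumed to be precomputed descriptor arrays.

```python
import numpy as np

def nearest_codeword(descriptors, codebook):
    # For each local descriptor, find the index of the codebook
    # entry with minimum Euclidean distance (hard assignment).
    dists = np.linalg.norm(
        descriptors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

def bow_histogram(descriptors, codebook):
    # Count assignments per visual word and L1-normalise,
    # yielding a bag-of-visual-words histogram.
    idx = nearest_codeword(descriptors, codebook)
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

def combined_descriptor(descriptors, codebook, hog_vector):
    # Concatenate the quantised local-feature histogram with the
    # global HOG vector to form one image descriptor, which would
    # then be fed to a classifier such as an SVM.
    return np.concatenate(
        [bow_histogram(descriptors, codebook), hog_vector])
```

For example, 50 local descriptors of dimension 64, an 8-word codebook, and a 36-dimensional HOG vector yield a combined descriptor of dimension 8 + 36 = 44.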
Kernel Visual Keyword Description for Object and Place Recognition
Abbas M. Ali
Tarik A. Rashid