2006 | OriginalPaper | Chapter
Learning by Integrating Information Within and Across Fixations
Authors : Predrag Neskovic, Liang Wu, Leon N Cooper
Published in: Artificial Neural Networks – ICANN 2006
Publisher: Springer Berlin Heidelberg
In this work we introduce a Bayesian Integrate And Shift (BIAS) model for learning object categories. The model is biologically inspired and uses Bayesian inference to integrate information within and across fixations. In our model, an object is represented as a collection of features arranged at specific locations relative to the fixation point. Even though the number of feature detectors we use is large, learning does not require a large amount of training data, because we introduce an intermediate representation, object views, between an object and its features, thereby reducing the dependence among the feature detectors. We tested the system on four object categories and demonstrated that it can learn a new category from only a few training examples.
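The abstract's idea of accumulating feature evidence across fixations, with views as an intermediate layer between classes and features, can be illustrated with a minimal Bayesian sketch. Everything below (the class/view/feature counts, Bernoulli feature detectors, uniform view prior, and all probability values) is a hypothetical toy setup for illustration, not the parameterization used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 object classes, each with 2 "views"; each view
# assigns a Bernoulli probability to each of 5 feature detectors firing
# at its location relative to the fixation point (all values illustrative).
n_classes, n_views, n_features = 3, 2, 5
p_feat = rng.uniform(0.1, 0.9, size=(n_classes, n_views, n_features))

log_prior = np.full(n_classes, np.log(1.0 / n_classes))

def update(log_post, detections):
    """One fixation: integrate binary feature detections into the posterior.

    Features are treated as conditionally independent given a view; views
    are then marginalized out. This intermediate view layer is what keeps
    the feature detectors weakly dependent despite their large number.
    """
    # log P(detections | class, view): sum of Bernoulli log-likelihoods
    ll = (detections * np.log(p_feat) +
          (1 - detections) * np.log(1 - p_feat)).sum(axis=2)
    # Marginalize over views (uniform view prior assumed here)
    log_like = np.log(np.exp(ll).mean(axis=1))
    log_post = log_post + log_like
    return log_post - np.logaddexp.reduce(log_post)  # renormalize

# Simulate several fixations on an object of class 0, view 0.
log_post = log_prior
for _ in range(5):
    detections = (rng.random(n_features) < p_feat[0, 0]).astype(float)
    log_post = update(log_post, detections)

posterior = np.exp(log_post)
print(posterior.round(3))
```

Each fixation sharpens the posterior over classes, mirroring the "integrate and shift" cycle the abstract describes: evidence is integrated within a fixation and carried across fixations via the posterior.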