2015 | Original Paper | Book Chapter
Interactive Relevance Visual Learning for Image Retrieval
Authors: Hsin-Chia Fu, L. X. Zheng, J. B. Wang, Hsiao-Tien Pao
Published in: Advances in Computational Intelligence
This paper proposes mixture Gaussian neural networks (MGNN) to learn visual features from user-specified query image objects or regions for relevance image retrieval. Instead of segmenting query regions from sample images, the proposed MGNN performs relevance-feedback feature learning to extract the query's visual features. After feature learning, the MGNN measures the appearance difference between the query features and candidate images for image retrieval. The proposed methods were tested on the COREL image gallery and on WWW image collections, and the results were compared with leading contemporary approaches. The experimental results show that the query visual features extracted and learned by the MGNN can closely match the user's intent, and that this closeness improves with the number of feature-learning iterations. Since data of any dimensionality can be approximated by mixture Gaussian distributions, using MGNN to query and retrieve similar or relevant high-dimensional data or images is a promising direction for future work.
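The core idea of modeling query features with a Gaussian mixture and ranking images by how well they fit that model can be illustrated with a minimal sketch. This is not the paper's MGNN implementation; it substitutes a standard `scikit-learn` Gaussian mixture, and the feature vectors and component count are synthetic placeholders.

```python
# Hedged sketch: fit a Gaussian mixture to feature vectors from a
# user-specified query region, then rank candidate images by the mean
# log-likelihood of their features under that mixture (higher = more
# similar in appearance). All data below are synthetic stand-ins.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder feature vectors (e.g. color/texture descriptors) sampled
# from the query region across relevance-feedback rounds.
query_features = rng.normal(loc=0.0, scale=1.0, size=(200, 8))

# Fit a mixture of Gaussians to the query features; 3 components is an
# arbitrary choice for illustration.
mixture = GaussianMixture(n_components=3, covariance_type="diag",
                          random_state=0)
mixture.fit(query_features)

def image_score(image_features):
    """Mean per-sample log-likelihood of an image's region features
    under the learned query mixture."""
    return mixture.score(image_features)

# Two candidate images: one drawn from the query distribution, one
# from a shifted (dissimilar) distribution.
candidates = {
    "similar": rng.normal(0.0, 1.0, size=(50, 8)),
    "dissimilar": rng.normal(5.0, 1.0, size=(50, 8)),
}
ranking = sorted(candidates, key=lambda n: image_score(candidates[n]),
                 reverse=True)
print(ranking)
```

Ranking by likelihood under the query mixture avoids explicit region segmentation of the candidate images, which mirrors the abstract's motivation for learning features directly from relevance feedback.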