To search or to label?: predicting the performance of search-based automatic image classifiers

ABSTRACT
In this work we explore the trade-offs in acquiring training data for image classification models through automated web search as opposed to human annotation. Automated web search incurs no human-labor cost but can degrade classification performance, while human annotation is expensive yet yields better-performing models. The primary contribution of this work is a system for predicting which visual concepts will gain the most in performance from investing human effort in obtaining annotations. We frame this as estimating the absolute gain in average precision (AP) obtained by training on human annotations instead of web search results. To estimate the AP gain, we build statistical classifiers on top of a number of quality-prediction features. A feature selection algorithm comparing the predictors shows that cross-domain image similarity and cross-domain model generalization metrics are strong predictors, while concept frequency and within-domain model quality are weak predictors. In a test application, we find that the prediction scheme can save up to 75% of annotation effort while incurring only marginal damage (a 10% relative decrease in mean average precision) to the overall performance of the concept models.
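The selection policy the abstract describes can be sketched as a simple budgeted ranking: given a predicted AP gain per visual concept, spend the annotation budget on the concepts with the largest predicted gains and fall back to web-search training data for the rest. The sketch below is illustrative only; the function name, concept names, and gain values are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the budgeted selection policy. All names and
# numbers here are illustrative assumptions, not the paper's method.

def allocate_annotation_budget(predicted_gain, budget_fraction):
    """Return the set of concepts to annotate manually.

    predicted_gain: dict mapping concept name -> estimated AP gain from
                    using human annotations instead of web search.
    budget_fraction: fraction of concepts we can afford to annotate.
    """
    # Rank concepts by predicted AP gain, highest first.
    ranked = sorted(predicted_gain, key=predicted_gain.get, reverse=True)
    n_annotate = round(len(ranked) * budget_fraction)
    return set(ranked[:n_annotate])

# Toy example: annotate only the 25% of concepts with the highest predicted
# gain, mirroring the paper's reported ~75% savings in annotation effort.
gains = {"car": 0.02, "person": 0.01, "flag": 0.15, "meeting": 0.08}
to_annotate = allocate_annotation_budget(gains, budget_fraction=0.25)
print(to_annotate)  # -> {'flag'}
```

In practice the predicted gains would come from the statistical classifiers built on the quality-prediction features the abstract mentions; here they are supplied by hand.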