DOI: 10.1145/1835449.1835518

Evaluating and predicting answer quality in community QA

Published: 19 July 2010

ABSTRACT

Question answering (QA) helps one go beyond traditional keyword-based querying and retrieve information in a more precise form than a document or a list of documents. Several community-based QA (CQA) services have emerged that allow information seekers to pose their information needs as questions and receive answers from fellow users. A question may receive multiple answers from multiple users, and the asker or the community can choose the best answer. While the asker can thus indicate whether he was satisfied with the information he received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! Answers as the CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer to a given question on 13 different criteria; each answer was rated by five different workers. We then matched their assessments against the asker's actual rating of each answer. We show that the quality criteria we used faithfully match the asker's perception of a quality answer. We furthered our investigation by extracting various features from questions, answers, and the users who posted them, and training a number of classifiers to select the best answer using those features. We demonstrate the high predictive accuracy of our trained models, along with the relative merits of each feature for such prediction. These models support our argument that in the case of CQA, contextual information, such as a user's profile, can be critical in evaluating and predicting content quality.
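To make the prediction task concrete, the following is a minimal sketch, not the code used in the study, of how one might train such a best-answer classifier. The four per-answer features, the synthetic data, and the choice of logistic regression are all illustrative assumptions; the study draws a richer feature set from real Yahoo! Answers questions, answers, and user profiles, and compares several classifiers.

```python
# Illustrative sketch only: synthetic features and labels standing in for
# features extracted from questions, answers, and answerer profiles.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_answers = 1000

# Hypothetical per-answer features (not the paper's actual feature set):
# answer length in words, term overlap with the question, answerer's points,
# and the answerer's historical best-answer ratio.
X = np.column_stack([
    rng.poisson(60, n_answers),           # answer_length
    rng.uniform(0, 1, n_answers),         # question_answer_overlap
    rng.gamma(2.0, 500.0, n_answers),     # answerer_points
    rng.uniform(0, 1, n_answers),         # answerer_best_answer_ratio
])

# Synthetic stand-in for the label "the asker selected this answer as best".
score = 0.01 * X[:, 0] + 2.0 * X[:, 1] + 0.0005 * X[:, 2] + 1.5 * X[:, 3]
y = (score + rng.normal(0.0, 0.5, n_answers) > np.median(score)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice the features would be computed from crawled question/answer threads and the label would be the asker's best-answer selection; the best answer for a question can then be chosen by ranking the classifier's scores over that question's answers.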


        • Published in

          SIGIR '10: Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval
          July 2010
          944 pages
          ISBN: 9781450301534
          DOI: 10.1145/1835449

          Copyright © 2010 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States



          Qualifiers

          • research-article

          Acceptance Rates

          SIGIR '10 Paper Acceptance Rate: 87 of 520 submissions, 17%. Overall Acceptance Rate: 792 of 3,983 submissions, 20%.
