DOI: 10.1145/2806416.2806472
research-article

More Accurate Question Answering on Freebase

Published: 17 October 2015

ABSTRACT

Real-world factoid or list questions often have a simple structure, yet are hard to match to facts in a given knowledge base due to high representational and linguistic variability. For example, answering "who is the ceo of apple" on Freebase requires a match to an abstract "leadership" entity with three relations "role", "organization" and "person", and two other entities "apple inc" and "managing director". Recent years have seen a surge of research activity on learning-based solutions for this problem. We further advance the state of the art by adopting learning-to-rank methodology and by fully addressing the inherent entity recognition problem, which was neglected in recent works.
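The "leadership" example above can be sketched with a toy triple store: an abstract mediator node ties the organization, the person, and the role together, so answering the question means finding a mediator whose "organization" and "role" match and reading off its "person". The entity and relation names below are illustrative stand-ins, not the exact Freebase schema identifiers.

```python
# Toy knowledge base of (subject, predicate, object) triples. The
# "leadership_*" nodes play the role of Freebase's abstract mediator
# entities; all names here are invented for illustration.
TRIPLES = [
    ("leadership_1", "organization", "apple_inc"),
    ("leadership_1", "person", "tim_cook"),
    ("leadership_1", "role", "managing_director"),
    ("leadership_2", "organization", "apple_inc"),
    ("leadership_2", "person", "steve_jobs"),
    ("leadership_2", "role", "board_member"),
]

def answer_leadership(org, role):
    """Return persons p such that some mediator m links org, role, and p."""
    answers = []
    for m, pred, obj in TRIPLES:
        if pred == "organization" and obj == org:
            # Collect all properties of the mediator node m.
            props = {p: o for s, p, o in TRIPLES if s == m}
            if props.get("role") == role:
                answers.append(props["person"])
    return answers

print(answer_leadership("apple_inc", "managing_director"))  # ['tim_cook']
```

The point of the sketch is that the question mentions only two surface strings ("ceo", "apple"), while the match involves three relations and three entities, one of them abstract; this gap is the representational variability the abstract refers to.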

We evaluate our system, called Aqqu, on two standard benchmarks, Free917 and WebQuestions, improving the previous best result for each benchmark considerably. These two benchmarks exhibit quite different challenges, and many of the existing approaches were evaluated (and work well) only for one of them. We also consider efficiency aspects and take care that all questions can be answered interactively (that is, within a second). Materials for full reproducibility are available on our website: http://ad.informatik.uni-freiburg.de/publications.


    • Published in

      cover image ACM Conferences
      CIKM '15: Proceedings of the 24th ACM International on Conference on Information and Knowledge Management
      October 2015
      1998 pages
      ISBN:9781450337946
      DOI:10.1145/2806416

      Copyright © 2015 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

CIKM '15 paper acceptance rate: 165 of 646 submissions, 26%. Overall acceptance rate: 1,861 of 8,427 submissions, 22%.
