2020 | Original Paper | Book Chapter
Jointly Learning Text and Visual Information in the Scientific Domain
Authors: Jose Manuel Gomez-Perez, Ronald Denaux, Andres Garcia-Silva
Published in: A Practical Guide to Hybrid Natural Language Processing
In this chapter we address multi-modality in domains where not only text but also images or, as we will see next, scientific figures and diagrams are important sources of information for the task at hand. Compared to natural images, scientific figures are particularly hard for machines to understand. However, scientific literature contains a valuable source of information that has until now remained untapped: the correspondence between a figure and its caption. In this chapter we show what can be learnt by looking at a large number of figures and reading their captions, and describe a figure-caption correspondence learning task that exploits this observation. Training visual and language networks without supervision other than pairs of unconstrained figures and captions is shown to successfully solve this task. We also follow up on previous chapters and illustrate how transferring lexical and semantic knowledge from a knowledge graph significantly enriches the resulting features. Finally, the positive impact of such hybrid, semantically enriched features is demonstrated in two transfer learning experiments involving scientific text and figures: multi-modal classification and machine comprehension for question answering.
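The figure-caption correspondence task described above can be sketched as a two-tower setup: one network embeds figures and another embeds captions into a shared space, and the self-supervised signal comes from labelling true (figure, caption) pairs as positives and shuffled pairs as negatives. The sketch below is a minimal, illustrative version of that idea in numpy; the feature dimensions, the random linear projections standing in for the trained vision and language networks, and all names are assumptions, not the chapter's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the chapter's actual values).
FIG_DIM, TXT_DIM, EMB_DIM = 512, 300, 128

# Stand-ins for the visual and language towers: in the real task these are
# trained networks; here they are fixed random linear projections.
W_fig = rng.normal(size=(FIG_DIM, EMB_DIM)) / np.sqrt(FIG_DIM)
W_txt = rng.normal(size=(TXT_DIM, EMB_DIM)) / np.sqrt(TXT_DIM)

def embed(x, W):
    """Project raw features into the shared space and L2-normalise."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def correspondence_score(fig_feats, cap_feats):
    """Cosine similarity in the shared space: higher = better match."""
    return np.sum(embed(fig_feats, W_fig) * embed(cap_feats, W_txt), axis=-1)

# Self-supervised pair construction: positives are true (figure, caption)
# pairs from the corpus; negatives pair each figure with a mismatched
# caption obtained by shuffling (here, rolling) the caption batch.
n = 8
figures = rng.normal(size=(n, FIG_DIM))
captions = rng.normal(size=(n, TXT_DIM))
pos_scores = correspondence_score(figures, captions)
neg_scores = correspondence_score(figures, np.roll(captions, 1, axis=0))
scores = np.concatenate([pos_scores, neg_scores])
labels = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = true pair
print(scores.shape, labels.shape)
```

A training loop would then optimise the two towers so that `correspondence_score` separates the two label groups (e.g. with a binary cross-entropy or contrastive loss); with the untrained random projections above, positive and negative scores are of course indistinguishable.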