2015 | Original Paper | Book Chapter
Concept-Based Multimodal Learning for Topic Generation
Authors: Cheng Wang, Haojin Yang, Xiaoyin Che, Christoph Meinel
Publisher: Springer International Publishing
In this paper, we propose a concept-based multimodal learning model (CMLM) for generating document topics by modeling textual and visual data. Our model considers cross-modal concept similarity and unlabeled image concepts, and it can process documents with a missing modality. The model extracts semantic concepts from unlabeled images and combines them with the text modality to generate document topics. Our comparison experiments on news document topic generation show that, in the multimodal scenario, CMLM generates more representative topics than latent Dirichlet allocation (LDA) based topics for representing a given document.
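The abstract describes fusing concepts from text and images, with tolerance for a missing modality. A minimal sketch of such late fusion is shown below; the function name, score dictionaries, and fixed linear weighting are illustrative assumptions, not the authors' actual CMLM.

```python
# Illustrative sketch (not the paper's CMLM): fuse per-modality concept
# scores into a ranked topic list, skipping a modality that is missing.

def fuse_concepts(text_scores, image_scores, text_weight=0.6, top_k=3):
    """Merge concept-score dicts from two modalities into top-k topics.

    Either argument may be None (modality missing), mirroring the
    abstract's claim that documents with a missing modality are handled.
    """
    fused = {}
    for scores, weight in ((text_scores, text_weight),
                           (image_scores, 1.0 - text_weight)):
        if scores is None:  # modality missing: simply skip it
            continue
        for concept, score in scores.items():
            fused[concept] = fused.get(concept, 0.0) + weight * score
    # Rank concepts by fused score, highest first
    return sorted(fused, key=fused.get, reverse=True)[:top_k]


# Usage: "election" appears in both modalities, so it ranks first.
topics = fuse_concepts({"election": 0.9, "policy": 0.4},
                       {"crowd": 0.8, "election": 0.5}, top_k=2)
```

A real system would learn the fusion weights and measure cross-modal concept similarity rather than use a fixed linear combination, but the missing-modality handling follows the same pattern.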