Are Mixture-of-Modality-Experts Transformers Robust to Missing Modality During Training and Inferring?

  • 2024
  • OriginalPaper
  • Chapter

Abstract

The chapter examines the robustness of Mixture-of-Modality-Experts (MoME) Transformers when modalities are missing during training and inference. It begins by discussing the limitations of current multi-modal Transformers and the need for models that can handle incomplete data. The authors pose a series of sub-questions to guide their research and conduct experiments comparing the robustness of MoME Transformers with vanilla Transformers. They also explore multi-task learning and data imputation techniques, such as Mixup, to improve performance under missing modalities. The chapter concludes with a novel method based on MoME Transformers and multi-task learning, which demonstrates high robustness to missing modalities with no extra computational cost. The method is validated on three popular multi-modal datasets, showing significant improvements over existing approaches.
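The abstract mentions Mixup as a data imputation technique for missing modalities. A minimal sketch of the standard Mixup operation — convex-combining two samples and their labels with a Beta-sampled coefficient — is shown below; the function name, defaults, and usage are illustrative, not the authors' exact implementation:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Classic Mixup: convex-combine two samples and their labels.

    A missing-modality input could, in principle, be imputed by mixing
    features from samples where that modality is present; this sketch
    shows only the standard Mixup formulation.
    """
    rng = rng or np.random.default_rng(0)
    # Mixing coefficient drawn from Beta(alpha, alpha), so lam is in [0, 1].
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam
```

With small `alpha`, the Beta distribution concentrates near 0 and 1, so mixed samples stay close to one of the two originals — a common default in the Mixup literature.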

Title: Are Mixture-of-Modality-Experts Transformers Robust to Missing Modality During Training and Inferring?
Authors: Yan Gao, Tong Xu, Enhong Chen
Copyright Year: 2024
DOI: https://doi.org/10.1007/978-3-031-57808-3_12