Published in: Data Mining and Knowledge Discovery 1/2024

28-08-2023

An attention matrix for every decision: faithfulness-based arbitration among multiple attention-based interpretations of transformers in text classification

Authors: Nikolaos Mylonas, Ioannis Mollas, Grigorios Tsoumakas


Abstract

Transformers are widely used in natural language processing, where they consistently achieve state-of-the-art performance. This is mainly due to their attention-based architecture, which allows them to model rich linguistic relations between (sub)words. However, transformers are difficult to interpret. The ability to provide reasoning for its decisions is an important property for a model in domains where human lives are affected. As transformers find wide use in such fields, the need for interpretability techniques tailored to them arises. We propose a new technique that selects the most faithful attention-based interpretation among the several that can be obtained by combining different head, layer and matrix operations. In addition, we introduce two variations: (i) one that reduces computational complexity, making the technique faster and friendlier to the environment, and (ii) one that enhances performance on multi-label data. We further propose a new faithfulness metric that is better suited to transformer models and correlates highly with the area under the precision-recall curve computed on ground-truth rationales. We validate the utility of our contributions with a series of quantitative and qualitative experiments on seven datasets.
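The core idea described in the abstract — enumerating the attention-based interpretations obtainable from different layer, head and matrix operations, and keeping the one that scores highest on a faithfulness measure — can be sketched as follows. The specific aggregation operators and the deletion-based faithfulness score below are illustrative assumptions for the sketch, not the authors' exact formulation.

```python
import numpy as np

def candidate_interpretations(attn):
    """Build candidate token-importance vectors from an attention tensor.

    attn: array of shape (layers, heads, seq, seq).
    Each candidate combines one layer operation, one head operation and
    one matrix operation (the concrete choices here are assumptions).
    """
    layer_ops = {"last": lambda a: a[-1], "mean": lambda a: a.mean(axis=0)}
    head_ops = {"mean": lambda a: a.mean(axis=0), "max": lambda a: a.max(axis=0)}
    matrix_ops = {"cls_row": lambda m: m[0], "col_mean": lambda m: m.mean(axis=0)}
    out = {}
    for ln, lf in layer_ops.items():
        for hn, hf in head_ops.items():
            for mn, mf in matrix_ops.items():
                out[(ln, hn, mn)] = mf(hf(lf(attn)))
    return out

def faithfulness(scores, predict, tokens, k=1):
    """Toy deletion-based faithfulness: the drop in model confidence after
    removing the top-k tokens ranked by the interpretation."""
    top = set(np.argsort(scores)[::-1][:k].tolist())
    masked = [t for i, t in enumerate(tokens) if i not in top]
    return predict(tokens) - predict(masked)

def most_faithful(attn, predict, tokens):
    """Arbitrate among candidates: return the (key, scores) pair whose
    interpretation yields the largest confidence drop when its top
    tokens are deleted."""
    cands = candidate_interpretations(attn)
    return max(cands.items(), key=lambda kv: faithfulness(kv[1], predict, tokens))
```

Here `predict` is any stand-in classifier that maps a token list to a confidence; in practice it would be the transformer itself, with `attn` taken from its attention outputs.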


Metadata
Title
An attention matrix for every decision: faithfulness-based arbitration among multiple attention-based interpretations of transformers in text classification
Authors
Nikolaos Mylonas
Ioannis Mollas
Grigorios Tsoumakas
Publication date
28-08-2023
Publisher
Springer US
Published in
Data Mining and Knowledge Discovery / Issue 1/2024
Print ISSN: 1384-5810
Electronic ISSN: 1573-756X
DOI
https://doi.org/10.1007/s10618-023-00962-4
