30.07.2020 | Original Research

Legal requirements on explainability in machine learning

Authors: Adrien Bibal, Michael Lognoul, Alexandre de Streel, Benoît Frénay

Published in: Artificial Intelligence and Law | Issue 2/2021


Abstract

Deep learning and other black-box models are becoming increasingly popular. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the growing number of legal requirements on the interpretability and explainability of machine learning models in the context of private and public decision making. It then explains how these legal requirements can be implemented in machine learning models, and concludes with a call for more interdisciplinary research on explainability.
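As a minimal illustration of what "implementing" such requirements can look like (a sketch under stated assumptions, not the paper's own method): sparse linear models such as the lasso are a standard way to obtain explainable predictions, because the non-zero coefficients directly identify which features drive the output. The Python sketch below assumes scikit-learn is available and uses synthetic data.

    # Minimal sketch, not the paper's method: a sparse linear model whose
    # non-zero coefficients serve as a built-in explanation of its predictions.
    # Assumes scikit-learn; the data here is synthetic.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    # Synthetic regression task: 10 features, only 3 of which are informative.
    X, y = make_regression(n_samples=200, n_features=10, n_informative=3,
                           noise=5.0, random_state=0)

    # The L1 penalty drives most coefficients to exactly zero.
    model = Lasso(alpha=1.0).fit(X, y)

    # The surviving non-zero weights name the features the model actually
    # uses, which can be reported as the basis of an explanation.
    for i, w in enumerate(model.coef_):
        if w != 0.0:
            print(f"feature {i}: weight {w:.2f}")

Models of this "interpretable by design" kind are what Rudin (2019) advocates for high-stakes decisions, in contrast to post-hoc explanations of black-box models.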

Metadata

Title: Legal requirements on explainability in machine learning
Authors: Adrien Bibal, Michael Lognoul, Alexandre de Streel, Benoît Frénay
Publication date: 30.07.2020
Publisher: Springer Netherlands
Published in: Artificial Intelligence and Law / Issue 2/2021
Print ISSN: 0924-8463
Electronic ISSN: 1572-8382
DOI: https://doi.org/10.1007/s10506-020-09270-4
