2020 | OriginalPaper | Chapter

Machine Learning – The Results Are Not the Only Thing that Matters! What About Security, Explainability and Fairness?

Authors: Michał Choraś, Marek Pawlicki, Damian Puchalski, Rafał Kozik

Published in: Computational Science – ICCS 2020

Publisher: Springer International Publishing


Abstract

Recent advances in machine learning (ML) and the surge in computational power have paved the way for the proliferation of ML and Artificial Intelligence (AI) across many domains and applications. Still, beyond achieving good accuracy and results, many challenges must be addressed before ML algorithms can be applied effectively in critical applications for the good of society. The aspects that can hinder practical and trustworthy ML and AI are the lack of security of ML algorithms, as well as the lack of fairness and explainability. In this paper we discuss these aspects and provide a state-of-the-art analysis of the relevant work in these domains.
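As a minimal illustration of one of the fairness concerns the abstract raises (not taken from the paper itself), the sketch below computes the demographic (statistical) parity gap of a classifier's decisions on toy, hypothetical data: the difference in positive-decision rates between two groups defined by a protected attribute. All arrays and names here are invented for illustration.

```python
import numpy as np

# Hypothetical data: `group` is a binary protected attribute,
# `y_pred` the classifier's positive (1) / negative (0) decisions.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])

# Demographic parity compares P(y_pred = 1 | group = 0)
# with P(y_pred = 1 | group = 1); a gap of 0 means parity holds.
rate_0 = y_pred[group == 0].mean()
rate_1 = y_pred[group == 1].mean()
parity_gap = abs(rate_0 - rate_1)

print(rate_0, rate_1, parity_gap)  # 0.75 0.25 0.5
```

A large gap like this one does not by itself prove discrimination, which is why the literature distinguishes several competing fairness definitions (demographic parity, equality of opportunity, and others).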


Metadata
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-50423-6_46