2021 | OriginalPaper | Chapter

4. AI in the EU: Ethical Guidelines as a Governance Tool

Abstract

This chapter examines ethical guidelines as a tool for the governance of artificial intelligence (AI). Analysing the European development towards trustworthy AI, with a focus on the High-Level Expert Group on AI appointed by the European Commission, the chapter highlights the interaction between guidelines and law in light of technological advancements. The chapter explores why ethical framing has such a prominent place in discussions of AI. Applied AI, it is argued here, must be understood through its interaction with social structures and human expressions, resulting in the need for a multidisciplinary understanding of its governance. Via an analysis of the fuzziness of the AI concept, as well as the related notions of risk and transparency, the chapter concludes by stressing the necessity of moving from principles to process in the governance of AI in the EU.
Metadata
Title
AI in the EU: Ethical Guidelines as a Governance Tool
Author
Stefan Larsson
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-63672-2_4