Algorithmic Discrimination and Ethical Perspective of Artificial Intelligence
- 2024
- Book
- Editors
- Muharrem Kılıç
- Sezer Bozkuş Kahyaoğlu
- Publisher
- Springer Nature Singapore
About this book
This book delves into the complex intersection between artificial intelligence and human rights violations, shedding light on the far-reaching implications within the framework of discrimination and the pivotal role equality bodies play in combatting these issues. Through a collaborative effort between the Human Rights and Equality Institution of Türkiye (HREIT) and Hasan Kalyoncu University, the groundbreaking "International Symposium on the Effects of Artificial Intelligence in the Context of the Prohibition of Discrimination" took place on March 30, 2022, in Gaziantep. This book is the outcome of this symposium, bringing attention to the alarming issues of "bias and discrimination" prevalent in the application of artificial intelligence. With a commitment to Sustainable Development Goal 8.10 of safeguarding human rights in the digital realm and countering the adverse effects of artificial intelligence, this work is an essential contribution to the Human Rights Action Plan.
Comprising an array of insightful chapters, this book offers an in-depth exploration of artificial intelligence technologies, encompassing a broad spectrum of topics ranging from data protection to algorithmic discrimination, the deployment of artificial intelligence in criminal proceedings to combating hate speech, and from predictive policing to meta-surveillance. It aims to serve as a call to action, urging societies and policymakers to acknowledge the potential threats posed by AI and recognize the need for robust legislative frameworks and ethical principles to ensure that international standards on human rights are upheld in the face of technological advancements.
Table of Contents
- Frontmatter
- Introduction
- Frontmatter
Chapter 1. The Interaction of Artificial Intelligence and Legal Regulations: Social and Economic Perspectives
Muharrem Kılıç, Sezer Bozkuş Kahyaoğlu
Abstract: With artificial intelligence, expectations of facilitating change arise not only at work but also at home and in all social environments, including digital ones. This opens the discussion of artificial intelligence applications and their legal interaction from an ethical perspective concerning humanity, spanning every environment, public and private alike. This study aims to reveal the situations that may arise if the algorithms underlying artificial intelligence applications are biased, and what needs to be done to prevent this, in relation to the Fair, Accountable, and Transparent (FAT) approach. Given that artificial intelligence has a wide variety of dimensions, we aim to contribute through its legal and economic perspectives.
- Prohibition of Discrimination in the Age of Artificial Intelligence
- Frontmatter
Chapter 2. Socio-political Analysis of AI-Based Discrimination in the Meta-surveillance Universe
Muharrem Kılıç
Abstract: The AI-based “digital world order”, which continues to develop rapidly with technological advances, points to a great transformation from the business sector to the health sector, and from educational services to the judicial sector. All these developments make fundamental rights and freedoms more fragile in terms of human rights politics. This platform of virtuality brings with it discussions of “digital surveillance” or “meta-surveillance”, which we can define as a new type of surveillance. It should be stated that the surveillance ideology produced by this surveillance power technology also has an effect that we can describe as “panoptic discrimination”. The use of these algorithms, especially in the judicial sector, brings about a transformation in terms of discrimination and equality law. For that reason, the main discussion on the use of AI-based applications focuses on algorithmic bias and discrimination. AI-based applications are seen to lead to discrimination based sometimes on gender and sometimes on race, religion, wealth, or health status. As equality bodies, it is significant for national human rights institutions to combat algorithmic discrimination and to develop strategies against it.
-
Chapter 3. Rethinking Non-discrimination Law in the Age of Artificial Intelligence
Selin Çetin Kumkumoğlu, Ahmet Kemal Kumkumoğlu
Abstract: Irrespective of the artificial intelligence (“AI”) developments of our age, discrimination has always found a way to influence our communities, directly or indirectly. At the same time, discrimination against individuals is prohibited, as a reflection of the principle of equality, in the constitutions of contemporary societies and in international laws regulating the fundamental rights of the individual. Within the framework of the mechanisms developed on this principle and constitutional basis, a legal struggle is waged against discrimination of individuals as persons or as part of a community. However, discrimination has reached a different dimension in the digital environment with developing technologies, especially AI systems. Such systems bring new problems, and the anti-discrimination rules and mechanisms created for the physical world cannot completely solve them. Discrimination by AI systems can arise from causes such as biased data sets and AI models, insufficient training data, and human-induced prejudices. At the same time, owing to the black-box problem in AI systems, the fact that it is not always clear which inputs lead to which result is another obstacle to detecting discriminatory outcomes. Moreover, problems caused by applications such as credit scoring, exam grade determination, face recognition, and predictive policing, in which individuals explicitly face discrimination due to AI systems, are emerging more and more. However, individuals do not always know that they are dealing with an AI system, or whether they have been subject to discrimination.
For instance, it has been revealed that the algorithm used in visa applications in the UK ranked applications as red, green, and yellow, yet classified a certain group as persons of suspicious nationality; applications made by persons of this nationality received higher risk scores and were more likely to be rejected. Even though the use of AI systems in our socio-economic life has become indispensable, preventing the discriminatory results caused by these systems also triggers the obligations of states in the context of fundamental rights. In this regard, the state has an obligation not to use discriminatory AI systems, within the scope of its negative obligation; likewise, it has the obligation to ensure that private institutions or individuals cannot violate a fundamental right of other individuals in a way contrary to non-discrimination, within the scope of its positive obligation. Furthermore, as in the process of protecting each fundamental right, the state has an obligation to ensure that AI systems do not cause discrimination and, where discrimination occurs, to eliminate the elements that cause it. In parallel with the development of AI technologies, it may be necessary to reinterpret existing rules and mechanisms or to establish new ones. In this sense, as in the draft Artificial Intelligence Act (“AIA”), given that AI has become a unique sector with its own spheres of influence, organizing an institution specific to AI and establishing an audit mechanism for developing and placing AI on the market could be an effective way to prevent and eliminate discrimination.
In this regard, this chapter aims to open a discussion about implementing newly emerging solutions in the Turkish legal framework, such as introducing a pre-audit mechanism in the form of an “AI-human rights impact assessment”, establishing AI audit mechanisms, and notifying individuals that they have been subject to discrimination.
-
Chapter 4. Regulating AI Against Discrimination: From Data Protection Legislation to AI-Specific Measures
Ahmet Esad Berktaş, Saide Begüm Feyzioğlu
Abstract: Various pieces of data protection legislation acknowledge the right to the protection of personal data as a fundamental human right and impose certain legal obligations on those who have access to personal data, to prevent this data from being used without the data subject’s knowledge and, in some cases, consent. Processing personal data through automated decision-making (ADM) systems bears the risk of discrimination. Especially when these ADM systems use artificial intelligence (AI) and machine learning technologies, natural persons’ data may be fed into the system to train the model. Hence, natural persons’ personal data constitute a basis for ADM systems’ decisions. Data protection legislation includes certain general principles and measures to prevent misjudgments and discrimination. Under these principles and measures, data processing must be adequate, relevant, and limited to the intended purposes; “privacy by design” and “privacy by default” principles, along with objection mechanisms for negative decisions taken exclusively by ADM systems, must be implemented; and accountability and a risk-based approach must be considered. On the other hand, data protection legislation may not be sufficient to eliminate all the risks and threats of AI. Hence, specific regulations, guidelines, and recommendations addressing AI are being drafted.
-
Chapter 5. Can the Right to Explanation in GDPR Be a Remedy for Algorithmic Discrimination?
Tamer Soysal
Abstract: Since the birth of computation with Alan Turing, a kind of “excellence/extraordinariness” and “objectivity” has been attributed to algorithmic decision-making processes. However, increasing research in recent years has revealed that algorithms and machine learning systems can contain disturbing levels of bias and discrimination. Today, efforts to build “fairness”, “transparency”, and “accountability” into algorithms have accelerated. This paper discusses whether, in this new environment created by algorithms, the “right to explanation” regulation in the GDPR, which became applicable on May 25, 2018, in the EU, can be used as a remedy, and what its limits are.
- Evaluation of Artificial Intelligence Applications in Terms of Criminal Law
- Frontmatter
Chapter 6. Sufficiency of Struggling with the Current Criminal Law Rules on the Use of Artificial Intelligence in Crime
Olgun Değirmenci
Abstract: Every new technology affects crime, which is a social phenomenon. This interaction takes the form of either the emergence of new forms of crime or the facilitation of committing crime. Building on the definition of intelligence as the ability to adapt to changes, artificial intelligence is defined as “the ability to perceive a complex situation and make rational decisions accordingly”. On this definition, where the decisions taken constitute a crime, it is necessary to determine responsibility in terms of criminal law. The criminal responsibility of artificial intelligence itself may immediately come to mind. However, holding artificial intelligence, which has no legal personality, responsible under criminal law is controversial. Secondly, the responsibility of the software developer who created the artificial intelligence algorithm can be discussed; in this second case, the willful and negligent responsibility of the developer should be examined separately. Regarding the negligent responsibility of the developer, the question of whether the use of artificial intelligence in committing a crime was foreseeable should be addressed. This paper examines whether existing regulations suffice to determine criminal responsibility where an artificial intelligence algorithm is used in the commission of a crime.
-
Chapter 7. Prevention of Discrimination in the Practices of Predictive Policing
Murat Volkan Dülger
Abstract: Artificial intelligence (AI)-driven regulations are becoming increasingly prevalent in the field of law. In this context, predictive policing practices have emerged for detecting and preventing potential criminality. In predictive policing, data on crimes committed in the past (such as the setting of the crime, its place and time, perpetrator, and victim) are analyzed by algorithms, and a risk assessment regarding the commission of a new crime is conducted. Within the framework of this risk assessment, the police take the necessary precautions to prevent crime. The use of AI in predictive policing creates the impression that the risk assessment and its results are free of human bias. On the contrary, AI algorithms embody the biases of the people who design them. In addition, the data analyzed by algorithms are not free from the biases of societies and from class inequalities, either. Therefore, predictive policing practices are also likely to reflect the discriminatory attitudes and practices of both individuals and societies. To prevent this, predictive policing should be critically examined and possible solutions discussed. This study examines the side of predictive policing that is prone to discrimination and discusses the steps that can be taken to protect minorities and vulnerable groups in society.
-
Chapter 8. Issues that May Arise from Usage of AI Technologies in Criminal Justice and Law Enforcement
Benay Çaylak
Abstract: Due to constant and swift technological advancement, artificial intelligence technologies have become an integral part of our daily lives and, as a result, have started to impact various areas of our society. Legal systems have proved no exception, as many countries have taken steps to implement AI technologies in their legal systems to improve law enforcement and criminal justice, making changes in various processes including, but not limited to, preventing crimes, locating perpetrators, accelerating judicial processes, and improving the accuracy of judicial decisions. While the use of AI technologies has improved criminal justice and law enforcement in various respects, concerning instances have demonstrated that AI technologies may reach biased, discriminatory, or simply inaccurate conclusions that may harm people. This realization is even more alarming considering that criminal justice and law enforcement consist of extremely critical and fragile processes, where a wrong decision may cost someone their freedom or, in some cases, their life. In addition to discrimination and bias, automated decision-making processes have a number of other issues, such as lack of transparency and accountability, jeopardization of the presumption of innocence, and concerns regarding personal data protection, cyber-attacks, and technical challenges. Implementing AI technologies in legal processes should be encouraged, since criminal justice and law enforcement could benefit from recent advancements in technology, and more accurate, more just, and faster judicial processes may be created. However, it should be carefully considered that implementing AI systems, still in their infancy, in legal processes that could lead to severe consequences may cause serious and, in some cases, irrevocable damage.
This study aims to address current and possible issues in the use of AI technologies in criminal justice and law enforcement, providing solutions where possible.
- Evaluation of the Interaction of Law and Artificial Intelligence Within Different Application Areas
- Frontmatter
Chapter 9. Artificial Intelligence and Prohibition of Discrimination from the Perspective of Private Law
Ş. Barış Özçelik
Abstract: Artificial intelligence (AI) technologies promise to change our lives positively in many respects while bringing along some risks. One of these risks is the possibility that decisions based on AI systems contain discrimination. Since the prohibition of discrimination is predominantly seen as a matter of public law, it may seem questionable to speak of a prohibition of discrimination in private law, where the principles of private autonomy and, in particular, freedom of contract prevail. Nevertheless, depriving individuals of the opportunity to enter into a fair and freely negotiated contract as a result of discrimination would be incompatible with the ideas underlying freedom of contract. Moreover, since discrimination is insulting in most cases, it also violates the personal rights of the individual who is discriminated against. Thus, discrimination is an issue that also needs to be considered from the perspective of private law. As private-law sanctions, nullity, compensation, or an obligation to contract can be applied against discrimination. The fact that discrimination is the product of a decision-making mechanism using AI systems brings legal problems specific to this situation. One of these is that the results produced by some AI technologies are unexplainable, whereas the reasons on which a decision is based must first be known in order to conclude that the decision is discriminatory.
-
Chapter 10. Legal Challenges of Artificial Intelligence in Healthcare
Merve Ayşegül Kulular İbrahim
Abstract: The right to health is a fundamental right, defined in the 1946 Constitution of the World Health Organization as “the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being”. Because the right to health is recognized as a human right, every human being must have access to health services without distinction of race, gender, or economic status. In addition, beyond health services, several rights, such as the right to freedom from discrimination and the right to benefit from scientific progress and its applications, provide further protection for the right to health. Accordingly, information technologies play a significant role in promoting the right to health. Owing to technological developments, artificial intelligence (AI) is used in healthcare in a number of implementations. AI applications provide many advantages in medical care, but they also carry significant risks: AI might infringe the right to health through discrimination, and several studies illustrate discrimination due to algorithmic bias in the health sector. Such discrimination may be unintentional. AI systems are capable of progressive learning, and the data from which they are designed may incorporate cognitive biases. An AI system learns both from training data and from its own experience; as a result, the learning process may unintentionally lead to more discrimination and hurt patients. To prevent algorithmic discrimination and infringements of human rights, this work proposes not only new laws and policies but also measures for standards of technical tools. Last but not least, considering that the training data causing decision-making bias in AI applications consist of healthcare professionals’ decisions, healthcare professionals should be educated about antiracism to provide sufficient protection of the right to health.
-
Chapter 11. The Impact of Artificial Intelligence on Social Rights
Cenk Konukpay
Abstract: Digitalization has led to an increase in the use of artificial intelligence (AI) systems in many areas related to social and economic rights. AI is of significant benefit to the welfare society: thanks to large-scale data analysis, it becomes easier to identify deficiencies in the implementation of social policies and in the allocation of social benefits. However, data-driven tools may also create risks for access to private and public services and for the enjoyment of essential social rights. This is bound up with the principle of equality and non-discrimination. With the increasing use of AI systems, potential risks arise in various areas. Besides driving a serious transformation of the labor market, AI technology is used to measure the performance of employees and to manage all stages of employment, including recruitment. In addition to its impact on the right to work, AI systems are also deployed in the context of accessing social services. The use of AI to verify identities, prevent fraud, and calculate benefits may limit the enjoyment of social rights and discriminate against vulnerable groups of society. For this reason, the place of social rights in the application of AI technology needs to be examined. This study analyzes how AI affects social and economic rights in light of the principle of non-discrimination. To this end, relevant legislation and court decisions are also examined.
-
Chapter 12. A Review: Detection of Discrimination and Hate Speech Shared on Social Media Platforms Using Artificial Intelligence Methods
Abdülkadir Bilen
Abstract: People may face discrimination based on their political views, race, language, religion, gender, or other status. These situations can also emerge as hate speech directed at people. Hate speech and discrimination can occur in any environment today, including social media platforms such as Twitter, Instagram, Facebook, YouTube, TikTok, and Snapchat. Twitter is a place where people share their ideas and news about themselves with their followers. To detect sexist, racist, and hateful speech, Twitter data have recently been examined, and such discourse has been identified with various analysis and classification methods. Detection is performed with artificial intelligence methods such as Support Vector Machines, Artificial Neural Networks, Decision Trees, and Long Short-Term Memory networks. Considering how rapidly information about events, meetings, news, and the like spreads on social media, it is extremely important to use these methods to quickly determine how people react to discrimination and hate speech and to be able to take precautions. In this study, after defining discrimination and hate speech, the literature on detecting them in posts shared on social media platforms is surveyed. These studies show that artificial intelligence methods are used and that the methods employed are successful. Automatic detection systems for discrimination and hate speech have been developed in many languages.
-
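The classification approach this chapter surveys can be sketched briefly. The snippet below is a minimal illustration, assuming scikit-learn is available: it trains a linear Support Vector Machine (one of the methods the chapter names) on TF-IDF features of a tiny labeled corpus. The example texts and labels are invented for demonstration and are not drawn from the chapter or from any real dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy corpus for illustration only: 1 = hateful, 0 = neutral.
texts = [
    "I hate people from that group, they ruin everything",
    "Those people are inferior and should leave",
    "Had a great time at the meeting today",
    "The weather is lovely this spring",
    "They are all criminals and liars",
    "Looking forward to the new book release",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF unigram/bigram features feeding a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

def predict(text: str) -> int:
    """Return 1 if the text is classified as hate speech, else 0."""
    return int(model.predict([text])[0])
```

In practice, as the surveyed studies note, such models are trained on large annotated social media corpora, and the same pipeline shape applies to the other listed methods (e.g., swapping the SVM for a neural network).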
Chapter 13. The New Era: Transforming Healthcare Quality with Artificial Intelligence
Didem İncegil, İbrahim Halil Kayral, Figen Çizmeci Şenel
Abstract: Turing was one of the first and most prominent names in artificial intelligence as an independent scientific discipline. Following the lectures he gave at the London Mathematical Society, he wrote an important 1950 article, “Computing Machinery and Intelligence”, built around the question of whether machines or robots can think. Artificial intelligence (AI) is a revolution in the healthcare industry. The primary purpose of AI applications in healthcare is to analyze the links between prevention or treatment approaches and patient outcomes. AI applications can be as simple as using natural language processing to convert clinical notes into electronic data points, or as complex as a deep learning neural network performing image analysis for diagnostic support. AI and robotics are used in many areas of the health sector: keeping well, early detection, diagnosis, decision making, treatment, end-of-life care, research, and training. With the digital transformation, electronic patient records and the electronic recording of observation results have been introduced. One of the most important issues here is that hospitals keep patient records in their own electronic environments. In the near future, the aim is to use device sensors to collect data from patients, store them in a cloud computing environment, and use them in analyses. Despite the potential of AI in healthcare to improve diagnosis or reduce human error, a failure in an AI program will affect a large number of patients.
-
Chapter 14. Managing Artificial Intelligence Algorithmic Discrimination: The Internal Audit Function Role
Lethiwe Nzama-Sithole
Abstract: Artificial intelligence (AI) systems bring exciting opportunities for organizations to speed up their processes and gain a competitive advantage. However, some AI systems come with weaknesses. For example, artificial intelligence bias may occur due to AI algorithms, and algorithmic discrimination or bias may result in organizational reputational risk. This chapter conducts a literature review to synthesize the role of the internal audit function (IAF) in data governance. It investigates the measures the IAF may put in place to help organizations be socially responsible and manage risks when implementing artificial intelligence algorithms. The literature review draws on recently published articles with similar keywords and on the most cited articles from high-impact journals.
- Backmatter
- Title
- Algorithmic Discrimination and Ethical Perspective of Artificial Intelligence
- Editors
- Muharrem Kılıç
- Sezer Bozkuş Kahyaoğlu
- Copyright Year
- 2024
- Publisher
- Springer Nature Singapore
- Electronic ISBN
- 978-981-99-6327-0
- Print ISBN
- 978-981-99-6326-3
- DOI
- https://doi.org/10.1007/978-981-99-6327-0