Open Access 2024 | Original Paper | Book Chapter

1. Introduction

Authors: Dennis Hirsch, Timothy Bartley, Aravind Chandrasekaran, Davon Norris, Srinivasan Parthasarathy, Piers Norris Turner

Published in: Business Data Ethics

Publisher: Springer International Publishing


Abstract

Business use of artificial intelligence (AI) can produce tremendous insights and benefits. But it can also invade privacy, perpetuate bias, and produce other harms that injure people and damage business reputation. To succeed in today’s economy, companies need to implement AI in a responsible and ethical way. The question is: How to do this? This book points the way. The authors interviewed and surveyed AI ethics managers at leading companies. They asked why these experts see AI ethics as important, and how they seek to achieve it. This book conveys the results of that research in a concise, accessible way that readers should be able to apply to their own organizations. Much of the existing writing on AI ethics focuses either on macro-level AI ethics principles, or on micro-level product design and tooling. The interviews showed that companies need a third component: data and AI ethics management. This third component consists of the management structures, processes, training and substantive benchmarks that companies use to operationalize their high-level data and AI ethics principles and to guide and hold accountable their developers. AI ethics management is the connective tissue that makes AI ethics principles real. It is the focus of this book.
Key Take-Aways
  • This book conveys the findings from an empirical study, conducted between 2017 and 2019, of how and why businesses seek to manage the threats and ethical challenges that their own use of data, advanced analytics and AI can create.
  • The research sought to explore three core questions: (1) How do business organizations at the forefront of data ethics management conceptualize the threats that their use of data, advanced analytics and AI create for others, and the ethical challenges that this poses for the organization itself? (2) If it is true that the law does not yet require businesses to reduce these threats, then why are certain companies pursuing this end? (3) How are businesses pursuing data ethics management? Which substantive benchmarks, management structures, processes, and technical solutions do they employ to ground and operationalize their ethical responsibilities as they conceive them?
  • Much of the scholarly literature on data ethics focuses on normative analysis of data ethics principles and of regulatory frameworks. Empirical work on data ethics management can inform regulation and complement high-level data ethics principles, but scholars have done far less of this type of work. This book helps to fill that gap.
  • The researchers used a “grounded theory” approach that moves iteratively between observation and theory to identify the best conceptual framework for understanding the observed realities. This study concludes that research on “beyond compliance” behavior and the “social license to operate” best fits the behavior that businesses refer to as data ethics management.
  • To date, scholarly work on the social license to operate has focused largely on environmental management, working conditions, and human rights in global supply chains. This book suggests that, in today’s digital and algorithmic economy, the “social license to operate” is coming increasingly to depend as well on an organization’s data ethics performance.
Some years ago, an issuer of subprime credit cards (cards issued to people who generally do not qualify for them) sought to identify which of its current customers were most likely to default on their credit card bills and then to cut their credit limits in half (FTC v. CompuCredit 2008). The company used a “behavioral scoring model” for this purpose. It first pulled together data on which of its past customers had defaulted. It then looked for a pattern: did these defaulting customers tend to use their cards in ways that their non-defaulting peers did not? The company found such a pattern. Defaulting card holders had used their cards at pawn shops, massage parlors, and marital counselors far more frequently than their non-defaulting peers.1 Based on this correlation, the company predicted that current card holders who use their cards to pay for these particular items presented a high risk of default and proceeded to cut their credit limits in half—an action that, in and of itself, did not violate the law.2
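The pattern-finding step described above can be sketched in purely illustrative form. The toy data, merchant-category names, and threshold below are invented; the actual CompuCredit model was never made public. The sketch simply flags merchant categories whose historical default rate exceeds the overall rate, then flags current customers who used any such category:

```python
# Hypothetical sketch of a category-based "behavioral scoring" model of the
# kind described above. All data and category names are invented.
from collections import defaultdict

# Past customers: (set of merchant categories used, whether they defaulted)
history = [
    ({"grocery", "gas"}, False),
    ({"pawn_shop", "grocery"}, True),
    ({"marital_counseling", "gas"}, True),
    ({"grocery"}, False),
    ({"pawn_shop", "bar"}, True),
    ({"gas", "grocery"}, False),
]

# Count, per merchant category, how many users of it later defaulted.
used = defaultdict(int)
defaulted = defaultdict(int)
for cats, did_default in history:
    for c in cats:
        used[c] += 1
        defaulted[c] += did_default

# A category is "risky" if its default rate exceeds the overall default rate.
overall_rate = sum(d for _, d in history) / len(history)
risky = {c for c in used if defaulted[c] / used[c] > overall_rate}

def flag(current_categories):
    """Flag a current customer for a credit-limit cut if they used any
    category correlated with past defaults."""
    return bool(current_categories & risky)

# A current customer who paid a marital counselor gets flagged,
# even though that spending says nothing causal about their finances.
flag({"marital_counseling", "grocery"})  # → True
```

The sketch makes the ethical point concrete: the model acts on correlation alone, so a customer is penalized for the company they keep statistically, not for anything they did.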
Should the company have done this? Even if it is legal, is it right to reduce someone’s credit line by half because they have used their card to pay for marital counseling services? Is it more (or less) justifiable to reduce it because the person used the card at a pawn shop or massage parlor? One can easily come up with arguments on each side of this question. On the one hand, some might point out that reducing default rates will strengthen the company’s bottom line and so enable it to issue more credit cards, at lower interest rates, to those who would otherwise not be eligible for credit. These proponents might also explain that this practice prevents vulnerable card holders from getting over-extended and so saves them from the emotional pain and lasting economic damage that a default can cause.
On the other hand, critics of the company’s action might decry the unfairness of penalizing those whose only sin is to try to preserve or improve their marriage. They could further point out that those who use a card for marital counseling and then see their credit line cut in half will be less likely to seek out marital counseling in the future. That could hurt not only the card holders themselves but also their spouses, children, and society at large. These critics could also ask whether people of one race, gender, or other protected demographic characteristic tend to use their cards at pawn shops, massage parlors, and marital counselors more than those who do not share this characteristic, and so whether the policy would have a disparate negative impact based on a protected characteristic. So, what is the right answer? Should the company cut the credit of those who use their card to pay for marital counseling, or not? The solution is anything but clear.
Today, many organizations face ethical choices of this type. Most of these dilemmas do not become public. But some do. For example, Target analyzed customer data to infer which potential customers were pregnant and marketed baby-related goods to them (Duhigg 2012). This provided people with relevant marketing, but it also invaded their privacy. Facebook uses machine learning to predict which of its users are most likely to commit suicide and notifies the police or other first responders when the data suggest that such risk is imminent (Andrade et al. 2018; Marks 2019). Facebook’s suicide prevention initiative arguably saves lives. But it can also lead to police knocking on the doors of people who are actually not at risk. Hewlett-Packard used advanced analytics to predict, for each of its 300,000 employees, the likelihood that the person would leave the company, and then provided this “flight risk” score to a select group of managers (Siegel 2016). This could help the company retain valued employees. But it can also prejudice managers against some employees who have no intention of leaving. Should these companies have used the powerful insights that advanced analytics and AI3 can provide in these ways?
Many of the ethical choices that businesses make with respect to their use of advanced analytics4 and AI5 remain hidden from the public eye. But they are there in abundance. Advanced analytics and AI, in combination with the massive amounts of data that the digital society makes available about people, enable data scientists to predict an individual’s race, age, IQ, sexual orientation, personality type, substance use, and political views with great accuracy (Kozinski et al. 2013), not to mention their pregnancy status, the likelihood that they will default on their credit card, and many other salient traits. Generative AI (e.g., ChatGPT) raises its own ethical questions such as whether to mine existing, publicly available works to generate new content without compensating the original creators. Advanced analytics and AI give organizations profound new powers that they can use in many ways for their own benefit. Should they feel free to use these technologies in any way that benefits the businesses’ short-term interests, or should they observe some limits?
If an organization is to observe some constraints, how should it go about deciding what those limits are? Should it feel free to do anything that the law currently allows? Or, should it try to use data and AI ethically and responsibly, even if that means going beyond current legal requirements? If it does seek to achieve an ethical standard, how should it draw the line between ethical and unethical practices? Who in the organization should be responsible for spotting and deciding these issues, where should that person sit in the organization, and what qualifications should they have? What processes should the organization follow for making data ethics decisions? Which internal stakeholders should it consult? Should it engage any external stakeholders?
These questions are at the heart of data ethics management. They are also the subject of this book. Between 2017 and 2019 our interdisciplinary research team interviewed or surveyed 50 or so companies at the forefront of the then-emerging field of “data ethics” management.6 We found these companies to be struggling with the many ethical dilemmas that the exponential growth in personal data raised for them, and that chief among these were questions about how and whether to use advanced analytics and AI to further their business interests. We learned about how some business organizations wrestle with, and make decisions about, how they will use the newfound power that massive amounts of data about people, used to fuel advanced analytics and AI, have given them. This book conveys what we learned.
We do not write on a blank slate. Much has already been published about the ethical dilemmas that organizations face when they use advanced analytics and AI, and how to govern them. The existing literature takes two main paths. A first group of authors focuses on what it means to use advanced analytics and AI “ethically.” Scholars, think tanks, corporations, multi-stakeholder organizations, governments, and others have generated dozens of sets of ethical principles and have encouraged businesses to align their advanced analytics and AI practices with them (Drosou et al. 2017; Gordon and Nyholm 2021; Herschel and Miori 2017; Mcdermott 2017; Mittelstadt et al. 2016; Richards and King 2014; Vallor 2018; Yang et al. 2018; Zwitter 2014). In their review of the global landscape, Jobin and colleagues identified eighty-four such frameworks (Jobin et al. 2019). Fjeld and colleagues surveyed over thirty sets of AI ethics principles put forth by a diverse set of institutions (Fjeld et al. 2020). Though each framework is distinct, it is possible to identify a convergence on a core set of ideas. For example, Jobin and colleagues identified eleven overarching themes: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity (Jobin et al. 2019). Floridi and Cowls condensed the multitude of considerations down to five elements: beneficence (promoting well-being and preserving dignity), non-maleficence (ensuring privacy and security), autonomy (avoiding manipulation), justice (preventing unfairness), and explicability (enabling transparency and accountability) (Floridi and Cowls 2019).
While organizations can adopt one of these sets of ethical principles, they tend to have a hard time employing such principles to reach a determinate decision. Consider the question of whether to cut the credit limits of those who seek marital counseling. The “beneficence” principle might counsel in favor of this action since, by pursuing it, the company would extend credit to more people who cannot otherwise get it and prevent card holders from taking on too much debt. But it also might push in the other direction since the policy could lead some people to forego marital counseling and its many benefits. And what of the “justice” principle? Is it just to deny credit to someone because they did something that most would view as meritorious such as going to a marriage counselor? High-level ethical principles are good for framing these questions. But they are too often in conflict with one another or too open to interpretation to direct what the answer should be. Taken alone, the existing sets of high-level ethical principles do not provide the necessary guidance.
A second stream of writing seeks to locate the required guidelines in the law. These scholars look to foundational legal frameworks and argue that they should be updated for and applied to business use of advanced analytics and AI. For example, scholars have advocated adopting a “technological due process” approach that takes notions of procedural fairness from the judicial arena and applies them to algorithmic decision-making (Citron 2016; Citron and Pasquale 2014; Crawford and Schultz 2014). Others, drawing on fiduciary law, view corporations that handle people’s data as “information fiduciaries” who should put their consumers’ or users’ interests first, and so become worthy of trust (Balkin 2016; Richards and Hartzog 2015; Waldman 2018b). Another group looks to commercial unfairness law to set parameters for the fair use of predictive analytics (Citron and Pasquale 2014; Hartzog 2015; Hirsch 2015, 2020; MacCarthy 2011). Still others would update legal frameworks to prevent manipulation and harmful bias (Selbst and Barocas 2018; Hellman 2020), promote accountability and transparency, or mandate impact assessments (Selbst 2021).
While these two bodies of scholarship—one focused on ethical principles, the other on legal innovation and reform—are critically important, each would benefit from the addition of a third line of inquiry that, at the time of this study and to a large extent still today, is largely missing from the academic literature: empirical research into what, if anything, companies are doing to manage the threats that their use of advanced analytics and AI can create, and into the strengths and limitations of these management efforts. Scholars have studied these questions with respect to privacy management (Smith 1994; Bamberger and Mulligan 2015; Waldman 2021). While there have been informative accounts of AI governance in practice (Moss and Metcalf 2020), work on this topic is only just beginning.
Empirical knowledge about the practice of AI governance “on the ground” is essential to policymakers who, when designing regulation, must understand how companies implement data governance protections (Bamberger and Mulligan 2015; Waldman 2018a). It also complements the high-level ethical principles by pairing them with an operational understanding of data ethics management and how to motivate it (Whittlestone et al. 2019). Yet too little has been written about whether, how, and why companies go about spotting and preventing the harm that their use of advanced analytics and AI can create.
This book helps to fill this gap. From 2017 to 2019, the research team interviewed and surveyed data governance professionals at US-based companies at the forefront of AI ethics management. The research sought to answer three fundamental questions: (1) How do business organizations at the forefront of data ethics management conceptualize the threats that their use of advanced analytics and AI create for others, and the ethical challenges that this poses for the organization itself? (2) If it is true that the law does not yet require businesses to reduce these threats, then why, in their own words, are certain companies pursuing this end? (3) How are businesses pursuing data ethics management? Which substantive benchmarks, management structures, processes, and technical solutions do they employ to ground and operationalize their ethical responsibilities as they conceive them?
Gaining insight into these questions provides a novel extension of academic literature that is lacking in empirical investigations (Flyverbom et al. 2019) and contributes to broader conversations among scholars, policymakers and practitioners about how to balance the possibilities and the pitfalls of advanced analytics and AI. While this inquiry overlaps, to some extent, with the other streams of scholarship on ethical principles and regulatory futures, it provides a distinct point of entry focused on how U.S. companies, in their governance of emerging advanced analytics techniques, are interpreting and negotiating a complex, evolving landscape of legislation, regulation, social norms and expectations. The book attempts to explain how a range of professionals tasked with navigating that convergence have articulated both what constitutes responsible decision-making in the uncertain, “beyond compliance” domain of data ethics, and the steps they have taken to achieve this standard.
The researchers used a “grounded theory” approach to understand, and ultimately structure, their findings. Grounded theory is “an organic process of theory emergence based on how well data fit conceptual categories identified by an observer, by how well the categories explain or predict ongoing interpretations, and by how relevant the categories are to the core issues being observed” (Suddaby 2006). In a grounded theory approach, substantive theory provides an initial direction and sensitizes the researchers to certain types of data. But the researchers do not attempt, in a deductive fashion, simply to test the theory against the data. Rather they use a process of “analytic induction” that moves back and forth between deduction and induction “to find the best fit or the most plausible explanation for the relationships being studied” (Suddaby 2006).
This iterative approach led us to conclude that research on the “social license to operate” (Gunningham et al. 2006; Prakash 2011; Bamberger and Mulligan 2015) best fits the behavior that we observed. This body of research has identified a variety of reasons that companies go “beyond compliance” with existing law to signal their conformity with public expectations (Gunningham et al. 2006; Prakash 2011; Bamberger and Mulligan 2015). These reasons include pressures from regulators, consumers, employees, and advocacy organizations, as well as media coverage of controversies. To date, scholarly work on beyond compliance corporate behavior has focused largely on the field of environmental management (Gunningham et al. 2006; Prakash 2011; Short and Toffel 2010). Scholars have also examined corporate responsibility initiatives that seek to improve working conditions and human rights in global supply chains (Bartley 2018; Locke 2013). This book suggests that, in the algorithmic economy, the “social license to operate” increasingly turns on data ethics performance as well. As advanced analytics and AI expand, scholars should look closely at how companies are managing pressures and expectations for fairness, justice, and privacy. Our research suggests that the practice of “data ethics” within companies deals neither exclusively with long-standing questions about data privacy nor with the full range of companies’ data uses, but rather with an evolving set of questions about prediction, manipulation, automation, and algorithmic bias. At the same time, not all of these concerns are attended to equally, and companies have pursued a variety of different approaches as they formalize data ethics management.
At least four audiences should find this book to be relevant. The book’s description of actual data ethics management practices should be of use to organizations seeking to improve their own performance in this vital management area. Its description of data ethics management “on the ground” (Bamberger and Mulligan 2015) should inform legislators and policymakers attempting to develop workable and effective laws and regulations that build on existing management practices. The book’s depiction of beyond compliance data ethics management should further be of interest to scholars that think about the social license to operate and other theories for why organizations may, at times, go beyond legal requirements in the service of social objectives. Finally, by revealing businesses’ attempts to govern their own use of advanced analytics and AI, the book hopes to show members of the public that such efforts are possible, even if they may currently be inadequate, and that they should demand and expect more of them.
The book is organized as follows: Chapter 2, Studying Data Ethics Management: Research Methodology, describes in greater detail the research team’s methods for interviewing and surveying data ethics managers, and for analyzing the data collected. Chapter 3, Risks: From Privacy and Manipulation to Bias and Displacement, recounts corporate data governance professionals’ assessment of the threats that business use of advanced analytics and AI poses for individuals, groups and the broader society. Chapter 4, What is Business Data Ethics Management?, explores what corporate managers mean when they say that they are pursuing “data ethics” as opposed to compliance with privacy or other laws. Chapter 5, Motivations—Why Do Companies Pursue Data Ethics?, documents the reasons that companies give for pursuing data ethics management, even when the law does not yet require them to do so. Chapter 6, Drawing Substantive Lines, discusses the ways in which companies distinguish between ethical and unethical uses of advanced analytics and AI, and the benchmarks and standards that they use for this purpose. Chapter 7, Management Structures and Functions, identifies who, within a company, is responsible for carrying out the data ethics function, and how this role is structured. Chapter 8, Management Processes, discusses the processes that organizations use to spot and ultimately reach decisions about data ethics issues. Chapter 9, Technical Solutions, conveys what we learned about the technological and data-focused approaches that companies employ to reduce advanced analytics and AI’s potential harms. Chapter 10, Data Analytics for the Social Good, describes instances in which companies intentionally use their advanced analytics and AI abilities to serve the social good without any direct benefit to their own bottom lines and explores why they might do this. Chapter 11, Conclusion, sums up what we have learned and suggests future directions for research.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. The full list of proxies associated with default also included using the card to pay direct marketing merchants, personal counselors, automobile tire retreading and repair shops, bars and nightclubs, and pool and billiards establishments (FTC v. CompuCredit 2008).
2. The FTC’s enforcement action, taken under its Section 5 deceptiveness authority, was premised on CompuCredit’s misrepresentations about its behavioral scoring model, not on the use of the model itself.
3. This book will use the term “advanced analytics and AI.” However, the term “big data analytics” was more commonly in use at the time that the researchers conducted the interviews and survey, and so the interview protocols and survey instruments employed this term. This book will use the term “big data analytics” where necessary to represent accurately the survey and interview results.
4. For the purposes of this book, the term “advanced analytics” refers to “the autonomous or semi-autonomous examination of data or content using sophisticated techniques and tools, typically beyond those of traditional business intelligence (BI), to discover deeper insights, make predictions, or generate recommendations. Advanced analytic techniques include those such as data/text mining, machine learning, pattern matching, forecasting, visualization, semantic analysis, sentiment analysis, network and cluster analysis, multivariate statistics, graph analysis, simulation, complex event processing, neural networks.” (Gartner 2023).
5. As used in this book, the term “artificial intelligence” means the use of “advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions.” (Gartner 2023).
6. At the time, most organizations called this area “data ethics” management. Today most refer to it as AI ethics or responsible AI management.
Literatur
Zurück zum Zitat Andrade, Gomes De, Norberto Nuno, Dave Pawson, Dan Muriello, Lizzy Donahue, and Jennifer Guadagno. 2018. Ethics and artificial intelligence: suicide prevention on Facebook. Philosophy & Technology 31 (4): 669–684. Andrade, Gomes De, Norberto Nuno, Dave Pawson, Dan Muriello, Lizzy Donahue, and Jennifer Guadagno. 2018. Ethics and artificial intelligence: suicide prevention on Facebook. Philosophy & Technology 31 (4): 669–684.
Zurück zum Zitat Balkin, Jack. 2016. Information fiduciaries and the first amendment. UC Davis Law Review 49 (4): 1183–1234. Balkin, Jack. 2016. Information fiduciaries and the first amendment. UC Davis Law Review 49 (4): 1183–1234.
Zurück zum Zitat Bamberger, Kenneth A., and Deirdre K. Mulligan. 2015. Privacy on the Ground: Driving Corporate Behavior in the United States and Europe. Cambridge, MA: MIT Press.CrossRef Bamberger, Kenneth A., and Deirdre K. Mulligan. 2015. Privacy on the Ground: Driving Corporate Behavior in the United States and Europe. Cambridge, MA: MIT Press.CrossRef
Zurück zum Zitat Bartley, Tim. 2018. Rules Without Rights: Land, Labor, and Private Authority in the Global Economy. Oxford: Oxford University Press.CrossRef Bartley, Tim. 2018. Rules Without Rights: Land, Labor, and Private Authority in the Global Economy. Oxford: Oxford University Press.CrossRef
Zurück zum Zitat Citron, Danielle Keats. 2016. Big Data Should Be Regulated by ‘Technological Due Process.’ The New York Times. Retrieved April 23, 2020. Citron, Danielle Keats. 2016. Big Data Should Be Regulated by ‘Technological Due Process.’ The New York Times. Retrieved April 23, 2020.
Zurück zum Zitat Citron, Danielle Keats, and Frank Pasquale. 2014. The scored society: Due process for automated predictions. Washington Law Review 89 (1): 1–34. Citron, Danielle Keats, and Frank Pasquale. 2014. The scored society: Due process for automated predictions. Washington Law Review 89 (1): 1–34.
Zurück zum Zitat Crawford, Kate, and Jason Schultz. 2014. Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review 55 (1): 93–128. Crawford, Kate, and Jason Schultz. 2014. Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review 55 (1): 93–128.
Zurück zum Zitat Drosou, Marina, H.V. Jagadish, Evaggelia Pitoura, and Julia Stoyanovich. 2017. Diversity in big data: A review. Big Data 5 (2): 73–84.CrossRef Drosou, Marina, H.V. Jagadish, Evaggelia Pitoura, and Julia Stoyanovich. 2017. Diversity in big data: A review. Big Data 5 (2): 73–84.CrossRef
Zurück zum Zitat Duhigg, Charles. 2012. “How Companies Learn Your Secrets.” New York Times (February 16, 2012). Duhigg, Charles. 2012. “How Companies Learn Your Secrets.” New York Times (February 16, 2012).
Zurück zum Zitat Federal Trade Commission v. CompuCredit Corp. 2008. Complaint. Civil No. 1:08–CV–1976–BBM–RGV. (North. Dist. of Ga., Oct. 8, 2008). Federal Trade Commission v. CompuCredit Corp. 2008. Complaint. Civil No. 1:08–CV–1976–BBM–RGV. (North. Dist. of Ga., Oct. 8, 2008).
Zurück zum Zitat Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.
Zurück zum Zitat Floridi, Luciano, and Josh Cowls. 2019. A unified framework of five principles for AI in society. Harvard Data Science Review 1 (1): 1–15. Floridi, Luciano, and Josh Cowls. 2019. A unified framework of five principles for AI in society. Harvard Data Science Review 1 (1): 1–15.
Zurück zum Zitat Flyverbom, Mikkel, Ronald Deibert, and Dirk Matten. 2019. The governance of digital technology, big data, and the Internet: New roles and responsibilities for business. Business and Society 58 (1): 3–19.CrossRef Flyverbom, Mikkel, Ronald Deibert, and Dirk Matten. 2019. The governance of digital technology, big data, and the Internet: New roles and responsibilities for business. Business and Society 58 (1): 3–19.CrossRef
Zurück zum Zitat Gunningham, Neil, Robert Kagan and Dorothy Thornton, 2006. Social license and environmental protection: Why businesses go beyond compliance. Law and Social Inquiry. Gunningham, Neil, Robert Kagan and Dorothy Thornton, 2006. Social license and environmental protection: Why businesses go beyond compliance. Law and Social Inquiry.
Zurück zum Zitat Hartzog, Woodrow. 2015. Unfair and deceptive robots. Maryland Law Review 74: 785–829. Hartzog, Woodrow. 2015. Unfair and deceptive robots. Maryland Law Review 74: 785–829.
Zurück zum Zitat Hellman, Deborah. 2020. Measuring algorithmic fairness. Viriginia Law Review 106: Forthcoming. Hellman, Deborah. 2020. Measuring algorithmic fairness. Viriginia Law Review 106: Forthcoming.
Zurück zum Zitat Herschel, Richard, and Virginia M. Miori. 2017. Ethics & big data. Technology in Society 49: 31–36.CrossRef Herschel, Richard, and Virginia M. Miori. 2017. Ethics & big data. Technology in Society 49: 31–36.CrossRef
Zurück zum Zitat Hirsch, Dennis D. 2015. That’s unfair! Or is it? Big data, discrimination and the FTC’s unfairness authority. Kentucky Law Journal 103: 345–361. Hirsch, Dennis D. 2015. That’s unfair! Or is it? Big data, discrimination and the FTC’s unfairness authority. Kentucky Law Journal 103: 345–361.
Zurück zum Zitat Hirsch, Dennis D. 2020. From individual control to social protection: New paradigms for law and policy in the age of predictive analytics. Maryland Law Review 79: 439–505. Hirsch, Dennis D. 2020. From individual control to social protection: New paradigms for law and policy in the age of predictive analytics. Maryland Law Review 79: 439–505.
Zurück zum Zitat Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–399.CrossRef Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–399.CrossRef
Kosinski, Michal, David Stillwell, and Thore Graepel. 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences 110 (15): 5802–5805.
Locke, Richard M. 2013. The Promise and Limits of Private Power: Promoting Labor Standards in a Global Economy. Cambridge: Cambridge University Press.
MacCarthy, Mark. 2011. New directions in privacy: Disclosure, unfairness and externalities. I/S: Journal of Law and Policy for the Information Society 6 (3): 425–512.
Marks, Mason. 2019. Artificial intelligence based suicide prediction. Yale Journal of Law and Technology 21 (3): 98–121.
McDermott, Yvonne. 2017. Conceptualising the right to data protection in an era of big data. Big Data & Society (January–June): 1–7.
Mittelstadt, Brent D., Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. 2016. The ethics of algorithms: Mapping the debate. Big Data and Society 3 (2): 1–21.
Moss, Emanuel, and Jacob Metcalf. 2020. Ethics Owners: A New Model of Organizational Responsibility in Data-Driven Technology Companies. New York: Data & Society Research Institute.
Prakash, Aseem. 2001. Why do firms adopt ‘beyond compliance’ environmental policies? Business Strategy and the Environment 10: 286–299.
Richards, Neil M., and Jonathan H. King. 2014. Big data ethics. Wake Forest Law Review 49 (2): 393–432.
Richards, Neil M., and Woodrow Hartzog. 2015. Taking trust seriously in privacy law. Stanford Technology Law Review 19: 431–472.
Selbst, Andrew D. 2021. An institutional view of algorithmic impact assessments. Harvard Journal of Law & Technology 35 (1): 117–191.
Selbst, Andrew D., and Solon Barocas. 2018. The intuitive appeal of explainable machines. Fordham Law Review 87 (3): 1085–1139.
Short, Jodi L., and Michael W. Toffel. 2010. Making self-regulation more than merely symbolic: The critical role of the legal environment. Administrative Science Quarterly 55 (3): 361–396.
Siegel, Eric. 2016. Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie or Die. Hoboken, NJ: John Wiley & Sons.
Smith, H. Jeff. 1994. Managing Privacy: Information Technology and Corporate America. Chapel Hill, NC: University of North Carolina Press.
Suddaby, Roy. 2006. From the editors: What grounded theory is not. Academy of Management Journal 49 (4): 633–642.
Vallor, Shannon. 2018. An Introduction to Data Ethics (Course Module). Santa Clara, CA: Markkula Center for Applied Ethics.
Waldman, Ari E. 2018a. Designing without privacy. Houston Law Review 55 (3): 659–727.
Waldman, Ari E. 2018b. Privacy as Trust: Information Privacy for an Information Age. Cambridge University Press.
Waldman, Ari E. 2021. Industry Unbound: The Inside Story of Privacy, Data and Corporate Power. Cambridge, UK: Cambridge University Press.
Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, and Stephen Cave. 2019. The role and limits of principles in AI ethics: Towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 195–200.
Yang, Ke, Julia Stoyanovich, Abolfazl Asudeh, Bill Howe, H. V. Jagadish, and Gerome Miklau. 2018. A nutritional label for rankings. In Proceedings of the International Conference on Management of Data (SIGMOD’18), 1773–1776.
Zwitter, Andrej. 2014. Big data ethics. Big Data & Society (July–December): 1–6.
Metadata
Title: Introduction
Authors: Dennis Hirsch, Timothy Bartley, Aravind Chandrasekaran, Davon Norris, Srinivasan Parthasarathy, Piers Norris Turner
Copyright year: 2024
DOI: https://doi.org/10.1007/978-3-031-21491-2_1