Open Access 2024 | Original Paper | Book Chapter

4. What is Business Data Ethics Management?

By: Dennis Hirsch, Timothy Bartley, Aravind Chandrasekaran, Davon Norris, Srinivasan Parthasarathy, Piers Norris Turner

Published in: Business Data Ethics

Publisher: Springer International Publishing


Abstract

The law, including privacy law, lags the rapid development of advanced analytics and AI. As a result, compliance with the law is not sufficient to protect individuals or society from the threats that corporate use of these technologies can create. To protect people against these risks, and so to safeguard their own reputations and live by their values, companies need to do more than the law requires. As the interviewees described it, business “data ethics” management consists of the ways in which a company determines how far it wants to go beyond legal minimums, and how it seeks to achieve this goal. Data ethics in the current period is thus largely a question of beyond-compliance principles and assessments, even as legal norms continue to evolve. The literature has described such beyond-compliance behavior with respect to corporate environmental performance. This book documents beyond-compliance behavior with respect to business governance of advanced analytics and AI.
Key Take-Aways
  • Companies Need to Go Beyond Compliance to Protect Others, and so Themselves: The law, including privacy law, lags the rapid development of advanced analytics and AI. As a result, compliance with the law is not sufficient to protect individuals or society from the threats that corporate use of these technologies can create. To protect people against these risks, companies need to do more than the law requires.
  • Data Ethics Management is a Form of Beyond-Compliance Behavior: As companies themselves describe it, business “data ethics” management is a form of beyond-compliance behavior that seeks to mitigate the risks that a company’s use of advanced analytics and AI can create for individuals and the broader society, and so for the company itself.
  • Data Ethics Resembles Corporate Sustainability Efforts: The literature has described such “beyond compliance” behavior with respect to corporate environmental performance. This book documents beyond-compliance behavior with respect to business governance of advanced analytics and AI.
Business use of advanced analytics produces benefits. But it also creates the harms outlined in the previous chapter. How can companies reduce or prevent these negative impacts? Where they do occur, how should businesses balance them against the positive outcomes? How should they determine whether a given advanced analytics project is legitimate and socially acceptable?
Traditionally, companies have looked to privacy law, and the Fair Information Practices (FIPs) that underlie it, as a guide in such matters. Privacy law generally requires that companies notify individuals before they collect and use their personal information, afford them some choice as to whether to allow this, and use the data only for the purpose specified in the notice and to which the individuals have acquiesced. So long as a company adheres to these core principles—notice, choice, and purpose limitation—and the individual in question consents to the data processing, the company feels relatively comfortable that its data practices are legitimate. The individual consented to them, after all.
A number of interviewees explained that, while this approach may work for simpler forms of data processing, advanced analytics puts great strain on it. To begin with, the above-described harms that advanced analytics can impose extend well beyond privacy to bias, increased inequality, and other such areas. Privacy law’s individual consent model is not designed to perceive and address these social harms. Second, U.S. privacy law governs only certain sectors, leaving important ones (social media, data brokerage, search engines, etc.) lightly regulated. Third, U.S. privacy law generally applies only where companies process personally identifiable information (PII). Advanced analytics, however, can find correlations in, and develop predictive algorithms from, de-identified information. Where companies de-identify data before analyzing it, they arguably take advanced analytics outside the scope of privacy law.1
Finally, the interviewees explained that, even if a company were to try to comply with the spirit of privacy law and the FIPs, the nature of advanced analytics makes this difficult to achieve. Companies that engage in advanced analytics typically start by compiling or gaining access to a massive dataset drawn from multiple sources. Later, they look for correlations in the data and, based on these patterns, make inferences and actionable predictions. As a result, the company carrying out the advanced analytics may be several steps removed from the entity that first collected the data. This makes it difficult to go back and obtain consent from the individuals whose data make up the data set. “[Y]ou may be so many steps removed, you can't possibly have gotten consent from the individual in way that is reflective of what you want to do with that data. So, I think that in of itself is a problem.” (Interviewee #22).
Interviewees further explained that they often do not figure out what they are going to try to learn from the data—the purpose of their processing—until after they have amassed the data set. Thus, even where a company is involved in the data collection and so in a position to seek consent, it often cannot specify the purpose at the moment of data collection and can only obtain the most general kind of consent. As one interviewee put it: “[T]he problem is that in order for it to be meaningful consent, the person or organization who was seeking that consent at the time would had to have thought through every possible way those data could be used and at least to have framed up, at least at a generalized level, a consent that's actually broad enough in scope. They don’t have the ability to do that, unless you say, ‘You're consenting to everything that we possibly ever might want to do with this data.’ It's problematic.” (Interviewee #22).2 Another concurred: “in the area of big data, where data is used well beyond the purpose of primary collection, it’s almost impossible to get consent, informed consent, and consent that is useful.” (Interviewee #23). For all of the above-described reasons, the interviewees believed that privacy law does not adequately address advanced analytics’ potential harms, and that compliance with such laws is not a sufficient way to protect people from these harms.
The survey data shows something similar. In Fig. 4.1, we see that a clear majority of respondents said that current laws either do not address, or do not clearly and adequately address, the harm that advanced analytics can impose on individuals and the broader society. Some risks—such as lack of transparency, errors in decision-making, and especially displacement of workers through automation—were frequently rated as not addressed at all in current law. Other risks, such as privacy and manipulation, were seen as only ambiguously addressed in current law. Discrimination was the most likely of these risks to be seen as clearly addressed in existing law, and yet even here, more than 40% of respondents saw this legal treatment as ambiguous. While there is clearly a great deal of variation in legal clarity across different risks, these findings also suggest that companies must look beyond the law to address potential harms that they have identified as salient.
Unable to rely on privacy law and individual consent as a source of legitimacy, some companies have begun to assess for themselves the social acceptability of particular advanced analytics projects.3 They see themselves as venturing beyond privacy law and into the realm of substantive value choices, of ethics. For these companies, “data ethics” means assessing the legitimacy and acceptability of advanced analytics projects so that the company can act—or at least try to act—in a socially responsible way. “[T]he laws and regulations haven’t caught up yet to this new, innovative use of data. Therefore, we will have to make our best guess at how to be ethical and responsible.” (Interviewee #21).4
This shift from a consent-based model to substantive assessments of impacts and harms is reminiscent of the longstanding debate in the privacy law literature between individual control-based, and use- or harm-based, approaches to privacy regulation. What seems to be happening is that, when it comes to advanced analytics, some companies are starting to move on their own from a consent-based model, to a harm-based one. One interviewee spoke to this directly: “the FIPs were not created with this world in mind. Transparency and choice in a world that is opaque and complex are not going to solve all of these problems.... I think the conclusion of the White House report about looking at use-based rules in the complexities of this world was right. The difficulty of that is how do you do it?” (Interviewee #3).
Framed as beyond-compliance management, data ethics shares much with other forms of corporate social responsibility, such as reducing carbon emissions, where companies go beyond compliance with the law. Data ethics is, in a sense, a form of corporate social responsibility for the algorithmic economy.
I almost think that it comes down to just this perfect storm of the company's history and philosophy around social responsibility. Because I actually think that everything we're actually talking about here, ultimately, is social responsibility. Not unlike the labor issues; not unlike the environment. I see patterns that are similar to the waves that we've seen of other social responsibility. And, I know we've never thought about this, or data protection, as a social responsibility function. But, I think ultimately it will be and those always align to the ethics department and tend to get pulled away from the legal department. (Interviewee #21)
We predict that, in the next five to ten years, growing numbers of U.S. companies will include data ethics as part of their general corporate social responsibility initiatives and reporting. We base this prediction on what we learned about the strong reasons for going beyond compliance to address data ethics risks. The next chapter sets out these reasons, as described by the companies themselves.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes

1. As one interviewee put it: “[A]ll privacy laws in the world are written with the caveat of personal information. But if you think about the number of potential technology-enabled decisions or impacts an individual could be subject to that have nothing to do with personal information, you start to say: ‘Well, wait a sec, that just doesn't work anymore....’” Interviewee #7.
2. Another interviewee made a similar point: “Let's put that in the context of big data. We have laws today that really are built on foundations of providing notice, providing a purpose specification, and gaining consent for whatever purpose you're specifying. That's simply inconsistent and does not take into account the reality of this observational world that we live in, or advanced algorithms where... the whole purpose of discovery is to discover causation or correlations that we can't anticipate. Otherwise, it's really just research, or analytics, which we've been doing for decades. The fact that we don't know what we're going to discover is simply inconsistent with specifying purpose.” (Interviewee #21)
3. As one interviewee put it: where “the ability to go and get consent, really meaningful consent, just doesn't exist, you need another basis on which to do what you're doing.” (Interviewee #22)
4. One interviewee directly tied the rise of data ethics to the reluctance to rely on consent: “[the] reason that the ethics conversation is important and interesting... is: when do ethics let me use data without consent?... I mean, could Google use all the searches and go play the stock market? Who knows what machine learning will enable them to predict? And what can Amazon do with the data it learns about my home?” Interviewee #19.
DOI: https://doi.org/10.1007/978-3-031-21491-2_4