
Open Access 2024 | Original Paper | Book Chapter

5. Motivations—Why Do Companies Pursue Data Ethics?

Authors: Dennis Hirsch, Timothy Bartley, Aravind Chandrasekaran, Davon Norris, Srinivasan Parthasarathy, Piers Norris Turner

Published in: Business Data Ethics

Publisher: Springer International Publishing


Abstract

This chapter examines the reasons that companies go beyond compliance to engage in data ethics management. Our research suggests that a range of different pressures and incentives may encourage companies to adopt data ethics policies. These include issues of corporate and industry reputation (particularly in the wake of scandals), emerging or looming regulation (in both the U.S. and other jurisdictions, especially the EU), demand from employees, and strategic interests in improving decision-making and gaining competitive advantages. Delving into reputational dynamics, the chapter considers the role of data ethics policies in gaining trust not only with consumers/users but also with regulators and business partners. Using our survey data, we examine how types of markets (business-to-business vs. business-to-consumer), media and stakeholder pressures, and perceptions of regulation may be related to whether companies have a data ethics policy or not.
Key Take-Aways
  • For the most part, businesses pursue data ethics management for strategic reasons.
  • Companies pursue data ethics management for six main reasons:
    1. Build and sustain reputation and trust: Companies worry that if they use advanced analytics and AI in ways that harm people or the broader society, this will damage their reputation with their customers, business partners and regulators. They invest in data ethics to protect their reputation and build trust with these important constituencies.
    2. Prepare for and shape future law and policy: Companies believe that regulation of advanced analytics and AI is coming. They undertake beyond compliance activities to prepare for, or potentially pre-empt or influence, such regulation.
    3. Recruit and retain employees: Employees who perceive the company’s data practices as harmful are more likely to leave, or not to accept a job offer in the first place. Better data ethics performance can enable companies to recruit and retain these employees.
    4. Make faster and better decisions: The uncertain risks of advanced analytics and AI projects can make it hard for companies to decide whether to undertake such projects. Standards and processes for assessing the social acceptability of an advanced analytics or AI project can enable businesses to resolve these issues more quickly and intelligently. In this way, effective data ethics management can facilitate faster and higher quality innovation.
    5. Achieve competitive advantage: Customers may prefer more ethical businesses and products. Companies seek to differentiate themselves and achieve greater market share by taking data ethics seriously.
    6. Fulfill values: Some respondents reported that it was their company’s or CEO’s deeply held values that motivated and informed its data ethics efforts. They see data ethics management as an extension of a broader commitment to corporate social responsibility.
  • Survey data suggested that companies that faced external pressures from the media, advocacy groups, employees and/or investors for better data ethics performance were more likely to have a policy in place to manage the risks from their use of big data.
Chapter Four’s account of data ethics as a form of beyond compliance risk mitigation leaves an important question unanswered: Why do companies make this effort? If existing U.S. privacy law does not require companies to be more responsible in their use of advanced analytics, why are they investing resources in doing so? Our research suggests that there is not a single driving factor but rather a range of different pressures and incentives that may encourage companies to adopt data ethics policies.

5.1 Build Reputation and Sustain Trust

For one thing, companies want to build and maintain their reputation and the trust that others have in them. Negative incidents involving advanced analytics, such as the Facebook-Cambridge Analytica episode, which was particularly salient among survey respondents, can erode this trust, damage reputation, and so hurt the business. Our interviewees frequently cited the need to maintain reputation and trust as among the most important reasons that they invested resources in identifying and seeking to reduce the threats that advanced analytics can pose.
Well, there's a lot of different people like me at other companies that are trying to ensure that the trust in their brand is maintained and extended and the trust in the marketplace because trust is the fundamental of all human relationships and that's why if you act ethically and ensure the data use is ethical and you are fully accountable for that, then your brand is trustworthy and I think that is the most important. That's what we're all trying to achieve, so there's many, many companies get it and are trying to stand up programs or extend programs that really get at this fundamental of trust and operating ethically. (Interviewee #6).
While legal compliance is necessary for building a strong reputation and trusted relationships, it is far from sufficient. Negative incidents that do not violate the law can still affect the public’s perception of a company. For example, one interviewee pointed to the recent controversy in which ProPublica found that it was able to market ads to Facebook users who included the term “Jew Hater” in their profile.
Well obviously, it’s a big reputational issue for Facebook. They don’t want to be viewed as a company that’s serving up ads based on antisemitism, and likewise Google doesn’t want to do that as well. So, it doesn’t matter what the law requires. It’s really a question of what’s good for their business’s reputation. And we run into that a lot with companies. (Interviewee #23).1
Companies that engage directly with consumers have the strongest incentive to avoid negative incidents and protect reputation and trust. For example, an interviewee from the retail industry explained that their company had developed a management approach to vetting advanced analytics that centered on whether the data practice would be seen as benefiting the consumer.
After ascertaining that a given project complies with the law, the company then asks: “does it put the customer first, is it something we ought to do, is it brand right. . . . If we have a reputational hit and we have customers that either decrease spend or are not spending with us at all for whatever reasons, that's really something that we want to avoid. That's the harm, and that's the basis. But at the end of the day, it's really about the customer . . . does this put the customer first?”. (Interviewee #17).
Other companies, particularly those that do not transact directly with consumers, focused on their standing with the general public. For example, one interviewee attributed their company’s data ethics initiative to its decades-long reputation as “a really ethical company in the eyes of the public,” and to the company’s desire to maintain this reputation.
Other companies worried more about their reputation with regulators. As they saw it, legal compliance was not sufficient to maintain a good relationship with these public officials. They also needed to show that they were good actors. A high-profile incident involving the unethical use of advanced analytics could damage regulators’ image of them, even if it did not involve a violation of law. The Facebook-Cambridge Analytica episode, which may or may not have involved a legal violation but certainly caused regulators to scrutinize Facebook more carefully, illustrates this. As one interviewee explained:
I think some of the companies are very motivated by wanting to . . . have good, trustworthy relationships with regulators. So they're seeking to balance a number of different factors: how regulators perceive them, how their customers perceive them as well as how actively they're able to use and move data around the world. (Interviewee #16).
One interviewee focused, not on customers or regulators, but on their company’s reputation among its business partners. As this interviewee saw it, individual consumers lack the resources and expertise to assess meaningfully a company’s data practices. Business partners, particularly those who share their data with a company, are much more likely to scrutinize these practices carefully, especially where a negative incident could reflect back on them. A company needs to handle data ethically in order to earn the trust of these business partners.
As we all know, [individual] people don't read privacy policies . . . . When you're [a company that is] signing up for a CRM [customer relationship management] contract, for years and millions of dollars, you're reading every word of every agreement, right? . . . . Companies that sell these types of products need to make sure what they say in these contracts is true. But, more importantly that they're following their own controls and public statements about privacy and security protocols and living up to the values that they leverage when they sell these products. (Interviewee #9).
The survey data, however, suggests that reputation among business partners may not be a common driver of data ethics management, at least as compared to consumer-oriented reputations. As illustrated in Fig. 5.1, we found that companies in our sample that sell primarily to businesses are less likely to have a policy in place to manage the risks from advanced analytics and AI than companies that sell primarily to consumers. Although this difference is not statistically significant (p = 0.21), it is certainly noticeable within the confines of our sample.
In sum, the interviews suggest that—whether to protect their standing with customers, regulators, business partners, or all three—companies pursue data ethics in part to protect their reputation as responsible stewards of people’s data, even where the law does not require them to do so.2
Preying on vulnerable populations, treating people unfairly, manipulating people in ways that could harm them . . . . There's some of that stuff that's perfectly legal, but it still may not be a good business decision. I'll throw out the word ethics. It's not the ethical thing to do. Some companies that I work with, they take that stuff very, very seriously. They don't want to do things that feel, or could be perceived as, unethical. (Interviewee #12)
The survey data supports the idea that trust and reputation are important drivers of corporate data ethics management. For example, in addition to asking survey respondents whether their company had adopted a policy for managing the risks of big data, we also asked respondents whether the media (Fig. 5.2), advocacy groups (Fig. 5.3) or employees or investors (Fig. 5.4) had brought pressure on companies in their industry to achieve better data ethics performance. As conveyed in the following figures, companies were more likely to have a policy for managing big data’s risks when their industry had experienced such pressures. While these findings are not statistically significant, the direction of associations is informative. Collectively, Figs. 5.2, 5.3 and 5.4 suggest a link between external pressures and having a policy in place to manage the risks from big data. Perhaps most notably, more than 75% of companies in industries that faced media exposés, investigative journalism, or media criticism related to data ethics had a voluntary policy, compared to only 40% in industries that had not faced this kind of scrutiny.
While the current survey was designed as exploratory and does not seek to assess causality, we did look further into the timing of media pressures and the creation of policies. Approximately 50% of respondents indicated that the media pressures on their industry began in 2015–2019 (rather than in earlier periods), and almost 75% of these companies adopted their policies between 2016 and 2019. The overlap of periods suggests that external pressures and data ethics policies tended to grow in tandem over a relatively short period of time, as opposed to one greatly preceding the other. This provides a baseline for future research into the more precise timing of public pressures, media coverage, and the development of company policies.

5.2 Anticipate Emerging Regulation

Companies generally put resources into curbing externalities because laws require them to do so. At the time of the data collection for this book (2017–2019), US law did not directly require organizations to weigh the benefits and risks that their use of advanced analytics and AI could create. American companies that processed Europeans’ personal data did have to comply with Article 22 of the European Union’s General Data Protection Regulation (GDPR), which expressly regulates “automated decision-making.” But Article 22 required merely that organizations include a human in the decision-making loop. It hardly mandated that they undertake data ethics management.3
Instead, the interviewees explained that it was the likelihood of future regulation that motivated their companies to invest in data ethics management. At the time of the interviews, the Facebook-Cambridge Analytica story, which involved Cambridge Analytica’s use of advanced analytics to manipulate voters,4 was fresh news and was sparking interest in federal privacy legislation. Congress was considering a number of privacy bills, some of which would have curtailed abusive big data analytics practices (Kerry 2019). At the same time, the Federal Trade Commission suggested that it might use its Section 5 unfairness authority to rein in algorithms that had disparate, negative impacts on racial minorities or other protected classes.5
Due to developments such as these, the interviewees believed that the future regulation of advanced analytics and AI was likely. The survey respondents shared this view, although they were a bit less certain on this point than the interviewees. We asked the survey respondents whether they agreed—from 1 (strongly disagree) to 5 (strongly agree)—that there would be new US Federal or State government regulation of big data analytics in the coming years. As conveyed below in Fig. 5.5, almost 70% of the sample agreed that some state regulation was likely, while almost 50% agreed that federal regulation was likely.
With the benefit of hindsight, we can now see that the interviewees and survey respondents were right about how the law was likely to develop. In the years since we collected our data, the European Union has proposed, and will very likely soon pass, the AI Act (Commission, EU 2021). In the US Senate alone, Senator Wyden has introduced the Algorithmic Accountability Act (US Cong., Senate 2022), Senator Coons the Algorithmic Fairness Act of 2020 (US Cong., Senate 2020), and Majority Leader Schumer has promised an “all-hands-on-deck effort in the Senate, with committees developing bipartisan legislation, and a bipartisan gang of non-committee chairs working to further develop the Senate’s policy response” to AI. The bipartisan American Data Privacy and Protection Act, the leading federal privacy bill, contains a provision (Section 207) devoted exclusively to “Civil Rights and Algorithms.” State and local governments have actually passed legislation that directly regulates advanced analytics and AI. Notable examples include a Colorado statute to prohibit unfair algorithmic discrimination in the insurance industry, and a New York City law requiring independent audits for algorithmic bias before employers can use algorithmic decision-making in their hiring processes. The survey respondents were right, not only about the likelihood of future regulation, but about the fact that state regulation was likely to precede federal action (see Fig. 5.5).
At the time of data collection these legislative proposals and actions were still in the future. But companies’ anticipation of them formed a second, major motivation for engaging in “beyond compliance” data ethics management. The interviewees explained that their companies hoped, through these actions, to achieve one or more of three objectives. To begin with, some companies believed that if industry could demonstrate that it understood and was able to rein in advanced analytics’ harmful aspects, it might render stringent government regulation unnecessary and prevent its passage. As one interviewee who worked with a variety of businesses on their data ethics initiatives explained:
a business with smarts, they would be advocating for self-regulatory or even co-regulatory types of models that held them accountable to a different standard of accountability or stewardship as a means to stave off what will invariably be badly written [regulation that will] have negative consequences from a data perspective to businesses. (Interviewee #7).
A second group of companies focused not so much on pre-empting future regulation as on shaping it. They worried that ill-informed government regulation could be incompatible with their business models and operations. They believed that, if they took the initiative to develop and implement strategies for reducing advanced analytics’ harms, policymakers might draw on these models when drafting legislation and regulations. That could make future regulation both more effective and more feasible from a business perspective.
But the smart ones are going to say wait a sec, this is the inevitable future and I want to stay at least one step ahead of it. And they're going to start to both work to influence the development of those regulatory guidance frameworks and . . . to implement their own ethical or fair data processing standards as a means to achieving sort of trusted data optimization. (Interviewee #7).
Finally, there were companies that wanted to get ahead of future regulation so that, when it did come, they could adapt to it more easily, and at lesser cost, than their competitors. These “smart organizations are seeing the tea leaves and saying, ‘I really want to make sure that I stay at least one step ahead of that.’” (Interviewee #7). These three motivations—to pre-empt, shape and prepare for future regulation—were important drivers of corporate investment in responsible advanced analytics and AI. Still, the move in this direction remained more the exception than the rule. “[F]or many organizations, they just don’t want to invest in something unless they have to.” (Interviewee #7). In our survey, we see a positive association between having a policy in place to address the risks of advanced analytics, and the extent to which respondents see US state regulation in the near future (Fig. 5.6).

5.3 Recruit and Retain Employees

Several interviewees linked data ethics to employee retention. This was particularly evident among firms that employed data scientists. At the time of our research, these individuals were in very high demand. They were more likely to move to another employer when the company for which they worked used data in ways that offended their values. A proactive approach to data ethics thus became important to employee retention, which itself was critical to the company’s success.
I will say one thing about engineering companies in the Valley . . . the engineers themselves –these 24-year-old kids – are powerful. . . . They are valuable and they are the strength of the company. So, . . . there’s two markets you compete in for a tech company: one is to sell your products, the other is to attract talent – the talent market. If you don’t get the best talent, you don’t have the best product. . . . [T]hese are engineers coming out of generally elite, progressive, generally liberal institutions. They’re going to come in with sort of a mindset that is very pro-privacy and civil liberties. They want to do the right thing. (Interview #10).

5.4 Make Faster and Better Risk-Based Decisions

It may sound counter-intuitive, but the driving force for some companies to limit their use of advanced analytics was their desire to use advanced analytics more fully. This stemmed from companies’ uncertainty over the line between socially acceptable and unacceptable uses of this technology. Faced with this uncertainty, many companies found it difficult to make risk-based decisions about particular advanced analytics use cases. Risk-averse companies then held off on such uses, and so lost out on the value that advanced analytics, as applied to their data, could have generated.6 Companies that invested in data ethics management, and so developed a way to spot and navigate the ethical issues raised by their use of advanced analytics, not only protected people better but also improved their ability to make quick and effective decisions about whether to proceed with particular analytics projects. This freed them up to use advanced analytics, and their own data, more fully. One consultant who worked with many companies referred to this as overcoming “reticence risk.”
[Q]uite frankly, . . . the biggest driver of data value creation loss, and the increasing problem organizations face, is what I call self-inflicted reticence risk. It is their inability to make internal decisions about whether they should or shouldn't do something related to the use of data. . . . In the absence of a decision-making process inside the company, the risk voices always win. The result is these organizations end up leaving value on the table. . . . The decision-making process should address: ‘I don't know whether I can use data;’ and, ‘even if I legally can do it, I don't know whether I should do it.’ Absent a more formalized decision-making process, organizations find that many, if not every, stakeholder inside the company has an opinion on this and that and these opinions cannot be reconciled. The result is that data activity grinds to a halt . . . and these organizations end up only using only 30%, 40% of the data or the value because they can't reconcile the risk. . . . Reticence risk is leaving value on the table because you just can't make a risk-based decision. (Interviewee #7).

5.5 Achieve Competitive Advantage

A surprisingly large number of interviewees said that their companies pursued data ethics for competitive advantage. In a sentiment related to the above comments on reputation and trust, some focused on the market benefits of a good reputation.
[I]f you get people to believe . . . that you are handling things in a responsible manner, they're more likely to keep doing business with you or want to do business with you. . . . It can help you in the marketplace. . . [Anytime] we are able to talk about how we are handling or managing data in a responsible way, it does nothing but help our brand and our reputation. (Interviewee #9)
Others framed the advantage differently. As they saw it, an ethical product or project is one that benefits, rather than harms, customers. They believed that, by proactively working to prevent harm to consumers, they would improve the customer experience and so make the company’s products more attractive.
[I]t's also . . . what's the customer experience? It's just making them think through things that I wouldn't have had a problem with. They're not necessarily data ethics concerns. But suddenly they'll just realize this is actually to have a crappy customer experience. . . . So they're actually seeing this as a benefit from the business perspective. (Interviewee #20).
This is also how they explained their role to the business units that they worked with. They found this message to be more effective than an explicitly ethics-based one.
If you go to a team and say, "Hey, I'm here to do ethics review." They immediately think, "What? Am I being unethical? Am I doing something wrong?" . . . It sends the wrong message. So I often frame ethics questions as product improvement questions. Like, "I just want to know how you're doing things, and let me see if I can help you figure out what the sensitivities might be, and how we can resolve them." And those questions then, are ethics questions. And that's really my job right now. . . So, that's what we're doing here, is we're making these projects better because we ask questions. . . . we have many, many examples where we are truly proud of the work that we did as a team, because we know that we improved the project. (Interviewee #14).
A 2016 PricewaterhouseCoopers report suggested a number of levels on which data ethics could create competitive advantage, maintaining that “[t]hose that [have more developed ethical frameworks] could find themselves a magnet for employees, customers, and even investors who increasingly favor organizations that operate ethically and responsibly. In fact, several studies have confirmed that companies operating ethically outperform others in revenue and profitability . . . [they] gain a strategic advantage by excelling in leveraging data’s upside while managing risk and reducing costs.” (PwC 2016). One data ethics manager expressed a similar sentiment: “the time has come . . . where privacy is a differentiator, and data ethics is even something that’s going to further differentiate . . . I don’t think there’s any way you can escape it.” (Interviewee #16).
Data ethics managers likely have a vested interest in believing that their work makes their company more competitive and successful. These same managers may not be the most reliable reporters on whether, in fact, data ethics management has this effect. Based only on these reports, we cannot conclude whether data ethics does, in fact, produce such an advantage. But we can report that some data ethics managers see themselves, and explain their role to their companies, in this way.

5.6 Fulfill Corporate Values

The above motivations behind data ethics management are rooted, in one way or another, in the company’s self-interest. However, a number of the interviewees felt strongly that their companies pursued data ethics for more intrinsic reasons having to do with the company’s core values. As one said:
[T]hat's not the only driver, following the rules, following the law. I think [my company] has other drivers. One is company values. Those might extend beyond the exact letter of the law, and I think [company] is a values-based company, and I'm not just saying this as a marketing pitch, this is what I think. I think [company] has strong values about protecting its customers, protecting its own information, and its employees, and then making sure that it's a good steward of public information. [The company] puts a lot of energy and puts forth a lot of resources to be a good steward in those regards. (Interviewee #18).
The above, anecdotal account of the motivations behind data ethics management does not allow us to say which motivations are most important. But it does suggest that there are many reasons why a business might put resources into data ethics management, even when the law does not yet require it to do so. That explains why some companies might engage in beyond compliance data ethics management. But it does not yet tell us how they do so. What does data ethics management look like in practice? What are its main features? What challenges does it encounter? Chapters 6–10 share what we learned about how organizations pursue data ethics management.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. An attorney put it to their clients this way: “I can tell you this thing you're doing, that you're proposing, it's perfectly legal. But there's a really good chance there's going to be a really crappy New York Times article about you on this. I don't think you want that. So, let's brainstorm about ways that we can avoid that and achieve the business objective you're trying to achieve in a different way.” (Interviewee #12).
2. One interviewee explained that their company derives great value from its reputation and that this justifies an investment in data ethics. “A company kind of built their reputation over a 25-year period. And it's worth billions of dollars as an asset. And so they're very protective of that. And so... they are willing to devote substantial resources in making sure that they avoid [things that detract from their reputation].” (Interviewee #15).
3. Insofar as the GDPR had any impact on U.S.-based businesses’ pursuit of data ethics, it was likely due, not to Article 22, but to Article 6, titled “Lawfulness of Processing.” Where an organization is unable to secure data subject consent—as is often the case when it comes to advanced analytics—Article 6 allows it to process personal data if the organization’s “legitimate interests” in processing the data outweigh the data subject’s “fundamental rights and freedoms.” By getting companies to articulate and balance benefits and risks, Article 6 likely made a greater contribution to data ethics than Article 22. (Interviewee #22). The GDPR’s most important contribution to corporate data ethics may have stemmed from its legitimate interests balancing test. As one interviewee put it, “there is not a lot of difference between the type of analysis you do, to make sure that the score card is balanced for what legitimate interest, and what you would do to make sure that benefits and risks are balanced in big data.” (Interviewee #1).
4. Cambridge Analytica obtained the data of 87 million Facebook users. Using advanced analytics, it inferred the personality types of these individuals. It then sent them political ads, at the behest of the Trump campaign, that appealed to each voter’s particular personality type and so influenced them in ways that they could not consciously detect. Cambridge Analytica’s use of advanced analytics, and its potential impact on the Presidential election, is one of the reasons that this incident outraged so many people.
5. Federal Trade Commission, Big Data: A Tool for Inclusion or Exclusion? 23 (January 2016) (stating that “Section 5 may also apply . . . if products are sold to customers that use the products for discriminatory purposes. The inquiry will be fact-specific, and in every case, the test will be whether the company is offering or using big data analytics in a deceptive or unfair way.”)
6. One study estimated that the median Fortune 1000 company could increase its revenue by $2.01 billion a year just by marginally improving the usability of the data already at its disposal. Anitesh Barua, Deepa Mani, and Rajiv Mukherjee. “Measuring the Business Impacts of Effective Data,” University of Texas McCombs School of Business, September 1, 2010, p. 3.
References
Commission, EU. 2021. Laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Proposal for a regulation of the European parliament and of the council. COM (2021) 206 final.
Kerry, Cameron. 2019. Breaking Down Proposals for Privacy Legislation: How Do They Regulate? Brookings (March 8, 2019).
PwC. 2016. Responsibly Leveraging Data in the Marketplace: Key Elements of a Leading Approach to Data Use Governance.
DOI: https://doi.org/10.1007/978-3-031-21491-2_5