Introduction
Conceptual Comparison of Greenwashing and Machinewashing
Greenwashing and Machinewashing: State-of-the-Art and Core Assumptions
Greenwashing
Author (year) | Title | Concept/definition |
---|---|---|
Oxford English Dictionary (2012) | Greenwashing, n | “Disinformation disseminated by an organization so as to present an environmentally responsible public image; a public image of environmental responsibility promulgated by or for an organization, etc., but perceived as being unfounded or intentionally misleading” |
Merriam-Webster Dictionary (2020) | Greenwashing | “practice of promoting environmentally friendly programs to deflect attention from an organization’s environmentally unfriendly or less savoury activities” |
Laufer (2003) | Social accountability and corporate Greenwashing | “[F]orms of disinformation from organizations seeking to repair public reputations and further shape public images” |
Walker and Wan (2012, p. 227) | The harm of symbolic actions and green-washing: corporate actions and communications on environmental performance and their financial implications | “[A] strategy that companies adopt to engage in symbolic communications of environmental issues without substantially addressing them in actions […]” |
Seele and Gatti (2017, p. 239) | Greenwashing revisited: in search of a typology and accusation-based definition incorporating legitimacy strategies | “[G]reenwashing as co-creation of an external accusation toward an organization with regard to presenting a misleading green message.” |
Bowen (2014, p. 33) | After greenwashing: symbolic corporate environmentalism and society | “Greenwashing is a special case of ‘merely symbolic’ in which firms deliberately manipulate their communications and symbolic practices so as to build a ceremonial façade” |
Marciniak (2010, p. 49) | Greenwashing as an example of ecological marketing misleading practices | “[T]he unjustified appropriation of environmental virtue by a company to create a pro-environmental image.” |
Matejek and Gössling (2014, p. 572) | Beyond legitimacy: a case study in BP's “Green Lashing” | “[S]ymbolic actions may even eclipse substantive activities entirely, a phenomenon generally referred to as greenwashing, or window dressing” |
Marquis et al. (2016, p. 483) | Scrutiny, norms, and selective disclosure: a global study of greenwashing | “[A] symbolic strategy whereby firms seek to gain or maintain legitimacy by disproportionately revealing beneficial or relatively benign performance indicators to obscure their less impressive overall performance” |
Guo et al. (2017, p. 524) | A path analysis of greenwashing in a trust crisis among Chinese energy companies: the role of brand legitimacy and brand loyalty | “Greenwashing here refers to the integration of two corporate behaviors: poor environmental performance and positive communication about environmental performance” |
Sheehy (2014, p. 626) | Defining CSR: problems and solutions | “[…] i.e. merely businesses claiming environmental credentials and other social contributions while continuing to generate excessive harms such as social costs, i.e. ‘business as usual’” |
Machinewashing
Author (year) | Title | Concept/definition |
---|---|---|
Wagner (2018, p. 1) | Ethics as an escape from regulation: from ethics-washing to ethics-shopping? | “[E]thics is presented as a concrete policy option. Striving for ethics and ethical decision-making, it is argued, will make technologies better. […] Unable or unwilling to properly provide regulatory solutions, ethics is seen as the ‘easy’ or ‘soft’ option which can help structure and give meaning to existing self-regulatory initiatives. In this world, ‘ethics’ is the new ‘industry self-regulation’” |
Obradovich et al. (2019) | Beware corporate ‘machinewashing’ of AI | Today, we may be witnessing a new kind of greenwashing in the technology sector. Addressing widespread concerns about the pernicious downsides of artificial intelligence (AI)—robots taking jobs, fatal autonomous-vehicle crashes, racial bias in criminal sentencing, the ugly polarization of the 2018 election—tech giants are working hard to assure us of their good intentions surrounding AI. But some of their public relations campaigns are creating the surface illusion of positive change without the verifiable reality. Call it “machinewashing” |
Bietti (2020, p. 210) | From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy | “[T]he term has been used by companies as an acceptable façade that justifies deregulation, self-regulation or market driven governance, and is increasingly identified with technology companies’ self-interested adoption of appearances of ethical behavior” |
McMillan and Brown (2019, p. 1) | Against ethical AI | In reference to Wagner (2018) it is summarized: “ethics washing is the use of working groups, guidelines, and manifestos as a counterbalance to calls for legal and regulatory frameworks which would ensure the safety of the public.” |
Rességuier and Rodrigues (2020, p. 2) | AI ethics should not remain toothless! A call to bring back the teeth of ethics | “Using ethics to prevent the implementation of legal regulation that is actually necessary is a serious and worrying abuse and misuse of ethics” |
Floridi (2019, p. 186) | Translating principles into practices of digital ethics: five risks of being unethical | “Digital ethics shopping = def. the malpractice of choosing, adapting, or revising (‘mixing and matching’) ethical principles, guidelines, codes, frameworks, or other similar standards (especially but not only in the ethics of AI), from a variety of available offers, in order to retrofit some pre-existing behaviours (choices, processes, strategies, etc.), and hence justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards” |
Floridi (2019, p. 187) | Translating principles into practices of digital ethics: five risks of being unethical | “Ethics bluewashing = def. the malpractice of making unsubstantiated or misleading claims about, or implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is.” |
Floridi (2019, p. 188) | Translating principles into practices of digital ethics: five risks of being unethical | “Digital ethics lobbying = def. the malpractice of exploiting digital ethics to delay, revise, replace, or avoid good and necessary legislation (or its enforcement) about the design, development, and deployment of digital processes, products, services, or other solutions” |
Floridi (2019, p. 191) | Translating principles into practices of digital ethics: five risks of being unethical | “Ethics shirking = def. the malpractice of doing increasingly less ‘ethical work’ (such as fulfilling duties, respecting rights, and honouring commitments) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be” |
Coeckelbergh (2020, p. 4) | Green leviathan or the poetics of political liberty: navigating freedom in the age of climate change and artificial intelligence | “What if companies’ insistence that they will develop AI for ‘the earth’ and use AI in a sustainable and climate-friendly way is just ‘ethics washing’, a fig leaf for doing business as usual?” |
Yeung et al. (2020, p. 7) | AI governance by human rights-centred design, deliberation and oversight: an end to ethics washing | “It is hardly surprising that critics have dismissed these voluntary codes of conduct as ‘ethics washing’ given overwhelming evidence that the tech industry cannot be relied upon to honour its voluntary commitments” |
Umbrello and van de Poel (2020, p. 21) | Mapping value sensitive design onto AI for social good principles | “[…] there is a danger that contributions to societal challenges and SDGs are used for legitimisation of AI technologies that do not respect some fundamental ethical principles, i.e. there is a danger of ethical white-washing (which is already visible at the webpages of some large companies)” |
Metzinger (2019) | EU guidelines: ethics washing made in Europe | “Industry organizes and cultivates ethical debates to buy time – to distract the public and to prevent or at least delay effective regulation and policymaking.” |
Hao (2019) | In 2020, let’s stop AI ethics-washing and actually do something | “We’re falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises” |
Johnson (2019) | How AI companies can avoid ethics washing | “[E]thics washing—also called ‘ethics theater’—is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes ‘AI for good’ initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other” |
Susser (2019) | Ethics alone can't fix big tech | “The result is “ethics theater”—or worse, “ethics washing”—a veneer of concern for the greater good, engineered to pacify critics and divert public attention away from what’s really going on inside the A.I. sausage factories.” |
Kinstler (2020) | Ethicists aim to save tech’s soul. Will anyone let them? | “[T]he practice of merely kowtowing in the direction of moral values in order to stave off government regulation and media criticism.” |
Waddell (2019) | The dangers of “AI washing” | “This “AI washing” threatens to overinflate expectations for the technology, undermining public trust and potentially setting up the booming field for a backlash.” |
Idiosyncrasies of Machinewashing
Disruptive AI
Broad Scope and Scalability
Lack of Societal and Governmental Watchdogs
Tangibility of AI Issues
Opacity and Complexity of AI: Difficult to Grasp for Stakeholders
Fluid Algorithms
Automated Decision Making and Unknown Consequences
Idiosyncrasies of Machinewashing | Machinewashing emerged rapidly alongside new and disruptive AI systems, challenging societal values and legal systems; the broad range of AI use cases opens a wide spectrum for machinewashing; lack of dedicated civil society and governmental watchdogs; AI issues (privacy, algorithmic biases, discrimination, etc.) and machinewashing are not tangible at first glance; the opacity and complexity of AI are difficult for stakeholders to grasp, and machinewashing can be hidden in AI black boxes; fluid algorithms can quickly change shape (software patches), making machinewashing difficult to capture; automated decision making and unknown consequences obscure responsibility for unintended adverse outcomes |
Antecedents | Nascent activism, NGO, and media attention; uncertain regulatory environment; regulatory pressure |
Underlying goals | Instrumental/normative corporate motives; reputation and competitive advantage; legitimacy and social license to operate; individual motives; firm visibility/size; maintaining power and authority; control of key resources (algorithms, data) and rhetoric |
Practice* | Misleading communication gestures accompanied by symbolic action and open/covert corporate political activity (on multiple levels: legislative, judicial, and academic lobbying) |
Outcomes | External: ethical image; indicating connection and adherence to principles; prevention of regulation or justification for deregulation/self-regulation; unintended outcomes (network effects); distraction from major issues related to the core business. Internal: appropriation of (abstract) ethical virtues; financial/image gain; firm capabilities (operational efficiency, product quality, demographic diversity); risk; unintended outcomes such as job polarization |
Definition | Machinewashing is defined as a strategy that organizations adopt to engage in misleading behavior (communication and/or action) about ethical Artificial Intelligence (AI) / algorithmic systems. Machinewashing involves misleading information about ethical AI communicated or omitted via words, visuals, or the underlying algorithm of AI itself. Furthermore, and going beyond greenwashing, machinewashing may be used for symbolic actions such as (covert) lobbying and prevention of stricter regulation. |
Antecedents
Greenwashing
Machinewashing
Underlying Goals
Greenwashing
Machinewashing
Practice
Greenwashing
Type | Description | Greenwashing Example | Machinewashing Example |
---|---|---|---|
(1) Mislead with words | |||
(a) Misleading/vague claims | Broad claims without any specific meaning | Eco-friendly, environmental-friendly, eco-safe, all-natural, non-toxic, eco-conscious (see, e.g., Futerra and Terrachoice greenwash criteria in Zanasi et al., 2017) | Ethical AI, explainable AI, fair AI, trustworthy AI, human-friendly AI, sustainable AI, AI to benefit everyone |
(b) Inaccurate claims | Claims or data that are wrong or made-up (closely related to 3 (a) complete omission of information) | “Common examples are tissue products that claim various percentages of post-consumer recycled content without providing any evidence” (TerraChoice, 2010, p. 10) | “IBM Watson is helping doctors outthink cancer, one patient at a time.” […] “IBM needs to be held accountable for the image that it’s producing of its successes compared to what they’re actually able to deliver, because at a certain point it becomes an ethical issue… You’re telling cancer patients that they should have a higher feeling of hope about their outcome and then under-delivering on that—to me, that’s just dirty.” |
(c) Jargon claims | Claims that use language, terms, or jargon which do not resonate with stakeholders (especially customers) | Language and information that only an expert may understand (Futerra Sustainability Communications, 2009, p. 5) | Misleading and lengthy data and privacy policies, terms of service, and informed consent using legal and technical jargon (Obar & Oeldorf-Hirsch, 2020) |
(d) Meaningless /irrelevant claims | Stressing a trivial ethical/ green aspect, whereas remaining business practices go against ethical or environmental standards | “For example, if a company brags about its boutique green R&D projects but the majority of spending and investment reinforces old, unsustainable, polluting practices.” (Greenpeace Greenwash Criteria, in Zanasi et al., 2017, p. 65) | “YouTube has been driving millions of viewers to climate misinformation videos every day, a shocking revelation that runs contrary to Google's important missions of fighting misinformation and promoting climate action” (Corbin, 2020) |
(e) Overstatements/exaggerations | Claims that make the organization or its products look better than they are; overstatements go far beyond the capabilities of the product or organization | “For example, if a company were to do a million dollar ad campaign about a clean up that cost less” (Greenpeace Greenwash Criteria, in Zanasi et al., 2017, p. 65) | “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future” (Ng, 2016) |
(2) Mislead with visuals or graphics | Relates to images and video footage used in advertising, and seals, certifications and labels invoking unjustified commitments | “Green images that indicate a (un-justified) green impact e.g., flowers blooming from exhaust pipes” and “A label that looks like a third party endorsement … except it’s made up” (Futerra Sustainability Communications, 2009, p. 5) | “Sophia is not the first show robot to attain celebrity status. Yet accusations of hype and deception have proliferated about the misrepresentation of AI to public and policymakers alike” (Sharkey, 2018) |
(3) Misleading by omission | |||
(a) Complete omission of information | Claims that are made without proof of evidence (scientific confirmation) | “It could be right, but where’s the evidence?” (Futerra Sustainability Communications, 2009, p. 5) | “IBM Watson is the Donald Trump of the AI industry—outlandish claims that aren’t backed by credible data […] There is no way to validate what we’re getting from IBM is accurate unless we test the real patients in an experiment” (Brown, 2017) |
(b) Selective disclosure | Presented information creates a positive impression, while relevant information is withheld | “For example, if an industry or company has been forced to change a product, clean up its pollution or protect an endangered species, then uses PR campaigns to make such action look proactive or voluntary.” (Greenpeace Greenwash Criteria, in Zanasi et al., 2017, p. 65) | Chatbots imitating humans and the challenge for consumers of knowing whether they are interacting with AI: “As this development gains traction, service providers have to decide whether to disclose the chatbot identity and, if so, whether to provide additional information about it. From an ethical viewpoint, withholding identity information does not prove tenable, as intransparency regarding the non-human chatbot identity may be perceived as deceptive and could be exploited by service providers” (Mozafari et al., 2020, p. 2916) |
(c) Incomplete comparison | Basis for comparison is not provided | “Acme is more effective” (Lyon & Montgomery, 2015, p. 227) | Ed Harbour, vice president of Implementation at IBM Watson […]: “I believe very strongly Watson is ahead of the competition and we’ve got to continue to push [to make Watson better]. No, I don’t think it’s something that anybody can just do.” (Brown, 2017) |
(d) Masking of information | Relevant consequences of product/or services are omitted | “The ad leaves out or masks important information, making the green claim sound better than it is” (Greenwashingindex, in Zanasi et al., 2017, p. 66) | “A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans. Before an A.I. system can learn, someone has to label the data supplied to it” (Metz, 2019) |
(4) Mislead with AI* | |||
(a) (Ab)using AI to imitate humans | Refers to the use of AI to deceive/mislead consumers and, or the wider public | - | “Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did” (Chesney & Citron, 2019, p. 1753) |
(b) Biased real-time information | AI presents biased real-time information which cannot be verified by consumers | - | Discussing geographic information systems, Wagner and Winkler (2019, p. 7) note: “there is a considerable risk that users misinterpret the data provided to them and make bad decisions based on false or at best misleading information.” |
(c) Leaving AI code undisclosed | Using intellectual property rights law to avoid disclosure of algorithmic code (related to misleading by omission) | - | Corporations may mislead about AI, leaving code undisclosed to avoid external assessment, referring to patent protection: “First, the overlap between, if not abuse of, intellectual property rights create a legal black box which is very difficult to open” (Noto La Diega, 2018, p. 15) |
(d) Hidden change of use case (function creep) | Using dynamic nature of algorithms (such as updates and patches) to adjust the mode of operation in the future | - | “Any change to the software of the system may affect the behaviour of the entire system or of individual components, extending their functionality, and these may change the system’s operational risk profile, including its capacity to operate in ways that might cause harm or violate human rights.” (Yeung, 2019, p. 63) |
(e) Obscuring responsibility | Obscuring responsibility for unintended outcomes of semi-automated systems to the human in the loop | - | “The collision of a Tesla car in semi-automated mode exemplifies the tendency to blame the proximate humans in the loop for unintended adverse consequences, rather than the surrounding socio-technical system in which the human is embedded” (Yeung, 2019, p. 61) |
(5) Mislead with symbolic action | |||
(a) Policy practice gap | Refers to an inconsistency between promises about initiatives and actual actions | “Such as efficient light bulbs made in a factory which pollutes rivers” (Futerra Sustainability Communications, 2009, p. 5) | “The decisions of internal AI ethics committees are subjected to internal limits, subordinated to the endorsement of high management and dependent on company funding. This dependency on the company’s benevolence makes such efforts inadequate for addressing serious cases of company misconduct and also importantly unfit for achieving desirable policy outcomes” (Bietti, 2020, p. 216) |
(b) Instrumentalization of ethics and moral philosophy* | Involves the instrumental use of ethics to achieve organizational outcomes | - | “[T]he trivialization of ethics and moral philosophy now understood as discrete tools or pre-formed social structures such as ethics boards, self-governance schemes or stakeholder groups” (Bietti, 2020, p. 210) |
(6) Mislead with (covert) lobbying | |||
(a) Legislative lobbying | Involves open or covert non-market actions aimed at favorable laws and regulations | “For example, if advertising or public statements are used to emphasize corporate environmental responsibility in the midst of legislative pressure or legal action” (Greenpeace Greenwash Criteria, in Zanasi et al., 2017, p. 65) | “Google led the way with what would become one of the world’s richest lobbying machines. In 2018 nearly half the Senate received contributions from Facebook, Google and Amazon, and the companies continue to set spending records” (Zuboff, 2021) |
(b) Academic lobbying* | Funding of research that favors corporate interests and helps to steer the academic debate | – | “Facebook has invested in the TU Munich – funding an institute to train AI ethicists. Similarly, until recently Google had engaged philosophers Joanna Bryson and Luciano Floridi for an ‘Ethics Panel,’ however this was abruptly discontinued at the end of last week. Had it not been for this, Google would have had direct access via Floridi, a member of HLEG AI, to the process by which this group will develop the political and investment recommendations for the European Union starting this month” (Metzinger, 2019) |
Machinewashing
Examples and Manifestations
Greenwashing
Machinewashing
Outcomes
Greenwashing
Machinewashing
From the Structural Analogy Towards a Definition of Machinewashing
The Analogy as Foundation for Future Machinewashing Research
Theory | Key assumptions | Examples of future research questions |
---|---|---|
Theories of organizations in their environments (Macro) | ||
Legitimacy theory | Organizations’ long-term survival hinges on legitimacy: conformity with (formal/informal) societal norms | How might mimetic pressures affect a firm’s use of machinewashing? Which kind of pressure (mimetic or normative) is more influential in adopting machinewashing practices? How does machinewashing relate to different types of legitimacy—pragmatic, moral, and cognitive? How does machinewashing affect the credibility of an AI strategy—or the organization’s reputation as such? |
Corporate political activity/lobbying | Organizations as strategic players shaping the non-market environment | Does machinewashing distract society from questioning the limits of current AI ethics programs and from pushing governments to adopt stricter regulations? How are lobbying expenditures against more stringent AI regulations related to organizational spending on AI ethics programs? What role do societal watchdogs or internal whistleblowers play in exposing a misalignment of the two chessboards? To what extent does organizational funding of public research favor corporate interests? Does the lobbying of academic institutions undermine the independence of academia? |
Resource-dependence theory | The long-term survival and growth of firms hinges on access to critical resources in the external firm environment | How does resource pressure impact the adoption and use of machinewashing? How is AI used as a resource itself to engage in machinewashing and influence external stakeholders? Are AI ethics boards used to limit dependence on or gain access to critical resources? |
Intermediate, organization-focused theories (Meso) | ||
Organizational institutionalism | Organizations are embedded in institutional arrangements, adapting to internal and external pressures | To what extent are AI ethics principles aligned with day-to-day organizational practices? How do internal procedures and the goal to preserve organizational efficiency relate to the adoption of machinewashing practices? What role can ethics boards play in ensuring that ethics guidelines and codes are translated to daily practice? To what extent are specific machinewashing practices (already) institutionalized in a given organization? |
Instrumental and deliberative CSR | A discursive approach to organizations’ responsibilities (e.g., discourse ethics, agonistic rhetoric, license to critique) | Can a deliberative approach to AI ethics offset the lack of societal watchdogs and assist in transferring principles into practice? Who influences the current AI ethics discourse, and why? In which way can organizational ethics boards contribute to the formation of ethics codes? How should an ethics board be structured to serve as an independent forum for discussion on weaknesses of AI? How are power and information asymmetries in the AI ethics discourse related to machinewashing practices? How can symbolic practices (e.g., ethics working groups and multi-stakeholder partnerships) be turned into credible and constructive discussions on ethical AI? |
Signaling theory | Observing organizational behavior from an economic self-interest perspective or instrumental rationale | Does machinewashing pay off? Is machinewashing used to change perceptions about organizational AI ethics performance? Does evidence exist that the market values machinewashing? Does greater transparency about AI ethics programs, such as the disclosure of algorithmic code, mitigate information asymmetries between organizations and their stakeholders? |
Theories of individuals within and around organizations (Micro) | ||
Agency theory | Issues arising when the principal employs an agent for value creation | (How) Does machinewashing impact agency cost? How can organizations control and verify that an agent acts in the principal's interest, not engaging in machinewashing practices? How do AI ethics programs relate to individual employees? What are the impacts of machinewashing on employee performance, well-being, and satisfaction? How can individual members of the organization be included in creating AI ethics codes and guidelines? How can individuals (continuously) challenge and be challenged by the code of their respective organizations? How can whistleblowers speaking out about weaknesses of corporate AI ethics be better protected/incentivized? |
Attribution theory | Individuals’ attribution processes about organizational behavior | How do observers make sense of machinewashing communication? How is machinewashing related to purchasing and investment intentions and product as well as organizational loyalty? What micro-level attribution processes occur when consumers perceive AI ethics programs to be misleading? How do consumers and the wider public perceive deceptive practices, such as using intellectual property rights law to avoid disclosing algorithmic code or obscuring responsibility for unintended outcomes of semi-automated systems by blaming the human in the loop? |