Published in: Minds and Machines 2/2021

05-06-2021 | General Article

The Missing Ingredient in the Case for Regulating Big Tech

Author: Bartlomiej Chomanski

Abstract

Having been involved in a slew of recent scandals, many of the world’s largest technology companies (“Big Tech,” “Digital Titans”) embarked on devising numerous codes of ethics, intended to promote improved standards in the conduct of their business. These efforts have attracted largely critical interdisciplinary academic attention. The critics have identified the voluntary character of the industry ethics codes as among the main obstacles to their efficacy. This is because individual industry leaders and employees, flawed human beings that they are, cannot be relied on voluntarily to conform with what justice demands, especially when faced with powerful incentives to pursue their own self-interest instead. Consequently, the critics have recommended a suite of laws and regulations to force the tech companies to better comply with the requirements of justice. At the same time, they have paid little attention to the possibility that individuals acting within the political context, e.g. as lawmakers and regulators, are also imperfect and need not be wholly compliant with what justice demands. This paper argues that such an omission is far from trivial. It creates a heavy argumentative burden on the part of the critics that they by and large fail to discharge. As a result, the case for Big Tech regulation that emerges from the recent literature has substantial lacunae and more work needs to be done before we can accept the critics’ calls for greater state involvement in the industry.


Footnotes
1
See e.g. Floridi et al. (2018), Loi et al. (2020), Yeung et al. (2020) for discussions of many of these codes.
 
2
Yeung et al. worry that the codes are not built on any coherent normative foundations and lack established mechanisms for resolving conflicts between different values. Greene et al.’s criticism is more difficult to pin down: they claim that AI ethics is committed to “ethical universalism,” thus excluding relativist approaches; that it doesn’t explore the possibility that the new technologies should be banned; and that it ignores such issues as prison abolitionism and workplace democracy. While it’s clear from the paper’s tone that Greene and colleagues strongly disapprove of these assumptions and omissions, they don’t offer explicit arguments as to what is wrong with them or why alternative approaches are preferable.
 
3
A note on terminology: I use the terms “Big Tech” and the less popular “Digital Titans” (also used by Yeung et al.) interchangeably to refer to large, mostly US-based technology companies, such as Amazon, Apple, Alphabet, Facebook, and Microsoft.
 
4
In making their case against the unregulated or self-regulated status quo, the critics of AI ethics pay heed to the following principle, articulated by Brennan (2007): “the limits to human benevolence, to civic virtue, are a fundamental constraint in the pursuit of normatively desirable ends. Moral reasoning on its own can never be taken to be compelling for action.” However, as we shall see, once they enter into the analysis of their preferred public policy solutions, they abandon Brennan’s dictum, which goes on to caution against just such an approach: “any normative social theory that simply assumes compliance [with morality] is therefore seriously incomplete at best and at worst can encourage action that is perverse in its consequences. Misspecifying the constraint of human moral frailty is no less an error than misspecifying other kinds of constraints.”
 
5
This is not to accuse state agents of some special venality. They may well think that their re-election or bigger budgets would serve the public interest.
 
6
The European Union’s GDPR has been found to have “worsened one of the main problems experienced in digital markets today, which is increased market concentration and reduced contestability. In addition, the GDPR seems to have given large platforms a tool to harm rivals by reducing access to the data they need to run their business … [Moreover] the costs of implementing the GDPR benefit large online platforms, and … consent-based data collection gives a competitive advantage to firms offering a range of consumer-facing products compared to smaller market actors” (Geradin et al., 2020).
 
7
A majority of the investors interviewed by Le Merle et al. (2011) expressed strong reservations about investing in a regulatory environment with an opt-in system of data collection and a “Do Not Track” registry.
 
8
The European Commission’s (2021) proposed rules for “ex ante conformity assessment” of “high-risk” AI systems (roughly, assessments of risks and benefits prior to market entry) may have adverse impacts on innovation, imposing delays and compliance costs and incentivizing exit from the European market entirely (Borggreen, 2020).
 
9
As Narayanan and Lee-Makiyama (2020) find, “[t]he economic impacts of shifting from ex-post to ex-ante [regulation] in the online services sector as stipulated by the proposals of Digital Services Act [will lead] to a loss of about 85 billion EUR in GDP and 101 billion EUR in lost consumer welfare based on a baseline value of 2018. Also, it will reduce the labour force by 0.9%.”
 
10
“Total capital invested in [technology companies based in] North America… is approaching nearly 5 × the level of investment in Europe” (The State of European Tech, 2020). This is despite the EU having around 100 million more people and accounting for about the same share of global GDP as the United States. Since financing, especially in the form of venture capital, leads to increases in economic growth (Samila & Sorenson, 2011) and consumer welfare (Agmon & Sjögren, 2016), this is indicative of EU policy’s adverse effects on individual consumers.
 
11
“Our report finds tech founders are calling for simplified employment regulations, while Politico data suggests [EU] policymakers' attention is elsewhere: they are less focussed on the Digital Single Market than two years ago, and more focussed on the creation of a digital tax and the activities of big US tech firms” (The State of European Tech, 2019).
 
12
Thierer and Haaland (2021) document expensive failures of state-backed projects like the Quaero search engine and the Minitel network, funded generously by the French and German governments and promoted as homegrown alternatives to Google and the Internet itself, respectively. I explain in more detail in Footnote 20 below why one could expect policies of this nature to prevail.
 
13
Of course, the state is special in certain other ways: crucially, it has powers that no other institution possesses.
 
14
Nor do the authors ever make the case that the regulators will be better people than those they regulate.
 
15
Indeed, from among the authors discussed here, Cath et al. are perhaps the only ones to point to an asymmetry between market actors and government officials.
 
16
One could object to this argument by pointing out that leaving social media and other digital services imposes a serious cost on users, which could prevent them from holding Big Tech accountable by exiting. However, holding policymakers accountable comes with its own costs as well. It takes time and effort for a voter to become informed enough about the relevant issues to assign accountability for the effects of various policies. Most voters in fact do not seem to be well informed (Somin, 2015). Furthermore, voters choose between bundles of policies; it is therefore possible that a policy failure on Big Tech will be outweighed, in voters’ minds, by a candidate’s successes in other fields.
 
17
This is a shorthand. Either behavioral asymmetry should be justified, or it should be shown why behavioral symmetry won’t be a problem.
 
18
This is not unique to proposals for more regulation. Rather, the burden arises for Balkin (and Cath et al.) in virtue of proposing a departure from the status quo. Such departures cannot be justified merely by showing problems with the status quo. They should be justified, in addition, by at least some reason why the sources of problems identified within the status quo (i.e., self-interest winning out over pursuit of justice) will cease to be sources of problems when the proposed changes are implemented. The same constraint would of course be imposed on anyone advocating for less regulation.
 
19
Another way of putting this goes as follows: according to Balkin, Big Tech should be regulated in virtue of the negative externalities produced by its use of algorithms. However, bad legislation and bad regulation likewise produce costs to third parties (e.g., tariffs raise the prices of goods for the average consumer). It is possible that the regulations governing Big Tech will also be of this nature, benefitting special interests at a cost to the average citizen, as some of the EU’s regulatory efforts have. If Balkin demurs, he must explain by what mechanism such “negative externalities” can be avoided in legislation-crafting, and why that mechanism cannot work in the market. He fails to meet this burden on both counts.
 
20
As Holcombe (2016) explains, “[The] political marketplace is a real market, and votes are the currency that is exchanged. Because it is a small group with low transaction costs, legislators are able to bargain to pass the legislation that they value most highly. Interest groups can buy their way into the low-transaction cost group by offering campaign contributions, political support from a sizeable number of voters the group represents, or other benefits for legislators.” In contrast, the electorate as a whole is a high-transaction cost group, and hence finds it more difficult, if not impossible, to bargain effectively with legislators. As a consequence, legislators (understood as rational voting power maximizers, rather than as indefatigable servants of the people) have an incentive to favor legislation promoted by the lobbyists for interest groups, rather than legislation promoting the common good (though it might be that some of the former will in fact promote the latter).
 
21
For a demonstration that having good-sounding laws is not sufficient for their just implementation, one need look no further than the European Union itself and its member states’ failures to live up to the rhetoric of the EU’s documents. This is especially visible when it comes to protecting the rights of migrants and minorities (Human Rights Watch, 2020; Kingsley & Shoumali, 2020; WeReport, 2018). The violations of human rights described in the just-cited sources have, in some cases, elicited verbal condemnations, but they have neither been stopped nor meaningfully punished by the EU’s institutions.
 
22
Interestingly, Dignam, Yeung et al., and Citron & Pasquale also assume that the main objection to their proposals is that they could stifle innovation. However, rather than take on actual empirical research that seems to show this to be the case [e.g. Grajek and Röller (2012)], they simply cite empirical work consonant with their own view.
 
23
A similar problem plagues Scherer’s (2015) proposal to have the AI industry regulated by “an agency staffed by AI specialists with relevant academic and/or industry experience.” Scherer does not consider the potential for regulatory capture, despite the proposed agency’s considerable powers. This is especially jarring since, just a few pages earlier, Scherer does engage in institutional criticism of the currently existing legal framework’s capacity to regulate AI (including a mention of misaligned incentives, though only on the part of lawyers rather than government officials). In this respect, Scherer’s paper shows an interesting similarity to the work of Wachter and Mittelstadt (2019) and Smuha (2020). All these articles criticize the capacity of currently existing legal frameworks to properly regulate (some aspect of) Big Tech, but they also, to my mind, stop short of showing how their new proposals will avoid these and other shortcomings.
 
24
I am not advocating for PoL. I am simply saying that, given our epistemic situation, it’s a principle that could govern our policy recommendations.
 
Literature
Agmon, T., & Sjögren, S. (2016). Venture capital and the inventive process. Palgrave Macmillan.
Balkin, J. M. (2017). 2016 Sidley Austin distinguished lecture on big data law and policy: The three laws of robotics in the age of big data. Ohio St. L.J., 78, 1217–1242.
Brennan, G. (2007). Economics. In R. E. Goodin, P. Pettit, & T. Pogge (Eds.), A companion to contemporary political philosophy (pp. 118–152). Blackwell.
Brennan, G., & Buchanan, J. (2008). The reason of rules. Cambridge University Press.
Brock, W. A., & Magee, S. P. (1978). The economics of special interest politics: The case of the tariff. The American Economic Review, 68(2), 246–250.
Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Wash. L. Rev., 89, 1–34.
Dignam, A. (2020). Artificial intelligence, tech corporate governance and the public interest regulatory response. Cambridge Journal of Regions, Economy and Society, 13(1), 37–54.
Dunn, W. N. (2018). Public policy analysis: An integrated approach (6th ed.). Routledge, Taylor & Francis Group.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., & Rossi, F. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Friedman, M. (1962). Capitalism and freedom. University of Chicago Press.
Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. Paper presented at the Proceedings of the 52nd Hawaii International Conference on System Sciences.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 1–22.
Hee Park, J., & Jensen, N. (2007). Electoral competition and agricultural support in OECD countries. American Journal of Political Science, 51(2), 314–329.
Holcombe, R. G. (2016). Advanced introduction to public choice. Edward Elgar Publishing.
Lessig, L. (2011). Republic, lost: How money corrupts Congress—and a plan to stop it (1st ed.). Twelve.
Loi, M., Heitz, C., & Christen, M. (2020). A comparative assessment and synthesis of twenty ethics codes on AI and big data. Paper presented at the 2020 7th Swiss Conference on Data Science (SDS).
McNamara, A., Smith, J., & Murphy-Hill, E. (2018). Does ACM’s code of ethics change ethical decision making in software development? Paper presented at the Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering.
Müller, V. (2020). Ethics of artificial intelligence and robotics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 ed.).
Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.
Samila, S., & Sorenson, O. (2011). Venture capital, entrepreneurship, and economic growth. The Review of Economics and Statistics, 93(1), 338–349.
Scherer, M. U. (2015). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. J. L. & Tech., 29, 353–400.
Smuha, N. A. (2020). Beyond a human rights-based approach to AI governance: Promise, pitfalls, plea. Philosophy & Technology, 1–14.
Somin, I. (2015). Rational ignorance. Routledge International Handbook of Ignorance Studies, 274–281.
Tullock, G. (2005). Public goods, redistribution, and rent seeking. Edward Elgar.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2), 494–620.
Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping. In E. Bayamlioğlu, I. Baraliuc, & L. Janssens (Eds.), Being profiled: Cogitas ergo sum (pp. 84–88). Amsterdam University Press.
Yeung, K., Howes, A., & Pogrebna, G. (2020). AI governance by human rights-centered design, deliberation, and oversight: An end to ethics washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford handbook of ethics of AI. Oxford University Press.
Metadata
Title
The Missing Ingredient in the Case for Regulating Big Tech
Author
Bartlomiej Chomanski
Publication date
05-06-2021
Publisher
Springer Netherlands
Published in
Minds and Machines / Issue 2/2021
Print ISSN: 0924-6495
Electronic ISSN: 1572-8641
DOI
https://doi.org/10.1007/s11023-021-09562-x
