Open Access 2020 | OriginalPaper | Chapter

3. Pluses and Minuses

Authors: Ian I. Mitroff, Rune Storesund

Published in: Techlash

Publisher: Springer International Publishing


Abstract

The ability to foresee as many as possible of the potential dangers lurking within a technology is one of the key factors in Thinking the Unthinkable. As we’ve stressed, one of the best ways of accomplishing this is by listing as many of the supposed benefits of a technology as one can and then considering all of the ways in which the exact opposite can and will occur. In other words, in what ways does a technology propose to make our lives better, and how can it systematically fail to do so?
every time [Mr. Schroepfer, Facebook’s person in charge of erasing millions of unauthorized posts] and his more than 150 engineering specialists create A.I. solutions that flag and squelch noxious material, new and dubious posts that the A.I. systems have never seen before pop up—and thus are not caught. The task is made more difficult because ‘bad activity’ is often in the eye of the beholder and humans, let alone machines, cannot agree on what that is.
Metz and Isaac [1]
The starting point is the long-standing, general aim of technology: “Technology not only magnifies the senses, but it allows us to surmount the limitations of our minds and bodies.” Technology thereby allows us to hear, see, and sense things we otherwise could not. It allows us to communicate with others over long distances. It permits us to be in touch instantly with unsurpassed numbers of others at a moment’s notice. It lets us travel long distances comfortably and safely at speeds once thought to be beyond the bounds of human possibility. It now promises to extend the power of human thought indefinitely. More portentously, it promises to extend human life endlessly, if not to defeat death altogether. It can sense when our bodies are in danger of being harmed and then offer needed protection. And, it promises to capture and respond to our deepest, most personal inner feelings and emotions.
But if technology brings great gifts and grants our miraculous wishes, it also poses equally portentous threats. If robots promise to do much of the work we find onerous and dangerous, they also threaten to relegate humans to permanent subservience and irrelevance. Artificial Intelligence, or AI, offers not only to augment our mental and emotional capabilities, but to replace us altogether. And while driverless cars may be safer in the long run, how do we cope with the millions who will lose their jobs, and with them their dignity, as a result?
Consider that the heightened ability to hear and see what we could not without technology also makes it possible to spy on our most personal and private conversations and to relay them to others without our explicit knowledge and consent; witness Alexa and Echo.
Again, a growing body of studies testifies that, far from contributing to and thus raising the self-esteem of young people and adults, Social Media lower it: the more people use them, the worse they feel about themselves. After all, users are constantly comparing themselves against the idealized portraits of others that they cannot ever hope to match. It’s a losing proposition that produces noticeable psychological damage.
Social Media are the preeminent example of technology producing the exact opposite of what was intended. The very thing that was supposed to bring us closer together is now one of the biggest factors responsible for driving us apart.
To reiterate, the key to Thinking the Unthinkable is taking every one of the proposed benefits of a technology and then showing how and why its exact opposite can occur. Given its extreme importance, we discuss in a later chapter a detailed process for accomplishing this. It involves, among other things, the simultaneous consideration of all of the factors that are essential in Thinking the Unthinkable. That is, it’s not possible to separate a discussion of the idealized properties and supposed benefits of a technology from the idealized contexts in which it is supposedly used. Any discussion of the positive benefits of a technology implicitly presupposes any number of idealized contexts or settings.
However, before we can discuss a process that allows us to consider all of the factors simultaneously, we need to discuss a special way of Systems Thinking, which is the topic of the next chapter.
But first, we want to discuss the limitations of AI. It’s one of the most important examples of why technology not only fails to achieve its desired aims, but results in the exact opposite.

The Fatal Flaws of AI

AI rests on a central premise: that all of human behavior—thought itself—can not only be captured, but reduced to sets of rules—the fancy name is algorithms. The prime contention is that by putting algorithms into computers, we will not only speed up decisions, but make better ones. It doesn’t matter if the “rules” are based on probabilities of what we and others are most likely to do. It doesn’t even matter how many of them there are. All that counts is that they can be captured and encoded into algorithms.
The preceding is not only fundamentally misleading, but dangerously false.
AI rests on other key premises as well. For instance, there is the premise that, by examining hundreds, if not thousands, of cases, i.e., by being fed huge amounts of data, so-called deep learning machines will be able to discern how humans learn. First of all, this ignores the fact that even young babies are able to learn from just a few messy cases, not hundreds of them.1 The key question is how humans, both rightly and wrongly, learn from a few cases, not many. The short answer is that they rely on “approximate rules of thumb,” or heuristics, that do not guarantee success but “work well enough.”
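To make the contrast concrete, here is a minimal sketch, in Python, of what such a rule-of-thumb learner might look like; the nearest-neighbor heuristic and the toy cases are our own hypothetical illustration, not a description of any actual system. It generalizes from three messy cases with no guarantee of success, yet often “works well enough”:

```python
# A hypothetical sketch of a "rule of thumb" learner: a single nearest
# neighbor generalizes from three messy cases. No guarantee of success,
# but it often works well enough.
def nearest_neighbor(examples, x):
    """Return the label of the training example closest to x."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(examples, key=lambda e: dist(e[0], x))[1]

# Three invented cases: (weight_kg, barks) -> species
few_cases = [((30, 1), "dog"), ((4, 0), "cat"), ((25, 1), "dog")]

print(nearest_neighbor(few_cases, (28, 1)))  # "dog": a plausible guess
print(nearest_neighbor(few_cases, (5, 0)))   # "cat": a plausible guess
print(nearest_neighbor(few_cases, (6, 1)))   # "cat": the heuristic can err
```

Three cases suffice for plausible guesses, and the same rule of thumb fails on the small barking animal; the point is precisely that heuristics trade guarantees for economy.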
Second, it ignores the fact that thoughts—the act of thinking itself—do not exist by themselves but are parts of a complex mind–body system. In short, the body is not just a “fancy or sophisticated carrying case” for the brain. Instead, the mind is distributed throughout the entire mind–body system. There is in fact a kind of “primitive brain” that surrounds the heart and is essential to its functioning. Capturing thoughts thus entails capturing, if only in part, the states of the entire mind–body system, or at the very least many of the most important interactions between mind and body. In other words, the “contexts” in which tech is used and affects us are much broader than those that are typically assumed. There are no such things as “standalones,” i.e., completely self-contained contexts and/or situations.
It also ignores the fact that the notion of “mind” is fundamentally “social.” There is no such thing as a completely isolated, self-contained, and individual mind.
Third, the idea that thoughts can be reduced to rules alone ignores the basic fact that thoughts and emotional states are inseparable. There are no thoughts without accompanying emotions and vice versa. And emotions are not subject to the same kinds of rules. Indeed, many of our emotions and thoughts are not available to consciousness. They are triggered by other emotions, events, and thoughts of which we are only dimly aware, or at least not fully. They are influenced by, and part of, our basic hopes, dreams, fears, and anxieties.
In addition, Emotional Intelligence—knowing how to pick up on and respond appropriately to the emotional cues and states of others as well as one’s own—is very different from Cognitive Intelligence.
Fourth, and extremely important in its own right, the whole notion of algorithms as collections of clear-cut, consistent rules ignores the basic fact that there is no aspect of human behavior that is not subject to differing, if not contradictory, views and opinions. Medicine is a prime example. True, more and more of medicine is evidence-based. But this doesn’t mean that doctors evaluate the “same situation” exactly alike. After all, they bring different experiences, judgments, and intuitions to bear on everything they do.
Gary Marcus and Ernest Davis put it as follows:
“…It’s the set of background assumptions, the conceptual framework, that makes possible all our thinking about the world.” Therefore, in order for AI to work, it’s essential to capture the background assumptions that are a crucial part of everything we do. Nonetheless, as they state, “Yet, few people working in A.I. are even trying to build …background assumptions into their machines…we’re not going to get sophisticated computer intelligence without it.”2
Consider one of the most contentious and problematic situations: determining who is and is not a “good, responsible” potential employee. Such judgments are historically full of innumerable ethnic, racist, and sexist biases, so that basing algorithms on “normal standard practices” is not only misleading, but highly Unethical.3
Hiring the “right person” is not just a matter of selecting someone with the “right credentials,” such as whether he or she went to the “right school,” majored in the “right subjects,” got the “right grades,” etc. It’s just as much a matter of whether one is a “good fit” with the groups with which he or she will be working. And it invariably involves whether a company needs to undergo serious Diversity Training so that it confronts its underlying biases and thereby expands its taken-for-granted notions regarding whom it needs to hire, let alone promote.4
An equally, if not more, critical case is that of health care. If one builds algorithms on historic data to determine who should receive what amounts of health care, then Black people typically fare poorly and as a result become sicker. Historically, substantially less money has been spent on the health of Blacks than on that of Whites. Thus, the amount spent on care in the past is a poor proxy for health needs. This shows that using the “right category or label” is critical to solving the “right problem.” It also shows that choosing “the right label” is a matter of “critical judgment” that only humans can exhibit, not machines.5
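The mechanism can be illustrated with a minimal sketch using purely synthetic data; the group labels, population sizes, access factor, and dollar figures below are invented assumptions of ours, not the actual data behind the studies cited. When one group has historically received less care for the same level of illness, ranking patients by past cost systematically under-selects that group for extra care:

```python
# Synthetic sketch: why "past cost" is a biased proxy label for health
# need. All groups, sizes, and factors here are hypothetical.
import random

random.seed(0)

def make_patient(group):
    need = random.uniform(0, 10)            # true illness burden
    access = 1.0 if group == "A" else 0.5   # group B historically got less care
    cost = need * access * 1000             # dollars spent on this patient so far
    return {"group": group, "need": need, "cost": cost}

patients = ([make_patient("A") for _ in range(500)]
            + [make_patient("B") for _ in range(500)])

# An algorithm trained to predict cost in effect ranks patients by cost.
top_by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:100]
top_by_need = sorted(patients, key=lambda p: p["need"], reverse=True)[:100]

def share_b(selected):
    return sum(p["group"] == "B" for p in selected) / len(selected)

print(f"Group B share when targeting by past cost: {share_b(top_by_cost):.0%}")
print(f"Group B share when targeting by true need: {share_b(top_by_need):.0%}")
# Need is identically distributed in both groups, yet targeting by past
# cost selects almost no group B patients for extra care.
```

The label is doing the damage, not the arithmetic: the algorithm faithfully predicts the biased quantity it was given.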
All data are subject to different interpretations. It’s rarely the case that one position is “completely right” and the other is “completely wrong.” Complex decisions are not like simple, canned exercises for which there are only “single right answers.” We say more about this in the next chapter.
F. Scott Fitzgerald said it best: “The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.” To put it mildly, this is a very different notion of intelligence from that which underlies current thinking about AI.
Humans are not bundles of consistency through and through, but sites of the constant ebb and flow—the enduring struggle—between opposing thoughts and emotions. Thus, if AI is truly to advance, it will have to take a radically different course. It will have not only to embrace, but to deeply incorporate Dialectical Thinking, i.e., the strongest arguments pro and con that can be made for a particular position. In other words, Dialectical AI is absolutely essential if we are truly to capture the intricacies and subtleties of human behavior and thought. We are still waiting for evidence that the AI community is ready in the slightest to do so.
As we show later, the ability to embrace Dialectical Thinking is one of the key attributes of the Socially Responsible Tech Company.
The following story, while most likely apocryphal, is one of the most powerful examples of the need for Dialectical Thinking. It shows in no uncertain terms that doubt is not only the underlying basis of Dialectical Thinking, but thereby one of the most critical attributes humans possess.
In the 1950s, at the height of the Cold War between the USA and the USSR, an Airman stationed in Alaska was seated in front of a large computer-like console. His job was to monitor it for swarms of incoming objects that would show in no uncertain terms that the USSR was about to attack the USA by means of Intercontinental Ballistic Missiles. And sure enough, as if on cue, the console sounded an alarm and stood ready to launch a counterattack, but if and only if the Airman gave his final approval. Fortunately, for some reason, the Airman in charge didn’t believe that the attack was real and thus didn’t press the big red button that would have effectively started World War III and ultimately led to the utter annihilation of both countries, if not the entire planet. The “large object coming over the horizon” that the system was not programmed to recognize was the Moon!
The fate of humanity literally rested on the doubts—the nagging gut feelings—and good judgment of one human being. Why would we ever trust any machine to do the same? We always need intelligent humans to oversee our supposedly “brainy learning machines.” Show us the autonomous AI system that even comes close to incorporating “doubt,” let alone “gut feelings.”
On the other hand, why should something as critical as the launching of a war be left to the judgment of one person? It surely requires the deliberation of a team schooled in the perils of Group Think, and thereby in how to avoid it.
Finally, there is the matter of the broader harm done to democracy as a whole. As Stuart Russell, a computer scientist at UC Berkeley, puts it:
[There’s] a really simple example, where [an] algorithm isn’t particularly intelligent but has an impact on a global scale. The problem is that the objective of maximizing [getting people to click on an app] is the wrong objective because the company—Facebook or whatever—is not paying for the externality of destroying democracy. They think they’re maximizing profit, and nothing else matters. Meanwhile, from the point [of view] of the rest of us, our democracy is ruined and our society falls apart, which is not what we want…AI systems that are maximizing in a very single-minded way [a] single objective, end up having these effects that are extremely harmful.6
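Russell’s point can be made concrete with a toy sketch; the content items, click counts, harm scores, and penalty weight below are all invented for illustration. An optimizer told to maximize clicks alone ranks content very differently from one whose objective prices in the externality:

```python
# A toy model of a single-minded objective: costs it was never asked to
# count are invisible to it. All names and numbers are invented.
items = [
    {"name": "outrage piece",   "clicks": 9.0, "harm": 8.0},
    {"name": "tabloid gossip",  "clicks": 7.0, "harm": 3.0},
    {"name": "local reporting", "clicks": 4.0, "harm": 0.5},
]

def rank(objective):
    """Order content from best to worst under the given objective."""
    return [i["name"] for i in sorted(items, key=objective, reverse=True)]

# Maximizing clicks and nothing else: the harm never enters the decision.
print(rank(lambda i: i["clicks"]))
# -> ['outrage piece', 'tabloid gossip', 'local reporting']

# Pricing in the externality (the weight of 2.0 is itself a human value
# judgment the single objective never had to make) reverses the ranking.
print(rank(lambda i: i["clicks"] - 2.0 * i["harm"]))
# -> ['local reporting', 'tabloid gossip', 'outrage piece']
```

Nothing in the first objective is “wrong” on its own terms; the harm simply is not in it, which is exactly the externality Russell describes.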

Closing Remarks

One of the biggest factors responsible for the Unthinkable is the failure to consider that all technologies can be used—abused and misused—in ways that were not intended, let alone considered at all, by their developers. Latent defects, certainly fatal flaws, can and will be exploited. As a result, even the benign properties of a technology can be taken advantage of for nefarious purposes. For this reason, we talk later about a deliberate process that is the best we know of for thinking about how technology can be exploited.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. See Gopnik [2].
2. Marcus and Davis [3].
3. Ajunwa [4].
4. Metz [5].
5. Benjamin [6].
6. Worthington [7].
Literature
1. Metz C, Isaac M (2019) It’s never going to go to zero. The New York Times, Sunday, 19 May 2019, p BU6
3. Marcus G, Davis E (2019) Build A.I. we can trust. The New York Times, Saturday, 7 Sept 2019, p A23
4. Ajunwa I (2019) Beware of automated hiring. The New York Times, Wednesday, 9 Oct 2019, p A29
5. Metz C (2019) AI learns lots from us. Our biases too. The San Francisco Chronicle, Saturday, 16 Nov 2019, pp D1–D2
6. Benjamin R (2019) Assessing risk, automating racism. Science 366:421–422
7. Worthington L (2019) A race against the machine: Stuart Russell wants to harness AI before it’s too late. Californian, Fall 2019, p 36
Metadata
Title
Pluses and Minuses
Authors
Ian I. Mitroff
Rune Storesund
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-43279-9_3