Published by De Gruyter Oldenbourg, August 15, 2017

Artificial Communication? The Production of Contingency by Algorithms

Artifizielle Kommunikation? Die Produktion von Kontingenz durch Algorithmen
  • Elena Esposito

    Elena Esposito is Professor of Sociology at Bielefeld University (Germany) and at the University of Modena-Reggio Emilia (Italy). She works with the theory of social systems, especially on issues related to the social management of time, including memory and forgetting, fashion and transience, probability calculus, fiction, and the use of time in finance. Her current research projects focus on the possibility and forms of forgetting on the web, on a sociology of algorithms, and on the proliferation of rankings and ratings for the management of information.

    She has published widely on the theory of social systems, media theory, memory theory, and the sociology of financial markets. Among her works: “Algorithmic Memory and the Right to Be Forgotten on the Web,” Big Data & Society, January–June 2017; “Economic Circularities and Second-Order Observation: The Reality of Ratings,” Sociologica 2/2013, doi: 10.2383/74851; “The Structures of Uncertainty: Performativity and Unpredictability in Economic Operations,” Economy & Society 42 (2013): 102–129.


Abstract

Discourse about smart algorithms and digital social agents still refers primarily to the construction of artificial intelligence that reproduces the faculties of individuals. Recent developments, however, show that algorithms are more efficient when they abandon this goal and try instead to reproduce the ability to communicate. Algorithms that do not “think” like people can affect the ability to obtain and process information in society. Referring to the concept of communication in Niklas Luhmann’s theory of social systems, this paper critically reconstructs the debate on the computational turn of big data as the artificial reproduction not of intelligence but of communication. Self-learning algorithms parasitically take advantage of the contribution of web users – whether deliberate or unwitting – to a “virtual double contingency.” This provides society with information that is not part of the thoughts of anyone, but, nevertheless, enters the communication circuit and raises its complexity. The concept of communication should be reconsidered to take account of these developments, including (or not) the possibility of communicating with algorithms.

Zusammenfassung

Discourses and debates about “smart” algorithms and digital social agents refer predominantly to constructions of artificial intelligence that reproduce the cognitive faculties of individual actors. Current research shows, however, that the algorithms that succeed are those that do not pursue this goal but orient themselves instead toward actors’ communicative abilities. Algorithms that do not reproduce the faculties of individuals can improve the possibilities and capacities for processing information. Drawing on the concept of communication in Niklas Luhmann’s theory of social systems, this paper reconstructs this shift in the artificial intelligence of Big Data as an artificial reproduction of communication rather than of intelligence. Self-learning algorithms make use of the (reflected or unreflected) engagement of users with virtual double contingency. This provides society with information that rests not on individual intelligences but on communicative circuits, whereby communicative complexity can be increased. For this reason, and against the background of these developments, the concept of communication must be re-examined as to whether or not it can accommodate communication with algorithms.

1 A sociology of algorithms

Algorithms are social agents. Their presence and role are now central and indispensable in many sectors of society, both as tools to do things (such as machines) and as communicative partners. Algorithms are involved in communication not only on the web, where the active role of bots is now taken for granted, but also (explicitly or not) in more traditional forms, such as print communication and even voice communication.

Precise estimates are difficult (Ferrara et al. 2016), but bots apparently generate approximately 50% of online traffic.[1] Millions of Twitter users are bots,[2] more than 70% of trading on Wall Street happens via automatic programs, and at least 40% of Wikipedia editing is carried out by bots. Highly automated accounts generated close to 25% of all Twitter traffic about the 2016 U.S. presidential debate (Kollanyi, Howard, and Woolley 2016). That Google and Facebook are driven by algorithms is well known, with the paradoxical consequence that the “discovery” that human operators guide the selection of news in Facebook Trending Topics was perceived as a scandal (Gillespie 2016). Similar systems are also used in personalized communication: in Gmail, the Smart Reply feature recognizes emails that need responses and generates perfectly adequate natural-language answers on the fly.[3] Spotify’s most popular compilation, Discover Weekly, is entirely assembled by an algorithm – as is Release Radar, the hyper-personalized playlist of new tracks (Pierce 2016b).

In these and many other cases web surfers communicate by means of algorithms, and often this also happens when we read texts in the traditional form of newspaper articles or books. Companies like Narrative Science[4] and Automated Insights[5] have developed algorithms to produce texts that are indistinguishable from those written by a human author: newspaper articles, brochures for commercial products, textbooks, and more. Philip Parker, professor at INSEAD in Fontainebleau, patented a method to automatically produce perfectly plausible and informative books, with more than 100,000 titles already available on Amazon.com. Robo-journalism is regularly used by the Associated Press and by companies like Samsung, Yahoo, and Comcast (Podolny 2015).

Even in voice communication, millions of individuals regularly interact with digital personal assistants like Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, IBM’s Watson, or Google Now, which use natural language interfaces to answer new questions, manage the calendar, and offer individual suggestions and recommendations. In many cases, these programs seem to know the users better than their human partners do and often better than the users know themselves (Youyou, Kosinski & Stillwell 2015), anticipating their needs and demands even before they emerge.

The communicative role of algorithms is clearly a massive social phenomenon with many complex consequences. What does sociology have to offer on this topic? Marres and Gerlitz (2017) criticize the social deficits of computational technology: in the effort to understand these developments and to provide an interpretive framework, the social sciences, and sociology in particular, occupy a curiously secondary position. Certainly there is a lot of sociological work on the social consequences of the spread of digitization, but it often follows developments that have already taken place and become well established without its contribution. The social sciences are committed to identifying the shortcomings and dangers of technology. In highlighting the ethical and political aspects of technology and its impact on public opinion or social order, they focus on issues such as threats to privacy, risks of job loss, the concentration of power in a few large corporations, intergenerational or international inequalities (the digital divide), and the exploitation of underpaid workers (Mechanical Turk) or even of all users (“If you are not paying for it, you’re not the customer; you’re the product being sold”).

Why social deficits? The topics covered by the social sciences are extremely relevant and their contribution very useful, but this does not exhaust the role that sociology could (and should) play in the development of digitization. The sociological perspective is not involved in designing algorithms, which are programmed without adequate consideration of social and communicative aspects. The dominant reference in digitization is still to individuals, as in Artificial Intelligence (Nilsson 2010): here the goal is to artificially reproduce the abilities of human beings, possibly integrating cognitive skills with intentionality or with subconscious aspects identified by philosophical reflection (after Dreyfus 1972 or Searle 1980). From a sociological perspective, however, the primary reference is not to individual psychological processes but to communication. After all, what is interesting in the interaction with algorithms is not what happens in the machine’s artificial brain, but what the machine tells its users and the consequences of this. The problem is not that the machine is able to think but that it is able to communicate. The reference to communication and social context is the central issue, which should primarily inform the programming of effective social algorithms.[6] Sociology is the discipline equipped to deal with this.

This is especially urgent now, when algorithms massively and autonomously play a role in communication. Compared with the reflection on intelligent machines in the 1970s and 1980s, many things have changed. First of all, computers are not isolated but always interconnected, and, furthermore, this connection occurs by means of Web 2.0, which includes previously unthinkable data sources. The participatory web invites users to generate their own video, audio, and textual contents, which they share with other users in blogs, social media, wikis, and on countless media sites. This multiplicity of spontaneous and uncontrolled contents, with their metadata, adds to institutional content and to the data provided by pervasive sensors (the Internet of Things) to generate the increasing mass (or cloud) of data available in digital format.

Availability of data, however, is not enough. An excess of data has always been a difficulty. But recent programming techniques are able to obtain and process data in ways that are vastly more efficient and can also turn into advantages the problems that until now have hindered the development of Artificial Intelligence projects, such as the vagueness and messiness of data or the unpredictable variety of contexts. Self-learning algorithms are able to work efficiently with data that not only are very numerous and complex, but also lack a structure recognizable and understandable to human logic. Hence the recent (and far from clear) discourse on Big Data (Crawford, Miltner & Gray 2014), which departs more and more clearly from models analogous to individual mental processes. Data are social in their origin, and the processes elaborating them are often incomprehensible to human observers.

To address these developments, I argue that we need an approach referring not to intelligence but directly to communication. This requires a powerful and flexible concept of communication, sufficiently independent of individual psychological processes and able to take into account the cases where the partner is not (or cannot be) a human being. Such a concept must refer to society, not to individuals or groups of individuals.[7] I propose the theory of social systems in Niklas Luhmann’s formulation as an adequate framework, with sufficient complexity to deal with these issues – although Luhmann himself did not work specifically on communication with algorithms.

In my argument I will first present the alleged Big Data revolution, which I attribute to artificial reproduction not of intelligence but of communication. Recent “smart” algorithms, I will show, are efficient not because they have learned to work like human intelligence, but because they have abandoned the attempt and the ambition to do so and are oriented directly toward the forms of communication. I then reconstruct interaction with algorithms in terms of Luhmann’s theory, where the central point for the definition of communication is not the sharing of thoughts among participants but the presence of a situation of double contingency. From this perspective I reconstruct communication with a partner who does not think, i.e. a machine or any device capable of producing surprising information, and introduce the notion of virtual contingency to describe the situation in which a communication partner is confronted with his or her own contingency, revised and reflected in such a way as to simulate the conditions of communication. The real novelty of communication with self-learning algorithms, however, goes further: it is an unprecedented condition in which machines parasitically take advantage of user participation on the web to develop their own ability to communicate competently and informatively. These machines develop their own contingency, which I describe in the final part of the paper referring to the interplay between human capabilities and the “creativity” of algorithms. This can be observed in AlphaGo, the computing system programmed to play the ancient game of Go. In the conclusion, the question as to whether the concept of communication can be extended to interaction with algorithms or whether a different concept is needed remains open, but I claim that in any case discussion on the subject of smart algorithms should refer to communicative double contingency rather than to the psychological processes of human beings.

2 Information without understanding

Discourses about Big Data are now ubiquitous but still very opaque. What is the real point? The practical achievements are amazing: today machine learning systems are able to recognize images never encountered before, carry on conversations about unknown topics, analyze medical data and formulate diagnoses, as well as anticipate the behavior, the reasoning, and also the wishes of users. On the basis of Big Data we can (or will soon be able to) build self-driving cars, translate live online phone calls from one language to another, and use digital assistants that deliver the information we need at any given moment.

As several observers claim (e.g. Kitchin 2014), however, at the theoretical level it is not yet clear whether and how Big Data is leading to a “computational turn in thought and research” (boyd & Crawford 2012: 663), changing the very idea of data, information, and, finally, of science and knowledge (Wagner-Pacifici et al. 2015). It cannot be just a matter of quantity (bigger data), unless we can show where and how quantity becomes quality.

The premise is the process of “datafication” (Mayer-Schönberger & Cukier 2013: 73 ff.), which allows us to express more and more phenomena in a quantified format that can be analyzed and processed. Algorithms derive data from the information available on the web (texts, documents, videos, blogs, files of all types) and from the information provided by users: queries, recommendations, comments, chats. They are also able to extract data from information on information: the metadata that describe the content and properties of each document, such as title, creator, subject, description, publisher, contributors, type, format, identifier, source, language, and much more. Social media also allow us to datafy emotional and relational aspects such as feelings, moods, and relationships between people; and the Internet of Things can extract data from material entities like physical objects and spatial locations. We have far more data than ever before. Moreover, and most importantly, algorithms are able to use all these data for a variety of secondary uses largely independent of the intent or of the original context for which they were produced.[8]
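A minimal sketch may make this process more concrete. Assuming an invented post with invented field names (this is not any platform’s actual schema), datafication turns content, metadata, and behavioral traces into a quantified record that is open to secondary uses far from its original context:

```python
# Illustrative sketch only: invented post, invented field names.
from datetime import datetime

post = {
    "text": "Great concert last night!",            # user-generated content
    "metadata": {                                    # information about information
        "creator": "user_4711",
        "language": "en",
        "created": datetime(2017, 3, 14, 22, 5),
        "format": "text/plain",
    },
    "traces": {"likes": 12, "shares": 3, "geotag": (48.1, 11.6)},
}

def datafy(p):
    """Flatten a post into a quantified record, detached from its original context."""
    return {
        "creator": p["metadata"]["creator"],
        "hour_of_day": p["metadata"]["created"].hour,
        "text_length": len(p["text"]),
        "engagement": p["traces"]["likes"] + p["traces"]["shares"],
        "latitude": p["traces"]["geotag"][0],
        "longitude": p["traces"]["geotag"][1],
    }

print(datafy(post))   # ready for aggregation with millions of similar records
```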

The consequence, according to some observers (first of all Chris Anderson 2008, in an article that ignited a huge debate), is that scientific reasoning as we know it, based on hypotheses to be tested and on the identification of causal links, is becoming obsolete. When you have access to all the data and enough computational power to analyze them, hypotheses and explanations are no longer necessary. You simply go and look at the results. These results are not the conclusions of reasoning, but simply the identification of forms and patterns: the discovery of correlations that disclose the meaning and the consequences of a phenomenon, regardless of any theory. This led Chris Anderson to the widely quoted (and widely criticized) statement that “with enough data, the numbers speak for themselves” (Anderson 2008).

According to this approach, when you can access all data about a phenomenon (the statistical universe), there is also no need for sampling and probabilistic procedures. You can process the universe itself (n = all), looking for the patterns computers extract from the ocean of data (Kelly 2008). Statistical procedures are seen as related to our limited computing capacity, which forces us to use simplifications and shortcuts. Now that computing capacity is virtually unlimited, this would no longer be necessary, as is also the case with hypotheses and clever demonstrations. “Correlation supersedes causation” (Anderson 2008). There is no need to know “why” you get a given result, only “what” it is (Mayer-Schönberger & Cukier 2013: 7).
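The logic of “what, not why” can be illustrated with a minimal sketch, in which the column names and the built-in correlation are invented purely for the demonstration: the program scans a table of quantified behavior for strong pairwise correlations without any prior hypothesis and simply reports the patterns it finds, with no causal story attached:

```python
# Illustrative sketch: find patterns first, ask (or don't ask) "why" later.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
columns = {
    "flu_searches": rng.normal(size=n),
    "umbrella_sales": rng.normal(size=n),
}
# one correlated column is planted here so that the scan has something to find
columns["pharmacy_visits"] = 0.8 * columns["flu_searches"] + 0.2 * rng.normal(size=n)

names = list(columns)
corr = np.corrcoef(np.vstack([columns[c] for c in names]))

pairs = [(names[i], names[j], corr[i, j])
         for i in range(len(names)) for j in range(i + 1, len(names))]
for a, b, r in sorted(pairs, key=lambda p: -abs(p[2])):
    print(f"{a} ~ {b}: r = {r:+.2f}")      # the "what", with no "why" attached
```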

This interpretation is extremely controversial and has been criticized on a number of counts: data fundamentalism (Crawford, Miltner & Gray 2014), data fetishism (Sharon & Zandbergen 2016), mythology and apophenia (seeing patterns where none exist: boyd & Crawford 2012: 668), reductionism (Kitchin 2014), opacity (Pasquale 2015), confusion between correlation and causality (Cowls & Schroeder 2015; Floridi et al. 2016: 5), hidden bias (Gillespie 2014), and more. Here, however, I am not interested in taking a position in this debate, but in asking why hypotheses about a radically new way of making sense of data are emerging right now and on what basis this is happening. What aspects of the development of digitization suggest a form of information processing fundamentally different from scientific reasoning and independent of its structures?

The protagonists in this alleged revolution are algorithms (Cardon 2015), whose advantage has always been that they do not require “creative” thought in their execution (Davis 1958: xv). In algorithms, and in the digital management of data that relies on them, the processing and mapping of data have nothing to do with understanding – indeed, in many cases the claim that algorithms understand would be quite an obstacle. The machine has other ways to test the correctness of procedures. In the field of Big Data a certain “messiness” is a positive factor (Mayer-Schönberger & Cukier 2013: 33): imprecision and errors make the working of algorithms more flexible, and are neutralized by the increase in data. When the number of elements to be analyzed grows (to today’s incredible levels of petabytes and zettabytes), not only does performance not get worse, but rather it gradually becomes more precise and reliable – though less and less comprehensible (Burrell 2016).
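How sheer scale can neutralize messiness can be shown with a toy simulation (all numbers are invented for the demonstration): individual observations are imprecise and a share of them is plainly wrong, yet the aggregate estimate becomes more stable and reliable as the number of data points grows:

```python
# Toy demonstration: noisy and partly corrupted data, robustly aggregated.
import numpy as np

rng = np.random.default_rng(42)
true_value = 3.0

for n in (100, 10_000, 1_000_000):
    clean = rng.normal(true_value, 1.0, size=n)          # imprecise measurements
    garbage = rng.uniform(-10, 10, size=n // 20)         # ~5% of entries are junk
    messy = np.concatenate([clean, garbage])
    print(f"n = {len(messy):>9,}   estimate = {np.median(messy):.3f}")
```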

3 Artificial Communication

The communicative relevance of Big Data is a consequence: we are facing a means of processing data (and managing information) that is different from human information processing and understanding – and this is the root of the success of these technologies. Just as men were first able to fly when they abandoned the idea of building machines that flap their wings like birds,[9] digital information processing only managed to achieve the results that we see today when it abandoned the ambition to reproduce in digital form the processes of the human mind. Since they do not try to imitate our consciousness, algorithms have become more and more able to act as competent communication partners, responding appropriately to our requests and providing information that no human mind ever developed and that no human mind could reconstruct.[10]

This is evident in practice but not always in theory. The metaphors used in the field of Big Data still retain reference to the human mind and its processes. Indeed, the idea is widespread that the recent procedures of “deep learning” are so effective because they are based on neural networks reproducing the functioning of the human brain. As most researchers admit (Goodfellow et al. 2016: 15; Wolchover 2014), however, we still know very little about the working of our brain, which makes the analogy quite curious: it can be seen as an orientation toward a lack of knowledge. If the machines no longer try to understand meanings as the human mind does, can we find a different, more fitting metaphor?

The recent approach of Big Data is actually very different from the models of Artificial Intelligence (AI) of the 1970s and 1980s, which aimed, by imitation or by analogy (“strong” and “weak” AI), at reproducing with a machine the processes of human intelligence. Now this is no longer what the systems do, and some designers declare it explicitly: “We do not try and copy intelligence” (Solon 2012) – that would be too heavy a burden. Translation programs do not try to understand the documents, and their designers do not rely on any theory of language (Boellstorff 2013). Algorithms translate texts from Chinese without knowing Chinese, and the programmers do not know it either. Spell checkers can correct typographical errors in any language because they do not know the languages or their (always different) spelling rules. Digital assistants operate with words without understanding what the words mean, and text-producing algorithms “don’t reason like people in order to write like people” (Hammond 2015: 7). Examples can be multiplied from all areas in which algorithms are most successful. Algorithms competing with human players in chess, poker, and Go do not have any knowledge of the games or of the subtleties of human strategies (Silver & Hassabis 2016).[11] Recommendation programs using collaborative filtering know absolutely nothing about the movies, songs or books they suggest, yet can operate as reliable tastemakers (Grossman 2010; Kitchin 2014: 4). Computer-based personality judgments work “automatically and without involving human socio-cognitive skills” (Youyou, Kosinski & Stillwell 2015: 1036).
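How a program can act as a “reliable tastemaker” while knowing nothing about the items it suggests can be made concrete with a minimal sketch of user-based collaborative filtering (users, items, and ratings are invented): the recommendation is computed entirely from the pattern of ratings, never from what the items actually are:

```python
# Illustrative sketch: recommendations from the rating pattern alone.
import numpy as np

users = ["ann", "bob", "eva", "joe"]
items = ["item_a", "item_b", "item_c", "item_d"]
R = np.array([                  # rows = users, columns = items, 0 = not rated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def recommend(user, ratings):
    """Suggest an unrated item, scored via users with a similar rating pattern."""
    sims = np.zeros(len(ratings))
    for other in range(len(ratings)):
        if other == user:
            continue
        mask = (ratings[user] > 0) & (ratings[other] > 0)     # co-rated items only
        if mask.any():
            sims[other] = float(ratings[user, mask] @ ratings[other, mask])
    scores = sims @ ratings                                    # similarity-weighted votes
    scores[ratings[user] > 0] = -np.inf                        # only unseen items
    return items[int(np.argmax(scores))]

print(recommend(users.index("ann"), R))   # prints "item_c", derived from ratings only
```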

One could say – and this is the idea that I propose here – that what these programs reproduce is not intelligence but rather communication. What makes algorithms socially relevant and useful is their ability to act as partners in communication that produces and circulates information, independently of intelligence. Can we say that the web does not work with Artificial Intelligence but with a kind of Artificial Communication which provides our society with unforeseen and unpredictable information?[12] Maybe society as a whole becomes “smarter” not because it reproduces intelligence artificially, but because it creates a new form of communication using data in a different way (Esposito 2013).

That the focus of the web is on communication rather than on intelligence is also confirmed by the rampant success of social media, which was not foreseen in any model of their evolution. The web today is characterized by contacts, links, tweets, and likes more than by meaningful connections between content and between sites (Rogers 2013: 155; Vis 2013): it is driven by communication, not by human understanding.[13] Every link (every act of communicative behavior) is treated as a like, and “liking” and “being like” have also been equated (Seaver 2012). Everything that happens on the web becomes a fact and is used as a fact, having consequences and producing information.

4 Communication and thought

If we move from referring to (artificial) consciousness to referring to (artificial) communication,[14] however, we must ask different questions. What kind of communication is mediated by the web? Does it still make sense to talk of communication when data processing is performed by a machine which does not understand the communicated contents? Is it still communication, and with whom? Obviously, the answers to these questions depend on the concept of communication, which should be sufficiently precise and powerful to cover all these cases.

Here we can see the advantages of Luhmann’s theory, and the reason why I think that it is particularly appropriate to deal with the innovative aspects of digital communication. Most theories of communication assume that for communication to come about, the mental processes of the participants must converge on some common content. The partners must share the same thought, or at least part of it, whether they agree about it or not. Even when theories do not require the identity of information (minus noise) in the sense of Shannon and Weaver and of transmission models, at least they expect some identity in decoding meaning or interpretation. In the interaction with machines, however, we are dealing with a situation in which the communication partner is an algorithm that does not understand the content, the meaning, or the interpretations, and works not despite, but because of this. The users cannot share any content with their interlocutor because the interlocutor does not have access to any content. At issue here is whether communication comes about or not. Is this always an “aberrant” situation?[15] How can one keep some control over the ongoing processes and adequately describe what is happening?

When we are dealing with a partner who does not think, the concept of communication in the theory of social systems has the great advantage of not being based on psychological content and not requiring any sharing of thoughts among the participants. Communication is defined starting not from the source, but from the receiver, who can derive information different from what the utterer had in mind.[16] According to Luhmann (1984: 193 ff.), communication exists not when somebody says something,[17] but when somebody realizes that someone else said something. You can write entire books and make elaborate speeches, but if no one reads or listens, it is not plausible to think that communication has come about. However, if a receiver understands information that (according to him or her) someone meant to utter, communication has taken place – whatever the information and whatever the source had (or did not have) in mind. Communication is thus defined as a unit of three types of selection: of information, of utterance (Mitteilung), and of understanding (ibid: 196).[18]

The power and improbability of this notion of communication are related to the fact (which is fundamental for our focus on algorithms) that it does not include the thoughts of the participants, hence in principle could also involve participants who do not think (such as algorithms). The fact that communication is independent of thought, however, does not mean that communication can proceed without the participation of thinking people. If no one listens and no one participates, communication doesn’t take place. Communication requires participants who think; nevertheless, it is not dependent on or made up of their thoughts. Conversely, you can know the thoughts of participants without knowing the meaning of the ongoing communication.

The result is the paradox, often difficult to accept, of communication’s total dependence on, and total independence from, consciousness, i.e. from the thoughts of participants (Luhmann 2002: 273).[19] In order for information to enter the communication circuit, an utterance by someone must be understood. Natural phenomena do not induce communication if they are not observed and reported by someone. Layers of rock are communicatively informative only when a geologist who can interpret them speaks about them in class or writes an article (communicates) about them and someone listens to his or her communication or reads it. The same applies to bodies (a disease does not communicate if there isn’t someone interpreting and communicating the symptoms) and to machines. In all these cases, one communicates with the utterer, not with rocks or with bodies. In this sense, communication is totally dependent on the presence of consciousness, which must not only develop a thought, but also must be motivated to communicate it and to pay attention to what is being said / written (which is not at all obvious). At the same time communication remains independent of individual thoughts, because those who read the article by the geologist don’t know his or her thoughts and may understand the text in a different way than intended.[20] On the basis of your individual perspective and your background, you can draw from a communicative event information that the utterer did not have in mind and perhaps doesn’t even know, yet which is a result of the communicative event. And another person may understand the communication in a different way. Basically every bit of information is different for every participant because we each understand everything from our idiosyncratic point of view (von Foerster 1970). Nevertheless we communicate – indeed, we communicate precisely because of this.

Even if it does not consist of people and of their thoughts, communication as we know it so far (also as distant communication conveyed by technologies like print or television) normally requires the participation of the consciousness of at least two persons who address their thoughts to it. There must be someone (or several people) who for some reason listen(s) / read(s) / watch(es) that someone else for some reason utters something (Luhmann 1988). This distinguishes communication from simple perception, including the perception of others and of their behavior. We get a lot of information by watching (or otherwise analyzing) not only objects and living beings, but also the appearance and the behavior of humans; we study plants and stones, machines and bodies; but we do not communicate with them. Communication comes about when the observer not only learns something but also knows that someone is purposely saying (or writing, or somehow conveying) this something, i.e. when he or she not only gets information but also knows that someone wants to convey it.

This cannot be taken for granted, because everyone can turn their attention wherever they want, and not every observation is a communication. Since Parsons, sociology has spoken of a condition of double contingency[21] to indicate the very specific situation in which both the receiver and the source, who can always turn their attention elsewhere,[22] each refer to the contingency of the other. Contingency is double not simply because there are two contingent participants, but because each of them decides what to do (or select) depending on what the other is doing (selecting), and both know this.[23] Double contingency as reflected contingency is the defining condition of any communicative event.

5 Communicating with a partner who does not think

What if one participant is an algorithm that does not think, does not intend, and does not have expectations? What happens to double contingency? If we still want to talk of communication we must include the case in which there is only one person, facing a smart algorithm that can participate in the communication. But in order to be a communication partner this algorithm must operate differently from a machine or a watch, from which we get information but with which we do not communicate. Where is the difference? Are the defining elements of communication still present? Do smart algorithms perform as a communicative partner thus providing the equivalent of double contingency?

The entire issue of communication with machines – and, if you will, what remains of the Turing test (Turing 1950) – depends on the answers to these questions. What matters is not whether the person is or is not aware that they are dealing with a machine, because this now happens every day and usually is not relevant. Today we all communicate with bots without knowing it (in online services, video games, social media), and even when we know it, as with personal assistants, normally we do not care.[24] What matters is whether the interaction with the machine has the features of communication with a contingent, autonomous partner. Otherwise, it is a form of perception of objects in the environment, which can be extremely complex and informative but has different assumptions and consequences. For example, one can be interested in knowing how a machine-object (e.g. a watch) works and for what purposes, but one does not get angry with the watch for running late, nor does one care about understanding what it intends.[25] And above all one does not use its indeterminacy to structure one’s own, as happens in communication (which is a very important point for software programming strategy).

In principle it cannot be excluded that interaction with algorithms is communication, but this must be specified. As we saw above, in a definition according to systems theory communication does not consist of the thoughts of the participants, so theoretically it can also include participants who do not communicate, under the condition that the recipient thinks they do.[26] It is only required that the unity of information, utterance, and understanding is accomplished, i.e. that the recipient understands specific information related to the communicative intention of the counterparty in that event. Not only does the recipient understand the information, he or she also knows (or thinks) that it was uttered by the partner, and that it could be different (contingent). For example, if someone waves their hand to chase away a fly but someone watching them thinks that they wanted to say good-bye, communication comes about, even if the alleged source had something else in mind. Or if a reader thinks that the shopping list written by Montale, who only wanted to write a memo and did not intend to communicate anything to anyone, is a poem and interprets it and comments on it in this sense, a communication will have come about in the world and have consequences, no matter what Montale may have meant. The thoughts of the utterer are not part of the communication, and the receiver cannot access them anyway. If some information related to his or her behavior is understood and produces further communication (if the receiver resumes contact with the person who greeted him or her or talks with others about their meeting, or if one writes an article about Montale’s poem), communication has taken place.

Although erroneous from the point of view of the source, such borderline cases involve two human interlocutors, one of whom may not be thinking about the ongoing communication, but, importantly, that person is thinking. The case in which the partner to whom the communication is attributed[27] is not a human being, does not operate on the basis of thoughts, and is known by its counterpart not to do so is pointedly different. Can we still speak of communication? How must a machine behave in order to be a communication partner?

There are precedents. Luhmann routinely communicated with a nonhuman interlocutor, as he claims in the article describing his communication with his much-discussed box of file-cards (Zettelkasten: Luhmann 1981). But this was a file-box built up over many decades on the basis of a complex architecture of links and references. Actually, for communication to come about it is not enough that the file-box provides the information that the user recorded years before and now cannot remember. When you reflect on your thoughts you do not communicate with yourself, not even if the thoughts are reproduced at a later date (Luhmann 1985). There is no double contingency and no production of specific information in the communication act. But Luhmann’s Zettelkasten was structured in such a complex way that it could produce authentic surprises and did not simply act as a container (Behälter), allowing the author to retrieve what he once put in it. The information “produced” in the act of communication was the result of a query (Anfrage), which activated the internal network of references, and it was different from what had been stored by Luhmann in his notes (Luhmann 1981: 59). Of course, the archive is not contingent in the sense of autonomously deciding what to do and not to do; yet it is perceived by the user as unpredictable, informative, and reacting to the specific requests of its partner. The answers Luhmann got as a result of his query did not exist before his request. In such cases the added value of communication is present since, as Luhmann himself experienced, the file-box acts as a communication partner.[28] Communication has occurred although no one would think of the archive as a person.[29]

In Luhmann’s reflections in the article mentioned, this form of communication is not particularly problematic.[30] In most cases there are actually two or more people participating in communication, and the reduction to a single individual is the exception. Cases such as these, in which the recipient mistakenly attributes to the source the intention to communicate, must be rare, because beyond a certain threshold it is very difficult to coordinate such cases of one-sided communication. Exceptions notoriously prove the rule, but they must remain exceptions. Today, however, with more widespread communication via algorithms, this kind of case seems to occur more often and in a much more complex way. Think about digital personal assistants like Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, etc. or about the many cases of conversations with social bots. Through interaction with algorithms, users get a lot of information that, in many cases, did not exist before they formulated their query and is different from what other human beings entered in the sources and databases. Are these users then communicating with algorithms?

6 Virtual contingency

Algorithms are not human and do not want to be human. But the discriminating factor, as we saw, is not whether the utterer is a person but whether there is double contingency, which has up to now normally required the participation of two people. The question we must address is not whether algorithms are people and not even whether they are perceived as people, but whether in the interaction with algorithms a condition of double contingency arises in which each partner is oriented towards the indeterminacy of his or her counterparty and specific information is produced.[31] We must ask whether and how algorithms can become contingent and hence reflect the contingency of the users, and how this contingency is controlled in the communication process.

Contingency means that there are open possibilities, therefore selection and a certain level of uncertainty. Algorithms by definition do not know uncertainty, because they proceed without making decisions and without creativity, merely following the instructions that program their behavior. This is their strength and the reason why they can operate efficiently and reliably. Algorithms and traditional machines can be informative, like a watch that tells us something we did not know before (the time), but the information is not uncertain or unpredictable. Different watches all indicate the same time if they work properly. As von Foerster (1985: 129) observed, if a traditional machine becomes unpredictable we do not think that it is creative or original, we think that it is broken.

The dilemma faced by the designers of smart algorithms, on the contrary, is how to build machines that are surprising but useful, i.e. how to program and control the production of appropriate and informative surprises. A smart algorithm that works perfectly (i.e. is not broken) produces a contingent outcome. Cozmo, a real-life toy robot based on a series of machine-learning algorithms,[32] is “programmed to be unpredictable” (Pierce 2016a), but also to be responsive and fun. Social algorithms not only provide information but respond appropriately to user requests, producing new and relevant information. The paradoxical purpose of programming intelligent algorithms is to build unpredictable machines in a controlled way. The goal is to control the lack of control (Esposito 1997).
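What “controlling the lack of control” can mean in programming terms is sketched below, with invented responses and scores: the surprise comes from a weighted random choice, but only among answers the program has already scored as appropriate to the situation, so unpredictability stays within programmed bounds:

```python
# Illustrative sketch: programmed, bounded unpredictability.
import random

def surprising_but_appropriate(candidates, threshold=0.5):
    """Pick a response at random, weighted by score, among the fitting ones only."""
    fitting = [(resp, score) for resp, score in candidates if score >= threshold]
    responses = [resp for resp, _ in fitting]
    weights = [score for _, score in fitting]
    return random.choices(responses, weights=weights, k=1)[0]

candidates = [
    ("Shall we play again?", 0.9),
    ("I missed you!", 0.8),
    ("Beep.", 0.2),               # scored as inappropriate, never selected
]
print(surprising_but_appropriate(candidates))
```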

In some cases the contingency of the machine is simply the projection of the contingency of the user. This happens with the robotic toys studied by Sherry Turkle (2011), which function as communication partners because children or elderly people interacting with them project onto them their own contingency. This has always happened with dolls and puppets, with which children play as if the toys understand and respond to their behavior. The performance of robotic toys, which allows them to be more fun than traditional dolls, is not their ability to understand but their ability to “perform understanding” (ibid: 26) in an elaborate and seemingly reactive way. Algorithms allow the machine to react to the behavior of the user, and this in turn allows the user to project onto the machine his or her own contingency and meanings more efficiently than in the interaction with a mute doll (ibid: 39–40).[33] A toy behaves differently depending on what the user does, making it easier for the user to interpret its behavior as communication. An algorithmically driven machine does not give an intelligent answer; it only gives an answer that may become intelligent for the user.[34]

A surrogate of double contingency is produced because the user faces his or her own contingency in an externalized and elaborated form, and interprets it as communication – somehow like Luhmann in his alleged communication with his Zettelkasten. In both cases the interlocutor (the file-box, the robotic toy) has a sufficiently complex structure for the interaction to produce information different from what the user already knows, and this information is attributed to the partner. The user communicates with the machine even if the machine does not communicate with the user.

It is not only children who do this, and it does not happen only in interaction with anthropomorphic or animal-like devices, such as the robotic seals and dogs studied by Turkle. Various experiments show that people, without realizing it, deal with computers as if they were real people (Nass & Yan 2010). For example they evaluate the work of the computer in more positive terms if the assessment is done on the same computer; it is as if they did not want to offend it. These dynamics, however, have rather restrictive limits. The user encounters information he or she already has, but gets the opportunity to observe it from a different perspective. This can produce additional information if the user is able to grasp it, but this is not really a different perspective. One observes one’s own perspective from another angle and does not observe the perspective of someone else.

The results can be complex and rewarding. Contingency is multiplied because it can be observed from the outside. The user experiences a kind of “virtualization” of his or her own contingency, as observed in a mirror that generates a virtual image.[35] The independent object that you see in the mirror does not exist where the image is: you cannot touch, manipulate, or modify it. You cannot enter the mirror. The image, however, is not an illusion: if there is nothing to reflect, no virtual image is produced (Esposito 1995). The mirror shows the image of objects that really exist, but not where they seem to be. It shows them as if they were somewhere else, thereby making it possible to see them from a different perspective. The observer can see the objects simultaneously from two different points of view, from the front and from the back, and see things he or she could not see otherwise. But the objects remain what they are and are not duplicated. Only the perspectives are duplicated. The observer notices this when he or she observes him- or herself in a mirror: he or she can see how others see them, and this can be very useful and even surprising,[36] but the image by itself does not behave surprisingly or unexpectedly, and certainly not independently. The observer interacts with him- or herself, not with another observer.

Something similar occurs with the reflection of contingency in the interaction with robotic toys. Consequently, we can speak of the virtualization of contingency. The interaction is meaningful because it produces information that did not exist previously, neither for the user nor for the machine. But this contingency is the result of the duplication of the perspective of the user, who observes his or her own contingency from a different perspective. The observers are not duplicated, what is duplicated is the perspective of the same observer. No authentic reflected (and unpredictable) double contingency is produced between two communicating parties. What is doubled is the contingency of a single observer interacting with him- or herself as if they were someone else. But two cases of simple contingency do not make up double contingency. In this interaction the observer can certainly acquire information that he or she could not get otherwise, can have fun and find company, but does not face the variety and unpredictability of a truly different perspective, as in communication.

Smart algorithms, however, go further, and do something different and more enigmatic than robotic toys.[37] When users interact with an algorithm capable of learning (maybe unsupervised: Russell & Norvig 2003: 763 f.; Etzioni 2016), they face a contingency that is not their own – even though it does not belong to the machine. They do not observe themselves from a different perspective, they face someone else’s perspective. The machine in this case is not only behaving in such a way as to allow users to think that it communicates, it actually produces information from a different perspective. The perspective that the machine presents is still a reflected perspective because the algorithm inevitably does not know contingency, but it is not the perspective of the user. The algorithm reflects and represents the perspective of other observers, and users observe through the machine a re-elaboration of other users’ observations.

7 Googlization

Where does the algorithm find the contingency it must reflect? How does it access the external perspectives it elaborates and presents to its users? To be able to act as communication partners, algorithms must be on the web (Hardy 2016). Artificial communication would not be possible without the web, however smart and sophisticated algorithms may be. Actually the issue of communication with algorithms emerged when algorithms were connected to the web. The path-breaking effect of Web 2.0 (and presumably of Web 3.0) is not so much customization as the inclusion and exploitation of virtual contingency, which parasitically “feeds” on contributions by the users and uses them actively to increase its own complexity – and also the complexity of communication. Apparently we can communicate with algorithms, experiencing an (artificial) form of unpredictability and reflection – or, if you will, of double contingency.

The symbol of this approach is Google, and this is also the reason for its huge success. The breakthrough came in 1998 with the introduction of link analysis after the spread of the world wide web, which had then existed for almost 10 years (Langville & Meyer 2006: 4 ff.). Earlier information retrieval was a search in a limited, non-linked, and static collection of documents. Organization and categorization of information were entrusted to specialists such as librarians, journal editors, or experts in various fields. Link analysis, in contrast, is based on the web and introduces a form of information retrieval that has become huge, dynamic (unlike traditional documents, web pages are constantly changing their content), hyperlinked, but above all self-organized. The structure is decided not by experts but by the dynamics of the web. And it is incomparably more efficient.

It was a radical conceptual turn adopted in the design of the algorithm PageRank by Google, which thus “invented” the Internet as we know it today (Metz 2012). Its authors, and later owners of the company, described it in a 1999 article (Page et al.)[38] as exploiting the link structure of the web as a large hypertext system. The key insight was to determine which pages are important and for whom, disregarding completely the content of the pages themselves (ibid: 15). To appropriately decide the ranking of pages in responding to users’ requests, the system uses information that is external to the web pages and refers rather to what other users have done in their previous activity. In other words, to decide which pages are important PageRank does not go and see what they say and how they say it, but looks at how often they were linked to and by whom. PageRank is based on the number of backlinks to the pages (how many times they have been pointed to by other websites) and on their importance, on the model of scholarly citations – where the “importance” of the backlinks depends in turn on how many links point to them. The definition of relevance is openly circular: “a page has high rank if the sum of the ranks of its backlinks is high” (ibid: 3), including both the case of a page with many not particularly authoritative backlinks and that of a page with a few highly linked backlinks.
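A minimal sketch can make the circular definition concrete (the link graph here is invented, and the real algorithm adds many refinements): each page’s rank is repeatedly recomputed from the ranks of the pages that link to it, until the values stabilize:

```python
# Illustrative sketch of the circular rank definition, not Google's production code.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it points to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)  # rank flows along links
        rank = new_rank
    return rank

web = {
    "home":   ["about", "blog"],
    "about":  ["home"],
    "blog":   ["home", "about"],
    "orphan": ["home"],
}
for page, score in sorted(pagerank(web).items(), key=lambda x: -x[1]):
    print(f"{page}: {score:.3f}")
```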

The genius of PageRank lies in completely giving up understanding what the page says and relying solely on the structure and the dynamics of communication. Google’s creators do not even try to come up with a great organizational scheme for the web based on experienced and competent consultants, as did the competing search engines from AltaVista to Yahoo. They do not try to understand, and they do not try to build an algorithm that understands. “Instead, they got everyone else to do it for them” (Grimmelmann 2009: 941) when surfing the net and making connections. Content came into play later, as a result of the classification and not as a premise. Google uses the links to learn not only how important a page is, but also what it is about. If the links to a given page use a certain sentence, the system infers that the sentence accurately describes that page and takes this into account for later searches. The algorithm is designed to apprehend and reflect the choices made by the users (Gillespie 2014). It activates a recursive loop in which the users use the algorithm to get information, their searches modify the algorithm, and the algorithm then impinges on their subsequent searches for information. What the programmers design is only the algorithm’s ability to self-modify. What the algorithm selects, and how, depends on how the users use it.

The system has been developed in order to take into account not only popularity but also other factors such as users’ click behavior, reading time, or patterns of query reformulation (Granka 2010: 367). As Google declares in the Inside Search pages of its website,[39] today algorithms rely on more than 200 signals and clues referring to “things like the terms in websites, the freshness of content, your region.” The company has produced a Knowledge Graph that provides a semantic connection between the various entities and allows for more rapid and appropriate responses, also including information and answers no one had previously thought of (Hamburger 2012). The “intelligence” of the system derives from the use of previous web activity and from the sources of information available on the web, from Wikipedia to databases of common knowledge, in order to give people what they are looking for even if they do not know what this is. As John Giannandrea, Director of Engineering at Google, declares, when, for example, one is looking for Einstein on Google, “We’re not trying to tell you what’s important about Einstein – we’re trying to tell you about what humanity is looking for when they search.” The intelligence of the system is still the intelligence of the users that the algorithm exploits to direct and organize its behavior.
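A toy illustration – in no way Google’s actual formula, and with invented signal names, values, and weights – shows how heterogeneous signals of this kind can be combined into a single ranking score:

```python
# Toy combination of ranking signals; names, values, and weights are invented.
def score(page, query_region):
    signals = {
        "link_rank": page["link_rank"],                     # from link analysis
        "freshness": 1.0 / (1.0 + page["age_days"] / 30.0), # newer content scores higher
        "region":    1.0 if page["region"] == query_region else 0.3,
        "clicks":    page["click_rate"],                    # learned from user behavior
    }
    weights = {"link_rank": 0.4, "freshness": 0.2, "region": 0.1, "clicks": 0.3}
    return sum(weights[name] * value for name, value in signals.items())

pages = [
    {"url": "a.example", "link_rank": 0.9, "age_days": 400, "region": "de", "click_rate": 0.05},
    {"url": "b.example", "link_rank": 0.5, "age_days": 3,   "region": "it", "click_rate": 0.30},
]
ranked = sorted(pages, key=lambda p: score(p, query_region="it"), reverse=True)
print([p["url"] for p in ranked])
```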

Google has become the symbol of an approach that can be found in all successful projects on the web. Since 2003 the term “googlization” (Rogers 2013: 83 ff.) has been used to describe the spread, in more and more applications and contexts, of a model that does not rely on traditional status markers like editors or experts, but “feeds” on the dynamics of the web to organize its operations and even itself. The web is guided by a “googlization of everything” (Vaidhyanathan 2011), which takes advantage of the operations performed by users to produce a condition in which “Google works for us because it seems to read our minds” (ibid: 51), but actually does not need to do this. What it does is use the results of our minds to give directions, and then produce information that none of us had in mind.

Google, like all systems that work in the same way, feeds on the information provided by users to produce other information, which it introduces into the communication circuit.[40] It is this information that users, if they are able to understand it, get from the interaction with algorithms, and that cannot be attributed to anything except the algorithms. In the communication with algorithms, it does not make sense to refer to the perspective of those who entered the data because they could not know how the data would be used, and it makes no sense to refer to what the algorithm intends because it did not intend anything. Constraints and orientation do not depend on intentions but on programs, which are normally inaccessible (Luhmann 2002: 143). The utterance (Mitteilung) selects the information relevant for the ongoing communication, but the criteria that guide this selection by programs do not serve to orient understanding. The real innovation in the communication with algorithms is that understanding is no longer oriented to the meaning of an utterance.[41]

As Sherry Turkle remarks (2011: 55), what you lose talking with a bot or a robot as a communication partner is alterity, “the ability to see the world through the eyes of another.” Algorithms do not act as alter egos, and if you communicate with an algorithm you do not communicate with an alter ego. You do not observe how another (like yourself) observes, you observe through the algorithm what others also can observe in communication.[42]

Nevertheless, while maintaining all the differences between interaction with algorithms and interaction with human beings,[43] we could conceive of this as a new form of communication. The user receives a contingent response that reacts to his or her contingency and does not just reflect his or her indeterminacy. The algorithm makes selections and choices based on criteria that are not random, but that the user does not know and need not know. The algorithm reflects and elaborates the indeterminacy of all participants, and each user faces the contingency of all the others, which is infinitely surprising and informative. It is still virtual contingency, but reflected in a mirror in which everyone sees not him- or herself but the other observers communicating – generating a kind of “virtual double contingency.” They do not communicate with him or her, but the result is the answer to the user’s specific questions and would not exist if they weren’t asked. The success of Google and of the models that adopt the same strategy is due to this: apparently their algorithms communicate with users and are able to do so precisely because they do not try to understand content. They do not artificially reproduce intelligence, but they do directly engage in communication.

8 What algorithms learn

If this is still communication, we are dealing with a form of artificial communication. By artificial I do not simply mean that it was produced by someone, because in this sense all communication is obviously artificial.[44] By artificial communication I mean communication that involves an entity, the algorithm, which has been built and programmed by someone to act as a communication partner. It is artificial because you communicate with the product of someone without communicating with the person who produced it.[45]

What is artificial is the perspective of the partner that is produced by the algorithm starting from the perspectives of web users. The algorithm uses them to create a different perspective, one that becomes that of the communication partner with whom users interact. It succeeds in doing so if it learns to learn by itself, i.e. to develop a practice of unsupervised learning, in which the algorithm does not learn what others teach. Instead it decides autonomously what to learn and what to communicate.[46] Unsupervised learning, however, is predictably rather enigmatic, echoing the classic communicative paradox of the Palo Alto school (Watzlawick, Beavin & Jackson 1967): “be spontaneous” – or be creative. But how can you teach creativity, i.e., how can you program learning without knowing what the student-machine has to learn? This is not the classic educational problem of teaching to learn (Luhmann & Schorr 1979: 85), teaching a methodology and not content. Here not only do you not know what, you do not even know how the algorithm is supposed to learn, because it does not reproduce human capabilities. The power of the algorithm relies on its operating differently.

In practice, unsupervised learning is realized as reinforcement learning,[47] in which the algorithm works freely and in the end is told which results are satisfactory (Russell & Norvig 2003: 763 f.; Etzioni 2016). You do not teach the machine to do things (as in supervised learning). The machine makes random moves, as if it were trying to play a game without knowing the rules, and after a number of attempts you tell it whether it has won or lost (reinforcement). You teach the algorithm neither the moves nor the rules, but, as shown by the much-discussed competitions in chess or GO, it can be enabled to defeat the most qualified champions. The algorithm uses reinforcements to calculate in its own way an evaluation function that indicates which moves to make – without making predictions, without a game strategy, without “thinking,” and without imagining the perspective of the counterparty. As the programmers of AlphaGo, the computing system built by Google to play GO, put it: “our goal is to beat the best human players, not just to mimic them” (Silver & Hassabis 2016). The machine does not reason like human beings, and in its behavior there is nothing to understand.[48] AlphaGo does not plan what to do depending on the opponent’s moves, but calculates and decides while playing. The programmers themselves do not understand the “reasoning” of the algorithm. When they tell it that something is “wrong,” they merely signal that there is an error, without telling it what it is and without even knowing what it is.
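
A didactic sketch can illustrate the bare mechanism of learning from terminal feedback alone (a toy tabular learner for an invented subtraction game, not a description of AlphaGo or of any DeepMind system): the program makes moves, is told only at the end whether it has won, and distills from these signals an evaluation function that indicates which moves to make.

```python
# Toy illustration of learning from terminal feedback only: the program makes
# moves, is told at the very end whether it won or lost, and uses this signal
# to update an evaluation function. A tabular Monte Carlo learner for a small
# subtraction game; it is a didactic sketch, not how AlphaGo works.
import random

N = 13                    # counters on the table at the start
ACTIONS = (1, 2, 3)       # a move removes 1, 2 or 3 counters; last counter wins
Q = {}                    # evaluation function: (pile, move) -> estimated win rate

def choose(pile, epsilon):
    """Epsilon-greedy choice among the legal moves."""
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < epsilon:
        return random.choice(legal)
    return max(legal, key=lambda a: Q.get((pile, a), 0.0))

def play_episode(epsilon=0.2):
    """One game against a random opponent; only the final result is returned."""
    pile, visited = N, []
    while True:
        move = choose(pile, epsilon)
        visited.append((pile, move))
        pile -= move
        if pile == 0:
            return visited, 1.0       # the learner took the last counter: win
        pile -= random.choice([a for a in ACTIONS if a <= pile])
        if pile == 0:
            return visited, 0.0       # the opponent took the last counter: loss

def train(episodes=20000, alpha=0.1):
    for _ in range(episodes):
        visited, reward = play_episode()
        for state_action in visited:  # every visited move is nudged toward the outcome
            old = Q.get(state_action, 0.0)
            Q[state_action] = old + alpha * (reward - old)

if __name__ == "__main__":
    random.seed(0)
    train()
    # Learned values for the first move; removing 1 (leaving a multiple of 4,
    # the known winning reply) typically ends up with the highest estimate,
    # although nobody ever told the program anything about good play.
    print({a: round(Q.get((N, a), 0.0), 2) for a in ACTIONS})
```

Nothing in the resulting table encodes rules of good play that anyone formulated; the numbers simply summarize which moves have tended to precede wins.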

In interaction with users, a learning algorithm gathers many reinforcements of this kind from the behavior of people: whether they accept the result, whether they click, whether they go on searching. It then uses them to direct its own behavior, which becomes more and more refined. AlphaGo and other game algorithms learn via self-play, refining their skills in a process of trial and error (Schölkopf 2015; Mnih et al. 2015). The system is trained with data from a server that allows people to play against each other on the Internet. The players are all amateurs and the skills acquired are rather coarse, but the program refines them enormously by playing millions of games against itself. The system learns “not just from human moves, but from moves generated by multiple versions of itself” (Metz 2016a). In this process of “self-supervised learning” (Etzioni et al. 2006) the algorithm becomes incomparably better than the players from whom it learned, who would not be able to understand its moves. It can learn how to win at GO or at videogames and can learn to give satisfactory answers to user requests, but it does not learn anything about the world, about the users, or about the issues it deals with. It only learns about itself (Hammond 2015: 27).
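
The same logic can be sketched for reinforcements gathered from users rather than from games (again a toy example: the click probabilities are invented, and real ranking systems are far more elaborate): the only feedback is whether a displayed result is clicked, and the program gradually concentrates on the results that get clicked without learning anything about why.

```python
# Sketch of reinforcement gathered from user behaviour: the only feedback is
# whether a displayed result is clicked. The click probabilities below are
# invented; the point is only that the program refines its choices without
# understanding users or content.
import random

results = {"result_a": 0.10, "result_b": 0.35, "result_c": 0.20}  # hidden click rates
shown = {r: 0 for r in results}    # how often each result has been displayed
clicks = {r: 0 for r in results}   # how often it has been clicked

def pick(epsilon=0.1):
    """Mostly show the result with the best observed click rate, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(results))
    return max(results, key=lambda r: clicks[r] / shown[r] if shown[r] else 0.0)

random.seed(1)
for _ in range(5000):
    r = pick()
    shown[r] += 1
    if random.random() < results[r]:   # the user clicks (reinforcement) or does not
        clicks[r] += 1

# The observed click rates: the program typically ends up preferring result_b,
# simply because it was clicked more often, not because it "understands" why.
print({r: round(clicks[r] / shown[r], 2) if shown[r] else 0.0 for r in results})
```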

The algorithm does not become more informed or more intelligent; it just learns to work better. But in doing so it can produce increasingly complex communication with its users, who can learn previously unknown things about the world and about themselves. Communication becomes more effective, and new information is produced. “A lot of situations where you invoke machine learning are because you do not really understand what the system should do” (Michael Warner, a robotics researcher at CMU, quoted in Pierce 2016a), but you can learn it from the working of the machine itself.

We can learn from communication with algorithms. An example is the already legendary move 37 in the game of March 2016 between Lee Sedol, one of the world’s top GO players, and AlphaGo. The move has been described by all observers as absolutely surprising: “It was not a human move” and could not have come to the mind of any human being (Metz 2016c). It was in fact produced by an algorithm that does not have a mind, but it is the move that allowed the program to win the game and then the match. Looking back later, GO players found the move absolutely beautiful and brilliant and used it to rethink their game strategies, dramatically improving them – they started to learn.[49] Following this revision, Lee Sedol himself produced the celebrated and highly unlikely (1 in 10,000) move 78 (“The Touch of God”) in the fourth game with AlphaGo, the only one he actually won (Metz 2016b; Taylor 2016).

The player defeated the algorithm by re-elaborating with human skills a move that no human could have devised. It is likely that the algorithm will now incorporate move 78 into its range of possibilities and will learn to manage it and its consequences,[50] but it would not have been able to do this without the human intervention that devised the move – and, in fact, the algorithm lost that game. No algorithm, however great its self-learning ability, can generate possibilities that are not implicit in the data supplied (Etzioni 2016). No algorithm can independently generate contingency, but the contingency that the algorithm processes can also be the result of the interaction of human beings with the algorithm.

9 Conclusion

Even and especially if the algorithm is not an alter ego, does not work with a strategy, and does not understand its counterpart, human users can, in interaction with machines, learn something that no one knew before or could have imagined, something that changes their way of observing. When people learn to learn from machines, the complexity of communication in general increases. In the case of GO it was the game strategy, but the same mechanisms have been applied in the design of other social algorithms.[51]

This is what sociological theory should be able to deal with. Whether one decides that interaction with algorithms is a specific form of communication, and that the concept of communication should be amended accordingly, or one decides that algorithms are not communication partners, what matters is to describe adequately the development of digital communication. We must be able to show how interaction with algorithms affects the communication of society in general (Luhmann 1997: 304) and to provide insights that can help to direct the work of those who program and build algorithms.

In more and more areas, reference to intelligence does not help, be they cases in which things are communicating (e.g. the Internet of Things) or cases in which communication is treated as a thing (e.g. the Digital Humanities). The scenario of the Internet of Things (IoT) involves a network that connects machines, people, and real-world objects interacting with one another as people do on the web today (Höller et al. 2014). The idea is that objects can communicate with objects and people in the same way that people communicate with other people, extending enormously the boundaries and forms of possible interactions. At the same time, and conversely, a new form of algorithmic reading seems to be emerging (Hayles 2012: 46; Sneha 2014; Kirschenbaum 2007) in which texts are treated not as communication but as objects.[52] Sets of algorithms process enormous numbers of texts differently from what a human reader would do even in the unlikely case that he or she could read them all, searching for patterns and correlations independent of interpretation (Moretti 2005: 10).
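
A minimal sketch of such algorithmic reading (with an invented three-sentence corpus; actual Digital Humanities pipelines are of course far richer) treats the texts purely as objects in which co-occurrence patterns are counted, without any interpretation of their meaning:

```python
# Minimal sketch of algorithmic reading: texts are treated as objects in which
# patterns (here, word co-occurrences within a text) are counted, without any
# interpretation of what the texts mean. The three-sentence corpus is invented.
from collections import Counter
from itertools import combinations
import re

corpus = [
    "algorithms learn from the moves of players",
    "players learn new strategies from algorithms",
    "communication with algorithms produces new information",
]

pair_counts = Counter()
for text in corpus:
    words = sorted(set(re.findall(r"[a-z]+", text.lower())))
    pair_counts.update(combinations(words, 2))   # every unordered word pair in the text

# The most frequent pairs are patterns "independent of interpretation":
# no reader has understood anything about the sentences.
print(pair_counts.most_common(3))
```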

Does this mean that we are moving towards a state of widespread intelligence in which there will be no difference between algorithms and people, between “intelligent” objects and minds involved in communication? My impression is that these developments require a radical shift of reference from intelligence to communication. What algorithms try to reproduce is not the consciousness of people but the informativity of communication. New forms of communication can combine the performances of algorithms with those of people, but not because algorithms are confused with people or because machines become intelligent. The working of algorithms is, and increasingly becomes, different from that of people, but this very difference can give rise to a new way of dealing with data and of producing differences in the communication circuit.


Article note

This text is an expanded and revised version of the inaugural lecture of the Niklas Luhmann Guest Professorship 2015 at Bielefeld University. For comments, criticism, and suggestions on an earlier draft of this paper, I would like to thank David Stark, Alberto Cevolini, Giancarlo Corsi, and the anonymous referees of Zeitschrift für Soziologie.



References

Agrawal, R., T. Imielinski & A. Swami, 1993: Mining Association Rules between Sets of Items in Large Databases. Proceedings of the 1993 ACM SIGMOD Conference. Washington D.C. doi:10.1145/170035.170072

Agrawal, R., 2003: Rakesh Agrawal Speaks Out. Interview with Marianne Winslett. http://sigmod.org/publications/interviews/pdf/D15.rakesh-final-final.pdf

Amoore, L. & V. Piotukh, 2015: Life beyond big data: governing with little analytics. Economy and Society 44(3), 341–366. doi:10.1080/03085147.2015.1043793

Anderson, C., 2008: The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired, 16.

Automated Insights, at: https://automatedinsights.com

Blumenberg, H., 1957: Nachahmung der Natur. Zur Vorgeschichte der Idee des schöpferischen Menschen. Studium Generale 10: 266–283.

Boellstorff, T., 2013: Making big data, in theory. First Monday 18(10). doi:10.5210/fm.v18i10.4869

Boyd, D. & K. Crawford, 2012: Critical Questions for Big Data. Information, Communication and Society 15(5), 662–679. doi:10.1080/1369118X.2012.678878

Braun-Thürmann, H., 2013: Agenten im Cyberspace: Soziologische Theorieperspektiven auf die Interaktionen virtueller Kreaturen. Pp. 70–96 in: U. Thiedeke (ed.), Soziologie des Cyberspace: Medien, Strukturen und Semantiken. Wiesbaden: Springer VS. doi:10.1007/978-3-322-80482-2_3

Burrell, J., 2016: How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Big Data & Society 1: 1–12. doi:10.1177/2053951715622512

Callon, M., 2004: The Role of Hybrid Communities and Socio-Technical Arrangements in the Participatory Design. Journal of the Centre for Information Studies 5(3), 3–10.

Cardon, D., 2015: À quoi rêvent les algorithmes. Paris: Seuil.

Collins, H., 1990: Artificial Experts. Social Knowledge and Intelligent Machines. Cambridge MA: MIT Press.

Cowls, J. & R. Schroeder, 2015: Causation, Correlation, and Big Data in Social Science Research. Policy & Internet 7: 447–472. doi:10.1002/poi3.100

Crawford, K., K. Miltner & M. L. Gray, 2014: Critiquing Big Data: Politics, Ethics, Epistemology. International Journal of Communication 8: 1663–1672.

Davis, M., 1958: Computability and unsolvability. New York-Toronto-London: McGraw-Hill.

Dill, K., 2013: What Is Game AI? Pp. 3–9 in: S. Rabin (ed.), Game AI Pro: Collected Wisdom of Game AI Professionals. Boca Raton: CRC Press.

Dreyfus, H., 1972: What Computers Can’t Do. New York: MIT Press.

Eco, U., 2012: Ci sono delle cose che non si possono dire. Di un realismo negativo. Alfabeta 2, III/17: 22–25.

Eco, U. & P. Fabbri, 1978: Progetto di ricerca sull’utilizzazione dell’informazione ambientale. Problemi dell’informazione 4.

Esposito, E., 1995: Illusion und Virtualität: Kommunikative Veränderung der Fiktion. Pp. 187–216 in: W. Rammert (ed.), Soziologie und künstliche Intelligenz. Frankfurt am Main/New York: Campus.

Esposito, E., 1997: Risiko und Computer: das Problem der Kontrolle des Mangels der Kontrolle. Pp. 93–108 in: T. Hijikata & A. Nassehi (eds.), Riskante Strategien. Beiträge zur Soziologie des Risikos. Opladen: Westdeutscher Verlag. doi:10.1007/978-3-322-85107-9_5

Esposito, E., 2012: Kontingenzerfahrung und Kontingenzbewusstsein in systemtheoretischer Perspektive. Pp. 39–48 in: K. Toens & U. Willems (eds.), Politik und Kontingenz. Wiesbaden: VS Springer. doi:10.1007/978-3-531-94245-2_3

Esposito, E., 2013: Digital Prophecies and Web Intelligence. Pp. 121–142 in: M. Hildebrandt & K. de Vries (eds.), Privacy, Due Process and the Computational Turn. The Philosophy of Law Meets the Philosophy of Technology. New York: Routledge.

Esposito, E., 2014: Algorithmische Kontingenz. Der Umgang mit Unsicherheit im Web. Pp. 233–249 in: A. Cevolini (ed.), Die Ordnung des Kontingenten. Beiträge zur zahlenmäßigen Selbstbeschreibung der modernen Gesellschaft. Wiesbaden: Springer VS. doi:10.1007/978-3-531-19235-2_10

Etzioni, O., 2016: Deep Learning isn’t a Dangerous Magic Genie. It’s just Math. wired.com 15.6.2016.

Etzioni, O., M. Banko & M. J. Cafarella, 2006: Machine Reading. American Association for Artificial Intelligence. http://web.eecs.umich.edu/~michjc/papers/machinereading_aaai06.pdf

Ferrara, E., O. Varol, C. Davis, F. Menczer & A. Flammini, 2016: The Rise of Social Bots. Communications of the ACM 59(7), 96–104. doi:10.1145/2818717

von Foerster, H., 1970: Thoughts and Notes on Cognition. Pp. 25–48 in: P. Garvin (ed.), Cognition: A Multiple View. New York: Spartan Books.

von Foerster, H., 1985: Cibernetica ed epistemologia: storia e prospettive. Pp. 112–140 in: G. Bocchi & M. Ceruti (eds.), La sfida della complessità. Milano: Feltrinelli.

Fuchs, P., 1997: Adressabilität als Grundbegriff der soziologischen Systemtheorie. Soziale Systeme 3(1), 57–79.

Gillespie, T., 2014: The Relevance of Algorithms. Pp. 167–194 in: J. Boczkowski & K. A. Foot (eds.), Media Technologies. Cambridge MA: MIT Press. doi:10.7551/mitpress/9780262525374.003.0009

Gillespie, T., 2016: Algorithms, Clickworkers, and the Befuddled Fury Around Facebook Trends. Social Media Collective, May 18.

Goodfellow, I., Y. Bengio & A. Courville, 2016: Deep Learning. Cambridge MA/London: MIT Press.

Google, at: https://gmail.googleblog.com/2015/11/computer-respond-to-this-email.html (accessed July 3, 2017)

Granka, L. A., 2010: The Politics of Search: A Decade Retrospective. The Information Society 26: 364–374. doi:10.1080/01972243.2010.511560

Grimmelmann, J., 2009: The Google Dilemma. New York Law School Law Review 53(3/4), 939–950. doi:10.31228/osf.io/yp6ej

Grossman, L., 2010: How Computers Know What We Want – Before We Do. Time, 27 May.

Halevy, A., P. Norvig & F. Pereira, 2009: The Unreasonable Effectiveness of Data. IEEE Intelligent Systems 24(2), 8–12. doi:10.1109/MIS.2009.36

Hamburger, E., 2012: Building the Star Trek Computer: How Google’s Knowledge Graph is Changing Search. The Verge, June 8.

Hammond, K., 2015: Practical Artificial Intelligence for Dummies. Hoboken: Wiley.

Hardy, Q., 2016: Artificial Intelligence Software Is Booming. But Why Now? New York Times, 19.9.2016.

Hayles, K. N., 2012: How We Think. Digital Media and Contemporary Technogenesis. Chicago/London: University of Chicago Press. doi:10.7208/chicago/9780226321370.001.0001

Höller, T., V. Tsiatis, C. Mulligan, S. Karnouskos, S. Avesand & D. Boyle, 2014: From Machine-To-Machine to the Internet of Things: Introduction to a New Age of Intelligence. Amsterdam: Elsevier.

Kelly, K., 2008: On Chris Anderson’s the End of Theory. http://edge.org/discourse/the_end_of_theory.html

Kirschenbaum, M. G., 2007: The Remaking of Reading: Data Mining and Digital Humanities. NGDM 07, National Science Foundation, 12 October 2007. http://www.csee.umbc.edu/~hillol/NGDM07/abstracts/talks/MKirschenbaum.pdf

Kitchin, R., 2014: Big Data, new epistemologies and paradigm shifts. Big Data and Society April–June: 1–12. doi:10.1177/2053951714528481

Kollanyi, B., P. N. Howard & S. C. Woolley, 2016: Bots and Automation over Twitter during the U. S. Election. Comprop Data Memo 2016.4.

Langville, A. N. & C. D. Meyer, 2006: Google’s PageRank and Beyond: The Science of Search Engine Rankings. Princeton: Princeton University Press. doi:10.1515/9781400830329

Latour, B., 2007: Beware, your imagination leaves digital traces. Times Higher Education Literary Supplement, April 6.

Luhmann, N., 1981: Kommunikation mit Zettelkästen: Ein Erfahrungsbericht. Pp. 222–228 in: H. Baier, H. M. Kepplinger & K. Reumann (eds.), Öffentliche Meinung und sozialer Wandel: Für Elisabeth Noelle-Neumann. Opladen: Westdeutscher Verlag. doi:10.1007/978-3-322-87749-9_19

Luhmann, N., 1984: Soziale Systeme. Grundriß einer allgemeinen Theorie. Frankfurt am Main: Suhrkamp.

Luhmann, N., 1985: Die Autopoiesis des Bewußtseins. Soziale Welt 36: 402–446.

Luhmann, N., 1988: Wie ist Bewußtsein an Kommunikation beteiligt? Pp. 884–905 in: H. U. Gumbrecht & K. L. Pfeiffer (eds.), Materialität der Kommunikation. Frankfurt am Main: Suhrkamp.

Luhmann, N., 1990: Ich sehe das, was Du nicht siehst. Pp. 228–234 in: Ders., Soziologische Aufklärung 5. Opladen: Westdeutscher Verlag. doi:10.1007/978-3-322-97005-3_11

Luhmann, N., 1997: Die Gesellschaft der Gesellschaft. Frankfurt am Main: Suhrkamp.

Luhmann, N., 2002: Einführung in die Systemtheorie. Heidelberg: Carl-Auer-Systeme.

Luhmann, N., 2005: Einführung in die Theorie der Gesellschaft. Heidelberg: Carl-Auer-Systeme.

Malsch, T., 1997: Die Provokation der “Artificial Societies”. Ein programmatischer Versuch über die Frage, warum die Soziologie sich mit den Sozialmetaphern der Verteilten Künstlichen Intelligenz beschäftigen sollte. Zeitschrift für Soziologie 26(1), 3–21. doi:10.1515/zfsoz-1997-0101

Malsch, T., 2001: Naming the Unnamable: Socionics or the Sociological Turn of/to Distributed Artificial Intelligence. Autonomous Agents and Multi-Agent Systems 3: 155–187. doi:10.1023/A:1011446410198

Malsch, T. & C. Schlieder, 2004: Communication without Agents? From Agent-Oriented to Communication-Oriented Modeling. Pp. 113–133 in: G. Lindemann et al. (eds.), First International Workshop RASTA 2002. Berlin, Heidelberg: Springer. doi:10.1007/978-3-540-25867-4_7

Marres, N. & C. Gerlitz, 2017: ‘Just because it’s called social, doesn’t make it social’. On the Sociality of Social Media Platforms. In: M. Guggenheim, N. Marres & A. Wilkie (eds.), Inventing the Social. Manchester: Mattering Press. doi:10.28938/9780995527768

Mayer-Schönberger, V. & K. Cukier, 2013: Big Data. A Revolution That Will Transform How We Live, Work, and Think. London: Murray.

Metz, C., 2012: If Xerox Parc Invented the PC, Google Invented the Internet. wired.com 8.8.2012.

Metz, C., 2015: Google Made a Chatbot That Debates the Meaning of Life. wired.com 26.6.2015.

Metz, C., 2016a: How Google’s AI viewed the Move no Human Could Understand. wired.com 14.3.2016.

Metz, C., 2016b: In Two Moves, AlphaGo and Lee Sedol Redefined the Future. wired.com 16.3.2016.

Metz, C., 2016c: What the AI behind AlphaGo Can Teach Us About Being Human. wired.com 19.5.2016.

Metz, C., 2017a: Google’s Go-Playing Machine Opens the Door to Robots that Learn. wired.com 30.1.2017.

Metz, C., 2017b: Inside Libratus, the Poker AI That Out-Bluffed the Best Humans. wired.com 01.012.2017.

Mnih, V. et al., 2015: Human-level control through deep reinforcement learning. Nature 518: 529–533. doi:10.1038/nature14236

Moretti, F., 2005: La letteratura vista da lontano. Torino: Einaudi.

Mozur, P., 2017: Google’s AlphaGo Defeats Chinese Go Master in Win for A. I. The New York Times, May 23.

Nass, C. & C. Yan, 2010: The Man Who Lied to His Laptop: What We Can Learn About Ourselves from Our Machines. London: Penguin.

Narrative Science, at: https://www.narrativescience.com

Page, L., S. Brin, R. Motwani & T. Winograd, 1999: The PageRank Citation Ranking: Bringing Order to the Web. Technical Report, Stanford InfoLab.

Parsons, T. & E. A. Shils (eds.), 1951: Toward a General Theory of Action. Cambridge MA: Harvard University Press. doi:10.4159/harvard.9780674863507

Pasquale, F., 2015: The Black Box Society. The Secret Algorithms That Control Money and Information. Cambridge MA: Harvard University Press. doi:10.4159/harvard.9780674736061

Pierce, D., 2016a: Meet the Smartest, Cutest AI-Powered Robot You’ve Ever Seen. wired.com 27.6.2016.

Pierce, D., 2016b: Spotify’s Latest Algorithmic Playlist Is Full of Your Favorite New Music. wired.com 5.8.2016.

Podolny, S., 2015: If an Algorithm Wrote This, How Would You Even Know? The New York Times, March 7.

Rogers, R., 2013: Digital Methods. Cambridge MA/London: MIT Press. doi:10.7551/mitpress/8718.001.0001

Russell, S. J. & P. Norvig, 2003: Artificial Intelligence. A Modern Approach. 2nd ed. Upper Saddle River: Pearson Education.

Schmidt, J. F. K., 2017: Niklas Luhmann’s Card Index: Thinking Tool, Communication Partner, Publication Machine. Pp. 289–311 in: A. Cevolini (ed.), Forgetting Machines. Knowledge Management Evolution in Early Modern Europe. Leiden: Brill. doi:10.1163/9789004325258_014

Searle, J. R., 1980: Minds, Brains and Programs. Behavioral and Brain Sciences 3(3), 417–457. doi:10.1017/S0140525X00005756

Seaver, N., 2012: Algorithmic Recommendations and Synaptic Functions. Limn 2. http://limn.it/algorithmic-recommendations-and-synaptic-functions/

Schölkopf, B., 2015: Learning to See and Act. Nature 518: 486–487. doi:10.1038/518486a

Sharon, T. & D. Zandbergen, 2016: From Data Fetishism to Quantifying Selves: Self-tracking Practices and the Other Values of Data. New Media & Society. doi:10.1177/1461444816636090

Silver, D. & D. Hassabis, 2016: AlphaGo: Mastering the ancient game of Go with Machine Learning. https://research.googleblog.com/2016/01/alphago-mastering-ancient-game-of-go.html, 27.1.2016.

Sneha, P. P., 2014: Reading from a Distance — Data as Text. http://cis-india.org/raw/digital-humanities/reading-from-a-distance, July 23.

Solon, O., 2012: Weavrs. The Autonomous, Tweeting Blog-Bots That Feed on Social Content. wired.co.uk, March 28.

Suchman, L. A., 1987: Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge: Cambridge University Press.

Taylor, P., 2016: The Concept of ‘Cat Face’. London Review of Books 38/16: 30–32.

Thiedeke, U., 2013: Wir Kosmopoliten. Einführung in eine Soziologie des Cyberspace. Pp. 15–47 in: U. Thiedeke (ed.), Soziologie des Cyberspace: Medien, Strukturen und Semantiken. Wiesbaden: Springer VS. doi:10.1007/978-3-322-80482-2_1

Turing, A. M., 1950: Computing Machinery and Intelligence. Mind 59(236), 433–460. doi:10.1093/mind/LIX.236.433

Turkle, S., 2011: Alone Together. Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

Vaidhyanathan, S., 2011: The Googlization of Everything (And Why We Should Worry). Berkeley/Los Angeles: University of California Press. doi:10.1525/9780520948693

Vis, F., 2013: A Critical Reflection on Big Data: Considering APIs, Researchers and Tools as Data Makers. First Monday. doi:10.5210/fm.v18i10.4878

Wagner-Pacifici, R., J. W. Mohr & R. L. Breiger, 2015: Ontologies, Methodologies and new uses of Big Data in the social and cultural sciences. Big Data and Society 2(2), 1–11. doi:10.1177/2053951715613810

Wang, Y., 2016: Your Next New Best Friend Might Be a Robot. Meet Xiaoice. She’s empathic, caring, and always available—just not human. Nautilus, February 4. http://nautil.us/issue/33/attraction/your-next-new-best-friend-might-be-a-robot

Watzlawick, P., J. H. Beavin & D. D. Jackson, 1962: Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies, and Paradoxes. New York: Norton.

Winograd, T. & F. Flores, 1986: Understanding Computers and Cognition. Reading MA: Addison-Wesley.

Wolchover, N., 2014: AI Recognizes Cats the Same Way Physicists Calculate the Cosmos. wired.com, 15.12.

Youyou, W., M. Kosinski & D. Stillwell, 2015: Computer-based Personality Judgments are More Accurate Than Those Made by Humans. Proceedings of the National Academy of Sciences (PNAS) 112(4), 1036–1040. doi:10.1073/pnas.1418680112

Published Online: 2017-8-15
Published in Print: 2017-8-28

© 2017 by De Gruyter
