1 Introduction
A long time ago, information scientist Rob Kling and colleagues (2000) warned against interpreting social phenomena as independent of, and merely layered on, a specific technology. History may be repeating itself: the deployment of AI technologies has raised similar concerns. Research in academic and non-academic fields alike often talks about “social impacts” in terms so broad that it leaves doubts about the openness of its perspective: the “social” is just something to be taken care of. On the contrary, society is much more than a passive recipient of the transformations carried out by inevitable technological development. As Dourish and Bell (2011, 51) observed for the Internet, a “naive orientation towards social impacts frames the relationship between the social and technical too narrowly”.
In general, the AI community has been awakening and taking the first steps to avoid such a “layer cake” model. In the last few years, an articulated discussion has developed over the regulatory, technical and ethical aspects of AI, but the increasingly relevant social aspects of AI, both at the design and use stages, have captured the attention of the AI community only to a lesser extent. For instance, the European Commission documents on AI are original contributions that set the stage for framing and regulation (European Commission
2020; European Commission
2021). Technical studies (Sirbu et al.
2019) in addition to research on ethics (Floridi and Cowls
2019) and privacy (Stone et al.
2016) normally address and investigate the “social impacts” of AI. In the eyes of many, these efforts satisfy the need for a deeper knowledge of the social world where these AI-based systems come to operate. Along with some important recent work in computer science that seeks to open the perspective to the “social” (Dignum
2019), there are other promising approaches to study the societal implications of AI.
Sartori and Theodorou (
2022), for example, highlight the need for a proper sociology for AI, that is, for developing a sociological perspective within the AI community. Only when AI is conceived as a sociotechnical system can a more fruitful approach to its future be thought through. From design to use, there is a path to balance a Human-in-Control (HIC) approach with an ecosystem organized around diversity (in data collection and algorithmic models) and intersectionality (in society).
Cave and Dihal (
2019) dig into the narratives surrounding AI to extrapolate the most common visions among the public. Hopes and fears about AI’s scenarios might heavily influence how the public perceives and approaches the technology. In its diverse and composite configuration, the general public plays a role in the acceptance and adoption of the technology, mediated through the interaction between the media and public opinion. The public perception of AI is deemed essential to “how AI is deployed, developed and regulated” (ibid., p. 331).
This article falls in between these two approaches, elaborating a socio-technical perspective that overcomes the current neglect of how individuals frame, the media portray, and policy makers regulate AI. To “problematize” the conversation about AI (Roberge et al.
2020), identifying and accounting for who is involved with its role, purpose and imaginaries within the socio-technical system is then a mandatory step. As sociologist Patrice Flichy (
1995) supported the need to contemplate all subjects, from design to use, engaged in the “frame of use” proper to a specific technology, here we focus on individuals and their levels of awareness, knowledge and emotional response when it comes to AI. Our main argument holds that, as for previous technologies, not only do these levels vary across social groups depending on many socio-demographic factors, but they are also crucial for the diffusion and acceptance of the technology in society.
In this direction, the article is organized in six sections. After the introduction, we build the theoretical framework for our main argument relying on some established concepts such as that of “technology as a practice” (Suchman et al.
1999; Star
1999), “sociotechnical imaginaries” (Jasanoff
2015) and “narratives” (Cave et al.
2020). We illustrate the main existing hopes and fears associated with AI and robots as they refer to “imaginaries” that shape and organize how society sees and interprets technology. The third section gives a detailed description of data and methods that, notwithstanding the limitations, grounds our novel socio-technical approach to an original empirical study. The fourth section illustrates the results of the ad-hoc survey conducted within the University of Bologna. It examines the level of awareness (Sect.
4.1) and opinions (Sect.
4.2) with respect to some relevant socio-demographic variables. The latter are also crucial to analyse both utopian and dystopian narratives that, replete with exaggerations, are telling about individuals’ emotional responses (Sect.
4.3) and perceived likelihood in the future (Sect.
4.4). A deeper dive in the perception of narratives (Sect.
4.5) also offers some relevant insights about the role of gender and competence. The fifth section focuses on two issues, the current state of “AI anxiety” and the underestimated point of view of “non-experts”, that we deem essential to problematize and to sustain the need for a sociological perspective for AI, opening the floor to future comparative research (Sect.
6).
2 The theoretical framework: a call for a socio-technical perspective
An overly narrow definition of the relations between society and technology usually leads towards an attitude of “technological determinism”, where technology unfolds its logic along a unique path, impacting society and determining its outputs (Ogburn
1922; Winner
Alternatives have been played out focusing on the reverse logic: social factors are responsible for shaping technical development and adoption (Mackenzie and Wajcman
1999). In dealing with technicalities, different stakeholders shape technology while solving their conflicts over resources, affordances and power. To a different degree, the constructionist school (Pinch and Bijker
1984) with the varieties of approaches that refer to Science, Technology and Society (STS) studies (Callon
1986) and Actor-network Theory (Latour
2005) investigated the relation between humans and artifacts, and how social groups other than developers are relevant for diffusion and adoption, for successes and failures. Technologies are products of social practices, actions and decisions: from their design to their application and use phases, they come out of the contexts, with their specific institutional and organizational cultures, that elaborate on and imagine what they are needed for and where they might be employed. For our argument, AI might be useful to screen thousands of job curricula or historical photos, alleviating Human Resources’ recruitment process (Dastin
2018) or historians’ classifications and colorization of black-and-white pictures (Goree
2021). Yet, it was easily foreseeable that out-of-context AI systems would come up with biased suggestions, such as recruiting higher percentages of males or steering a middle path on colours (leading to beiges). If thinking about technology as neutral is misleading, considering AI simply the latest tool for easing the trivial or complex problems that individuals, institutions, states and firms have to deal with is worse.
While it is becoming more common to acknowledge that the march of technology is not inevitable, the AI community still needs to take some steps forward. Sartori and Theodorou (
2022), for example, highlight the need for a proper sociological perspective within the AI community. As such, sociological insights enter into the picture connecting inequalities and AI systems in the effort to offset the recently discovered magnifying glass effect in the process of “automating” inequalities (Eubanks
2018). More open technical approaches—such as the HIC—are also speaking up for a fairer AI (based on shared principles) to be transparent and explainable, accountable and contestable (Sartori and Theodorou
2022). Further down the line, a sociotechnical perspective calls for considering values and inequalities, institutional and organizational practices that are embedded in technology.
Another interesting approach turns to the narratives surrounding AI to extrapolate the most common visions among the general public. AI technologies are increasingly present in our daily life, introducing significant changes: navigation systems, chatbots, or music and movie recommendation systems. Their rising presence in everyday life notwithstanding, laypeople find it extremely difficult to understand these systems’ functioning and consequences. Narratives help in this direction thanks to their broader capacity to convey meaning to the social and cultural changes that come along with the technology (Natale
2019; Natale and Ballatore
2020). The study of how individuals approach new technologies and how their perceptions, understanding and expectations originate and unravel offers insights on the multidimensional relation between technology and society.
To a varying degree, all actors involved in the AI process, from start to finish, influence the construction of narratives and their power over the public. Over the centuries, narratives have always played a role, as they did for the press (Eisenstein
1980), the telephone (Fisher
1992; Marvin
1988), the Internet (Mosco
1999; Levy
1984). As such, narratives are a building block of a broader “socio-technical imaginary”, defined as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” (Jasanoff
2015, p. 4).
Visions of desirable futures (or resistance to them) are supported by shared understandings imbued with values and expectations about society, modernity, human agency and technical potentialities. Again, the Internet serves as an example. The distributed, decentralized network infrastructure at the heart of the Internet reflects shared meanings and ethical attitudes across different social groups that overcame sectoral boundaries in the university, the military and the tech industry in the seventies. Values such as freedom, access and liberalism combined into what has been labelled Internet culture or the “Californian ideology” (Levy
1984; Barbrook and Cameron
1995).
Although AI technologies are not comparable to Information and Communication Technologies (ICTs) in terms of the autonomy of use afforded to the final user, the idea of the imaginary is nonetheless relevant to our case. As Jasanoff’s definition informs the conceptual frame that surrounds any technology, it applies to AI and its community as well. By cutting through the dualism of structure and agency, it “combines some of the subjective and psychological dimensions of agency with the structured hardness of technological systems, policy styles, organizational behaviours, and political cultures” (Jasanoff
2015, p. 24). A sociotechnical perspective for AI allows for a deeper investigation of the social, economic and political roots of these imaginaries, disentangling possible conflicts over competing visions. As happened in the past with the Internet (Lesage and Rinfret
2015; Levy
1984), how the AI community envisions its future steps is key for pluralism and democratic accountability (Crawford
2021).
AI developers, policy makers and the media are other key players in the field. Research shows that not only do AI developers pursue specific technical goals, but also their readings might be a source of influence (Greshko
2019; Dillon and Schaffer-Goddard cit. in Cave et al.
2020, p. 8) as much as their collective imaginary (Robertson
2010; Bory and Bory
2016).
Policymakers might also choose among different forms of regulation based on their beliefs and perceptions of AI, diverting public and private funding or affecting governance choices (Natale
2019). As with the Internet, they acted for a long time with an uncritically optimistic view that technical development is necessary and desirable under the auspices of future economic well-being (Wyatt
2003). Another example of the public’s influence comes from rising awareness of robots’ potentialities. Lin et al. (
2008) identify public perceptions as one of the main market forces that are currently impacting the development of military robotics and related regulation. Away from AI, in 2015 the European regulation of Genetically Modified crops changed—widening the powers to restrict or prohibit their production (EU Commission
2015)—not as the result of new scientific data. The change was driven by the increased perception of risk among consumers (Malyska et al.
2016).
It is a well-known fact that the media contributes to the framing of the public (DeFleur and Ball-Rokeach
1989; Cave et al.
2018) by covering selected features of emerging technologies. With regard to AI, the media discussion, spurred since 2014, is quite sophisticated in tone, but not in content (Ouchchy et al.
2020). For instance, when bound to the ethics of AI, it does not go deep into the technicalities of different types of AI but uses specific examples to thematize the topic at large. While specialized writers might lack specific knowledge when it comes to recommendations, Ouchchy et al. (
2020) find a sound interest in the public debate about regulation. For the media, accounting for both negative and positive social implications is key to supporting a balanced framing and portrayal of AI, paving the way to even-handed reporting.
Sometimes, science fiction jumps into the scene with truly accurate descriptions of emerging technological issues, drawing on narratives that have expressed hopes and fears over centuries. Musa Giuliano (
2020) shows how fiction might act as a cautionary tale that could nudge some imaginaries forward or forestall others. When we add “intersectionality” to the picture, there is room for more “intersectional sociotechnical imaginaries” that critically address the dominant narratives and the related AI potentialities (Ciston
2019). To add to this, it is relevant to consider that collectively held and institutionally stabilized visions are publicly discussed and performed by the media as much as they are instrumentally used by firms in trumpeting their notion of technological advancement (e.g., as promoter of the first internal Committee on Ethics, at the end of 2020 Google wound up firing the two most prominent names because of so-called conflicting views
1).
Since “the future is born of the past, it is equally true that the past is also continuously shaped by the future”, as sociologist Alberto Melucci wrote (
1996, p. 12), a sociotechnical perspective for AI offers new tools to link past, present and future. How future technological development is—individually and collectively—imagined, coordinated and aggregated into a vision of the world is worth investigating. For one, “imagined futures” are a way to control the unpredictability of the future for strategic actions in fields concerning money and innovation (Beckert
2016). For another, concepts such as uncertainty (Giddens
1990) and risk (Beck
1992) nicely fit the study of AI technologies. As “expert systems” intervening in the material and social worlds, they are tools for mediating knowledge asymmetries and balancing feelings of emotional anxiety. Tightly associated with the idea of modernity in the Western world, anxiety especially arises from the lack of technical expertise, which, in turn, requires a leap of faith in the technology. To complete everyday life routines, expert systems need to be trusted (Giddens
1991). Calling attention to the relevance of shared meanings and collective imaginaries, future individual expectations and trust is an important addition to the study of technology “put in context”. Narratives, sociologically conceived as “organizing visions” for society (Mosco
2004), offer the bridge to an original contribution to the debate. As they reflect and reproduce traditional lines of social, economic and political inequalities, narratives can be telling about collective and individual knowledge in a society increasingly organized around AI. What follows is a closer look at the AI dominant narratives.
2.1 AI narratives
Relevant works in the field of narratives dig into those regarding AI in the English-speaking countries of the West, looking both inside and outside the world of fiction (Cave et al.
2019; Cave et al.
2020; Cave and Dihal
2019). Narratives can originate from both how the scientific community and the media covers the topic and how books, movies and TV series speculate about technology (Cave et al.
2018). There is a propensity to describe AI in either overly optimistic or overly pessimistic tones, which confirms a long-term trend (Fast and Horvitz
2017). In other words, both utopian and dystopian narratives trace back to the main recurrent hopes and fears connected to technology. Talking about AI, Cave and Dihal identify four main narrative scenarios of hope and four corresponding scenarios of fear. We will briefly describe them in pairs.
Immortality-dehumanization. The first dyad relates to the medical field, in which AI is the cornerstone of new and important areas of research. The extreme evolution of this scenario foresees man conquering Immortality, while the dystopian drift is Dehumanization, where humans lose their essence, ditching values and emotions.
Freedom-obsolescence. Freedom refers to the condition of men liberated from tedious or tiring tasks, be they physical or cognitive. No matter how astonishing technical developments are, this optimistic scenario, where AI and robots totally replace humans in the sphere of work, leaving time to engage only with leisure activities, is far from reality. The far-stretched opposite representation, Obsolescence, is the risk linked to this technical turning point.
Gratification-alienation. The optimistic scenario focused on Gratification sees AI and robots becoming an essential element in the relational sphere, satisfying every possible human desire. The drift to this utopian narrative foresees a world, where machines could satisfy all possible relational desires, leading to a scenario of Alienation, where people prefer interacting with technologies rather than with others.
Dominance-uprising. The last dyad concerns the use of AI in the military field. Identifying new tools that allow nations or communities to dominate and maintain security over a territory is a major hope. The corresponding fear draws on one of the most iconic narratives in Western filmography: the uprising of machines that take physical and cognitive power, escaping human control.
Why are narratives so powerful? As one of the tools reflecting the content of the social imaginary, narratives forge how actors perceive and understand technology in their daily life. When interpreted as practice (Suchman et al.
1999), technology is telling about the underlying relations of production and use. However, the reality depicted in these eight scenarios turns out to be somewhat disconnected from what are, so far, the plausible technical possibilities of AI and its purposes under development (Floridi
2020; Musa Giuliano
2020).
To explain this misalignment, researchers pinpointed how expectations and imagined affordances of AI (Neff and Nagy
2016) influence users’ perceptions and understandings. A well-known case study is Tay, one of Microsoft’s most advanced chatbots. Soon after its launch in 2016, Tay started to interact with Twitter users in such an inflammatory and obscene manner that it was shut down within 24 hours. Tay was the battlefield where designers’ and users’ expectations about what she should do, or how to use her, fully conflicted. As Nagy and Neff (
2015, p. 1) point out, rather than in terms of fixed capacities, we should think in terms of imagined affordances, because they allow us to consider together users’ attitudes, designers’ intentions, and both the materiality and functionalities of the technology. As such, technical and imagined affordances are crucial to understanding narratives, their articulations and possible implications for society at large.
A second accredited explanation for this misalignment concerns the anthropomorphized notions of technologies (Zemčík
2021), driven by the need for social connection, for understanding the relevant technology, and for promoting its acceptance (Salles et al.
2020), especially when robots are the object of research (Katz et al.
2015). Science fiction and the media greatly contribute to this end, bearing and fostering social, political and ethical issues. Overall, in a socio-technical perspective, technologies have capacities that extend to the social realm through interactions, perceptions and actions: they are never neutral.
All in all, the socio-technical perspective drafted here leads us to formulate the research questions we investigate with an ad-hoc survey on awareness and knowledge of AI technology. The key elements of this perspective disentangle the socio-technical imaginaries behind the dominant AI narratives, which, as we will see in Sect.
4, powerfully shape attitudes and emotional responses in the public.
4 Results
Here, we present results on awareness of AI, opinions on robots and AI, emotional responses to the narratives, and the perceived likelihood of the different future scenarios by gender, generation and competence (Sects. 4.1, 4.2, 4.3 and 4.4). A deeper dive into competence and gender (Sect. 4.5) also offers further interesting insights.
4.1 Awareness of AI
First of all, we wanted to assess the level of AI awareness by asking whether participants had heard, read or seen material related to the topic of AI in the last 12 months. In the sample, 76% answered positively, 16% negatively, while the remaining 8% said they were unable to say for sure if what they had read, heard or seen had anything to do with AI.
Table
2 reflects the absence of substantial differences by generation (except a slightly lower percentage among the youngest), while differences emerge with respect to the other two variables. While 85% of men claim they have read about AI in the last year, the percentage drops to just under 70% among women. Unsurprisingly, there is a higher percentage of contact with this topic (90%) among those who have a degree in IT or CS than among both those who have no competence (70%) and those who have “only” attended a university course in the field or are programming-savvy (82%).
Table 2
Contact with AI issues in the last 12 months by gender, generation and competence in IT or CS fields; percentages
| Yes | No | Don't know | N |
Gender |
Women | 69 | 21 | 10 | 3102 |
Men | 85 | 9 | 6 | 2393 |
Generation |
1950–1989 | 78 | 17 | 5 | 1225 |
1990–1996 | 79 | 13 | 8 | 1279 |
1997–2003 | 74 | 17 | 9 | 2991 |
Competence |
No competence | 70 | 19 | 11 | 2960 |
Course or programming | 82 | 12 | 6 | 2052 |
Degree in IT/CS | 90 | 8 | 3 | 411 |
To test the level of general knowledge of AI, we suggested six technologies (Virtual assistants, Smart Speaker, Google Search, Facebook Tagging, Recommendation systems, Google Translate), asking whether they actually use AI.
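A minimal sketch of how the resulting knowledge score might be computed (the column names, answer data and answer key below are hypothetical, and the coding actually used by the authors may differ):

```python
import pandas as pd

# Hypothetical wide-format answers: one column per technology, True = respondent said "uses AI"
answers = pd.DataFrame({
    "virtual_assistants": [True, True, False],
    "smart_speaker": [True, False, True],
    "google_search": [True, True, False],
    "facebook_tagging": [True, False, False],
    "recommendation_systems": [True, True, True],
    "google_translate": [True, True, False],
})

# Answer key: whether each technology actually uses AI (assumed all True here,
# in line with the idea of identifying the presence of AI in the examples)
answer_key = {col: True for col in answers.columns}

# Number of correct identifications per respondent (0-6)
n_correct = sum((answers[col] == key) for col, key in answer_key.items())

# Collapse into the bands used in Table 3 (band labels inferred from the text)
bands = pd.cut(n_correct, bins=[-1, 2, 5, 6],
               labels=["0-2 correct", "3-5 correct", "all 6 correct"])
print(pd.concat([n_correct.rename("n_correct"), bands.rename("band")], axis=1))
```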
As shown in Table
3, we found differences in all three variables considered. With regard to gender, there is a greater, yet not particularly high, ability among men to correctly identify the presence of AI in the proposed examples. This gap narrows within the category “at least three correct answers out of 6”, substantiating the result that, among women who do not score “all correct”, the majority has sufficient knowledge to identify at least half of the systems that use AI.
Table 3
Technologies using AI (Virtual assistants, Smart Speaker, Google Search, Facebook Tagging, Recommendation systems, Google Translate) correctly identified by gender, generation and competence in IT or CS; percentages
| All 6 correct | 3–5 correct | 0–2 correct | N |
Gender |
Women | 41 | 53 | 6 | 3040 |
Men | 47 | 47 | 5 | 2344 |
Generation |
1950–1989 | 53 | 41 | 7 | 1171 |
1990–1996 | 52 | 44 | 4 | 1267 |
1997–2003 | 37 | 57 | 6 | 2946 |
Competence |
No competence | 41 | 53 | 6 | 2891 |
Course or programming | 45 | 51 | 5 | 2017 |
Degree in IT/CS | 59 | 36 | 4 | 406 |
Interesting is the distance between the first two generations (1950–1989; 1990–1996) and the youngest (1997–2003): about 50% of older respondents provide all the correct answers, while among the youngest the percentage falls to 37%. This result suggests that belonging to the
digital native generation—that is, being the first to be born at a time when technologies such as smartphones, social media and AI already existed (Bennet et al.
2008)—does not mean a better understanding or a more proficient use of the technology behind these tools.
Finally, the level of competence marks the biggest difference. If attending a course or being programming-savvy does not seem to make a big difference compared to having no competence at all, a degree in IT or CS fields does: it increases by almost 20 percentage points the share of individuals who correctly identify AI in all the suggested examples. Thus, with regard to general knowledge of AI, competence plays the greatest role. Moreover, competence could also be the factor behind some of the gender and generation differences. Unsurprisingly, there are lower percentages of correct answers among women and the youngest (1997–2003), since these groups register fewer graduates in IT or CS.
4.2 Opinions on robots and AI
Existing literature fully agrees that one of the salient predictors of knowledge and support for AI is gender: all around the world, women have a worse image of AI than men (Eurobarometer
2017). In the attempt to understand the opinions of individuals towards AI, competence with technology is also found to be a good predictor (Zhang and Dafoe
2019).
Overall, our sample reveals a positive attitude towards these technologies. The modal response was “quite positive” for both robots (60%) and AI (58%), followed by “very positive” (20%; 22%): about 4 out of 5 respondents claim to have more positive than negative opinions. 18% are “not very positive” and 2% are “not at all positive”. To further investigate the factors behind positive views, Table
4 shows interesting differences by gender, generation and competence with regard to “very positive” opinions, which are slightly greater for AI than robots, regardless of the type of breakdown.
Table 4
“Very positive” opinion on robots and AI by gender, generation and competence in IT or CS; percentages
| Robots | AI | N |
Gender |
Women | 12 | 16 | 3108 |
Men | 30 | 32 | 2391 |
Generation |
1950–1989 | 24 | 23 | 1224 |
1990–1996 | 20 | 23 | 1282 |
1997–2003 | 19 | 22 | 2993 |
Competence |
No competence | 14 | 15 | 2961 |
Course or programming | 25 | 29 | 2054 |
Degree IT/CS | 42 | 45 | 412 |
Our data displays a gender divide in opinions: a higher percentage of men (30%; 32%) shows a very favourable attitude towards both technologies, compared to women (12%; 16%). Generation seems to have a slight influence only on robots: the percentage of “very positive” opinions among the youngest (1997–2003) and the middle (1990–1996) generations is about 4 percentage points lower than among the oldest. There are no considerable differences in opinions about AI. Relevant, instead, is the role of education in technical fields: the higher the level of competence, the higher the percentage of positive opinions. This is true for both robots and AI.
4.3 Emotional responses to narratives
Telling differences emerge when respondents are confronted with narratives: much variation over concern or excitement arises depending on the scenario. Overall, our data reveals that Freedom and Gratification scenarios are the ones that polarize the least with respect to gender, generation and competence. These narratives register the lowest levels of concern and—together with Alienation—they are perceived as the most likely to happen. These results are in line with Cave and Dihal (
2019), with the only exception being Gratification that scores among the lowest in the UK.
Gender reveals a clear trend: women are more concerned across all narratives (Table
5). These differences are even stronger with reference to specific scenarios such as those addressing fears about AI (Dehumanization, Obsolescence, Alienation and Uprising). When it comes to hopes, a more homogeneous emotional response is recorded with respect to Freedom, Gratification and Dominance with the only exception of Immortality. Not only does the latter elicit more concern than enthusiasm, but it also substantiates gender differences.
Table 5
Emotional responses across scenarios by gender, generation and competence in IT or CS; percentages of concern
| Immortality | Freedom | Gratification | Dominance | Dehumanization | Obsolescence | Alienation | Uprising | N |
Gender |
Women | 77 | 25 | 55 | 79 | 90 | 93 | 97 | 90 | 3040 |
Men | 64 | 21 | 52 | 78 | 72 | 81 | 91 | 81 | 2322 |
Generation |
1950–1989 | 76 | 26 | 56 | 83 | 86 | 87 | 94 | 86 | 1174 |
1990–1996 | 71 | 24 | 57 | 81 | 79 | 85 | 93 | 83 | 1255 |
1997–2003 | 70 | 22 | 51 | 75 | 82 | 90 | 95 | 87 | 2993 |
Competence |
No competence | 75 | 25 | 55 | 79 | 88 | 91 | 96 | 88 | 2898 |
Course or prog | 68 | 21 | 52 | 77 | 76 | 85 | 93 | 83 | 1995 |
Degree in IT/CS | 57 | 19 | 52 | 77 | 69 | 78 | 92 | 83 | 402 |
The variable generation provides considerably less precise indications for understanding which factors are associated with different emotional responses to the considered narratives. In the Italian context, the data suggest that generation does not influence respondents’ attitudes. Keeping in mind that these are small percentage differences, we can only note that fewer people born between 1997 and 2003 declare themselves concerned about the scenarios of hope (Immortality, Freedom, Gratification and Dominance) compared to the oldest generation (1950–1989). On the reverse side, the in-the-middle generation (1990–1996) is the least worried about the scenarios of fear, while the youngest (1997–2003) are more worried. Possible explanations should consider that younger respondents might not clearly distinguish the different technologies behind AI and robots, or that they were socialized to technology through darker and dystopian fiction and movies (such as Black Mirror, Westworld or Ex Machina).
Again, competence does affect the emotional response. In three scenarios (Immortality, Dehumanization and Obsolescence) out of eight, those who have no competence are more concerned than those who have it. Again, the closer the relationship with IT or CS, the lower the percentage of concern recorded. Freedom and Uprising record minor differences (although with a similar trend), while scenarios of Gratification, Dominance and Alienation show the absence of differences.
4.4 Perceived likelihood of the narratives
Investigating the technological future, respondents were also asked whether they consider each scenario likely to happen in the next 15 years (Table
6). Across the scenarios, women are slightly more inclined than men to consider each scenario likely to happen. While negative scenarios polarize the opinions of men and women, there is greater alignment on the positive ones.
Table 6
Narrative’s likelihood in the next 15 years by gender, generation and competence in IT or CS; percentages
| Immortality | Freedom | Gratification | Dominance | Dehumanization | Obsolescence | Alienation | Uprising | N |
Gender |
Women | 5 | 69 | 70 | 65 | 52 | 64 | 72 | 29 | 3043 |
Men | 4 | 61 | 64 | 61 | 40 | 54 | 59 | 18 | 2353 |
Generation |
1950–1989 | 3 | 63 | 61 | 62 | 48 | 61 | 68 | 28 | 1179 |
1990–1996 | 5 | 64 | 67 | 66 | 49 | 62 | 69 | 23 | 1266 |
1997–2003 | 5 | 67 | 70 | 62 | 45 | 58 | 64 | 23 | 2951 |
Competence |
No competence | 4 | 65 | 68 | 64 | 50 | 62 | 69 | 27 | 2903 |
Course or prog | 5 | 64 | 67 | 62 | 46 | 58 | 63 | 21 | 2019 |
Degree in IT/CS | 3 | 70 | 65 | 60 | 35 | 51 | 60 | 19 | 404 |
Immortality is the only one showing no difference. There is general agreement regardless of gender, generation and competence: in the next 15 years, it is unlikely for AI to reach a level of development that leads to eternal life. In the other seven scenarios, there are some small differences by generation. The youngest (1997–2003) perceive the negative scenarios (Dehumanization, Obsolescence, Alienation and Uprising) as less likely compared to older respondents. The opposite happens for Freedom and Gratification.
Looking at competence, there is a constant trend across all scenarios with the exception of two: Immortality and Freedom. The former registers substantial agreement at the lower bound, while the latter is the only scenario where graduates register the highest percentage.
In the other six scenarios, the lower the competence, the higher the percentage of respondents considering those futures achievable. To confirm that competence does have a role in articulating attitudes towards AI, it is worth noting that the difference between high and no competence is at its highest levels in the scenarios of fear.
Our results are in line with Neri and Cozman’s (
2020) analysis of public tweets between January 2007 and January 2018 in English-speaking countries. Most of the risk perception is associated with existential risks, stretching from the foreseen end of humanity to the advent of an Artificial General Intelligence (AGI). With regard to narratives portraying existential risks, our data reveals that 47% of the sample thinks that
Dehumanization is likely to happen. Likewise,
Uprising, which is one of the most discredited scenarios by experts (Stone et al.
2016; Brooks
2017), has been considered plausible by 3 respondents out of 10. These results are particularly intriguing in supporting the misalignment between real technical achievements and collective imaginaries.
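As a consistency check, the pooled figure can be reproduced from the gender rows of Table 6, assuming it is the N-weighted average of women’s (52%, N = 3043) and men’s (40%, N = 2353) responses for Dehumanization:

$$\frac{0.52 \times 3043 + 0.40 \times 2353}{3043 + 2353} \approx \frac{2524}{5396} \approx 0.47$$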
4.5 Narratives by competence and gender
This section offers a deeper dive into the role of competence and gender in the perception of robots and AI.
4.5.1 Proficiency profiles
A different way to evaluate the role of competence is to profile respondents along a more articulated line of expertise, contrasting the most and the least expert. The “Proficient” profile comprises all those who heard, read or saw something about AI, correctly identified all six suggested AI technologies and hold a degree in IT or CS. The “Not proficient” profile collects those who did not hear, read or see anything about AI (or did not know if it concerned this topic), made at least three mistakes out of the six suggested technologies and have no competence in IT or CS.
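For transparency, the classification rule can also be expressed as a short script. This is a minimal sketch assuming a respondent-level table with hypothetical column names (heard_of_ai, n_correct, competence); the authors’ actual coding scheme may differ.

```python
import pandas as pd

def proficiency_profile(row: pd.Series) -> str:
    """Classify a respondent into one of the two extreme profiles described above.

    Assumed (hypothetical) columns:
      heard_of_ai : "yes", "no", or "dont_know" (contact with AI topics, last 12 months)
      n_correct   : number of the six suggested technologies correctly identified (0-6)
      competence  : "none", "course_or_programming", or "degree_it_cs"
    """
    if (row["heard_of_ai"] == "yes"
            and row["n_correct"] == 6
            and row["competence"] == "degree_it_cs"):
        return "Proficient"
    if (row["heard_of_ai"] in ("no", "dont_know")
            and row["n_correct"] <= 3          # at least three mistakes out of six
            and row["competence"] == "none"):
        return "Not proficient"
    return "Neither"  # everyone else falls outside the two extreme profiles

# Example usage on a toy respondent table
respondents = pd.DataFrame({
    "heard_of_ai": ["yes", "no", "yes"],
    "n_correct": [6, 2, 4],
    "competence": ["degree_it_cs", "none", "course_or_programming"],
})
respondents["profile"] = respondents.apply(proficiency_profile, axis=1)
print(respondents)
```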
As shown in Table
7, the
Proficient profile has a better opinion on both AI and robots. In this group, the modal response is “very positive” (52–54%), while among
Not proficient it collects 6% for robots and 8% for AI. Among the latter, almost 60% declares a quite positive opinion, while about 30% feels “not very positive”. Sceptics fall to 5% among
Proficient. Again, results from surveys around the world confirm this trend: technical competence (Zhang and Dafoe
2019) or even just contact with more general sources of information about AI (Eurobarometer
2017) could improve people’s opinion about these technologies.
Table 7
Opinions about robots and AI by Proficiency profiles; percentages
Opinion | Robots: Not proficient | Robots: Proficient | AI: Not proficient | AI: Proficient |
Not at all positive | 3 | 1 | 3 | 2 |
Not very positive | 28 | 4 | 31 | 5 |
Quite positive | 63 | 42 | 59 | 39 |
Very positive | 6 | 52 | 8 | 54 |
N | 177 | 223 | 177 | 223 |
4.5.2 Gender
It is even more interesting to look at Tables
8 and
9. Respondents were asked whether they favour a further future development of AI systems, since we wanted to check to what degree this disposition might influence their perception.
Table 8
Emotional response to the narratives of those “strongly in favour” to AI’s further development by gender; percentages
| Immortality | Freedom | Gratification | Dominance | Dehumanization | Obsolescence | Alienation | Uprising | N |
Excitement |
Women | 30 | 91 | 61 | 31 | 19 | 14 | 6 | 17 | 723 |
Men | 48 | 87 | 61 | 28 | 40 | 29 | 13 | 27 | 1083 |
Concern |
Women | 70 | 9 | 39 | 69 | 81 | 86 | 94 | 83 | 723 |
Men | 52 | 13 | 39 | 72 | 60 | 71 | 87 | 73 | 1083 |
Table 9
“Strongly in favour” to AI further future development among respondents who are concerned and those who are excited about different narratives by gender; percentages
| Immortality | Freedom | Gratification | Dominance | Dehumanization | Obsolescence | Alienation | Uprising |
Excitement |
Women | 32 | 29 | 32 | 34 | 46 | 48 | 42 | 41 |
(N) | (695) | (2291) | (1363) | (647) | (302) | (203) | (96) | (302) |
Men | 62 | 52 | 59 | 59 | 67 | 71 | 66 | 65 |
(N) | (837) | (1823) | (1121) | (520) | (651) | (442) | (216) | (444) |
Concern |
Women | 22 | 9 | 17 | 21 | 21 | 22 | 23 | 22 |
(N) | (2342) | (746) | (1674) | (2390) | (2735) | (2834) | (2941) | (2735) |
Men | 38 | 27 | 35 | 43 | 39 | 41 | 45 | 42 |
(N) | (1485) | (499) | (1201) | (1802) | (1671) | (1880) | (2106) | (1878) |
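To make the construction of Table 9 explicit, the following is a minimal sketch of how the conditional percentages might be computed for a single scenario; column names and values are hypothetical and the actual variable coding may differ.

```python
import pandas as pd

# Hypothetical respondent-level data: gender, emotional response to one scenario
# ("excitement" vs "concern"), and stance on a further future development of AI
df = pd.DataFrame({
    "gender": ["woman", "man", "woman", "man", "woman", "man"],
    "uprising_response": ["concern", "concern", "excitement",
                          "concern", "concern", "excitement"],
    "further_development": ["strongly in favour", "somewhat in favour",
                            "strongly in favour", "strongly in favour",
                            "against", "strongly in favour"],
})

# Within each (gender, emotional response) cell: share of "strongly in favour" and cell size
table9_cell = (
    df.assign(strongly=df["further_development"].eq("strongly in favour"))
      .groupby(["gender", "uprising_response"])["strongly"]
      .agg(percentage=lambda s: round(100 * s.mean()), N="size")
)
print(table9_cell)
```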
In the scenarios of hope (with the only exception being
Immortality), Table
8 reflects the absence of gender differences among those who are strongly in favour of further development of AI. It could be further noticed that Freedom shows a peculiar performance: women are (slightly) more enthusiastic than men. One possible interpretation of this anomaly emerges out of our qualitative data.7 Women express appreciation for the potential aid that robots and AI systems could provide in domestic activities within the household. This is especially true in the case of assisting robots: women, usually responsible for care labour, foresee potential material help.
When we turn to the scenarios of fear, a consistent gender difference is in clear sight. Men who are strongly in favour register fewer concerns about negative future evolutions. Table
6 helps in interpreting this result as men do perceive the scenarios of fear to be less likely to be realized in the next 15 years.
Table
9 highlights that gender does, again, play a role in the perception of AI and in the subsequent attitudes towards its future development. Whether the emotional response is excitement or concern, men favour a further future development of AI systems more than women. Consider both men and women who are concerned about the eight scenarios: among them, the percentages of men strongly in favour of future development are double those of women across all scenarios. Similarly, if we switch to those who are excited, the data register percentages for men almost twice those of enthusiastic women.
Overall, we have a threefold intuition about the gender divide in the perception of AI that deserves further investigation. Women consider each of these “extreme” scenarios more likely than men do (Table
6). Accordingly, the emotional response follows: even looking among those who are strongly in favour, there are higher levels of concern among women (Table
8). The differences in being “strongly in favour” between the worried and the enthusiastic are greater among men than among women, and this holds true across scenarios (Table
9). This suggests that the emotional response influences the attitude of men, keeping at bay non-realistic fearful reactions. These original insights highlight the importance of further research on the gender divide and how it mediates opinions, knowledge and the sociotechnical imaginaries about AI.