Published in: AI & SOCIETY 2/2024

Open Access 23-06-2022 | Original Article

The news framing of artificial intelligence: a critical exploration of how media discourses make sense of automation

Authors: Dennis Nguyen, Erik Hekman


Abstract

Analysing how news media portray A.I. reveals which interpretative frameworks around the technology circulate in public discourses. This allows for critical reflection on the making of meaning in prevalent narratives about A.I. and its impact. While research on the public perception of datafication and automation is growing, only a few studies investigate news framing practices. The present study connects to this nascent research area by charting A.I. news frames in four internationally renowned media outlets: The New York Times, The Guardian, Wired, and Gizmodo. The main goals are to identify dominant emphasis frames in A.I. news reporting over the past decade, to explore whether certain A.I. frames are associated with specific data risks (surveillance, data bias, cyber-war/cyber-crime, and information disorder), and to examine what journalists and experts contribute to the media discourse. An automated content analysis serves for inductive frame detection (N = 3098), dictionary-based identification of risk references, and a network analysis of news writers. The results show how A.I.'s ubiquity emerged rapidly in the mid-2010s and that the news discourse became more critical over time. It is further argued that A.I. news reporting is an important factor in building critical data literacy among lay audiences.

1 Introduction

Understanding public perception of big data and artificial intelligence (A.I.) is a growing research field at the intersection of critical data studies, media studies, and communication science (Hartman et al. 2020; Kennedy et al. 2020; Zubiaga et al. 2018; Bunz and Braghieri 2021). Researchers have come to view citizens’ perspectives as important factors for understanding how datafication and automation become subjects of politicisation in public discourses. The societal stakes are high, as data-driven technology affects diverse domains. Analysing public perception reveals the extent of awareness of these transformations, including associated benefits and risks. It is not unlikely that, aside from personal experiences, news media reporting plays a role in shaping individual views on big data, A.I., technology companies, and data practices. On the one hand, news media cover how data-driven technologies offer innovative solutions and new consumer products, and drive progress. On the other, news media frequently report about the harms of data-driven technology, such as privacy invasion or algorithmic discrimination. News reporting makes trends visible, influences public discourses, contributes to the discursive construction of benefits and risks (Lupton 2013), and shapes technology perception (Pentzold et al. 2019). This links to critical questions about who dominates technology discourses and controls the digital transformation of society (Michael and Lupton 2015; Dourish and Gomez Cruz 2018).

2 The research objective: charting the A.I. media discourse

A few publications offer tentative theorisation and empirical findings for the connection between news media, public discourses, and public perception of big data and A.I. (Bunz and Braghieri 2021; Paganoni 2019; Pentzold et al. 2019; Pentzold and Fischer 2017). These studies emphasise how technologies are given meaning in media reporting and investigate the connection to audiences’ imagination of data and algorithms.
The present study contributes to this nascent research effort by critically exploring the news framing of A.I. through a quantitative content analysis. It charts how A.I. (1) was framed over the past decade; (2) triggered transformations in a growing spectrum of societal sectors; (3) evolved into a policy issue; (4) raised ethical challenges related to societal and individual risks; and (5) became a focal point of journalism that links to questions of critical data literacy among lay audiences (Nguyen 2017; Gray et al. 2018).
The sample includes popular news media outlets based in the USA and UK: The New York Times, The Guardian, Wired, and Gizmodo (N = 3098). These were selected for their international scope and focus on technology. All cover A.I. from different angles and are influential voices in global media discourses on technology. The primary method is an automated content analysis (A.C.A.) for the detection of emphasis frames (Burscher et al. 2016; Chong and Druckman 2007) and risk references, complemented by sentiment analysis and network analysis. Following Burscher et al. (2016: 531), news framing is defined ‘as emphasis in salience of some elements of a story above others’. In the case of A.I., different narratives circulate in media discourses that focus on diverse fields of application (e.g., healthcare, transportation, and commerce), evaluations, future scenarios/predictions, benefits, and risks. The empirical part clusters news articles based on relevant linguistic indicators, and the most prevalent words per cluster allow for detecting frames inductively (Burscher et al. 2016).
The present study thus addresses important empirical questions about public understanding of current tech trends: how news media make sense of A.I., where it comes from, where it has an impact, what benefits are associated with it, and what risks are construed as most pressing.

3 News framing and public discourses on technology

Emerging technologies are subject to framing across multiple domains, such as expert discourses in academia and research, business and management, culture, politics, and governance. The assumed role of news media is to synthesise and connect different expert views for general audiences that are affected by and, directly or indirectly, contribute to the adoption of new technologies as citizens and consumers/users (Groves et al. 2015). This concerns two intersecting dimensions. First, the more prominently an issue is covered in the news, the more salient it becomes in public discourse and among audiences. This relates to agenda-setting theory and how editorial choices influence the visibility and ranking of topics (McQuail and Deuze 2020). Second, how technology is presented may have an impact on audiences’ perceptions and evaluations. The question is then in what exact contexts news media cover technology. For example, is the technology primarily portrayed as an opportunity or as a threat? News framing can shape attitudes, opinions, and behaviour, i.e., have framing effects on individuals (Matthes 2014; Lecheler and de Vreese 2019). However, news media are not neutral observers but articulate perspectives and influence opinions on ideas, issues, and people. They shape perceptions, understanding, and attitudes towards technology (Groves et al. 2015) through agenda-setting and news framing. Media reporting selects and emphasises specific aspects of a technology’s perceived impact and constructs propositions for the meaning of specific issues (Entman 1993; Chong and Druckman 2007; de Vreese 2005).
Previous examples of novel technologies that attracted media attention include nanotechnology (Cutcliffe et al. 2012), cloning (Holliman 2004), genetic modification (Tucker 2012), and digital technology (Guzman and Jones 2014). Research on their news framing shows that news media cover technologies with a focus on economic benefits versus societal, environmental, and individual risks. The debates over the pros and cons of technologies can be polarising by either overhyping potential benefits or exaggerating risks and harms (Groves et al. 2015). Media discourses on emerging technologies often address ethical implications, especially when the benefits and threats, hypothetical or real, of a newly emerging invention are not fully understood by experts, regulators, and the public. Prevailing cultural norms about admissible and undesirable (or even detestable) forms of technology usage determine the dynamics of news media discourses and framing therein across societies. The degree to which technology has tangible beneficial or harmful impacts plays a role in the formation of individual and collective views on ethics, acceptability, and unacceptability (Brossard et al. 2008).
Aside from mainstream news media, social media have become important sites for technology discourses. For example, Zubiaga et al. (2018) conducted a longitudinal study of Twitter debates on the Internet of Things and showed how, alongside business opportunities, ‘trust, privacy, and security’ are posited as key issues. Social media and news media are entangled in a cycle of mutually influential flows of communication. Technological trends and the discussion about their impacts often emerge on social media first, where experts and tech creators outpace news media. Recent examples are non-fungible tokens (NFTs) or the metaverse. Within digital platforms, public conversations centre on questions of value and risk, which are often dominated by business and tech leaders. Social media debates can focus on aspects of technology that are not (fully) captured in mainstream news coverage. They offer the space to “go in-depth” about technology to an extent that the general format of news media does not allow for. For example, while cryptocurrency is a visible, albeit niche, topic in mainstream news, its journalistic coverage cannot keep up with the speed and level of detail in Twitter or Reddit discourses among communities involved in driving its adoption.
That is not to say that news media reporting is merely reactive to social media discourses by selectively including trending issues on news agendas. First, news media are sources of information and active participants in social media discourses themselves. Users share and react to relevant news stories and make them part of their discussions. On an individual level, journalists engage in online communities and co-shape tech discourses and their experiences can flow back into their news content. On an organisational level, news media developed strategies to instrumentalise social media for news curation (Park and Kaye 2019). Second, news media are still essential for indicating broader societal relevance of issues by including them on the public agenda. One prominent example is how “whistle-blowers” such as Julian Assange, Edward Snowden, or Christopher Wylie relied on conventional news media and not social media alone to reach a broader public. The making of meaning happens at diverse sites or “sections” of the wider public sphere, which can no longer be considered as confined to conventional mass media. However, news outlets remain in a strong position to reach the public, synthesise viewpoints from diverse stakeholders in a summarising manner, and influence the allocation of societal attention to specific issues (Swart and Broersma 2017). They may have lost some of their communicative monopolies, but cannot be considered irrelevant for public discourses.

4 Defining A.I.

In short, A.I. is an umbrella term for automated digital systems that classify, recommend, and make decisions via algorithms that operate on data and are able to learn from those data. These take various context-dependent forms, but the underlying data-driven principles share tenets related to data analysis, ‘calculative practices’ (Williamson 2018), data assemblages, and automated, non-human decision-making through data. Advances in A.I. depend on two broader, intrinsically linked factors. On the one hand, there are data, which have become abundantly available to organisations with the means to collect them and harness their potential. For A.I. to perform well, it must be trained on sufficiently large datasets relevant to the task in focus. On the other, algorithms process these data to achieve specific goals. Machine learning (ML) and deep learning (DL) are strategies for making A.I. systems both more accurate and more independent from human supervision. They are concerned with progress in how the underlying algorithmic principles implement more advanced forms of automation. Breakthroughs in ML and DL often make headlines in technology news sections.
Advanced forms of A.I. can adapt to novel situations. As “narrow” A.I.s, they serve clearly defined tasks in business, public administration, research and development, security, and even creativity. Their expanding implementation is often complementary to human labour, but may initiate the disappearance of certain professions. A.I. comes in the form of hidden and visible data-driven solutions that automate processes and may attempt to simulate human behaviour (e.g., chatbots, voice assistants). They increasingly “look at” and “listen to” users (McStay 2018). A.I. is possibly a metaphorically more concrete concept than “big data” from a cognitive-linguistic viewpoint (Lakoff and Johnson 2003), since it is often associated with robots, cyborgs, and other anthropomorphic entities (Darling 2015). A.I. is a versatile technology, and experts hold different perceptions of associated values and risks. Moreover, datafication and automation are abstract concepts; they are everywhere yet difficult to grasp. Definitions are inevitably contestable, and the perceived impact varies between the domains in which A.I. systems operate.

4.1 News coverage of A.I., risk discourses, and critical data literacy

Considering A.I.’s versatility and plurality, news reporting is likely to cover its impact across diverse news sections (e.g., business, politics, technology, and culture). News media play a key role in the making of the meaning of A.I. technology on a societal scale, and they are proactive forces in the discursive construction of perceived benefits and risks (Lupton 2013) associated with the technology. Yet few available studies focus on the news framing of A.I. specifically. In one recent contribution, Bunz and Braghieri (2021) investigate the portrayal of A.I. in the medical context via a qualitative framing analysis that covers four decades of news reporting. They show how A.I. is often portrayed as somehow superior to human experts in their sample of U.K.- and U.S.-based news outlets. They observe how the personification of A.I. has become a ‘trope in news coverage’ (Bunz and Braghieri 2021). A.I. is often perceived as more efficient and more accurate than humans. While there are indeed examples in which A.I. enhances human performance and optimises tasks prone to human error, framing practices that imply A.I. superiority are problematic. First, they may lift A.I. above humans, which obscures the fact that such systems are human-made and reflect their creators’ biases. Second, anthropomorphising A.I. opens the path for deflecting questions about accountability and responsibility away from the creators behind the technology (black-boxing).
Tensions between beneficial and harmful impacts become visible in expert and public discourses that appear polarised. Recurring themes are the oppositions of value through more accurate data insights versus privacy invasion, or benefits from automation versus a loss of human influence, discrimination, and exclusion. These oppositional potentials are discussed along two temporal trajectories: (1) a vaguely defined future in which different scenarios along the greyscale between utopian and dystopian visions are put forward; (2) the present, in which the benefits and risks of A.I. are already observable. Cave and Dihal (2019) show in their explorative study of fictional and non-fictional texts that a certain set of recurring hopes and fears shapes cultural discourses on A.I. The authors identify four dichotomies: immortality vs. inhumanity, ease vs. obsolescence, gratification vs. alienation, and dominance vs. uprising. Though their study does not focus on news framing specifically, these general themes seem to resonate in news discourses where the balance between benefits and risks for human agency is discussed in different societal contexts.
While the exploration of A.I. futures can have a quasi-philosophical notion, (semi-)autonomous technologies already have tangible and diverse impacts on the present. For example, automation of labour can be seen as a form of progress (e.g., creating value, enhancing human performance) or as a threat to social stability (e.g., rendering humans redundant, amplifying human biases). A.I. solutions make processes more efficient, save costs, and support knowledge creation as well as creativity in, e.g., business, healthcare, governance, and culture. However, over the past few years, various risks emerged that indicate limitations to the beneficial impact of A.I. These are primarily data bias and algorithmic discrimination (Strauß 2021), surveillance and privacy invasion (Bu 2021), information disorder/disinformation (Ahmed 2021), and cyber-war/cyber-crime (Owe and Baum 2021; Ring 2021). A.I. tools that scan job applications and reproduce sexist biases or reflect racist tendencies in automated image categorisation are examples of data bias. A.I. systems deployed for monitoring populations and ranking individuals based on extensive data collection serve surveillance. A.I. technologies that create and promote false media reports, “fake news”, deepfakes, etc. relate to information disorder. Cyber-war/cyber-crime describes how A.I. innovations either serve new forms of militaristic or criminal uses or how existing A.I. systems become new vulnerabilities prone to cyber-attacks. This list of A.I. risk categories is not exhaustive, and some occurrences do not easily fit in any of these boxes, but the overview allows for differentiating risks as they become manifest in society.
In a quantitative study that also uses Natural Language Processing (NLP), Crépel et al. (2021; also Crépel and Cardon, 2021) show that news media engage in a critical discourse about these issues. Similar to the present study, the authors chart the most prevalent topics in the A.I. media discourse, before they identify what criticism is conveyed in news content. Their findings show that temporal trajectories indeed matter, with a critical news discourse that centres on undesirable A.I. futures and regulatory interventions in the present. However, while building on an expansive empirical foundation, the analysis does not explicitly link the news framing angle to critical data literacy.
How the news frames A.I. can have a direct connection to lay audiences’ views and attitudes. This connects to data literacy and algorithmic awareness: how much laypeople understand about the impact of datafication and automation, including the potential advantages as well as disadvantages that the use of (personal) data through technology can have (Nguyen 2017). Data literacy is not limited to numeric expertise and knowledge about statistics but encompasses skills in critical thinking about the social–political implications of big data and A.I., especially concerning fairness, inclusion, accessibility, and responsibility (Gray et al. 2018). It concerns conceptual knowledge about how power structures are affected by data-driven technologies and what the consequences are for personal and collective economic, social, cultural, and political well-being against the background of digital transformation. This connects to algorithmic awareness, i.e., an understanding of how automated systems make decisions about individuals both in their private (e.g., recommender systems in dating apps, mortgage applications) and professional lives (e.g., automated systems for selecting job applicants). In a representative survey among individuals living in the UK, Cave et al. (2019) found that while almost half of the respondents showed a relatively accurate understanding of A.I., many held distorted views on the perceived impact of the technology. Critical thinking about these issues can be stimulated by publicly accessible information about datafication and automation. Critical news coverage of A.I. thus directly connects to critical data literacy.
To sum up, A.I. is a technological trend ubiquitous in society, and its perceived omnipresence should resonate in media reporting, which informs audiences about its benefits and risks. The latter point especially relates to general critical literacy among the public. Based on the previous discussion, three research questions guide the empirical analysis. The first research question (RQ1) is: How do news outlets frame A.I.? It is likely that news reports on A.I. cover different societal domains and that this diversity increased throughout the years. Not only did diversity in terms of domains increase, but certain frames also received more attention over time, reflecting trends in society. With several high-profile scandals and growing governmental scrutiny of “big tech”, it is probable that media discourses started to address more risks associated with A.I. RQ2 is: What are the most common risks mentioned in news content on A.I.? It is likely that the overall tone of voice in A.I. reporting changed in recent years and that automation became a policy issue of growing relevance. Finally, considering that A.I. is a complex niche topic that gradually gained wider media attention, it is not unlikely that a limited group of journalists and experts report about/comment on A.I. developments. RQ3, therefore, is: Who are the journalists and opinion leaders that produce A.I.-related news content?

5 Method and data

The present study deploys an automated content analysis (A.C.A.) using term frequency-inverse document frequency (TF-IDF) weighting to determine the importance of words within a large corpus of news articles. K-means clustering (Burscher et al. 2016) was used to assign articles to clusters based on the resulting document features. In addition, a dictionary was built to identify data risk references in the sampled news texts. Latent Semantic Scaling (LSS, Watanabe 2020) served for determining the sentiment in individual news articles. LSS combines a predefined dictionary of sentiment words with latent semantic analysis. It calculates the semantic proximities between words (concepts) in an article and the dictionary of sentiment words. Finally, network graphs of the authors of A.I. articles were created in Gephi.
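To make the TF-IDF/k-means part of this pipeline concrete, a minimal sketch in Python with scikit-learn could look as follows; the toy corpus, the small k, and all parameter choices are illustrative placeholders, not the authors’ actual code:

```python
# Minimal sketch of a TF-IDF + k-means frame-detection pipeline (scikit-learn).
# The toy corpus and k=2 are placeholders; the study reports a 14-cluster
# solution on N = 3098 pre-processed articles.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

articles = [
    "robot car drive autonomous vehicle road",
    "autonomous vehicle driver road test",
    "hospital patient diagnosis cancer treatment",
    "medical care patient disease algorithm",
]

# TF-IDF weighting; on the full corpus the study additionally dropped words
# occurring in fewer than five articles or in more than 40% of articles,
# which would correspond to min_df=5, max_df=0.4 here.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(articles)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# The most heavily weighted terms per cluster centroid provide the basis
# for manually labelling each cluster as an emphasis frame.
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(kmeans.cluster_centers_):
    top = [terms[j] for j in np.argsort(centroid)[::-1][:5]]
    print(f"Cluster {i}: {', '.join(top)}")
```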
The text corpus consists of A.I.-related news content from four different (tech-)news outlets that publish in English: The New York Times, The Guardian, Wired, and Gizmodo. Relevant news content was retrieved via search requests for the keywords “A.I.”, “AI”, and “artificial intelligence”. Only articles in which these terms occurred at least twice in the body text or once in the title or short metadata description were included in the final dataset of 3098 unique items (Table 1). The timeframe for the analysis spans from January 2010 to May 2021.
Table 1 Final text corpus

Outlet | # Articles
Wired | 694
Gizmodo | 701
Guardian | 1249
New York Times | 454
Total | 3098
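A simple sketch of this inclusion rule, assuming articles are stored as dictionaries with hypothetical title, description, and body fields, might look like this:

```python
# Sketch of the stated inclusion rule: an article is kept if the A.I. keywords
# occur at least twice in the body text, or at least once in the title or
# metadata description. Field names are hypothetical.
import re

ACRONYM = re.compile(r"\bA\.?I\b")                 # matches "AI" and "A.I."
PHRASE = re.compile(r"artificial intelligence", re.IGNORECASE)

def keyword_hits(text: str) -> int:
    return len(ACRONYM.findall(text)) + len(PHRASE.findall(text))

def is_relevant(article: dict) -> bool:
    head = article.get("title", "") + " " + article.get("description", "")
    return keyword_hits(article.get("body", "")) >= 2 or keyword_hits(head) >= 1

doc = {"title": "New chips", "description": "",
       "body": "The A.I. boom continues as AI labs release new models."}
print(is_relevant(doc))  # True: two keyword hits in the body
```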
Following Burscher et al. (2016), the articles were pre-processed before the actual clustering based on relevant document features. First, only nouns, verbs, and adjectives within the text were extracted via Named Entity Recognition (N.E.R., using the spaCy library in Python). Stop words were removed, and the remaining words (nouns, verbs, and adjectives) were lemmatised. Second, words that occurred in fewer than five articles or in more than 40% of the articles were excluded. Using the elbow method (Matthes and Kohring 2008; Burscher et al. 2016), a 14-cluster solution was selected for grouping the news articles. The top ten words per cluster provided orientation for labelling each cluster as an emphasis frame (Table 2).
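A sketch of this pre-processing step could look as follows, assuming spaCy’s small English model (en_core_web_sm) is installed; note that the nouns, verbs, and adjectives are selected here via spaCy’s part-of-speech tags, which is one plausible way to implement the filtering described above:

```python
# Pre-processing sketch, assuming spaCy's small English model: keep nouns,
# verbs, and adjectives (selected here via part-of-speech tags), drop stop
# words, and lemmatise the rest before vectorisation.
import spacy

nlp = spacy.load("en_core_web_sm")
KEEP_POS = {"NOUN", "VERB", "ADJ"}

def preprocess(text: str) -> str:
    doc = nlp(text)
    lemmas = [tok.lemma_.lower() for tok in doc
              if tok.pos_ in KEEP_POS and not tok.is_stop and tok.is_alpha]
    return " ".join(lemmas)

print(preprocess("Researchers are training new models that detect rare diseases."))
# prints something like: "researcher train new model detect rare disease"
```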
Table 2 Top ten words per frame cluster

F1 (N = 298): App, Assistant, Software, Voice, Phone, Bot, Virtual, Device, Smart, Home
F2 (N = 112): Chinese, Government, American, Administration, Trade, Security, National, Tech, Country, Foreign
F3 (N = 130): Film, Movie, Story, Trailer, Season, Fiction, Character, Streaming, Robot, Science
F4 (N = 294): Tech, Facial, Government, Public, Recognition, Privacy, Law, Industry, Ethical, Use
F5 (N = 236): Percent, Business, Cloud, Chief, Deal, Executive, Market, Tech, Trade, Financial
F6 (N = 564): Brain, Quantum, Science, Language, Software, Program, Intelligent, Able, Algorithm, Problem
F7 (N = 113): Content, Fake, News, Podcast, Social, Video, Hate, Speech, Deepfake, Platform
F8 (N = 77): Car, Driving, Autonomous, Vehicle, Self, Driver, Driverless, Road, Wheel, Automotive
F9 (N = 100): Game, Chess, Match, Player, Poker, Playing, Champion, Program, Victory, Board
F10 (N = 691): Robot, Show, Science, Fiction, Story, Book, Life, Novel, Series, Art
F11 (N = 93): Automation, Job, Report, Employment, Economy, Productivity, Robot, Percent, Economic, Manufacturing
F12 (N = 101): Health, Medical, Care, Cancer, Patient, Hospital, Disease, Diagnosis, Clinical, Treatment
F13 (N = 110): Military, Autonomous, Contract, Drone, War, Defense, Tech, Letter, Ban, Lethal
F14 (N = 179): Neural, Deep, Network, Brain, Chip, Software, Language, Recognition, Image, Search
Furthermore, the researchers built a dictionary for identifying the four major data risks based on the results of a preliminary qualitative content analysis (N = 180): key phrases that represent a specific data risk were listed as “signals” for an N.E.R. algorithm to pick up in each article (Table 3). The A.C.A. thus combines inductive (frame detection) and deductive (data risk detection) steps.
Table 3 Dictionary approach for N.E.R.

Data risk | Indicators (examples)
Cyber-crime and cyber-war | cyber-attack, cyber-crime
Information disorder | fake news, misinformation
Surveillance | privacy, privacy intrusion
Data bias | discrimination, racism, sexism
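A minimal sketch of such a dictionary-based matcher, restricted to the example indicators from Table 3 (the authors’ full dictionary is larger and not reproduced here), could look like this:

```python
# Dictionary-based risk detection sketch, using only the example indicators
# listed in Table 3; the study's full dictionary is more extensive.
RISK_DICTIONARY = {
    "cyber-crime/cyber-war": ["cyber-attack", "cyber-crime"],
    "information disorder": ["fake news", "misinformation"],
    "surveillance": ["privacy", "privacy intrusion"],
    "data bias": ["discrimination", "racism", "sexism"],
}

def detect_risks(text: str) -> set:
    lowered = text.lower()
    return {risk for risk, cues in RISK_DICTIONARY.items()
            if any(cue in lowered for cue in cues)}

sample = "Critics warn that the system invites privacy intrusion and racism."
print(detect_risks(sample))  # {'surveillance', 'data bias'}
```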
To test the validity of the results of the A.C.A., two researchers manually coded a random sample of 210 articles to determine the dominant emphasis frame (Burscher et al. 2016) as well as the presence of data risk references. Intercoder reliability was measured using Krippendorff’s Alpha and resulted in an acceptable score of 0.82 across the two human coders and the A.C.A. (Hayes and Krippendorff 2007).
The human coders agreed with the A.C.A. results for the emphasis frame detection in 90% of the cases; the scores for the human–computer agreement for the data risk presence were equally high. The final data set includes variables for the title, date of publication, author(s), word count, assigned frame (k-means cluster), data risk references (data bias, surveillance, information disorder, and cyber-crime/cyber-war), and sentiment score per article.
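For illustration, such a reliability check can be computed with the krippendorff package for Python; whether the authors used this particular implementation is not stated, and the category codes below are toy placeholders for the 14 frames:

```python
# Sketch of an intercoder reliability check with Krippendorff's alpha,
# using the `krippendorff` PyPI package; the codes are toy placeholders
# (integer IDs for the 14 emphasis frames), not the study's data.
import krippendorff

coder_a = [1, 4, 4, 10, 6, 2, 10, 5, 14, 6]   # frame codes from coder A
coder_b = [1, 4, 3, 10, 6, 2, 10, 5, 14, 6]   # frame codes from coder B

alpha = krippendorff.alpha(reliability_data=[coder_a, coder_b],
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```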

6 Results

6.1 News frames in A.I. reporting

Media interest in A.I. steadily increased over the past decade and nearly quadrupled from 2010 to 2015 (Fig. 1). While the overall number of articles covering A.I. is likely to be comparatively small in the overall daily output of most news outlets, the topic visibly attracted attention in just a few years. News content on A.I. issues also became longer and grew by ca. 39% from 2010 (M = 1003 words) to 2020 (M = 1394 words). The number of articles continued to increase until reaching a peak in 2018, before it started to decrease. This may indicate a “hype-cycle” for A.I., not unlike the media buzz around big data a few years earlier.
The cluster analysis yielded 14 distinct frames (Table 4) that represent diverse aspects of A.I.’s impact on society over time. A.I.’s multi-domain/multi-system relevance increased rapidly between 2013 and 2014 (Fig. 2). Before that, the technology was mostly perceived as a subject of research or a concept in culture; only later did it find concrete practical application in different sectors.
Table 4 Frames in A.I. news reporting

Frame | N
Automated Services & Products | 298
China, International Politics & A.I. Competition | 112
A.I. in Entertainment | 130
A.I. & Governance | 294
A.I. & Business | 236
A.I. & Research | 564
A.I., Public Discourse & Information Disorder | 113
Autonomous Vehicles & Robots | 77
A.I. & Games | 100
Robots in Culture | 691
Automation of Labour | 93
A.I. & Healthcare | 101
A.I. Weapons | 110
A.I. Technology Development | 179
Total | 3098
Several broader themes emerge here: A.I. is often covered with respect to consumer-centric technologies (Automated Services & Products), its potential for broader transformations in the business sector, and economic growth (A.I. Business & Finance). This is unsurprising, given the business interests tied to the development and rapid distribution of A.I. solutions across society and expected returns on investment. Automation of Labour is a more recent frame that centres on the economic pros and cons of A.I. technology.
The political implications of A.I. are frequent news topics that cluster into distinct news frames: China, International Politics & A.I. Competition describes how the U.S.–China rivalry attracted media attention around 2016, partly connected to claims about a new “arms race” or an “A.I. cold war”. A.I. & Governance covers news content on governmental–political interventions and debates on regulation. A.I. was not considered an issue that needed much governmental or regulatory intervention before 2015. However, a few years later, the issue was more frequently at the centre of a governance framing. This development gained visibility as a news frame especially after 2017, at which point A.I. coverage peaked in domain diversity (Figs. 2 and 3). The number of articles in the governance frame grew from 2 in 2012 to 84 in 2019. The news frame A.I., Public Discourse & Information Disorder consists of articles on the rise of bots in social media platforms, “fake news”, “deepfakes”, and the automation of the digital public sphere. A.I. Weapons contains news content that specifically addresses militaristic uses of automation in novel defense systems. Both developments also connect to questions of A.I. governance.
Other recurring themes in A.I. news coverage concern tech development and research, both as a subject of and driver for scientific progress. A.I. Technology Development concerns the advancement of A.I. through innovations in both software and hardware, including infrastructure as well as methods (e.g., cloud computing, natural language processing). This takes place in the private and public sectors (e.g., universities) or at the intersection of both. A.I. & Research describes news articles with a focus on fundamental research on A.I. technology but also on how the technology finds application in other scientific/academic disciplines.
A.I. & Healthcare and Autonomous Vehicles & Robots emerged as distinct news frames that cover innovations in specific fields of application, which indicates that these two sectors are currently considered as particularly important for tangible impacts of A.I. transformations. A.I. & Games connects to a considerable extent to research as well, as this frame primarily includes news articles on how A.I. gained capabilities in beating humans at diverse games (e.g., Go or chess). However, it further includes a few articles that discuss A.I. as a subject in video games. The remaining news frames centre around the impact of A.I. on culture and society: A.I. in Entertainment concerns how A.I. changes the creative industries, such as film and music. Robots in Culture is a more diverse frame that includes articles on A.I. as a subject of the arts or as a topic of cultural discourses but also as a general presence throughout society.
The 14 frames fall into four broader groups or “meta-frames” (Table 5): A.I. & Politics, A.I. & Economics, A.I. & Research/Science, and A.I. in Society & Culture. Simply put, A.I. discourses make sense of the technology with respect to scientific progress, economic prospects, political challenges (including international competition), and questions about human–machine relationships in diverse contexts. These are not closed silos, but have fuzzy borders, and several of the emphasis frames would fall into the intersection of two or more of these meta-frames. For example, A.I. & Healthcare-related news content often connects science-related progress in the use of A.I. for medical purposes with its economic potential (e.g., return on investment, reducing costs). The same applies to Autonomous Vehicles, A.I. Weapons, or A.I. Technology Development.
Table 5 Meta-frames

A.I. & Politics (N = 629): China, International Politics & A.I. Competition; A.I. & Governance; A.I., Public Discourse & Information Disorder; A.I. Weapons
A.I. & Economics (N = 704): Automated Services & Products; A.I. & Business; Autonomous Vehicles; Automation of Labour
A.I. & Research/Science (N = 844): A.I. & Research; A.I. & Healthcare; A.I. Technology Development
A.I. in Society & Culture (N = 921): A.I. & Entertainment; A.I. & Games; Robots in Culture
To sum up, news media captured the development of A.I. from a niche topic to a ubiquitous presence across society. A.I. coverage rapidly diversified over a brief time span. This reflects the speed at which the technology triggered transformations in diverse domains, which in turn also raised questions of governance and regulation. Oftentimes, debates over governmental interventions are reactions to perceived risks that come with the use of A.I.

6.2 Data risks in A.I. news framing

Benefits associated with A.I. are often of an economic nature, i.e., connected to financial gains resulting from increased efficiency and productivity. Other perceived benefits place emphasis on increased convenience for consumers, better management of processes and resources (e.g., to protect the environment), liberation from mundane and difficult tasks (including driving), and enhancement of human capacities in generating knowledge and creativity. This is supported by the results of the sentiment analysis, which show that words related to efficiency, progress, and support dominate the positive terms (Fig. 4). However, the potential of A.I. is accompanied by risks that are frequently pointed to in news reporting. Risks are discursive constructs, and their perception as well as evaluation are highly subjective, context-dependent processes (Lupton 2013). Yet expert discourses in the news play a key role in constructing risks and creating awareness of these risks in the first place (ibid.). The A.I. news discourse became more sensitive to risks over time (Fig. 3): 54.9% (1761) of all articles include a reference to one of the four most prevalent data risks (Fig. 5).
While diverse types of risks emerged in the A.I. discourse, issues related to data bias and algorithmic discrimination occur most frequently. Automated decision-making systems are in use in different contexts where they have a direct impact on people. In healthcare, they are used to detect illnesses. In the security domain, they categorise individuals according to some assigned risk potential. In governance, they decide on, e.g., taxation classes or eligibility for social welfare.
In these contexts, data bias is a potential risk for certain demographic groups. For example, A.I. in healthcare has been shown to be less reliable for ethnic minorities due to different biases in the sampling and training data for algorithms. A different type of data bias occurs when automated monitoring and decision-making single out specific groups that are overrepresented in databases, to their disadvantage (e.g., ethnic minorities and predictive policing trained on biased historical datasets). Data bias connects to questions of racism, sexism, ageism, ableism, and other forms of exclusion or discrimination. “Bias” also emerged as one of the most frequent negative words in the sentiment analysis (Fig. 6).
Surveillance and privacy invasion is another prevalent data risk in the A.I. discourse that received media attention over the past decade. The main concern is that automated systems rely on large volumes of data and that the processes of expanding data collection pose a risk to individual privacy (e.g., facial recognition systems). While the issue of surveillance is a visible risk theme in A.I. news framing, it has slightly less relevance than the manifold challenges of data bias. One potential reason for this is how A.I. is proposed as a technical solution in general: to use autonomous data-driven systems for classification, ranking, and prediction. However, data are needed to achieve these goals, which links back to questions of privacy.
Forms of cyber-crime and/or cyber-warfare, in which A.I. technology becomes a tool for malicious activities or the target of digital attacks, form the third-largest group of data risks. This includes A.I. in hacking attacks, bot attacks, etc. Finally, A.I.’s negative impact on public discourse, in the form of bots in social media and/or as a source of “fake news”, “deepfakes”, and hate speech, is the fourth-largest risk category.
The frequency of data risk references increased along a similar trajectory as the diversification of societal domains affected by A.I. transformations (Fig. 7). Certain news frames also refer to specific data risks more frequently than others (Table 6). For example, A.I. & Governance-related articles often mention surveillance and/or data bias as risks of automation, while for China, International Politics & A.I. Competition, cyber-war/cyber-crime and surveillance are the most frequent data risks. The news coverage of data risks associated with A.I. is indeed context-dependent, as the more specific issues and relationships between stakeholders in each frame lead to the emergence of specific data risks. While all risks are present in all frames, the intensity with which they are addressed differs noticeably between contexts. It appears that news stories focusing on technological advances and fundamental A.I. research thematise risks less often than those that cover contexts in which A.I. technology finds concrete application. This may indicate that potential downsides of new tech developments are not fully foreseeable or appear too distant before they are introduced as products and services.
Table 6 Frames and data risks (share of articles per frame; counts in parentheses)

Frame | N | Cyber-war and cyber-crime** | Information disorder** | Surveillance** | Data bias**
Automated Services & Products | 298 | 12.8% (38) | 4.0% (12) | 34.6% (103) | 19.5% (58)
China, International Politics & A.I. Competition | 112 | 55.4% (62) | 8.0% (9) | 55.4% (62) | 39.3% (44)
A.I. in Entertainment | 130 | 18.5% (24) | 3.8% (5) | 8.5% (11) | 20.8% (27)
A.I. & Governance | 294 | 15.3% (45) | 13.9% (41) | 59.5% (175) | 69.4% (204)
A.I. Business & Finance | 236 | 23.7% (56) | 16.1% (38) | 23.7% (56) | 30.1% (71)
A.I. & Research | 564 | 18.8% (106) | 7.3% (41) | 16.1% (91) | 30.7% (173)
A.I., Public Discourse & Information Disorder | 113 | 15.0% (17) | 57.5% (65) | 22.1% (25) | 23.0% (26)
Autonomous Vehicles | 77 | 23.4% (18) | 3.9% (3) | 14.3% (11) | 18.2% (14)
A.I. & Games | 100 | 14.0% (14) | 1.0% (1) | 3.0% (3) | 11.0% (11)
Robots in Culture | 691 | 22.3% (154) | 15.1% (42) | 14.5% (100) | 22.4% (155)
Automation of Labour | 93 | 11.8% (11) | 1.1% (1) | 15.1% (14) | 39.8% (37)
A.I. & Healthcare | 101 | 13.9% (14) | 2.0% (2) | 40.6% (41) | 29.7% (30)
A.I. Weapons | 110 | 27.3% (30) | 9.1% (10) | 38.2% (42) | 55.5% (61)
A.I. Technology Development | 179 | 17.9% (32) | 5.0% (9) | 12.3% (22) | 11.2% (20)
TOTAL | 3098 | 621 | 279 | 756 | 931

Bold in the original table marks the most frequent data risk per frame
**p < 0.001
The news coverage of data risks connects to questions of ‘data justice’ (Dencik et al. 2019; Taylor 2017): the critical discussion of ethical challenges in the widespread adoption of datafication and automation, especially with respect to individual rights, data ownership, discrimination, and exclusion. These are new challenges, and the number of articles addressing them started to increase in recent years. It remains to be seen how this trend evolves, but given the widespread adoption of A.I. solutions, future developments are likely to raise more questions about accountability, responsibility, ownership, and fairness. Critical news coverage contributes to building public understanding of the social and political stakes involved in the rise of A.I. The results from the sentiment analysis indicate that news outlets have become more critical over time (Fig. 8). However, there are differences between the outlets with respect to the overall sentiment. Wired (M = 0.08, SD = 0.81) and Gizmodo (M = 0.32, SD = 1.2) are slightly more positive than the NYT (M = − 0.15, SD = 0.85) and the Guardian (M = − 0.02, SD = 1.01) [F(3, 3162) = 37.89, p < 0.001]. This would indicate that mainstream outlets are slightly more critical/negative about A.I. than their tech-focused counterparts. A potential explanation is the underlying political leaning of a news outlet. The NYT and the Guardian are considered more progressive and left-leaning in the U.S. and British media landscapes, which might explain a more critical outlook on tech businesses. Therefore, while they are part of the media mainstream, they should not be considered fully representative of it, since centre-right-leaning outlets may differ in their framing of A.I. along the benefits-risks spectrum.
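The sentiment scores themselves come from Watanabe’s (2020) LSS implementation. Purely to convey the principle behind such scores, a rough Python approximation (word embeddings from an SVD of the document-term matrix, word polarity as proximity to seed sentiment words, document score as the weighted mean of word polarities) might look as follows; the documents and seed words are toy placeholders, and this sketch is not the original LSS software:

```python
# Rough illustration of the Latent Semantic Scaling idea, not the original
# LSS implementation: embed words via SVD of the TF-IDF matrix, derive a
# polarity per word from its proximity to seed sentiment words, and score
# each document as the frequency-weighted mean of its word polarities.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize

docs = [
    "ai improves care and supports progress in hospitals",
    "ai surveillance harms privacy and threatens rights",
    "new ai chip supports faster research and improves tools",
    "biased ai harms minorities and threatens fairness",
]
pos_seeds = {"improves", "supports"}   # toy seed words
neg_seeds = {"harms", "threatens"}

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)                    # documents x words

svd = TruncatedSVD(n_components=3, random_state=0)
word_vecs = normalize(svd.fit_transform(X.T))    # one dense vector per word

vocab = list(tfidf.get_feature_names_out())
pos_idx = [i for i, w in enumerate(vocab) if w in pos_seeds]
neg_idx = [i for i, w in enumerate(vocab) if w in neg_seeds]
axis = word_vecs[pos_idx].mean(axis=0) - word_vecs[neg_idx].mean(axis=0)

polarity = word_vecs @ axis                      # per-word sentiment score
doc_scores = (X @ polarity) / X.sum(axis=1).A1   # weighted mean per document

for doc, score in zip(docs, doc_scores):
    print(f"{score:+.2f}  {doc}")
```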

6.3 The authors of A.I. news content

At face value, the group of journalists and commentators writing about A.I. appears diverse. In total, 1098 individuals with different backgrounds in journalism, technology, academia/science, and governance produced the analysed text volume. While male authors form a clear majority, their female counterparts’ presence is noticeable (Fig. 9). Despite these numbers, a closer look reveals that a handful of “A.I. alpha journalists” dominate news coverage in the respective news outlets. For example, Cade Metz is one of the most prolific A.I. journalists at both Wired and The New York Times, where he wrote close to 200 articles on A.I.’s impact in diverse fields. Other examples of individual contributors with a noticeably high output are Tom Simonite (Wired) and George Dvorsky (Gizmodo). The Guardian appears somewhat more balanced, with few authors contributing more than 20 articles individually (e.g., Alex Hern or Ian Sample).
Figure 10 illustrates the network of authors across the four sampled outlets. It shows that several write about A.I. for more than just one outlet and that there is a dense network, especially between the three U.S.-based news brands. There are signs of a transfer of expertise between the NYT and Wired, with the latter also sharing connections to Gizmodo. However, only a handful of A.I. writers published for both the U.K.-based Guardian and any of its American counterparts. The U.K. context has its own set of A.I. experts who cover the trend from a regional perspective. More research into the social dynamics of the A.I. discourse is needed, but the findings imply that A.I. reporting depends on A.I. experts who write about the issue across diverse domains and that local-regional journalistic cultures matter in the reporting of the global A.I. trend.
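As a sketch of how such an author-outlet network can be assembled and handed to Gephi (assuming the networkx library; the byline list below is a toy placeholder, not the study’s data):

```python
# Sketch: build a bipartite author-outlet graph from byline records and
# export it as GEXF, a format Gephi opens directly. The byline list is a
# toy placeholder; the study's network covered 1098 authors.
import networkx as nx

bylines = [
    ("Cade Metz", "Wired"), ("Cade Metz", "New York Times"),
    ("Tom Simonite", "Wired"), ("George Dvorsky", "Gizmodo"),
    ("Alex Hern", "Guardian"), ("Ian Sample", "Guardian"),
]

G = nx.Graph()
for author, outlet in bylines:
    G.add_node(author, kind="author")
    G.add_node(outlet, kind="outlet")
    if G.has_edge(author, outlet):
        G[author][outlet]["weight"] += 1      # weight = number of articles
    else:
        G.add_edge(author, outlet, weight=1)

nx.write_gexf(G, "ai_author_network.gexf")
```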

7 Discussion

Concerning research question 1 “How do news outlets frame A.I.?”, the findings show that A.I. is covered in a diversity of contexts, reflecting the versatility and ubiquity of this technological trend. The 14 distinguishable frames that dominate the A.I. media discourse can be grouped into four larger themes: A.I. & Politics, A.I. & Economics, A.I. & Research/Science, and A.I. in Society & Culture. A.I. news framing shifted from portraying the technology as a concept or research subject and topic of science fiction to focusing on the concrete economic, social, cultural, and political impacts in the decade between 2010 and 2020. It evolved from a niche topic into a mainstream issue in just a few years. The economic implications and politicisation of A.I. connect to questions of governance. News reporting on A.I. plays a key role in making these transformations visible in public discourse, as it monitors the societal impacts of wide-scale technology adoption. However, there are noticeable contrasts. While A.I. adoption comes with social and political challenges, it is often associated with value-creation. Tensions between the promises of A.I. and potentially negative effects of tech disruption shape the media discourse, though the perceived balance between both poles differs per context. News articles further differ with respect to the level of detail in which the technology is discussed, which ranges from merely mentioning A.I. as a related keyword to in-depth reporting. A.I. reporting is more than just another sub-topic in technology dossiers, but its multi-domain relevance poses a potential challenge for contextualising and explaining its effects with respect to a given development and/or event.
Regarding research question 2 “What are the most common risks mentioned in news content on A.I.?”, data bias stands out as the most frequently mentioned risk associated with the technology. This mainly concerns forms of discrimination in algorithmic systems that put certain demographic groups at a disadvantage. Since A.I. often serves for automated categorisation and ranking tasks, questions of fairness, inclusion/exclusion, and biases inevitably arise. News media here sometimes provide quite thorough investigations into how the problem has a tangible, negative impact on people in, e.g., culture, healthcare, or security. Specific risks seem to co-occur more frequently with certain frames than others. The concrete association depends on the stakeholders and (potential) conflicts in each context. For example, news on A.I. and global politics involving China refers more frequently to cyber-war/cyber-crime issues than news on A.I. governance, which is more concerned with surveillance and data bias. A.I. news reporting contextualises the impact of the technology, which underlines the relevance of domain knowledge on the part of journalists. Furthermore, reporting on A.I. has become more critical over time, both with respect to increased risk references and tone of voice. News media act as critical observers of technology trends and can contribute to general data literacy among their audiences as parts of the wider public. Indeed, the news frequently covers data scandals and technology abuses, which can shape individual recipients’ understanding of the A.I. transformation. However, while this is an important factor for critical public discourses and informed individual decision-making about technology (where this is possible), it also raises the question of how news outlets balance this with reporting on the benefits of the technology in the future. Again, more context-specific benefit–risk assessments are likely.
With respect to research question 3 ‘Who are the journalists/opinion leaders that produce A.I.-related news content?’, the results show that a relatively large number of journalists and other commentators write about the issue, but also that a handful of individuals account for a considerable number of articles. To some extent, A.I. news reporting still forms a journalistic niche that a few experts appear to dominate. These frequently address the role of A.I. in diverse contexts. However, despite commonalities in how A.I. is being utilised by different organisations across society, the impact is very much context-dependent. It is not unlikely that journalists who have not previously covered A.I. technology per se will need to do so in the (near) future. In other words: what once was a relatively small expert domain may become part of most journalists’ portfolios, as A.I. leaves a mark in sports, finance, culture, politics, and so forth. This raises the question of how well prepared the profession is to make sense of this complex technology and its manifold effects. Training in critical data literacy then becomes an increasingly important aspect of journalists’ skills and competencies. To make sense of the technology’s effects, journalists need a profound conceptual understanding of A.I. This does not necessarily include technical knowledge but centres on capabilities in explaining the technology to lay audiences and critically reflecting on its impact along the benefit–risk spectrum.

8 Conclusion

The present study charted the news framing of A.I. between 2010 and 2021, with a focus on globally renowned news outlets that publish in English. Via an automated, inductive content analysis, 14 dominant frames were identified, which can be further grouped into four ‘meta-frames’ that highlight the economic, cultural, social, and political impacts of A.I. in digital society. A.I. news reporting rapidly evolved from a niche to a mainstream issue. A.I. is a news topic in transition. With the technology entering a growing number of societal domains, the issue becomes part of diverse journalistic departments. A.I. reporting directly connects to critical data literacy in two dimensions. First, critical A.I. news reporting can contribute to a better public understanding of datafication and automation on a conceptual level: what changes they trigger, who controls these developments, and who benefits from them. The widespread adoption of A.I. raises inherently political questions, and news media are important sources of broadly accessible information about these issues. Second, this presupposes that journalists themselves develop a sufficient conceptual understanding of A.I. to report critically about its impact. In that sense, critical data literacy becomes an important component of journalistic training, but further research is needed.
There are several limitations to this study. First, while the quantitative findings of the A.C.A. allow for inductively charting the A.I. discourse and deriving some generalisations about trends therein, the methodological approach is not per se suitable for a deeper analysis of recurring metaphors and less “manifest” discursive practices in news reporting for lay audiences. The present findings would need to be combined with additional qualitative investigations, for example in the form of critical discourse analysis. There is also room to zoom in on specific time periods and to further unearth discursive practices and trends in the history of the A.I. discourse. Second, the sample is limited to English-language, U.S.- and U.K.-based news outlets. A broader, more international sample is likely to reveal cultural differences in the news framing and public perception of A.I. Third, the results are mostly descriptive, and more research into audience perceptions and journalists’ views on covering A.I. is needed. Furthermore, future research must expand the critical-systematic analysis to social media as crucial sites for public discursivity on technology. While news media are important, they are only one part of contemporary public spheres, and social media platforms profoundly re-shape the dynamics of societal discourses.
Nevertheless, the findings provide a basis for further investigating how news media observe, comment on, and are affected by the rise of A.I. technology. This concerns research into framing practices, but also framing effects, and journalists’ perceptions of their own practices with respect to critical data literacy.

Declarations

Conflict of interest

The authors have no conflicts of interest to report.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Literature
Brossard D, Scheufele DA, Kim E, Lewenstein BL (2008) Religiosity as a perceptual filter: examining processes of opinion formation about nanotechnology. Public Underst Sci 18(5):546–558
Burscher B, Vliegenthart R, de Vreese C (2016) Frames beyond words: applying cluster and sentiment analysis to news coverage of the nuclear power issue. Soc Sci Comput Rev 34(5):530–545
Cave S, Dihal K (2019) Hopes and fears for intelligent machines in fiction and reality. Nat Mach Intell 1(2):74–78
Chong D, Druckman JN (2007) Framing theory. Annu Rev Polit Sci 10:103–126
Crépel M, Cardon D (2021) Criticism and prophecy in media coverage of AI. In: Annual Meeting of the Society for Social Studies of Science (4S), October 2021, Toronto
Crépel M, Do S, Cointet J-P, Cardon D, Bouachera Y (2021) Mapping AI issues in media through NLP methods. In: CHR2021: Computational Humanities Research Conference, November 2021, Amsterdam
Cutcliffe SH, Pense CM, Zvalaren M (2012) Framing the discussion: nanotechnology and the social construction of technology: what STS scholars are saying. NanoEthics 2:81–99
Darling K (2015) “Who’s Johnny?” Anthropomorphic framing in human-robot interaction, integration and policy. In: Lin P, Bekey G, Abney K, Jenkins R (eds) Robot ethics 2.0. Oxford University Press, Oxford
Dencik L, Hintz A, Redden J, Trere E (2019) Exploring data justice: conceptions, applications and directions. Inf Commun Soc 22(7):873–881
de Vreese CH (2005) News framing: theory and typology. Inf Des J Doc Des 13:51–62
Entman RM (1993) Framing: toward clarification of a fractured paradigm. J Commun 43(4):51–58
Groves T, Figuerola CG, Groves MA (2015) Ten years of science news: a longitudinal analysis of scientific culture in the Spanish digital press. Public Underst Sci 25(6):691–705
Hayes AF, Krippendorff K (2007) Answering the call for a standard reliability measure for coding data. Commun Methods Meas 1:77–89
Holliman R (2004) Media coverage of cloning: a study of media content, production and reception. Public Underst Sci 13:107–130
Lakoff G, Johnson M (2003) Metaphors we live by. University of Chicago Press, Chicago
Lecheler S, de Vreese C (2019) News framing effects. Routledge, London and New York
Matthes J (2014) Framing. Nomos, Baden-Baden
Matthes J, Kohring M (2008) The content analysis of media frames: toward improving reliability and validity. J Commun 58:258–279
McQuail D, Deuze M (2020) McQuail’s media & mass communication theory. Sage, London
Michael M, Lupton D (2015) Toward a manifesto for the “public understanding” of big data. Public Underst Sci 25:104–116
Nguyen D (2017) Europe, the crisis, and the internet: a web sphere analysis. Palgrave Macmillan, London
Paganoni MC (2019) Framing big data: a linguistic and discursive approach. Palgrave Macmillan, London
Pentzold C, Brantner C, Fölsche L (2019) Imagining big data: illustrations of ‘big data’ in US news articles, 2010–2016. New Media Soc 21(1):139–167
Taylor L (2017) What is data justice? The case for connecting digital rights and freedoms globally. Big Data Soc 1–14
Tucker C (2012) Using social network analysis and framing to assess collective identity in the genetic engineering resistance movement of Aotearoa New Zealand. Soc Mov Stud 12(1):81–95
Williamson B (2018) Big data in education: the digital future of learning, policy, and practice. Sage, London
Metadata

Publisher: Springer London
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI: https://doi.org/10.1007/s00146-022-01511-1
