As artificial intelligence (AI) becomes increasingly embedded in everyday life and rises on the political agenda, research on how the public thinks and feels about it becomes increasingly relevant. Drawing on cognitive appraisal theory, which holds that emotions arise from people's interpretations of external phenomena, this study investigates whether Finnish non-experts' conceptions of AI relate to their emotional orientations. Prior to a public AI lecture series, 141 Finnish adults completed a survey. Open responses to “What is AI?” and “How does AI work?” were abductively coded into eight features spanning technical, capability, and relational conceptions. Emotional orientations were measured with four Likert items (excitement, fear, personal relevance, trust) and analyzed using hierarchical clustering. Logistic regression tested whether conception features predicted profile membership. The most common conception features were Autonomous (72%), Data-driven (66%), Anthropomorphic (53%), Human-controlled (40%), and Rule-based (31%). Additionally, 37% expressed uncertainty. Four emotional profiles emerged: Optimists (22%), Cautious (28%), Disengaged (26%), and Indifferent (24%). Conceptions only weakly predicted orientations: anthropomorphic and probabilistic framings increased the likelihood of Optimist membership, but model accuracy was modest. The findings underscore the need for AI literacy initiatives that address both diverse misconceptions and emotional diversity. Limitations include convenience sampling in one country and challenges in coding anthropomorphism, which is partly embedded in technical terminology. Future work should further explore the role of anthropomorphic and probabilistic conceptions in shaping emotional orientations towards AI.
1 Introduction
Public interest in artificial intelligence (AI) has surged since the launch of large language model-based conversational agents like ChatGPT in late 2022 (OpenAI 2022). As AI becomes increasingly visible in everyday life, understanding how the public perceives and relates to these technologies is critical. Zhang (2024) outlines two main reasons why public opinion matters for AI governance: (1) normatively, because the public is a key stakeholder in shaping AI's future, and (2) pragmatically, because attitudes towards AI can influence adoption, regulation, and political contestation, as has been the case with nuclear energy and genetically modified organisms.
With the current pace of technological development, public understanding of AI does not always keep pace with the science. Public understanding is often shrouded in ambiguity and misconceptions, with the term ‘AI’ representing a broad spectrum of technologies (Haenlein and Kaplan 2019) that often stop being perceived as AI once they become sufficiently well established (Hofstadter 1999). Moreover, public perceptions are often emotionally charged, ranging from hopeful or utopian expectations to anxious skepticism and dystopian fears (Cugurullo and Acheampong 2024; Hirsch-Kreinsen 2024; Hick and Ziefle 2022).
According to cognitive appraisal theory (Lazarus 1991), emotional responses do not arise directly from external stimuli, but from how individuals interpret and evaluate them. In the context of AI, people's conceptions of what AI is and what it does can be seen as cognitive appraisals that shape their emotional orientations towards it. These emotional orientations, in turn, may influence both how individuals choose to use AI in their own lives and how they perceive its broader acceptability in society (see Fig. 1).
Fig. 1
Suggested causal model regarding public perceptions of AI
In this paper, we explore how non-experts in Finland conceive of AI and how these conceptions relate to their emotional orientation towards AI. A more nuanced understanding of how laypeople make sense of AI is essential for developing interventions to improve AI literacy, for ensuring that diverse public voices are included in decisions about how AI is governed, and for designing meaningful human–AI interactions.
1.1 Definitions, conceptions, and public understanding of AI
The difficulty of rigorously defining “AI” is reflected in the variety of expert definitions found in the literature (Table 1). Bearman and Ajjawi (2024) suggest categorizing different conceptualisations and definitions of AI into technical definitions (“What AI is”), capability definitions (“What AI does”), and relational definitions (“How AI works within a social system”). Computer scientists tend to define AI by its technical functionality (LeCun et al. 2015; Russell and Norvig 2016) or capability (McCarthy et al. 2006; Rai et al. 2019), while the public's definition of AI tends to compare it with human behaviour and intelligence (Zhang 2024). When investigating conceptions of non-experts, it may be sensible to focus, besides the technical and capability aspects, also on the relational aspects (i.e., how humans interact with these technologies), as suggested by Bearman and Ajjawi (2023).
Table 1
Selected AI definitions from the literature, based on historical value and use in information systems research (Collins et al. 2021)
| Source | Year | Definition type | Definition |
|---|---|---|---|
|  |  |  | “A set of multiple interconnected layers of neurons, inspired by the human brain, which can be trained to represent data at high levels of abstraction” |
|  |  |  | “The ability of a machine to perform cognitive functions that we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, decision-making, and even demonstrating creativity” |
|  |  |  | “In the context of a particular interaction, a computational artefact provides a judgement about an optimal course of action and that this judgement cannot be traced.” |
| EU AI Act | 2024 | Technical, capability, relational | “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” |

Interpretation of the definition type by authors
Several studies have found that people are confused and hold misunderstandings about how AI works at the technical level (Bewersdorff et al. 2023). These include, for example, an inability to distinguish technical terms (Antonenko and Abramowitz 2023; Teng et al. 2022; Lindner and Berges 2020) and a view that AI algorithms cannot make mistakes (Antonenko and Abramowitz 2023). Misunderstandings of AI's technical foundations are often related to the use of metaphors that oversimplify complex processes. Colburn and Shute (2008) argue that metaphors related to computer science help manage complexity through abstraction, but this abstraction can also obscure the true nature of systems. For example, Murray-Rust et al. (2022) highlight how the metaphor of “learning” in machine learning can lead to misunderstandings: while models do adapt based on new data, equating this process to human learning implies cognitive abilities like reasoning and generalisation; they suggest that “fitting” might be a better metaphor to capture the mechanical nature of the process. Brauner et al. (2023) conclude that AI is still a “black box” for many, and that an inability to assess the risks and possibilities of AI can lead to irrational control beliefs about AI. On the other hand, Bearman and Ajjawi (2023) point out that uncertainty about the technical details of AI does not imply that AI could not be used effectively.
AI ethicists are concerned that portraying AI as human-like may give people a misleading understanding of its risks and benefits (Cave et al. 2018). The tendency to anthropomorphize non-human entities, i.e. to attribute human-like characteristics to them, is well documented (Bewersdorff et al. 2023; Epley et al. 2007; Blut et al. 2021). This natural tendency arises from our cognitive predisposition to interpret behaviour in terms of intentions and emotions, even when interacting with inanimate or technological systems (Boyer 1996). For example, Scott et al. (2023) showed how non-experts perceive some level of consciousness in voice assistants, chatbots, and robot vacuum cleaners. Furthermore, recent evidence suggests that people's moral evaluations of AI agents are shaped by multiple human-like features, including emotion expression, cooperation, and moral judgment (Ladak et al. 2024). In a Finnish context, studies with children have shown that conceptions of AI are often anthropomorphic, without references to the role of data in training AI (Mertala et al. 2022). Another study further explored children's misconceptions of AI, showing that Finnish 5th and 6th graders typically framed AI either as human cognition, as a human-like entity, or as a machine with pre-installed intelligence; however, since most children assessed their AI knowledge as poor, their misconceptions appear superficial rather than profound (Mertala and Fagerlund 2024).
The introduction of large language models (LLMs) (Floridi and Chiriatti 2020) and machines' increasing ability to produce coherent language about a variety of topics has added a new flavor to the discussion on what AI is. Bender et al. (2021) used the metaphor of a “stochastic parrot” to describe a language model and warned about the dangers should these models be used at scale. Whether language models can “understand” is an ongoing debate among AI researchers, with views split according to a survey (Michael et al. 2023). Mitchell and colleagues have suggested that the capabilities of language models should be considered “new modes of understanding” or an “alien form of intuition” (Mitchell and Krakauer 2023), accepting a distinction from human understanding. In this, the discussion around LLMs is approaching, terminologically, the distinction made by Searle (1980) between “weak” and “strong” AI. Weak AI acts as a technological tool under the control of humans, whereas strong AI “can literally be said to understand and have other cognitive states” (Searle 1980).
Understanding what AI is and how it works shapes how we trust, use, and relate to these tools and technologies in our daily lives. Misconceptions about AI can lead to unrealistic expectations or misplaced trust. As AI becomes more integrated into society, it is crucial that people have a clear, accurate view of what these systems can and cannot do.
1.2 Emotional orientations towards AI
As Braidotti (2019, p. 50) aptly describes, “Moments or periods of euphoria at the astonishing technological advances [...] alternate with moments or periods of anxiety in view of the exceedingly high price we [...] are paying for these transformations.” In our responses to rapid technological progress, excitement over innovation is tempered by growing concerns about its societal and ethical costs. When Hick and Ziefle (2022) explored public perceptions towards AI in qualitative interviews, two main themes emerged: a dystopian view where AI could lead to destructive outcomes or loss of human control (akin to movies like The Terminator), and an exaggerated or utopian attitude about the possibilities and abilities of AI as a tool that could transform society for the better. This tension is reflected in the growing body of research focused on public beliefs, attitudes, and emotions towards AI and the factors that shape them (Schepman and Rodway 2023; Neudert et al. 2020; Kelley et al. 2021; Araujo et al. 2020; Stein et al. 2024).
Familiarity with AI, algorithms, or computer programming often leads to more positive views of AI technologies, particularly in contexts like automated decision-making (Araujo et al. 2020). Similarly, Mantello et al. (2023) found that individuals familiar with AI, and those who perceived tangible benefits, were less likely to feel threatened by it. Moreover, Li and Zheng (2024) found that social media use predicted more positive attitudes towards AI indirectly, through increased perceived AI fairness and reduced perceived AI threat. On the other hand, deeply held beliefs can intensify negative emotional reactions towards AI. Kozak and Fel (2024) found that religious individuals, who may see AI as challenging divine control or human uniqueness, are more likely to experience intense emotions, such as fear and anger. Furthermore, Stein et al. (2024) found that susceptibility to conspiracy beliefs is associated with more negative attitudes towards AI. Finally, Kelley et al. (2021) identified four key public sentiments towards AI: exciting, useful, worrying, and futuristic, with notable differences across countries: developing nations expressed more excitement, while developed nations exhibited greater concern.
Another construct that describes our affective relationship with AI is trust. People form varying levels of trust in machines (McKnight et al. 2011), and as Jacovi et al. (2021) highlight, too much trust in AI can lead to over-reliance, while too little trust can result in resistance, even in cases where AI offers benefits. Bao et al. (2022) found five public segments based on perceptions of AI's risks and benefits: the negative, who saw risks outweighing benefits; the ambivalent, who perceived both as high; the tepid, who leaned slightly towards benefits; the ambiguous, who saw moderate levels of both; and the indifferent, who perceived low levels of both risks and benefits. Pataranutaporn et al. (2023) found that framing AI systems as empathetic or caring increased perceptions of trustworthiness and led to more positive emotional responses. In contrast, the absence of trust may generate “algorithm anxiety”: Jhaver et al. (2018) interviewed Airbnb hosts who reported significant anxiety caused by a perceived lack of control and uncertainty over how algorithmic evaluation works on the platform. Trust in humans versus machines also varies by context. Mou and Xu (2017) found users more willing to disclose personal information to humans than to AI. However, Ta et al. (2020) discovered that a social companion chatbot can create a “safe space” where users feel comfortable discussing any topic without fear of judgment. Similarly, Sundar and Kim (2019) found that users were more likely to trust a machine agent with their credit card information than a human agent. Zhang et al. (2023) expanded on this theme by exploring how trust is affected by the perceived identity of “AI teammates”. In their study, participants who were deceived into thinking their AI teammate was human trusted and accepted their teammate's decisions less often than those who knew their teammate was AI.
Emotional orientations towards AI matter because they strongly influence how people engage with, adopt, or resist new technologies. Prior work shows that affective reactions such as trust, fear, or excitement often have a substantial effect on people's behaviour (Bao et al. 2022; Pataranutaporn et al. 2023). By examining not only what people think AI is but also how they feel about it, our study contributes to a more complete understanding of public orientations towards AI and their implications for societal adoption.
1.3 Aim of the present study
Insufficient understanding of AI deepens the digital divide (Carter et al. 2020) in terms of access, skills, and outcomes (Lutz 2019), and also limits public engagement in AI governance (Curtis et al. 2023). As AI development continues to advance, it is crucial for different societal stakeholders to understand the diverse conceptions, attitudes, beliefs, and emotions people have towards these technologies. While existing research has provided valuable insights into public perceptions and emotional orientations towards AI, a more nuanced understanding is needed of how non-experts understand and feel about AI—and whether and how specific emotional orientations follow from specific conceptions. In the present study, we focus on AI conceptions, emotional orientations to AI, and the relationship between the two among non-experts in Finland (Fig. 2).
Fig. 2
Focus of the present study in relation to the suggested causal model
First, we investigate AI conceptions. While it is known that expert definitions of AI differ from those of non-experts (Zhang 2024; Salles et al. 2020; Krafft et al. 2020), a more detailed picture is needed. We use Bearman and Ajjawi’s (2024) model (technical, capability, relational) to analyse Finnish non-experts’ written descriptions of AI. Our first research question is:
RQ1: How do technical, relational, and capability features manifest, coexist, and interact in written conceptions of AI and its functionality among non-experts in Finland?
Using abductive analysis, we recognize different features in written descriptions of what AI is and how it works. By addressing this question, we aim to uncover how people make sense of AI across different dimensions. Understanding these conceptions can inform the design of AI literacy efforts and improve how to communicate about AI with the public.
Second, we examine emotional orientations towards AI. Prior research (Hick and Ziefle 2022; Neudert et al. 2020; Kelley et al. 2021) has highlighted the presence of both highly positive and highly negative emotional responses to AI. However, the picture may be more nuanced – e.g., individuals may experience positive and negative feelings simultaneously (Liu et al. 2024), or neither. To explore this complexity, we adopt a person-oriented approach, similar to Bao et al. (2022) in their analysis of perceived risks and benefits. Our second research question is:
RQ2: What are the distinct emotional orientation profiles towards AI among Finnish non-experts, and how are these profiles identified?
Using multiple affect-related measures and clustering, we identify distinct emotional orientation profiles. Recognizing these patterns can deepen our understanding of emotional diversity and support the development of tailored AI literacy efforts that align with different emotional orientations.
Third, we investigate the relationship between AI conceptions and emotional orientations. Previous research has shown that increased knowledge about AI fosters trust and more positive affective relations towards AI (Araujo et al. 2020; Mantello et al. 2023; Li and Zheng 2024), while deeply held personal beliefs, lack of control, and uncertainty about how algorithms work may cause anxiety (Jhaver et al. 2018; Kozak and Fel 2024; Stein et al. 2024). Based on this, we hypothesise that individuals whose conceptions are closer to expert definitions of AI show a more positive emotional orientation towards AI. Our third research question is:
RQ3: How do Finnish non-experts' conceptions of what AI is and how AI works relate to the different emotional orientation profiles towards AI?
By answering this question, we aim to better understand the cognitive–affective interplay in public perceptions of AI and provide groundwork for future research in this area. Overall, the present study aims to provide increased detail in analyzing non-experts' understanding of and feelings towards AI, and to explore the dynamics between these. A more detailed understanding of the public's relationship with AI is needed for developing interventions to improve AI literacy, for ensuring that diverse public voices are included in decisions about how AI is governed, and for designing meaningful human–AI interactions.
2 Methods
2.1 Context and participants
Participants signed up for a general audience online lecture series on artificial intelligence offered by a non-profit organisation responsible for organising education for the general public. The lecture series was given by one of the authors. Before the lectures, the participants were asked to complete an online survey about their understanding of and attitudes towards AI. Lectures and survey were conducted in Swedish, one of the two official languages in Finland.
This type of study does not require ethical review per the guidelines of the relevant national board of research ethics, or the guidelines of the host university: the participants were adults making an informed decision to volunteer for the study, and the study does not involve physical intervention, especially strong stimuli, risk of causing mental harm that exceeds the limits of normal daily life, or a threat to the safety of the participants, their families or the researchers. The participants also had the opportunity to take part in the survey, thereby providing information to the lecturer, while still opting out of the study.
A total of 150 participants completed the survey, with 141 giving informed consent for their data to be used for research purposes. Most respondents (83.7%) identified as female, 14.9% as male, and 1.4% preferred not to disclose their gender. The largest age group was 50–59 years (36.2% of respondents), followed by 40–49 years (29.8%), 30–39 years (15.6%), and over 60 years (15.6%).
As AI is a rather new topic for non-experts, we deemed that asking for participants' education level would not provide us with much information on their AI knowledge and skills. Given the introductory nature of the lecture series mentioned in its marketing and the reach of the organization offering the series, it is feasible to assume that the participants were non-experts/novices in relation to AI. To get some self-assessed information about the participants, we asked them which of the five roles on Rogers' innovation curve (Rogers 1962; Rogers et al. 2008) best describes how they view themselves in relation to digitalisation and technological development. For each of the five roles, we also wrote a plain-text explanation of how to interpret the role:
Innovator: You like to try out new ideas and products. You are not afraid to take risks and are often the first to adopt an innovation.
Early adopter: You are not the first to try out new things but typically follow close behind the innovators. You also have a strong influence on others by spreading new ideas to, for instance, colleagues.
Early majority: You adopt innovations earlier than average. You are open to change, but not as adventurous as the innovators and early adopters.
Late majority: You adopt innovations later than average. You prefer to wait until the innovation has proven its benefits.
Laggard: You are cautious and want to be sure the innovation works before you consider adopting it.
The results showed that only 5.7% viewed themselves as innovators and 19.1% as early adopters. Most (53.9%) identified as early majority, 20.6% as late majority, and 0.7% as laggards.
2.2 Data collection
Participants responded to an online survey prior to attending the lectures. The survey consisted of background variables and questions related to conceptions of AI and emotional orientation towards AI. Participants were asked to report their gender, age group, and self-assessed role on Rogers' innovation curve. Open response items regarding conceptions of AI (e.g., ‘What do you think AI is?’ and ‘How do you think AI works?’) were adapted from the study by Mertala et al. (2022). Regarding emotional orientation, we employed several items that were answered on a Likert scale from 1 (‘Completely disagree’) to 5 (‘Completely agree’). In line with work by Kelley et al. (2021), we employed items on excitement (‘AI is exciting’), worrying (‘AI is frightening’), and personal relevance (‘AI is not for me’). Additionally, we employed one item on trust towards AI versus humans (‘I trust humans more than AI’).
2.3 Data analysis
2.3.1 Abductive analysis of AI features (RQ1)
We applied abductive analysis (Mayring 2015) to identify specific features in non-experts' conceptions of AI, with the goal of creating a set of true/false variables for subsequent analysis. We chose an abductive approach because it combines theory-driven and data-driven reasoning, allowing us to move iteratively between our data and existing theoretical frameworks. This helped us capture both expected and unexpected features in participants' conceptions of AI.
Two open-ended questions were used as the basis for this analysis: ‘What do you think AI is?’ and ‘How do you think AI works?’. Responses were first machine-translated into English for analysis and reporting. One of the authors checked all translations for accuracy. Because many participants described how AI works in their answer to the first question, we combined each participant's answers to the two questions and treated this combined text as the unit of analysis.
The analysis began with the first author reading all responses and producing an initial, data-driven list of features. This list was discussed with the other authors and compared with relevant literature. Following Bearman and Ajjawi (2024), we focused on three broad types of conceptions: capabilities of AI, technical processes by which AI operates, and relational aspects of AI as part of social systems. From this discussion, we developed a list of eight features to code for in the responses. Each author then independently coded the first 50 responses for these features. After comparing and discussing any differences, we agreed on a final codebook (Table 2).
Table 2
Codebook for the ‘What AI is?’ and ‘How AI works?’ open responses and interrater reliability

| Conception type | Code | Description | Fleiss' Kappa |
|---|---|---|---|
| Capability | Self-learning | AI is described as having the ability to learn continuously and/or independently from its experiences or data inputs, without needing direct human intervention | 0.73 |
| Capability | Anthropomorphic | AI is attributed human-like qualities or behaviours, such as thinking, decision-making, or emotions | 0.43 |
| Relational | Human-controlled | Response emphasizes the role of humans in guiding, controlling, or interacting with AI | 0.71 |
| Relational | Autonomous | AI as an autonomous entity with the ability to act, make decisions, or perform tasks independently | 0.58 |
| Technical | Data-driven | AI is primarily understood as being driven by data. It includes references to AI processing large datasets, learning from data, or making decisions based on data inputs | 0.82 |
| Technical | Rule-based | AI is described as following a set of predefined rules or algorithms. It reflects a deterministic view of AI, where its actions are based on programmed instructions or specific logical pathways | 0.66 |
| Technical | Probabilistic | Descriptions of AI that focus on its use of statistical models or probabilistic reasoning to make predictions or decisions | 0.74 |
| Other | Uncertain | Instances where respondents express uncertainty, lack of knowledge, or confusion about what AI is or how AI works | 0.82 |
Next, each author coded all responses independently using the agreed codebook. We calculated Fleiss' kappa for each feature to assess intercoder reliability. For subsequent analyses, we used consensus coding, where a feature was marked as present if at least two of the three coders agreed.
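To make this reliability and consensus step concrete, the following minimal Python sketch (an illustration only, not the authors' actual pipeline; the data frame and column names are hypothetical) computes Fleiss' kappa for one feature coded by three coders and derives the two-out-of-three consensus code:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: 141 responses coded 0/1 by three coders for a single feature
rng = np.random.default_rng(0)
codes = pd.DataFrame(rng.integers(0, 2, size=(141, 3)),
                     columns=["coder_a", "coder_b", "coder_c"])

# Fleiss' kappa expects a (subjects x categories) count table
table, _ = aggregate_raters(codes.to_numpy())
kappa = fleiss_kappa(table, method="fleiss")

# Consensus coding: the feature is present if at least two of the three coders marked it
consensus = (codes.sum(axis=1) >= 2).astype(int)
print(f"Fleiss' kappa: {kappa:.2f}, consensus prevalence: {consensus.mean():.2f}")
```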
2.3.2 Co-occurrence of the features using pairwise correlations (RQ1)
We examined co-occurrence between binary-coded AI features using Kendall's \(\tau _b\) correlations, a non-parametric measure suited for dichotomous data that adjusts for ties and accommodates differing marginal distributions. This approach provides a standardized effect size for each feature pair, allowing direct comparison across associations (see Tables 4 and 6).
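For readers who wish to reproduce this kind of co-occurrence analysis outside JASP, a small Python sketch is shown below (illustrative only; the feature data frame and column names are hypothetical). SciPy's kendalltau computes the \(\tau _b\) variant by default, which handles the ties that are unavoidable with dichotomous codes:

```python
from itertools import combinations

import pandas as pd
from scipy.stats import kendalltau

def pairwise_tau_b(features: pd.DataFrame) -> pd.DataFrame:
    """Kendall's tau-b and p-value for every pair of binary (0/1) feature columns."""
    rows = []
    for a, b in combinations(features.columns, 2):
        tau, p = kendalltau(features[a], features[b])  # variant="b" is the default
        rows.append({"feature_1": a, "feature_2": b, "tau_b": round(tau, 2), "p": round(p, 3)})
    return pd.DataFrame(rows)

# Example (hypothetical consensus codes):
# pairwise_tau_b(consensus_codes[["Self-learning", "Anthropomorphic", "Autonomous"]])
```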
Table 3
Cluster validity indices for three-, four-, and five-cluster solutions from hierarchical clustering (average linkage, Pearson distance)

| Number of clusters | \(R^2\) | Silhouette | Dunn index | Calinski–Harabasz |
|---|---|---|---|---|
| 3 | 0.34 | 0.51 | 0.17 | 35.0 |
| 4 | 0.43 | 0.53 | 0.17 | 34.3 |
| 5 | 0.48 | 0.58 | 0.17 | 31.3 |
2.3.3 Hierarchical clustering (RQ2)
To explore emotional orientation towards AI using a person-oriented approach (i.e., focusing on identifying subgroups of individuals who share similar patterns of responses across variables rather than examining relationships between variables across the whole sample, see e.g. Bao et al. (2022)), hierarchical clustering was applied to the four emotional orientation items using JASP (Love et al. 2019). Variables were scaled before clustering, and Pearson correlation was used as the distance measure because the aim was to group respondents by response patterns rather than absolute levels. Visual inspection suggested that three-, four-, and five-cluster solutions were all plausible (see Fig. 3). While the five-cluster solution yielded the highest \(R^2\) and silhouette, the four-cluster solution was chosen for conceptual clarity and balanced cluster sizes, with comparable validity scores (Table 3).
Fig. 3
Dendrogram from hierarchical clustering, with the red line representing the chosen four-cluster solution
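The clustering procedure described above can be sketched in Python roughly as follows (illustrative only; the authors ran the analysis in JASP, and the item column names and simulated data here are hypothetical placeholders):

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.metrics import silhouette_score

items = ["exciting", "frightening", "not_for_me", "trust_humans_more"]
rng = np.random.default_rng(1)
ratings = pd.DataFrame(rng.normal(size=(141, 4)), columns=items)  # placeholder for Likert responses

z = (ratings - ratings.mean()) / ratings.std(ddof=0)   # scale each item before clustering
dist = pdist(z.to_numpy(), metric="correlation")       # Pearson distance between respondents
tree = linkage(dist, method="average")                 # average-linkage hierarchical clustering

for k in (3, 4, 5):                                    # compare candidate cluster solutions
    labels = fcluster(tree, t=k, criterion="maxclust")
    sil = silhouette_score(z, labels, metric="correlation")
    print(f"k={k}: silhouette={sil:.2f}, cluster sizes={np.bincount(labels)[1:]}")
```

Using a correlation-based distance means two respondents are grouped together when their profiles across the four items have a similar shape, even if one respondent generally uses higher scale points than the other.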
2.3.4 Logistic regression (RQ3)
To examine whether features of AI conceptions (found in RQ1) predicted membership in the four emotional orientation profiles (found in RQ2), we conducted separate binary logistic regression analyses for each profile using JASP (Love et al. 2019). For each analysis, the dependent variable was coded as 1 for participants in the predicted cluster and 0 for all others. The independent variables were binary indicators representing the presence of specific AI conception features. Given the exploratory nature of the research question, we used stepwise model selection based on the Akaike Information Criterion (AIC) to identify the simplest model that explained the data for each cluster. The model with the lowest AIC was retained as the final model.
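A rough Python equivalent of one simple variant of this stepwise procedure (forward selection by AIC) is sketched below, for illustration only; the authors ran the models in JASP, and `features` and `in_cluster` are hypothetical names for the 0/1 conception codes and the cluster membership indicator:

```python
import pandas as pd
import statsmodels.api as sm

def forward_aic_logit(features: pd.DataFrame, in_cluster: pd.Series):
    """Forward stepwise selection by AIC for a binary logistic regression."""
    selected = []
    design = pd.DataFrame({"const": 1.0}, index=features.index)
    best = sm.Logit(in_cluster, design).fit(disp=0)           # intercept-only baseline
    improved = True
    while improved:
        improved = False
        for cand in [c for c in features.columns if c not in selected]:
            X = sm.add_constant(features[selected + [cand]])
            model = sm.Logit(in_cluster, X).fit(disp=0)
            if model.aic < best.aic:                          # keep the best improvement so far
                best, best_cand, improved = model, cand, True
        if improved:
            selected.append(best_cand)
    return best

# Example with hypothetical objects:
# final = forward_aic_logit(conception_codes, (profile_labels == "Optimists").astype(int))
# print(final.summary())
```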
3 Results
3.1 RQ1: features of AI conceptions
To understand the non-experts' conceptions of AI and its functionality, an abductive thematic analysis was conducted on responses to two open-ended questions (‘What do you think AI is?’ and ‘How do you think AI works?’), as described in Section 2.3.1. In order to shed light on how the technical, relational and capability features were manifested in the public's responses, we illustrate the codes by providing excerpts from the open-ended answers. Most responses manifested multiple codes, representing either one or multiple definition types (technical, relational or capability) (see Table 4). Table 5 presents a selection of quotes, with the parts related to a certain code indicated in the text. The quotes were selected to illustrate the broad diversity found in the responses, with some being mainly technical or relational in their nature, and others being more mixed.
On the level of definition types, relational features describing AI as a part of a social system were mentioned by 82% of the participants, features describing AI’s technical operations by 79% of the participants, and capability features describing what AI can do by 55% of the participants. Additionally, 37% of participants mentioned uncertainty about what AI is or how it works. There was a statistically significant positive correlation between relational and capability-based features (\(\tau _{{\text{b}}} = 0.36\),\(p<0.001\)), indicating significant co-occurrence of these categories in non-experts’ definitions. Furthermore, there was a statistically significant negative correlation between technical features and uncertainty (\(\tau _{{\text{b}}} = - 0.21\),\(p=0.01\)), indicating that non-experts’ definitions including technical features were less likely to include mentions of uncertainty. Interestingly, correlations between capability and technical features as well as between relational and technical features were near zero.
Table 4
Kendall's \(\tau _{{\text{b}}}\) correlations between AI definition types

|  | Capability | Relational | Technical | Uncertain |
|---|---|---|---|---|
| Capability | 1.00 |  |  |  |
| Relational | 0.36** | 1.00 |  |  |
| Technical | 0.01 | −0.01 | 1.00 |  |
| Uncertain | −0.07 | 0.01 | −0.21* | 1.00 |
| N | 141 | 141 | 141 | 141 |
| M | 0.55 | 0.82 | 0.79 | 0.37 |
| SD | 0.50 | 0.38 | 0.41 | 0.48 |

* \(p<0.05\), ** \(p<0.001\)
On the level of coded features, the most prevalent features were Autonomous (72% of participants), Data-driven (66%), and Anthropomorphic (53%), followed by Human-controlled (40%), Rule-based (31%), Self-learning (20%), and Probabilistic (9%). To investigate co-occurrence of the features, pairwise correlations using Kendall's \(\tau _{{\text{b}}}\) were employed. Anthropomorphic and Autonomous showed a strong correlation (\(\tau _{{\text{b}}} = 0.54\), \(p<0.001\)). Positive correlations were also found between Self-learning and Autonomous (\(\tau _{{\text{b}}} = 0.23\), \(p<0.01\)), as well as Self-learning and Anthropomorphic (\(\tau _{{\text{b}}} = 0.37\), \(p<0.001\)), indicating substantial co-occurrence of these features. Weaker but still statistically significant correlations were found between Human-controlled and Rule-based (\(\tau _{{\text{b}}} = 0.20\), \(p=0.02\)), Autonomous and Data-driven (\(\tau _{{\text{b}}} = 0.18\), \(p=0.03\)), and Self-learning and Data-driven (\(\tau _{{\text{b}}} = 0.17\), \(p=0.04\)). All other pairwise correlations were nonsignificant (see Table 6).
Table 5
Selected quotes and corresponding codes

| Resp | Quote |
|---|---|
| 26 | This is interesting because I don’t know (uncertain). I have thought that it works so that an AI system collects (autonomous) a lot of information (data-driven) and draws conclusions (autonomous) based on it. But it can’t be that simple (uncertain) because if there is a lot of incorrect information, then there will also be incorrect conclusions |
| 28 | Artificial Intelligence - conclusions drawn from vast amounts of data (data-driven) that no human can hold in their head |
| 93 | A tool (human-controlled) really, which can facilitate processes, provide new perspectives, save time and formulate, combine, compile things quickly and easily |
| 57 | Artificial intelligence means to me that computers are increasingly taking over everything that we humans used to do: our jobs, our thoughts...soon we will no longer be needed, so AI takes care of most things instead of us (autonomous) |
| 27 | A system created by humans (human-controlled) that can independently take in more information from the surrounding world and further learn from it (self-learning) |
| 74 | An artificial superbrain (anthropomorphic) that lacks human limitations, such as fatigue and "bad days", but needs to be controlled (human-controlled) to avoid going off course |
| 70 | Intelligent machines that have learned to function (anthropomorphic) and where algorithms (rule-based) play a major role |
| 31 | Humans have created (human-controlled) a kind of intelligence for machines. Then the machines can develop/use it (autonomous) in a way that is impossible for a human |
| 94 | Processes that occur automatically. It can be based on, for example, large amounts of data being processed (data-driven) and where certain patterns are found, or on a sequence of steps where the process continues and data processed according to a certain pattern (rule-based) |
| 42 | AI equals programs that learn themselves and evolve automatically (self-learning, anthropomorphic) along the process, at an accelerating pace without necessarily needing human intervention (autonomous) |
| 54 | Current AI is essentially autocorrect on steroids. Calculating probabilities (probabilistic) on large datasets (data-driven) and spitting out what the algorithm (rule-based) generates |
| 51 | AI are educated guesses (probabilistic) to some extent based on “scraped” material (data-driven). They guess their way forward, learn from their mistakes and missteps (self-learning) |
| 26 | This is interesting because I don’t know (uncertain). I have thought that it works so that an AI system collects (autonomous) a lot of information and draws conclusions based on it (data-driven) |
| 32 | Artificial intelligence, different models based on masses of available data (data-driven) in order to mimic human intelligence in problem solving of various kinds (plus that AI can use new data and feedback to "learn" itself (autonomous, anthropomorphic, self-learning) |
| 46 | For me as a humanist, AI really feels like something "scary". AI comes across as a superintelligence that surpasses everything that the human brain (=my human brain) can understand, process, and perform (autonomous, anthropomorphic). I know that the concept means artificial intelligence that is expressed in countless programs and software systems (rule-based) at different levels, but I find it difficult to grasp (uncertain) AI as a phenomenon. Seems to be an electronic superbrain that, according to well-developed algorithms (rule-based), is able to process (literally) astronomical amounts of data (data-driven) and also come up with sensible insights on the material |
| 33 | It uses algorithms (rule-based) and also searches for information on the internet (autonomous), and then you can feed it information to teach it more (data-driven, human-controlled). For example, you can feed in a lot of audio files and from them teach the AI how a voice sounds, and then get it to sing (anthropomorphic) anything |

Codes are given in parentheses following the segments they correspond to
Table 6
Kendall's \(\tau _{{\text{b}}}\) correlations between AI conception feature codes

| Conception type | Feature | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| Capability | Self-learning (1) | 1.00 |  |  |  |  |  |  |  |
| Capability | Anthropomorphic (2) | 0.37*** | 1.00 |  |  |  |  |  |  |
| Relational | Human-controlled (3) | 0.14 | 0.05 | 1.00 |  |  |  |  |  |
| Relational | Autonomous (4) | 0.23** | 0.54*** | 0.03 | 1.00 |  |  |  |  |
| Technical | Data-driven (5) | 0.17* | 0.13 | 0.06 | 0.18* | 1.00 |  |  |  |
| Technical | Rule-based (6) | 0.05 | −0.00 | 0.20* | 0.02 | −0.07 | 1.00 |  |  |
| Technical | Probabilistic (7) | 0.09 | −0.04 | −0.06 | 0.09 | 0.13 | −0.06 | 1.00 |  |
| Other | Uncertain (8) | −0.05 | −0.07 | −0.14 | 0.03 | −0.13 | −0.13 | 0.01 | 1.00 |
|  | N | 141 | 141 | 141 | 141 | 141 | 141 | 141 | 141 |
|  | M | 0.20 | 0.53 | 0.40 | 0.72 | 0.66 | 0.31 | 0.09 | 0.37 |
|  | SD | 0.40 | 0.50 | 0.49 | 0.45 | 0.48 | 0.47 | 0.29 | 0.48 |

* \(p<0.05\), ** \(p<0.01\), *** \(p<0.001\)
3.2 RQ2: emotional orientation profiles
To explore the distinct emotional orientation profiles towards AI among non-experts, a hierarchical clustering analysis was conducted using the four affective dimensions of Exciting, Frightening, Not for me, and Trust humans more than AI. The goal of this analysis was to identify meaningful subgroups who share similar patterns of responses across the emotional orientation variables.
The analysis resulted in a four-cluster solution (\(R^2=0.43\), silhouette score 0.53, Calinski–Harabasz 34.3), with the four emotional orientation profiles (Fig. 4) named to depict their variable profiles as follows:
Optimists (n = 31, 22.0%) reported the highest excitement and the lowest fear combined with the highest personal relevance and above sample mean trust in humans over AI. They hold a confident and positive stance towards AI, with enthusiasm outweighing concerns.
Cautious (n = 40, 28.4%) combined above sample mean excitement with relatively high fear. They saw AI as personally relevant and showed slightly below sample mean trust in humans over AI. This group is open to AI's potential yet wary of risks.
Disengaged (n = 36, 25.5%) showed markedly below sample mean excitement and slightly above sample mean fear, and the highest sense that AI is "not for me". They strongly trusted humans over AI, suggesting a lack of personal investment or relevance rather than strong opposition.
Indifferent (n = 34, 24.1%) scored near the sample mean in excitement and fear, with a relatively high sense that AI is "not for me" and the lowest trust in humans over AI. They appear detached, neither strongly supportive nor strongly opposed, reflecting an indifferent stance.
Fig. 4
Cluster mean z-scores for four emotional orientation profiles towards AI
In sum, the clustering analysis revealed four distinct emotional orientation profiles towards AI among non-experts. These groups illustrate the diversity of affective stances, ranging from enthusiastic engagement to wary openness, and from disengagement to indifference.
3.3 RQ3: predicting emotional orientation profile based on features of AI conceptions
Logistic regression analyses examined whether certain features in non-experts' conceptions of AI (RQ1) predicted their emotional orientation profile (RQ2). The models provided little evidence that non-experts' conceptions of AI reliably predict their emotional orientation towards AI.
The model with Optimists cluster membership as a dependent variable showed modest fit and above-chance accuracy (McFadden \(R^2=0.09\), balanced accuracy = 0.58), suggesting that those viewing AI as anthropomorphic (\(Est.=1.31, SE=0.47, p<0.01\)) and probabilistic (\(Est.=1.47, SE=0.64, p=0.02\)) were more likely to belong to the Optimists profile.
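For interpretability, these coefficients can be expressed as odds ratios: \(e^{1.31} \approx 3.7\) for the anthropomorphic feature and \(e^{1.47} \approx 4.3\) for the probabilistic feature, i.e., holding the other retained predictor constant, mentioning either feature was associated with roughly a three- to four-fold increase in the odds of belonging to the Optimists profile.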
The model with Cautious cluster membership as a dependent variable showed weak fit and accuracy on par with chance (McFadden \(R^2=0.05\), balanced accuracy = 0.50), and revealed no predictors reaching conventional significance.
For the Disengaged and Indifferent profiles, no predictors were retained in the final models, indicating that the measured features did not significantly differentiate these groups from others (see Table 7).
Table 7
Final logistic regression models using stepwise AIC model selection

| Predictor | Optimists Est (SE), p | Cautious Est (SE), p | Disengaged Est (SE), p | Indifferent Est (SE), p |
|---|---|---|---|---|
| Self-learning | – | – | – | – |
| Anthropomorphic | 1.31 (0.47), <0.01 | −0.68 (0.39), 0.08 | – | – |
| Data-driven | – | 0.61 (0.42), 0.15 | – | – |
| Rule-based | – | – | – | – |
| Probabilistic | 1.47 (0.64), 0.02 | −1.84 (1.07), 0.08 | – | – |
| Human-controlled | – | – | – | – |
| Autonomous | – | – | – | – |
| Uncertain | – | – | – | – |
| Constant | −2.24 (0.41), <0.01 | −0.89 (0.38), 0.02 | −1.07 (0.19), <0.01 | −1.15 (0.20), <0.01 |
| N | 141 | 141 | 141 | 141 |
| \(\chi ^{2}\) | 12.87 | 8.45 | – | – |
| df | 138 | 137 | – | – |
| p | <0.01 | 0.04 | – | – |
| \(R^2\) (McFadden) | 0.09 | 0.05 | 0.00 | 0.00 |
| Sensitivity | 0.16 | 0.00 | 0.00 | 0.00 |
| Specificity | 0.99 | 1.00 | 1.00 | 1.00 |
| Balanced accuracy | 0.58 | 0.50 | 0.50 | 0.50 |

– = predictor not retained in the final model
4 Discussion
4.1 Review of the results
4.1.1 Conceptions of AI
Non-experts' conceptions of AI in our study show both similarities and differences compared to the expert AI definitions used in the literature (Table 1). In line with previous research (Salles et al. 2020), non-experts' definitions often included anthropomorphic descriptions: 53% of respondents attributed human-like characteristics to AI. On the other hand, technical descriptions were also common: 79% of respondents' descriptions included at least one technical feature. A notable difference from most expert definitions was the relatively high prevalence of the Human-controlled feature (40%). While Searle (1980) discussed human control in the context of distinguishing “weak” from “strong” AI, many widely cited expert definitions tend not to explicitly address the human role in guiding or constraining AI systems (Russell and Norvig 2016; LeCun et al. 2015; Rai et al. 2019). In this respect, participants' conceptions are more in line with Bearman and Ajjawi's (2023) call to study AI as part of broader social systems.
The correlation patterns (Table 6) suggest that non-experts' conceptions of AI may be organized around two partially distinct cognitive clusters. One cluster links self-learning, autonomy, and anthropomorphism, reflecting a view of AI as an independent, human-like agent capable of adapting and acting without direct oversight. The other cluster associates human control with rule-based functioning, indicating a more mechanistic and deterministic understanding of AI. This separation may imply that lay perceptions of how AI operates (technical features) are often disconnected from ideas about what AI can do (capability features) or how it works in a social system (relational features).
Interestingly, the occurrence of technical features was negatively associated with uncertainty at the definition-type level. At the feature level, uncertainty correlated negatively with data-driven and rule-based conceptions, but showed a near-zero positive association with probabilistic conceptions. This pattern may suggest that probabilistic conceptions align more closely with the actual complexity of AI technologies, whereas rule-based conceptions combined with a lack of uncertainty may reflect an oversimplified and deterministic understanding of AI's technical nature. These findings align with previous research on non-experts' superficial use of technical terms (Lindner and Berges 2020) and the view that AI algorithms cannot make mistakes (Antonenko and Abramowitz 2023).
4.1.2 Emotional orientation profiles
Employing hierarchical clustering, we identified four emotional orientation profiles — Optimists, Cautious, Disengaged, and Indifferent—demonstrating that the frequently discussed dichotomy between utopian and dystopian views of AI is more complex than a simple binary. Out of the two groups showing notable excitement towards AI, the Optimists reported high excitement coupled with the lowest fear levels, expressing a generally enthusiastic stance. By contrast, the Cautious group combined moderate excitement with elevated fear, echoing Braidotti's (2019) description of alternating euphoria and anxiety in response to rapid technological change and its societal effects. The remaining two profiles displayed limited enthusiasm: the Disengaged expressed low excitement and little fear but considered AI largely irrelevant for themselves, whereas the Indifferent scored near the sample mean on excitement and fear, yet combined moderate rejection with the lowest relative trust in humans over AI.
These profiles somewhat align with Bao et al.'s (2022) segmentation of the public based on perceived risks and benefits of AI, although their framework is primarily cognitive in focus, whereas ours foregrounds affective orientations. The Cautious echo the “ambivalent” and “ambiguous” groups, holding optimism and concerns simultaneously, whereas the Indifferent mirror Bao and colleagues' “indifferent” segment, showing muted responses. On the other hand, our Optimists group had no clear counterpart in Bao and colleagues' typology, raising the question of whether this group reflects a form of unreserved or even naive optimism.
Trust emerged as a particularly important differentiating dimension between the groups—most clearly between the Disengaged and Indifferent, and to a smaller extent between the Optimists and Cautious. Prior research has suggested that excessive trust in AI can lead to over-reliance, whereas insufficient trust can result in resistance (Jacovi et al. 2021). This pattern is visible in the Disengaged, who expressed the highest trust in humans over AI and simultaneously viewed AI as “not for me.” By contrast, the Indifferent reported the lowest relative trust in humans. One interpretation is that this reflects not heightened enthusiasm for AI, but rather diminished confidence in human reliability: prior studies have shown that people sometimes prefer algorithmic agents in contexts where they are perceived as less biased or judgmental than humans (Sundar and Kim 2019; Ta et al. 2020).
4.1.3 Exploring the role of AI conceptions in shaping emotional orientations
Our findings offer only limited support for the causal pathway suggested in Fig. 1. Cognitive appraisal theory posits that emotional responses emerge from individuals' interpretations of external phenomena (Lazarus 1991), which, in this context, we operationalized as conceptions of AI. However, the weak and mostly non-significant associations between AI conception features and emotional orientations suggest that, at least in our sample, the link from AI conceptions to emotional orientations towards AI may be less direct or less robust than anticipated. It is possible that other factors, such as prior experiences with AI, media exposure, or broader sociocultural narratives, mediate or overshadow the role of conceptions in shaping emotional responses.
Nonetheless, two features stood out in the regression models and warrant further attention. Anthropomorphic and Probabilistic conceptions showed opposite associations across profiles: participants whose answers included these features were more likely to belong to the Optimists group, whereas their absence was predictive of membership in the Cautious group. While these findings should be interpreted cautiously given the modest explanatory power of the models, they point to anthropomorphic and probabilistic conceptions as potentially influential dimensions that merit closer examination in future research.
4.2 Limitations
We have identified several limitations and methodological considerations that should be taken into account while interpreting our results.
First, due to the open nature of the lecture series, we used convenience sampling, that is, participation was based on self-selection, where individuals chose to register for the lecture series and thus became study participants. This approach limits generalizability as it is likely to attract those with a pre-existing interest in the topic. Our sample is predominantly female, from a single Nordic country, and largely middle-aged or older. Consequently, the results should not be assumed to apply to other populations. Nevertheless, we argue that this focus on a non-expert population offers a valuable point of comparison to the large body of research centred on other groups.
Second, while the total amount of data analyzed in this study is non-trivial, it remained a limiting factor. Due to demographic imbalance, we were not able to conduct credible analyses based on age and gender. Moreover, the sample size leaves open the possibility of Type II errors, and some marginal trends, particularly the potential role of anthropomorphic and probabilistic conceptions of AI as predictors of emotional orientation, warrant further investigation in future research.
Third, coding in RQ1 was particularly challenging for the Anthropomorphic feature. In our initial discussions, we agreed that a code along the lines of ‘excessive anthropomorphism’ would have been interesting, and it appeared somewhat intuitive to all of us. While a certain degree of anthropomorphism arises from the scientific and technical terminology used in relation to AI systems (e.g., “intelligence”, “learning”), there is clearly a level that extends beyond this baseline (e.g., “wants”). However, in practice, distinguishing between the technical and non-technical terms was difficult in the terse responses: many words such as “knows” or “understands” can be used in a technical context (e.g., “knowledge base”, “natural language understanding”), but they could also have been used in a highly anthropomorphic fashion, with little to distinguish between the two in the plain text of the participants' responses. Despite these difficulties, the interrater reliability (Fleiss' Kappa = 0.43) may be considered moderate according to Landis and Koch (1977).
Fourth, trust in AI was measured in relative terms, comparing it to trust in human performance for the same task. This framing means that our results reflect perceptions of both AI and human trustworthiness, and cannot be interpreted as absolute trust in AI. A participant's high relative trust in AI could stem either from strong confidence in AI or from low confidence in humans (or both). Future research could combine both relative and absolute measures of trust to provide a more comprehensive understanding of how people evaluate AI systems.
4.3 Implications
Our findings suggest that people approach AI with different baseline mental models of how the technology functions. These different kinds of mental models have contrasting consequences for both AI literacy and public discourse. A deterministic, rule-based and human-controlled view of AI may risk downplaying concerns about bias, transparency, and unpredictability by creating an illusion of human control. However, it may be rather easily corrected through targeted teaching about data-driven learning, model training, and the probabilistic nature of AI in general. In contrast, anthropomorphic conceptions may fuel either dystopian fears of machines taking over or utopian hype about AI as a superhuman partner. AI literacy initiatives could address these kinds of conceptions by focusing not only on the technical aspects of AI but also on the relational aspects, such as human agency as creators and users of AI systems and as providers of training data.
Regarding the emotional orientations, our four profiles show that public orientations toward AI are more granular than a simple utopia–dystopia split. AI literacy interventions should optimally recognise baseline emotional differences, as different profiles have different needs. Optimists benefit from critical reflection on the limits and risks of AI to avoid over-reliance and hype, whereas people with a disengaged or indifferent emotional orientation first need relevance and motivation. People with a cautious orientation balance interest and concern, which may itself be seen as a desirable emotional target for an AI literacy intervention. Furthermore, a mix of participants with varying emotional profiles may provide a chance for fruitful debates that broaden perspectives. Regarding societal participation in AI policies, the disengaged and indifferent groups may be the most challenging to reach, which risks public debate being dominated by the most enthusiastic and the most anxious voices. This dynamic can reduce the perceived legitimacy and inclusiveness of policy processes and increase polarisation.
While we found little evidence that non-experts' conceptions of AI reliably predict their emotional orientations, weak models could be formed for the Optimists and Cautious, whereas the Disengaged and Indifferent yielded only null models. This might suggest that for the latter groups, emotional orientations are shaped less by specific conceptions and more by broader factors such as prior experiences, media exposure, or sociocultural narratives. In practice, this means that interventions aimed at correcting misconceptions may influence the already engaged, but reaching disengaged or indifferent individuals likely requires strategies that highlight personal relevance. At the societal level, this underscores that cultivating broad-based engagement with AI cannot rest solely on correcting misconceptions, but must also address the motivational and relational conditions that enable even the indifferent or disengaged to see themselves as stakeholders in technological futures.
Declarations
Conflict of interest
The authors declare no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
“It Can’t Be That Simple”: exploring Finnish non-experts’ conceptions of AI and whether these predict emotional orientation towards AI
Written by
Joonas Merikko
Leo Leppänen
Linda Mannila
Publication date
31.10.2025
Publisher
Springer London
Published in
AI & SOCIETY
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-025-02699-8
Antonenko P, Abramowitz B (2023) In-service teachers' (mis)conceptions of artificial intelligence in K-12 science education. J Res Technol Educ 55(1):64–78
Araujo T, Helberger N, Kruikemeier S, De Vreese CH (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc 35(3):611–623
Bao L, Krause NM, Calice MN, Scheufele DA, Wirz CD, Brossard D, Newman TP, Xenos MA (2022) Whose AI? How different publics think about AI and its social impacts. Comput Hum Behav 130:107182. https://doi.org/10.1016/j.chb.2022.107182
Bearman M, Ajjawi R (2023) Learning to work with the black box: pedagogy for a world with artificial intelligence. Br J Edu Technol 54(5):1160–1173
Bearman M, Ryan J, Ajjawi R (2023) Discourses of artificial intelligence in higher education: a critical literature review. High Educ 86(2):369–385
Bender EM, Gebru T, McMillan-Major A, Shmitchell S (2021) On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp 610–623
Bewersdorff A, Zhai X, Roberts J, Nerdel C (2023) Myths, mis- and preconceptions of artificial intelligence: a review of the literature. Comput Educ Artif Intell 4:100143
Blut M, Wang C, Wünderlich NV, Brock C (2021) Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J Acad Mark Sci 49:632–658
Boyer P (1996) What makes anthropomorphism natural: intuitive ontology and cultural representations. J R Anthropol Inst 2(1):83–97. https://doi.org/10.2307/3034634
Braidotti R (2019) Posthuman knowledge, vol 2. Polity Press, Cambridge
Brauner P, Hick A, Philipsen R, Ziefle M (2023) What does the public think about artificial intelligence? A criticality map to understand bias in the public perception of AI. Front Comput Sci 5:1113903
Carter L, Liu D, Cantrell C (2020) Exploring the intersection of the digital divide and artificial intelligence: a hermeneutic literature review. AIS Trans Hum Comput Interact 12(4):253–275. https://doi.org/10.17705/1thci.00138
Cave S, Craig C, Dihal K, Dillon S, Montgomery J, Singler B, Taylor L (2018) Portrayals and perceptions of AI and why they matter. The Royal Society. https://doi.org/10.17863/CAM.34502
Collins C, Dennehy D, Conboy K, Mikalef P (2021) Artificial intelligence in information systems research: a systematic literature review and research agenda. Int J Inf Manag 60:102383
Cugurullo F, Acheampong RA (2024) Fear of AI: an inquiry into the adoption of autonomous cars in spite of fear, and a theoretical framework for the study of artificial intelligence technology acceptance. AI Soc 39:1569–1584. https://doi.org/10.1007/s00146-022-01598-6
Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114(4):864
Floridi L, Chiriatti M (2020) GPT-3: its nature, scope, limits, and consequences. Mind Mach 30:681–694
Haenlein M, Kaplan A (2019) A brief history of artificial intelligence: on the past, present, and future of artificial intelligence. Calif Manag Rev 61(4):5–14
Hofstadter DR (1999) Gödel, Escher, Bach: an eternal golden braid, twentieth anniversary edn. Basic Books
Jacovi A, Marasović A, Miller T, Goldberg Y (2021) Formalizing trust in artificial intelligence: prerequisites, causes and goals of human trust in AI. In: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, pp 624–635
Jhaver S, Karpfen Y, Antin J (2018) Algorithmic anxiety and coping strategies of Airbnb hosts. In: Proceedings of the 2018 CHI conference on human factors in computing systems, pp 1–12
Kelley PG, Yang Y, Heldreth C, Moessner C, Sedley A, Kramm A, Newman DT, Woodruff A (2021) Exciting, useful, worrying, futuristic: public perception of artificial intelligence in 8 countries. In: Proceedings of the 2021 AAAI/ACM conference on AI, ethics, and society, pp 627–637
Kozak J, Fel S (2024) The relationship between religiosity level and emotional responses to artificial intelligence in university students. Religions 15(3):331
Krafft PM, Young M, Katell M, Huang K, Bugingo G (2020) Defining AI in policy versus practice. In: Proceedings of the AAAI/ACM conference on AI, ethics, and society, pp 72–78
Ladak A, Harris J, Anthis JR (2024) Which artificial intelligences do people care about most? A conjoint experiment on moral consideration. In: Proceedings of the 2024 CHI conference on human factors in computing systems (CHI '24). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3613904.3642403
Landis JR, Koch GG (1977) An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics 33(2):363–374. https://doi.org/10.2307/2529786
Lazarus RS (1991) Cognition and motivation in emotion. Am Psychol 46(4):352
LeCun Y, Bengio Y, Hinton G (2015) Deep learning nature 521(7553):436–444
Li W, Zheng X (2024) Social media use and attitudes toward AI: The mediating roles of perceived AI fairness and threat. Hum Behav Emerg Technol 2024(1):3448083MathSciNet
Lindner A, Berges M (2020) Can you explain ai to me? teachers’ pre-concepts about artificial intelligence. In: 2020 IEEE frontiers in education conference (FIE). IEEE, pp. 1–9
Liu Z, Li H, Chen A, Zhang R, Lee Y-C (2024) Understanding public perceptions of AI conversational agents: a cross-cultural analysis. In: Proceedings of the CHI conference on human factors in computing systems. CHI ’24. Association for computing machinery, New York, NY, USA . https://doi.org/10.1145/3613904.3642840
Love J, Selker R, Marsman M, Jamil T, Dropmann D, Verhagen J, Ly A, Gronau QF, Šmíra M, Epskamp S et al (2019) Jasp: graphical statistical software for common statistical designs. J Stat Softw 88:1–17CrossRef
Mantello P, Ho M-T, Nguyen M-H, Vuong Q-H (2023) Machines that feel: behavioral determinants of attitude towards affect recognition technology—upgrading technology acceptance theory with the mindsponge model. Humanities Soc Sci Commun 10(1):1–16
Mayring P (2015) Qualitative content analysis: theoretical background and procedures. In: Bikner-Ahsbahs, A., Knipping, C., Presmeg, N. (eds) Approaches to Qualitative Research in Mathematics Education. Advances in Mathematics Education. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-9181-6_13
McCarthy J, Minsky ML, Rochester N, Shannon CE (2006) A proposal for the dartmouth summer research project on artificial intelligence, august 31, 1955. AI Mag 27(4):12–12
Mcknight DH, Carter M, Thatcher JB, Clay PF (2011) Trust in a specific technology: an investigation of its components and measures. ACM Trans Manag Inform Syst (TMIS) 2(2):1–25CrossRef
Mertala P, Fagerlund J (2024) Finnish 5th and 6th graders’ misconceptions about artificial intelligence. Int J of Child Comput Interac 39:100630CrossRef
Mertala P, Fagerlund J, Calderon O (2022) Finnish 5th and 6th grade students’ pre-instructional conceptions of artificial intelligence (ai) and their implications for AI literacy education. Comput Educ Artif Intell 3:100095CrossRef
Michael J, Holtzman A, Parrish A, Mueller A, Wang A, Chen A, Madaan D, Nangia N, Pang RY, Phang J, Bowman SR (2023) What do NLP researchers believe? results of the NLP community metasurvey. In: Rogers, A., Boyd-Graber, J., Okazaki, N. (eds.) Proceedings of the 61st annual meeting of the association for computational linguistics (volume 1: long papers). Association for computational linguistics, Toronto, Canada, pp. 16334–16368. https://doi.org/10.18653/v1/2023.acl-long.903 . https://aclanthology.org/2023.acl-long.903
Mitchell M, Krakauer DC (2023) The debate over understanding in ai’s large language models. Proc Natl Acad Sci 120(13):2215907120CrossRef
Mou Y, Xu K (2017) The media inequality: Comparing the initial human-human and human-AI social interactions. Comput Hum Behav 72:432–440CrossRef
Murray-Rust D, Nicenboim I, Lockton D (2022) Metaphors for designers working with AI. In: Lockton, D., Lenzi, S., Hekkert, P., Oak, A., Sádaba, J., Lloyd, P. (eds.), DRS2022: Bilbao, 25 June - 3 July, Bilbao, Spain. https://doi.org/10.21606/drs.2022.667
Neudert L-M, Knuutila A, Howard PN (2020) Global attitudes towards AI, machine learning & automated decision making
Pataranutaporn P, Liu R, Finn E, Maes P (2023) Influencing human-ai interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness. Nat Mach Intell 5(10):1076–1086CrossRef
Rai A, Constantinides P, Sarker S (2019) Editor’s Comments: Next-Generation Digital Platforms: Toward Human–AI Hybrids. MIS Quarterly, 43(1):iii–x. https://www.jstor.org/stable/26847848
Russell SJ, Norvig P (2016) Artificial intelligence: a modern approach. Pearson
Salles A, Evers K, Farisco M (2020) Anthropomorphism in AI. AJOB Neurosci 11(2):88–95CrossRef
Schepman A, Rodway P (2023) The general attitudes towards artificial intelligence scale (gaais): confirmatory validation and associations with personality, corporate distrust, and general trust. Int J Hum Comput Interact 39(13):2724–2741CrossRef
Scott AE, Neumann D, Niess J, Woźniak PW (2023) Do you mind? user perceptions of machine consciousness. In: Proceedings of the 2023 CHI conference on human factors in computing systems. CHI ’23. Association for computing machinery, New York, NY, USA . https://doi.org/10.1145/3544548.3581296
Searle JR (1980) Minds, brains, and programs. Behav Brain Sci 3(3):417–424CrossRef
Stein J-P, Messingschlager T, Gnambs T, Hutmacher F, Appel M (2024) Attitudes towards ai: measurement and associations with personality. Sci Rep 14(1):2909CrossRef
Sundar SS, Kim J (2019) Machine heuristic: When we trust computers more than humans with our personal information. In: Proceedings of the 2019 CHI conference on human factors in computing systems, pp. 1–9
Ta V, Griffith C, Boatfield C, Wang X, Civitello M, Bader H, DeCero E, Loggarakis A et al (2020) User experiences of social support from companion chatbots in everyday contexts: thematic analysis. J Med Internet Res 22(3):16235CrossRef
Teng M, Singla R, Yau O, Lamoureux D, Gupta A, Hu Z, Hu R, Aissiou A, Eaton S, Hamm C et al (2022) Health care students’ perspectives on artificial intelligence: countrywide survey in Canada. JMIR Med Educ 8(1):33390CrossRef
Zhang G, Chong L, Kotovsky K, Cagan J (2023) Trust in an ai versus a human teammate: The effects of teammate identity and performance on human-AI cooperation. Comput Hum Behav 139:107536CrossRef