Open Access 24-03-2023 | Open Forum

Hey Alexa, why are you called intelligent? An empirical investigation on definitions of AI

Author: Lucas Caluori

Published in: AI & SOCIETY


Abstract

This paper examines what criteria definitions of Artificial Intelligence (AI) use to define AI, what the disagreements revolving around the term AI are based on, and what correlations can be drawn to other parameters. Framing the issue as a problem of classification, a random sample of 45 definitions from various text sources was subjected to a qualitative content analysis. The criteria found are grouped into five dimensions, namely (1) learning ability, (2) human likeness, (3) state of “mind”, (4) complexity of the problem, and (5) successfulness. Further, the results support the view that there is consensus neither on which of these criteria are crucial to define AI nor on how these criteria must be fulfilled. By comparing the frequencies of the dimensions found with the collected metadata, it can be seen that most of the metadata, e.g., country, scientific field, or gender of the author, are statistically independent of the content variables, while the medium in which the definition was published shows a strong correlation. Since different mediums target different purposes and different readers, writing a definition of AI must be seen in the context of its distribution area and its goal.
Notes
I would like to thank AI & Society editor Karamjit S. Gill and the anonymous reviewers. Special thanks go to Prof. Dr. Gabriel Abend for his great academic and personal support, and to Christine Müller and Marthe Wiggers for their help in improving this paper. In addition, I would like to express my gratitude to the Herbert Maissen-Stiftung and the Heinrich Schwendener-Stiftung for supporting me in financing my studies, and to the Lucerne University of Applied Sciences and Arts for open access funding.

1 Introduction and research question

“This sentence was written by an AI.” Well, the AI did not literally write it by itself and of its own accord, but it comes from the AI translation tool DeepL. DeepL is a German company that “has set itself the goal of eliminating language barriers worldwide by using artificial intelligence” (DeepL n.d.). In this sense, AI has permeated our daily life: not only DeepL but also Amazon’s virtual assistant Alexa, Apple’s Siri, and the currently much-discussed ChatGPT are popular and pervasive examples of AI technology. While companies are not fussy about labeling their newest products as AI, it is still unclear—not to say a huge mess—what exactly AI is. This may or may not be a problem—I will say something about this shortly—but it is not my aim to address this (possible) issue. Instead, from a sociological point of view, there is an interest in how things are classified; this goes back to Émile Durkheim (2011 [1903]: 4), who wrote in collaboration with Marcel Mauss about class grouping:
We may well perceive, more or less vaguely, their resemblances. But the simple fact of these resemblances is not enough to explain how we are led to group things which, thus, resemble each other, to bring them together in a sort of ideal sphere, enclosed by definite limits, which we call a class, a species, etc.
Grouping, thus, includes defining limits. This happens, for example, by claiming a definition, and this is where this paper ties in: by investigating definitions and analyzing the setting of limits. Accordingly, the question addressed here is not “What is AI?” but rather “How does the class of AI come about?”. Before I explicate my research question in more detail, I will share some thoughts on why this—especially in the field of AI—is of relevance.
As mentioned above, not having a clear definition is not a problem in general, neither within the field of AI nor in the broader public. But as Pei Wang (2019: 2) outlines, there are different stakeholders and institutions that are interested in a clear definition: actors from the economic field, since AI is a huge business opportunity and challenge where money is at stake,1 as well as policymakers from politics and law.2 It is therefore not surprising that defining AI is high on the agenda; countless contributions in various formats, such as TED Talks (e.g., Anderson and Thrun 2017), articles (e.g., Legg and Hutter 2007), or podcasts (e.g., Ben Byford 2021), have in the past few years had the sole aim of offering a sound definition of AI. Defining AI has become a trendy social practice.3 In this paper, I would like to shed some light on this.
By investigating a bundle of definitions of AI, I aimed to identify different dimensions of criteria that are crucial for AI. In other words, I wanted to find out which features are decisive for a computer or a machine to be called intelligent. Therefore, my first research question is as follows:
1.
What criteria do definitions of AI use to define AI?
 
Out of this, two further questions are addressed:
2.
On which criteria is there (dis)agreement?4
 
3.
What correlations can be drawn between the found criteria and further parameters?
 
As should be clear from the framing of these questions, my aim is not to make a normative argument, such as rating definitions or criteria, and it is certainly not the goal to offer my own definition; rather, my analysis follows a descriptive approach. It should also be noted that many of the terms dealt with herein are—like the term AI itself—conceptually and terminologically challenging. I am aware that the identified dimensions and categories are ambiguous in the way they are interpreted. But since they are all collected from the given material, I take the liberty of adopting the categories in their blurriness; it is not my aim to clarify them in this paper.

2 Data and methods

My empirical research consisted of a content analysis of a sample of various text sources. I already intimated that defining AI seems to have moved more into the center of attention over the last few years, although the term entered the scientific stage back in the 1950s, when John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon proposed the groundbreaking workshop at Dartmouth College (see Russell and Norvig 2010: 17). To examine if and how definitions of AI have changed over time, the collection of definitions covers, at best, the time since the term was first officially used, dating back to the 1956 proposal paper of the aforementioned Dartmouth Summer Research Project on Artificial Intelligence. Since most scientific journals in the field of AI were naturally established later—for example, Elsevier’s journal Artificial Intelligence started in 1970—a media-based selection was not expedient. A more helpful way here was the search engine Google Scholar, which lists the first hit for the keywords “definition of artificial intelligence” in 1962. For this search term, Google Scholar returned a total of 1262 text sources (as of 7 July 2021), including books, journal articles, dissertations, reports, conference papers, and others.
To obtain an informative and historically broad sample, I divided the sampling frame into three time periods and drew a random sample from each of them. The first period lasts from 1962 to 1985, the second from 1986 to 2005, and the third from 2006 to 2021. Since publications about AI have increased massively over time, the earlier periods are set longer to counteract this imbalance. Nevertheless, as expected, the third period contained an exponentially larger number of results than the first and the second period (see Table 1). These samples therefore constitute different fractions of the overall number of empirical pieces found in Google Scholar. It is consequently important to note that “inferences from the samples’ estimates to the populations’ parameters have different degrees of confidence” (Abend 2006).
Table 1
Characteristics of the sample

| Time period | Total empirical articlesᵃ | Sample size | Proportion in % | Substitutions | Proportion incl. substitutions in % |
| --- | --- | --- | --- | --- | --- |
| 1962–1985 | 46 | 15 | 32.61 | 21 | 78.26 |
| 1986–2005 | 246 | 15 | 6.10 | 29 | 17.89 |
| 2006–2021 | 970 | 15 | 1.55 | 5 | 2.06 |

ᵃFound in Google Scholar by searching the keywords “definition of artificial intelligence”
For collecting my sample, I used the method of substitution since, especially in the first and the second period, many sources were not useful, either because access to the document was precluded or because they did not contain a definition. In total, 100 sources were sampled, of which 45 now form the analyzed sample. The characteristics of my sample are summarized in Table 1.
The large number of substitutions could be an indication that the search term was chosen too broadly. But since far fewer substitutions were needed in the last period, this indicates that (1) texts about AI today place more emphasis on offering clear definitions and (2) access to recent papers is easier.
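To make this sampling procedure more tangible, the following sketch shows how a stratified random draw of 15 sources per period with substitution for unusable hits could look in Python. The sampling frames and the usability check are hypothetical placeholders, not the actual lists or criteria used in this study.

```python
import random

random.seed(42)  # fixed seed so the random draw of this illustration is repeatable

# Hypothetical sampling frames: one list of references per time period,
# standing in for the Google Scholar hits for
# "definition of artificial intelligence" (as of 7 July 2021).
frames = {
    "1962-1985": [f"source_A{i}" for i in range(46)],
    "1986-2005": [f"source_B{i}" for i in range(246)],
    "2006-2021": [f"source_C{i}" for i in range(970)],
}

def is_usable(_source: str) -> bool:
    """Placeholder for the real check (document accessible and contains a
    definition of AI); here roughly 70% of sources count as usable, at random."""
    return random.random() < 0.7

def draw_with_substitution(frame, target=15):
    """Draw sources in random order until `target` usable ones are found;
    unusable draws are recorded as substitutions."""
    sample, substitutions = [], []
    for source in random.sample(frame, len(frame)):  # random permutation
        if len(sample) == target:
            break
        (sample if is_usable(source) else substitutions).append(source)
    return sample, substitutions

for period, frame in frames.items():
    sample, subs = draw_with_substitution(frame)
    share = (len(sample) + len(subs)) / len(frame) * 100
    print(f"{period}: {len(sample)} sampled, {len(subs)} substituted "
          f"({share:.2f}% of the frame incl. substitutions)")
```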
How representative these data are for research on AI in general cannot be estimated. But as other studies (e.g., Malouin 2020: 65 f.) have shown, the number of published papers related to AI has been increasing significantly since 2000. This growth reflects an uptick in publicity and funding but is also seen critically by some academics.5 Besides this trend toward more papers after 2000, the data show a notable peak in 1982–1985: While a time range of 4 years contains an overall average of 3.27 texts in the sample, the period between 1982 and 1985 contains 8 sources. Although AI seemed to be high on the research agenda especially in 1984, this is probably not George Orwell’s doing but rather attributable to the rise of expert systems introduced by Edward Feigenbaum and to some aggressive funding programs, like the one from the Japanese government (see Anyoha 2017). Regarding the random sampling, it is important to emphasize that the frequency of a definition or criterion should not be taken as a benchmark of its validity, since the accuracy of a definition is hardly statistical by nature. However, for my research interest, which revolves only around the frequency of criteria used within definitions of AI, the statistical approach is suitable.
To analyze the collected text sources, I followed the procedural model of Mayring (2014: 54). After defining the units of analysis, which led to some substitutions because of missing or unclear definitions, I started coding and creating a category system inductively out of the material. Thereby, theoretical thoughts and considerations about my research questions were continuously incorporated. After coding around 50% of my sample, I reevaluated the categories and redesigned them with regard to the purpose of the analysis. Some sources contained more than one definition. In these cases, I tried to identify the definition that best reflects the authors’ view or that the authors used for their purpose, and coded only that one. Thus, the definitions were not always created by the authors themselves but may be cited from elsewhere. A list of all coded definitions can be found in the appendix.
In advance, it must be noted that the content analysis carried out required some interpretative judgments; especially the dimensions regarding the criteria of the definitions are methodologically challenging. However, they potentially also offer more significant insights and, as Abend et al. (2013: 610) stated: “it does not follow from the need for human judgment that the instrument cannot be reliable, let alone that any interpretation of meaning or coding judgment is as good as any other.” The interpretative work varies depending on the variable. In particular, the metadata, e.g., year of publication or native country of the authors,6 are variables where little or no interpretation of meaning is involved.
For reasons outlined in the introduction, the goal of the analysis was to structure the material, after which an explorative and a relational design were followed. This means the aim is (1) to offer new insights into the criteria used to define AI and (2) to show correlations or independencies between different parameters on both content-related and context-related variables. In sum, including the metadata, the coding system comprised around 20 distinct variables, of which 12 are considered in this analysis.
To examine dependencies between the variables, I arranged the expressions found for the dimensions human likeness, state of “mind”, and successfulness on ordinal levels, which also allows for statistical tests. For this, I had to decide, arguably, which category counts as greater or higher. I did this in a general manner, guided by the question “what is harder to reach?”.7 Creating this order is based on theoretical considerations, required interpretative work on my part as well, and was therefore challenging. Although I derived the categorization and the division of the subcategories inductively from the material, they are not able to represent all characteristics and all properties, and are therefore simplified. Nevertheless, for my analytical purposes, this simplification and categorization were needed and reasonable. Where specifications about statistical correlations are given in the following sections, expressions are coded in the order represented in Table 2. This means, e.g., for the variable human likeness: “Is not a criterion” = 0, “In terms of behavior” = 1, “In terms of success” = 2, “In terms of state” = 3.
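As a minimal illustration of this recoding, the mapping for human likeness can be written down as follows; the other two dimensions are coded analogously in the order of their categories in Table 2, and the example records are invented.

```python
# Ordinal coding of the dimension "human likeness", following the order
# stated above; state of "mind" and successfulness are coded analogously
# in the order of their categories in Table 2.
HUMAN_LIKENESS_LEVELS = {
    "Is not a criterion": 0,
    "In terms of behavior": 1,
    "In terms of success": 2,
    "In terms of state": 3,
}

# Invented coded definitions (category labels as they appear in Table 2).
coded_definitions = [
    {"year": 1974, "human_likeness": "In terms of behavior"},
    {"year": 2018, "human_likeness": "In terms of state"},
    {"year": 1991, "human_likeness": "Is not a criterion"},
]

# Translate the category labels into ordinal levels for statistical tests.
levels = [HUMAN_LIKENESS_LEVELS[d["human_likeness"]] for d in coded_definitions]
print(levels)  # -> [1, 3, 0]
```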
Table 2
Criteria used for definitions of AI

| Criterion | 1962–1985 (%) | 1986–2005 (%) | 2006–2021 (%) | Total (%) |
| --- | --- | --- | --- | --- |
| Learning ability | | | | |
| Is not a criterion | 66.7 | 86.7 | 53.3 | 68.9 |
| Is a criterion | 33.3 | 13.3 | 46.7 | 31.1 |
| Human likeness | | | | |
| Is not a criterion | 46.7 | 20.0 | 20.0 | 28.9 |
| In terms of behavior | 26.7 | 40.0 | 13.3 | 26.7 |
| In terms of success | 13.3 | 26.7 | 20.0 | 20.0 |
| In terms of state | 13.3 | 13.3 | 46.7 | 24.4 |
| State of “mind” | | | | |
| Is not a criterion | 46.7 | 60.0 | 46.7 | 51.1 |
| In terms of logical/rational operations | 6.7 | 0.0 | 20.0 | 8.9 |
| In terms of reasoning | 13.3 | 6.7 | 0.0 | 6.7 |
| In terms of knowledge | 20.0 | 6.7 | 0.0 | 8.9 |
| In terms of understanding/having meaning | 13.3 | 0.0 | 13.3 | 8.9 |
| In terms of mental/cognitive capacities | 0.0 | 26.7 | 20.0 | 15.6 |
| Complexity of the problem | | | | |
| Is not a criterion | 86.7 | 93.3 | 80.0 | 86.7 |
| Is a criterion | 13.3 | 6.7 | 20.0 | 13.3 |
| Successfulness | | | | |
| Is not a criterion | 53.3 | 66.7 | 40.0 | 53.3 |
| In terms of goal-orientedness | 6.7 | 6.7 | 0.0 | 4.4 |
| In terms of satisfying problem solving | 13.3 | 13.3 | 46.7 | 24.4 |
| In terms of efficiency | 6.7 | 13.3 | 6.7 | 8.9 |
| In terms of the optimal/best solution | 20.0 | 0.0 | 6.7 | 8.9 |
| Authors take definition as objective/subjective | | | | |
| AI definition is something objective | 60.0 | 26.7 | 40.0 | 42.2 |
| AI definition is something subjective | 20.0 | 40.0 | 26.7 | 28.9 |
| Unclear | 20.0 | 33.3 | 33.3 | 28.9 |
| Supports or follows Tesler’s theorem | | | | |
| No | 86.7 | 66.7 | 86.7 | 80.0 |
| Yes | 13.3 | 33.3 | 13.3 | 20.0 |
| Total number of articles | 15 | 15 | 15 | 45 |

3 Findings 1: Five criteria used for defining AI

The first findings address research question one and offer a list of criteria that are used for definitions of AI. Out of my material, I identified five main dimensions that—so my argument goes—are central to the dissent over the question “What is AI?”: (1) learning ability, (2) human likeness, (3) state of “mind”, (4) complexity of the problem, and (5) successfulness. As the data show, the dissent is not only about which of these criteria are distinctive for AI, but also about the degree to which a criterion must be fulfilled.
In addition, I coded two further variables regarding the definitions: whether the authors claim an objective definition of AI or see the definition as something subjective, and whether the given definition supports or follows what is known as Tesler’s theorem8: “Intelligence is whatever machines haven't done yet” (Tesler n.d.). The categories found and their frequencies are summarized in Table 2, grouped by time period. With that, I turn to the individual dimensions and have a look at their development over time.

3.1 Learning ability

Learning ability9 as a criterion for AI is of particular interest since machine learning is often used as a synonym for AI today (see Anderson 2017). As the data show, this criterion was used more frequently in the most recent time period than in the previous two. Furthermore, if only publications after 1985 are included, a Pearson’s Chi-square test shows a significant positive correlation between the year of publication and the use of this criterion (r = 0.43, p = 0.017), while over the entire period, the correlation is less striking (r = 0.14, p = 0.369). Even more noticeable are the data for the period 2006–2021: six of the seven definitions that contained learning ability as a criterion were published in 2018 or later. Since the definitions are highly divided over whether to use this parameter, I argue that—especially in recent years—learning ability can be identified as one of the critical dimensions in the dissent over definitions of AI.
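To illustrate how such statistics can be computed, the following sketch relates publication year to a 0/1 indicator for the learning-ability criterion, using Pearson's r (here a point-biserial correlation) and a chi-square test on binned years. The data are invented, and the exact computation behind the figures reported above may differ.

```python
from scipy import stats

# Invented illustration: publication years and a 0/1 indicator for whether
# "learning ability" is used as a criterion in the respective definition.
years = [1987, 1992, 1999, 2004, 2009, 2013, 2018, 2019, 2020, 2021]
learning = [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]

# Correlation between year and the binary indicator; Pearson's r with a
# 0/1 variable is the point-biserial correlation.
r, p = stats.pearsonr(years, learning)
print(f"r = {r:.2f}, p = {p:.3f}")

# A chi-square test needs categories, so the years are binned here
# (simply before/after 2005) into a 2 x 2 contingency table.
table = [[sum(1 for y, used in zip(years, learning)
              if (y > 2005) == later and used == row)
          for later in (False, True)]
         for row in (0, 1)]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_chi:.3f}")
```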

3.2 Human likeness

Resemblance to human beings is probably what makes AI so fascinating, not only in science but also in pop culture. As we can see from the data, it is the most used criterion for definitions of AI, although the similarity is applied at different levels. What is especially striking here is that this variable, just like learning ability, shifts significantly over time; while in the first period almost half of the definitions did not use this criterion, it seems to have become more important later. In addition, we can see that the level that has to be reached on this parameter increased as well. A Spearman’s test shows a significant positive correlation between the year of publication and the use, or rather the pursued level, of this criterion (ρ = 0.46, p = 0.001). These findings support the argument that what AI is, or better: what counts as AI, is a question of history. However, although we can see some consensus here, since most of the definitions use this parameter, there is still dissent over the required level, that is, over how AI needs to be humanlike. As the most used criterion in my data and an important constituent of the common conception of AI, it could be argued that any definition of AI is partly based on human likeness. If this is true, my analysis must be understood as an inquiry into whether human likeness is used implicitly or explicitly, and in relation to which aspects.
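A Spearman rank correlation of this kind can be sketched as follows, using the ordinal human-likeness levels introduced in the methods section; the data are invented and serve only to illustrate the procedure.

```python
from scipy import stats

# Invented data: publication years and the ordinal human-likeness level of
# each definition (0 = not a criterion, 1 = behavior, 2 = success, 3 = state).
years = [1966, 1974, 1983, 1992, 1999, 2005, 2011, 2015, 2018, 2021]
levels = [0, 1, 1, 0, 2, 2, 1, 3, 3, 3]

# Spearman's rho between year of publication and the pursued level of human
# likeness; the rank-based statistic handles the ordinal scale and ties.
rho, p = stats.spearmanr(years, levels)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```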

3.3 State of “mind”

The historical development mentioned above can also be found in this dimension, even if it is less obvious. For example, it can be seen that the category “knowledge” occurs particularly around the year 1980, which coincides with the rise of knowledge-based and expert systems (see Russell and Norvig 2010: 23 f.). Since this parameter shows large variation—both in how it is referred to and in whether it is used at all—it is arguable that there is no consensus. What is more, this category, apart from the distributions raised in this study, is probably highly challenging in substantive terms, which makes consensus even less likely. But this is another story.10

3.4 The complexity of the problem

The complexity of the problem is the least controversial dimension, in the sense that most definitions do not consider it crucial.11 This is true for all time periods. With this said, one could claim that there is nothing of interest in this dimension. But I would like to argue why this result is still noteworthy: First, to Marvin Minsky, who is considered one of the founders of the term AI and had—and still has—a large influence on the research of AI,12 this was the only criterion when he defined (artificial) intelligence solely as “the ability to solve hard problems” (Minsky 1985). Second, the hope that AI could solve hitherto unsolved problems often resonates in public discourse, which implies that AI must be able to solve difficult tasks. However, since other dimensions offer more controversial aspects, I turn to the next one.

3.5 Successfulness

The dimension successfulness shows distribution characteristics similar to those of state of “mind”, which means it is also controversial in both use and levels. A general direction cannot be seen, but what is striking is how frequently it has been used since 2006 in terms of “satisfying problem solving”. One interpretation could be that there is some disillusionment about AI as the best solution to problems, combined with the general expectation that AI can offer pragmatic solutions to our real problems, e.g., climate change, as is often claimed (see Marr 2021). In any case, even if the cruciality of this criterion is controversial, it can be noted that at least in the most recent period there is some consensus about the required level.

3.6 Author claims for objectivity

To capture the setting in which the definitions are made, I coded whether the authors claim their definition (or a cited definition) to be objectively true or see it as something subjective. This offers some insights into the practice of defining AI, since the claim made could be essential to this practice. Over time, we cannot see significant differences in this variable, but I will come back to it later.

4 Findings 2: The emergence space of definitions

With this more general and historical analysis made, I turn to my third research question: What correlations can be drawn between the found criteria and further parameters? Or to put it another way: Is defining AI, and the dissent about it, something that can be explained “culturally”? To answer this question, I collected parameters from the gathered material that represent social or “cultural” aspects. These metadata are summarized in Table 3. Subsequently, I compared these parameters with the criteria found to test whether there are correlations. Before I come to these findings, some comments and limitations must be addressed. First, the term “cultural”—highly disputed in general (see Romano 2002)—is probably not quite appropriate here, since my variables are not sufficient to capture culture, nor do all the variables analyzed here represent culture. But I use culture because it is suitable in the sense that it refers to a larger context: the emergence space of the definitions. I could have used external or social factors or something similar, but the word factor is problematic in view of my second caveat: It is not my aim to make causal claims about relationships. Although my findings possibly offer some indications of interdependencies, arguing for causality would need further research as well as a more theoretical explanation.
Table 3
Collected metadata

| Variable | 1962–1985 (%) | 1986–2005 (%) | 2006–2021 (%) | Total (%) |
| --- | --- | --- | --- | --- |
| Year of publication | | | | |
| 1962–1985 | 100.0 | 0.0 | 0.0 | 33.3 |
| 1986–2005 | 0.0 | 100.0 | 0.0 | 33.3 |
| 2006–2021 | 0.0 | 0.0 | 100.0 | 33.3 |
| Type of publication | | | | |
| Book | 0.0 | 20.0 | 0.0 | 6.7 |
| Book section | 0.0 | 0.0 | 13.3 | 4.4 |
| Conference paper | 13.3 | 0.0 | 6.7 | 6.7 |
| Journal article | 13.3 | 20.0 | 46.7 | 26.7 |
| Report | 33.3 | 6.7 | 6.7 | 15.6 |
| Thesis Ph.D. | 26.7 | 20.0 | 6.7 | 17.8 |
| Thesis non-Ph.D. | 13.3 | 33.3 | 20.0 | 22.2 |
| Country | | | | |
| USA | 93.3 | 60.0 | 20.0 | 57.8 |
| Not USA | 6.7 | 40.0 | 80.0 | 42.2 |
| Gender of the author(s) | | | | |
| Unknown | 0.0 | 6.7 | 6.7 | 4.4 |
| Male | 80.0 | 73.3 | 53.3 | 68.9 |
| Female | 20.0 | 20.0 | 13.3 | 17.8 |
| Both | 0.0 | 0.0 | 26.7 | 8.9 |
| Authors’ field of expertise | | | | |
| Unknown | 20.0 | 13.3 | 0.0 | 11.1 |
| Computer science and informatics | 53.3 | 26.7 | 26.7 | 35.6 |
| Philosophy and social science | 13.3 | 33.3 | 6.7 | 17.8 |
| Economics and business administration | 6.7 | 0.0 | 60.0 | 22.2 |
| Physics and nature science | 0.0 | 6.7 | 6.7 | 4.4 |
| Others (e.g., musicology, theology) | 6.7 | 20.0 | 0.0 | 8.9 |
| Total number of articles | 15 | 15 | 15 | 45 |

4.1 The medium

The first cultural parameter I consider is the type of publication. In my sample, I found seven different types of mediums: books, book sections, conference papers, journal articles, reports, and theses (Ph.D. and non-Ph.D.). Cross-tabulated with the substantive variable human likeness, it can be shown that definitions from reports or conference papers refer much less to human likeness: Only two out of ten text sources from these mediums use this criterion, while across all mediums it is used in 71.1% of the definitions (see Table 4). For an interpretation of this distribution, one must be aware that there is presumably also an interaction with the year of publication, because the types of publication are not distributed equally over time. Nevertheless, compared to other variables this is very striking, which could mean that the question of whether human likeness is crucial for a definition of AI varies with the medium, and thus with the target audience.
Table 4
Cross table: Type of publication

| Type of publication | Human likeness is criterion¹: No (%) | Human likeness is criterion¹: Yes (%) | Authors take definition as²: Objective (%) | Authors take definition as²: Subjective (%) | Supports or follows Tesler’s theorem³: No (%) | Supports or follows Tesler’s theorem³: Yes (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Book | 33.3 | 66.7 | 100.0 | 0.0 | 100.0 | 0.0 |
| Book section | 0.0 | 100.0 | 100.0 | 0.0 | 100.0 | 0.0 |
| Conference paper | 100.0 | 0.0 | 100.0 | 0.0 | 100.0 | 0.0 |
| Journal article | 16.7 | 83.3 | 22.2 | 77.8 | 66.7 | 33.3 |
| Report | 71.4 | 28.6 | 50.0 | 50.0 | 100.0 | 0.0 |
| Thesis Ph.D. | 25.0 | 75.0 | 100.0 | 0.0 | 100.0 | 0.0 |
| Thesis non-Ph.D. | 0.0 | 100.0 | 50.0 | 50.0 | 50.0 | 50.0 |
| Total | 28.9 | 71.1 | 59.4 | 40.6 | 80.0 | 20.0 |

Fisher’s test: ¹p = 0.001, ²p = 0.027, ³p = 0.066
The same is true for whether an author claims a definition to be objectively true or not: While claims for the subjectivity of definitions were found more frequently in journal articles, definitions in Ph.D. theses were most often claimed to be objectively true (see also Table 4). A noticeable variance, although not statistically significant, can also be found in whether a definition supports or follows Tesler’s theorem: this criterion could be found only in journal articles and non-Ph.D. theses.
The findings presented here, I argue, are a strong indication that definitions of AI are not independent of the medium they are published in—neither regarding the criteria used nor the claims the authors make with the definition. Here, a theoretical link can be made to Ludwik Fleck (2017 [1935]: 146 f.), who arranges different scientific activities and their results according to their medium: Thus he speaks of “Zeitschriftenwissenschaften” (journal sciences), “Handbuchwissenschaften” (handbook sciences), “Lehrbuchwissenschaften” (textbook sciences), and “populäre Wissenschaften” (popular sciences).
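Cross tables of this kind, and the associated tests, can be sketched as follows. The coded sample shown is invented, and since scipy offers Fisher's exact test only for 2 × 2 tables, a chi-square test is used here as a stand-in for the r × c Fisher's exact test reported in Tables 4 to 7 (which is available, for example, in R's fisher.test).

```python
import pandas as pd
from scipy import stats

# Invented coded sample: one row per definition.
df = pd.DataFrame({
    "medium": ["Journal article", "Report", "Thesis Ph.D.", "Journal article",
               "Conference paper", "Report", "Book", "Thesis non-Ph.D."],
    "human_likeness": ["Yes", "No", "Yes", "Yes", "No", "No", "Yes", "Yes"],
})

# Absolute counts and row percentages of medium vs. use of the
# human-likeness criterion (analogous to the layout of Table 4).
counts = pd.crosstab(df["medium"], df["human_likeness"])
row_pct = pd.crosstab(df["medium"], df["human_likeness"], normalize="index") * 100
print(row_pct.round(1))

# Association test; a chi-square test is used as a stand-in for the
# r x c Fisher's exact test reported in the paper's tables.
chi2, p, dof, expected = stats.chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```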

4.2 Field of expertise

While the type of publication shows some significant correlations, the field the author comes from does not seem to be crucial for the criteria used for the definitions of AI (see Table 5). Although we can see some trends over time—definitions today are more often offered from the field of economics (see Table 3)—substantively no significant correlation could be found. This indicates that the dissent over the definition of AI is not a dissent between different research fields. What is most striking here is that comparatively many definitions that support or follow Tesler’s theorem do not come from the main classical fields of AI research.
Table 5
Cross table: Field of expertise

| Field of expertise | Human likeness is criterion¹: No (%) | Human likeness is criterion¹: Yes (%) | State of “mind” is criterion²: No (%) | State of “mind” is criterion²: Yes (%) | Supports or follows Tesler’s theorem³: No (%) | Supports or follows Tesler’s theorem³: Yes (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Computer science and informatics | 37.5 | 62.5 | 43.8 | 56.3 | 81.3 | 18.8 |
| Philosophy and social science | 25.0 | 75.0 | 25.0 | 75.0 | 87.5 | 12.5 |
| Economics and business administration | 10.0 | 90.0 | 60.0 | 40.0 | 90.0 | 10.0 |
| Physics and nature science | 50.0 | 50.0 | 100.0 | 0.0 | 50.0 | 50.0 |
| Others | 25.0 | 75.0 | 75.0 | 25.0 | 25.0 | 75.0 |
| Total | 27.5 | 72.5 | 50.0 | 50.0 | 77.5 | 22.5 |

Fisher’s test: ¹p = 0.497, ²p = 0.263, ³p = 0.077

4.3 Gender of the author

In the use of the criterion human likeness, no difference is observable between men and women (Fisher’s test: p = 0.665). However, if we analyze at which level men and women apply this parameter, a different picture emerges: Table 6 shows that male authors use this parameter more often in terms of “behavior” or “state”, while female authors focus on “success”. This is evidence for the argument that men and women have different views of what AI is; not in the criteria, but in the levels that need to be reached. Needless to say, this is a simplified conclusion, and to make a general claim about gender differences in the practice of defining AI, deeper research is required.
Table 6
Cross table: Gender of author

| Gender | Human likeness is criterion¹: No (%) | Human likeness¹: In terms of behavior (%) | Human likeness¹: In terms of success (%) | Human likeness¹: In terms of state (%) | Authors take definition as²: Objective (%) | Authors take definition as²: Subjective (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Female | 37.5 | 12.5 | 50.0 | 0.0 | 85.7 | 14.3 |
| Male or both | 25.7 | 28.6 | 14.3 | 31.4 | 52.0 | 48.0 |
| Total | 27.9 | 25.6 | 20.9 | 25.6 | 59.4 | 40.6 |

Fisher’s test: ¹p = 0.053, ²p = 0.195

4.4 Country

Regarding the dissent about defining AI, it could be suggested that definitions vary across different places in the world. But if we have a look at the corresponding variable (see Table 7), we see that the country the authors come from is statistically independent of all the substantive dimensions. It must be noted here that only text sources in English were taken into account, which constrains this argument.
Table 7
Cross table: Country

| Country | Learning ability is criterion¹: No (%) | Learning ability is criterion¹: Yes (%) | Human likeness is criterion²: No (%) | Human likeness is criterion²: Yes (%) | State of “mind” is criterion³: No (%) | State of “mind” is criterion³: Yes (%) | Success is criterion⁴: No (%) | Success is criterion⁴: Yes (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Not USA | 73.7 | 26.3 | 26.3 | 73.7 | 52.6 | 47.4 | 57.9 | 42.1 |
| USA | 65.4 | 34.6 | 30.8 | 69.2 | 50.0 | 50.0 | 50.0 | 50.0 |
| Total | 68.9 | 31.1 | 28.9 | 71.1 | 51.1 | 48.9 | 53.3 | 46.7 |

Fisher’s test: ¹p = 0.746, ²p = 1, ³p = 1, ⁴p = 0.895
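For a 2 × 2 comparison such as the one above, Fisher's exact test can be run directly with scipy. The counts below are reconstructed from the rounded percentages in Table 7 for the learning-ability criterion, so the sketch approximates the published test rather than reanalyzing the original data.

```python
from scipy.stats import fisher_exact

# 2 x 2 counts reconstructed from the rounded percentages in Table 7 for the
# learning-ability criterion (19 non-US and 26 US sources); an approximation
# of the underlying data rather than a reanalysis of it.
table = [[14, 5],   # Not USA: 73.7% no, 26.3% yes
         [17, 9]]   # USA:     65.4% no, 34.6% yes

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # p approx. 0.746
```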

5 Conclusion

The main finding of this paper is that the dissent on the question “What is AI?” can be broken down into five different parameters, namely learning ability, human likeness, state of “mind”, successfulness, and complexity of the problem. Of these, the latter is not as controversial as the former ones. While the use of some of these parameters has shifted since the introduction of the concept of AI in the 1950s, the discourse on definitions of AI in general seems to revolve stably around these dimensions. But as the data have shown, there is no overarching consensus, neither on which of these criteria are crucial to mark out AI nor on how these criteria must be fulfilled. However, by focusing more closely on some aspects, it can be said that there is some general agreement in the use of two of these parameters: First, the complexity of the problem is not decisive for most of the definitions, and second, since the 1990s human likeness has been widely accepted as a key attribute of AI—although on different levels.
By comparing the frequencies found with the collected metadata, which represent some “cultural” aspects of the emergence space of the text sources, it can be seen that only a few of them show significant correlations. The data do not show any relation between the content characteristics and the country, the gender of the author, or the scientific field in which the text source was written. On the other hand, some substantive variables are not statistically independent of the medium the definition was published in. Since different mediums target different purposes and different audiences, we can assume that writing a definition of AI must be seen in the context of its distribution area and its goal. This finding may correspond with the often-made distinction between a working definition and a dictionary definition (see Wang 2019: 3), although I believe that the boundaries of these categories must be considered fluid.
The subsequent question of what implications the findings of this paper have—for the research of AI itself or for the social practice of defining AI—is left open. Likewise, no causal relationships can be claimed on the basis of the correlations found here, since this would require deeper and wider research as well as theoretical linkages. So far, the results must be viewed with prudence and caution. The data show convincingly that defining Artificial Intelligence, as a form of classifying, is a social practice where different external, cultural, and historical influences come into play and must be taken into account.
I aimed to point out that in the ongoing debate about AI, we should not only focus on the definitions themselves but also pay attention to the context of their social reality. By contrasting various content criteria with various cultural dimensions, this paper attempts to contribute to that end and hopefully offers starting points for further research.
Finally, I do not know what Alexa would answer if one asked “her” why “she” is labeled intelligent. One suggestion could be something like: “Because I can listen and talk like a human being.”13 According to the findings presented here, this is only a partly good answer, since the trend regarding human likeness points more and more toward human-like states (e.g., of consciousness). Speaking, which is a manner of behaving like a human, no longer seems sufficient to be called intelligent. Sorry, Alexa.

Declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix

Data: Collection of the sampled definitions¹
¹This list provides a synopsis of the coded definitions. However, these extracts only contain the coding unit, but not the context unit that was referred to during coding.
²Some definitions are quoted or contain quotations and could, therefore, originate from authors other than those listed here. To identify the original author, the source must be consulted.
Definition
Source²
One definition of Artificial Intelligence says that it is the ability of a machine to perform a task that it was not specifically designed or built to do. (p. 1)
Hodes, L. (1966): Programming languages, logic and cooperative games. In: Proceedings of the first ACM symposium on Symbolic and algebraic manipulation, New York, USA, 1 January: 1201–1217
[T]he central paradigm of A.I. research is heuristic search. A tree of “tries” (aliases: sub-problems, reductions, candidates, solution attempts, alternatives-and-consequences, etc.) is sprouted (or sproutable) by a generator. Solutions (variously defined) exist at particular (unknown) depths along particular (unknown) paths. To find one is a “problem”. For any task regarded as nontrivial, the search space is very large. Rules and procedures called heuristics are applied to direct search, to limit search, to constrain the sprouting of the tree, etc. (p. 7)
Feigenbaum, E.A. (1968): Artificial Intelligence: Themes In The Second Decade. Memo AI-67, 15 August. Stanford University. Available at: https://apps.dtic.mil/sti/citations/AD0680487 (accessed 11 June 2021)
Intuitively what is being postulated [by Artificial Intelligence] is the situation in which a number of possible solutions for a problem (which itself may be large or small) are available, and at least one of these solutions is desired. In most cases, the “solution” is required to be optimal according to some criteria. The facet of AI techniques which involve “self-improving” facilities in the programs is also needed. (p. 60 f.)
Sammet, J.E. (1971): Challenge to Artificial Intelligence: Programming Problems to be Solved. In: IJCAI, September 1971: 59–65
A broad, albeit vague, definition of Artificial Intelligence is anything done by a computer which would be called intelligent if done by a human. (p. 1)
Gillenson, M.L. (1974): The interactive generation of facial images on a CRT using a heuristic strategy. The Ohio State University. Available at: https://www.proquest.com/openview/6b9472acc265c91cf371530077c7d937/1?pq-origsite=gscholar&cbl=18750&diss=y (accessed 22 July 2021)
One biting definition of Artificial Intelligence characterizes it by saying that it encompasses those computer-based activities in which one does not really know what one actually is doing. (p. 136)
Marmier, E. (1975): Automatic verification of Pascal programs. ETH Zürich. Available at: http://hdl.handle.net/20.500.11850/136876 (accessed 15 June 2021)
Artificial Intelligence research is that part of computer science that is concerned with the symbol-manipulation pro[gramm]es that produce intelligent action. By “intelligent action” is meant [an a]ct or decision that is goal-oriented, arrived at by an understand[able] chain of symbolic analysis and reasoning steps, in which knowledge of the work informs and guides the reasoning. (p. 5)
Feigenbaum, E.A., Lederberg, J. and Buchanan, B. (1977): Final Technical Report on Heuristic Programming Project for Contract Period August 1, 1973 through July 31, 1977. Stanford: Computer Science Department of Stanford University. Available at: https://apps.dtic.mil/sti/citations/ADA131700 (accessed 11 June 2021)
(a) Artificial things are synthesized ... by man. (b) Artificial things may imitate appearances in natural things while lacking in one or many respects the reality of the latter. (c) Artificial things can be characterized in terms of functions goals adaptation. Research in AI, thus, seeks to find effective machine processes (a above) for accomplishing complex tasks (c above) imitating human processes only when this proves to be the most efficient way to do the job (b above). McCarthy and Hayes (1969) divide “intelligence” into two parts: (1) epistemological dealing with representations of knowledge; and (2) heuristic dealing with the problem solving mechanisms that operate on that representation. (p. 7)
Smith, L.C. (1979): Selected Artificial Intelligence Techniques In Information Retrieval Systems Research. Syracuse University. Available at: https://www.proquest.com/openview/3ff3a82afb865da1baaa87dacf7204fa/1?cbl=18750&diss=y&pq-origsite=gscholar (accessed 15 June 2021)
Artificial Intelligence is a branch of computer science which studies methods of making computers behave in ways we think of as “intelligent”. Since the concept of intelligence and how it works is not yet fully understood, it is not surprising that this definition of Artificial Intelligence is vague; however, intelligent behavior is generally assumed to include reasoning, inference, and judgment. (p. 1)
Groveston, P.K. (1982): An Overview of Artificial Intelligence-Based Teaching Aids. 1 December. Virginia: The Mitre Corporation. Available at: https://apps.dtic.mil/sti/citations/ADA127702 (accessed 8 June 2021)
There is no standard definition of Artificial Intelligence. An often used definition is the capability of a machine to do a task which if done by a human would be considered to require intelligence. […] This definition is, however, too simplistic. Under IT, all bit and ate would be considered manifestations of AI. Since, locating faults requires intelligence. […] Hence, a more sophisticated definition is called for. To eliminate the trivial mechanizations falling under the simple definition, we shall add the criteria that the machine must have the capability of forming and manipulating abstractions. This would include such activities as forming hypotheses (e.g., the location of a failure), the testing of hypotheses, learning, and inference. It should be noted that, for our purposes, it does not matter whether or not the machine solves a problem in the same manner a human would. While much research in AI is directed at understanding human intelligence, we will be completely indifferent to the processes involved, so long as they represent the most cost-effective solutions to the practical applications of interest. (p. 11)
Coppola, A. (1983): Artificial Intelligence Applications to Maintenance Technology. Working Group Report (IDA/OSD R&M Study). IDA Record Document D-28, 1 August. Institute for Defense Analyses: Science and Technology Division. Available at: https://apps.dtic.mil/sti/citations/ADA137329 (accessed 11 June 2021)
The term “intelligence” will, thus, refer to a style of behavior rather than something hidden away inside our heads. If we ask, then, in an age in which robots and computers have begun to drastically alter the face of our work environment, what it is about computers that makes them so like us, or, how smart can these machines get, it seems that an equally relevant question to ask is directed at what it is about humans that computers (so far) cannot imitate. What are the limits of Artificial Intelligence? (p. 1)
Brice, J.C. (1984): Mind over matter: Considerations of understanding and artificial intelligence. Honors Theses (Paper 414). Available at: https://scholarship.richmond.edu/honors-theses/414 (accessed 22 July 2021)
To paraphrase the Turing test: If we can build systems such that users believe they are talking to an “expert “ in an expert system, or they really can speak [type] natural language when querying a database or applications program, then for our intents and purposes, such systems are intelligent. (p. 64)
Cooper, P.A. (1984): Artificial intelligence: A heuristic search for commercial and management science applications. Massachusetts Institute of Technology. Available at: https://core.ac.uk/reader/4425494 (accessed 22 July 2021)
Artificial Intelligence “AI” […] is the study of how to make computers do things at which, at the moment, humans are better. (p. 38)
Kohler, J.A. (1984): A study of sensor technology for robotic workcells. Lehigh University. Available at: https://core.ac.uk/download/pdf/228674174.pdf (accessed 8 June 2021)
[Intelligence is] “the ability to learn or understand or to deal with new and trying situations, the skilled use of reason … the ability to apply knowledge.” The research of AI, therefore, deals primarily with the development of man-made systems, which exhibit attributes of intelligent human behavior, such as the ability to learn, to understand, to use reason and to apply knowledge to various situations to achieve a solution. (p. 7)
Kruppenbacher, T.A. (1984): The Application of Artificial Intelligence to Contract Management. Technical Manuscript, 1 August. Construction Engineering Research Laboratory. Available at: https://apps.dtic.mil/sti/citations/ADA146681 (accessed 15 June 2021)
Artificial Intelligence is the ability of a human-made machine (an automaton) to emulate or simulate human methods for the deductive and inductive acquisition and application of knowledge and reason. (p. 182)
Bock, P. (1985): The Emergence of Artificial Intelligence: Learning to Learn. AI Magazine 6(3): 180–190
Artificial Intelligence is the study of how the human brain solves exponentially hard search problems in polynomial time. (p. 2)
Routh, R.L. (1985): Cortical Thought Theory: A Working Model of the Human Gestalt Mechanism. Faculty of the School of Engineering of the Air Force Institute of Technology, Ohio. Available at: https://apps.dtic.mil/sti/citations/ADA163215 (accessed 8 June 2021)
Artificial Intelligence is about making machines behave in a way similar to that of humans. Winston (1984) has given the following definition of Artificial Intelligence “Artificial intelligence is the study of ideas that enable computers to be intelligent.” (p. 10)
Bunn, A.R. (1987): A study of the design expertise for plants handling hazardous materials. Loughborough University. Available at: https://core.ac.uk/reader/288390308 (accessed 22 July 2021)
To me, AI is two unrelated things—human emulation that does not work and a few software techniques that are ready for use. I define AI as a programming style, where programs operate on data according to rules accomplish goals. (p. 10)
Taylor, W.A. (1988): What Every Engineer Should Know about Artificial Intelligence. Cambridge: MIT Press
“Artificial intelligence […] is the study of how to make computers do things which, at the moment, people do better.” (p. 225)
Schaffer, J.W. (1990): Intelligent Tutoring Systems: New Realms in CAI? Music Theory Spectrum 12(2): 224–235
A formal definition of Artificial Intelligence cannot be provided, but for our purpose it suffices to say that “AI”, as it is often simply called, consist of the science of processing symbols by computers. What exactly is subsumed under the AI umbrella changes with time, a fact which has nicely been summarized in Tesler’s law: AI is whatever hasn't been done yet. (p. 69)
Adorf, H.M. (1991): Artificial Intelligence for Astronomy. ESO Course Held in 1990. The Messenger 63: 69–72
“Artificial Intelligence is the part of computer Science concerned with designing intelligent computer systems, that exhibit the characteristics we associate with intelligence in human behaviour.” (p. 66)
Hasnain, S.B. (1992): Computer automation and artificial intelligence. INIS-MF–13,741. Pakistan Inst. of Nuclear Science and Technology. Available at: http://inis.iaea.org/Search/search.aspx?orig_q=RN:25029353 (accessed 15 June 2021)
In general, Al concerns itself with making machines do things that seem intelligent. In some respects, the ultimate goal of Al is to build a mechanical person, a goal far beyond current technology. Research in Al touches on several fields of study, all involving activities that are easy for people, but hard for computers, to carry out. (p. 68)
Hecker, M.A. (1992): An Expert System for Processing Uncorrelated Satellite Tracks. Naval Postgraduate School, Monterey, California. Available at: http://adsabs.harvard.edu/abs/1992MsT..........6H (accessed 8 June 2021)
For the purposes of this study, Artificial Intelligence will refer to that discipline which has the primary goal of making machines smarter […]. (p. 16)
Piotrowski, S.M. (1992): An investigation into experts’ opinions of the role of artificial intelligence in college engineering and science curricula during the next five years. Purdue University. Available at: https://www.proquest.com/openview/00c0b7f2e814a5bb04522d44405e2a24/1?cbl=18750&diss=y&pq-origsite=gscholar (accessed 15 June 2021)
A program is often said to exhibit intelligence if it does tasks that are thought to require intelligence if undertaken by a human being: “Man is the measure of all things.” A successful expert system—that is, a system capable of performing automatically one or more professional tasks—is, by this definition, an example of AI. However, this simple definition is not quite adequate. We do not call programs that invert matrices, solve linear programming problems, or perform other number-crunching feats AI programs. […] Moreover, AI programs do not usually seek optimal solutions (although they may find local optima for sub-problems), but look for solutions that are satisfactory, or “as good as possible”, and discoverable within acceptable computational times and costs. (p. 64)
Simon, H.A. (1993) A Very Early Expert System. IEEE Annals of the History of Computing 15(3): 64–68
Common understanding of what AI is include:—intelligent behavior: The machine is able to solve logical problems and communicate using natural language […]. The machine performs comparable to what an intelligent human being would do. This is the standard definition which will be used here—a machine doing things usually done by humans. […]—Tesler’s Proposal: Artificial Intelligent is all that has not been done yet […].—Machines having minds in exactly the same way we do. (p. 19 f.)
Lameter, C. (1996): The Limits of formal Languages. Available at: http://www.christ21.org/pdf/1996-formal-languages.pdf (accessed 15 June 2021)
Artificial Intelligence (AI) is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior—understanding language, learning, reasoning, solving problems, and so on. (p. 8)
Johnson, N.R. (1999): A Hybrid Intelligent Model for Rapid Assessment of Military Bridging Procedures. University Of Missouri, Rolla. Available at: https://rzs.swisscovery.slsp.ch (accessed 8 June 2021)
Artificial Intelligence is a term that has many different definitions. One of the best definitions is, “AI is behavior by a machine that, if performed by a human being, would be called intelligent” […]. A different definition provided by Rich and Knight is, “AI is the study of how to make computers do things at which at the moment, people are better” […]. (p. 19)
Erturk, A. (2000): An Expert System for Reward Systems Design. Naval Postgraduate School Monterey Ca. Available at: https://apps.dtic.mil/sti/citations/ADA383532 (accessed 15 June 2021)
Generally speaking, however, so called intelligent techniques are inspired by an understanding of information processing in biological systems. In some cases, an attempt is made to mimic some aspects of biological systems. When this is the case, the process will include an element of adaptive or evolutionary behavior similar to biological systems, and like the biological model, there will often be a very high level of connectionism between distributed processing elements. Data and information processing paradigms that exhibit these attributes can be referred to as members of the family of techniques that make up the knowledge-based engineering area. Researchers are trying to develop AI systems that are capable of performing, in a limited sense, “like a human being.” (p. 9)
Jain, Ashlesha, Jain Ajita, Jain S, et al. (2000): Artificial Intelligence Techniques in Breast Cancer Diagnosis and Prognosis. Singapore: World Scientific
Weak Artificial Intelligence is the design of computer programs with the intention of adding functionality while decreasing user intervention. […] Strong Artificial Intelligence is the design of a computer program that may be considered a self-contained intelligence (or intelligent entity). The intelligence of these programs is defined more in terms of human thought. They are designed to think in the same way that people think. Passage of the Turing test, for example, might be one criterion for development of a strong AI system. (p. 3 f.)
Dunnuck, N.S. (2002): The Effect of Emerging Artificial Intelligence Techniques on the Ethical Role of Computer Scientists. University of Virginia. Available at: http://www.cs.virginia.edu/~evans/theses/dunnuck.pdf (accessed 22 July 2021)
So the study of Artificial Intelligence is about producing some human made device, with the ability to acquire knowledge, and using this knowledge to reason rational. […] Artificial Intelligence can be defined as the simulation of human intelligence on a machine, so as to make the machine efficient to identify and use the right piece of “knowledge”, which in my opinion is in the same category as the Turing test, and that is systems that act like humans. Whether one agrees with this definition or not, it should be intuitively clear, that Artificial Intelligence is about getting machines to find solutions to problems in ways somewhat equal to that of humans, and this involves borrowing characteristics from human intelligence, and applying them to machines. (p. 4–6)
Heinze, M. (2004): Intelligent game agents. Aalborg University Esbjerg. Available at: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.195&rep=rep1&type=pdf (accessed 8 June 2021)
The definition of Artificial Intelligence (or, as we argue, of AI), in the Handbook––computer systems exhibiting exclusively cognitive capacities such as language, learning, reasoning, and problem solving––is characteristic of AI’s current research program. (p. 23)
Franchi, S. and Güzeldere, G. (2005): Mechanical Bodies, Computational Minds: Artificial Intelligence from Automata to Cyborgs. Cambridge: MIT Press
[I]ntelligence is the general ability to recognize patterns that have been previously encountered, and relate new patterns to those previously encountered in such a way that allows for continual learning and adaptation. […] Thus, computer systems which can accurately predict and adapt to changes in a variety of environments may be considered intelligent in different way than those which pass the Turing Test. (p. 3)
Stiner, D., Campbell, A. and March, C.N. (2011): Autonomous Learning in a Simulated Environment. CPRE 585X Project Proposal. Available at: /paper/Autonomous-Learning-in-a-Simulated-Environment-CPR-Stiner-Campbell/6829c0ef03e17a850e11f188ec2d23fc41953448 (accessed 8 June 2021)
Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. It has been defined the field as “the study and design of intelligent agents,” where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. […] A standard definition of Artificial Intelligence, or AI, is that computers simply mimic behaviors of humans that would be regarded as intelligent if a human being did them. (p. 495)
Ali, I. (2013): Artificial Life with Artificial Intelligence. International Journal Of Engineering Sciences & Research Technology 2(3): 495–500
But most definitions presented in this regard are established on four premises as follows:—systems that think logically,—systems that act logically,—systems that think like human,—systems that act like human. Perhaps one can describe Artificial Intelligence as Karami states: “Artificial intelligence includes the study of the fact that how computers can be urged into tasks that man is doing it better currently.” (p. 6437)
Karamzadeh, K. and Moharrami, H. (2015): Survey of robust artificial intelligence classifier proper for various digital data. International Journal of Computers & Technology 15(1): 6436–6443
What mainstream media call Artificial Intelligence is a folkloristic way to refer to neural networks for pattern recognition (a specific task within the broader definition of intelligence and, for sure, not an exhaustive one). Patter recognition is possible, thanks to the calculus of the inner state of a neural network that embodies the logical form of statistical induction. The ‘intelligence’ of neural networks is, therefore, just a statistical inference of the correlations of a training dataset. (p. 14)
Pasquinelli, M. (2017): Machines that Morph Logic: Neural Networks and the Distorted Automation of Intelligence as Statistical Inference. Glass Bead Journal: Logic Gate, The Politics Of The Artifactual Mind. Available at: https://www.glass-bead.org/article/machines-that-morph-logic/?lang=enview (accessed 22 July 2021)
A standard definition of intelligence is “…the ability to learn or understand things or to deal with new or difficult situations.” With this in mind, generally, the goal of research in Artificial Intelligence is to create computers, software, and machines that are capable of intelligent and, in some cases, unpredictable and creative behavior. […] I propose that the basic ingredient of Artificial Intelligence is algorithms which can be described as a procedure for solving a problem in a finite number of steps, or as stated by Microsoft’s Tarleton Gillespie, algorithms are “encoded procedures of transforming input data into a determined output, based on specified calculations.” […] But algorithms which model complex human performance, human thought processes, and that can learn from experience, are considered by most to be an example of Artificial Intelligence; and when systems with these capabilities operate autonomously from humans, several established areas of law are challenged. For instance, for purposes of assigning liability under tort law, not all algorithms can be traced back to a human programmer, especially algorithms associated with techniques identified as deep learning. This is important because the more artificially intelligent systems are controlled by algorithms that were not written by humans, the more likely they will display behaviors that were not just unforeseen by humans, but were wholly unforeseeable. For the law, this is significant because foreseeability is a key ingredient in negligence. (p. 3 f.)
Barfield, W. (2018): Towards a law of artificial intelligence. In: Barfield, W. and Pagallo, U. (eds.); Research Handbook on the Law of Artificial Intelligence. Cheltenham, UK: Edward Elgar Publishing
It [AI] is a new technical science to study and develop theories, methods, technologies and application systems for simulating extending human intelligence. As a comprehensive and interdisciplinary subject, Artificial Intelligence involves many scientific fields such as computer science, physiology, philosophy, psychology and mathematics. Its short-term goal is to build intelligent application of machine level, and more hope to realize Artificial Intelligence of human level. As an intelligent system, the essence of Artificial Intelligence is the activity of various complex conditioned reflex neural network circuits established through adaptive training or learning. The core task of AI is to construct a behavior system that can imitate human brain function and be controlled by human computer system. The application of this technology expands the types of education resources and provides a more diversified learning system. (p. 608 f.)
Han, L. (2018): New advances in the application of AI to education system. In: Pavlova, M. and He, R. (eds.) Proceedings of the 3rd International Conference on Education, E-Learning and Management Technology (EEMT 2018). Advances in Social Science, Education and Humanities Research, volume 220. Atlantis Press: 608–611
[T]he concept of AI refers to intelligence similar to human intelligence, by which human tasks can be performed, and by which adaptation to or interaction with the environment can be made. (p. 5)
Johansson, S. and Björkman, I. (2018): What impact will Artificial Intelligence have on the future leadership role?—A study of leaders’ expectations. Lund University. Available at: http://lup.lub.lu.se/student-papers/record/8952603 (accessed 8 June 2021)
AI cannot be defined anymore just as “thinking machine” but, in a more exhaustive way, we can consider it as “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” (p. 11)
Cecchinel, M. (2019): The role and the impact of Artificial Intelligence in the domain of healthcare. A business opportunity. Università degli Studi di Padova. Available at: http://tesi.cab.unipd.it/63405/ (accessed 8 June 2021)
Intelligence, that is, the individual's comprehensive ability to analyze, judge, and purposeful actions on objective things and effectively solve environmental problems. “Intelligence” can also be seen as “wisdom” and “ability”, and the combination of the two has a meaning of intelligence. Although both human intelligence and Artificial Intelligence contain “intelligence”, they cannot be equated. Human beings are different from Artificial Intelligence in that human beings have the subconscious mind. Artificial Intelligence is an area of human intelligence research, from the initial mechanical automation of research and development. (p. 50)
Liu, X. (2019): Artificial Intelligence Landscape Design Application Based on University Management. In: 9th International Conference on Education and Management. Francis Academic Press: 49–54
In order for a system to be described as an Artificial Intelligence system, it has to demonstrate human behaviors such as planning, learning, reasoning, problem solving, knowledge, representation, perception, motion and manipulation. This can be differentiated into two sectors: narrow AI and general AI. Narrow AI means that the machine can only perform one task at a time. General AI means that machines can attempt to think and function as the human mind. (p. 72)
Thakor, S.D. (2019): Effect of Artificial Intelligence (AI) on retail business. International Journal of Research in all Subjects in Multi Languages 7(2): 72–74
“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” […] I submit that Artificial Intelligence is a non-biological intelligence. (p. 11/72)
Malouin, M. (2020): The Importance of Learning in the Age of Artificial Intelligence: The Evolution of the Role of the Auditor. Szent István University, Gödöllő, Hungary. Available at: https://uni-mate.hu/sites/default/files/mario_malouin_dissertation.pdf (accessed 15 June 2021)
Strong Artificial Intelligence is the machine’s ability to think, act, and communicate like a human being […]. Strong Artificial Intelligence is a machine that could possess human-like mental state, it has self-consciousness and have the ability to act and behave by itself. Weak Artificial Intelligence, on the other hand, was designed merely to imitate the act of the human being to solve a problem or complete a task […]. (p. 2)
Louis, M., Fernandez, A.A., Abdul Manap, N., et al. (2021): Artificial Intelligence: Is It A Threat Or An Opportunity Based On Its Legal Personality And Criminal Liability? Journal of Information System and Technology Management 6(20): 1–9
Artificial Intelligence refers to a framework’s capacity to accurately decipher external information to gain from such information and utilize those learnings to reach specific objectives through flexible variation. Not to mention, AI refers to the study of an intelligent specialist that obtains instructions from the environment and acts. […] Given the several meanings of Artificial Intelligence―it includes the study, plan, and building of an intelligent specialist that can accomplish objectives that create computer programs to finish tasks that require human knowledge. That is to handle issues, recognize languages, and rational thinking for dynamics. (p. 1)
Okundia, J. (2021): The Role of Artificial Intelligence in Marketing. Rhein-Waal. Available at: http://rgdoi.net/10.13140/RG.2.2.16843.31523 (accessed 8 June 2021)
“AI” is an umbrella term that encompasses a vast degree of computer technologies (e.g., expert systems, computer vision, robotics, and machine learning) […] as well as a concept of a machine-imitating human intelligence […]. Modern AI is defined as a system’s ability to (1) perceive the current world, i.e., data; (2) to cause and compare different approaches to achieve specific goals based on given data; (3) to tune their performance and apply to unseen data; and (4) to repeat the previous processes multiple times and update the previous learning. (p. 704)
Park, S.H., Mazumder, N.R., Mehrotra, S., et al. (2021): Artificial Intelligence-related Literature in Transplantation: A Practical Guide. Transplantation 105(4): 704–708
Weak AI that can replace humans in some aspects, strong AI with a high degree of perception of the outside world and automatic learning capabilities, and Artificial Super Intelligence, which has super Artificial Intelligence that far exceeds human intelligence. (p. 84 f.)
Tuo, Y., Ning, L. and Zhu, A. (2021): How Artificial Intelligence Will Change the Future of Tourism Industry: The Practice in China. Information and Communication Technologies in Tourism 2021: 83–94
Footnotes
1
One example is the German AI Award, established in 2019, which is endowed with a prize sum of €100,000 and is supported by companies such as Airbus, BMW, McKinsey, and others (see Buck 2019).
 
2
A research handbook on this topic, for example, comes from Woodrow Barfield and Ugo Pagallo (2018).
 
3
Ludwik Fleck pointed out that science is always to be seen as subject to social and historical conditions, which is the underlying premise here: “Wenigstens drei Viertel und vielleicht die Gesamtheit alles Wissenschaftsinhaltes sind denkhistorisch, psychologisch und denksoziologisch bedingt und erklärbar” (2017: 32). (“At least three quarters, and perhaps the entirety, of all scientific content can be explained in terms of the history of thought, psychology and the sociology of thought” [translated by the author]).
 
4
Agreement is understood here in a quantitative sense: most of the definitions use a criterion equally often and/or in the same way.
 
5
For example, Zachary Lipton, assistant professor at Carnegie Mellon University, advocated for “a one-year moratorium on studies for the entire industry to encourage ‘thinking’ as opposed to ‘sprinting/hustling/spamming’ toward deadlines” (Wiggers 2021).
 
6
Finding out where an author comes from was not always easy either. If the origin of the author could not be traced, the place of the stated associated institution or the place of publication was used.
 
7
If a definition contained more than one criterion, only the higher expression was coded.
 
8
This theorem is a formulation of what others have called the “AI Effect”. It is often, though not entirely accurately, quoted as follows: “Artificial intelligence is whatever hasn’t been done yet.”
 
9
Pei Wang (2019: 19) suggests using the broader category of adaptation instead of learning to define AI. Although these terms can hardly be understood as synonyms, they are often used similarly in the context of AI. Setting aside the psychological meaning and implications of this difference, and because learning was used much more frequently than adaptation in my data set, I decided to stick with the former term.
 
10
Because “mind” is a vague and ambiguous notion, many AI researchers try to avoid it, which is also reflected in my data. Avoiding the term might make it easier to define AI; but as Mark Coeckelbergh (2020: 19) points out, the ambiguities about the concept of AI are closely tied to the ambiguities about the concept of mind: “In the background of the discussion about AI are thus deep disagreements about the nature of the human, human intelligence, mind, understanding, consciousness, creativity, meaning, human knowledge, science, and so on. If it is a ‘battle’ at all, it is one that is as much about the human as it is about AI.” Therefore, AI researchers may not be able to avoid striving for clarity in the concept of “mind” either.
 
11
By saying this dimension is not considered crucial, I do not want to say it is unproblematic. On the contrary, in practice it is probably one of the most cumbersome criteria, since there is no consensus on how to measure complexity for this purpose.
 
12
For example, Minsky was co-founder of the Massachusetts Institute of Technology's AI laboratory (see Russell and Norvig 2010: 19).
 
13
The argument that Artificial Intelligence is often erroneously ascribed to systems that generate contingency and thereby manage to reproduce communication, but not intelligence, was made well by Elena Esposito (2017).
 
Literature
Barfield W, Pagallo U (2018) Research handbook on the law of artificial intelligence. Edward Elgar Publishing, Cheltenham
Malouin M (2020) The importance of learning in the age of artificial intelligence: the evolution of the role of the auditor. Szent István University, Hungary
Minsky M (1985) The society of mind. Simon & Schuster, New York
Russell S, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Pearson, Upper Saddle River
Durkheim É, Mauss M (2011 [1903]) Primitive classification. Routledge, London
Fleck L (2017 [1935]) Entstehung und Entwicklung einer wissenschaftlichen Tatsache: Einführung in die Lehre vom Denkstil und Denkkollektiv, 11th edn. Suhrkamp, Frankfurt am Main
Mayring P (2014) Qualitative content analysis: theoretical foundation, basic procedures and software solution. SSOAR, Klagenfurt