AI is going to make our lives better in the future. —Mark Zuckerberg, CEO, Facebook
Introduction
Industry or Usage Context (specific firm or AI application) | Description |
---|---|
AI in driverless cars (e.g., Tesla) | In the future, AI-enabled cars may allow for car journeys without any driver input, with the potential to significantly impact various industries (e.g., insurance, taxi services) and customer behaviors (e.g., whether they still buy cars). |
Online retailing AI (e.g., Birchbox) | AI will enable better predictions for what customers want, which may cause firms to move away from a shopping-then-shipping business model and toward a shipping-then-shopping business model. |
Fashion-related AI (e.g., Stitch Fix) | AI applications support stylists, who curate a set of clothing items for customers. Stitch Fix’s AI analyzes both numeric and image/other non-numeric data. |
Sales AI (e.g., Conversica) | AI bots can automate parts of the sales process, augmenting the capabilities of existing sales teams. There may be backlash if customers know (upfront) that they are chatting with an AI bot (even if the AI bot is otherwise capable). |
Customer service robots (e.g., Rock’em and Sock’em; Pepper) | Robots with task-automating AI respond to relatively simple customer service requests (e.g., making cocktails). |
Emotional support AI (e.g., Replika) | AI aims to provide emotional support to customers by asking meaningful questions, offering social support, and adjusting to users’ linguistic syntax. |
In-car AI (e.g., Affectiva) | AI that analyzes driver data (e.g., facial expressions) to evaluate drivers’ emotional and cognitive states. |
Customer screening AI (e.g., Kanetix) | AI used to identify customers who should be provided incentives to buy insurance (and to avoid those who (1) are already likely to buy and (2) are unlikely to buy). |
Business process AI (e.g., IBM Interact) | AI used for multiple (simple) applications, such as customized offers (e.g., Bank of Montreal). |
Retail store AI (e.g., Café X, Lowebot, 84.51, Bossa Nova) | Robots that can serve as coffee baristas, respond to simple customer service requests in Lowe’s stores, and identify misshelved items in grocery stores. |
Security AI (e.g., Knightscope’s K5) | Security robots patrol in offices or malls, equipped with superior sensing capabilities (e.g., thermal cameras). |
Spiritual support AI (e.g., BlessU-2; Xian’er) | Customizable robot priest/monk offering blessings in different languages to the user. |
Companion robot AI (e.g., Harmony from Realbotix) | Customizable robot companion, which promises reduced loneliness to the user. |
Introduction to artificial intelligence
A framework for understanding artificial intelligence
Level of intelligence
Paper | Domain | Dimension | Takeaways |
---|---|---|---|
Agrawal et al. (2018) | BUS | | Artificial intelligence (AI) reduces the cost of prediction. |
Gans et al. (2017) | BUS | | |
Rahwan et al. (2019) | CS/R | | To best understand AI, bring in insights not only from computer science but also from other disciplines. |
Shankar (2018) | MKTG | | AI “refers to programs, algorithms, systems and machines that demonstrate intelligence” (Shankar 2018, p. vi), is “manifested by machines that exhibit aspects of human intelligence” (Huang and Rust 2018, p. 155), involves machines mimicking “intelligent human behavior” (Syam and Sharma 2018, p. 136), and provides means to “interpret external data correctly, learn from such data, and exhibit flexible adaptation” (Kaplan and Haenlein 2019, p. 17). |
Huang and Rust (2018) | MKTG | | |
Syam and Sharma (2018) | MKTG | | Huang and Rust (2018) – Mechanical and analytical intelligences involve simple, rule-based tasks. Intuitive and empathetic intelligences involve complex tasks requiring empathy, holistic thinking, and context-specific responses. |
Kaplan and Haenlein (2019) | MKTG | | Kaplan and Haenlein (2019) – Used the terms narrow versus general AI. Narrow AI somewhat maps onto mechanical and analytical intelligences, whereas general AI maps onto intuitive and empathetic intelligences. |
Davenport and Ronanki (2018) | BUS | LVLINT | Another way to describe AI is by stating its marketing and business outcomes, such as automating business processes, gaining insights from data, or engaging customers and employees. |
Davenport and Kirby (2016) | BUS | LVLINT | Contrasts task automation with context awareness. The former involves AI applications that are standardized, or rule based (akin to narrow AI). The latter is a form of intelligence that requires machines and algorithms to ‘learn how to learn’ and extend beyond their initial programming (akin to general AI). |
Ghahramani (2015) | CS/R | LVLINT | How machines can learn from experience, using probabilistic machine learning. |
Mnih et al. (2015) | CS/R | LVLINT | How artificial agents can learn to generalize from past experience to new situations, using reinforcement learning. |
Müller and Bostrom (2016) | BUS | LVLINT | Artificial general intelligence (AGI) is a hypothetical technology that would be the equivalent of a human intelligence in terms of its flexibility and capability of performing and learning a vast range of tasks (similar to context awareness). In a survey of AI researchers, the median estimate was for a 50% chance of achieving an AGI by 2050 and a 90% chance of achieving one by 2075. |
Reese (2018) | BUS | LVLINT | Defines narrow versus general AI and analytical AI versus humanized AI; both contrasts are very similar to the contrast between task automation versus context awareness. Reese (2018) cautions that AGI does not exist, and that there is no guarantee that it ever will. |
Baum et al. (2011) | SOC | LVLINT | |
Davenport (2018) | BUS | LVLINT | The state-of-the-art AI is closer to task automation than context awareness. |
Gray (2017) | PSY | LVLINT | Customers appear to hold AI to a higher standard than is normatively appropriate. A preliminary hypothesis suggests that customers trust AI less, and so hold AI to a higher standard, because they believe that AI cannot “feel”. |
Castelo (2019) | MKTG | LVLINT | Customers are less willing to use AI for tasks that appear subjective, involving intuition or affect, because they perceive AI as lacking the affective capability or empathy needed to perform such tasks (Castelo 2019; Castelo et al. 2018). |
Builds from: Castelo et al. (2018) | MKTG | | |
Castelo and Ward (2016) | MKTG | LVLINT | Using AI for consequential tasks is perceived as involving more risk, in turn reducing adoption intentions. This is more so amongst (1) conservative consumers, for whom risks are more salient, and (2) women, who perceive more risk in general and take on less risk. |
Builds from: Bettman (1973) | MKTG | | |
Gustafson (1998) | PSY | | |
Byrnes et al. (1999) | PSY | | |
Leung et al. (2018) | MKTG | LVLINT | If a certain consumption activity is central to a customer’s identity, the customer would like to take credit for consumption outcomes. Some customers perceive that using AI for these consumption activities is tantamount to cheating, and this hinders the attribution of credit post-consumption. Hence if an activity is central to a customer’s identity, then the customer may be less likely to adopt AI for this activity. |
Kim and Duhachek (2018) | MKTG | LVLINT | Customers do not associate AI applications with autonomous goals (Kim and Duhachek 2018). In line with this perception, customers are more likely to focus on “how” (rather than “why”) the AI application performs, implying that when engaging with AI, customers will be in a low-level construal mindset. Because persuasion is more effective when the perceived characteristics of the persuasion source and the persuasion message match, communication from AI should be more effective when it highlights how rather than why in its messaging (regulatory construal fit; Lee et al. 2009; Motyka et al. 2014). AI persuasion messages are thus more effective in persuading consumers to buy the recommended product or service when the message highlights “how” to use the product rather than “why” to use it. These effects arise because customers doubt whether AI can understand “why” it is important for customers to engage in certain behaviors. |
Builds from: Lee et al. (2009) | MKTG | | |
Motyka et al. (2014) | MKTG | | |
Longoni et al. (2019) | MKTG | LVLINT | Examining the case of medical decision making, Longoni et al. (2019) propose that customers’ reservations stem from concerns about uniqueness neglect (i.e., AI is perceived as less able to identify and relate to customers’ unique features). Further, building from prior work (Şimşek and Yalınçetin 2010; also see Haslam et al. 2005), Longoni et al. (2019) propose that these reservations are stronger among customers who score higher on the ‘personal sense of uniqueness’ scale (Şimşek and Yalınçetin 2010). |
Builds from: Şimşek and Yalınçetin (2010) | PSY | | |
Haslam et al. (2005) | PSY | | |
Luo et al. (2019) | MKTG | LVLINT | Examines how (potential) customers engage with AI bots. In practice, AI bots can be as effective as trained salespersons, and four times as effective as inexperienced salespersons. However, if it is disclosed that the customer is conversing with an AI bot, purchase rates drop by 75%. Because customers perceive the AI bot as less empathetic, they are curt when interacting with it, and so purchase less. Ties into themes from Castelo et al. (2018). |
LeCun et al. (2015) | CS/R | TSKTYPE | How deep learning has improved the state-of-the-art in speech and visual object recognition. |
You et al. (2016) | CS/R | TSKTYPE | How using a new algorithm improves visual object recognition. |
Milgram et al. (1995) | PSY | ROBOT | Proposes the reality–virtuality continuum. |
Wainer et al. (2006) | CS/R | ROBOT | Interacting with a physical robot is perceived as more enjoyable than either interacting with a simulated robot on a computer or interacting with a real robot presented through teleconferencing. |
Kwak et al. (2013) | CS/R | ROBOT | When asked to administer electric shocks to a (physical) robot or a simulated robot on a computer screen, individuals empathized more with the (physical) robot. |
Kidd and Breazeal (2008) | CS/R | ROBOT | Interactions were longer with a robot diet coach than either a virtual diet coach or a pen-and-paper diet diary. |
Lammer et al. (2014) | CS/R | ROBOT | Individuals express reciprocity towards robots. |
Adami (2015) | CS/R | ROBOT | With suitable machine learning algorithms, robots can learn from past experiences. |
Kober et al. (2013) | CS/R | ROBOT | Reinforcement learning can work for robots embedded with suitable machine learning algorithms. |
Mori (1970) | PSY | ROBOT | Making robots look more human is beneficial, but only up to a certain point, after which such robots elicit negative reactions (the uncanny valley hypothesis; UVH). |
Gray and Wegner (2012) | PSY | ROBOT | Machines are perceived as more unnerving when individuals ascribe to machines the capacity to feel, rather than capacity to do. |
Mende et al. (2019) | MKTG | ROBOT | Interactions with robots trigger discomfort (linked to UVH) and so further trigger compensatory behaviors. |
Boyd and Holton (2018) | SOC | ROBOT | Will the combination of robotics and AI lead to an unprecedented social transformation? |
Pedersen et al. (2018) | SOC | ROBOT | Outlines the issues surrounding the use of social robots in medical treatment, care facilities, and private homes, along with the associated ethical concerns. |
André et al. (2018) | MKTG | LVLINT | Because AI facilitates data-driven, micro-targeted marketing offerings, customers should view such offerings favorably, because they reduce search costs. Yet this could undermine customers’ perceived autonomy, with implications for their subsequent evaluations and choices. |
Aguirre et al. (2015) | MKTG | LVLINT | Proposes the privacy–personalization paradox, whereby individuals balance privacy concerns against the benefits of personalized recommendations. |
Wang and Kosinski (2018) | PSY | TSKTYPE | How to use deep neural networks to identify sexual orientation merely by analyzing facial images. |
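Several entries above (e.g., Ghahramani 2015; Mnih et al. 2015; Kober et al. 2013) turn on reinforcement learning, in which an agent improves through trial-and-error experience rather than explicit programming. The minimal tabular Q-learning sketch below illustrates the mechanic; the five-state "corridor" environment, reward values, and hyperparameters are illustrative assumptions, not drawn from the cited papers.

```python
import random

# Minimal tabular Q-learning sketch. The five-state "corridor" environment
# is invented for illustration: the agent starts at state 0 and receives a
# reward of +1 only upon reaching the goal state 4.
N_STATES = 5
GOAL = N_STATES - 1
ACTIONS = [-1, +1]                        # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1     # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only when the goal state is reached."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

random.seed(0)
for _ in range(200):                      # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best next action.
        best_next = max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the learned policy moves right from every non-goal state.
policy = {s: greedy(s) for s in range(GOAL)}
print(policy)
```

This is "task automation"-level intelligence in the table's terms: the agent masters one narrow, fully specified task. The "context awareness" entries concern systems that would transfer such learning beyond their initial programming.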
Task type
AI in robots
The current state and likely evolution of AI
Short- and medium-term time horizon
Long-term time horizon
Agenda for future research
AI and marketing strategy
- Can AI analyze customer communication and other customer information (e.g., social media posts) in ways to devise future communications that are more persuasive or increase engagement?
- Can AI provide real-time feedback to salespeople to help them improve their sales pitches, based on assessments of customers’ verbal and facial responses?
- How might AI combine text and other communication inputs (e.g., voice data), actual customer behavior, and other information (e.g., behaviors of similar customers) to predict repurchases? This effort demands non-numeric data, in line with cells 2, 4, 5 and 6.
- Considering Luo et al.’s (2019) findings, how should firms deploy AI sales bots effectively?
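As a toy concretization of the multi-source repurchase-prediction question above, the sketch below fuses a text-derived sentiment score, a behavioral feature, and a similar-customers signal into one logistic-regression classifier. All data, feature names, and coefficients are invented for illustration; a real system would extract such features from actual text, voice, and transaction logs.

```python
import math
import random

# Invented illustration of multi-source feature fusion for repurchase
# prediction: each customer combines a text-derived sentiment score, past
# purchase behavior, and a similar-customers signal into one feature vector.
random.seed(1)

def make_customer():
    sentiment = random.uniform(-1, 1)          # e.g., scored from reviews/chats
    purchases = random.randint(0, 10) / 10.0   # past purchases, scaled to [0, 1]
    peer_rate = random.uniform(0, 1)           # repurchase rate of similar customers
    # Hypothetical ground truth: repurchase odds rise with all three signals.
    logit = 2.0 * sentiment + 4.0 * purchases + 1.5 * peer_rate - 2.5
    label = 1 if random.random() < 1.0 / (1.0 + math.exp(-logit)) else 0
    return [1.0, sentiment, purchases, peer_rate], label  # 1.0 = intercept

data = [make_customer() for _ in range(1000)]
w = [0.0] * 4                                  # fused-model weights

for _ in range(300):                           # batch gradient-descent epochs
    grad = [0.0] * 4
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        for j in range(4):
            grad[j] += (p - y) * x[j]
    for j in range(4):
        w[j] -= 1.0 * grad[j] / len(data)

accuracy = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y == 1) for x, y in data
) / len(data)
print(f"train accuracy: {accuracy:.2f}")
```

Scaling all features to comparable ranges (here, dividing purchase counts by ten) is a deliberate design choice: it lets plain gradient descent converge quickly without per-feature learning rates. The fusion step itself is trivially simple here; the research questions above concern the genuinely hard part, namely turning non-numeric inputs (text, voice, images) into informative features in the first place.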