1 Introduction
The 1982 science fiction movie Blade Runner opens with a police car flying over Los Angeles in what is supposed to be the year 2019. Later on, we learn that the car needs a driver. The interior of the Blade Runner cars is a small utilitarian space, much more like a military plane than a present-day car. The cars are box-shaped and their dashboards resemble 1990s computer screens, with a black background and fluorescent letters.
Although the cars envisioned by today’s automobile industry cannot fly, they have some features that surpass the predictions in Blade Runner. Car drivers have been promised autonomous vehicles that will enable them to relax or work while being driven. Several countries already allow autonomous cars on their roads, with some restrictions (Campbell
2017). Cars with more limited automation, for example Tesla’s Model S with its autopilot feature, are already on the market. However, car manufacturers are now also envisioning cars as fully digitalized personal assistants, with Artificial Intelligence (AI) and Recommender Systems features, rather than as utility machines transporting the user from A to B. Earlier visions may have accustomed us to the idea of flying cars and robots as personal assistants, but they failed to prepare us for cars as intelligent personal assistants. They also failed to prepare us for a spectrum of potentially unwanted societal implications, e.g., new business models based on user data collection and user monitoring.
Furthermore, while AI and Recommender Systems features play a dominant role in car manufacturers' visions of forthcoming AV developments, the potential negative implications of incorporating these features into AVs remain unexplored. These features have not been addressed in the literature discussing the ethics of and user attitudes towards Autonomous Vehicles (AVs). Rather, the focus has thus far been on studying the implications of the automated driving function, which is admittedly relevant and novel, but is not the only feature to consider when discussing autonomous vehicles. In other words, by focusing on the automated driving function, current research on the cars of the future does not discuss a broad enough spectrum of their potential implications.
The academic literature on the ethics and people’s attitudes towards future cars has thus far focused on automation and safety concerns relating to potential accidents with automated cars (see, e.g., Kyriakidis et al.
2015; Liljamo et al.
2018; Molnar et al.
2018; Nyholm
2018a,
b). Likewise, press coverage of the social and ethical aspects of cars of the future has also focused on these topics (e.g. Doctorow
2015; Marshall
2018; Wamsley
2018). A few studies on user attitudes have briefly mentioned privacy in relation to automation, e.g., user location, as one of the reasons users do not trust AVs, however, with conflicting evaluations regarding the significance of privacy concerns in relation to user trust (Kaur and Rampersad
2018; Kyriakidis et al.
2015; Liljamo et al.
2018). Furthermore, online commentaries have featured the issues of data collection and driverless cars’ business models (Patel
2017; Ross
2018).
In this paper we aim to contribute to research on the attitudes towards and ethics of AVs by studying user attitudes towards AVs beyond the automated driving function. We focus on the attitudes of young people for two reasons. First, even though car developers refrain from providing a specific date for their ready-to-use AVs, this generation can be seen as their prospective users.
(Estimates about the market availability of AVs differ and are continually updated as technological and political developments advance.) Second, adolescents are expected to be more willing to accept AI technology than previous generations, because they have been exposed to smart technology and AI from an early age. In terms of the broader theoretical contribution, we aim to provide insights into what a societally desirable AV would look like. To do so, we focus on the future visions of today’s car manufacturers, who present cars that can drive themselves, collect vast amounts of data about users, and recommend and manage activities and places for users to visit. These visions are modelled on the success of the smart phone and of online recommender systems. Also called Personal Digital Assistants, smart phones are multifunctional devices with various apps that owners can use for planning, entertainment, and socializing. Car manufacturers are increasingly following this example by imagining future cars as what might be called “smart phones on wheels.” Just like smart phones, the envisioned cars will be equipped with a personal Recommender System
(similar to the iPhone’s Siri). These AI features will assist with basic cognitive tasks: searching, planning, messaging, scheduling, etc. (Danaher
2018), all while autonomously driving users to their desired destination.
These developments are partially driven by technological innovations relating to collecting and processing user data (IEEE
2018); car manufacturers are thus able to deploy new business models developed around the data generated by users (Templeton
2018). While making users’ daily lives “easier” and fully utilizing the latest technological innovations, these developments also raise new social and ethical concerns. Such concerns about Recommender Systems (RS) and AI in general involve personal autonomy and privacy; the subordination of individuals’ interests to those of large corporations; and moral responsibility regarding the value of human choice and agency (Danaher
2018; Floridi et al.
2018).
We argue that by focusing almost exclusively on automation, the current literature fails to fully consider the wider range of relevant car features and concerns—such as autonomy, invasiveness, dependency and personal privacy—that are central to the car industry’s vision. To do so, we answer the following research questions: ‘What user attitudes and ethical issues have been identified in the literature about AVs?’; ‘What attitudes do young adults have regarding the key features of the car of the future beyond automation?’; ‘Are the ethical concerns about AI and RS relevant in shaping young adults’ attitudes about AVs?’. Regarding the second question, we are particularly interested in whether the car industry’s vision responds to potential users’ attitudes and values.
Our paper starts with a literature review (Sect. 2), followed by theory and propositions (Sect. 3), methods (Sect. 4), the two studies and their findings (Sects. 5, 6), discussion (Sect. 7) and conclusions (Sect. 8).
3 Theory
With an increasing awareness of the potential impacts (often unwanted and unintended) of science and technology (e.g., privacy infringement) prospective analysis has become a prominent dimension of studying emerging technologies (Stilgoe et al.
2013). We follow the broader ideas of the Futures Studies field in studying the societal implications of AVs as an emerging technology. The field rests on the premise that, since the future is inherently uncertain, it should be explored through numerous scenarios, which are then assessed in terms of plausibility as well as desirability (van Asselt
2010; Bell
2004). To account for future uncertainties, scenarios should not be mere extrapolations of the present into the future. Societally desirable scenarios are typically used to guide informed decisions in the present, or to steer current events towards a preferable future (van Asselt
2010; Schwartz
1998).
Broadly speaking, there are two futures studies frameworks for studying emerging technologies: technology foresight and Technology Assessment/Responsible Research and Innovation. Their commonality is dealing with visions, which are defined as depictions of “a fuller portrait of an alternative world that includes revised social orders, governance structures, and societal values” (Konrad et al.
2017, p. 467). Whereas the former framework focuses on developing visions, the latter goes further by also assessing them.
First, technology foresight is the process of systematically studying longer term futures of science and innovation with strategic goals (Hussain et al.
2017; van Lente
2012). Foresight is a successor of the technology forecasting approach. Forecasting extrapolates current trends into the future, which is seen as problematic, because it does not take complexity and uncertainty into account and because it assumes that the future can be predicted (Miles
2010). Technology foresight, on the other hand, accounts for the limitations of predicting the future while also considering the interaction between society and technology. Technology foresight is commonly done through scenario planning; scenarios are “a tool for ordering one’s perceptions about alternative future environments in which one’s decisions might be played out”, helping people make better decisions (Schwartz
1998, p. 4; Ryan
2019). Scenarios, or visions, are typically developed through a combination of expert interviews and literature study (see Ryan
2019).
Second, once scenarios have been developed, they have to be assessed, which can be done using the conceptual frameworks of Technology Assessment (TA) and Responsible Research and Innovation (RRI). These approaches are similar in exploring, and advising decision makers on, how to promote the development of technologies aligned with societal values (Grunwald
2014). RRI aims to resolve the tension between the potential harms and benefits of emerging innovations through promoting ethically aligned technology designs, to increase their societal embedding (Stilgoe et al.
2013; von Schomberg
2011). We apply these frameworks because they match our broader interest in the roles that prospective user attitudes play in the development of future cars. Our framework is based on four interlinked and widely adopted RRI dimensions: anticipation, reflexivity, inclusion, and responsiveness. The first three are particularly relevant for our research aims. Anticipation relates to responding to uncertainty regarding potentially undesirable and unintended future impacts of emerging innovations. Reflexivity refers to scrutinizing the value systems framing particular innovations, which may not be universally held, and to avoiding “tunnel vision” by asking “what if” questions. In turn, inclusion calls for assessing future visions through dialogues with direct and indirect stakeholders.
In the literature review we identified that research on user attitudes and the ethics of AVs has mainly examined a limited vision of AVs. We identified this gap by examining the visions put forward by car manufacturers. The literature has focused predominantly on automated driving, which is merely one of the envisioned AV features, as AI and RS are also increasingly implemented in cars. Future cars are thus studied as projections of current cars with a fully matured automated driving function. The problem with these projections is that cars are still treated predominantly as a means of getting from A to B, whereas their use is likely to shift towards that of personal assistants, or smart phones on wheels. Furthermore, these projections do not account for the prominent trend of technological convergence, in which previously unrelated technologies, such as traditional cars and smart phones, become more closely integrated and unified. One example is the Renault SYMBIOZ, an integrated house and electric car that work together in harmony.
Following the futures studies approach, in our study of user attitudes towards AVs we first developed likely future scenarios in order to inform study participants about the envisioned car of the future. Following a common approach for generating a vision (Ryan
2019), we first sought to interview AV developers. We contacted the following companies: Amber Mobility, Audi, BMW, Citroen, Jaguar, Mercedes-Benz, Peugeot, Tesla, Volkswagen, and Waymo, but none granted us an interview. Hence, we turned to publicly available promotional materials (particularly videos). We also examined the topics discussed at car industry conferences, for example recent conferences in Germany on functional safety and cyber security,
and another on how to monetize car data.
This helped to establish an idea of what the car industry believes users will desire, and what business models are envisioned for the car of the future, such as fully monetizing user data.
We made a 7.47-min video collage from well-known car brands’ promotional materials. It featured ten different car manufacturers, including BMW, Mercedes-Benz, Google car, Rolls Royce, Amber Mobility, Volkswagen, Byton, Nissan, and Honda.
In selecting the video material, we aimed to capture presentations of numerous smart features beyond automation, e.g., recommender systems and mood detection. Furthermore, to trigger reactions to (assessment of) other features besides automation, we firstly showed automated driving and then proceeded to recommender systems used for controlling various aspects of users’ daily lives. See the video using the following link:
Car of the future video.
Before we proceed to presenting the study, we will explain the main three propositions we developed by integrating insights from the literature review and theory sections. The propositions explain what we expect to find in the surveys with prospective users.
4 Methods
We employed a mixed methods research approach, which involved collecting both qualitative and quantitative data and integrating the two forms of data (Creswell
2014). A mixed methods approach is based on the assumption that the combination of qualitative and quantitative approaches provides a more comprehensive understanding of a research problem than either approach alone (ibid.). We adopted the so-called “exploratory sequential mixed methods design”, in which we started with the qualitative phase (data collection and analysis) and then used the initial findings in a second, quantitative phase (ibid.). The second data set was built on the results of the first in order to develop more specific measurements of user attitudes.
Following the mixed methods approach, we conducted two studies with bachelor students aged between 17 and 25 years. During the first round we collected 221 responses, and during the second 271. First, we researched what aspects of the car of the future prospective users are concerned about, besides automation, and why. Open-ended questions were necessary to meet the paper’s aim of avoiding predetermining and influencing respondents’ answers. We wanted the respondents to describe, in their own words, why they find a car feature desirable or undesirable. Open-ended questions
have been said to be particularly fruitful when dealing with a novel field that is not yet structured and requires preliminary understanding (Patton
1990). We analysed the survey responses through thematic analysis and category coding. This was done manually, since the surveys were also filled in manually. Furthermore, manual coding, as opposed to using software, saved us time, as a lot of the answers were quite long and sometimes challenging to analyze. We followed a general approach of seeking connections within the data and generating initial codes, which led us to define thematic patterns, or themes (Williamson and Johanson
2018). Many of the themes in the survey responses overlapped with the features presented in the car of the future video, and we also noted additional themes, mostly relating to the potential impact of future cars. We identified three clusters of answers, ranging from technology features to societal implications: “car features and their impacts”, “societal impacts”, and “availability and accessibility”. These categories are not mutually exclusive and sometimes overlap. We transferred the data into Microsoft Excel, where we calculated percentages and developed charts.
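As a rough sketch of this tallying step (the theme labels and responses below are invented for illustration; the actual coding was done by hand and in Excel), the per-theme percentages can be computed as follows:

```python
from collections import Counter

# Hypothetical coded survey answers: each respondent's answer was assigned
# one or more theme codes during manual thematic analysis.
# The labels and data below are illustrative only.
coded_responses = [
    ["automation", "privacy"],
    ["privacy"],
    ["design"],
    ["automation", "dependence"],
    ["privacy", "dependence"],
]

def theme_percentages(coded):
    """Percentage of respondents mentioning each theme at least once."""
    counts = Counter(theme for answer in coded for theme in set(answer))
    n = len(coded)
    return {theme: round(100 * c / n, 1) for theme, c in counts.items()}

pcts = theme_percentages(coded_responses)
print(pcts["privacy"])  # "privacy" was mentioned by 3 of 5 respondents -> 60.0
```

A spreadsheet performs the same computation; the sketch only makes explicit that each respondent is counted once per theme, however often they mention it.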
Second, we measured the intensity of the respondents’ attitudes in a second study using a 1–7 Likert scale (1 referring to “I do not agree at all”, 7 referring to “I totally agree”) for the different scales and items presented. We based the questions on nine of the most prominent responses gathered in the previous survey round. We analysed the results with the SPSS Statistics software for Windows, version 25, using the descriptive statistics function. This enabled us to calculate the means and standard deviations for each of the nine questions. We exported the results as charts, which we present below.
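The descriptive statistics reported below can be reproduced without SPSS; a minimal sketch (with invented Likert responses, since the raw data are not shown here) uses Python’s standard library, noting that SPSS reports the sample standard deviation (n − 1 denominator):

```python
import statistics

# Invented 1-7 Likert responses for a single survey item (illustrative only).
responses = [5, 6, 4, 7, 5, 3, 6, 5, 4, 6]

mean = statistics.mean(responses)   # arithmetic mean
sd = statistics.stdev(responses)    # sample SD (n - 1 denominator), as in SPSS

print(f"M = {mean:.2f}; Std. Dev.: {sd:.3f}")  # M = 5.10; Std. Dev.: 1.197
```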
6 Study II
Based on the results of the first survey round, we ran another survey, in which we quantitatively researched the intensity of the prospective user attitudes towards the AI personal assistant function identified in the first study. The study respondents (
n = 251) perceived the assistant and mood detection features as helpful. The participants were between 17 and 25 years old, with an average age of 19.70. User attitudes were nuanced, ranging from positive to negative. On balance, however, negative and worried attitudes slightly outweighed positive ones (Table
1).
Table 1
Means and standard deviations (SD) of the Study II items (1–7 Likert scale)

Item | Mean | SD
How useful, or helpful, do you find the AI personal assistant in the car of the future (e.g., Hanna)? | 4.37 | 1.581
How invasive, or creepy, do you find the AI personal assistant in the car of the future in your private life? | 5.33 | 1.577
How controlling (jeopardizing your autonomy) do you find the AI personal assistant in the car of the future? | 4.96 | 1.551
How worried are you that the future car will compromise your personal data? | 4.98 | 1.664
How worried are you that the future car will merge your information from different devices and online platforms? | 4.87 | 1.737
How helpful do you find the car’s emotion and mood detection function (e.g., sleepiness, stress)? | 3.86 | 1.739
How invasive or creepy do you find the car’s emotion and mood detection (e.g., sleepiness, stress)? | 5.08 | 1.734
How worried are you about depending on the car, e.g., to assist you in your everyday life, or with your mental tasks? | 4.22 | 1.867
How worried are you about the car of the future making people boring or lazy? | 4.69 | 1.93
6.1 Attitudes about the car AI and RS (personal assistant) features
The AI and RS features were perceived as helpful and useful (M = 4.37; Std. Dev.: 1.581). At the same time, however, their implications were perceived negatively, and with even stronger opinions. Specifically, we tested the implications of this feature being invasive and creepy (M = 5.33; Std. Dev.: 1.577), as well as controlling in the sense of jeopardizing one’s autonomy (M = 4.96; Std. Dev.: 1.551).
6.2 Attitudes towards car emotion and mood detection features
Similar to the results in the previous section, we found that participants assessed the cars' emotion and mood detection features ambivalently. On one hand, these features were perceived as useful or helpful (M = 3.86; Std. Dev.: 1.739), although to a smaller extent than the above-mentioned AI personal assistant. On the other hand, they were also perceived as invasive or creepy, to a much higher degree than they were perceived to be helpful (M = 5.08; Std. Dev.: 1.734).
6.3 Concerns about the car of the future
Several features of the car of the future were assessed to be worrisome. The highest level of concern was found in relation to the cars compromising respondents’ personal data (M = 4.98; Std. Dev.: 1.664). Also relating to the data that the car of the future will collect, the respondents expressed concerns about the cars merging their information from different devices and online platforms (M = 4.87; Std. Dev.: 1.737). We also tested how concerned prospective users are about two further implications: dependence on the car, e.g., to assist in everyday life or with mental tasks (M = 4.22; Std. Dev.: 1.867), and the car making people boring or lazy (M = 4.69; Std. Dev.: 1.93).
7 Discussion
In this section we will discuss whether our findings confirmed, or contradicted, our propositions.
The first proposition was supported, as our research showed that there are more user concerns at stake than those discussed in the literature on the ethics of and public attitudes towards AVs. The literature discusses concerns relating to safety, reduced travel time, cyberattacks, informational privacy (e.g., data ownership), accident responsibility, energy use, and access (Cavoli et al.
2017; Cohen et al.
2018; Liljamo et al.
2018; Molnar et al.
2018; Nyholm and Smids
2016). On one hand, our Study I confirmed that all of the concerns researched thus far are relevant factors in shaping young adults' attitudes towards AVs. On the other hand, the study also showed that there are additional issues that have not previously been discussed extensively in the literature. We identified the following user concerns and AV features that should be included in future discussions about AVs: autonomy and control over the technology and personal data, personal privacy, modern sleek AV design, technology dependence, becoming lazy or docile, de-skilling, lack of social interaction, and car- and ride-sharing possibilities. This is not to say that the literature is completely wrong, but rather that it is limited, because the AV industry is envisioning various additional developments.
The RRI conceptual framework is instrumental in understanding why certain user concerns may have been omitted from academic discussions. Anticipation, a crucial dimension of responsible governance of emerging technology, is challenged by projecting current images of the car as a device taking us from A to B, rather than anticipating the impact of new emergent uses, such as cars as personal assistants governing our daily lives or “smartphones on wheels”. Consequently, several above-mentioned potential implications, and uses of autonomous cars, have not been anticipated in the literature (Ryan
2019).
The second proposition was also confirmed, as the AI and RS features were the second most often addressed AV feature (the automated driving feature was addressed the most). Furthermore, AI and RS were the most negatively evaluated features. In other words, the participants in our studies were nearly as concerned with aspects related to AI and RS as with automation. The concerns identified in Study I include autonomy (the AV has too much control in planning one’s daily life, e.g., by functioning as a personal assistant), invasiveness (“the car is too involved in your personal life”), and personal privacy (the car collects personal information such as users’ mental state, bank account number, contacts, agenda,…). To a smaller extent, the issue of making humans docile was addressed. In Study II we identified that while the AI personal assistant feature was perceived as highly invasive and creepy (M = 5.33) and as jeopardizing autonomy (M = 4.96), it was also perceived as a helpful and useful function (M = 4.37). To sum up, this study shows that several issues, and uses of AVs, identified in the ethics literature on the implications of AI and RS (but not in the AV literature) are relevant in shaping user attitudes towards AVs.
Proposition P3a was supported: Study II confirmed that user attitudes regarding AVs are ambivalent, which has also been identified in previous research on user attitudes towards AVs (Cavoli et al.
2017). User attitudes are nuanced. Nearly every AV feature was perceived positively by roughly half of the participants and negatively by the other half. Nevertheless, there were some exceptions, such as the AV data collection feature, which was mainly assessed negatively (84.5%). On the flip side, the automated driving feature was largely positively evaluated (60.1%, compared to 23% negative responses and 16.9% having additional questions). As most of the other AV features were assessed more neutrally (gathering nearly equal amounts of positive and negative responses), young respondents' overall attitudes towards AVs remain ambiguous. However, we identified that, by a small margin, negative attitudes towards AVs prevailed.
The subproposition building on the previous one, P3b, was roughly supported and offered another perspective on the ambivalences in user attitudes. We identified that prospective users find car features both useful and concerning. First, while the AI personal assistant was perceived as having numerous issues, such as being highly invasive and creepy (M = 5.33) and jeopardizing autonomy (M = 4.96), it was also perceived as helpful and useful (M = 4.37). The emotion and mood detection function was also perceived ambivalently, albeit as less helpful and useful (M = 3.86) while significantly invasive or creepy (M = 5.08). Furthermore, the majority of the study participants consistently reported being worried that the car will compromise their personal data (M = 4.98), merge their data (M = 4.87), make them docile (M = 4.69), and make them dependent (M = 4.22). Therefore, while AVs as currently envisioned by the car manufacturers pose several concerns, the technology is not perceived as a strictly negative or undesirable mobility solution. On the other hand, while the study results clearly show that young adults have ambivalent attitudes towards AV features (P3a) and find them both concerning and useful (P3b), it is still possible to identify, even if by a small margin, whether a certain AV feature was perceived more positively or more negatively.
Despite the considerable number of negative user attitudes and the user concerns identified in our studies, one should not simply conclude that prospective users will not be interested in using AVs. Several new technologies, such as smart phones and social media, have shown that people are sometimes willing to tolerate the negative impacts of a technology for the sake of enjoying its useful and helpful side. Looking ahead to a future in which AVs are more widely used, people may prefer using the technology over not using it. Prospective users might therefore accept, e.g., handing over control of their personal data, because they will not want to miss out on the benefits of being taken around in a personalized assistant on wheels.
8 Conclusion
In terms of our broader theoretical contribution, we have aimed to provide insights into what a societally desirable AV would look like. Our study suggests that it is challenging to develop such an image, as there are more issues at stake than have previously been acknowledged by the stakeholders. The car manufacturers’ promotional material envisioning future AVs as digital personal assistants that increasingly govern various aspects of users’ daily lives is not in line with the values of prospective users or the ethical restrictions suggested by AI ethicists. Thus, AI and RS, which are increasingly being embedded in various technologies, conflict with some of the values widely held in society today.
Accordingly, the additional public concerns should be addressed and included in visions of AVs to stimulate the development of desirable future AVs and greater acceptance of such AI-intensive technologies.
The often-mentioned example of the smart phone becoming a widely used innovation, despite prospective users’ initial negative evaluations, highlights two points. First, the challenging nature of anticipating future user attitudes, due to our ignorance regarding the future (technologies may be used in unexpected ways, and user values are likely to co-evolve through the use of new technology; Jasanoff
2004). Second, the pertinence of seeking inclusive anticipation, since technologies tend to have unintended and sometimes undesirable impacts, as the smart phone has concerning privacy, exploitation of scarce materials, social life dynamics, and so on. Yet, since the smart phone has become so embedded in our daily lives, it is now difficult to change it to avoid some of its negative impacts—a prime example of the Collingridge (
1980) dilemma.
Seeking alignment early in the innovation process is thus both necessary and challenging. We present our study as an example of inclusive anticipation. Instead of asking for views on predetermined issues, we used a vision developed by car manufacturers and invited prospective users to share their thoughts on any aspect of the car of the future. This interaction with users revealed scope for further research on how the values held by car designers and other tech companies could be negotiated (Van Der Hoven
2013) to achieve better alignment of future AVs with prospective users’ attitudes. For example, the survey participants perceived the AI personal assistant in the car of the future as useful on the one hand, while on the other also as invasive and as jeopardizing their autonomy. Further research should explore (and consult users about) which mechanisms and AI principles would need to be put in place for users to feel that their values are not jeopardized, that they are in control, and that cars are not invading their personal sphere. New business models and technology designs could facilitate disabling certain options, limit the car’s data collection, and provide full transparency on how stored data could be used, perhaps even consulting users before selling the data to third parties. We thus propose sometimes compromising, or giving up part of the technological sophistication, to protect key user values such as autonomy.
The main limitation of our study relates to the difficulty of identifying emerging user attitudes. As an emerging technology, AVs will evolve in a dynamic process with an uncertain outcome. This can be seen as the above-described process of co-creation (Jasanoff
2004), according to which user attitudes will be co-created in relation to how the car manufacturers develop AVs.
In conclusion, although the flying cars seen in Blade Runner are not yet on the horizon, the emerging smart phones on wheels pose new ethical concerns that require further research. This research should address the values at the heart of prospective users’ worries, to explain why our respondents feared cars being able to detect their emotions and gather personal information.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.