Familiarity Breeds Affinity – How Personal Experiences Change Employees’ Attitudes Towards a Social Robot
- Open Access
- 01-07-2025
1 Introduction
With rapid technological advancements in social robots, companies are increasingly likely to implement them into their operations [1]. Furthermore, the social aspect of social robot technology opens up possibilities for employing robots in social and interactive domains that have long been reserved for human workers [2]. Consequently, the work environment is changing for employees who previously had little to no exposure to such technologies in their industry (e.g., hospitality, service) and, therefore, rarely have experience in working or interacting with social robots [3]. Although scholars broadly agree that individuals’ past experiences with social robots may significantly influence their attitudes towards social robots in general and reactions to them in subsequent interactions [4, 5], most empirical studies that have examined experience-driven mechanisms in social robotics research have either focused on non-work settings [4, 6‐8] or have taken a broader stance by analyzing employees’ attitudes towards social robots as a technology in general [9‐12]. In contrast, there is a surprising dearth of studies exploring the influence of direct personal experience with a specific social robot on employees’ attitudes towards this robot that is being implemented into their organization. Existing research has primarily analyzed mere exposure effects, according to which repeated personal experiences with a stimulus can lead to more positive attitudes towards it [13, 14], thereby providing initial answers to the question of whether prior experiences with a social robot may impact people’s attitudes towards it [6]. However, with the growing significance of social robot technology in diverse work settings and increasing interaction points between human employees and social robots [2], employees’ experiences with social robots at work consist of more than mere exposure.
Furthermore, to the best of our knowledge, what is missing so far is an in-depth investigation of the underlying mechanisms, that is, how prior experience can alter employees’ attitudes towards a specific social robot. Therefore, unresolved questions persist regarding whether and how employees’ personal experiences—gained through interactions—with one particular social robot in their work environment can affect their attitudes towards this robot.
Thus, the goal of this study is twofold: First, we investigate whether there are differences in employees’ perceptions of and attitudes towards a specific social robot that is being implemented into their organization, depending on whether or not they directly gained personal experience by interacting with it. Second, we seek to shed light on the mechanisms that underlie potentially emerging differences between employees with and without such personal experience by identifying possible experience-driven reasons for these differences. Our investigation of the reactions to a specific robot is guided by a focus on potential differences in relation to three constructs that the literature on human-robot interaction (HRI) has established as key antecedents of successful collaboration between humans and social robots: (1) overall evaluation, (2) perceived human-likeness, and (3) users’ acceptance of the robot. First, people’s overall evaluation of a robot has been identified as a significant antecedent of, for example, their willingness to collaborate with it [15] and their safety perceptions [16]. Second, the perceived human-likeness of a robot plays a significant role, especially in people’s reactions to social robots, as the human-like aspects (e.g., appearance and behavior) of a robot can elicit human responses that go beyond technology-appropriate reactions [17, 18], a notion that brings challenges but also further opportunities to human-robot interaction [19] and has to be considered when implementing such technology at the workplace [20]. Finally, we include users’ acceptance of the social robot. Due to this study’s focus on a work setting with employees as robot users, we refer to users’ acceptance as the robot’s acceptance at work.
Following a long tradition of technology acceptance research that recognizes users’ acceptance as a central goal of technology implementation endeavors [21, 22], it is a critical success factor for the possibility of reaping potentially positive effects of robot technology in the workplace of the future [23].
We conducted a kind of natural experiment in cooperation with a local organization that runs three large canteens for students and university staff, where the social robot Pepper (Softbank Robotics) was implemented by the organization in one of the canteens for six weeks. Afterward, we invited all of the organization’s employees to participate in the data collection assessing their reactions to the focal robot. Specifically, we asked them to rank ten pictures of social robots—including the robot implemented at their workplace—according to their overall evaluation, perceived human-likeness, and acceptance at work. To further illuminate their ranking choices, we additionally collected qualitative data regarding this ranking through semi-structured interviews. This empirical setup allows us to answer recent calls for studies that allow a more extended time period for gaining experience with a robot, that use qualitative approaches to investigate the role of experiences in attitude formation towards social robots, and that observe user reactions in their natural environment [6, 146]. Therefore, the results of this study provide valuable insights into the question of whether personal experience with a social robot implemented in one’s workplace can influence attitudes toward this robot. Furthermore, they shed light on the underlying processes contributing to attitude formation towards a social robot in the workplace and the potential differences therein.
The remainder of the paper is organized as follows. Section 2 encompasses, first, existing empirical literature related to the constructs observed in this study, namely previously empirically identified and influential factors on overall evaluation, perceived human-likeness, and acceptance at work. Second, it displays existing social psychological theories that seek to explain experience-driven changes in attitudes. Section 3 describes our research method, including the setting, collection, and analysis of the quantitative and qualitative data used in this study. Section 4, first, reports the results of the quantitative analysis of differences between employees with and without personal experiences with the robot, and second, it displays the results of the qualitative analysis that sought to identify mechanisms underlying the quantitative differences. Finally, the discussion of the results, limitations, and future research implications can be found in Sect. 5.
2 Previous Related Work
Previous HRI literature has identified various antecedents of individuals’ overall evaluation of a robot, perception of the robot’s human-likeness, and acceptance of the robot. Section 2.1 provides an overview of empirical research identifying attributes for these dimensions on both the robot side and the human side. On the robot side, attributes are summarized under the categories of behavior, design, and functionality. On the human side, attributes are categorized as situational (i.e., varying over time) or dispositional (i.e., stable over time). While these empirical findings significantly contribute to the understanding of attitude formation towards social robots at work, questions regarding the mechanisms of how experiences gained in one’s own interactions with a social robot may alter individuals’ attitudes towards a robot at their workplace remain understudied [6‐10]. Section 2.2 provides an overview of existing applications of mere exposure theory [13, 14] in HRI research and subsequently introduces different social psychological theories describing mechanisms of attitude change through the routes of affect, cognition, and behavior. While these theories show that social psychology offers a wide range of possible mechanisms through which personal experiences may, in general, effectuate changes in attitudes, it is largely unknown how these potential effects may unfold in the specific research setting of this study, that is, individuals personally experiencing a social robot at work. However, these theories provide a fruitful foundation for the explorative investigation of such mechanisms by the empirical approach of this study.
2.1 Empirical Evidence Regarding Antecedents of Robot-Related Attitudes
2.1.1 Factors Influencing the Overall Evaluation of a Social Robot
Central to the overall evaluation of a given stimulus (e.g., a robot) is the construct of valence, which originates in Kurt Lewin’s field theory [24] and describes how individuals assess a given stimulus in a holistic way (positive/negative), thus characterizing it as “something that pulls one towards or pushes one away [from the stimulus]” [25]. Later on, appraisal literature expanded on this construct by highlighting different levels of valence, such as micro-valence and macro-valence [26, 27]. While macro-valence resembles the one-dimensional conceptualization (positive/negative), micro-valence decomposes this overall evaluation of the stimulus into smaller parts, such as, for example, its pleasantness or goal conduciveness [26]. Although this micro-perspective helps assess the underlying mechanism of ‘mixed feelings’ [26], social psychology scholars agree upon the macro-perspective’s pivotal relevance for numerous intentional and behavioral outcomes [28‐30]. Consequently, prior literature has highlighted the importance of macro-valence for overall evaluation results in human-robot interaction [31]. However, terminology is inconsistent, as the very same positive or negative overall evaluation is sometimes labeled as, for example, general attitudes [4, 32], affective attitudes [5, 33], or emotional responses [34] towards a robot. Despite these inconsistencies, prior literature has identified various antecedents of a positive or negative overall evaluation of a robot on either the side of the focal robot or the human.
Robot Side
On the side of the robot, prior research has identified several attributes of robots that influence humans’ overall evaluation, broadly subsumed as behavior, design, and functionality. Regarding robot behavior, scholars have shown that allowing a robot to mimic human-like side-by-side walking can positively influence its evaluation [35]. Similarly, human-like interaction behavior, such as showing politeness [36] and empathy [37, 38], can contribute to a more positive evaluation. Furthermore, a recent study by Kang and colleagues [39] found that robots that give feedback are evaluated more positively than robots giving no feedback at all, largely independent of whether the feedback is provided in a human-like or machine-like manner (e.g., characteristics of acoustic feedback). Concerning robot design, the uncanny valley theory [40] suggested early on that an increasing number of human-like traits would not necessarily lead to a better overall evaluation but might instead trigger a switch from positive to negative evaluation once the robot reached a certain threshold degree of human-likeness. Furthermore, evidence from Kwak and colleagues [41] indicates that object-based robots are evaluated more positively than organism-based robots when focusing on a functional task. Finally, concerning functionality, prior literature has shown that increased perceived usefulness can positively impact overall evaluation [42]. In contrast, a lack of reliability of the robot’s components (e.g., malfunctioning) may have the opposite effect [43]. Among the three previously mentioned attributes (behavior, design, and functionality), existing HRI research has identified individuals’ perception of functionality as particularly important for the evaluation of a robot in a functional or work context [44, 45].
Specifically, scholars have found that in settings where the robot is assigned a given task, users’ perception of robot functionality regarding the fulfillment of the task outshines behavior and design in its contribution to overall evaluation [44‐47]. Thus, existing literature indicates that, in a work context, positive perceptions of robot functionality are the key determinant of a positive overall evaluation of the robot.
Human Side
Not only the robot itself but also the human actor in HRI may possess attributes that influence the human actor’s overall evaluation of a given robot. These attributes can be broadly categorized as situational (i.e., changing over time) or dispositional (i.e., constant over time). Regarding situational attributes, HRI literature has yielded ambiguous insights into how prior HRI experience can influence a robot’s overall evaluation. While experiences with the prototype of a given robot have been identified to positively impact the overall evaluation of its further developed version [48], increasing experiences solely with the finalized version of a robot did not exhibit the same positive effect [42]. Taken together, these studies suggest a need to investigate the experiences in detail to understand why and how experiences with a robot alter its overall evaluation. Additionally, HRI research has highlighted that these experiences are compared against expectations toward the interaction and that the confirmation or lack of confirmation of these expectations plays a crucial role in attitude formation [49, 50]. Further, extant HRI literature has identified antecedents of expectations in this context, such as, for example, previous interactions with social robots [6, 51, 52] or (fictitious) media representation of robots [53, 54], and documented that these antecedents may differ between individuals, and may change over time. Thus, individuals may enter interactions with a social robot equipped with different levels of expected interaction quality; at the extreme, they may even expect this interaction quality to be similar to interaction with humans [55]. These expectations toward the interaction, however, are compared against actual experiences, positively altering an individual’s evaluation of the robot if they are met or negatively if they are not met [52, 56, 57]. 
This mechanism is captured in the recently developed “social robot expectation gap evaluation framework” by Rosén and colleagues [6, 58]. However, given possible differences between user reactions in laboratory and real settings, as well as the influence of the interaction context on expectations [53, 54], further work on the interaction of expectations and experiences in real settings is still needed [6]. Concerning dispositional attributes, Stafford and colleagues have investigated the influence of users’ gender, finding that men tend to have a generally more positive evaluation of robots than women [59]. Furthermore, scholars have argued that users’ cultural background can influence, first, expectations toward robots prior to the interaction, second, experiences made during interaction with robots, and, third, the overall evaluation of a robot [60].
2.1.2 Factors Influencing Perceived Human-Likeness of a Social Robot
Perceiving a social robot as more or less human-like results from a process in which the robot is anthropomorphized [19]. However, existing studies often lack a clear definition of anthropomorphism or provide no explicit definition in the specific study context [61]. While a seminal study by Epley and colleagues describes anthropomorphism as the attribution of human motivations and intentions to the behavior of non-human entities [62], others view anthropomorphism as a multidimensional construct that encompasses various aspects (e.g., appearance and movements) and describes a person’s overall assessment of a robot’s human-likeness [63‐66]. Indeed, several studies have documented the presence of anthropomorphism in settings without observable behavior that could be interpreted as expressing human-like intentions and motivations (or lack thereof) [67, 68]. Therefore, this study aligns with the broader definition of anthropomorphism as a multidimensional construct of perceived human-likeness [63‐65]. Thus, we interpret anthropomorphizing a robot and perceiving it as human-like as strongly aligned, with anthropomorphism being the process itself and perceived human-likeness being the result of this process [63]. As such, prior research has argued and found empirical support for numerous factors on both sides of the dyadic relationship between a robot and a human actor that can influence a person’s tendency to perceive a given robot as either more or less human-like (for a detailed review, see [69]).
Robot Side
On the side of the robot, extant literature has empirically identified two of the three previously mentioned robot attributes as the main strands of features that may increase the human-likeness attributed to a robot: behavior and design [64, 65, 69‐71]. Concerning behavior, a robot is generally considered more human-like if its behavior aligns with human behavioral characteristics, for example, the way it moves [62]. Consequently, a robot is considered more human-like when it can achieve smoother and slower movements through higher degrees of freedom (the number of axes in which movable parts can move), as these are more similar to natural movements performed by humans [69, 72]. For interactive and social behavior, this means that the robot is perceived as more human-like if it allows mutual [70] and predictable [73] interactions and if it signals a certain warmth during interactions [74, 75]. Additionally, perceived human-likeness increases with the perception of behavior that can be acquired through socialization, such as politeness [76] and higher competencies in verbal [77, 78] and non-verbal [79] communication. Regarding robot design, visual appearance has so far received most of the attention and itself comprises several aspects: Phillips and colleagues [80] analyzed the contribution of 18 different visual features (e.g., arms, eyes, wheels) to a human-like perception of the robot and discovered a strong influence of the robot’s surface look and body. This aligns with empirical observations that suggest a stronger influence of the presence of a human-like body compared to other visual features in terms of anthropomorphism [67]. Besides visual appearance, a robot’s haptic appearance and feedback, such as different handshake styles (e.g., passive or active motion performed by the robot) [81, 82], as well as the presence of a human name given to the robot [95], can also influence perceptions of human-likeness [65].
Lastly, the presence and the characteristics of a robot’s voice can impact perceived human-likeness. First, research has shown that the presence of a voice can lead to the anthropomorphizing of technology, while, in contrast, its absence can even lead to dehumanization, a process whereby the source is perceived as machine-like rather than human-like [83]. Thus, robots incorporating a voice as an output channel tend to be perceived as more human-like compared to robots without voice output [84]. Second, concerning voice characteristics, a voice mimicking human voices increases perceived human-likeness [85]. Furthermore, technology incorporating female-sounding voices seems more likely to be anthropomorphized [86]. In sum, while existing literature has identified in a robot’s behavior and design several antecedents of increased perceptions of its human-likeness, questions regarding the role of perceived functionality in this regard remain, to the best of our knowledge, largely unanswered.
Human Side
Even though the robot may incorporate distinct levels of human-like features, prior literature has identified several individual factors on the human side that can influence the overall likelihood of people to “see human” [62] in a robot, which can, similarly to attributes altering overall evaluation, be categorized as either situational or dispositional. Regarding situational attributes, a meta-analysis by Blut and colleagues revealed that higher competence perceptions in interacting with robots and negative attitudes towards robots increased the tendency to anthropomorphize a robot, whereas a need for interaction decreased it [71, 93]. These interpersonal differences in the tendency to anthropomorphize non-human agents have been argued to exist irrespective of the robot’s design as such [88, 89]. Furthermore, anthropomorphism is increased if individuals perceive the robot as an in-group member during the interaction situation [96]. Interacting with the robot over an extended period, in turn, may negatively influence anthropomorphism. Specifically, interaction with a robot over an extended period may shed light on machine-like cues of the robot that separate it from humans [97], such as having to press a button before being able to speak to it [98], cues that may have been outshined by the robot’s social cues and the novelty of such social interactions with robots within the first few interactions [62, 85]. Besides these situational attributes, existing HRI research has also identified several dispositional attributes influencing individuals’ tendency to anthropomorphize a robot. Besides individuals’ cultural background [87], scholars have identified increasing age [71] and an internal locus of control [94] as negatively influencing individuals’ tendency to anthropomorphize, whereas higher scores on the personality traits extraversion [90], agreeableness [91], and openness to experience [92] positively influence it.
2.1.3 Factors Influencing Employees’ Acceptance of a Social Robot at Work
Several models of technology acceptance have been used to assess the acceptance of social robots, such as the technology acceptance model (TAM) [21], the unified theory of acceptance and use of technology (UTAUT) [22], the diffusion of innovation theory [99], and the Almere model [100]. However, due to the novelty of social robot implementation at work, there is a significant dearth of studies investigating factors that influence the acceptance of a focal social robot at a workplace. Specifically, most of the existing empirical literature in the context of social robotics at work either focuses on the acceptance of social robots as a general category of technology [for detailed reviews, see 4, 101] rather than investigating antecedents of the acceptance of a specific robot in a work environment, or isolates antecedents of individuals’ acceptance of one social robot in non-work settings. However, the studies isolating antecedents of individuals’ acceptance of a focal robot—though in non-work settings—have provided important insights into potentially influential robot-side characteristics for acceptance, whereas research on employees’ attitudes toward social robots in general has identified human-side factors that may alter acceptance of a specific robot, too.
Robot Side
Concerning robot characteristics, humans’ acceptance of a robot at work is, similarly to overall evaluation, influenced by the robot’s behavior, design, and functionality. Regarding behavior, Babel and colleagues found that participants preferred a robot that provided task-oriented communication rather than small talk [78]. Furthermore, acceptance of robots that engage in small talk is higher if the robot fixes its gaze on the human and lets the human start the conversation [81]. In contrast, in the case of task-oriented communication, a proactive robot is preferred [81]. Likewise, acceptance of a robot is higher if it can act independently of human employees [15] and thus be useful in the work setting on its own. This also relates to perceptions of functionality, which, in line with general technology acceptance literature [21], have been found to increase the acceptance of a social robot at work [102, 103]. Finally, prior studies have examined the effects of design elements on the acceptance of a social robot in the workplace. Both social cues [104] and the resulting perceptions of humanoid appearance [15] and anthropomorphism [105] can positively impact the acceptance of the robot.
Human Side
On the side of humans, literature has identified situational as well as dispositional attributes that can influence individuals’ acceptance of a social robot. Regarding situational attributes, individuals who have more trust in robots in general and perceive robots as safe have been considered as likely to accept robots at work [104]. Additionally, higher technology readiness [15] and beliefs about the positive societal impact of technology [105] have been found to increase individuals’ acceptance of social robots at work. On the contrary, privacy and data protection concerns [104], perceptions of job-related threats from robots [103], robot bias [15], and insecurity about the impact of technology on society and the world [106] can negatively influence acceptance of social robots at work. Concerning dispositional attributes, scholars have highlighted that increased trait anxiety might hinder individuals from accepting social robots at work [105]. Furthermore, HRI research has pointed out cultural differences in social robot acceptance at work [107]. By comparing the cases of Egypt and Malaysia, Abdelhakim and colleagues [108] found that differences in Hofstede’s cultural dimensions [109] can significantly moderate different processes underlying the acceptance of social robots by fast-food workers as they were proposed by the UTAUT model [22]. For example, they found that uncertainty avoidance positively moderates the link between effort expectancy and behavioral intention [108].
2.2 Theoretical Perspectives on Experientially-Based Changes in Attitudes
2.2.1 Changes in Attitudes Due to Mere Exposure
Research in social psychology has extensively addressed how attitudes toward a stimulus can change due to experiences with the focal stimulus [110]. In a seminal paper, Zajonc (1968) posited positive changes in attitudes resulting from mere exposure to a stimulus [13], an effect that forms the basis of mere exposure theory and that has been empirically documented in prior HRI studies [97, 111, 112]. However, a recent study by Fiolka and colleagues (2024) casts doubt on the universal validity of strictly positive attitudinal changes resulting from mere exposure effects and instead suggests the need for a more nuanced perspective. Specifically, they found construct-specific differences in the mere exposure effect. After seeing a social robot more frequently, participants reported less eeriness—i.e., a change regarding a negatively valenced attitude—but also less human-likeness—a change regarding an unvalenced attitude—thus indicating possible changes in unvalenced attitudes in addition to changes in valenced attitudes [112]. Furthermore, a recent experimental study by Rosén and colleagues suggested that rather than mere exposure, it might be participants’ (past) experiences with robots in general that strongly predict attitudes towards a focal robot [6]. These ambiguous findings regarding the effects of exposure to robots indicate that building upon the mere exposure effect alone might be insufficient to capture the complexity of the underlying mechanisms of attitude change towards a social robot. For example, while past exposure to social robots in general may help individuals to build a mental model of social robots [6], thus potentially altering individuals’ attitudes towards social robots via a cognitive route, it may be insufficient to explain attitude changes towards a focal robot, which may develop over time and can potentially rely on affect, too.
Furthermore, experiences made in HRI consist of more than passive mere exposure; rather, they involve active interaction, raising the question of how specific aspects of the interaction beyond mere exposure can shape and change attitudes towards a social robot.
2.2.2 Changes in Attitudes Through Affect, Cognition, and Behavior
Despite the technological nature of social robots, human-like characteristics of this technology blur the boundaries between human interaction and interaction with a social robot [113, 114]. This core feature of social robot technology implies that social psychological theories, initially developed for human interactions, may also apply to the field of human-robot interactions [17]. To understand how attitudes can change through personal experiences, social psychology has developed a fine-grained view by decomposing attitudes into affect-based, cognition-based, and behavior-based attitudes [115]. Changes in the respective domain regarding a stimulus (e.g., affect) can cause changes in the interrelated attitudes towards this stimulus (e.g., affect-based attitudes) [103]. Still, they may also spill over onto attitudes that are initially rooted in a different domain (e.g., cognition-based attitudes) [116, 117]. Based on this decomposition of attitudes into affective, cognitive, and behavioral components, various theories facilitate understanding of human-human interaction with respect to the three different components. In the past, HRI research, particularly in the field of social robotics, greatly benefited from exploring the transferability of social psychological theories to HRI. We, therefore, believe that the following social psychological theories on attitude change could enable us to develop a more fine-grained view on the changes in attitudes towards a focal social robot and thus offer valuable contributions to HRI literature.
First, regarding affect, studies have shown that positive affect associated with a stimulus can positively influence attitudes towards this stimulus, an effect referred to as the “affect heuristic” [118]. If individuals experience positive emotions in the presence of a stimulus (e.g., a robot), their overall attitude towards the stimulus (e.g., the robot) becomes more positive [118]. Prior research has shown that this affective route is particularly likely to be triggered if the individual possesses limited knowledge about the type of stimulus [119]. Thus, for individuals with limited experience in human-robot interaction, this might mean that the valence of emotional responses during HRI with a specific robot (e.g., enjoying vs. disliking the interaction) could exert a same-directional effect on attitudes towards the robot.
Second, from a cognitive perspective, attitudes can be characterized as object-based evaluations [120]. To explain how these object-based evaluations can be changed, the Elaboration Likelihood Model (ELM) posits that if provided with the necessary motivation and ability to process new information, individuals alter their cognition-based attitudes according to the favorability of the latest information [121]. Applied to the field of HRI, direct experiences with a robot can significantly increase individuals’ information about the robot [98]. Therefore, direct experiences could allow them to re-evaluate the robot based on this new information. This process could then alter existing attitudes that rely on obsolete information or mere assumptions about the robot. In addition to replacing or altering existing information, obtaining new information about the robot can also reduce uncertainty regarding it (e.g., its abilities, functionality, and limitations). The role of individuals’ perception of uncertainty during interaction with a robot has been examined extensively in HRI research. In new interaction situations, such as initial contact with social robots [53], individuals often experience uncertainty regarding their interaction partner. Subsequently, they seek to reduce this uncertainty by collecting information about their interaction partner [122, 123] to predict future behavior [124]. Besides collecting information through interaction, prior research has found that, if individuals do not possess adequate scripts for interaction with a given technology, perceiving human-like cues during interaction can also lead to the activation of interaction scripts from human interaction. This activation, in turn, can enhance the perceived predictability of the interaction partner’s behavior and thus further reduce uncertainty [125]. 
Past HRI research has identified various attitudinal consequences of uncertainty reduction through personal interaction with a robot, such as a reduced tendency to anthropomorphize the robot [50, 126], reduced feelings of social presence [55], the emergence of trust [127, 128], and altered acceptance [129‐131, 134‐137]. However, whereas findings regarding anthropomorphism, social presence, and trust are relatively consistent, the literature on acceptance has yielded mixed results. Literature examining the effect of uncertainty reduction on acceptance while focusing on the collaborative nature of the relationship between human and robot (e.g., working together) has shown that a reduction in uncertainty—hence, enhanced predictability of the robot’s behavior—can positively influence robot acceptance [129‐131]. However, scholars focusing on the social aspects of HRI have argued that uncertainty during a social interaction with a robot can—to some extent—be desirable, as it aligns with the expectations evoked by its anthropomorphized design and keeps the (social) interaction engaging [132, 133]. Consequently, literature that focuses on the social nature of the relationship between human and robot has indicated that uncertainty reduction may also exert a negative effect on acceptance [134‐137].
Finally, cognitive dissonance theory [138] can offer possible explanations for attitude changes through the route of behavior performed in previous HRI experiences. Cognitive dissonance theory suggests that conflicts in individuals’ attitudes (e.g., negative attitudes towards social robots) and actual past behavior (e.g., interacting with a social robot) create an internal tension that individuals try to resolve by aligning their attitudes accordingly [139]. In the context of individuals who have previously engaged in HRI, this could mean that they positively alter their attitudes towards the robot, and thus perceive it as “worthy” of interaction in order to justify their past behavior.
3 Methods
3.1 Empirical Design
To address our research question, we exploited a natural experiment-like setting enabled by a collaboration with a local organization. The organization employs a total of 110 staff members and provides various services for students, including the operation of canteens, housing, and other advisory services, at three different locations (‘sites’). The organization sought to implement a social robot (‘Pepper’ by Softbank Robotics) and announced this implementation—at the largest of the three sites—to all employees six months in advance, along with the information that it would be accompanied and evaluated by a research team. The subsequent six-week deployment of the robot at this site, where approximately 50% of the employees work, thus represented what we interpret here as a stimulus with respect to employees’ robot-related attitudes. While all employees received the announcement of Pepper’s implementation, only employees who worked at the deployment site could directly interact with the robot during their work shifts; they constitute what we refer to as the ‘treatment’ group. Employees who worked at the other two sites, which are located in different parts of the same city, had no opportunity to see the robot or to interact with it; they form what we refer to as the ‘control’ group and were thus not at risk of encountering Pepper by mistakenly entering the wrong building.
During the six-week deployment period, Pepper was placed at various locations at the focal site throughout each workday and initiated a conversation whenever it detected a human nearby (closer than three meters). At the beginning of each day, Pepper was positioned next to the time clock, at which employees at the deployment site were required to check in when starting their shift (see Fig. 1a). Each member of the ‘treatment’ group therefore encountered Pepper in the morning and interacted with the robot—at least at a baseline level—over the course of the deployment period. Later each day, Pepper was positioned in the employee break room (see Fig. 1b). During mealtimes, Pepper was deployed alongside employees in customer-facing roles in the canteen (see Fig. 1c for an example), with specific use cases varying daily (e.g., asking for customer feedback and presenting the day’s meals). Table 1 provides a comprehensive overview of Pepper’s locations, abilities, and tasks throughout the day during the deployment period.1
Fig. 1
Pictures of the robot deployment
Table 1
Overview of Pepper’s deployment
Time | Place | Function | Abilities1
---|---|---|---
Constant on each day of the deployment period | | |
06:00 a.m. – 08:30 a.m. | Time clock | Welcome arriving employees | • Greeting employees with “Good morning” as they approached the time clock. • Inquiring about the quality of their sleep and subsequently engaging in interaction based on their response. • Engaging in casual small talk.
08:30 a.m. – 11:30 a.m. | Employee break room | Entertain employees | • Proposing an interaction when Pepper detects a human. • Extensively introducing itself with a short presentation. • Playing iconic rock songs (e.g., “Thunderstruck”) while mimicking playing an air guitar. • Playing mini games with employees (e.g., memory). • Dancing to music when employees request it. • Telling jokes. • Engaging in advanced small talk.
Daily rotation in irregular sequence2 | | |
11:30 a.m. – 02:00 p.m. | Canteen entrance | Welcome arriving guests | • Providing information about the daily canteen offerings. • Offering the possibility to take a selfie with Pepper.
 | Food counter | Entertain guests during waiting time | • Offering the opportunity to play games with Pepper or to tell a joke. • Dancing to music or playing iconic rock songs.
 | Self check-out | Provide help with self-checkout | • Providing information about the meal pricing. • Providing information about the self-checkout procedure.
 | Dining area | Entertain guests at the tables | • Offering the opportunity to play games, tell a joke, and dance. • Extensively introducing itself with a short presentation.
 | Dishes return | Explain the process of dish return | • Providing information about the arrangement of dishes when returning them.
 | Exit | Farewell and soliciting feedback | • Proactively bidding farewell to guests as they enter the exit area. • Asking for a ranking of the canteen experience (1 to 5) on the display.
3.2 Data Collection and Sample
Following the robot’s deployment period, employees from all sites were invited to share their experiences with the research team in semi-structured interviews, into which a picture ranking task was embedded. Due to the highly fragmented organizational structure, the disclosure of even basic socio-demographic information in combination with employees’ experiences with the robot (e.g., at a specific place within the company) could have made it possible to identify a respondent. Given the natural setting and the existing relationship of our respondents with the company as their employer—which might have influenced employees’ answers if those answers could be traced back to them—we wanted to ensure full anonymity for our respondents. Therefore, we took several measures: First, we chose not to collect socio-demographic information (e.g., age, tenure, functional role)2. Second, we scheduled data collection without asking for the disclosure of personal information (e.g., names). Third, the data collection was double-blind and conducted by individuals hired explicitly for this task who had had no prior contact with the company or its services. Finally, the data collection was arranged such that respondents would not encounter their colleagues or individuals from the research team. During data collection, we presented participants with a picture ranking task, that is, ten standardized pictures of different social robots. We asked them to rank these robots according to their (1) overall evaluation, (2) human-likeness, and (3) acceptance at work. To select the pictures, we followed the approach of Phillips and colleagues [72] by using the robot picture database ABOT3. 
Only robots with a complete body were used for this purpose, with one robot selected per decile of ABOT’s overall human-likeness scale, resulting in ten pictures of social robots with distinct overall human-likeness scores spanning the entire range of the scale4. To increase image comparability, all pictures were resized to identical dimensions and converted to black and white [140]. We showed participants the images three times and asked them to rank them on semantic differentials with varying endpoints. First, we asked them to rank the robots according to their overall evaluation from “most negative” to “most positive.” Subsequently, we shuffled the pictures and asked them to rank the robots according to their human-likeness from “least human-like” to “most human-like.” Finally, to capture participants’ acceptance of the robots at work, we asked them to rank the robots from “least like to have that at work” to “love to have that at work.” Fig. 2 shows the complete instrument, including pictures of the robots. After the picture ranking task, we collected qualitative interview data regarding participants’ experiences with the robot to further shed light on their ranking of the Pepper robot.
Fig. 2
Measurement instrument for the picture ranking task
Table 2
Overview of respondents
Code | Human-robot interaction (yes/no) | Sex | Length (min) | Pepper’s rank1 (overall evaluation2) | Pepper’s rank (human-likeness) | Pepper’s rank (acceptance at work)
---|---|---|---|---|---|---
IN1 | No | male | 15:48 | 10th | 4th | 10th |
IN2 | Yes | female | 32:37 | 10th | 8th | 10th |
IN3 | Yes | female | 27:37 | 8th | 6th | 9th |
IN4 | Yes | female | 15:54 | 9th | 5th | 8th |
IN5 | No | female | 17:06 | 6th | 5th | 5th |
IN6 | No | female | 22:15 | 6th | 5th | 7th |
IN7 | No | female | 14:21 | 5th | 6th | 5th |
IN8 | Yes | female | 20:20 | 5th | 6th | 6th |
IN9 | Yes | male | 29:10 | 9th | 5th | 10th |
IN10 | Yes | male | 15:38 | 9th | 4th | 3rd |
IN11 | Yes | female | 21:57 | 10th | 10th | 10th |
IN12 | No | female | 14:01 | 10th | 7th | 8th |
IN13 | No | female | 22:30 | 9th | 1st | 5th |
IN14 | Yes | male | 18:55 | 9th | 5th | 6th |
IN15 | No | female | 18:22 | 9th | 6th | 9th |
IN16 | No | male | 16:10 | 5th | 5th | 6th |
IN17 | No | male | 17:53 | 5th | 5th | 7th |
IN18 | No | female | 21:51 | 8th | 5th | 8th |
IN19 | Yes | female | 30:01 | 6th | 5th | 6th |
IN20 | Yes | male | 25:47 | 7th | 5th | 7th |
IN21 | Yes | female | 25:49 | 10th | 5th | 10th |
IN22 | No | male | 16:45 | 6th | 5th | 4th |
IN23 | No | female | 26:03 | 8th | 5th | 6th |
IN24 | Yes | male | 25:05 | 9th | 7th | 8th |
IN25 | No | male | 18:52 | 5th | 5th | 4th |
IN26 | No | male | 22:50 | 6th | 5th | 6th |
IN27 | Yes | female | 26:38 | 8th | 8th | 7th |
IN28 | No | female | 19:25 | 7th | 6th | 8th |
IN29 | Yes | female | 26:51 | 7th | 5th | 7th |
IN30 | No | female | 21:40 | 2nd | 6th | 5th |
IN31 | No | female | 16:26 | 6th | 5th | 4th |
Approximately 30% of the company’s staff from the different sites participated in our data collection. In line with our research question, we categorized them based on whether they had personally interacted with Pepper (N = 14) or not (N = 17). The duration of the data collection varied by participant, ranging from 14 to just under 33 min. Table 2 provides an overview of the respondents and their rankings of Pepper on the different dimensions.
3.3 Data Analysis
The data analysis of the present study employs a mixed methods approach by enriching the quantitative data of the picture ranking task with qualitative data for a subset of respondents—those who had interacted with Pepper prior to completing the picture ranking task—in order to shed light on experience-based mechanisms underlying the formation of robot-related attitudes. As the task of ranking the different robots according to their overall evaluation, human-likeness, and acceptance at work produced ordinal rank data, we used SPSS v28 to conduct Mann-Whitney U tests to assess differences in the constructs depending on participants’ contact with Pepper [142]. We utilized Rosenthal’s approach to calculate the standardized effect size r by dividing the z-scores by the square root of our sample size [143]. Furthermore, we calculated Spearman’s rho to illustrate the relationships between the three constructs [144]. For the qualitative data (interviews with the 14 employees who reported having interacted with the robot themselves), we inductively coded the interviews with MAXQDA 2022 and analyzed them using thematic analysis to identify recurring themes within participants’ responses regarding their perceptions of and attitudes towards the robot [141]. Examples of inductively generated codes are experienced fun, cuteness, appearance, behavior, functionality, and “cool and funny.”
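To make the effect size computation concrete, the following is an illustrative pure-Python sketch (not the authors’ SPSS procedure) of Rosenthal’s r = z/√N for a Mann-Whitney U test, using the normal approximation for z without tie correction; function names and the example data are our own and purely hypothetical:

```python
import math

def midranks(values):
    # Average (mid) ranks, 1-based; tied values share their mean rank.
    s = sorted(values)
    return [s.index(v) + (s.count(v) + 1) / 2 for v in values]

def mann_whitney_r(group_a, group_b):
    # Rosenthal's effect size r = z / sqrt(N) for a Mann-Whitney U test,
    # with z taken from the normal approximation (no tie correction).
    n1, n2 = len(group_a), len(group_b)
    ranks = midranks(list(group_a) + list(group_b))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2   # U statistic of group A
    mu = n1 * n2 / 2                           # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    return z / math.sqrt(n1 + n2)

# Illustrative data (not the study's): Pepper's rank in each group.
print(round(mann_whitney_r([1, 2, 3], [4, 5, 6]), 3))  # -> -0.802
```

Negative r indicates the first group assigns systematically lower (i.e., more favorable, given the ranking direction) ranks than the second.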
4 Results
4.1 Picture Ranking Task
We employed Mann-Whitney U tests (Table 3) to examine differences in the evaluation of Pepper, comparing individuals with and without personal interaction with the robot [126]. Results show significant differences in overall evaluation (r = − 0.40, p = 0.028) and marginally significant differences in acceptance at work (r = − 0.35, p = 0.054). However, the results do not indicate significant differences in perceived human-likeness (r = − 0.21, p = 0.239). All robots’ human-likeness scores were taken from the ABOT database. According to this normed score, Pepper should be ranked fifth among the robots included. To examine whether the groups over- or under-perceived Pepper’s human-likeness, we additionally compared each group’s rankings against this normed rank of five. Results show that people who had themselves interacted with Pepper ranked it significantly higher in human-likeness than its normed rank of five (r = − 0.37, p = 0.041), whereas the ranking of those without such interaction experiences did not significantly differ from the normed rank (r = − 0.13, p = 0.483).
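The paper does not name the test used for the comparison against the normed rank; a one-sample Wilcoxon signed-rank test is one standard choice for such a comparison, sketched below with hypothetical data and a hypothetical function name, again reporting the effect size as r = z/√n via the normal approximation:

```python
import math

def midranks(values):
    # Average (mid) ranks, 1-based; tied values share their mean rank.
    s = sorted(values)
    return [s.index(v) + (s.count(v) + 1) / 2 for v in values]

def signed_rank_r(sample, normed_rank=5):
    # One-sample Wilcoxon signed-rank test against a fixed value.
    # Zero differences are dropped; ties in |d| receive midranks.
    d = [x - normed_rank for x in sample if x != normed_rank]
    n = len(d)
    ranks = midranks([abs(v) for v in d])
    w_pos = sum(r for r, v in zip(ranks, d) if v > 0)  # W+ statistic
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mu) / sigma
    return z / math.sqrt(n)  # effect size based on non-zero differences

# Illustrative: a group that consistently ranks Pepper above the normed 5.
print(round(signed_rank_r([6, 7, 8, 9]), 3))  # -> 0.913
```

A positive r here means the sample ranks the robot above the normed rank; a sample scattered symmetrically around five yields r near zero.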
Table 3
Differences in Pepper’s ranking in the picture ranking task
 | Personal interaction vs. no interaction | | | Personal interaction vs. normed score | | | No interaction vs. normed score | |
 | z | r | p | z | r | p | z | r | p
---|---|---|---|---|---|---|---|---|---
Overall evaluation | − 2.20 | − 0.40* | 0.028 | - | - | - | - | - | - |
Human-likeness | − 1.18 | − 0.21 | 0.239 | − 2.05 | − 0.37* | 0.041 | − 0.70 | − 0.13 | 0.483 |
Acceptance at work | − 1.93 | − 0.35† | 0.054 | - | - | - | - | - | - |
Besides inter-group differences based on interaction experience (or lack thereof) with the robot, we also calculated Spearman’s rho (rs) to assess the relationships between the variables [144], displayed in Table 4. Results show a highly significant relationship between overall evaluation and acceptance at work (rs = 0.66, 95% BCa CI [0.384, 0.824], p < 0.001), as well as a significant relationship between perceived human-likeness and acceptance at work (rs = 0.37, 95% BCa CI [0.004, 0.645], p = 0.042). Interestingly, there is no significant relationship between perceived human-likeness and overall evaluation (rs = 0.08, 95% BCa CI [−0.296, 0.428], p = 0.683).
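A bootstrapped confidence interval for Spearman’s rho can be sketched as follows; note that this illustrative implementation uses the simpler percentile method rather than the bias-corrected and accelerated (BCa) interval reported above, and all names and data are our own assumptions:

```python
import math
import random

def midranks(values):
    # Average (mid) ranks, 1-based; tied values share their mean rank.
    s = sorted(values)
    return [s.index(v) + (s.count(v) + 1) / 2 for v in values]

def spearman_rho(x, y):
    # Spearman's rho = Pearson correlation of the midrank-transformed data.
    rx, ry = midranks(x), midranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

def percentile_ci(x, y, n_boot=2000, alpha=0.05, seed=7):
    # Percentile bootstrap CI for rho (simpler than BCa). Resamples in
    # which a variable is constant are skipped, as rho is undefined there.
    rng = random.Random(seed)
    n, reps = len(x), []
    while len(reps) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ys = [x[i] for i in idx], [y[i] for i in idx]
        if len(set(xs)) > 1 and len(set(ys)) > 1:
            reps.append(spearman_rho(xs, ys))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

For perfectly concordant rankings spearman_rho returns 1.0, and the interval endpoints always lie within [−1, 1]; with samples as small as here (N = 31), the BCa correction used in the paper matters, which is why this percentile version is only a sketch.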
Table 4
Descriptive information and correlations
 | Pepper’s rank | | | | Correlations | |
 | Min. | Max. | Mean | SD | 1 | 2 | 3
---|---|---|---|---|---|---|---
1. Overall evaluation | 2 | 10 | 7.39 | 2.03 | - | - | - |
2. Human-likeness | 1 | 10 | 5.48 | 1.50 | 0.08 | - | - |
3. Acceptance at work | 3 | 10 | 6.90 | 2.02 | 0.66** | 0.37* | - |
4.2 Complementary Qualitative Data on Mechanisms
As the picture ranking task revealed inter-group differences based on interaction experience (or lack thereof) with the robot, a subsequent analysis of the complementary qualitative interview data, focusing on the 14 employees who reported having interacted with the robot themselves, digs deeper and identifies possible reasons that link the observed ranking differences to specific features of individuals’ experiences. During the interviews, participants expressed thoughts that added meaning and context to the interpersonal differences observed in the picture ranking task in the robot’s overall evaluation, perceived human-likeness, and acceptance at work. Figure 3 displays the interaction experience-based factors extracted from the interviews that either increased or decreased overall evaluation, acceptance at work, and perceived human-likeness.
Fig. 3
Interaction experience-induced factors influencing overall evaluation, acceptance at work, and perceived human-likeness of the focal robot
4.2.1 Effects of Personal Human-Robot Interactions on Overall Evaluation
Participants expressed various ways in which their interactions with the robot had altered their overall evaluation of it, either positively or negatively. However, consistent with the results of the picture ranking task, most expressions indicated that personal interaction with Pepper had induced a more positive overall evaluation. Specifically, from the qualitative data, several reasons emerged that, according to participants’ accounts, might have given rise to the resultant evaluation.
Fun associated with the robot. In the interviews, several participants highlighted that they had enjoyed interacting with the robot because it was fun for them, as one employee described:
I was really curious to see how it would turn out, and I just had a lot of fun, especially in the mornings when arriving. [IN19]
Another employee expanded on this thought and pointed to the conversation style of the robot as an antecedent of the fun and enjoyment experienced in the interactions:
The responses were very friendly, and sometimes a bit, I would say, with a bit of a playful tone. There were some teasing answers as well. It was quite amusing. [IN29]
Politeness of the robot. In a similar vein, many interviewees emphasized the politeness of the robot as a key driver of their positive overall assessment. One employee described in detail that she particularly appreciated the interest shown by the robot:
Just a normal positive feeling, you know? I was impressed by what he shared and how politely he asked if I had slept well. I actually found this quite nice. [IN3]
However, despite the employees describing the robot as polite, one of them highlighted a sense of uncertainty about the robot’s further emotional capabilities. Specifically, she explicitly expressed being able to imagine less friendly forms of interaction with the robot than those that she had experienced:
He might be able to be angry too, I don’t know. I haven’t experienced that. He was always friendly. [IN11]
Cuteness of the robot. In addition to these characteristics of interactions with the robot, the robot’s appearance was also repeatedly mentioned in combination with a positive overall evaluation. In particular, respondents suggested that the childlike appearance of Pepper created a sense of cuteness, or as one employee described the robot:
Quite cute, like just a little playful child. [IN19]
Stress reduction. Beyond aspects directly related to interactions with the robot, some respondents also mentioned more indirect positive consequences of those interactions, reinforcing the impression of an overall positive assessment of the robot, due to, for example, such effects as a reduction in work-related stress:
Well, I have to say, sometimes, on certain days, I actually found it quite nice when I chatted with him for a few minutes. It was helpful for stress relief, especially on days when things were a bit hectic. [IN9]
Being fascinated by the robot. For most employees, encountering Pepper in their workplace represented their first contact with a social robot. The novelty of this experience might have contributed to their being intrigued by this particular robot. This sense of fascination may, in turn, have created an impression of the robot that lasted beyond the individual interaction, thereby positively altering the valence of the overall evaluation, or as one employee expressed it:
Personally, I was actually a bit fascinated by Pepper. But I was also incredibly distracted by it. […] Actually, he reminded me of movies everyone knows, and you think, ‘Well, it’s just a movie,’ but now it’s a bit real. [IN27]
Being disappointed by the robot. However, there were not only positive ways in which personal experience appeared to have affected the valence of the overall evaluation. Some employees indicated that interacting with the robot had also revealed the limits of its functionality. Interestingly, though, most of those participants who alluded to this issue qualified this seemingly negative statement right away and explicitly stated that they believed that the resulting awareness of the limits of Pepper’s functionality did not influence their overall evaluation of the robot. A few interviewees, however, acknowledged that it might have adversely affected the overall evaluation, as the following statement shows:
Well, at the beginning, it was interesting, and then I quickly lost interest. [IN10]
Feeling of impersonality. In line with this somewhat negative stance, another employee indicated that the interaction with the robot did not result in exclusively positive consequences such as fun, enjoyment, or being fascinated. Instead, the interaction triggered negative feelings, too. As such, she expressed a sense of impersonality that she perceived in the interaction, or as she put it:
[I felt] actually not so good. Because for me, it’s impersonal. [IN24]
4.2.2 Effects of Personal Human-Robot Interactions on Perceived Human-Likeness
The results of the picture ranking task indicated that employees who had personally interacted with the robot perceived it as more human-like than we would have expected based on its normed value (i.e., the human-likeness score from the ABOT database). The qualitative data suggest several reasons that could explain the increased perception of human-likeness, such as the robot’s appearance and behavior. However, it also became apparent that direct contact with the robot can be a double-edged sword for this perception, with functionality and employees’ expectations regarding functionality playing key roles during an interactive episode:
Interaction like a human. On the one hand, it became clear that Pepper exceeded some employees’ expectations regarding its functionality. These employees had not previously had any contact with a social robot. The robot’s ability to smoothly master social interaction—something perceived by many interviewees as an inherently human skill—seems to have surprised many of them. This surprise may have subsequently led to an increased perception of human-likeness, as described by one employee:
Almost frighteningly human. So, I imagined it differently. Frightening in a positive sense. It’s already very human, if I can say that. I was surprised by what’s all inside such a robot, what it can do. [IN2]
Malfunction like a machine. On the other hand, there were also employees whose statements suggest that the lack of functionality that they perceived in some of their interactions with the robot made it particularly salient to them that this was a robot pre-programmed for (at least partially anticipated) interaction rather than a human (or human-like) actor, able to adapt to entirely novel interactive situations. This impression arose in particular when the programming was perceived to be failing, as described by one person:
So, I see it as a machine. (…) Well, depending on what I said, he didn’t really function properly. That’s why I think he probably has to be programmed for many things he should or must do. [IN8]
Perception of warmth. Some employees mentioned that the robot’s behavior, movements, and interaction style radiated a certain warmth, which reduced a possible feeling of alienation and coldness that they might have feared to prevail when interacting with a robot:
Pepper, for example, comes across as very human. He is empathetic, and in that sense, he doesn’t seem cold. He doesn’t seem cold. I think that’s the best description. [IN19]
Human-like cues. Finally, the Pepper robot has been designed to imitate a childlike appearance [145]. Respondents in our study did indeed perceive it in this way and mentioned its childlike appearance as a facilitator of their perceptions of human-likeness. In combination with the robot’s capability to interact, some employees described the robot not as a robot but as another person:
One almost feels like it’s a little person with artificial intelligence, right? [IN3]
Besides other characteristics such as its voice, the robot’s eyes, which simulate human blinking through irregular flashing, were emphasized in their pivotal role in inducing perceived human-likeness:
Pepper’s eyes. Pepper’s eyes, voice, and size made it seem so childish, cute, and adorable to me. [IN27]
Relatedly, another respondent emphasized the significance of her personal experience of eye contact for perceiving the human-likeness of the robot, despite being aware of the difference with humans:
It’s just like with humans. Well, a robot is a machine. When you look into its eyes, it doesn’t react like a human. But it’s still a bit different than if you haven’t seen it at all. [IN11]
4.2.3 Effects of Personal Human-Robot Interactions on Acceptance at Work
The results of the picture ranking task showed marginally significant differences in acceptance at work: Employees with robot interactions during the deployment period ranked the robot higher in workplace acceptance than those without any such interactions. Again, some explanatory mechanisms emerged from the interviews that might illuminate this difference, although not entirely without ambiguity. As with perceived human-likeness, more skeptical experience-driven assessments were mainly rooted in how employees perceived the robot’s functionality.
Perception of benefits. The most frequently mentioned aspect promoting acceptance was the perception of the benefits of using Pepper in the workplace. Employees made it clear that the perception of these benefits was the result of a process during the deployment period, starting without a concrete idea and mostly ending with a positive view:
At the beginning, I thought, “Well, what’s he going to do there, what can he really achieve, and so on?” But when the background was clear, for example, that he is also meant to create good vibes in the employee areas, which really happened. [IN19]
Interestingly, this positive development in terms of assessment seemed to apply not only to Pepper but appeared to (at least partially) spill over into a broader, overarching attitude towards robots in general in the workplace, as illustrated, for example, by the following statement:
And well, in general, I would say I now look at things in relation to robots a bit differently. Because I see there is some potential there, and I already have a somewhat different attitude towards the robot. [IN9]
Reduction of robot-related uncertainty. Additionally, employees with experiences of personal interaction with the robot also attributed positively altered opinions regarding robot acceptance to the reduction of uncertainty related to the robot. Furthermore, several interviewees indicated that prejudices had prevailed, in particular following the announcement of the robot’s imminent deployment, but that these prejudices and reservations were ultimately alleviated at least to some extent by employees’ personal experiences with the robot, as summarized by one respondent:
So, I would say that they have definitely changed for the better. The acceptance has increased, and even if there were prejudices, I would say they have definitely diminished. Because at first, when they say robots are coming, they will be deployed here and there, you don’t really have a clear idea of how it will actually work. But I believe that over the weeks or months, you really got to see it more, and speaking for myself, I have definitely gained a positive impression. [IN2]
Recognition of non-threatening nature of the robot. One of the prejudices, which initially represented an obstacle to forming positive attitudes, was referred to explicitly by respondents, as well as how interacting with the robot mitigated this concern. Specifically, employees reported that they realized that the robot’s purpose could be augmenting them rather than replacing them:
So, when Pepper was deployed with us, I noticed that it was not meant to replace any employees. [IN2]
Lack of functionality. As with the first two constructs—overall evaluation of the robot and perceived human-likeness—perceptions of the robot’s functionality again constituted an essential factor that adversely impacted the third construct, that is, workplace acceptance. However, statements suggest that this assessment was not final and that further development of the robot model could lead to a further increase in acceptance:
If he were to be further developed on the communicative level for informational tasks, like at the cash register, for example, it could make sense. But not with this model. And not as it is now. [IN20]
Consequently, the core mechanism behind the increased acceptance at the workplace of this specific model of Pepper might be rooted in different, task-unrelated attitudes towards it, as one employee described:
Oh, he’s a funny little guy, like an attraction. But as assistance, I couldn’t really see him that way, especially due to his small size. [IN14]
5 Discussion
In the present study, we examined differences in employees’ ranking of a social robot that was implemented at their workplace on the dimensions of the robot’s overall evaluation, perceived human-likeness, and acceptance at work, depending on whether or not they had interacted with the robot themselves. Furthermore, we sought to uncover experience-driven antecedents of employees’ distinct attitudes towards the robot by additionally conducting a qualitative analysis of interview data from employees who had interacted with the robot. By exploiting a natural experiment-like setting, we were able to identify group differences between employees who had the possibility to directly interact with Pepper and those who lacked this possibility. Thus, we contribute to the HRI literature by answering recent calls for a qualitative investigation of experience-driven mechanisms in changes in robot-related attitudes [6] and for observing users’ reactions in their natural environment (e.g., their workplace) [146]. We were interested in (1) whether we would observe differences in employees’ attitudes depending on their personal experiences with the robot in their work setting (captured by the picture ranking tasks) and (2) how any such differences—should they emerge—would relate back to employees’ specific experiences (captured by the qualitative analysis of interview data). Results show that employees with personal experience through interacting with the robot ranked it significantly more positively and reported significantly higher acceptance of it at work than their colleagues who had not had the opportunity to interact with Pepper. Furthermore, employees in the former group ranked Pepper significantly higher in human-likeness than the rank suggested by its normed human-likeness score, thus overestimating its human-likeness. 
Finally, a thematic analysis of qualitative data revealed different experience-driven factors that shaped employees’ attitudes towards the social robot, which we discuss in detail below in relation to prior literature.
5.1 Personal Human-Robot Interaction Experiences and Overall Evaluation
Results from the picture ranking task showed that personal experiences with the robot influenced employees’ overall evaluation of it: individuals who had interacted with Pepper at work ranked it significantly more positively. Building upon this observation, findings from the qualitative interviews shed light on how experiences positively influenced employees’ overall evaluation of the robot. First, in line with prior literature, respondents highlighted the effect of the robot’s interaction behavior, such as being polite, on their overall evaluation [36]. Second, participants indicated that a feeling of being intrigued increased, whereas a feeling of disappointment decreased, their overall evaluation of the robot. Both of these feelings result from incongruence between their expectations towards interactions with the robot, on the one hand, and their actual experiences in these interactions, on the other [6, 58]. As such, they provide further empirical evidence for the importance of alignment between expectations and experiences for positive evaluations of social robots [52, 56, 57]. Strikingly, however, in view of the extant empirical literature, respondents in our study did not mention the robot’s functionality as an influence in its own right on their overall evaluation; instead, they referred to it solely in the context of prior expectations (being intrigued/disappointed). Given prior empirical evidence that has strongly emphasized the importance of a robot’s functionality for the overall evaluation of social robots in function-based or work-related settings [44‐47], the absence of a direct relationship in the accounts of our interviewees is surprising. Instead, results show that employees focused on other robot-side attributes, such as behavior and design, which shaped their overall evaluation of the robot.
Specifically, respondents expressed that their experiences activated affective routes, which may offer valuable extensions to the question of how employees evaluate a social robot in a work setting. Regarding robot design, respondents did not mention Pepper’s human-like appearance as an influence on their evaluation but rather the positive influence of Pepper’s cuteness as a distinct design choice. The results thereby indicate the possibility of a “cute response” [147] towards social robots. Cute responses, which have primarily been examined in the context of pets, describe unconscious reactions triggered by infantile features (e.g., large eyes) that are universally appealing to humans [147]. While there has been some (limited) research suggesting that a robot’s cuteness may influence humans’ reactions towards it [148, 149], the present study is, to the best of our knowledge, the first to detect a cute response to a social robot in a work setting, thus implying the presence of antecedents of the overall evaluation of a robot in a work setting that go beyond work-related factors such as functionality. In line with this notion, participants described that other affective reactions during the HRI (fun, stress reduction, or a feeling of impersonality) caused by Pepper’s behavior also spilled over into an overall more positive or more negative picture of Pepper. This finding provides, to the best of our knowledge, first empirical evidence for the presence of the affect heuristic in the context of HRI with a social robot at work; the affect heuristic states that the valence of an affect experienced during interaction with a stimulus can exert same-directional effects on broader attitudes towards this stimulus [118].
Thus, although a positive relationship between direct experiences with a social robot and a positive evaluation has been found in earlier HRI literature, studies have also indicated a strong relationship between positive evaluations and perceptions of functionality when a robot is evaluated in a functional or work setting [44‐47]. The results of the present study, which analyzed a real work setting, thus contribute to the HRI literature by highlighting potential mechanisms—cute responses caused by the robot’s design and an affect heuristic triggered by the robot’s behavior—that positively influence employees’ evaluation of a social robot in a work setting independently of perceived functionality, thereby adding novel insights to research on HRI at work. Based on these results, we encourage future research to, first, investigate in depth potential interaction effects of employees’ functionality perceptions and affective reactions on their overall evaluation of a social robot at their workplace. Second, we encourage future research on employees’ overall evaluation of a focal robot from a functional perspective to explicitly consider and control for potential affect-based mechanisms that may confound effects on overall evaluation.
5.2 Personal Human-Robot Interaction Experiences and Perceived Human-Likeness
The interpersonal differences in the ranking of Pepper’s picture on the semantic differential capturing perceived human-likeness revealed that those with personal interaction experience significantly overrated Pepper’s human-likeness, unlike employees who lacked this experience. That is, experiences of interacting with Pepper led employees to perceive it as overly human-like, thus answering in the affirmative the question of whether personal experiences can influence the perceived human-likeness of a focal robot in a work setting. To identify potential reasons for this observation, results from the qualitative interview data provided valuable insights into how experiences with Pepper shaped its perceived human-likeness. In line with prior literature, respondents indicated that observing human-like cues, such as a child-like body [67], eyes [72], and voice [71, 85], and perceiving human-like interaction characteristics, such as warmth [74, 75], positively influenced their perception of Pepper’s human-likeness. Furthermore, some participants indicated a feeling of social presence during the interaction with Pepper [150, 151], as illustrated by their mentioning the feeling of encountering ‘almost a little person’ when interacting with it. Additionally, we observed surprising results regarding the influence of participants’ perceptions of Pepper’s functionality on its perceived human-likeness. For instance, some employees highlighted that a broad scope of functions, in terms of being able to engage in a proper conversation, not only surprised them but also made them perceive the interaction as very natural and human-like. On the other hand, some participants highlighted that malfunctioning, such as the robot not understanding certain specific questions, reminded them of the technological nature of their interaction partner, thus decreasing their perceptions of its human-likeness.
Moreover, some participants reacted to such experiences of malfunction by engaging in a kind of ‘hypothesis generation,’ in which they tried to figure out potential reasons for the failure, such as programming errors or problems connecting to the network. Thus, our findings indicate that malfunctioning may trigger (more) in-depth reflection about robot-inherent processes, which, in turn, might demystify Pepper’s humanoid appearance in the sense of ‘judging the book by more than its cover.’ Overall, such reflection might negatively influence its perceived human-likeness. Prior literature has so far yielded mixed findings regarding the possible consequences of malfunctioning for perceived human-likeness [152]. Our finding of potential demystification through in-depth reflection about the reasons for malfunctioning offers a possible explanation for this ambiguity. In particular, we argue that such in-depth reflection requires a certain level of knowledge about the functionality of robots and technology in general. As individuals differ in their levels of such knowledge, these differences would be reflected in corresponding interpersonal differences in susceptibility to robot ‘demystification’ and, ultimately, in decreased perceptions of human-likeness. We thus encourage future research to investigate the potentially moderating influence of robot- and technology-related knowledge on the relationship between robot failure/malfunctioning and perceptions of human-likeness.
5.3 Personal Human-Robot Interaction Experiences and Acceptance at Work
Again, results from the picture ranking task confirmed that employees differed in their acceptance of Pepper at work depending on whether they had personally experienced interacting with it. In the qualitative interviews, respondents expressed different ways in which experiences with Pepper shaped their increased acceptance of it at work. First, the functionality of Pepper emerged as a key antecedent that, depending on how it was evaluated, potentially increased or decreased acceptance of the robot at work. For instance, some employees mentioned that a positive evaluation of the robot’s functionality induced the perception of benefits they could reap from its presence at their workplace. In contrast, the perception of a lack of functionality had the opposite effect: employees saw no (personal) benefit from the robot’s presence at the workplace, which, in turn, reduced acceptance. This finding is in line with prior literature that has conceptualized functionality as a key antecedent of technology acceptance, such as the TAM [22] and the UTAUT [23], and, more specifically for social robots, the Almere model [100]. However, our participants also revealed complementary processes that increased their acceptance of the robot and counterbalanced the negative effect of a perceived lack of functionality on overall acceptance. Shedding light on these complementary processes in acceptance formation can, we believe, significantly contribute to fostering an understanding of how employees come to accept a social robot at their workplace.
Accordingly, many participants referred to their experiences with the robot as having reduced the uncertainty associated with it and thus having positively influenced their acceptance of it. Furthermore, some employees expressed concerns that the robot might play a role in the organization’s increasing automation. However, they also described that this view changed in response to their experiences, leading them to recognize that their new ‘robot colleague’s’ role was to augment rather than replace them. Ultimately, this process resulted in increased acceptance of the robot. These findings can be explained by uncertainty reduction theory, which originates in human communication research and posits that greater uncertainty about another person diminishes an individual’s willingness to accept and be around that person [122, 123]. Initial and ongoing interactions with this person thus serve as an opportunity to reduce uncertainty and can, therefore, indirectly increase acceptance [122, 123]. Although these mechanisms were originally proposed for interaction among humans, scholars have pointed to the possible transferability of these social responses to interactions between humans and technology [17]. Specifically, studies suggest such acceptance-increasing effects of uncertainty reduction towards different technological “interaction partners,” such as COVID-19 tracing apps [153], conversational agents [154], and mobile government security response systems [155]. In the domain of social robotics, prior research has found that uncertainty reduction increased children’s trust in a focal robot [156] as well as that of visitors to a public service center [157].
However, regarding acceptance of social robots, existing HRI research has hinted at potential differences in the effects of uncertainty reduction, depending on whether the focus is on the social aspects or the collaborative nature of the interaction. HRI research focusing on the collaborative nature of HRI (e.g., at work) has argued for positive effects of uncertainty reduction on acceptance [129‐131]. In contrast, research focusing on social aspects has argued that a certain level of uncertainty, as an attribute inherent to social interaction, can be a characteristic that—to some extent—is actually desirable in anthropomorphized agents, and that its decrease may negatively affect their acceptance [134‐137]. To the best of our knowledge, the present study is the first to investigate possible effects of uncertainty reduction on employees’ acceptance of a social robot in a real work setting. In our specific case, the robot was implemented into a work setting—hence, a collaborative setting—where its tasks were fulfilled through verbal interaction—hence, it acted as a social agent. Given this combination, the ambiguous empirical evidence regarding a positive [129‐131] or negative [134‐137] influence of uncertainty reduction on robot acceptance raised the question of its net effect in a situation where both aspects are combined. By detecting positive effects of uncertainty reduction on acceptance of the robot in a setting where these two characteristics overlap, the present study contributes to existing HRI research by indicating that the potential positive effects of uncertainty reduction due to the collaborative setting may outweigh the potential negative effects caused by demystification of a social agent.
This finding provides fruitful ground for future research, which could probe the robustness of this net uncertainty reduction effect on workplace acceptance of social robots using other robots with different types of functionalities and anthropomorphic features. Such research would expand existing technology [21, 22] and social robot [100] acceptance models by enhancing scholars’ understanding of the influence of uncertainty reduction triggered by personal experiences with a focal robot.
Furthermore, the mechanism identified in this study—increased acceptance due to uncertainty reduction triggered by personal interaction experiences—suggests intriguing considerations regarding the integration of large language models (LLMs) into social robots. Integrating LLMs arguably enhances the interactive capabilities of social robots and may therefore impact their acceptance by users. Specifically, scholars have argued that individuals who begin to interact socially with a technology may experience uncertainty, as they lack adequate interaction scripts and experiences for such situations [19, 158]. Moreover, existing research has shown that users’ perceptions of similarity between interactions with humans and with AI—as would be the case with LLMs—can reduce this uncertainty because it allows them to draw on known categories (i.e., human interaction schemes) [159, 160]. This, in turn, has been argued to increase trust [159, 161], a central antecedent of acceptance [128]. In conjunction with these prior findings on humans’ reactions, our empirical results—i.e., the positive effects of uncertainty reduction on workplace acceptance—imply that we may expect further increases in workplace acceptance of social robots if their interactive capabilities are enhanced through the integration of LLMs. However, as LLM integration was not part of the robot’s design when it was deployed by the focal organization in this study, this question remains to be addressed by future research.
5.4 Limitations and Future Research
As with any empirical work, the present study and its findings are subject to several limitations that must be considered; at the same time, these limitations provide promising avenues for future research. First, our setting consisted of a single organization that implemented a particular robot (Pepper), which limits external validity. However, we undertook several measures (e.g., ensuring participants’ anonymity) and collected data from different departments within the organization to increase internal validity. Moreover, we believe that this empirical setting lends our findings particular relevance for organizational practice, which often involves the introduction of a social robot within a single organization rather than across multiple, diverse organizations simultaneously. Nevertheless, future research could build upon and validate our findings in different organizations and industries and with different (social) robots.
Furthermore, prior literature has shown that reactions to technology [162] and several of the observed mechanisms, such as the ambition to reduce uncertainty [109], may be subject to cultural influences. Thus, we call upon scholars to build upon our findings and broaden the view toward cross-cultural considerations that may help isolate potentially culture-dependent influences. Second, the mechanisms proposed in this study to underlie the experience-driven differences in attitudes resulted from an analysis of qualitative data from semi-structured interviews. Although we believe in the power of qualitative analysis to detect such underlying mechanisms [163, 164] and thereby enrich the primary, quantitative data from the picture ranking task [165, 166], follow-up studies using large-scale quantitative data to statistically test the proposed mechanisms would be highly valuable. We thus encourage future research to build upon the insights of this study and quantitatively investigate the proposed influences of experiences with a social robot on employees’ attitudes towards this robot, as well as the potential mediating (e.g., uncertainty reduction) and moderating (e.g., technical knowledge) effects that this study’s findings tentatively point towards.
Finally, we deliberately chose not to collect socio-demographic information about our respondents to ensure their anonymity. While we believe that this reduced the threat of social desirability in respondents’ answers and thus increased the quality of the data used in this study, it represents a limitation regarding the generalizability of our findings. Previous literature has identified gender [59] and age [71] as influential factors in individuals’ attitudes toward social robots. Additionally, it is reasonable to assume that individuals’ attitudes towards a social robot being implemented at their workplace—and the effect of personal experiences on these attitudes—differ depending on job-related attributes, such as the overlap of individuals’ job content with the tasks given to the robot, or the time available to engage in interactions with the robot. Nonetheless, we believe that the present study provides valuable insights into the role of experiences with a social robot at work in general, which constitutes fruitful ground for further in-depth investigation. We therefore encourage future research to build upon our results and replicate this study in a larger company, providing the opportunity to analyze differences due to gender, age, and job position without threatening respondents’ anonymity.
6 Conclusion
We presented the results of an empirical study examining whether and how employees’ personal experiences with a social robot implemented in their work environment affect their attitudes towards this focal robot. A collaboration with a local service organization allowed us to make use of a kind of ‘natural experiment,’ in which the organization initially deployed the social robot Pepper for six weeks at its main site (one of a total of three sites, with 110 employees overall). At this site, Pepper was positioned in various places throughout each day, both in staff and customer areas, where it took over tasks that had previously been the responsibility of human employees (e.g., providing information to customers). Following the initial deployment period, we invited all company employees to share their assessments of the robot with us. We collected quantitative data from a picture ranking task, in which respondents were asked to rank ten pictures of different social robots according to their overall evaluation, perceived human-likeness, and acceptance at work, as well as qualitative interview data. We found that employees who had themselves interacted with Pepper during its deployment ranked it significantly more favorably regarding overall evaluation and acceptance at work. Further, this group of employees overestimated Pepper’s human-likeness, unlike employees without personal interaction experience with the robot. Results from a qualitative analysis of interview data indicated different experience-driven mechanisms that increased overall evaluation, perceived human-likeness, and acceptance at work.
Specifically, findings indicated the presence of a cute response [147], whereby the perception of the robot as cute spilled over into a more positive general evaluation, as well as the activation of an affect heuristic [118], leading to a more favorable overall evaluation through the experience of positive feelings (e.g., fun) in the presence of the robot. In contrast to prior literature, these affective reactions overshadowed perceptions of functionality, which our respondents did not mention in relation to their overall evaluation of the robot. Instead, our findings suggest that perceptions of functionality play a key role in perceiving a robot as human-like. While functionality had the potential to increase perceptions of human-likeness (especially when it exceeded expectations), experiences of malfunctioning led some respondents to reconsider the robot’s technological nature, thus demystifying it and ultimately decreasing its perceived human-likeness. Finally, we found that the reduction of robot-related uncertainty through personal experiences with a social robot considerably increases acceptance of the focal robot at work, despite the risk that uncertainty reduction demystifies the robot and weakens its status as a social agent. Summarizing the results of our quantitative and qualitative analyses, we conclude that familiarity with a robot breeds affinity, which is reflected in the ways different attitudes towards the robot are altered.
Taken together, we believe that the present study offers a valuable contribution to understanding employees’ attitudes towards a social robot that is being implemented into their work environment. Specifically, the study highlights the power of employees’ personal experiences as a key influence on forming beneficial attitudes. As the number of social robots deployed in such settings is constantly rising, these findings not only expand the growing body of literature on social robots in work settings but also offer important implications for organizations considering adopting such robots.
Declarations
Ethics Approval
The study received approval from the Ethics Committee of Trier University (No. 38/2022).
Informed Consent
Participants in the study were informed and consented that their participation would be completely anonymous and voluntary, that the resulting audio recordings and transcripts would not be shared with third parties (e.g., the organization), and that participation could be interrupted, terminated, or withdrawn at any time without giving reasons. Furthermore, all participants were informed that neither their participation in this study nor any decision to terminate or withdraw their participation would affect their employment by the organization in any form. Finally, participants explicitly agreed to the audio recording during the interviews.
Employment
None of the authors has to disclose recent, present, or anticipated employment by any organization that may gain or lose financially through publication of this manuscript.
Competing Interests
The authors have no financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.