Familiarity Breeds Affinity – How Personal Experiences Change Employees’ Attitudes Towards a Social Robot

Open Access · Published: 01-07-2025


Abstract

This article delves into the transformative power of personal experiences with social robots in workplace settings. It reveals how direct interactions with a social robot can significantly alter employees' overall evaluation, perceived human-likeness, and acceptance of the robot. The study highlights the role of affective responses, such as the 'cute response' and the 'affect heuristic,' in shaping employees' attitudes. It also explores how perceptions of functionality and uncertainty reduction influence attitudes towards the robot. The findings suggest that familiarity with a robot breeds affinity, offering valuable insights for organizations considering the adoption of social robots. The article combines quantitative data from a picture ranking task with qualitative interview data, providing a comprehensive understanding of the underlying mechanisms. This unique approach sets it apart from other studies and makes it a compelling read for those interested in the intersection of technology and workplace dynamics.


1 Introduction

With rapid technological advancements in social robots, companies are increasingly likely to implement them into their operations [1]. Furthermore, the social aspect of social robot technology opens up possibilities for employing robots in social and interactive domains that have long been reserved for human workers [2]. Consequently, the work environment is changing for employees who previously had little to no exposure to such technologies in their industry (e.g., hospitality, service) and, therefore, rarely have experience in working or interacting with social robots [3]. Although scholars broadly agree that individuals’ past experiences with social robots may significantly influence their attitudes towards social robots in general and reactions to them in subsequent interactions [4, 5], most empirical studies that have examined experience-driven mechanisms in social robotics research have either focused on non-work settings [4, 6–8] or have taken a broader stance by analyzing employees’ attitudes towards social robots as a technology in general [9–12]. In contrast, there is a surprising dearth of studies exploring the influence of direct personal experience with a specific social robot on employees’ attitudes towards this robot that is being implemented into their organization. Existing research has primarily analyzed mere exposure effects, according to which repeated personal experiences with a stimulus can lead to more positive attitudes towards it [13, 14], thereby providing first answers to the question of whether prior experiences with a social robot may impact people’s attitudes towards it [6]. However, with the growing significance of social robot technology in diverse work settings and increasing interaction points between human employees and social robots [2], employees’ experiences with social robots at work consist of more than mere exposure. Furthermore, to the best of our knowledge, what is missing so far is an in-depth investigation of the underlying mechanisms, that is, how prior experience can alter employees’ attitudes towards a specific social robot. Therefore, unresolved questions persist regarding whether and how employees’ personal experiences—gained through interactions—with one particular social robot in their work environment can affect their attitudes towards this robot.
Thus, the goal of this study is twofold: First, we investigate if there are differences in employees’ perceptions of and attitudes towards a specific social robot that is being implemented into their organization, depending on whether they directly gained personal experience by interacting with it or not. Second, we seek to shed light on mechanisms that underlie potentially emerging differences between employees with and without such personal experience by identifying possible experience-driven reasons for these differences. Our investigation of the reactions to a specific robot is guided by a focus on potential differences in relation to three constructs that the literature on human-robot interaction (HRI) has established as key antecedents of successful collaboration between humans and social robots: (1) overall evaluation, (2) perceived human-likeness, and (3) user’s acceptance of the robot. First, how people overall evaluate a robot, in general, has been identified as a significant antecedent of, for example, their willingness to collaborate with it [15] and safety perceptions [16]. Second, the perceived human-likeness of a robot plays a significant role, especially in people’s reactions to social robots, as the human-like aspects (e.g., appearance and behavior) of a robot can elicit human responses that go beyond technology-appropriate reactions [17, 18], a notion that brings challenges but also further opportunities to human-robot interaction [19], and has to be considered when implementing such technology at the workplace [20]. Finally, we include users’ acceptance of the social robot. Due to this study’s focus on a work setting with employees as robot users, we refer to the users’ acceptance as the robot’s acceptance at work. Following a long tradition of technology acceptance research that recognizes users’ acceptance as a central goal of technology implementation endeavors [21, 22], it is a critical success factor for the possibility of reaping potentially positive effects of robot technology in the workplace of the future [23].
We conducted a kind of natural experiment in cooperation with a local organization that runs three large canteens for students and university staff, where the social robot Pepper (Softbank Robotics) was implemented by the organization in one of the canteens for six weeks. Afterward, we invited all of the organization’s employees to participate in the collection of data to assess their reactions to the focal robot. As such, we asked them to rank ten pictures of social robots—including the robot implemented at their workplace—according to their overall evaluation, perceived human-likeness, and acceptance at work. To further illuminate their ranking choices, we additionally collected qualitative data regarding this ranking through semi-structured interviews. This empirical setup allows us to answer recent calls for studies with a more extended time period to gain experiences with a robot, qualitative approaches to investigate the role of experiences in attitude formation towards social robots, and to observe user reactions in their natural environment [6, 146]. Therefore, the results of this study provide valuable insights into the question of whether personal experience with a social robot implemented in one’s workplace can influence attitudes toward this robot. Furthermore, they shed light on the underlying processes contributing to attitude formation towards a social robot in the workplace and the potential differences therein.
The remainder of the paper is organized as follows. Section 2 encompasses, first, existing empirical literature related to the constructs observed in this study, namely previously empirically identified and influential factors on overall evaluation, perceived human-likeness, and acceptance at work. Second, it displays existing social psychological theories that seek to explain experience-driven changes in attitudes. Section 3 describes our research method, including the setting, collection, and analysis of the quantitative and qualitative data used in this study. Section 4, first, reports the results of the quantitative analysis of differences between employees with and without personal experiences with the robot, and second, it displays the results of the qualitative analysis that sought to identify mechanisms underlying the quantitative differences. Finally, the discussion of the results, limitations, and future research implications can be found in Sect. 5.
Previous HRI literature has identified various antecedents of individuals’ overall evaluation of a robot, perception of the robot’s human-likeness, and acceptance of the robot. Section 2.1 provides an overview of empirical research identifying attributes for these dimensions on both the robot side and the human side. On the robot side, attributes are summarized under the categories of behavior, design, and functionality. On the human side, attributes are categorized as situational (i.e., varying over time) or dispositional (i.e., stable over time). While these empirical findings significantly contribute to the understanding of attitude formation towards social robots at work, questions regarding the mechanisms of how experiences gained through one’s own interactions with a social robot may alter individuals’ attitudes towards a robot at their workplace remain understudied [6–10]. Section 2.2 provides an overview of existing applications of mere exposure theory [13, 14] in HRI research and subsequently introduces different social psychological theories describing mechanisms of attitude changes through the routes of affect, cognition, and behavior. While these theories show that social psychology offers a wide range of possible mechanisms through which personal experiences may, in general, effectuate changes in attitudes, it is largely unknown how these potential effects may unfold in the specific research setting of this study, that is, individuals personally experiencing a social robot at work. However, these theories provide a fruitful foundation for the explorative investigation of such mechanisms through the empirical approach of this study.

2.1 Empirical Evidence Regarding Antecedents of Robot-Related Attitudes

2.1.1 Factors Influencing the Overall Evaluation of a Social Robot

Central to the overall evaluation of a given stimulus (e.g., a robot) is the construct of valence, which originates in Kurt Lewin’s field theory [24] and describes how individuals assess a given stimulus in a holistic way (positive/negative), thus characterizing it as “something that pulls one towards or pushes one away [from the stimulus]” [25]. Later on, appraisal literature expanded on this construct by highlighting different levels of valence, such as micro-valence and macro-valence [26, 27]. While macro-valence resembles the one-dimensional conceptualization (positive/negative), micro-valence decomposes this overall evaluation of the stimulus into smaller parts, such as, for example, its pleasantness or goal conduciveness [26]. Although this micro-perspective helps assess the underlying mechanism of ‘mixed feelings’ [26], social psychology scholars agree upon the macro-perspective’s pivotal relevance for numerous intentional and behavioral outcomes [28–30]. Consequently, prior literature has highlighted the importance of macro-valence for overall evaluation results in human-robot interaction [31]. However, terminology is inconsistent, as the very same positive or negative overall evaluation is sometimes labeled as, for example, general attitudes [4, 32], affective attitudes [5, 33], or emotional responses [34] towards a robot. Despite these inconsistencies, prior literature has identified various antecedents of a positive or negative overall evaluation of a robot on either the side of the focal robot or the human.
Robot Side
On the side of the robot, prior research has identified several attributes of robots that influence humans’ overall evaluation, broadly subsumed as behavior, design, and functionality. Regarding robot behavior, scholars have shown that allowing a robot to mimic human-like side-by-side walking can positively influence its evaluation [35]. Similarly, human-like interaction behavior, such as showing politeness [36] and empathy [37, 38], can contribute to a more positive evaluation. Furthermore, a recent study by Kang and colleagues [39] found that robots that give feedback are evaluated more positively than robots giving no feedback at all, largely independent of whether the feedback is provided in a human-like or machine-like manner (e.g., characteristics of acoustic feedback). Concerning robot design, the uncanny valley theory [40] suggested early on that an increasing number of human-like traits would not necessarily lead to a better overall evaluation but that it might trigger a switch from positive to negative evaluation once the robot reached a certain threshold degree of human-likeness [40]. Furthermore, evidence from Kwak and colleagues [41] indicates that object-based robots are evaluated more positively than organism-based robots when focusing on a functional task. Finally, concerning functionality, prior literature has shown that increased perceived usefulness can positively impact overall evaluation [42]. In contrast, a lack of reliability of the robot’s components (e.g., malfunctioning) may have the opposite effect [43]. Among the three previously mentioned attributes (behavior, design, and functionality), existing HRI research has identified individuals’ perception of functionality to be particularly important for the evaluation of a robot in a functional or work context [44, 45]. Specifically, scholars have found that in settings where the robot is assigned a given task, users’ perception of robot functionality regarding the fulfillment of the task outshines behavior and design in its contribution to overall evaluation [44–47]. Thus, existing literature indicates that, in a work context, positive perceptions of robot functionality are the key determinant of a positive overall evaluation of the robot.
Human Side
Not only the robot itself but also the human actor in HRI may possess attributes that influence the human actor’s overall evaluation of a given robot. These attributes can be broadly categorized as situational (i.e., changing over time) or dispositional (i.e., constant over time). Regarding situational attributes, HRI literature has yielded ambiguous insights into how prior HRI experience can influence a robot’s overall evaluation. While experiences with the prototype of a given robot have been identified to positively impact the overall evaluation of its further developed version [48], increasing experiences solely with the finalized version of a robot did not exhibit the same positive effect [42]. Taken together, these studies suggest a need to investigate the experiences in detail to understand why and how experiences with a robot alter its overall evaluation. Additionally, HRI research has highlighted that these experiences are compared against expectations toward the interaction and that the confirmation or lack of confirmation of these expectations plays a crucial role in attitude formation [49, 50]. Further, extant HRI literature has identified antecedents of expectations in this context, such as, for example, previous interactions with social robots [6, 51, 52] or (fictitious) media representation of robots [53, 54], and documented that these antecedents may differ between individuals, and may change over time. Thus, individuals may enter interactions with a social robot equipped with different levels of expected interaction quality; at the extreme, they may even expect this interaction quality to be similar to interaction with humans [55]. These expectations toward the interaction, however, are compared against actual experiences, positively altering an individual’s evaluation of the robot if they are met or negatively if they are not met [52, 56, 57]. This mechanism is captured in the recently developed “social robot expectation gap evaluation framework” by Rosén and colleagues [6, 58]. However, given possible differences between user reactions in laboratory and real settings, as well as the influence of the interaction context on expectations [53, 54], further work on the interaction of expectations and experiences in real settings is still needed [6]. Concerning dispositional attributes, Stafford and colleagues have investigated the influence of users’ gender, finding that men tend to have a generally more positive evaluation of robots than women [59]. Furthermore, scholars have argued that users’ cultural background can influence, first, expectations toward robots prior to the interaction, second, experiences made during interaction with robots, and, third, the overall evaluation of a robot [60].

2.1.2 Factors Influencing Perceived Human-Likeness of a Social Robot

Perceiving a social robot as more or less human-like results from a process in which the robot is anthropomorphized [19]. However, existing studies often lack a clear definition of anthropomorphism or provide no explicit definition in the specific study context [61]. While a seminal study by Epley and colleagues describes anthropomorphism as the attribution of human motivations and intentions to the behavior of non-human entities [62], others view anthropomorphism as a multidimensional construct that encompasses various aspects (e.g., appearance and movements) and describes a person’s overall assessment of a robot’s human-likeness [63–66]. Indeed, several studies have documented the presence of anthropomorphism in settings without observable behavior that could be interpreted as expressing human-like intentions and motivations (or lack thereof) [67, 68]. Therefore, this study aligns with the broader definition of anthropomorphism as a multidimensional construct of perceived human-likeness [63–65]. Thus, we interpret anthropomorphizing a robot and perceiving it as human-like as strongly aligned, with anthropomorphism being the process itself and perceived human-likeness being the result of this process [63]. As such, prior research has argued and found empirical support for numerous factors on both sides of the dyadic relationship between a robot and a human actor that can influence a person’s tendency to perceive a given robot as either more or less human-like (for a detailed review, see [69]).
Robot Side
On the side of the robot, extant literature has empirically identified two of the three previously mentioned robot attributes (behavior, design, and functionality) as the main strands of features that may increase the human-likeness attributed to it: behavior and design [64, 65, 69–71]. Concerning behavior, a robot is generally considered more human-like if its behavior aligns with human behavioral characteristics, for example, the way it moves [62]. Consequently, a robot is considered more human-like when it can achieve smoother and slower movements through higher degrees of freedom (the number of axes in which movable parts can move), as these are more similar to natural movements performed by humans [69, 72]. For interactive and social behavior, this means that the robot is perceived as more human-like if it allows mutual [70] and predictable [73] interactions and if it signals a certain warmth during interactions [74, 75]. Additionally, the perceived human-likeness increases with the perception of behavior that can be acquired through socialization, such as politeness [76] and higher competencies in verbal [77, 78] and non-verbal [79] communication. Regarding robot design, visual appearance has so far received most of the attention and comprises several aspects itself: Phillips and colleagues [80] analyzed the contribution of 18 different visual features (e.g., arms, eyes, wheels) to a human-like perception of the robot and discovered a strong influence of the robot’s surface look and body. This aligns with empirical observations that suggest a stronger influence of the presence of a human-like body compared to other visual features in terms of anthropomorphism [67]. Besides visual appearance, a robot’s haptic appearance and feedback, such as different handshake styles (e.g., passive or active motion performed by the robot) [81, 82], as well as the presence of a human name given to the robot [95], can also influence perceptions of human-likeness [65]. Lastly, the presence and the characteristics of a robot’s voice can impact perceived human-likeness. First, research has shown that the presence of voice can lead to the anthropomorphizing of technology, while, in contrast, the absence of it can even lead to dehumanization, a process in which the source is perceived as more machine-like than human-like [83]. Thus, robots incorporating a voice as an output channel tend to be perceived as more human-like compared to robots without voice output [84]. Second, concerning voice characteristics, a voice mimicking human voices increases perceived human-likeness [85]. Furthermore, technology incorporating female-sounding voices seems more likely to be anthropomorphized [86]. In sum, while existing literature has identified several antecedents of increased perceptions of human-likeness in a robot’s behavior and design, questions regarding the role of perceived functionality in this regard remain, to the best of our knowledge, largely unanswered.
Human Side
Even though the robot may incorporate distinct levels of human-like features, prior literature has identified several individual factors on the human side that can influence the overall likelihood of people to “see human” [62] in a robot, which can, similarly to attributes altering overall evaluation, be categorized as either situational or dispositional. Regarding situational attributes, a meta-analysis by Blut and colleagues revealed that higher competence perceptions in interacting with robots and negative attitudes towards robots increased the tendency to anthropomorphize a robot, whereas a need for interaction decreased it [71, 93]. These interpersonal differences in the tendency to anthropomorphize non-human agents have been argued to exist irrespective of the robot’s design as such [88, 89]. Furthermore, anthropomorphism is increased if individuals perceive the robot as an in-group member during the interaction situation [96]. Interacting with the robot over an extended period, in turn, may negatively influence anthropomorphism. Specifically, interaction with a robot over an extended period may shed light on machine-like cues of the robot that separate it from humans [97], such as having to press a button before being able to speak to it [98]; such cues may initially have been outshined by the robot’s social cues and the novelty of social interactions with robots within the first few interactions [62, 85]. Besides these situational attributes, existing HRI research has also identified several dispositional attributes influencing individuals’ tendency to anthropomorphize a robot. Besides individuals’ cultural background [87], scholars have found increasing age [71] and an internal locus of control [94] to negatively influence individuals’ tendency to anthropomorphize, whereas higher scores on the personality traits extraversion [90], agreeableness [91], and openness to experience [92] positively influence it.

2.1.3 Factors Influencing Employees’ Acceptance of a Social Robot at Work

Several models of technology acceptance have been used to assess the acceptance of social robots, such as the technology acceptance model (TAM) [21], the unified theory of acceptance and use of technology (UTAUT) [22], the diffusion of innovation theory [99], and the Almere model [100]. However, due to the novelty of social robot implementation at work, there is a significant dearth of studies investigating factors that influence the acceptance of a focal social robot at a workplace. Specifically, most of the existing empirical literature in the context of social robotics at work either focuses on the acceptance of social robots as a general category of technology [for detailed reviews, see 4, 101] rather than on antecedents of the acceptance of a specific robot in a work environment, or isolates antecedents of individuals’ acceptance of one social robot in a non-work setting. However, the studies isolating antecedents of individuals’ acceptance of a focal robot—though in non-work settings—have provided important insights into potentially influential characteristics for acceptance on the robot side, whereas research on employees’ attitudes toward social robots in general points to factors on the human side that may alter acceptance of a specific robot, too.
Robot Side
Concerning robot characteristics, humans’ acceptance of a robot at work is, similarly to overall evaluation, influenced by the robot’s behavior, design, and functionality. Regarding behavior, Babel and colleagues found that participants preferred a robot that provided task-oriented communication rather than small talk [78]. Furthermore, acceptance of robots that engage in small talk is higher if the robot fixes its gaze on the human and the human initiates the conversation [81]. In contrast, in the case of task-oriented communication, a proactive robot is preferred [81]. Likewise, acceptance of a robot is higher if it can act independently of human employees [15] and thus be useful in the work setting on its own. This also relates to perceptions of functionality, which, in line with general technology acceptance literature [21], were found to increase the acceptance of a social robot at work [102, 103]. Finally, prior studies have examined the effects of design elements on the acceptance of a social robot in the workplace. Both social cues [104] and the resulting perceptions of humanoid appearance [15] and anthropomorphism [105] can positively impact the acceptance of the robot.
Human Side
On the side of humans, literature has identified situational as well as dispositional attributes that can influence individuals’ acceptance of a social robot. Regarding situational attributes, individuals who have more trust in robots in general and perceive robots as safe have been considered as likely to accept robots at work [104]. Additionally, higher technology readiness [15] and beliefs about the positive societal impact of technology [105] have been found to increase individuals’ acceptance of social robots at work. On the contrary, privacy and data protection concerns [104], perceptions of job-related threats from robots [103], robot bias [15], and insecurity about the impact of technology on society and the world [106] can negatively influence acceptance of social robots at work. Concerning dispositional attributes, scholars have highlighted that increased trait anxiety might hinder individuals from accepting social robots at work [105]. Furthermore, HRI research has pointed out cultural differences in social robot acceptance at work [107]. By comparing the cases of Egypt and Malaysia, Abdelhakim and colleagues [108] found that differences in Hofstede’s cultural dimensions [109] can significantly moderate different processes underlying the acceptance of social robots by fast-food workers as they were proposed by the UTAUT model [22]. For example, they found that uncertainty avoidance positively moderates the link between effort expectancy and behavioral intention [108].

2.2 Theoretical Perspectives on Experientially-Based Changes in Attitudes

2.2.1 Changes in Attitudes Due to Mere Exposure

Research in social psychology has extensively addressed how attitudes toward a stimulus can change due to experiences with the focal stimulus [110]. In a seminal paper, Zajonc (1968) posited positive changes in attitudes resulting from mere exposure to a stimulus [13], an effect that forms the basis of mere exposure theory and that has been empirically documented in prior HRI studies [97, 111, 112]. However, a recent study by Fiolka and colleagues (2024) casts doubts on the universal validity of strictly positive attitudinal changes resulting from mere exposure effects and instead suggests the need for a more nuanced perspective. Specifically, they found construct-specific differences in the mere exposure effect. After seeing a social robot more frequently, participants reported less eeriness—i.e., a change regarding a negatively valenced attitude—but also less human-likeness—a change regarding an unvalenced attitude—, thus indicating possible changes in unvalenced attitudes in addition to changes in valenced attitudes [112]. Furthermore, a recent experimental study by Rosén and colleagues suggested that rather than mere exposure, it might be participants’ (past) experiences with robots in general that strongly predict attitudes towards a focal robot [6]. These ambiguous findings regarding the effects of exposure to robots indicate that building upon the mere exposure effect alone might be insufficient to capture the complexity of underlying mechanisms of attitude change towards a social robot. For example, while past exposure to social robots in general may help individuals to build a mental model of social robots [6], thus potentially altering individuals’ attitudes towards social robots via a cognitive route, it may be insufficient to explain attitude changes to a focal robot, which may develop over time and can potentially rely on affect, too. Furthermore, experiences made in HRI consist of more than (passive) mere exposure but rather contain active interaction, thus raising the question of how specific aspects of the interaction beyond the mere exposure can shape and change attitudes towards a social robot.

2.2.2 Changes in Attitudes Through Affect, Cognition, and Behavior

Despite the technological nature of social robots, human-like characteristics of this technology blur the boundaries between human interaction and interaction with a social robot [113, 114]. This core feature of social robot technology implies that social psychological theories, initially developed for human interactions, may also apply to the field of human-robot interactions [17]. To understand how attitudes can change through personal experiences, social psychology has developed a fine-grained view by decomposing attitudes into affect-based, cognition-based, and behavior-based attitudes [115]. Changes in the respective domain regarding a stimulus (e.g., affect) can cause changes in the interrelated attitudes towards this stimulus (e.g., affect-based attitudes) [103]. Still, they may also spill over onto attitudes that are initially rooted in a different domain (e.g., cognition-based attitudes) [116, 117]. Based on this decomposition of attitudes into affective, cognitive, and behavioral components, various theories facilitate understanding of human-human interaction with respect to the three different components. In the past, HRI research, particularly in the field of social robotics, greatly benefited from exploring the transferability of social psychological theories to HRI. We, therefore, believe that the following social psychological theories on attitude change could enable us to develop a more fine-grained view on the changes in attitudes towards a focal social robot and thus offer valuable contributions to HRI literature.
First, regarding affect, studies have shown that positive affect associated with a stimulus can positively influence attitudes towards this stimulus, an effect referred to as “affect heuristic” [118]. If individuals experience positive emotions in the presence of a stimulus (e.g., a robot), their overall attitude towards the stimulus (e.g., the robot) becomes more positive [118]. Prior research has shown that this affective route is particularly likely to be triggered if the individual possesses limited knowledge about the type of stimulus [119]. Thus, for individuals with limited experience in human-robot interaction, this might mean that the valence of emotional responses during HRI with a specific robot (e.g., enjoying vs. disliking the interaction) could exert a same-directional effect on attitudes towards the robot.
Second, from a cognitive perspective, attitudes can be characterized as object-based evaluations [120]. To explain how these object-based evaluations can be changed, the Elaboration Likelihood Model (ELM) posits that if provided with the necessary motivation and ability to process new information, individuals alter their cognition-based attitudes according to the favorability of the latest information [121]. Applied to the field of HRI, direct experiences with a robot can significantly increase individuals’ information about the robot [98]. Therefore, direct experiences could allow them to re-evaluate the robot based on this new information. This process could then alter existing attitudes that rely on obsolete information or mere assumptions about the robot. In addition to replacing or altering existing information, obtaining new information about the robot can also reduce uncertainty regarding it (e.g., its abilities, functionality, and limitations). The role of individuals’ perception of uncertainty during interaction with a robot has been examined extensively in HRI research. In new interaction situations, such as initial contact with social robots [53], individuals often experience uncertainty regarding their interaction partner. Subsequently, they seek to reduce this uncertainty by collecting information about their interaction partner [122, 123] to predict future behavior [124]. Besides collecting information through interaction, prior research has found that, if individuals do not possess adequate scripts for interaction with a given technology, perceiving human-like cues during interaction can also lead to the activation of interaction scripts from human interaction. This activation, in turn, can enhance the perceived predictability of the interaction partner’s behavior and thus further reduce uncertainty [125]. Past HRI research has identified various attitudinal consequences of uncertainty reduction through personal interaction with a robot, such as a reduced tendency to anthropomorphize the robot [50, 126], reduced feelings of social presence [55], the emergence of trust [127, 128], and altered acceptance [129–131, 134–137]. However, unlike for anthropomorphism, social presence, and trust, the literature on acceptance has yielded mixed results. As such, literature examining the effect of uncertainty reduction on acceptance while focusing on the collaborative nature of the relationship between human and robot (e.g., working together) has shown that reduction in uncertainty—hence, enhanced predictability of the robot’s behavior—can positively influence robot acceptance [129–131]. However, scholars focusing on the social aspects in HRI have argued that uncertainty during a social interaction with a robot can—to some extent—be desirable, as it aligns with the expectations evoked by its anthropomorphized design and keeps the (social) interaction engaging [132, 133]. Consequently, literature that focuses on the social nature of the relationship between human and robot has indicated that uncertainty reduction may also exert a negative effect on acceptance [134–137].
Finally, cognitive dissonance theory [138] can offer possible explanations for attitude changes through the route of behavior performed in previous HRI experiences. Cognitive dissonance theory suggests that conflicts in individuals’ attitudes (e.g., negative attitudes towards social robots) and actual past behavior (e.g., interacting with a social robot) create an internal tension that individuals try to resolve by aligning their attitudes accordingly [139]. In the context of individuals who have previously engaged in HRI, this could mean that they positively alter their attitudes towards the robot, and thus perceive it as “worthy” of interaction in order to justify their past behavior.

3 Methods

3.1 Empirical Design

In order to analyze our research question, we exploited a kind of natural experiment, enabled by a collaboration with a local organization. The organization employs a total of 110 staff members. It provides various services for students at three different locations (‘sites’), including the operation of canteens, housing, and other advisory services at each of the three sites. The organization sought to potentially implement a social robot (‘Pepper’ by Softbank Robotics). It announced its implementation—at the largest of the three sites—to all employees six months in advance, along with the information that this implementation would be accompanied and evaluated by a research team. Thus, the subsequent deployment of the social robot for six weeks at the largest of the three sites, where approximately 50% of the employees work, represented what we interpret here as a stimulus with respect to employees’ robot-related attitudes. Further, while all employees received the announcement of Pepper’s implementation, only employees who worked at this site could directly interact with the robot during their work shifts. They thus constitute what we refer to as the ‘treatment’ group. Employees who worked at the other two sites had no opportunity to see the robot or to interact with it; they form what we refer to as the ‘control’ group. During the six-week-long deployment period, Pepper was placed at various locations at the focal site over the course of each workday and initiated a conversation whenever it detected a human nearby (closer than three meters). At the beginning of each day, Pepper was positioned next to the time clock. For employees at the deployment site, it was compulsory to check in at the time clock when they started their shift (see Fig. 1a). Thus, each member of the ‘treatment’ group encountered Pepper in the morning and interacted with the robot—at least at a baseline level—over the course of the deployment period. Members of the ‘control’ group, in turn, work at sites located elsewhere in the same city and therefore had no occasion to encounter Pepper, not even by mistakenly entering the deployment site. Later each day, Pepper was positioned in the employee break room (see Fig. 1b). During mealtimes, Pepper was deployed alongside employees in customer-facing roles in the canteen (see Fig. 1c for an example), with specific use cases varying daily (e.g., asking for customer feedback and presenting the day’s meals). Table 1 provides a comprehensive overview of Pepper’s locations, abilities, and tasks throughout the day during the deployment period.1
Fig. 1
Pictures of the robot deployment
Table 1
Overview of Pepper’s deployment

Time | Place | Function | Abilities 1

Constant on each day of the deployment period:

06:00 a.m. – 08:30 a.m. | Time clock | Welcome arriving employees
• Greeting employees with “Good morning” as they approached the time clock.
• Inquiring about the quality of their sleep and subsequently engaging in interaction based on their response.
• Engaging in casual small talk.

08:30 a.m. – 11:30 a.m. | Employee break room | Entertain employees
• Proposing an interaction when Pepper detects a human.
• Extensively introducing itself with a short presentation. ◘
• Playing iconic rock songs (e.g., Thunderstruck) while mimicking playing an air guitar. ◘
• Playing mini games with employees (e.g., memory). ◘
• Dancing to music when employees request it. ◘
• Telling jokes. ◘
• Engaging in advanced small talk.

11:30 a.m. – 02:00 p.m. (daily rotation in irregular sequence 2):

Canteen entrance | Welcome arriving guests
• Providing information about the daily canteen offerings. ◘
• Offering the possibility to take a selfie with Pepper. ◘

Food counter | Entertain guests during waiting time
• Offering the opportunity to play games with Pepper or to tell a joke. ◘
• Dancing to music or playing iconic rock songs. ◘

Self check-out | Provide help with self-checkout
• Providing information about the meal pricing. ◘
• Providing information about the self-checkout procedure. ◘

Dining area | Entertain guests at the tables
• Offering the opportunity to play games, tell a joke, and dance. ◘
• Extensively introducing itself with a short presentation. ◘

Dishes return | Explain the process of dish return
• Providing information about how to arrange dishes when returning them. ◘

Exit | Farewell and soliciting feedback
• Proactively bidding farewell to guests as they enter the exit area.
• Asking for a rating of the canteen experience (1 to 5) on the display.

1 All abilities could be activated through voice interaction with Pepper. ◘ indicates abilities that could also be activated through buttons on Pepper’s touchscreen display.
2 The ability to engage in casual small talk was available at any point.

3.2 Data Collection and Sample

Subsequent to the robot’s deployment period, employees from all sites were invited to share their experiences with the research team by participating in a picture ranking task embedded into a collection of qualitative data (i.e., semi-structured interviews). Due to the highly fragmented organizational structure, the disclosure of even basic socio-demographic information in combination with their experiences with the robot (e.g., at a specific place within the company) could have resulted in the possibility of identifying a respondent. Given the natural setting and the existing relationship of our respondents with the company as their employer—which might have influenced employees’ answers if they could be traced back to them—we wanted to ensure full anonymity for our respondents. Therefore, we took several measures: First, we chose not to collect socio-demographic information (e.g., age, tenure, functional role)2. Second, we scheduled data collection without asking for the disclosure of personal information (e.g., names). Third, the data collection was double-blind and conducted by individuals hired explicitly for this task who had had no prior contact with the company or its services. Finally, the data collection was arranged such that respondents would not encounter their colleagues or individuals from the research team. During data collection, we provided participants with a picture ranking task, that is, ten standardized pictures of different social robots. We asked them to rank these robots according to their (1) overall evaluation, (2) human-likeness, and (3) acceptance at work. To obtain the pictures, we followed the approach of Phillips and colleagues [72] by using the robot picture database ABOT3. Only robots with a complete body were used for this purpose, with one robot per decile of the overall human-likeness scale of ABOT, resulting in ten pictures of social robots with a distinct overall human-likeness score and encompassing the entire range of the scale4. To increase image comparability, all pictures were resized to the same size and converted into black and white [140]. We showed participants the images three times and asked them to rank them on semantic differentials with varying endpoints. First, we asked them to rank the robots according to their overall evaluation from “most negative” to “most positive.” Subsequently, we shuffled the pictures and asked them to rank the robots according to their human-likeness from “least human-like” to “most human-like.” Finally, to capture participants’ acceptance of the robots at work, we asked them to rank the robots from “least like to have that at work” to “love to have that at work.” Fig. 2 shows the complete instrument, including pictures of the robots. After the picture ranking task, we collected qualitative interview data regarding their experiences with the robot to further shed light on their ranking of the Pepper robot.
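For illustration, the selection and standardization of the stimuli can be sketched in a short script. This is a sketch under assumptions: the ABOT scores are read from a hypothetical local CSV export (the column names robot, human_likeness, and full_body are ours), and the target image size is chosen arbitrarily here; the paper only specifies one full-bodied robot per decile, identical sizes, and black-and-white conversion.

```python
# Illustrative sketch of the stimulus preparation: pick one full-bodied robot
# per decile of the ABOT human-likeness score and standardize the pictures.
import pandas as pd
from PIL import Image

abot = pd.read_csv("abot_scores.csv")   # assumed export: robot, human_likeness (0-100), full_body
abot = abot[abot["full_body"] == 1]     # keep only robots with a complete body

# Assign each robot to a decile of the human-likeness scale and sample one per decile.
abot["decile"] = pd.cut(abot["human_likeness"], bins=range(0, 101, 10), labels=False)
selected = abot.groupby("decile").sample(n=1, random_state=1)

# Standardize the ten pictures: identical size, converted to grayscale ("black and white").
for robot in selected["robot"]:
    img = Image.open(f"images/{robot}.jpg").convert("L").resize((600, 600))
    img.save(f"stimuli/{robot}_bw.png")
```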
Fig. 2
Measurement instrument for the picture ranking task
Table 2
Overview of respondents
Code | Human-robot interaction (yes/no) | Sex | Length (min) | Pepper’s rank1: Overall evaluation2 / Human-likeness / Acceptance at work
IN1 | No | male | 15:48 | 10th / 4th / 10th
IN2 | Yes | female | 32:37 | 10th / 8th / 10th
IN3 | Yes | female | 27:37 | 8th / 6th / 9th
IN4 | Yes | female | 15:54 | 9th / 5th / 8th
IN5 | No | female | 17:06 | 6th / 5th / 5th
IN6 | No | female | 22:15 | 6th / 5th / 7th
IN7 | No | female | 14:21 | 5th / 6th / 5th
IN8 | Yes | female | 20:20 | 5th / 6th / 6th
IN9 | Yes | male | 29:10 | 9th / 5th / 10th
IN10 | Yes | male | 15:38 | 9th / 4th / 3rd
IN11 | Yes | female | 21:57 | 10th / 10th / 10th
IN12 | No | female | 14:01 | 10th / 7th / 8th
IN13 | No | female | 22:30 | 9th / 1st / 5th
IN14 | Yes | male | 18:55 | 9th / 5th / 6th
IN15 | No | female | 18:22 | 9th / 6th / 9th
IN16 | No | male | 16:10 | 5th / 5th / 6th
IN17 | No | male | 17:53 | 5th / 5th / 7th
IN18 | No | female | 21:51 | 8th / 5th / 8th
IN19 | Yes | female | 30:01 | 6th / 5th / 6th
IN20 | Yes | male | 25:47 | 7th / 5th / 7th
IN21 | Yes | female | 25:49 | 10th / 5th / 10th
IN22 | No | male | 16:45 | 6th / 5th / 4th
IN23 | No | female | 26:03 | 8th / 5th / 6th
IN24 | Yes | male | 25:05 | 9th / 7th / 8th
IN25 | No | male | 18:52 | 5th / 5th / 4th
IN26 | No | male | 22:50 | 6th / 5th / 6th
IN27 | Yes | female | 26:38 | 8th / 8th / 7th
IN28 | No | female | 19:25 | 7th / 6th / 8th
IN29 | Yes | female | 26:51 | 7th / 5th / 7th
IN30 | No | female | 21:40 | 2nd / 6th / 5th
IN31 | No | female | 16:26 | 6th / 5th / 4th
1 A higher rank indicates that the respective attribute was ranked more favorably for Pepper
2 A rank of six (and higher) indicates a (stronger) positive evaluation, a rank of five (and lower) indicates a (stronger) negative evaluation
Approximately 30% of the company’s staff from the different sites participated in our data collection. In line with our research question, we categorized them based on whether they had personally interacted with Pepper (N = 14) or not (N = 17). The duration of the data collection varied by participant, ranging from 14 to just under 33 min. Table 2 provides an overview of the respondents and their rankings of Pepper on the different dimensions.

3.3 Data Analysis

The data analysis of the present study employs a mixed methods approach by enriching the quantitative data of the picture ranking task with qualitative data for a subset of respondents—those respondents who had interacted with Pepper prior to answering the picture ranking task—in order to shed light on experience-based mechanisms underlying the formation of robot-related attitudes. As the task to rank the different robots according to their overall evaluation, human-likeness, and acceptance at work produced ordinal rank data, we used SPSS v28 to conduct Mann-Whitney U tests to assess differences in the constructs depending on participants’ contact with Pepper [142]. We utilized Rosenthal’s approach to calculate the standardized effect size r by dividing the z-scores by the square root of our sample size [143]. Furthermore, we calculated Spearman’s rho to illustrate the relationships between the three constructs [144]. For the qualitative data (interviews of the 14 employees who reported having themselves interacted with the robot), we inductively coded the interviews with MAXQDA 2022 and analyzed them using thematic analysis to find recurring themes within participants’ responses regarding their perceptions of and attitudes towards the robot [141]. Examples of inductively generated codes are experienced fun, cuteness, appearance, behavior, functionality, and “cool and funny.”
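For illustration, the core of this quantitative analysis can be sketched in a few lines of Python using SciPy instead of SPSS. This is a minimal sketch, not the authors’ analysis script: the ranks are taken from Table 2, the z statistic is recovered from the two-sided p-value (SPSS reports it directly), and the effect size follows Rosenthal’s r = z/√N; exact values may deviate slightly from the SPSS output depending on tie and continuity corrections.

```python
# Minimal sketch of the group comparison: Mann-Whitney U test on Pepper's
# overall-evaluation ranks (Table 2) plus Rosenthal's effect size r = z / sqrt(N).
import numpy as np
from scipy.stats import mannwhitneyu, norm

# Overall-evaluation rank of Pepper per respondent, split by whether the
# respondent had personally interacted with Pepper (data from Table 2).
interaction = np.array([10, 8, 9, 5, 9, 9, 10, 9, 6, 7, 10, 9, 8, 7])
no_interaction = np.array([10, 6, 6, 5, 10, 9, 9, 5, 5, 8, 6, 8, 5, 6, 7, 2, 6])

u_stat, p_value = mannwhitneyu(interaction, no_interaction, alternative="two-sided")

# SciPy does not return a z statistic, so we recover an approximate (negative)
# z from the two-sided p-value and convert it into Rosenthal's r.
z = norm.ppf(p_value / 2)
n_total = len(interaction) + len(no_interaction)
r = z / np.sqrt(n_total)

# Should be close to the first column of Table 3 (r of about -0.40).
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, r = {r:.2f}")
```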

4 Results

4.1 Picture Ranking Task

We employed Mann-Whitney U tests (Table 3) to examine differences in the evaluation of Pepper, comparing individuals with and without personal interaction with Pepper [126]. Results show significant differences in overall evaluation (r = − 0.40, p = 0.028) and marginally significant differences in acceptance at work (r = − 0.35, p = 0.054). However, the results do not indicate significant differences in perceived human-likeness (r = − 0.21, p = 0.239). All robots’ human-likeness scores were assessed through the ABOT database. According to this normed score, Pepper should be ranked fifth among all the robots included. To examine whether the groups over- or under-estimated Pepper’s human-likeness, we additionally compared each group’s ranking of Pepper to the normed rank of five. Results show that people who had themselves interacted with Pepper ranked it significantly higher in human-likeness than its normed rank of five (r = − 0.37, p = 0.041), whereas the ranking of those without such interaction experiences did not significantly differ from the normed rank (r = − 0.13, p = 0.483).
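A plausible reading of this normed-rank comparison is a one-sample Wilcoxon signed-rank test of each group’s human-likeness ranking of Pepper against the normed rank of five. The sketch below rests on that assumption (the paper does not state the exact test or how ties at the normed rank were handled), so it may not reproduce the reported values exactly; the ranks are taken from Table 2.

```python
# Sketch (under assumptions) of the comparison against the ABOT-based normed
# rank: one-sample Wilcoxon signed-rank test per group, plus the effect size
# r = z / sqrt(n) used in the main analysis.
import numpy as np
from scipy.stats import wilcoxon, norm

NORMED_RANK = 5  # Pepper's expected rank according to its ABOT score

hl_interaction = np.array([8, 6, 5, 6, 5, 4, 10, 5, 5, 5, 5, 7, 8, 5])
hl_no_interaction = np.array([4, 5, 5, 6, 7, 1, 6, 5, 5, 5, 5, 5, 5, 5, 6, 6, 5])

for label, ranks in [("interaction", hl_interaction),
                     ("no interaction", hl_no_interaction)]:
    # zero_method="wilcox" drops respondents who ranked Pepper exactly fifth;
    # other zero-handling choices would change the result.
    stat, p = wilcoxon(ranks - NORMED_RANK, zero_method="wilcox")
    z = norm.ppf(p / 2)
    r = z / np.sqrt(len(ranks))
    print(f"{label}: W = {stat:.1f}, p = {p:.3f}, r = {r:.2f}")
```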
Table 3
Differences in Pepper’s ranking in the picture ranking task
 
 | Personal interaction vs. no interaction: z / r / p | Personal interaction vs. normed score: z / r / p | No interaction vs. normed score: z / r / p
Overall evaluation | − 2.20 / − 0.40* / 0.028 | - | -
Human-likeness | − 1.18 / − 0.21 / 0.239 | − 2.05 / − 0.37* / 0.041 | − 0.70 / − 0.13 / 0.483
Acceptance at work | − 1.93 / − 0.35 / 0.054 | - | -
**p < 0.01, *p < 0.05, p < 0.1
Besides inter-group differences based on interaction experience (or lack thereof) with the robot, we also calculated Spearman’s rho (rs) to assess the relationships between the variables [144], displayed in Table 4. Results show a highly significant relationship between overall evaluation and acceptance at work (rs = 0.66, 95% BCa CI [0.384, 0.824], p < 0.001), as well as a significant relationship between perceived human-likeness and acceptance at work (rs = 0.37, 95% BCa CI [0.004, 0.645], p = 0.042). Interestingly, there is no significant relationship between perceived human-likeness and overall evaluation (rs = 0.08, 95% BCa CI [− 0.296, 0.428], p = 0.683).
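The reported correlations can likewise be sketched with SciPy’s bootstrap routine; the example below computes Spearman’s rho between overall evaluation and acceptance at work from the Table 2 ranks together with a bias-corrected and accelerated (BCa) bootstrap confidence interval. The point estimate should match the reported rs of 0.66 up to rounding, whereas the interval limits vary across bootstrap runs and need not coincide exactly with those reported.

```python
# Sketch of the correlation analysis: Spearman's rho with a 95% BCa bootstrap
# confidence interval (paired resampling of the 31 respondents from Table 2).
import numpy as np
from scipy.stats import spearmanr, bootstrap

overall = np.array([10, 10, 8, 9, 6, 6, 5, 5, 9, 9, 10, 10, 9, 9, 9, 5, 5, 8,
                    6, 7, 10, 6, 8, 9, 5, 6, 8, 7, 7, 2, 6])
acceptance = np.array([10, 10, 9, 8, 5, 7, 5, 6, 10, 3, 10, 8, 5, 6, 9, 6, 7, 8,
                       6, 7, 10, 4, 6, 8, 4, 6, 7, 8, 7, 5, 4])

rho, p = spearmanr(overall, acceptance)

res = bootstrap((overall, acceptance),
                lambda x, y: spearmanr(x, y)[0],
                paired=True, vectorized=False,
                confidence_level=0.95, method="BCa",
                n_resamples=9999, random_state=42)

ci = res.confidence_interval
print(f"rho = {rho:.2f}, p = {p:.3f}, 95% BCa CI [{ci.low:.3f}, {ci.high:.3f}]")
```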
Table 4
Descriptive information and correlations
 
 | Pepper’s rank: Min. / Max. / Mean / SD | 1 | 2 | 3
1. Overall evaluation | 2 / 10 / 7.39 / 2.03 | - | - | -
2. Human-likeness | 1 / 10 / 5.48 / 1.50 | 0.08 | - | -
3. Acceptance at work | 3 / 10 / 6.90 / 2.02 | 0.66** | 0.37* | -
**p < 0.01, *p < 0.05

4.2 Complementary Qualitative Data on Mechanisms

As the picture ranking task showed inter-group differences based on interaction experience (or lack thereof) with the robot, the subsequent analysis of the complementary qualitative interview data focuses on the 14 employees who reported having themselves interacted with the robot. It digs deeper and identifies possible reasons that link the observed ranking differences to specific features of individuals’ experiences. During the interviews, participants expressed thoughts that added meaning and context to the interpersonal differences observed in the picture ranking task in the robot’s overall evaluation, perceived human-likeness, and acceptance at work. Figure 3 displays the interaction experience-based factors extracted from the interviews that either increased or decreased overall evaluation, acceptance at work, and perceived human-likeness.
Fig. 3
Interaction experience-induced factors influencing overall evaluation, acceptance at work, and perceived human-likeness of the focal robot

4.2.1 Effects of Personal Human-Robot Interactions on Overall Evaluation

Participants expressed various ways in which their interactions with the robot had altered their overall evaluation of it, either positively or negatively. However, consistent with the results of the picture ranking task, most expressions indicated that personal interaction with Pepper had induced a more positive overall evaluation. Specifically, from the qualitative data, several reasons emerged that, according to participants’ accounts, might have given rise to the resultant evaluation.
Fun associated with the robot. In the interviews, several participants highlighted that they had enjoyed interacting with the robot because it was fun for them, as one employee described:
I was really curious to see how it would turn out, and I just had a lot of fun, especially in the mornings when arriving. [IN19]
Another employee expanded on this thought and pointed to the conversation style of the robot as an antecedent of the fun and enjoyment experienced in the interactions:
The responses were very friendly, and sometimes a bit, I would say, with a bit of a playful tone. There were some teasing answers as well. It was quite amusing. [IN29]
Politeness of the robot. In a similar vein, many interviewees emphasized the politeness of the robot as a key driver of their positive overall assessment. One employee described in detail that she particularly appreciated the interest shown by the robot:
Just a normal positive feeling, you know? I was impressed by what he shared and how politely he asked if I had slept well. I actually found this quite nice. [IN3]
However, despite the employees describing the robot as polite, one of them highlighted a sense of uncertainty about the robot’s further emotional capabilities. Specifically, she explicitly expressed being able to imagine less friendly forms of interaction with the robot than those that she had experienced:
He might be able to be angry too, I don’t know. I haven’t experienced that. He was always friendly. [IN11]
Cuteness of the robot. In addition to these characteristics of interactions with the robot, the robot’s appearance was also repeatedly mentioned in combination with a positive overall evaluation. In particular, respondents suggested that the childlike appearance of Pepper created a sense of cuteness, or as one employee described the robot:
Quite cute, like just a little playful child. [IN19]
Stress reduction. Beyond aspects directly related to interactions with the robot, some respondents also mentioned more indirect positive consequences of those interactions, reinforcing the impression of an overall positive assessment of the robot, due to, for example, such effects as a reduction in work-related stress:
Well, I have to say, sometimes, on certain days, I actually found it quite nice when I chatted with him for a few minutes. It was helpful for stress relief, especially on days when things were a bit hectic. [IN9]
Being fascinated by the robot. For most employees, encountering Pepper in their workplace represented their first contact with a social robot. The novelty of this experience, as they had had no previous contact with this type of technology, might have contributed to being intrigued by this particular robot. This sense of fascination may, in turn, have created an impression of the robot that lasted beyond the individual interaction, thereby positively altering the valence of the overall evaluation, or as one employee expressed it:
Personally, I was actually a bit fascinated by Pepper. But I was also incredibly distracted by it. […] Actually, he reminded me of movies everyone knows, and you think, ‘Well, it’s just a movie,’ but now it’s a bit real. [IN27]
Being disappointed by the robot. However, there were not only positive ways in which personal experience appeared to have affected the valence of the overall evaluation. Some employees indicated that interacting with the robot had also revealed the limits of its functionality. Interestingly, though, most of those participants who alluded to this issue qualified this seemingly negative statement right away and explicitly stated that they believed that the resulting awareness of the limits of Pepper’s functionality did not influence their overall evaluation of the robot. A few interviewees, however, acknowledged that it might have adversely affected the overall evaluation, as the following statement shows:
Well, at the beginning, it was interesting, and then I quickly lost interest. [IN10]
Feeling of impersonality. In line with this somewhat negative stance, another employee indicated that the interaction with the robot did not result in exclusively positive consequences such as fun, enjoyment, or being fascinated. Instead, the interaction triggered negative feelings, too. As such, she expressed a sense of impersonality that she perceived in the interaction, or as she put it:
[I felt] actually not so good. Because for me, it’s impersonal. [IN24]

4.2.2 Effects of Personal Human-Robot Interactions on Perceived Human-Likeness

The results of the picture ranking task indicated that employees who had personally interacted with the robot perceived it as more human-like than we would have expected based on its normed value (based on human-likeness score assessed through the ABOT database). The qualitative data suggest several reasons that could explain the increased perception of human-likeness, such as the robot’s appearance and behavior. However, it also became apparent that direct contact with the robot can be a double-edged sword for this perception, with functionality and employees’ expectations regarding functionality playing key roles during an interactive episode:
Interaction like a human. On the one hand, it became clear that Pepper exceeded some employees’ expectations regarding its functionality. These employees had not previously had any contact with a social robot. The robot’s ability to smoothly master social interaction—something perceived by many interviewees as an inherently human skill—seems to have surprised many of them. This surprise may have subsequently led to an increased perception of human-likeness, as described by one employee:
Almost frighteningly human. So, I imagined it differently. Frightening in a positive sense. It’s already very human, if I can say that. I was surprised by what’s all inside such a robot, what it can do. [IN2]
Malfunction like a machine. On the other hand, there were also employees whose statements suggest that the lack of functionality they perceived in some of their interactions with the robot made it particularly salient to them that this was a robot pre-programmed for (at least partially anticipated) interaction rather than a human (or human-like) actor able to adapt to entirely novel interactive situations. This impression arose in particular when the programming was perceived to be failing, as described by one person:
So, I see it as a machine. (…) Well, depending on what I said, he didn’t really function properly. That’s why I think he probably has to be programmed for many things he should or must do. [IN8]
Perception of warmth. Some employees mentioned that the robot’s behavior, movements, and interaction style radiated a certain warmth, which reduced a possible feeling of alienation and coldness that they might have feared to prevail when interacting with a robot:
Pepper, for example, comes across as very human. He is empathetic, and in that sense, he doesn’t seem cold. He doesn’t seem cold. I think that’s the best description. [IN19]
Human-like cues. Finally, the Pepper robot has been designed to imitate a childlike appearance [145]. Respondents in our study did indeed perceive it in this way and mentioned its childlike appearance as a facilitator of their perceptions of human-likeness. In combination with the robot’s capability to interact, some employees described the robot not as a robot but as another person:
One almost feels like it’s a little person with artificial intelligence, right? [IN3]
Besides other characteristics such as its voice, the robot’s eyes, which simulate human blinking through irregular flashing, were emphasized as playing a pivotal role in inducing perceived human-likeness:
Pepper’s eyes. Pepper’s eyes, voice, and size made it seem so childish, cute, and adorable to me. [IN27]
Relatedly, another respondent emphasized the significance of her personal experience of eye contact for perceiving the human-likeness of the robot, despite being aware of the difference with humans:
It’s just like with humans. Well, a robot is a machine. When you look into its eyes, it doesn’t react like a human. But it’s still a bit different than if you haven’t seen it at all. [IN11]

4.2.3 Effects of Personal Human-Robot Interactions on Acceptance at Work

The results of the picture ranking task showed marginally significant differences in acceptance at work: Employees with robot interactions during the deployment period ranked the robot higher in workplace acceptance than those without any such interactions. Again, some explanatory mechanisms emerged from the interviews that might illuminate this difference, although not entirely without ambiguity. As with perceived human-likeness, more skeptical experience-driven assessments were mainly rooted in how employees perceived the robot’s functionality.
Perception of benefits. The most frequently mentioned aspect promoting acceptance was the perception of the benefits of using Pepper in the workplace. Employees made it clear that the perception of these benefits was the result of a process during the deployment period, starting without a concrete idea and mostly ending with a positive view:
At the beginning, I thought, “Well, what’s he going to do there, what can he really achieve, and so on?” But when the background was clear, for example, that he is also meant to create good vibes in the employee areas, which really happened. [IN19]
Interestingly, this positive development in terms of assessment seemed to apply not only to Pepper but appeared to (at least partially) spill over into a broader, overarching attitude towards robots in general in the workplace, as illustrated, for example, by the following statement:
And well, in general, I would say I now look at things in relation to robots a bit differently. Because I see there is some potential there, and I already have a somewhat different attitude towards the robot. [IN9]
Reduction of robot-related uncertainty. Additionally, employees with experiences of personal interaction with the robot also attributed positively altered opinions regarding robot acceptance to the reduction of uncertainty related to the robot. Furthermore, several interviewees indicated that prejudices had prevailed, in particular following the announcement of the robot’s imminent deployment, but that these prejudices and reservations were ultimately alleviated at least to some extent by employees’ personal experiences with the robot, as summarized by one respondent:
So, I would say that they have definitely changed for the better. The acceptance has increased, and even if there were prejudices, I would say they have definitely diminished. Because at first, when they say robots are coming, they will be deployed here and there, you don’t really have a clear idea of how it will actually work. But I believe that over the weeks or months, you really got to see it more, and speaking for myself, I have definitely gained a positive impression. [IN2]
Recognition of the non-threatening nature of the robot. Respondents explicitly referred to one of these prejudices, which had initially represented an obstacle to forming positive attitudes, and described how interacting with the robot mitigated this concern. Specifically, employees reported that they realized that the robot’s purpose could be to augment them rather than to replace them:
So, when Pepper was deployed with us, I noticed that it was not meant to replace any employees. [IN2]
Lack of functionality. As with the first two constructs—overall evaluation of the robot and perceived human-likeness—perceptions of the robot’s functionality again constituted an essential factor that adversely impacted the third construct, that is, workplace acceptance. However, statements suggest that this assessment was not final, and further enhancement of the robot model was envisaged as potentially leading to a further increase in acceptance:
If he were to be further developed on the communicative level for informational tasks, like at the cash register, for example, it could make sense. But not with this model. And not as it is now. [IN20]
Consequently, the core mechanism behind the increased acceptance at the workplace of this specific model of Pepper might be rooted in different, task-unrelated attitudes towards it, as one employee described:
Oh, he’s a funny little guy, like an attraction. But as assistance, I couldn’t really see him that way, especially due to his small size. [IN14]

5 Discussion

In the present study, we examined differences in employees’ ranking of a social robot that was implemented at their workplace on the dimensions of the robot’s overall evaluation, perceived human-likeness, and acceptance at work, depending on whether they had interacted with the robot themselves or not. Furthermore, we sought to uncover experience-driven antecedents of employees’ distinct attitudes toward the robot by additionally conducting a qualitative analysis of interview data from employees who had interacted with the robot. By exploiting a natural experiment-like setting, we were able to identify group differences between employees who had the possibility to directly interact with Pepper and those employees who lacked this possibility. Thus, we contribute to HRI literature by answering recent calls for a qualitative investigation of experience-driven mechanisms in changes in robot-related attitudes [6] and for observing users’ reactions in their natural environment (e.g., their workplace) [146]. We were interested in (1) whether we would observe differences in employees’ attitudes depending on their personal experiences with the robot in their work setting (captured by the picture ranking task) and (2) how any such differences—should they emerge—would relate back to employees’ specific experiences (captured by the qualitative analysis of interview data). Results show that employees with personal experience through interacting with the robot ranked it significantly more positively and reported significantly higher acceptance of it at work than their colleagues who had not had the opportunity to interact with Pepper. Furthermore, employees in the former group ranked Pepper significantly higher in human-likeness than the rank suggested by its normed human-likeness score, thus overestimating its human-likeness. Finally, a thematic analysis of qualitative data revealed different experience-driven factors that shaped employees’ attitudes towards the social robot, which we discuss in detail below in relation to prior literature.

5.1 Personal Human-Robot Interaction Experiences and Overall Evaluation

Results from the picture ranking task showed that personal experiences with the robot influenced employees’ overall evaluation of it: Individuals who had interacted with Pepper at work ranked it significantly more positively. Building upon this observation, findings from the qualitative interviews shed light on how experiences positively influenced employees’ overall evaluation of the robot. First, in line with prior literature, respondents highlighted the effect of the robot’s interaction behavior, such as, for example, its being polite, on their overall evaluation [36]. Second, participants indicated that their feeling of being intrigued increased, whereas their feeling of disappointment decreased, their overall evaluation of the robot. Both of these feelings result from a non-congruency between their expectations towards interactions with the robot, on the one hand, and their actual experiences in interactions with it, on the other hand [6, 58]. As such, they provide further empirical evidence for the importance of alignment between expectations and experiences for positive evaluations of social robots [52, 56, 57]. However, and this is striking in view of extant empirical literature, respondents in our study did not mention the robot’s functionality as an influence in its own right on their overall evaluation of it; instead, they referred to it solely in the context of prior expectations (being intrigued/disappointed). Given prior empirical evidence that has strongly emphasized the importance of functionality on the robot side for the overall evaluation of social robots in function-based or work-related settings [44–47], the absence of a direct relationship in the accounts of our interviewees is surprising. Instead, results show that employees focused on different attributes of the robot side, such as behavior and design, which shaped their overall evaluation of the robot. Specifically, they expressed that their experiences activated affective routes, which may offer valuable extensions to existing answers to the question of how employees evaluate a social robot in a work setting: Regarding robot design, respondents did not mention Pepper’s human-like appearance as influencing their evaluation but rather the positive influence of Pepper’s cuteness as a distinct design choice. Thereby, the results indicate the possibility of a “cute response” [147] towards social robots. Cute responses, which are primarily examined in the context of pets, describe unconscious reactions triggered by infantile features (e.g., large eyes) that are universally appealing to humans [147]. While there has been some (limited) research suggesting that a robot’s cuteness may influence humans’ reactions towards it [148, 149], the present study is, to the best of our knowledge, the first to detect a cute response to a social robot in a work setting, thus implying the presence of antecedents of the overall evaluation of a robot in a work setting that go beyond work-related factors, such as functionality. In line with this notion, participants described that other affective reactions during the HRI (fun, stress reduction, or a feeling of impersonality) caused by Pepper’s behavior also spilled over into an overall more positive or more negative picture of Pepper.
This finding provides, to the best of our knowledge, the first empirical evidence in the context of HRI with a social robot at work for the presence of the affect heuristic, which states that the valence of an affect experienced during interaction with a stimulus can exert same-directional effects on broader attitudes towards this stimulus [118].
Thus, although the positive relationship between direct experiences with a social robot and a positive evaluation had been found in earlier HRI literature, studies also indicated a strong relationship between positive evaluations and perceptions of functionality when a robot is being evaluated in a functional or work setting [44–47]. The results of the present study, which analyzed a real work setting, thus contribute to HRI literature by highlighting potential mechanisms—cute responses caused by the robot’s design and the affect heuristic activated by the robot’s behavior—that positively influence employees’ evaluation of a social robot in a work setting independently of its perceived functionality. To the best of our knowledge, these findings add novel insights to research on HRI at work. Based on these results, we encourage future research to, first, investigate in-depth potential interaction effects of employees’ functionality perceptions and affective reactions on their overall evaluation of a social robot at their workplace. Second, we encourage future research on employees’ overall evaluation of a focal robot following a functional perspective to explicitly consider and control for potential affect-based mechanisms that may confound effects on overall evaluation.

5.2 Personal Human-Robot Interaction Experiences and Perceived Human-Likeness

The interpersonal differences in the ranking of Pepper’s picture on the semantic differential that captured perceived human-likeness revealed that those with personal interaction experiences with the robot significantly overrated Pepper regarding its human-likeness, unlike employees who lacked this experience. That is, experiences of interacting with Pepper led employees to perceive it as overly human-like, thus confirming that personal experiences can influence the perceived human-likeness of a focal robot in a work setting. To identify potential reasons for this observation, results from qualitative interview data provided valuable insights into how experiences with Pepper shaped its perceived human-likeness. In line with prior literature, respondents indicated that observing human-like cues, such as a child-like body [67], eyes [72], and voice [71, 85], and perceiving human-like interaction characteristics, such as warmth [74, 75], positively impacted their perceived human-likeness of Pepper. Furthermore, some participants indicated a feeling of a social presence during the interaction with Pepper [150, 151], as illustrated by their mentioning that they felt as if they encountered ‘almost a little person’ when interacting with Pepper. Additionally, we observed surprising results regarding the influence of participants’ perceptions of Pepper’s functionality on its perceived human-likeness. As such, some employees highlighted that a broad scope of functions in terms of being able to engage in a proper conversation not only surprised them but also made them perceive the interaction as very natural and human-like. On the other hand, some participants highlighted that malfunctioning, such as the robot not understanding certain specific questions, reminded them of the technological nature of their interaction partner, thus decreasing their perceptions of its human-likeness. Moreover, some participants reacted to such experiences of malfunction by engaging in a kind of ‘hypothesis generation,’ where they tried to figure out potential reasons, for example, programming failures or problems in connecting to the network. Thus, our findings indicate that malfunctioning may trigger (more) in-depth reflection about robot-inherent processes, which, in turn, might demystify Pepper’s humanoid appearance in the sense of ‘judging the book by more than its cover.’ Overall, such reflection might negatively influence its perceived human-likeness. Prior literature has so far yielded mixed findings regarding possible consequences of malfunctioning on perceived human-likeness [152]. Our findings of potential demystification through in-depth reflection about reasons for malfunctioning offer a possible explanation for this ambiguity. In particular, we argue that such in-depth reflection requires a certain level of knowledge about the functionality of robots and technology in general. As individuals differ in their levels of such relevant knowledge, these differences would be reflected in corresponding interpersonal differences in susceptibility to experience robot ‘demystification’ and, ultimately, in a decrease in perceptions of human-likeness. We thus encourage future research to investigate a potentially moderating influence of robot- and technology-related knowledge on the relationship between robot failure/malfunctioning and perceptions of human-likeness.

5.3 Personal Human-Robot Interaction Experiences and Acceptance at Work

Again, results from the picture ranking task confirmed that employees differed in acceptance of Pepper at work depending on whether they had personally experienced interacting with it. In the qualitative interviews, respondents described different ways in which experiences with Pepper shaped their increased acceptance of it at work. First, the functionality of Pepper emerged as a key antecedent that, depending on how it was evaluated, potentially increased or decreased acceptance of the robot at work. As such, some employees mentioned that a positive evaluation of the robot’s functionality induced the perception of benefits they could reap from its presence at their workplace. In contrast, the perception of a lack of functionality had the opposite effect, wherein employees saw no (personal) benefit from the presence of a robot at the workplace, which, in turn, reduced acceptance. This finding is in line with prior literature that has conceptualized functionality as a key antecedent of technology acceptance, such as the TAM [21] and the UTAUT [22], and, more specifically for social robots, the Almere model [100]. However, our participants also revealed complementary processes that increased their acceptance of the robot and counterbalanced the negative effect of the perceived lack of functionality on overall acceptance. Shedding light on these complementary processes in acceptance formation, we believe, can significantly contribute to fostering an understanding of the process of employees’ acceptance of a social robot at their workplace.
Accordingly, many participants referred to their experiences with the robot as having caused a reduction in the uncertainty associated with it and thus having positively influenced their acceptance of it. Furthermore, some employees expressed concerns that the robot might play a role in the organization’s increasing automation. However, they also described that this view changed in response to their experiences, leading them to recognize that the role of their new ‘robot colleague’ was to augment rather than replace them. Ultimately, this process resulted in an increased acceptance of the robot. These findings can be explained by uncertainty reduction theory, which originates in human communication research and posits that greater uncertainty about another person diminishes an individual’s willingness to accept and be around this other person [122, 123]. Initial and ongoing interactions with this person thus serve as an opportunity to reduce uncertainty and can, therefore, indirectly increase acceptance [122, 123]. Although these mechanisms were originally proposed for interaction among humans, scholars have pointed to the possible transferability of these social responses to interactions between humans and technology [17]. Specifically, they suggest such acceptance-increasing effects of uncertainty reduction towards different technological “interaction partners,” such as COVID-19 tracing apps [153], conversational agents [154], and mobile government security response systems [155]. In the domain of social robotics, prior research has found that uncertainty reduction increased trust in a focal robot among children [156] and among visitors to a public service center [157].
However, regarding acceptance of social robots, existing HRI research has hinted at potential differences in uncertainty reduction effects, depending on whether the focus is on social aspects or the collaborative nature of the interaction. HRI research focusing on the collaborative nature of HRI (e.g., at work) has argued for positive effects of uncertainty reduction on acceptance [129–131]. In contrast, research focusing on social aspects has argued that a certain level of uncertainty, as an attribute inherent to social interaction, can be a characteristic that—to some extent—is actually desirable in anthropomorphized agents, and its decrease may negatively affect their acceptance [134–137]. To the best of our knowledge, the present study is the first to investigate possible uncertainty reduction effects on employees’ acceptance of a social robot in a real work setting. In our specific case, the robot was implemented into a work setting—hence, a collaborative setting—where its tasks were fulfilled through verbal interaction—hence, it acted as a social agent. Due to this combination, ambiguous empirical evidence regarding a positive [129–131] or negative [134–137] influence of uncertainty reduction on robot acceptance raised questions about the net effect of uncertainty reduction in a situation where both aspects are combined. By detecting positive uncertainty reduction effects on acceptance of the robot in a setting where these two characteristics overlap, we believe that the results of this study contribute to existing HRI research by indicating that potential positive effects of uncertainty reduction due to the collaborative setting may outweigh potential negative effects of uncertainty reduction caused by demystification of a social agent. It thus provides fruitful ground for future research that could probe the robustness of this net uncertainty reduction effect on workplace acceptance of social robots using other robots with different types of functionalities and anthropomorphic features, thus expanding existing technology [21, 22] and social robot [100] acceptance models by enhancing scholars’ understanding of the influence of uncertainty reduction triggered by personal experiences with a focal robot.
Furthermore, the mechanism identified in this study—increased acceptance due to uncertainty reduction triggered by experiences from personal interactions—suggests intriguing considerations related to the integration of large language models (LLMs) into social robots. Integration of LLMs arguably enhances the interactive capabilities of social robots and may therefore impact their acceptance by users. Specifically, scholars have argued that individuals starting to interact socially with a technology may experience uncertainty, as they lack adequate interaction scripts and experiences for such situations [19, 158]. Moreover, existing research has shown that users’ perceptions of similarity between interaction with humans and with AI—as would be the case with LLMs—can reduce this uncertainty, because it allows them to refer to known categories (i.e., human interaction schemes) [159, 160]. This, in turn, has been argued to increase trust [159, 161], which is a central antecedent of acceptance [128]. In conjunction with these findings of prior research on humans’ reactions, our empirical results—i.e., positive effects of uncertainty reduction on workplace acceptance—imply that we may expect further increases in workplace acceptance of social robots if their interactive capabilities are enhanced through the integration of LLMs. However, as this was not part of the robot’s design when deployed by the focal organization in this study, it remains an issue that has to be addressed by future research.

5.4 Limitations and Future Research

As with every empirical work, the present study and its findings are subject to several limitations that must be considered; at the same time, these limitations point to promising avenues for future research. First, our setting consisted of one organization that implemented a particular robot (Pepper), which limits external validity. However, we undertook several measures (e.g., anonymity of participants) and collected data from different departments within the organization to increase internal validity. Moreover, we believe that this empirical setting infuses our findings with particular relevance for organizational practice, which typically involves the introduction of a social robot within a single organization rather than across multiple, diverse organizations simultaneously. Nevertheless, future research could build upon and validate our findings in different organizations and industries and with different (social) robots.
Furthermore, prior literature has shown that reactions to technology [162] and several of the observed mechanisms, such as the ambition to reduce uncertainty [109], may be subject to cultural influences. Thus, we call upon scholars to build upon our findings in future research and broaden the view toward cross-cultural considerations that may help isolate potentially culture-dependent influences. Second, the mechanisms proposed in this study to underlie the experience-driven differences in attitudes resulted from an analysis of qualitative data from semi-structured interviews. Although we believe in the power of qualitative analysis to detect these underlying mechanisms [163, 164] and to thereby enrich the primary, quantitative data from the picture ranking task [165, 166], follow-up studies using large-scale quantitative data to statistically test the proposed mechanisms would be highly valuable. We thus encourage future research to build upon the insights of this study and to quantitatively investigate the proposed influences of experiences with a social robot on employees’ attitudes towards this robot, as well as potential mediating (e.g., uncertainty reduction) and moderating effects (e.g., technical knowledge) that this study’s findings tentatively point towards.
Finally, we deliberately chose not to collect socio-demographic information from our respondents to ensure their anonymity. While we believe that this reduced the threat of social desirability in the answers of our respondents and thus increased the quality of the data used in this study, it represents a limitation regarding the generalizability of our findings. Previous literature has identified gender [59] and age [71] as influential factors in individuals’ attitudes toward social robots. Additionally, it is reasonable to assume that individuals’ attitudes towards a social robot being implemented at their workplace—and the effect of personal experiences on them—differ depending on job-related attributes, such as, for example, the overlap of individuals’ job content with the tasks given to the robot, or the time available to engage in interactions with the robot. Nonetheless, we believe that the present study provides valuable insights into the role of experiences with a social robot at work in general, which constitutes fruitful ground for further in-depth investigation. Therefore, we encourage future research to build upon our results and replicate this study in a larger company, thus providing the opportunity to analyze differences due to gender, age, and job position without threatening respondents’ anonymity.

6 Conclusion

In this article, we presented the results of an empirical study examining whether and how employees’ personal experiences with a social robot implemented in their work environment affect their attitudes towards this focal robot. A collaboration with a local service organization allowed us to make use of a kind of ‘natural experiment’ in which the organization deployed the social robot Pepper initially for six weeks at the organization’s main site (out of a total of three sites with 110 employees overall). At this site, Pepper was positioned in various places throughout each day, both in the staff and customer areas, where it took over tasks that had previously been the responsibility of human employees (e.g., providing information to customers). Subsequent to the initial deployment period, we invited all company employees to share their assessments of the robot with us. We collected quantitative data from a picture ranking task, in which respondents were asked to rank ten pictures of different social robots according to their overall evaluation of these robots, the robots’ perceived human-likeness, and their acceptance at work, as well as qualitative interview data. We found that employees who had interacted with Pepper themselves during its deployment ranked it significantly more favorably regarding overall evaluation and acceptance at work. Further, this group of employees overestimated Pepper’s human-likeness—unlike employees without personal interaction experience with the robot. Results from a qualitative analysis of interview data indicated different experience-driven mechanisms that increased overall evaluation, perceived human-likeness, and acceptance at work. As such, findings indicated the presence of a cute response [147], where the perception of the robot as cute spilled over into a more positive general evaluation, as well as the activation of an affect heuristic [118], leading to a more favorable overall evaluation caused by an experience of positive feelings (e.g., fun) in the presence of the robot. In contrast to prior literature, these affective reactions outshone perceptions of functionality, which were not mentioned by our respondents in regard to their overall evaluation of the robot. Furthermore, our findings suggest that perceptions of functionality instead play a key role in whether the robot is perceived as human-like. While functionality had the potential to increase perceptions of human-likeness (especially when it exceeded expectations), malfunctioning experiences reminded some respondents of the robot’s technological nature, thus demystifying it and ultimately decreasing its perceived human-likeness. Finally, we found that reduction of robot-related uncertainty through personal experiences with a social robot is a factor that considerably increases acceptance of the focal robot at work, despite the risk that uncertainty reduction demystifies the robot and weakens its status as a social agent. Summarizing the results of our quantitative and qualitative analysis, we can conclude that familiarity with a robot breeds affinity, which is reflected in the way different attitudes towards the robot are altered.
Taken together, we believe that the present study offers a valuable contribution to understanding employees’ attitudes towards a social robot that is being implemented into their work environment. Specifically, the study highlights the power of employees’ personal experiences as a key influence on forming beneficial attitudes. As the number of social robots deployed in such settings is constantly rising, these findings not only expand the growing body of literature on social robots in work settings but also offer important implications for organizations considering adopting such robots.

Declarations

Ethics Approval

The study received approval from the Ethics Committee of Trier University (No. 38/2022).
Participants in the study were informed, and consented, that their participation would be completely anonymous and voluntary, that the resulting audio recordings and transcripts would not be shared with third parties (e.g., the organization), and that participation could be interrupted, terminated, or withdrawn at any time without giving reasons. Furthermore, all participants were informed that neither their participation in this study nor any decision to terminate or withdraw their participation would affect their employment by the organization in any form. Finally, participants explicitly agreed to the audio recording during the interviews.

Employment

None of the authors has to disclose recent, present, or anticipated employment by any organization that may gain or lose financially through publication of this manuscript.

Competing Interests

The authors have no financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Title: Familiarity Breeds Affinity – How Personal Experiences Change Employees’ Attitudes Towards a Social Robot
Authors: Jonas Ossadnik, Katrin Muehlfeld
Publication date: 01-07-2025
Publisher: Springer Netherlands
Published in: International Journal of Social Robotics, Issue 7/2025
Print ISSN: 1875-4791
Electronic ISSN: 1875-4805
DOI: https://doi.org/10.1007/s12369-025-01268-9
Footnote 1. Importantly, several months after the implementation period had ended, the organization’s management decided to abandon the purchase of a Pepper robot. This decision, however, was still pending at the time of data collection. Thus, at the point of data collection, employees were not aware that the robot implementation would not continue.
 
Footnote 2. Interviewees’ gender is included in the overview of interviewees (see Table 2). We believe that revealing this information as the sole piece of socio-demographic information is in line with the assurances we provided to employees: Interviewees could not assume their gender to be hidden, as we did not use any voice-distorting device in the interviews but only visual blinds and interviewers without any connection to the company in question, who were hired solely for the purpose of conducting the interviews. Also, based on gender alone, it was not possible to identify any specific employee. Approximately two thirds of the company’s employees are women. This distribution is represented in both the total group of participants (20 of 31; 64.5% female) and the ‘treatment’ group (9 of 14; 64.3% female).
 
Footnote 3. ABOT: The Anthropomorphic Robot Database, accessible via https://www.abotdatabase.info/.
 
Footnote 4. Robots used: Big-I, Moro, Sota, Snackbot, Pepper, Reem-c, Topio, Sophia, Otonaroid, and Nadine.
 
References

1. Wirtz J, Patterson PG, Kunz WH et al (2018) Brave new world: service robots in the frontline. J Serv Manag 29:907–931. https://doi.org/10.1108/JOSM-04-2018-0119
2. Youssef K, Said S, Alkork S et al (2022) A survey on recent advances in social robotics. Robotics 11:75–103. https://doi.org/10.3390/robotics11040075
3. Mende M, Scott ML, van Doorn J et al (2019) Service robots rising: how humanoid robots influence service experiences and elicit compensatory consumer responses. J Mark Res 56:535–556. https://doi.org/10.1177/0022243718822827
4. Naneva S, Sarda Gou M, Webb TL et al (2020) A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int J Soc Robot 12:1179–1201. https://doi.org/10.1007/s12369-020-00659-4
5. Savela N, Turja T, Oksanen A (2018) Social acceptance of robots in different occupational fields: a systematic literature review. Int J Soc Robot 10:493–502. https://doi.org/10.1007/s12369-017-0452-5
6. Rosén J, Lindblom J, Lamb M et al (2024) Previous experience matters: an in-person investigation of expectations in human–robot interaction. Int J Soc Robot 16:447–460. https://doi.org/10.1007/s12369-024-01107-3
7. Schönmann M, Bodenschatz A, Uhl M et al (2023) The care-dependent are less averse to care robots: an empirical comparison of attitudes. Int J Soc Robot 15:1007–1024. https://doi.org/10.1007/s12369-023-01003-2
8. Reich-Stiebert N, Eyssel F (2015) Learning with educational companion robots? Toward attitudes on education robots, predictors of attitudes, and application potentials for education robots. Int J Soc Robot 7:875–888. https://doi.org/10.1007/s12369-015-0308-9
9. Turja T, Oksanen A (2019) Robot acceptance at work: a multilevel analysis based on 27 EU countries. Int J Soc Robot 11:679–689. https://doi.org/10.1007/s12369-019-00526-x
10. Turja T, Taipale S, Kaakinen M et al (2020) Care workers’ readiness for robotization: identifying psychological and socio-demographic determinants. Int J Soc Robot 12:79–90. https://doi.org/10.1007/s12369-019-00544-9
11. Carradore M (2022) People’s attitudes towards the use of robots in the social services: a multilevel analysis using Eurobarometer data. Int J Soc Robot 14:845–858. https://doi.org/10.1007/s12369-021-00831-4
12. Sarda Gou M, Webb TL, Prescott T (2021) The effect of direct and extended contact on attitudes towards social robots. Heliyon 7:e06418. https://doi.org/10.1016/j.heliyon.2021.e06418
13. Zajonc RB (1968) Attitudinal effects of mere exposure. J Pers Soc Psychol 9:1–27. https://doi.org/10.1037/h0025848
14. Bornstein RF, D’Agostino PR (1992) Stimulus recognition and the mere exposure effect. J Pers Soc Psychol 63:545–552. https://doi.org/10.1037/0022-3514.63.4.545
15. Paluch S, Tuzovic S, Holz HF et al (2022) My colleague is a robot – exploring frontline employees’ willingness to work with collaborative service robots. J Serv Manag 33:363–388. https://doi.org/10.1108/JOSM-11-2020-0406
16. Akalin N, Kiselev A, Kristoffersson A et al (2023) A taxonomy of factors influencing perceived safety in human–robot interaction. Int J Soc Robot 15:1993–2004. https://doi.org/10.1007/s12369-023-01027-8
17. Nass C, Moon Y (2000) Machines and mindlessness: social responses to computers. J Soc Issues 56:81–103. https://doi.org/10.1111/0022-4537.00153
18. Qin X, Chen C, Yam KC et al (2022) Adults still can’t resist: a social robot can induce normative conformity. Comput Hum Behav 127:107041. https://doi.org/10.1016/j.chb.2021.107041
19. Złotowski J, Proudfoot D, Yogeeswaran K et al (2015) Anthropomorphism: opportunities and challenges in human–robot interaction. Int J Soc Robot 7:347–360. https://doi.org/10.1007/s12369-014-0267-6
20. Ossadnik J, Muehlfeld K, Goerke L (2023) Man or machine – or something in between? Social responses to voice assistants at work and their effects on job satisfaction. Comput Hum Behav 149:107919. https://doi.org/10.1016/j.chb.2023.107919
21. Davis FD (1989) Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 13:319–340. https://doi.org/10.2307/249008
22. Venkatesh V, Morris MG, Davis GB et al (2003) User acceptance of information technology: toward a unified view. MIS Q 27:425–478. https://doi.org/10.1086/375201
23. Kopp T, Baumgartner M, Kinkel S (2021) Success factors for introducing industrial human-robot interaction in practice: an empirically driven framework. Int J Adv Manuf Technol 112:685–704. https://doi.org/10.1007/s00170-020-06398-0
24. Lewin K (1951) Field theory in social science, 1st edn. Harper & Brothers, New York, NY, USA
25. Martin JL (2003) What is field theory? Am J Sociol 109:1–49. https://doi.org/10.1086/375201
26. Shuman V, Sander D, Scherer KR (2013) Levels of valence. Front Psychol 4:261–278. https://doi.org/10.3389/fpsyg.2013.00261
27. Scherer KR (2013) The nature and dynamics of relevance and valence appraisals: theoretical advances and recent evidence. Emot Rev 5:150–162. https://doi.org/10.1177/1754073912468166
28. Walle EA, Dukes D (2023) We (still!) need to talk about valence: contemporary issues and recommendations for affective science. Affect Sci 4:463–469. https://doi.org/10.1007/s42761-023-00217-x
29. Russell JA (2003) Core affect and the psychological construction of emotion. Psychol Rev 110:145–172. https://doi.org/10.1037/0033-295x.110.1.145
30. Watson D, Wiese D, Vaidya J et al (1999) The two general activation systems of affect: structural findings, evolutionary considerations, and psychobiological evidence. J Pers Soc Psychol 76:820–838. https://doi.org/10.1037/0022-3514.76.5.820
31. Desideri L, Ottaviani C, Malavasi M et al (2019) Emotional processes in human-robot interaction during brief cognitive testing. Comput Hum Behav 90:331–342. https://doi.org/10.1016/j.chb.2018.08.013
32. Fracasso F, Buchweitz L, Theil A et al (2022) Social robots acceptance and marketability in Italy and Germany: a cross-national study focusing on assisted living for older adults. Int J Soc Robot 14:1463–1480. https://doi.org/10.1007/s12369-022-00884-z
33. Moshkina L, Park S, Arkin RC et al (2011) TAME: time-varying affective response for humanoid robots. Int J Soc Robot 3:207–221. https://doi.org/10.1007/s12369-011-0090-2
34. Saunderson S, Nejat G (2019) How robots influence humans: a survey of nonverbal communication in social human–robot interaction. Int J Soc Robot 11:575–608. https://doi.org/10.1007/s12369-019-00523-0
35. Karunarathne D, Morales Y, Kanda T et al (2018) Model of side-by-side walking without the robot knowing the goal. Int J Soc Robot 10:401–420. https://doi.org/10.1007/s12369-017-0443-6
36. Torrey C, Fussell SR, Kiesler S (2013) How a robot should give advice. In: 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE. https://doi.org/10.1109/HRI.2013.6483599
37. Leite I, Pereira A, Mascarenhas S et al (2013) The influence of empathy in human–robot relations. Int J Hum-Comput Stud 71:250–260. https://doi.org/10.1016/j.ijhcs.2012.09.005
38. Tojib D, Abdi E, Tian L et al (2023) What’s best for customers: empathetic versus solution-oriented service robots. Int J Soc Robot 15:731–743. https://doi.org/10.1007/s12369-023-00970-w
39. Kang D, Nam C, Kwak SS (2024) Robot feedback design for response delay. Int J Soc Robot 16:341–361. https://doi.org/10.1007/s12369-023-01068-z
40. Mori M, MacDorman K, Kageki N (2012) The uncanny valley [from the field]. IEEE Robot Automat Mag 19:98–100. https://doi.org/10.1109/mra.2012.2192811
41. Kwak SS, Kim JS, Choi JJ (2017) The effects of organism- versus object-based robot design approaches on the consumer acceptance of domestic robots. Int J Soc Robot 9:359–377. https://doi.org/10.1007/s12369-016-0388-1
42. Rosenthal-von der Pütten AM, Krämer NC (2015) Individuals’ evaluations of and attitudes towards potentially uncanny robots. Int J Soc Robot 7:799–824. https://doi.org/10.1007/s12369-015-0321-z
43. Luperto M, Monroy J, Renoux J et al (2023) Integrating social assistive robots, IoT, virtual communities and smart objects to assist at-home independently living elders: the MoveCare project. Int J Soc Robot 15:517–545. https://doi.org/10.1007/s12369-021-00843-0
44. Skubis I (2025) Exploring the potential and perceptions of social robots in tourism and hospitality: insights from industry executives and technology evaluation. Int J Soc Robot 17:59–72. https://doi.org/10.1007/s12369-024-01197-z
45. So KKF, Kim H, Liu SQ, Fang X, Wirtz J (2024) Service robots: the dynamic effects of anthropomorphism and functional perceptions on consumers’ responses. Eur J Mark 58:1–32. https://doi.org/10.1108/EJM-03-2022-0176
46. Huang HL, Cheng LK, Sun PC, Chou SJ (2021) The effects of perceived identity threat and realistic threat on the negative attitudes and usage intentions toward hotel service robots: the moderating effect of the robot’s anthropomorphism. Int J Soc Robot 13:1599–1611. https://doi.org/10.1007/s12369-021-00752-2
47. Yam KC, Bigman YE, Tang PM, De Cremer D, Soh H, Gray K (2021) Robots at work: people prefer—and forgive—service robots with perceived feelings. J Appl Psychol 106:1557–1572. https://doi.org/10.1037/apl0000834
48. Reich-Stiebert N, Eyssel F, Hohnemann C (2019) Involve the user! Changing attitudes toward robots by user participation in a robot prototyping process. Comput Hum Behav 91:290–296. https://doi.org/10.1016/j.chb.2018.09.041
49. Lohse M (2009) The role of expectations in HRI. New Front Hum Robot Interact 35–56
50. Spence PR, Westerman D, Edwards C, Edwards A (2014) Welcoming our robot overlords: initial expectations about interaction with a robot. Commun Res Rep 31(3):272–280. https://doi.org/10.1080/08824096.2014.924337
51. Edwards A, Edwards C, Westerman D, Spence PR (2019) Initial expectations, interactions, and beyond with social robots. Comput Hum Behav 90:308–314. https://doi.org/10.1016/j.chb.2018.08.042
52. Jokinen K, Wilcock G (2017) Expectations and first experience with a social robot. In: Proceedings of the 5th international conference on human agent interaction (HAI 2017), pp 511–515. https://doi.org/10.1145/3125739.3132610
53. Horstmann AC, Krämer NC (2019) Great expectations? Relation of previous experiences with social robots in real life or in the media and expectancies based on qualitative and quantitative assessment. Front Psychol 10:939. https://doi.org/10.3389/fpsyg.2019.00939
54. Horstmann AC, Krämer NC (2020) Expectations vs. actual behavior of a social robot. PLoS ONE 15(8):e0238133. https://doi.org/10.1371/journal.pone.0238133
55. Edwards C, Edwards AP, Spence PR, Westerman DK (2016) Initial interaction expectations with robots: testing the human-to-human interaction script. Commun Stud 67:227–238. https://doi.org/10.1080/10510974.2015.1121899
56. Vlachos E, Jochum E, Demers L-P (2016) The effects of exposure to different social robots on attitudes toward preferences. Interact Stud 17:390–404. https://doi.org/10.1075/is.17.3.04vla
57. Hameed IA, Tan ZH, Thomsen NB, Duan X (2016) User acceptance of social robots. In: The 9th Int. Conf. ACHI (ACHI 2016), pp 274–279
58. Rosén J, Lindblom J, Billing E (2022) The social robot expectation gap evaluation framework. In: Kurosu M (ed) Human-Computer Interaction. Technological Innovation: Thematic Area, HCI 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings, Part II. Springer, Cham, pp 590–610. https://doi.org/10.1007/978-3-031-05409-9_43
59. Stafford RQ, MacDonald BA, Li X et al (2014) Older people’s prior robot attitudes influence evaluations of a conversational robot. Int J Soc Robot 6:281–297. https://doi.org/10.1007/s12369-013-0224-9
60. Lim V, Rooksby M, Cross ES (2021) Social robots on a global stage: establishing a role for culture during human–robot interaction. Int J Soc Robot 13:1307–1333. https://doi.org/10.1007/s12369-020-00710-4
61. Damholdt MF, Quick OS, Seibt J et al (2023) A scoping review of HRI research on ‘anthropomorphism’: contributions to the method debate in HRI. Int J Soc Robot 15:1203–1226. https://doi.org/10.1007/s12369-023-01014-z
62. Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114:864–886. https://doi.org/10.1037/0033-295X.114.4.864
63. Ruijten PAM, Haans A, Ham J et al (2019) Perceived human-likeness of social robots: testing the Rasch model as a method for measuring anthropomorphism. Int J Soc Robot 11:477–494. https://doi.org/10.1007/s12369-019-00516-z
64. Duffy BR (2003) Anthropomorphism and the social robot. Robot Auton Syst 42:177–190. https://doi.org/10.1016/S0921-8890(02)00374-3
65. von Zitzewitz J, Boesch PM, Wolf P et al (2013) Quantifying the human likeness of a humanoid robot. Int J Soc Robot 5:263–276. https://doi.org/10.1007/s12369-012-0177-4
66. Kühne R, Peter J (2023) Anthropomorphism in human–robot interactions: a multidimensional conceptualization. Commun Theory 33:42–52. https://doi.org/10.1093/ct/qtac020
67. Sacino A, Cocchella F, de Vita G et al (2022) Human- or object-like? Cognitive anthropomorphism of humanoid robots. PLoS ONE 17:e0270787. https://doi.org/10.1371/journal.pone.0270787
68. Ferrari F, Paladino MP, Jetten J (2016) Blurring human–machine distinctions: anthropomorphic appearance in social robots as a threat to human distinctiveness. Int J Soc Robot 8:287–302. https://doi.org/10.1007/s12369-016-0338-y
69. Dubois-Sage M, Jacquet B, Jamet F et al (2023) We do not anthropomorphize a robot based only on its cover: context matters too! Appl Sci 13:8743. https://doi.org/10.3390/app13158743
70. Fink J (2012) Anthropomorphism and human likeness in the design of robots and human-robot interaction. In: Ge SS (ed) Social robotics: 4th International Conference, ICSR 2012, Chengdu, China, October 29–31, 2012; proceedings. Springer, Berlin, Heidelberg, pp 199–208. https://doi.org/10.1007/978-3-642-34103-8_20
71. Blut M, Wang C, Wünderlich NV et al (2021) Understanding anthropomorphism in service provision: a meta-analysis of physical robots, chatbots, and other AI. J Acad Mark Sci 49:632–658. https://doi.org/10.1007/s11747-020-00762-y
72. Hostettler D, Mayer S, Hildebrand C (2023) Human-like movements of industrial robots positively impact observer perception. Int J Soc Robot 1339–1417. https://doi.org/10.1007/s12369-022-00954-2
73. Waytz A, Morewedge CK, Epley N et al (2010) Making sense by making sentient: effectance motivation increases anthropomorphism. J Pers Soc Psychol 99:410–435. https://doi.org/10.1037/a0020240
74. Christoforakos L, Gallucci A, Surmava-Große T et al (2021) Can robots earn our trust the same way humans do? A systematic exploration of competence, warmth, and anthropomorphism as determinants of trust development in HRI. Front Robot AI 8:640444. https://doi.org/10.3389/frobt.2021.640444
75. Fortunati L, Manganelli AM, Höflich J et al (2023) Exploring the perceptions of cognitive and affective capabilities of four, real, physical robots with a decreasing degree of morphological human likeness. Int J Soc Robot 15:547–561. https://doi.org/10.1007/s12369-021-00827-0
76. Salem M, Ziadee M, Sakr M (2014) Marhaba, how may I help you? In: Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction. ACM, New York, NY, USA
77. Phillips E, Zhao X, Ullman D et al (2018) What is human-like? Decomposing robots’ human-like appearance using the anthropomorphic roBOT (ABOT) database. In: Kanda T, Sabanovic S, Hoffman G (eds) Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA. https://doi.org/10.1145/3171221.3171268
78. Prasad V, Stock-Homburg R, Peters J (2022) Human-robot handshaking: a review. Int J Soc Robot 14:277–293. https://doi.org/10.1007/s12369-021-00763-z
79. Mura D, Knoop E, Catalano MG et al (2020) On the role of stiffness and synchronization in human–robot handshaking. Int J Robot Res 39:1796–1811. https://doi.org/10.1177/0278364920903792
80. Pipitone A, Geraci A, D’Amico A et al (2023) Robot’s inner speech effects on human trust and anthropomorphism. Int J Soc Robot 1–13. https://doi.org/10.1007/s12369-023-01002-3
81. Babel F, Kraus J, Miller L et al (2021) Small talk with a robot? The impact of dialog content, talk initiative, and gaze behavior of a social robot on trust, acceptance, and proximity. Int J Soc Robot 13:1485–1498. https://doi.org/10.1007/s12369-020-00730-0
82. Salem M, Eyssel F, Rohlfing K et al (2013) To err is human(-like): effects of robot gesture on perceived anthropomorphism and likability. Int J Soc Robot 5:313–323. https://doi.org/10.1007/s12369-013-0196-9
83. Schroeder J, Epley N (2016) Mistaking minds and machines: how speech affects dehumanization and anthropomorphism. J Exp Psychol 145:1427–1437. https://doi.org/10.1037/xge0000214
84. Lu L, Zhang P, Zhang T (2021) Leveraging human-likeness of robotic service at restaurants. Int J Hosp Manag 94:102823. https://doi.org/10.1016/j.ijhm.2020.102823
85. Eyssel F, Kuchenbrandt D, Bobinger S et al (2012) ‘If you sound like me, you must be more human’. In: Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction. ACM, New York, NY, USA. https://doi.org/10.1145/2157689.2157717
86. Borau S, Otterbring T, Laporte S et al (2021) The most human bot: female gendering increases humanness perceptions of bots and acceptance of AI. Psychol Mark 38:1052–1068. https://doi.org/10.1002/mar.21480
87. Haring KS, Silvera-Tawil D, Watanabe K et al (2016) The influence of robot appearance and interactive ability in HRI: a cross-cultural study. In: Agah A, Cabibihan J-J, Howard AM (eds) Social robotics: 8th international conference, ICSR 2016, Kansas City, MO, USA, November 1–3, 2016: proceedings. Springer, Cham, pp 392–401. https://doi.org/10.1007/978-3-319-47437-3_38
88. Waytz A, Cacioppo J, Epley N (2010) Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect Psychol Sci 5:219–232. https://doi.org/10.1177/1745691610369336
89. Cullen H, Kanai R, Bahrami B et al (2014) Individual differences in anthropomorphic attributions and human brain structure. Soc Cogn Affect Neurosci 9:1276–1280. https://doi.org/10.1093/scan/nst109
90. Salem M, Lakatos G, Amirabdollahian F et al (2015) Would you trust a (faulty) robot? In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA. https://doi.org/10.1145/2696454.2696497
91. Arona AS, Fleming M, Arora A et al (2021) Finding H in HRI: examining human personality traits, robotic anthropomorphism, and robot likeability in human-robot interaction. Int J Intell Inf Technol 17:19–38. https://doi.org/10.4018/IJIIT.2021010102
92. Letheren K, Kuhn K-AL, Lings I et al (2016) Individual difference factors related to anthropomorphic tendency. Eur J Mark 50:973–1002. https://doi.org/10.1108/EJM-05-2014-0291
93. Nomura T, Suzuki T, Kanda T et al (2006) Measurement of negative attitudes toward robots. Interact Stud 7:437–454. https://doi.org/10.1075/is.7.3.14nom
94. Mays KK, Cummings JJ (2023) The power of personal ontologies: individual traits prevail over robot traits in shaping robot humanization perceptions. Int J Soc Robot 15:1665–1682. https://doi.org/10.1007/s12369-023-01045-6
95. Araujo T (2018) Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput Hum Behav 85:183–189. https://doi.org/10.1016/j.chb.2018.03.051
96. Eyssel F, Kuchenbrandt D (2012) Social categorization of social robots: anthropomorphism as a function of robot group membership. Br J Soc Psychol 51:724–731. https://doi.org/10.1111/j.2044-8309.2011.02082.x
97. Kühne R, Peter J, de Jong C, Barco A (2024) How does children’s anthropomorphism of a social robot develop over time? A six-wave panel study. Int J Soc Robot 16:1665–1679. https://doi.org/10.1007/s12369-024-01155-9
98. de Graaf MMA, Ben Allouch S, van Dijk JAGM (2016) Long-term evaluation of a social robot in real homes. Interact Stud 17:461–490. https://doi.org/10.1075/is.17.3.08deg
99. Rogers EM, Singhal A, Quinlan M (2014) Diffusion of innovations. An integrated approach to communication theory and research. Routledge, pp 432–448. https://doi.org/10.4324/9780203710753-35
100. Heerink M, Kröse B, Evers V et al (2010) Assessing acceptance of assistive social agent technology by older adults: the Almere model. Int J Soc Robot 2:361–375. https://doi.org/10.1007/s12369-010-0068-5
101. David D, Thérouanne P, Milhabet I (2022) The acceptability of social robots: a scoping review of the recent literature. Comput Hum Behav 137:107419. https://doi.org/10.1016/j.chb.2022.107419
102. Conti D, Di Nuovo S, Buono S et al (2017) Robots in education and care of children with developmental disabilities: a study on acceptance by experienced and future professionals. Int J Soc Robot 9:51–62. https://doi.org/10.1007/s12369-016-0359-6
103. Khaksar SMS, Khosla R, Singaraju S et al (2021) Carer’s perception on social assistive technology acceptance and adoption: moderating effects of perceived risks. Behav Inf Technol 40:337–360. https://doi.org/10.1080/0144929X.2019.1690046
104. Etemad-Sajadi R, Soussan A, Schöpfer T (2022) How ethical issues raised by human-robot interaction can impact the intention to use the robot? Int J Soc Robot 14:1103–1115. https://doi.org/10.1007/s12369-021-00857-8
105. Sinha N, Singh P, Gupta M et al (2020) Robotics at workplace: an integrated Twitter analytics – SEM based approach for behavioral intention to accept. Int J Inf Manag 55:102210. https://doi.org/10.1016/j.ijinfomgt.2020.102210
106. Tu Y, Liu W, Yang Z (2023) Exploring the influence of service employees’ characteristics on their willingness to work with service robots. J Serv Manag 34:1038–1063. https://doi.org/10.1108/JOSM-05-2022-0174
107. Kaplan F (2004) Who is afraid of the humanoid? Investigating cultural differences in the acceptance of robots. Int J Hum Robot 1:465–480. https://doi.org/10.1142/S0219843604000289
108. Abdelhakim AS, Abou-Shouk M, Ab Rahman NAFW et al (2023) The fast-food employees’ usage intention of robots: a cross-cultural study. Tour Manag Perspect 45:101049. https://doi.org/10.1016/j.tmp.2022.101049
109.
go back to reference Hofstede G (2011) Dimensionalizing cultures: the Hofstede model in context. Online Read Psychol Cult 2. https://doi.org/10.9707/2307-0919.1014
110.
go back to reference Maio GR, Haddock G (2009) The psychology of attitudes and attitude change, 1. Publ. Sage social psychology program series. SAGE, Los Angeles
111.
go back to reference Paetzel M, Perugia G, Castellano G (2020) The Persistence of First Impressions. In: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA https://doi.org/10.1145/3319502.3374786
112.
go back to reference Fiolka AK, Donnermann M, Lugrin B (2024) Investigating the mere exposure effect in relation to perceived eeriness and humaneness of a social robot. In: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. ACM, New York, NY, USA. https://doi.org/10.1145/3610978.3640665
113.
go back to reference Ciardo F, de Tommaso D, Wykowska A (2022) Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal turing test. Sci Robot 7:eabo1241. https://doi.org/10.1126/scirobotics.abo1241CrossRef
114.
go back to reference Reuten A, van Dam M, Naber M (2018) Pupillary responses to robotic and human emotions: the uncanny Valley and media equation confirmed. Front Psychol 9:774. https://doi.org/10.3389/fpsyg.2018.00774CrossRef
115.
go back to reference Breckler SJ (1984) Empirical validation of affect, behavior, and cognition as distinct components of attitude. J Pers Soc Psychol 47:1191–1205. https://doi.org/10.1037/0022-3514.47.6.1191
116.
go back to reference Edwards K (1990) The interplay of affect and cognition in attitude formation and change. J Pers Soc Psychol 59:202–216. https://doi.org/10.1037/0022-3514.59.2.202
117.
go back to reference Gawronski B, Bodenhausen GV, Becker AP (2007) I like it, because I like myself: associative self-anchoring and post-decisional change of implicit evaluations. J Exp Soc Psychol 43:221–232. https://doi.org/10.1016/j.jesp.2006.04.001CrossRef
118.
go back to reference Slovic P, Finucane ML, Peters E et al (2007) The affect heuristic. Eur J Oper Res 177:1333–1352. https://doi.org/10.1016/j.ejor.2005.04.006CrossRef
119.
go back to reference Averbeck JM, Jones A, Robertson K (2011) Prior knowledge and health messages: an examination of affect as heuristics and information as systematic processing for fear appeals. South Commun J 76:35–54. https://doi.org/10.1080/10417940902951824CrossRef
120.
go back to reference Fazio RH, Chen J-M, McDonel EC et al (1982) Attitude accessibility, attitude-behavior consistency, and the strength of the object-evaluation association. J Exp Soc Psychol 18:339–357. https://doi.org/10.1016/0022-1031(82)90058-0CrossRef
121.
go back to reference Petty RE, Cacioppo JT (1986) The elaboration likelihood model of persuasion. Adva Exp Soc Psychol 19:123–205. https://doi.org/10.1016/S0065-2601(08)60214-2CrossRef
122.
go back to reference Berger CR, Calabrese RJ (1975) Some explorations in initial interaction and beyond: toward a developmental theory of interpersonal communication. Hum Commun Res 1:99–112. https://doi.org/10.1111/j.1468-2958.1975.tb00258.xCrossRef
123.
go back to reference Berger CR (1986) Uncertain outcome values in predicted relationships uncertainty reduction theory then and now. Hum Commun Res 13:34–38. https://doi.org/10.1111/j.1468-2958.1986.tb00093.xCrossRef
124.
go back to reference Berger CR, Bradac JJ (1982) Language and social knowledge: uncertainty in interpersonal relations. Edward Arnold, London
125.
go back to reference Ososky S, Philips E, Schuster D, Jentsch F (2013) A picture is worth a thousand mental models: evaluating human Understanding of robot teammates. Proc Hum Factors Ergon Soc Annu Meet 57:1298–1302. https://doi.org/10.1177/1541931213571287CrossRef
126.
go back to reference Van Straten CL, Peter J, Kühne R (2023) Transparent robots: how children perceive and relate to a social robot that acknowledgesits lack of human psychological capacities and machine status. Int J Hum Comput Stud 177:103063. https://doi.org/10.1016/j.ijhcs.2023.103063CrossRef
127.
go back to reference Delgosha MS, Hajiheydari N (2021) How human users engage with consumer robots? A dual model of psychological ownership and trust to explain post-adoption behaviours. Comput Hum Behav 117:106660. https://doi.org/10.1016/j.chb.2020.106660CrossRef
128.
go back to reference Kraus J, Miller L, Klumpp M, Babel F, Scholz D, Merger J, Baumann M (2024) On the role of beliefs and trust for the intention trustworthiness beliefs model for robot acceptance. Int J Soc Robot 16:1223–1246. https://doi.org/10.1007/s12369-022-00952-4CrossRef
129.
go back to reference Roesler O, Begheri E, Aly A (2022) Toward Understanding the effects of socially aware robot behavior. Interact Stud 23:513–552. https://doi.org/10.1075/is.22029.roeCrossRef
130.
go back to reference Graaf MM, Ben Allouch S (2013) Exploring influencing variables for the acceptance of social robots. Robot Auton Syst 61:1476–1486. https://doi.org/10.1016/j.robot.2013.07.007CrossRef
131.
go back to reference Hebesberger D, Koertner T, Gisinger C, Pripfl J (2017) A long-term autonomous robot at a care hospital: a mixed methods study on social acceptance and experiences of staff and older adults. Int J Soc Robot 9:417–429. https://doi.org/10.1007/s12369-016-0391-6CrossRef
132.
go back to reference Schadenberg BR, Reidsma D, Heylen DK, Evers V (2021) „I see what you did there Understanding People’s social perception of a robot and its predictability. ACM Trans Hum-Robot Interact 10:1–28. https://doi.org/10.1145/3461534CrossRef
133.
go back to reference Tanevska A, Rea F, Sandini G, Canamero L, Sciutti A (2020) A socially adaptable framework for human-robot interaction. Front Robot AI 7:121. https://doi.org/10.3389/frobt.2020.00121CrossRef
134.
go back to reference Eyssel F, Kuchenbrandt D, Bobinger S (2011) Effects of anticipated human–robot interaction and predictability of robot behavior on perceptions of anthropomorphism. In: HRI 2011: Proceedings of the 6th ACM/IEEE international conference on human – robot interaction, pp 61–67. https://doi.org/10.1145/1957656.1957673
135.
go back to reference DeGraaf MA, Allouch S, Dijk B J.A.G.M (2019) Why would i use this in my home?A model of domestic social robot acceptance. Human-Comput Interact 34(2):115–171. https://doi.org/10.1080/07370024.2017.1312406CrossRef
136.
go back to reference Bishop L, van Maris A, Dogramadzi S, Zook N (2019) Social robots: the influence of human and robot characteristics on acceptance. Paladyn J Behav Rob 10:346–358. https://doi.org/10.1515/pjbr-2019-0028CrossRef
137.
go back to reference Ghazali AS, Ham J, Barakova E, Markopoulos P (2020) Persuasive robots acceptance model (PRAM): roles of social responses within the acceptance model of persuasive robots. Int J Soc Robot 12:1075–1092. https://doi.org/10.1007/s12369-019-00611-1CrossRef
138.
go back to reference Festinger L (1954) A theory of social comparison processes. Hum Relat 7:117–140. https://doi.org/10.1177/001872675400700202CrossRef
139.
go back to reference Albarracín D, Wyer RS (2000) The cognitive impact of past behavior: influences on beliefs, attitudes, and future behavioral decisions. J Pers Soc Psychol 79:5–22. https://doi.org/10.1037/0022-3514.79.1.5
140.
go back to reference Roesler E, Heuring M, Onnasch L (2023) (Hu)man-like robots: the impact of anthropomorphism and Language on perceived robot gender. Int J Soc Robot 15:1829–1840. https://doi.org/10.1007/s12369-023-00975-5CrossRef
141.
go back to reference Braun V, Clarke V (2012) Thematic analysis. APA handbook of research methods in psychology. American Psychological Assoc, Washington, DC, pp 57–71
142.
go back to reference Nachar N (2008) The Mann-Whitney U: a test for assessing whether two independent samples come from the same distribution. Tut Quant Methods Psychol 4:13–20. https://doi.org/10.20982/tqmp.04.1.p013CrossRef
143.
go back to reference Rosenthal R (2010) Meta-analytic procedures for social research, rev. Applied social research methods series, vol 6. Sage Publ, Newbury Park, Calif
144.
go back to reference Schober P, Boer C, Schwarte LA (2018) Correlation coefficients: appropriate use and interpretation. Anesth Analg 126:1763–1768. https://doi.org/10.1213/ANE.0000000000002864CrossRef
145.
go back to reference Pandey AK, Gelin R (2018) A mass-produced sociable humanoid robot: the first machine of its kind. IEEE Robot Autom Mag 25:40–48. https://doi.org/10.1109/MRA.2018.2833157CrossRef
146.
go back to reference Innes JM, Morrison B (2021) Experimental studies of human-robot interaction: threats to valid interpretation from methodological constraints associated with experimental manipulation. Int J Soc Robot 13:765–773. https://doi.org/10.1007/s12369-020-00671-8CrossRef
147.
go back to reference Archer J (1997) Why do people love their pets? Evol Hum Behav 18:237–259. https://doi.org/10.1016/S0162-3095(99)80001-4CrossRef
148.
go back to reference Caudwell C, Lacey C (2020) What do home robots want? The ambivalent power of cuteness in robotic relationships. Convergence 26:956–968. https://doi.org/10.1177/1354856519837792CrossRef
149.
go back to reference Shiomi M, Hayashi R, Nittono H (2023) Is two cuter than one? Number and relationship effects on the feeling of Kawaii toward social robots. PLoS ONE 18:e0290433. https://doi.org/10.1371/journal.pone.0290433CrossRef
150.
go back to reference Biocca F (1997) The Cyborg’s dilemma: progressive embodiment in virtual environments. J Comput Mediat Commun 3. https://doi.org/10.1111/j.1083-6101.1997.tb00070.x
151.
go back to reference Oh CS, Bailenson JN, Welch GF (2018) A systematic review of social presence: definition, antecedents, and implications. Front Robot AI 5:114. https://doi.org/10.3389/frobt.2018.00114CrossRef
152.
go back to reference Honig S, Oron-Gilad T (2018) Understanding and resolving failures in human-robot interaction: literature review and model development. Front Psychol 9:861. https://doi.org/10.3389/fpsyg.2018.00861CrossRef
153.
go back to reference Oldeweme A, Märtins J, Westmattelmann D et al (2021) The role of transparency, trust, and social influence on uncertainty reduction in times of pandemics: empirical study on the adoption of COVID-19 tracing apps. J Med Internet Res 23:E25893. https://doi.org/10.2196/25893CrossRef
154.
go back to reference Lee S, Choi J (2017) Enhancing user experience with conversational agent for movie recommendation: effects of self-disclosure and reciprocity. Int J Hum Comput Stud 103:95–105. https://doi.org/10.1016/j.ijhcs.2017.02.005CrossRef
155.
go back to reference Shahzad F, Xiu G, Shafique Khan MA et al (2020) Predicting the adoption of a mobile government security response system from the user’s perspective: an application of the artificial neural network approach. Technol Soc 62:101278. https://doi.org/10.1016/j.techsoc.2020.101278CrossRef
156.
go back to reference van Straten CL, Peter J, Kühne R et al (2022) On sharing and caring: investigating the effects of a robot’s self-disclosure and question- asking on children’s robot perceptions and child-robot relationship formation. Comp Hum Behav 129:107135. https://doi.org/10.1016/j.chb.2021.107135CrossRef
157.
go back to reference van Pinxteren MM, Wetzels RW, Rüger J et al (2019) Trust in humanoid robots: implications for services marketing. J Serv Manag 33:507–518. https://doi.org/10.1108/JSM-01-2018-0045CrossRef
158.
go back to reference Gambino A, Fox J, Ratan RA (2020) Building a stronger CASA: extending the computers are social actors paradigm. Hum-Mach Commun 1:71–85. https://doi.org/10.3316/INFORMIT.097034846749023CrossRef
159.
go back to reference Liu B (2021) In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human-AI interaction. J Comput-Mediat Commun 26:384–402. https://doi.org/10.1093/jcmc/zmab013CrossRef
160.
go back to reference Croes EAJ, Antheunis ML, Goudbeek MB, Wildman NW (2022) I am in your computer while we talk to each other a content analysis on the use of language-based strategies by humans an a social chatbot in initial human-chatbot interactions. Int J Human-Comput Interact 39:2155–2173. https://doi.org/10.1080/10447318.2022.2075574CrossRef
161.
go back to reference Araujo T, Bol N (2024) From speaking like a person to being personal: the effects of personalized, regular interactions with conversational agents. Comput Hum Behav: Artif Hum 2:100030. https://doi.org/10.1016/j.chbah.2023.100030CrossRef
162.
go back to reference Rieger MO, Wang M, Massloch M et al (2021) Opinions on technology: a cultural divide between East Asia and Germany? Rev Behav Econ 8:73–110. https://doi.org/10.1561/105.00000130CrossRef
163.
go back to reference Polkinghorne DE (2005) Language and meaning: data collection in qualitative research. J Couns Psychol 52:137–145. https://doi.org/10.1037/0022-0167.52.2.137CrossRef
164.
go back to reference Fossey E, Harvey C, McDermott F et al (2002) Understanding and evaluating qualitative research. Aust N Z J Psychiatry 36:717–732. https://doi.org/10.1046/j.1440-1614.2002.01100.xCrossRef
165.
go back to reference Carter N, Bryant-Lukosius D, DiCenso A et al (2014) The use of triangulation in qualitative research. Oncol Nurs Forum 41:545–547. https://doi.org/10.1188/14.ONF.545-547CrossRef
166.
go back to reference Thurmond VA (2001) The point of triangulation. J Nurs Scholarsh 33:253–258. https://doi.org/10.1111/j.1547-5069.2001.00253.xCrossRef