Published in: International Journal of Social Robotics 12/2023

Open Access 09.07.2023

A Taxonomy of Factors Influencing Perceived Safety in Human–Robot Interaction

Authors: Neziha Akalin, Andrey Kiselev, Annica Kristoffersson, Amy Loutfi



Abstract

Safety is a fundamental prerequisite that must be addressed before any interaction of robots with humans. Safety has generally been understood and studied as the physical safety of robots in human–robot interaction, whereas how humans perceive these robots has received less attention. Physical safety is a necessary condition for safe human–robot interaction, but it is not a sufficient one: a robot that is safe by hardware and software design can still be perceived as unsafe. This article focuses on perceived safety in human–robot interaction. We identified six factors that are closely related to perceived safety based on the literature and the insights obtained from our user studies: the context of robot use, comfort, experience and familiarity with robots, trust, the sense of control over the interaction, and transparent and predictable robot actions. We then conducted a literature review to identify the robot-related factors that influence perceived safety. Based on the literature, we propose a taxonomy that includes human-related and robot-related factors. These factors can help researchers quantify the perceived safety of humans during their interactions with robots. The quantification of perceived safety can yield computational models that would allow mitigating psychological harm.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Safety is one of the fundamental needs of humans [60]. In Maslow’s hierarchy of needs, safety needs are in the second-highest order, after physiological needs. Human safety perception, generally described as a state of protection from harm in the present and the future, pertains to the motivation to avoid or minimize losses [29]. Safety has generally been understood and studied as the physical safety of robots in human–robot interaction (HRI). Similarly, safety standards for robotics provide guidelines for the physical aspects of robotic systems. Assuring physical safety is of great importance. However, for safe HRI, the adverse psychological outcomes caused by interacting with a robot must also be eliminated. Can we say that a robot programmed so that it does not physically harm a human is safe? The literature suggests that a robot that is physically safe by software and hardware design does not always ensure the perceived safety of the human user [54, 82]. For example, a human working near a robotic system whose sharp end effector moves at very high speeds would experience constant stress and discomfort even if the system is capable of preventing injury [53]. Therefore, it is critical to provide both physical and perceived safety.
Although policymakers have started to point out the psychological consequences of human–machine interactions, both the HRI literature and the safety standards lack a comprehensive investigation of perceived safety for an enduring presence of robots sharing common spaces with humans [80]. As an example, the European Parliament report [31] considered psychological consequences of HRI: “you (denoting users) are permitted to make use of a robot without risk or fear of physical or psychological harm” (p. 23). Within this context, this article aims to address perceived safety in detail. We present perceived safety considering a diverse set of human-related and robot-related features. We developed a taxonomy for perceived safety using the insights from the literature and three of our HRI experiments [13].
The paper is organized as follows: Sect. 2 provides a brief introduction to the safety components in HRI, namely physical safety, perceived safety and cyber security. Later, the paper continues with Sect. 3 which introduces perceived safety in HRI. Section 4 discusses the reasoning behind the influencing factors of perceived safety and presents each factor in subsections. Section 5 provides an overview of experimental paradigms and evaluation tools used in the HRI literature to measure perceived safety. Section 6 presents a taxonomy of the influencing factors of perceived safety. Finally, Sect. 8 concludes the paper.

2 Three Facets of Safety in HRI

Lasota et al. [52] presented a literature survey of potential methods for safe HRI. The authors outlined four categories: (1) safety through control, (2) motion planning, (3) prediction, and (4) consideration of psychological factors. Moreover, the authors argued that there are two components of safe HRI: physical safety and psychological safety. However, Boden et al. [15] considered another component: the security of robots. They proposed five ethical principles as a basis for responsible robotics, one of them being “Robots are products. They should be designed using processes that assure their safety and security” [15] (p. 126). The aim of this principle was explained as assured safety of the robots, including security against cyber attacks, such that people can trust robots and have confidence in them. Boddington [14] analyzed three of these principles that are related to the safety of robotic systems. The author argues that safety in robotics should be considered broadly to comprise not only physical and material safety, but also the avoidance of disruption to psychological, social, moral, and other important social norms. Therefore, we argue that in the context of HRI, the safety of robots includes three components: (1) physical safety, (2) perceived safety, and (3) cyber security (Fig. 1). Physical safety means that the robot does not harm the human, perceived safety means that humans perceive no psychological threat from the robot, and cyber security means that the robot is not vulnerable to cyber attacks. Perceived safety cannot be treated separately from the other safety components, since they affect safety perception.
The protection of personal data is becoming more important given the growing amount of personal data online. Data protection is now enforced by the General Data Protection Regulation [33]. Devices connected to the Internet carry a potential threat regarding security and privacy. Cyber attacks can have a psychological impact on individuals, such as anxiety, worry, anger, outrage, and depression [9]. Domestic robots that are connected to the Internet can be hacked by unauthorized parties to steal personal data, to spy on people through the robots’ sensors, and to physically command robots, which has the potential of causing physical harm [34]. We argue that the three kinds of safety, namely physical safety, perceived safety, and cyber security, are tightly coupled. Satisfying all three is a requisite for safe HRI. Security flaws affect not only the physical safety but also the perceived safety of robots, with psychological consequences. As an example, a hacked robot may perform unexpected movements, which affects the perceived safety of its user.

3 Perceived Safety in HRI

In recent years, psychological aspects of safety have started to receive attention in robot safety reviews [52, 94] and European Parliament reports [31]. Zacharaki et al. [94] outlined five categories, one of them being societal and psychological factors. Perceived safety is not only a concern for domestic and service robots, but also for industrial robots and autonomous vehicles (AVs). The perceived safety of AVs has received a considerable amount of attention. Examples include the relationship between perceived safety, AV acceptance, educational level, and active use of technology [64]; the relationships between perceived usefulness, trust, perceived safety, and AV acceptance [91]; and the relationship between individual social characteristics (such as age, gender, education level, employment, and income) and perceived safety of AVs [65].

3.1 Definitions for Perceived Safety in HRI

There are two different trends in describing perceived safety. The first is to articulate the existence of perceived safety through positive affective states such as feeling comfortable, assured, and stress-free. The second is to define the term through the lack of safety perception. The pattern in the latter is phrasing perceived safety using negative affective states (emotional responses) such as discomfort, anger, nervousness, worry, stress, fear, and anxiety. The nomenclature for safety perception also varies in the HRI literature: psychological safety [48, 52], sense of safety and security, perceived safety [10], mental safety [61], and sense of security [68, 71].
Lasota et al. [52] described psychological safety as interactions that are stress-free and comfortable. The authors also expressed that to maintain psychological safety, the robot’s motion, appearance, embodiment, gaze, speech, posture, social conduct, or any other attribute should not result in any psychological discomfort or stress [52]. Bartneck et al. [10] proposed a questionnaire to measure perceived safety in HRI, and they defined perceived safety as “the user’s perception of the level of danger when interacting with a robot, and the user’s level of comfort during the interaction” (p. 76). Matsas et al. [61] defined mental safety in the context of human–robot collaboration in industrial settings as “the enhanced users’ vigilance and awareness of the robot motion, that will not cause any unpleasantness such as fear, shock or surprise.” (p. 140). Although Nonaka et al. [68] used both terms, namely mental safety and sense of security, they defined mental safety as: “the motions of the robot do not cause any unpleasantness like fear, shock, surprise to human.” (p. 2770). A review article on human–robot collaboration approaches [87] discussed the importance of mental safety, which is described as “mental stress and anxiety caused by close interaction with the robot” (p. 256).

4 Factors Influencing Perceived Safety in HRI

We started studying human safety perception in HRI based upon [10]. After our user studies [1], more factors influencing perceived safety began to emerge. The study presented in [1] was a video-based study in which participants watched four scenarios featuring interaction with an eldercare robot. There was an additional text field for free comments at the end of the study. Twenty-seven participants (out of 124) commented on the study. We examined these qualitative responses from the participants to inform our future study designs.
Analysis of these comments revealed two main themes: robot-related factors (the robot’s response time, communication style, security, accuracy of actions, and usefulness) and context of robot use. The comments regarding context of use showed that how comfortable people would feel, or how much they would trust the robot, depends on the type of robot task. Moreover, participants liked the scenarios in which the robot was passive and the user was proactive. This suggested that having control over the interaction could be an influencing factor of perceived safety.
Akalin et al. presented a model for safety perception of older adults during HRI. Since HRI involves two parties, perceived safety does not depend only on robot properties. Therefore, the proposed model included two main factors: human-related factors and robot-related factors. These factors were determined based on the HRI literature, the gerontology literature, and the insights obtained from the user studies. Our recent study [3] consolidated the previously considered factors and revealed a new factor: transparent robot behaviors. Summarizing our three user studies [1, 3], the main influencing factors of perceived safety are context of use, comfort, experience and familiarity with robots, predictability and transparency of robots’ actions, sense of control, and trust. These factors also match the factors commonly considered to be related to the feeling of safety in different disciplines, for example trust [45, 76], comfort [10, 45], sense of control [17], experience and familiarity [17, 76], predictability and transparency [16, 17, 58, 76], and contextual factors [95].
As a next step, we reviewed the relevant HRI literature to explore the robot-related factors associated with the identified factors. It is important to identify these factors as they can be used to define the metrics to measure perceived safety.

4.1 Context of Use

Context plays a significant role in determining how people will react to a particular situation. Dey [27] provided a definition of context in computing as “any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves” (p. 5). ISO 9241-210:2019 (i.e. Ergonomics of human–system interaction - Part 210: Human-centred design for interactive systems) defines context of use as “combination of users, goals and tasks, resources, and environment” [83].
The context-awareness of a robot is one of the important design principles for increased physical safety in robot architectures [35]. The context of interaction also plays a large role in establishing and maintaining perceived safety during HRI. Several studies have explored the effect of context in HRI, such as the influence of social context on the acceptance of social robots [90] and the influence of context on robot personality preferences [44]. Joosse et al. [44] reported that users’ preferences for a robot’s personality and behaviors depend on the context of the robot’s role and task. Similarly, the context in which the interaction takes place affects people’s perceived safety. As an example, a study investigating safety-related factors in pedestrians’ road crossing categorized the influencing factors of perceived safety as demographic, contextual, and behavioral factors [95]. In HRI, such contextual information is not limited to environmental and social factors; it also includes factors related to the role of the robot in the interaction, such as the type of the robot’s task, the robot’s competency to accomplish the relevant task, and the robot’s performance. The level of safety perception for the same robot may differ across contexts. For example, in our video-based study [1], participants commented that they would feel safer when the robot’s task is finding an object at home than when its task is reminding about an important medication.
Therefore, how humans perceive the robots depends on the context of use. The aspects influencing context of use are robot’s role, robot’s task type and complexity, robot’s competency for the task, and robot’s task performance.

4.2 Comfort

Perceived safety and human comfort are often mentioned together in the HRI literature [10, 23, 52, 54], sometimes even used as synonyms. Comfort is therefore a criterion closely related to perceived safety. Users may experience discomfort, i.e., negative feelings due to the occurrence of any intended or unintended situation during the interaction with a robot. To improve interactions with robots, a robot should avoid any behavior that makes the person uncomfortable, regardless of objective safety. For example, an approaching robot can make people feel unsafe and cause discomfort even if the robot controller eliminates all risk of collision [51]. Lasota et al. [54] reported that participants felt safer and more comfortable working with a robot operated with human-aware motion planning compared to a standard robot. To avoid a robot causing discomfort, negative emotional responses, and decreased safety perception, the literature presents different strategies such as maintaining proper distance (proxemics), proper approaching strategies, control strategies to avoid being noisy, and planning to avoid interference [51]. Other aspects that can affect human comfort in HRI are the robot’s response speed, movement trajectory, proximity, object-manipulation fluency, and sociability [88]. Finally, individual characteristics including gender, age, race/nationality, socioeconomic status, and personality traits can affect how people perceive robots [62].
Therefore, the aspects which influence human comfort in HRI are individual characteristics, the approaching direction of the robot, proxemics, unintended noise from the robot, robot response speed, robot movement trajectory, motion fluency, and robot sociability.

4.3 Experience and Familiarity

Humans seek familiar and known things rather than unfamiliar and unknown things in order to feel safe [60]. Familiarity refers to “knowledge of a thing or person through long or close association or frequent perception by any of the senses; everyday acquaintance, habituation; an instance of this” [75]. Arai et al. [7] considered familiarity one of the important factors for mental safety, explained as general preferences for motions and designs of robots. Haring et al. [38] reported that an android robot was perceived as significantly less safe than a humanoid and a non-biomimetic robot (i.e., the Keepon robot). This could be explained by the uncanny valley. Mori [66] proposed a hypothesis about a person’s reactions to robots that look and act almost human-like: familiarity with a robot increases with human resemblance until the uncanny valley, a sudden shift from empathy to revulsion caused by a highly realistic appearance.
Interaction experience with robots is another factor that has an impact on perceived safety. Experience is defined as “knowledge resulting from actual observation or from what one has undergone” [74]. Previous experience with a specific technology has a positive effect on different aspects of user experience and acceptance of that technology. Familiarity with robots and robot-related experience are gained through exposure to robots. A person who has experience and familiarity with robots tends to have positive perceptions towards the acceptance of robots [25], a positive attitude and less anxiety towards robots [11], and trust in robots [22]. On the other hand, lack of familiarity might lead to negative feelings toward technology and robots [25]. Takayama et al. [85] reported that people who had prior experience and familiarity with robots maintained a closer distance to robots. Similarly, a human’s prior experience with robots can influence perceived safety [52].
Therefore, the aspects which influence experience and familiarity are interaction duration, interaction frequency with robots, the robots’ physical appearance and motions.

4.4 Predictability and Transparency

There is a lack of coherence in the terminology for interpretable behaviors of robots/artificial agents. Many different terms, such as explainable, legible, predictable, and transparent, have been used to convey the intentions that would be ascribed to an agent’s behavior by a human observer [19]. Although predictability and transparency are often used interchangeably, we refer to predictability as interpretable future actions and to transparency as interpretable present actions. In other words, predictability is understanding a robot’s upcoming actions, and transparency is understanding what the robot is doing and why it is taking the current action. Transparency is sometimes defined as a combination of interpretable present and future actions. As an example, Alonso et al. [5] defined the term as “the observability and predictability of the system behavior, the understanding of what the system is doing, why, and what it will do next” (p. 1). While people feel safe in predictable situations [76], situations that are unpredictable and uncertain are perceived as unsafe [45] even if there is no threat [16]. Therefore, regardless of the term used, predictable/transparent/legible robot behaviors are important for promoting safety perception in HRI [58]. For example, Koert et al. [49] reported that in a pick-and-place task, unpredictable robot actions, or robot responses whose cause participants did not understand, were found to be uncomfortable and annoying.
Conveying information about the robot’s mode and intention can lead to greater perceived safety. The use of interfaces and natural-language explanations are ways of achieving transparent behaviors [5]. For example, equipping AVs with external human–machine interfaces resulted in greater perceived safety and had a positive effect on pedestrians’ acceptance compared to an AV without an interface [36]. Social robots can also facilitate transparent behaviors. As an example, Suvei et al. [84] showed that social gaze cues increased the feeling of safety in a scenario where a human-sized mobile robot approached participants. A robot’s communication of its intent can improve many aspects of HRI, such as communication, reliability, predictability, transparency, and situational awareness [18]. Moreover, robots that make their behavior transparent through explanations of their actions were considered more trustworthy [50]. However, it is important to use clear communication to avoid potential ambiguities, so that the robot’s behaviors are easily understandable by its human partner. Therefore, not only what to communicate matters, but also how to communicate it [39].
Therefore, the aspects which influence predictability and transparency are visual interfaces, social gaze, predictable/transparent motions, communication style and intention communication of the robot.

4.5 Sense of Control

The sense of control, or sense of agency, refers to the subjective feeling of control over a person’s actions and, through these actions, over external events [6]. Therefore, the sense of control is the degree to which individuals perceive that the consequences of circumstances are under their control. Humans desire explainability and predictability in order to feel in control and thereby feel safe [76]. Gerontology studies have also expressed that having control over events leads to feelings of safety in older adults [73]. In the context of HRI, we use the term sense of control to mean that the user feels in charge of the robotic system and has control over the system, not vice versa. The user could experience decreased perceived safety when the interaction is demanding, for example when the user cannot cope with or control the robot’s behaviors and their consequences. The ability to control implies perceived safety [63]; conversely, a lack of sense of control may induce frustration. In HRI, a user’s sense of control is affected not only when the user cannot control a robot’s actions, but also by the social context, for example being excluded by robots [30]. In a social interaction with a robot, how close the robot is to the person and the approach direction of the robot also affect the sense of control and perceived safety. Young et al. [93] presented a dog-leash interface which participants used to lead a robot simply by holding the leash, with three variations: the robot in front of the person, the robot following directly behind the participant, and the robot following behind at a 45° angle. Participants felt less in control and less safe when the robot was following behind, and behind at an angle. The robot’s level of autonomy is another aspect that influences the sense of control. Chanseau et al. [20] found that participants’ sense of control during the interaction with a robot was linearly related to the expected level of autonomy.
Therefore, the aspects which influence sense of control are proxemics, the robot’s approach direction, the robot’s follow direction, the level of autonomy and social properties of the robot.

4.6 Trust

Kok and Soh [50] defined trust in HRI as a “multidimensional latent variable that mediates the relationship between events in the past and the former agent’s subsequent choice of relying on the latter in an uncertain environment” (p. 300). Trust is a cornerstone of sustainable human social relationships. The use, misuse, disuse, and acceptance of automation depend heavily on trust, especially for complex systems [55]. Trust in autonomous systems and robots is crucial as they begin to appear on the streets, in factories, and in private homes. Therefore, a significant number of studies have focused on trust in HRI and human–robot collaboration [37, 50, 86]. Hancock et al. [37] reviewed factors affecting trust in HRI. They reported that a robot’s performance-based and attribute-based factors affect trust. Performance-based factors include behavior, dependability, reliability of the robot, predictability, level of automation, failure rates, false alarms, and transparency. Attribute-based factors include proximity/co-location, robot personality, adaptability, robot type, and anthropomorphism. Other factors influencing human trust in robots are the timing and frequency of faulty robot behaviors [26], the type of task [79], and anthropomorphism [67]. Moreover, the ability of a robotic assistant to carry out a prescribed task, the occurrence of errors, failures including social norm violations and technical failures, communication style (speech and facial expressions), and behavior transparency have been considered in human–robot trust studies [32]. Robot failures have the greatest impact on loss of trust. These failures include design failures, system failures (hardware, software), and expectation failures (the system acts as it is supposed to, but does not satisfy the user’s expectation) [86]. The perceived safety of robots is essential for building the trust of their human counterparts in HRI and human–robot collaboration [13].
If a robot is perceived as unsafe, people would not trust that robot. Perceived safety plays an important role in building trust in the human interaction partner [70].
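The factor-to-aspect mappings spelled out at the end of each subsection above can be collected into a single structure. The minimal Python sketch below does so, with aspect names paraphrased from the text; such a table is one (illustrative, not prescriptive) way the taxonomy could seed metrics for quantifying perceived safety:

```python
# Factors influencing perceived safety (Sect. 4) mapped to the
# aspects listed at the end of each subsection. Names are
# paraphrased from the text; the structure itself is illustrative.
PERCEIVED_SAFETY_FACTORS = {
    "context of use": [
        "robot's role", "task type and complexity",
        "competency for the task", "task performance",
    ],
    "comfort": [
        "individual characteristics", "approaching direction",
        "proxemics", "unintended noise", "response speed",
        "movement trajectory", "motion fluency", "sociability",
    ],
    "experience and familiarity": [
        "interaction duration", "interaction frequency",
        "physical appearance", "motions",
    ],
    "predictability and transparency": [
        "visual interfaces", "social gaze",
        "predictable/transparent motions", "communication style",
        "intention communication",
    ],
    "sense of control": [
        "proxemics", "approach direction", "follow direction",
        "level of autonomy", "social properties",
    ],
    "trust": [
        "performance-based factors", "attribute-based factors",
        "robot failures", "communication style",
        "behavior transparency",
    ],
}

# Aspects shared across factors (e.g., proxemics) hint at where
# a single measurement could inform several factors at once.
shared = {
    aspect
    for factor, aspects in PERCEIVED_SAFETY_FACTORS.items()
    for aspect in aspects
    if sum(aspect in a for a in PERCEIVED_SAFETY_FACTORS.values()) > 1
}
```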

5 Measuring Perceived Safety in HRI

HRI studies use different methods to measure perceived safety, including both subjective measures (usually in the form of questionnaires) and objective measures (physiological responses, emotional responses, and direct measurement tools). Many experimental paradigms are employed for interactions with autonomous systems and robots, including screen-based studies (i.e., simulators and video-based studies) [1, 57], virtual reality (VR) [89], real-world Wizard of Oz (WOZ) studies [36], and real-world studies with fully autonomous systems [18]. Most articles presenting research on perceived safety use a similar approach: participants interact with an autonomous system within an experimental scenario, and the safety perception of the participants during the interaction is linked to questionnaire ratings or physiological data. In the remainder of this section, we first provide a summary of experimental paradigms for measuring perceived safety in Sect. 5.1 and then an overview of evaluation tools in Sect. 5.2.

5.1 Experimental Paradigms for Measuring Perceived Safety

Screen-based studies allow researchers to use standardized methodologies (the same system behavior in each trial) and to reach a large number of participants. However, they lack real-life experience. In screen-based studies, the common approach is to test participants’ perceived safety after they watch a video or interact with a simulator. Lichtenthäler et al. [57] used short movie sequences recorded in a virtual environment using a simulator. The scenario was a human crossing the robot’s path in an office environment, and the participants rated the robot’s navigation behavior. The authors reported that human-aware navigation received higher perceived safety ratings. Another study by the same authors [58] used the same scenario with recorded videos and reported that legible robot behaviors, i.e., behaviors that let the human predict the robot’s next actions, increased perceived safety.
VR has gained popularity in recent years, especially for autonomous systems such as flying robots (drones) and AVs. VR environments have the benefit of high experimental control while maintaining real-world realism. One example can be seen in Widdowson et al. [89], where a flying robotic system was explored in a VR test environment under different experimental paradigms. The authors investigated various flight behaviors, including different speeds, accelerations, angles of approach, and distances. The aim was to improve the design and control of flying robots for enhanced comfort and perceived safety. The study considered human arousal an indicator of safety perception; therefore, several physiological signals (electrodermal activity (EDA), photoplethysmography (PPG), and head motion) were collected and used to predict human arousal. It is common to use VR to test safety perception during interactions with AVs [24, 40]. The perceived safety of robots could benefit from the findings of AV studies. As an example, Hollander et al. tested the impact of malfunctioning external car displays on perceived safety and trust during pedestrian crossing [40]. The authors reported that wrong information resulting from an actual malfunction decreased perceived safety, trust, and confidence; however, participants’ trust recovered quickly.
The WOZ design is an experimental paradigm in which a system is fully or partially operated behind the scenes by a human and participants believe that the system is autonomous. Habibovic et al.  [36] used the WOZ setup in automated driving experiments, where pedestrians felt significantly safer when they encountered an AV with an interface compared to an AV without an interface. This shows that the transparency of the machines can help to improve perceived safety. Chadalavada et al. [18] introduced a bi-directional intention communication system for autonomous mobile robots using spatial augmented reality and eye-tracking glasses. An autonomous forklift projected various patterns onto the common floor area to communicate the robot’s motion intentions. The authors reported that intent communication projection improved participants’ comfort and perceived safety.
To summarize, research on the perceived safety of human–machine systems is still in its infancy; the studies reviewed so far focus on understanding human safety perception under different conditions during HRI. Utilizing perceived safety to guide the robot’s behaviors toward better interactions is still very rare. However, maintaining perceived safety throughout the interaction is crucial for long-term interactions and robot acceptance.

5.2 Evaluation Tools for Perceived Safety

The most common evaluation methods in HRI studies are self-assessment subjective evaluations, behavioral measurements, psycho-physiological measures, and task performance metrics [81]. The evaluation of perceived safety in HRI is done by monitoring psychological, behavioral, and physiological responses, through follow-up questionnaires [52], or with handheld devices [24]. Psychological responses are responses linked to mental activities. Physiological responses are non-voluntary responses related to the bodily reactions of living organisms. Behavioral responses differ from physiological responses in that they can be controlled voluntarily [4]. Facial expressions, eye gaze, and blink variations can be considered behavioral responses [4]. Monitoring and interpreting humans’ behavioral and physiological responses can provide important information about decreased levels of perceived safety.
Psychological responses can be observed through subjective self-assessments (e.g., questionnaires) or interviews. The former is one of the most widely used methods in the HRI literature; examples include [10, 48, 68]. Bartneck et al. [10] presented a semantic differential questionnaire as an instrument for measuring the perceived safety of robots. Kamide et al. [48] presented a questionnaire for measuring the psychological safety of humanoids. In [68], participants rated their emotions on a questionnaire covering surprise, fear, disgust, and unpleasantness on a scale from 1 (never) to 6 (very much). In our previous work, participants rated their safety perception using a semantic differential questionnaire.
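As an illustration only, a semantic differential score is typically the mean of the item ratings after reverse-scoring negatively anchored items. The sketch below assumes hypothetical item names and a 1–5 scale; the cited instruments [10, 48] define their own item sets and scales:

```python
def score_semantic_differential(ratings, negative_items, scale_max=5):
    """Average item ratings into one perceived-safety score in [1, scale_max].

    ratings: dict mapping item name -> rating on 1..scale_max, where higher
    means closer to the positive anchor (e.g., "calm") unless the item is in
    negative_items, in which case it is reverse-scored first.
    """
    total = 0
    for item, rating in ratings.items():
        if item in negative_items:
            rating = scale_max + 1 - rating  # reverse-score negative anchors
        total += rating
    return total / len(ratings)

# Hypothetical anchor pairs rated by one participant after an interaction.
score = score_semantic_differential(
    {"anxious_relaxed": 4, "agitated_calm": 5, "surprised_quiescent": 3},
    negative_items=set(),
)
print(round(score, 2))  # mean of 4, 5, 3 -> 4.0
```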
Physiological responses can provide information about the intensity of an individual's internal state, and several studies have collected physiological signal data to evaluate perceived safety [3, 8, 47, 68, 89]. For example, Nonaka et al. [68] measured participants' heart rate in addition to administering questionnaires, but could not draw a definitive conclusion about the relationship between heart rate and the human sense of security. Arai et al. [8] considered the skin potential response an indicator of safety perception. Another example of physiological measures can be found in [47], where Kamide et al. collected salivary amylase in their user studies to assess physiological reactions and safety perception.
HRI could also benefit from perceived safety measurement methods used in automation research, where handheld devices have been used in AV studies. In [24], participants continuously reported their feeling of safety during an interaction with an AV at a pedestrian crossing by keeping a handheld button pressed for as long as they felt safe. The authors reported that the temporal window of feeling safe was wider when the AV was equipped with external human–machine interfaces. Similarly, Rossner et al. [77] used a handset control for online measurement of perceived safety while driving an AV in a simulator. In Bazilinskyy et al. [12], participants reported their perceived safety while watching AVs equipped with varying external human–machine interface parameters, keeping a keyboard key pressed for as long as they felt safe to cross the road.
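The button-press paradigm reduces to a simple computation over the logged press/release intervals: the fraction of the trial during which the button was held. A minimal sketch with made-up timestamps (the cited studies report their own analyses):

```python
def safe_time_fraction(press_intervals, trial_duration):
    """Fraction of the trial during which the participant held the button.

    press_intervals: list of (press_time, release_time) tuples in seconds,
    assumed non-overlapping and within [0, trial_duration].
    """
    held = sum(release - press for press, release in press_intervals)
    return held / trial_duration

# Hypothetical log: button held during 0-8 s and 12-20 s of a 20 s trial.
frac = safe_time_fraction([(0.0, 8.0), (12.0, 20.0)], trial_duration=20.0)
print(frac)  # 16 s held out of 20 s -> 0.8
```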
To summarize, there is no standardized experimental paradigm or set of measures for perceived safety in HRI and autonomous systems. The evaluation of perceived safety in HRI could benefit from combining several methods. For example, questionnaires may not always be suitable for detecting subtle indicators of perceived safety; additional information from different modalities, as well as contextual information about the task, the robot, and the environment, could help in evaluating it. A diverse set of complementary measures can capture different aspects of an interaction. It is worth noting that one unique aspect of safety, and thereby of perceived safety, is that we can measure its absence rather than its presence [41]. Thus, lower degrees of perceived safety can be observed through emotional, cognitive, behavioral, and physiological responses.
Table 1
The robot-related factors considered in the HRI literature as influencing perceived safety. Some of the studies did not focus directly on perceived safety but on comfort, context of use, sense of control, predictability, transparency, or trust
Robot-related factors (references in brackets)

Functional properties
  Physical contact [72, 92]
  Pre-warning [21]
  Emergency buttons [69, 93]
  Robot competence [69, 72]
  Visual interface [12, 36, 87]
  Audible interface [69, 87]
  Performance [43, 88]
  Task and role [44, 78]
  Level of autonomy [20, 37, 56]
  Failures [37, 42]
  False alarms [28, 37]
Action and movement
  Speed and velocity [78, 92]
  Approach speed, direction, distance [23, 51, 78, 93]
  Proximity [37, 78]
  Motion trajectory, fluency [78, 88]
  Predictable motion [69, 78]
Physical properties
  Physical appearance [37, 56, 66, 78]
  Anthropomorphism [37, 46, 78]
Social properties
  Communication [18, 78]
  Eye gaze, contact [69, 84]
  Robot personality [37, 44]
  Sociability [88]
  Interaction duration, frequency [59]

6 A Taxonomy for Perceived Safety in HRI

The relevant background for the influencing factors of perceived safety was presented in Sect. 4, which summarized the robot-related premises associated with the identified main factors. Determining robot-related factors is important because they can define the metrics for measuring perceived safety. Table 1 recaps the robot-related factors, and Fig. 2 organizes the human-related and robot-related factors presented in Table 1 into a taxonomy.
The list of factors is not exhaustive, since many individual human characteristics can affect safety perception. Reflecting on existing studies of perceived safety, we found that the literature is not clear about how different premises relate to it. The relationships between human-related and robot-related factors are complex: personal characteristics cannot be changed, whereas robot-related factors can be modified. Although robot-related factors influence human-related factors, we do not indicate causal relationships between them in the proposed taxonomy; indeed, one robot-related factor may affect more than one human-related factor. More empirical research and systematic investigation are needed to illuminate the intricate dynamics of these factors, especially to draw connections between the robot-related factors and the six main factors (context of use, comfort, experience and familiarity, predictability and transparency, sense of control, trust) and to identify possible mitigation strategies.
Robot-related factors should not create adverse outcomes for human-related factors. Perceived safety is influenced not only by actual safety outcomes but also by subtle social and psychological factors. For example, a person sharing an environment with an autonomous mobile robot may not feel safe, fearing a collision, because of a lack of trust in the robot. In the same example, if the robot projects a light on the floor showing its navigation direction, the person may feel safer thanks to the transparency of the robot's navigation [18]. Therefore, the robot must be sufficiently safe, comfortable, natural, transparent, and predictable for people in the same environment, qualities that can be achieved only with a holistic approach. The proposed taxonomy (Fig. 2) is the first within the context of HRI, so it can serve as a starting point for developing similar taxonomies and models.

7 Example Case Study: Using Subjective and Objective Measures for Quantifying Perceived Safety

This section provides an example of how the proposed taxonomy can be used to measure perceived safety. The study is summarized briefly; for details, please refer to [3]. It should be noted that the research included in this paper was approved by the Swedish ethics committee for studies involving human participants. All participants took part voluntarily and read and signed an informed consent form before the experiments.
We designed a user study in which we manipulated selected robot-related factors to measure perceived safety and other human-related factors (comfort, sense of control, and trust). The study had five within-subjects conditions, four of which each used a robot-related factor to manipulate a human-related factor: the robot's feedback (comfort), unpredictable robot behavior (perceived safety), persistent utterances (sense of control), and robot failure (trust). The fifth, baseline condition had no manipulation and was designed to familiarize participants with the robot and the study. Twenty-seven young adult participants took part in the experiments. We collected questionnaire data and objective measures, including video recordings for facial expression analysis and physiological data from an E4 wristband. The questionnaire results showed that the manipulations were successful (e.g., in the fourth condition, participants felt less safe and less in control).
The results also showed correlations between comfort, sense of control, trust, and perceived safety. The facial features extracted from the videos and the physiological signal data were used to estimate perceived safety, with participants' subjective ratings serving as labels. The objective measures revealed that the prediction rate was higher for the physiological signal data. We concluded that understanding the factors that make humans feel unsafe is more important than understanding those that make them feel safe, since the quantifiable responses occurred when people felt unsafe. A similar experimental setup can be used to understand the underlying reasons for feeling unsafe, especially with respect to robot-related factors. Once the factors are identified, they can be fine-tuned for the specific application area. The taxonomy of influencing factors presented in the previous section can provide a starting point for investigating the robot-related factors within an experimental design similar to that in [3].
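As a sketch of the general idea only (not the actual model or features of [3]), objective measures labeled with subjective ratings can train even a very simple classifier. The example below uses a nearest-centroid classifier over hypothetical two-dimensional physiological feature vectors (e.g., mean electrodermal activity and mean heart rate per condition):

```python
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def fit(features, labels):
    """features: list of feature vectors; labels: parallel list of
    'safe'/'unsafe' labels derived from subjective ratings."""
    by_label = {}
    for x, y in zip(features, labels):
        by_label.setdefault(y, []).append(x)
    return {y: centroid(rows) for y, rows in by_label.items()}

def predict(model, x):
    """Assign x to the label whose centroid is nearest in Euclidean distance."""
    return min(model, key=lambda y: math.dist(x, model[y]))

# Hypothetical training data: [EDA, heart rate] per condition, labeled by
# the participant's own rating of that condition.
model = fit(
    [[0.2, 70.0], [0.3, 72.0], [0.9, 95.0], [1.1, 98.0]],
    ["safe", "safe", "unsafe", "unsafe"],
)
print(predict(model, [1.0, 93.0]))  # nearest to the 'unsafe' centroid
```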

8 Conclusion

This article presented a comprehensive investigation of perceived safety in HRI. Due to the complexity of human feelings, perceived safety is broad and encompasses several factors, identified here from the literature and from insights obtained in our user studies. The review of the literature together with our user studies suggests that perceived safety is multidimensional, incorporating both robot-related and human-related factors. We identified six main factors that influence perceived safety: (1) the context of robot use, (2) comfort, (3) experience and familiarity with robots, (4) trust, (5) the sense of control over the interaction, and (6) transparent and predictable robot actions. The paper also presented robot-related factors that influence the identified factors. These factors can help researchers quantify interactions with robots and design features that have the potential to improve perceived safety. They may also help quantify perceived safety itself, which can yield computational models that would allow robots to adapt their behavior to mitigate psychological harm.

Declarations

Conflict of interest

The authors report no conflict of interest.

Ethics Approval

Not applicable.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://​creativecommons.​org/​licenses/​by/​4.​0/​.

Literatur
1.
Zurück zum Zitat Akalin N, Kiselev A, Kristoffersson A, Loutfi A (2017) An evaluation tool of the effect of robots in eldercare on the sense of safety and security. In: International conference on social robotics, pp. 628–637. Springer Akalin N, Kiselev A, Kristoffersson A, Loutfi A (2017) An evaluation tool of the effect of robots in eldercare on the sense of safety and security. In: International conference on social robotics, pp. 628–637. Springer
2.
Zurück zum Zitat Akalin N, Kristoffersson A, Loutfi A (2019) Evaluating the sense of safety and security in human–robot interaction with older people. In: Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction, pp. 237–264. Springer Akalin N, Kristoffersson A, Loutfi A (2019) Evaluating the sense of safety and security in human–robot interaction with older people. In: Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction, pp. 237–264. Springer
3.
Zurück zum Zitat Akalin N, Kristoffersson A, Loutfi A (2022) Do you feel safe with your robot? factors influencing perceived safety in human-robot interaction based on subjective and objective measures. Int J Human Comput Stud 158:102744CrossRef Akalin N, Kristoffersson A, Loutfi A (2022) Do you feel safe with your robot? factors influencing perceived safety in human-robot interaction based on subjective and objective measures. Int J Human Comput Stud 158:102744CrossRef
5.
Zurück zum Zitat Alonso V, De La Puente P (2018) System transparency in shared autonomy: a mini review. Front Neurorobot 12:83CrossRef Alonso V, De La Puente P (2018) System transparency in shared autonomy: a mini review. Front Neurorobot 12:83CrossRef
6.
Zurück zum Zitat Aoyagi K, Wen W, An Q, Hamasaki S, Yamakawa H, Tamura Y, Yamashita A, Asama H (2021) Modified sensory feedback enhances the sense of agency during continuous body movements in virtual reality. Sci Rep 11(1):1–10CrossRef Aoyagi K, Wen W, An Q, Hamasaki S, Yamakawa H, Tamura Y, Yamashita A, Asama H (2021) Modified sensory feedback enhances the sense of agency during continuous body movements in virtual reality. Sci Rep 11(1):1–10CrossRef
7.
Zurück zum Zitat Arai T, Kamide H (2016) Robotics for safety and security. Springer, Japan, pp 173–192 Arai T, Kamide H (2016) Robotics for safety and security. Springer, Japan, pp 173–192
8.
Zurück zum Zitat Arai T, Kato R, Fujita M (2010) Assessment of operator stress induced by robot collaboration in assembly. CIRP Ann 59(1):5–8CrossRef Arai T, Kato R, Fujita M (2010) Assessment of operator stress induced by robot collaboration in assembly. CIRP Ann 59(1):5–8CrossRef
9.
Zurück zum Zitat Bada M, Nurse JR (2020) The social and psychological impact of cyberattacks. In: Emerging cyber threats and cognitive vulnerabilities, pp. 73–92. Elsevier Bada M, Nurse JR (2020) The social and psychological impact of cyberattacks. In: Emerging cyber threats and cognitive vulnerabilities, pp. 73–92. Elsevier
10.
Zurück zum Zitat Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81CrossRef Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81CrossRef
11.
Zurück zum Zitat Bartneck C, Suzuki T, Kanda T, Nomura T (2007) The influence of people’s culture and prior experiences with aibo on their attitude towards robots. AI Soc 21(1–2):217–230 Bartneck C, Suzuki T, Kanda T, Nomura T (2007) The influence of people’s culture and prior experiences with aibo on their attitude towards robots. AI Soc 21(1–2):217–230
12.
Zurück zum Zitat Bazilinskyy P, Kooijman L, Dodou D, De Winter J (2021) How should external human-machine interfaces behave? examining the effects of colour, position, message, activation distance, vehicle yielding, and visual distraction among 1,434 participants. Appl Ergon 95:103450CrossRef Bazilinskyy P, Kooijman L, Dodou D, De Winter J (2021) How should external human-machine interfaces behave? examining the effects of colour, position, message, activation distance, vehicle yielding, and visual distraction among 1,434 participants. Appl Ergon 95:103450CrossRef
13.
Zurück zum Zitat Beton L, Hughes P, Barker S, Pilling M, Fuente L, Crook NT (2017) Leader-follower strategies for robot-human collaboration, pp. 145–158. Springer International Publishing Beton L, Hughes P, Barker S, Pilling M, Fuente L, Crook NT (2017) Leader-follower strategies for robot-human collaboration, pp. 145–158. Springer International Publishing
14.
Zurück zum Zitat Boddington P (2017) Epsrc principles of robotics: commentary on safety, robots as products, and responsibility. Connect Sci 29(2):170–176CrossRef Boddington P (2017) Epsrc principles of robotics: commentary on safety, robots as products, and responsibility. Connect Sci 29(2):170–176CrossRef
15.
Zurück zum Zitat Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, Newman P, Parry V, Pegman G, Rodden T et al (2017) Principles of robotics: regulating robots in the real world. Connect Sci 29(2):124–129CrossRef Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, Newman P, Parry V, Pegman G, Rodden T et al (2017) Principles of robotics: regulating robots in the real world. Connect Sci 29(2):124–129CrossRef
16.
Zurück zum Zitat Brosschot JF, Verkuil B, Thayer JF (2016) The default response to uncertainty and the importance of perceived safety in anxiety and stress: an evolution-theoretical perspective. J Anxiety Disorders 41:22–34CrossRef Brosschot JF, Verkuil B, Thayer JF (2016) The default response to uncertainty and the importance of perceived safety in anxiety and stress: an evolution-theoretical perspective. J Anxiety Disorders 41:22–34CrossRef
17.
Zurück zum Zitat Cao J, Lin L, Zhang J, Zhang L, Wang Y, Wang J (2021) The development and validation of the perceived safety of intelligent connected vehicles scale. Accid Anal Prev 154:106092CrossRef Cao J, Lin L, Zhang J, Zhang L, Wang Y, Wang J (2021) The development and validation of the perceived safety of intelligent connected vehicles scale. Accid Anal Prev 154:106092CrossRef
18.
Zurück zum Zitat Chadalavada RT, Andreasson H, Schindler M, Palm R, Lilienthal AJ (2020) Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction. Robot Comput Integr Manuf 61:101830CrossRef Chadalavada RT, Andreasson H, Schindler M, Palm R, Lilienthal AJ (2020) Bi-directional navigation intent communication using spatial augmented reality and eye-tracking glasses for improved safety in human-robot interaction. Robot Comput Integr Manuf 61:101830CrossRef
19.
Zurück zum Zitat Chakraborti T, Kulkarni A, Sreedharan S, Smith DE, Kambhampati S (2019) Explicability? legibility? predictability? transparency? privacy? security? the emerging landscape of interpretable agent behavior. In: Proceedings of the international conference on automated planning and scheduling, vol. 29, pp. 86–96 Chakraborti T, Kulkarni A, Sreedharan S, Smith DE, Kambhampati S (2019) Explicability? legibility? predictability? transparency? privacy? security? the emerging landscape of interpretable agent behavior. In: Proceedings of the international conference on automated planning and scheduling, vol. 29, pp. 86–96
20.
Zurück zum Zitat Chanseau A, Dautenhahn K, Koay KL, Salem M (2016) Who is in charge? sense of control and robot anxiety in human-robot interaction. In: 2016 25th IEEE international symposium on robot and human interactive communication (RO-MAN), pp. 743–748. IEEE Chanseau A, Dautenhahn K, Koay KL, Salem M (2016) Who is in charge? sense of control and robot anxiety in human-robot interaction. In: 2016 25th IEEE international symposium on robot and human interactive communication (RO-MAN), pp. 743–748. IEEE
21.
Zurück zum Zitat Chen TL, King CH, Thomaz AL, Kemp CC (2011) Touched by a robot: An investigation of subjective responses to robot-initiated touch. In: 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 457–464. IEEE Chen TL, King CH, Thomaz AL, Kemp CC (2011) Touched by a robot: An investigation of subjective responses to robot-initiated touch. In: 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 457–464. IEEE
22.
Zurück zum Zitat Cucciniello I, Sangiovanni S, Maggi G, Rossi S (2021) Validation of robot interactive behaviors through users emotional perception and their effects on trust. In: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 197–202. IEEE Cucciniello I, Sangiovanni S, Maggi G, Rossi S (2021) Validation of robot interactive behaviors through users emotional perception and their effects on trust. In: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 197–202. IEEE
23.
Zurück zum Zitat Dautenhahn K, Walters M, Woods S, Koay KL, Nehaniv CL, Sisbot A, Alami R, Siméon T (2006) How may i serve you? a robot companion approaching a seated person in a helping context. In: Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, pp. 172–179 Dautenhahn K, Walters M, Woods S, Koay KL, Nehaniv CL, Sisbot A, Alami R, Siméon T (2006) How may i serve you? a robot companion approaching a seated person in a helping context. In: Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, pp. 172–179
24.
Zurück zum Zitat De Clercq K, Dietrich A, Núñez Velasco JP, De Winter J, Happee R (2019) External human-machine interfaces on automated vehicles: effects on pedestrian crossing decisions. Human Factors 61(8):1353–1370CrossRef De Clercq K, Dietrich A, Núñez Velasco JP, De Winter J, Happee R (2019) External human-machine interfaces on automated vehicles: effects on pedestrian crossing decisions. Human Factors 61(8):1353–1370CrossRef
25.
Zurück zum Zitat De Graaf MM, Allouch SB (2013) Exploring influencing variables for the acceptance of social robots. Robot Auton Systems 61(12):1476–1486CrossRef De Graaf MM, Allouch SB (2013) Exploring influencing variables for the acceptance of social robots. Robot Auton Systems 61(12):1476–1486CrossRef
26.
Zurück zum Zitat Desai M, Medvedev M, Vázquez M, McSheehy S, Gadea-Omelchenko S, Bruggeman C, Steinfeld A, Yanco H (2012) Effects of changing reliability on trust of robot systems. In: 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 73–80. IEEE Desai M, Medvedev M, Vázquez M, McSheehy S, Gadea-Omelchenko S, Bruggeman C, Steinfeld A, Yanco H (2012) Effects of changing reliability on trust of robot systems. In: 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 73–80. IEEE
27.
Zurück zum Zitat Dey AK (2001) Understanding and using context. Pers Ubiquitous Comput 5(1):4–7CrossRef Dey AK (2001) Understanding and using context. Pers Ubiquitous Comput 5(1):4–7CrossRef
28.
Zurück zum Zitat Elara MR, Calderon CAA, Zhou C, Wijesoma WS (2010) False alarm metrics: evaluating safety in human robot interactions. In: 2010 IEEE Conference on Robotics, Automation and Mechatronics, pp. 230–236. IEEE Elara MR, Calderon CAA, Zhou C, Wijesoma WS (2010) False alarm metrics: evaluating safety in human robot interactions. In: 2010 IEEE Conference on Robotics, Automation and Mechatronics, pp. 230–236. IEEE
29.
Zurück zum Zitat Eller E, Frey D (2019) Psychological perspectives on perceived safety: social factors of feeling safe. In: Perceived Safety, pp. 43–60. Springer Eller E, Frey D (2019) Psychological perspectives on perceived safety: social factors of feeling safe. In: Perceived Safety, pp. 43–60. Springer
30.
Zurück zum Zitat Erel H, Cohen Y, Shafrir K, Levy SD, Vidra ID, Shem Tov T, Zuckerman O (2021) Excluded by robots: Can robot-robot-human interaction lead to ostracism? In: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pp. 312–321 Erel H, Cohen Y, Shafrir K, Levy SD, Vidra ID, Shem Tov T, Zuckerman O (2021) Excluded by robots: Can robot-robot-human interaction lead to ostracism? In: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pp. 312–321
32.
Zurück zum Zitat Flook R, Shrinah A, Wijnen L, Eder K, Melhuish C, Lemaignan S (2019) On the impact of different types of errors on trust in human-robot interaction: Are laboratory-based hri experiments trustworthy? Interact Stud 20(3):455–486CrossRef Flook R, Shrinah A, Wijnen L, Eder K, Melhuish C, Lemaignan S (2019) On the impact of different types of errors on trust in human-robot interaction: Are laboratory-based hri experiments trustworthy? Interact Stud 20(3):455–486CrossRef
34.
Zurück zum Zitat Giaretta A, De Donno M, Dragoni N (2018) Adding salt to pepper: a structured security assessment over a humanoid robot. In: Proceedings of the 13th International Conference on Availability, Reliability and Security, pp. 1–8 Giaretta A, De Donno M, Dragoni N (2018) Adding salt to pepper: a structured security assessment over a humanoid robot. In: Proceedings of the 13th International Conference on Availability, Reliability and Security, pp. 1–8
35.
Zurück zum Zitat Giuliani M, Lenz C, Müller T, Rickert M, Knoll A (2010) Design principles for safety in human–robot interaction. Int J Soc Robot 2(3):253–274CrossRef Giuliani M, Lenz C, Müller T, Rickert M, Knoll A (2010) Design principles for safety in human–robot interaction. Int J Soc Robot 2(3):253–274CrossRef
36.
Zurück zum Zitat Habibovic A, Lundgren VM, Andersson J, Klingegård M, Lagström T, Sirkka A, Fagerlönn J, Edgren C, Fredriksson R, Krupenia S et al (2018) Communicating intent of automated vehicles to pedestrians. Front Psychol 9:1336CrossRef Habibovic A, Lundgren VM, Andersson J, Klingegård M, Lagström T, Sirkka A, Fagerlönn J, Edgren C, Fredriksson R, Krupenia S et al (2018) Communicating intent of automated vehicles to pedestrians. Front Psychol 9:1336CrossRef
37.
Zurück zum Zitat Hancock PA, Billings DR, Schaefer KE, Chen JY, De Visser EJ, Parasuraman R (2011) A meta-analysis of factors affecting trust in human–robot interaction. Human Factors 53(5):517–527CrossRef Hancock PA, Billings DR, Schaefer KE, Chen JY, De Visser EJ, Parasuraman R (2011) A meta-analysis of factors affecting trust in human–robot interaction. Human Factors 53(5):517–527CrossRef
38.
Zurück zum Zitat Haring KS, Silvera-Tawil D, Takahashi T, Watanabe K, Velonaki M (2016) How people perceive different robot types: a direct comparison of an android, humanoid, and non-biomimetic robot. In: 2016 8th international conference on knowledge and smart technology (kst), pp. 265–270. IEEE Haring KS, Silvera-Tawil D, Takahashi T, Watanabe K, Velonaki M (2016) How people perceive different robot types: a direct comparison of an android, humanoid, and non-biomimetic robot. In: 2016 8th international conference on knowledge and smart technology (kst), pp. 265–270. IEEE
39.
Zurück zum Zitat Hellström T, Bensch S (2018) Understandable robots-what, why, and how. Paladyn, J Behav Robot 9(1):110–123CrossRef Hellström T, Bensch S (2018) Understandable robots-what, why, and how. Paladyn, J Behav Robot 9(1):110–123CrossRef
40.
Zurück zum Zitat Holländer K, Wintersberger P, Butz A (2019) Overtrust in external cues of automated vehicles: an experimental investigation. In: Proceedings of the 11th international conference on automotive user interfaces and interactive vehicular applications, pp. 211–221 Holländer K, Wintersberger P, Butz A (2019) Overtrust in external cues of automated vehicles: an experimental investigation. In: Proceedings of the 11th international conference on automotive user interfaces and interactive vehicular applications, pp. 211–221
41.
Zurück zum Zitat Hollnagel E (2014) Is safety a subject for science? Saf Sci 67:21–24CrossRef Hollnagel E (2014) Is safety a subject for science? Saf Sci 67:21–24CrossRef
42.
Zurück zum Zitat Honig S, Oron-Gilad T (2018) Understanding and resolving failures in human–robot interaction: literature review and model development. Front Psychol 9:861CrossRef Honig S, Oron-Gilad T (2018) Understanding and resolving failures in human–robot interaction: literature review and model development. Front Psychol 9:861CrossRef
43.
Zurück zum Zitat Hu Y, Benallegue M, Venture G, Yoshida E (2020) Interact with me: an exploratory study on interaction factors for active physical human–robot interaction. IEEE Robot Autom Lett 5(4):6764–6771CrossRef Hu Y, Benallegue M, Venture G, Yoshida E (2020) Interact with me: an exploratory study on interaction factors for active physical human–robot interaction. IEEE Robot Autom Lett 5(4):6764–6771CrossRef
44.
Zurück zum Zitat Joosse M, Lohse M, Pérez JG, Evers V (2013) What you do is who you are: the role of task context in perceived social robot personality. In: 2013 IEEE International Conference on Robotics and Automation, pp. 2134–2139. IEEE Joosse M, Lohse M, Pérez JG, Evers V (2013) What you do is who you are: the role of task context in perceived social robot personality. In: 2013 IEEE International Conference on Robotics and Automation, pp. 2134–2139. IEEE
45.
Zurück zum Zitat Kahn WA (1990) Psychological conditions of personal engagement and disengagement at work. Acad Manag J 33(4):692–724CrossRef Kahn WA (1990) Psychological conditions of personal engagement and disengagement at work. Acad Manag J 33(4):692–724CrossRef
46.
Zurück zum Zitat Kamide H, Arai T (2017) Perceived comfortableness of anthropomorphized robots in us and japan. Int J Soc Robot 9(4):537–543CrossRef Kamide H, Arai T (2017) Perceived comfortableness of anthropomorphized robots in us and japan. Int J Soc Robot 9(4):537–543CrossRef
47.
Zurück zum Zitat Kamide H, Kawabe K, Shigemi S, Arai T (2013) Social comparison between the self and a humanoid. In: International Conference on Social Robotics, pp. 190–198. Springer Kamide H, Kawabe K, Shigemi S, Arai T (2013) Social comparison between the self and a humanoid. In: International Conference on Social Robotics, pp. 190–198. Springer
48.
Kamide H, Mae Y, Kawabe K, Shigemi S, Hirose M, Arai T (2012) New measurement of psychological safety for humanoid. In: 2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 49–56. IEEE
49. Koert D, Pajarinen J, Schotschneider A, Trick S, Rothkopf C, Peters J (2019) Learning intention aware online adaptation of movement primitives. IEEE Robot Autom Lett 4(4):3719–3726
50. Kok BC, Soh H (2020) Trust in robots: challenges and opportunities. Current Robotics Reports, pp. 1–13
51. Kruse T, Pandey AK, Alami R, Kirsch A (2013) Human-aware robot navigation: a survey. Robot Autonom Syst 61(12):1726–1743
52. Lasota PA, Fong T, Shah JA et al (2017) A survey of methods for safe human-robot interaction. Now Publishers
53. Lasota PA, Rossano GF, Shah JA (2014) Toward safe close-proximity human-robot interaction with standard industrial robots. In: 2014 IEEE International Conference on Automation Science and Engineering (CASE), pp. 339–344. IEEE
54. Lasota PA, Shah JA (2015) Analyzing the effects of human-aware motion planning on close-proximity human–robot collaboration. Human Factors 57(1):21–33
55. Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Human Factors 46(1):50–80
56. Lee JG, Kim KJ, Lee S, Shin DH (2015) Can autonomous vehicles be safe and trustworthy? Effects of appearance and autonomy of unmanned driving systems. Int J Human Comput Interact 31(10):682–691
57. Lichtenthäler C, Lorenz T, Karg M, Kirsch A (2012) Increasing perceived value between human and robots-measuring legibility in human aware navigation. In: 2012 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), pp. 89–94. IEEE
58. Lichtenthäler C, Lorenz T, Kirsch A (2012) Influence of legibility on perceived safety in a virtual human-robot path crossing task. In: 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pp. 676–681. IEEE
59. MacDorman KF, Vasudevan SK, Ho CC (2009) Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI Soc 23(4):485–510
60. Maslow A (1943) A theory of human motivation. Psychol Rev 50(4):370–396
61. Matsas E, Vosniakos GC (2017) Design of a virtual reality training system for human-robot collaboration in manufacturing tasks. Int J Interact Des Manuf (IJIDeM) 11(2):139–153
62. May DC, Holler KJ, Bethel CL, Strawderman L, Carruth DW, Usher JM (2017) Survey of factors for the prediction of human comfort with a non-anthropomorphic robot in public spaces. Int J Soc Robot 9(2):165–180
63. Möller N, Hansson SO, Peterson M (2006) Safety is more than the antonym of risk. J Appl Philos 23(4):419–432
64. Montoro L, Useche SA, Alonso F, Lijarcio I, Bosó-Seguí P, Martí-Belda A (2019) Perceived safety and attributed value as predictors of the intention to use autonomous vehicles: a national study with Spanish drivers. Saf Sci 120:865–876
65. Moody J, Bailey N, Zhao J (2020) Public perceptions of autonomous vehicle safety: an international comparison. Saf Sci 121:634–650
66. Mori M, MacDorman KF, Kageki N (2012) The uncanny valley [from the field]. IEEE Robot Autom Mag 19(2):98–100
67. Natarajan M, Gombolay M (2020) Effects of anthropomorphism and accountability on trust in human robot interaction. In: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pp. 33–42
68. Nonaka S, Inoue K, Arai T, Mae Y (2004) Evaluation of human sense of security for coexisting robots using virtual reality. 1st report: evaluation of pick and place motion of humanoid robots. In: IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA'04, vol. 3, pp. 2770–2775. IEEE
69. Nordhoff S, Stapel J, van Arem B, Happee R (2020) Passenger opinions of the perceived safety and interaction with automated shuttles: a test ride study with hidden safety steward. Transp Res Part A: Policy Pract 138:508–524
70. Norouzzadeh S, Lorenz T, Hirche S (2012) Towards safe physical human–robot interaction: an online optimal control scheme. In: 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pp. 503–508. IEEE
71. Nyholm L, Santamäki-Fischer R, Fagerström L (2021) Users' ambivalent sense of security with humanoid robots in healthcare. Inform Health Soc Care, pp. 1–9
72. Pan MK, Croft EA, Niemeyer G (2018) Evaluating social perception of human-to-robot handovers using the Robot Social Attributes Scale (RoSAS). In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 443–451
73. Petersson I, Lilja M, Borell L (2012) To feel safe in everyday life at home—a study of older adults after home modifications. Ageing Soc 32(5):791
76. Raue M, Streicher B, Lermer E (2019) Perceived safety: a multidisciplinary perspective. Springer
77. Rossner P, Bullinger AC (2019) Do you shift or not? Influence of trajectory behaviour on perceived safety during automated driving on rural roads. In: International Conference on Human-Computer Interaction, pp. 245–254. Springer
78. Rubagotti M, Tusseyeva I, Baltabayeva S, Summers D, Sandygulova A (2022) Perceived safety in physical human robot interaction—a survey. Robot Auton Syst, p. 104047
79. Salem M, Lakatos G, Amirabdollahian F, Dautenhahn K (2015) Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust. In: 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 1–8. IEEE
80. Salvini P, Paez-Granados D, Billard A (2021) On the safety of mobile robots serving in public spaces: identifying gaps in EN ISO 13482:2014 and calling for a new standard. ACM Trans Human–Robot Interact (THRI) 10(3):1–27
81. Sim DYY, Loo CK (2015) Extensive assessment and evaluation methodologies on assistive social robots for modelling human–robot interaction—a review. Inf Sci 301:305–344
82. Sisbot EA, Marin-Urias LF, Broquere X, Sidobre D, Alami R (2010) Synthesizing robot motions adapted to human presence. Int J Soc Robot 2(3):329–343
83. International Organization for Standardization (2019) ISO 9241-210:2019(en) Ergonomics of human–system interaction—Part 210: Human-centred design for interactive systems
84. Suvei SD, Vroon J, Sanchéz VVS, Bodenhagen L, Englebienne G, Krüger N, Evers V (2018) "I would like to get close to you": making robot personal space invasion less intrusive with a social gaze cue. In: International Conference on Universal Access in Human-Computer Interaction, pp. 366–385. Springer
85. Takayama L, Pantofaru C (2009) Influences on proxemic behaviors in human–robot interaction. In: 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5495–5502. IEEE
86. Tolmeijer S, Weiss A, Hanheide M, Lindner F, Powers TM, Dixon C, Tielman ML (2020) Taxonomy of trust-relevant failures and mitigation strategies. In: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pp. 3–12
87. Villani V, Pini F, Leali F, Secchi C (2018) Survey on human–robot collaboration in industrial settings: safety, intuitive interfaces and applications. Mechatronics 55:248–266
88. Wang W, Chen Y, Li R, Jia Y (2019) Learning and comfort in human–robot interaction: a review. Appl Sci 9(23):5152
89. Widdowson C, Yoon HJ, Cichella V, Wang RF, Hovakimyan N (2017) VR environment for the study of collocated interaction between small UAVs and humans. In: International Conference on Applied Human Factors and Ergonomics, pp. 348–355. Springer
90. Xu Q, Ng J, Cheong Y, Tan O, Wong J, Tay T, Park T (2012) The role of social context in human-robot interaction. In: 2012 Southeast Asian Network of Ergonomics Societies Conference (SEANES), pp. 1–5. IEEE
91. Xu Z, Zhang K, Min H, Wang Z, Zhao X, Liu P (2018) What drives people to accept automated vehicles? Findings from a field experiment. Transp Res Part C: Emerg Technol 95:320–334
92. You S, Kim JH, Lee S, Kamat V, Robert LP Jr (2018) Enhancing perceived safety in human–robot collaborative construction using immersive virtual environments. Autom Construct 96:161–170
93. Young JE, Kamiyama Y, Reichenbach J, Igarashi T, Sharlin E (2011) How to walk a robot: a dog-leash human-robot interface. In: 2011 RO-MAN, pp. 376–382. IEEE
94. Zacharaki A, Kostavelis I, Gasteratos A, Dokas I (2020) Safety bounds in human robot interaction: a survey. Saf Sci 127:104667
95. Zhuang X, Wu C (2012) The safety margin and perceived safety of pedestrians at unmarked roadway. Transp Res Part F: Traffic Psychol Behav 15(2):119–131
Metadata
Title: A Taxonomy of Factors Influencing Perceived Safety in Human–Robot Interaction
Authors: Neziha Akalin, Andrey Kiselev, Annica Kristoffersson, Amy Loutfi
Publication date: 09.07.2023
Publisher: Springer Netherlands
Published in: International Journal of Social Robotics, Issue 12/2023
Print ISSN: 1875-4791
Electronic ISSN: 1875-4805
DOI: https://doi.org/10.1007/s12369-023-01027-8
