Since the middle of the 20th century, robots have become a vital part of today’s production industry [1, 2]. The latest developments in robotics lead away from robots as components of fully automated manufacturing processes and towards processes in which human and robot work closely together at the same place and time. The latest generation of robots, for example, includes the ability to recognize the proximity of a human as well as contacts and their intensity, and to adapt flexibly to them [3]. Although industrial robots are first and foremost “working machines”, they include social aspects (e.g. the appearance of being a co-worker) which have to be considered and taken into account in an implementation. Thus, the question of designing ergonomic workplaces where human and robot can accomplish a task together in a collaborative operation is becoming increasingly important. In accordance with the definitions in ISO 10218-1 “Robots and robotic devices—Safety requirements for industrial robots—Part 1: Robots” from the International Organization for Standardization [4] and in ISO/TS 15066 [5], the term collaboration is used in this paper as a “state in which purposely designed robots work in direct cooperation with a human within a defined workspace”. However, any erroneous behavior of the robot could result in serious injury to the human, so the collaboration between human and robot has to be designed to be reliable and safe with regard to both hardware components and human cognition. In contrast, the term human–robot interaction is a general description of the interdependency between humans and robots.
It is predicted that by 2019 the number of industrial robots worldwide will be more than twice as high as in 2010. Especially in industrialized countries, robots are used extensively in the national economy [6–8]. For instance, according to IFR estimates, there are 314 robots per 10,000 working persons in the national economy of Japan, followed by the Federal Republic of Germany (hereinafter abbreviated as Germany) with 292 robots per 10,000 working persons, the United States of America (hereinafter abbreviated as USA) with 164 robots per 10,000 working persons, and the People’s Republic of China (hereinafter abbreviated as China) with 36 robots per 10,000 working persons. Furthermore, these countries have the largest industrial robot sales. With 67,000 units sold in 2015, China sold the most industrial robots worldwide, followed by Japan with 35,000, the USA with 27,000, and Germany with 20,000 [8]. Robots are no longer used only in large corporations; due to simpler programming and lower costs, more and more small and medium-sized enterprises (SMEs) will introduce robots in the next few years, which will further increase sales. While most of these robots do not share a workspace with human workers, it stands to reason that a similar trend of propagation will likely be observed for collaborative robots as the required technologies become cheaper and more robust.
1.1 History of Technology Acceptance Research
There are a number of different models for measuring technology acceptance. Rogers’ diffusion of innovations theory from 1962 is generally regarded as the starting point of this field of research. This theory proposes a five-step model, beginning with awareness of a new technology and leading to its confirmation [9]. The next essential step in technology acceptance research is Davis’ Technology Acceptance Model (TAM) from 1989 [10, 11]. It forecasts the acceptance and corresponding use of information technologies. More precisely, this model is based on the fundamental assumption that behavioral intention leads to actual behavior. The behavioral intention, in turn, depends on two variables: the perceived ease of use and the perceived usefulness. The perceived usefulness is defined as “the degree to which a person believes that using a particular system would enhance his or her job performance”, whereas the perceived ease of use is defined as “the degree to which a person believes that using a particular system would be free from effort” [10]. Some studies measuring the acceptance of robots refer to the work of Davis. These studies add other variables to the original model, such as personal implications (social norm, voluntariness of usage and image) and job-related variables (job relevance, output quality, result demonstrability and experience) (TAM 2 by Venkatesh and Davis [12]). Another noteworthy approach is TAM 3 by Venkatesh and Bala [13], which aims at combining existing models of user acceptance of information technologies. In addition, Venkatesh and Bala [13] added variables to TAM 3 that relate to the anchoring and adjustment of human decision-making processes (computer self-efficacy, perception of external control, computer anxiety, computer playfulness, perceived enjoyment and objective usability). So far, however, acceptance models have not been adapted to the context of human–robot cooperation in an industrial setting and therefore neglect aspects considered important for this context.
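The causal chain described above (perceived ease of use and perceived usefulness jointly drive behavioral intention, with ease of use also feeding into usefulness) can be illustrated with a minimal sketch on synthetic data; the variable names, coefficients, and data below are purely illustrative and not taken from any of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic respondents

# Standardized synthetic scores; names are illustrative only
ease_of_use = rng.normal(0.0, 1.0, n)
# In TAM, perceived ease of use also influences perceived usefulness
usefulness = 0.5 * ease_of_use + rng.normal(0.0, 1.0, n)

# Behavioral intention as a weighted sum of the two predictors plus noise
intention = 0.6 * usefulness + 0.3 * ease_of_use + rng.normal(0.0, 1.0, n)

# Recover the assumed weights with ordinary least squares
X = np.column_stack([usefulness, ease_of_use])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(beta)  # roughly [0.6, 0.3]
```

The point of the sketch is only the structure: given enough respondents, the relative weight of each predictor on intention can be estimated from survey scores.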
The research presented in this paper therefore aimed at developing a human–robot collaboration acceptance model to investigate the acceptance of human–robot collaboration in an industrial work setting. To the knowledge of the authors, cross-cultural technology acceptance in the context of industrial robotics has so far been explored only for interaction variables, not for collaboration. These results are summarized in the following paragraphs.
1.2 Cross-Cultural Differences in Human–Robot Interaction
In the past few years, researchers’ interest in exploring intercultural or cross-cultural acceptance of robots has grown. Transnational studies in this field have been conducted within Western or Eastern/East-Asian cultural areas, as well as cross-cultural studies between Western and Eastern countries, which will be described in more detail in the following. The studies presented below discuss aspects such as direct versus indirect communication style, attitudes towards robots, anthropomorphism, robot type, and trust.
In general, it has been found that cultural background has an impact on human–robot interaction. For example, people feel more comfortable when interacting with a robot that behaves in a culturally normative way [14–18]. Research on cross-cultural communication suggests that people thinking in Western patterns favor more direct forms of communication, whereas Eastern-influenced people prefer an indirect style of communication. Hall [14] introduced the idea of low and high context cultures, whereby low context refers to cultures in which communications are explicit and require little interpretation to understand the content. High context cultures, on the other hand, are those in which content is less central and deciphering of contextual cues is required for accurate decoding. Although people tend to use both forms of communication, one generally dominates in a given culture [14].
Generally, the results show that participants are more likely to accept the recommendations of a robot that speaks in more culturally normative ways. This illustrates that even small adaptations to cultural preferences have a positive effect on the whole interaction with a robot [18, 19]. Westerners (those who live in the USA and Europe) are more likely to accept the robot’s recommendation and evaluate it more positively when the robot uses an explicit communication style, whereas Easterners (specifically, the Chinese) are more likely to accept robots and evaluate them more positively when they use an implicit communication style [16, 19]. Wang [18] also showed that when collaborating in a human–robot team, Chinese participants were more likely to change their decision based on the robot’s advice when the robot communicated implicitly, while American participants were more likely to do so when the robot communicated explicitly. Consequently, the communication types that are used in a society and by a robot influence the acceptance of recommendations provided by a robot [18, 19].
Cultural differences can also have an impact on human–human interaction. Based on a Taiwanese–German management team, Mahadevan [20] showed that tacit cultural differences can directly affect the team; in this study, the cultural patterns found were general rather than context-specific. In terms of comfort, the results show that people feel more comfortable when interacting with a robot that behaves in a more culturally normative way. Evers et al. [15] found that when a robotic assistant was characterized as a group member, Chinese subjects reported feeling more comfortable than Americans did.
Moreover, Bartneck et al. [21] studied cultural differences in attitudes towards robots and found different attitudes depending on the cultural background of the subjects. In this cross-cultural study, the attitudes of Dutch, Chinese and Japanese subjects were examined using the “Negative Attitude towards Robots Scale” (NARS) questionnaire. Contradicting the popular belief that the Japanese love robots, the results show that the Japanese are significantly more concerned with the impact that robots might have on society. A possible explanation could be that, through their high exposure to robots, the Japanese are more aware of the robots’ abilities and also their lack thereof [21]. A follow-up study by Bartneck et al. [22] examined different robot appearances and indicated that American subjects rated a robot as more favorable when it was more anthropomorphic, while Japanese subjects showed the opposite trend. In general, the Japanese do not seem to have significantly more positive attitudes and assumptions towards robots than Europeans [23]. Using the NARS questionnaire, Bartneck et al. [22] found that the Japanese are even significantly more concerned with the impact that robots might have on society than Westerners (e.g. Americans and the Dutch). A possible explanation could be related to their higher exposure to robots, e.g. in real life, but predominantly through the Japanese media [21–23]. Therefore, it is assumed that the Japanese have more robot-related experiences than the Chinese, Germans or Americans. However, Nomura et al. [24] found that UK citizens felt more negative toward humanoid robots than the Japanese did when using the “Frankenstein Syndrome Questionnaire” (FSQ).
In addition, several studies have been conducted on the subject of trust. After interacting with a robot in a judgement task, Chinese participants evaluated the robots as more trustworthy than German participants did [19]. When collaborating with a robot in a preference decision-making task, Chinese participants were more likely to report trusting the robot than Americans were [18]. After interacting with a robot in four different scenarios (teaching, guiding, entertainment and security), Germans rated the robot as less trustworthy than Chinese and Koreans did [25]. A study conducted by Haring [26] showed that Australians rated the robot as more trustworthy after an economic trust game than Japanese participants did.
Cultural differences were also found concerning the evaluation of the robot after interaction (e.g. [18, 19, 25–27]) or after watching human–robot interaction, e.g. in videos (e.g. [15]). For instance, people with an Eastern cultural background (e.g. Chinese, Koreans and Japanese) rated the robot higher in animacy and anthropomorphism [15, 18, 27], likeability [19, 25], trust [18, 19], perceived intelligence [27] and perceived safety [26, 27] than people with a Western cultural background (e.g. Americans, Germans and Australians). Following these results, it is assumed that Chinese and Japanese subjects evaluate the robot more positively after becoming more familiarized with it. Moreover, Easterners (Chinese and Koreans) show more engagement while interacting with a service robot in collaborative tasks than Westerners (e.g. Germans). Compared with German subjects, Chinese and Korean subjects perceived the service robots used in the experiment to be more likeable, satisfactory and trustworthy, and they showed higher dedication to the robot. This finding is consistent with a previous study by Bartneck et al. [21] showing that German subjects had more anxiety and more concern about the robot’s negative influence than Chinese subjects did [25].
Cultural differences were also found regarding future intentions to use a robot in a collaborative situation, both after watching a video in which a human and robot perform a collaborative task together [28] and after direct human–robot interaction [29]. Notably, these differences were found even among Europeans (British and Italians, or Germans and Dutch).
However, the evaluation of the robot can also differ with the type of robot in terms of its autonomy level. For example, the Japanese attribute autonomy to humanoid robots more strongly than Koreans or Americans do [30, 31]. In contrast, Lee et al. [32] found that American subjects expected domestic robots to have high levels of autonomy. Thus, cultural differences are also expected regarding the robot’s type (active vs. passive robot).
Previous studies have mostly been limited to measuring single aspects of cultural differences regarding the acceptance of social robots in a few countries. So far, to the best knowledge of the authors, there are no comparable studies measuring cross-cultural differences in industrial contexts. Moreover, a significant body of work was conducted mainly with test subjects recruited from universities (e.g. [18, 21, 29–31]) or with unspecified test subjects lacking a precise description (e.g. [16, 23, 24, 32]). As far as we know, there are no studies measuring the acceptance of industrial workers. Therefore, the transferability of the available study results—with findings on social robots and test subjects from the university or everyday life—to industrial robots, work systems, and test subjects from the industrial work environment is limited.
In short, based on the differences in attitudes towards robots found in earlier studies comparing Easterners and Westerners, this study investigates human–robot collaboration using Germany, Japan, China, and the USA as examples. The specific investigation of these four countries offers the potential to examine the previously assumed dichotomy between Western and Eastern cultures in a more selective way and to identify possible causes of differing technology acceptance more precisely. One factor that predicts both effective and efficient technology usage is the acceptance of the innovative assistive working system. A working system is perceived as useful and as meeting human needs, capabilities and expectations only if acceptance scores are high. Therefore, human–robot collaboration acceptance should be evaluated with regard to differences in the predictors of acceptance depending on the cross-cultural background of subjects from Germany, Japan, China, and the USA.
1.3 The Present Study
The aim of our research was to build an acceptance model for human–robot collaboration that builds on existing knowledge and takes into account context-specific factors of the interaction between humans and robots in an industrial setting. The model was therefore developed over four consecutive stages.
First, a research model based on the literature was developed and reviewed in a workshop with associates of robot manufacturing companies, associates of companies that use industrial robots, employees working with robots, and scientists in the fields of psychology, computer science and engineering. This model took variables of the traditional technology acceptance models, such as TAM [10], TAM 2 [12] and TAM 3 [13], into account and was extended with factors that came up during the workshop. As such, the model contains context-specific factors that might be subject to adaptation in work systems, such as perceived enjoyment, perceived safety, and ethical, legal and social implications (ELSI), as well as personal characteristics such as self-efficacy, robot anxiety, affinity towards technology (adapted from [33]) and perceptions of external control, which are considered variables with uncertain influence on the predictors. The ELSI factors were added due to the context of human–robot collaboration (e.g. “I fear that I lose contact with my colleagues because of the robot”). ELSI concern, among other things, the advantages and disadvantages of the technology. On the one hand, employees can be supported in physically demanding or monotonous tasks. On the other hand, the growth of robot systems can lead to job losses and a decline in human skills and knowledge. This dualism of technology leads to the need to consider ethical, legal and social implications in the development of human–robot systems [34]. As a second step, a survey based on the emerged variables was developed and iteratively validated with experts. Third, the survey was implemented in the form of an online tool and completed by 1326 participants. The participants were recruited in the chosen countries with the support of a specialized survey panel company. Participants with at least 1 month of professional working experience as operational production workers were selected for the study. Lastly, the model was analyzed statistically by correlation analyses in order to draw conclusions with regard to possible predictors of the acceptance of robots.
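The final correlation step can be sketched as follows; the predictor name, the Likert-style synthetic data, and the use of Pearson correlation are illustrative assumptions, since the text does not specify the survey items or analysis software:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # synthetic survey respondents

# Hypothetical 5-point Likert scores for one predictor variable
perceived_safety = rng.integers(1, 6, n).astype(float)
# Synthetic acceptance scores partly driven by the predictor
acceptance = 0.6 * perceived_safety + rng.normal(0.0, 1.0, n)

# Pearson correlation between the predictor and the acceptance criterion
r = np.corrcoef(perceived_safety, acceptance)[0, 1]
print(f"r = {r:.2f}")  # a clearly positive correlation
```

In the actual study, such a coefficient would be computed for each candidate predictor and compared across the four country samples.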
The different taxonomies of human–robot collaboration and robots are complex and can be divided by communication channel, robot task, physical and temporal proximity, kind of collaboration, field of application, robot morphology, human interaction role, degree of robot autonomy and team composition [35]. In this study, the role of the robot in the human–robot system is presented in a simplified way as either active or passive. As robots can adopt an active role (e.g. handing over heavy components) or a passive role (e.g. holding a component so that the human can work on it), we built two scenarios for the survey in order to make predictions concerning both ways of interacting. Participants were instructed to base their response behavior on a scenario that included the robot as either an active or a passive partner for interaction.