Trust in humanoid robots: implications for services marketing

Michelle M.E. van Pinxteren (Department of International Relationship Management, Zuyd Hogeschool, Maastricht, The Netherlands)
Ruud W.H. Wetzels (Department of Marketing and Supply Chain Management, Maastricht University, Maastricht, The Netherlands)
Jessica Rüger (Maastricht University, Maastricht, The Netherlands)
Mark Pluymaekers (Department of International Relationship Management, Zuyd Hogeschool, Maastricht, The Netherlands)
Martin Wetzels (Maastricht University, Maastricht, The Netherlands; Waikato University, Hamilton, New Zealand and Aalto University, Helsinki, Finland)

Journal of Services Marketing

ISSN: 0887-6045

Article publication date: 23 July 2019

Issue publication date: 18 September 2019

Abstract

Purpose

Service robots can offer benefits to consumers (e.g. convenience, flexibility, availability, efficiency) and service providers (e.g. cost savings), but a lack of trust hinders consumer adoption. To enhance trust, firms add human-like features to robots; yet, anthropomorphism theory is ambiguous about their appropriate implementation. This study therefore aims to investigate what is more effective for fostering trust: appearance features that are more human-like or social functioning features that are more human-like.

Design/methodology/approach

In an experimental field study, a humanoid service robot displayed gaze cues in the form of changing eye colour in one condition and static eye colour in the other. Thus, the robot was more human-like in its social functioning in one condition (displaying gaze cues, but not in the way that humans do) and more human-like in its appearance in the other (static eye colour, but no gaze cues). Self-reported data from 114 participants revealing their perceptions of trust, anthropomorphism, interaction comfort, enjoyment and intention to use were analysed using partial least squares path modelling.

Findings

Interaction comfort moderates the effect of gaze cues on anthropomorphism, insofar as gaze cues increase anthropomorphism when comfort is low and decrease it when comfort is high. Anthropomorphism drives trust, intention to use and enjoyment.

Research limitations/implications

To extend human–robot interaction literature, the findings provide novel theoretical understanding of anthropomorphism directed towards humanoid robots.

Practical implications

By investigating which features influence trust, this study gives managers insights into reasons for selecting or optimizing humanoid robots for service interactions.

Originality/value

This study examines the difference between appearance and social functioning features as drivers of anthropomorphism and trust, which can benefit research on self-service technology adoption.

Citation

van Pinxteren, M.M.E., Wetzels, R.W.H., Rüger, J., Pluymaekers, M. and Wetzels, M. (2019), "Trust in humanoid robots: implications for services marketing", Journal of Services Marketing, Vol. 33 No. 4, pp. 507-518. https://doi.org/10.1108/JSM-01-2018-0045

Publisher

Emerald Publishing Limited

Copyright © 2019, Michelle M.E. van Pinxteren, Ruud W.H. Wetzels, Jessica Rüger, Mark Pluymaekers and Martin Wetzels.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Self-service technologies (SSTs) promise to revolutionize the interactions of consumers with service providers (Kaushik and Rahman, 2015; Meuter et al., 2000) and radically shift the very nature of services (Parasuraman, 1996). These ‘technological interfaces that enable customers to produce a service independent of direct service employee involvement’ (Meuter et al., 2000, p. 50) take various forms in public service settings, including automated teller machines, self-service kiosks and self-checkouts (Collier et al., 2017; Curran and Meuter, 2005). The latest generation of SSTs relies on service robots (Severinson-Eklundh et al., 2003), which can physically replace human service employees (Edwards, 2014; Oh et al., 2013) and thus increase the level of customer service while decreasing costs (Allmendinger and Lombreglia, 2005; Bitner, 2001). Because service providers recognize this potential, more than 6.7 million service robots (International Federation of Robotics, 2017) are in operation: in hotels providing information to guests (Pinillos et al., 2016), in restaurants taking orders (Qing-xiao et al., 2010) and in stores assisting customers (Gross et al., 2002). Market reports thus predict that by 2020, 85 per cent of services will be provided without human involvement (Gartner, 2011; IDC, 2017) and the global service robotics market will be worth more than US$7.3bn (Ambasna-Jones, 2017).

Despite this potential, a key factor hinders the integration of service robots, namely, consumers’ lack of trust (Everett et al., 2017; Morgan, 2017). Marketing research identifies trust as a strong determinant of intention to use a service, operating through enjoyment (Wu and Chang, 2005). If consumers do not enjoy or intend to use service robots, the cost savings and bottom-line benefits will remain untapped (Allmendinger and Lombreglia, 2005; Bitner, 2001). Yet extant research offers few insights into the drivers of trust in service robots. Traditional SST research pertaining to services mainly focuses on automated teller machines (Curran and Meuter, 2005), self-scanning devices (Kaushik and Rahman, 2015), self-service kiosks (Collier et al., 2014) or self-checkouts (Collier et al., 2017). SST studies of service robots in particular are scarce and mostly descriptive (Kaushik and Rahman, 2015), illustrating the need for experimental research (Gelbrich and Sattler, 2014). To address this gap, the current study seeks to identify antecedents and consequences of trust in service robots by focussing on a central concept in human–robot interaction (HRI) research: anthropomorphism (Epley et al., 2007).

Anthropomorphism is the human tendency to assign human capabilities, such as rational thought and feelings, to inanimate objects such as robots (Waytz et al., 2014). According to theory, anthropomorphism is easier if the robot is equipped with human-like features, such as a human face (Aggarwal and McGill, 2007; Epley et al., 2007). Research has also shown that higher levels of anthropomorphism increase trust in robots (Brave et al., 2005; Kiesler et al., 2008; Luo et al., 2006; Richards and Bransky, 2014; Waytz et al., 2014). This evidence has fuelled the development of humanoid robots with human-like facial expressions, voices and names (Fink, 2012). Although the effects of some human-like features on trust have been investigated, it is not clear which features increase trust most.

Anthropomorphism theory provides two contrasting perspectives (Epley et al., 2007). First, the elicited agent knowledge perspective stipulates that non-humans are anthropomorphized more readily when they possess observable human-like features and appearances. Second, the sociality motivation perspective proposes instead that non-humans are anthropomorphized more readily when they possess features resembling humans’ social functioning, such as the possibility to display non-verbal communication cues. From a design perspective, this discrepancy raises interesting issues, because robots designed to mimic humans’ social functioning do not necessarily resemble humans in appearance. Therefore, this study aims to investigate which features (appearance vs social functioning) affect anthropomorphism most, by focusing on gaze turn-taking cues. Recently developed service robots can display gaze turn-taking cues by changing the colour of their eyes, such that they mimic humans’ social functioning. At the same time, this makes them less similar in appearance to humans, whose eye colour cannot change. The experimental field study for this research required the humanoid robot to display these gaze cues in one condition but not in the other. This allowed us to investigate what is more effective for increasing anthropomorphism (and trust): social functioning features or appearance features that are more human-like.

The contributions of this study are both theoretical and practical. First, it sheds new light on the contrasting perspectives about the elicitation of anthropomorphism, arising from the theory proposed by Epley et al. (2007). Second, it represents an initial effort to combine central concepts in HRI (i.e. anthropomorphism and enjoyment) with notions from SST adoption literature (i.e. trust) to gain insight into consumers’ adoption of service robots (Fan et al., 2016; Kaushik and Rahman, 2015). Third, from a practical perspective, this expanded knowledge provides some concrete design guidelines for managers of companies that design and program service robots.

Conceptual background and hypotheses development

Anthropomorphism

According to HRI research, the design of attribute features in humanoid robots should reflect the understanding that ‘for a robot to be understandable to humans as other humans are, it must have a naturalistic embodiment, interact with the environment in the same way as living creatures do and perceive the same things humans find to be salient and relevant’ (Fong et al., 2003, p. 5). The integration of human-like features is believed to influence users’ perceptions of robots, through the cognitive process of anthropomorphism (Epley et al., 2007). In this process of inductive inference, humans attribute essential human traits, such as feelings or rational thought, to a robot in an effort to understand its otherwise unpredictable behaviour (Aggarwal and McGill, 2007; Eyssel et al., 2011; Waytz et al., 2014). As a result of this understanding, consumers prefer robots with greater human-likeness as interaction partners (Kiesler et al., 2008). Such predictions have fuelled the development of robots with obvious human-like features, such as faces or voices (Złotowski et al., 2015). Duffy (2003) states that a robot’s capacity to engage in human interaction requires a degree of human-like qualities, either in appearance, behaviour or both, yet anthropomorphism theory is ambiguous about which human-like qualities should be implemented.

Antecedents of anthropomorphism

Epley et al. (2007) describe three psychological mechanisms that explain why people anthropomorphize: effectance motivation (to explain and understand the behaviour of other agents), elicited agent knowledge (applicability of anthropocentric knowledge) and sociality motivation (desire for social contact and affiliation). The effectance motivation involves humans’ individual motivations to interact appropriately with a robot and explain its behaviour; elicited agent knowledge and the sociality motivation instead revolve around the characteristics of the robot and are therefore more relevant from the perspective of this study.

The elicited agent knowledge mechanism stipulates that knowledge about humans is more readily available and richly detailed for humans than knowledge about non-human agents (Epley et al., 2007). Therefore, humans use it as a basis for their inductive reasoning when they observe human features in non-human agents. The more morphologically similar a robot is to humans in its observable features, the more likely humans are to use themselves as a source of induction and engage in anthropomorphization (Krach et al., 2008). This mechanism recommends incorporating human-like characteristics, such as faces and bodies, in the design of robots to enhance their human-like appearance (Burgoon et al., 2000; DiSalvo et al., 2002).

The sociality motivation mechanism instead implies that humans need to establish social connections with others (Epley et al., 2007). If such social connections are not available, people anthropomorphize robots to satisfy this need, by focussing on the features that facilitate social functioning during an interaction, including non-verbal cues. The more similar a robot is to humans in its social functioning, the more likely humans are to use themselves as sources of induction and anthropomorphize. This mechanism then suggests including human-like characteristics such as gaze, memory and gestures in the design of service robots (Bruce et al., 2001; Mutlu et al., 2009; Richards and Bransky, 2014; Salem et al., 2013). However, robots designed to resemble humans in social functioning do not necessarily resemble humans in appearance. This study aims to disentangle these two contradictory perspectives, by focusing on the use of gaze turn-taking cues.

Gaze turn-taking cues

Turn-taking is a universal mechanism for coordinating interaction, regulating who speaks and when (Stivers et al., 2009; Sacks et al., 1974) and indicating interaction roles such as the addressee, bystander or overhearer (Goffman, 1979). To facilitate interactions, humans use multiple turn-taking cues. Duncan (1972) distinguishes six groups of cues – intonation, paralanguage, body motion, sociocentric sequences, pitch and syntax – that signal three different turn-taking intentions: turn-yielding, or offering the interlocutor the turn; suppressing, or attempting to keep the turn; and back-channelling, or letting the interlocutor keep the turn.

Although turn-taking cues are difficult to implement in robots, recent technological advances enable some displays of gaze turn-taking cues (Mutlu et al., 2009). Gaze cues are body motions, or specifically eye motions, that signal an intention of the speaker to an interlocutor (Goodwin, 1980; Sacks et al., 1974). A new generation of robots can display gaze turn-taking cues by changing the colour of their eyes (Ivanov et al., 2017). For example, the eyes of the humanoid robot Pepper (Softbank Robotics, 2017) turn red when it recognizes the interlocutor, indicating a yielding intention; green when it speaks, indicating a suppressing intention; and blue when waiting for input, indicating a back-channelling intention. Humans’ eye colours are static, so such gaze turn-taking cues resemble humans not in appearance but solely in social functioning, which makes them particularly suitable for testing the contrasting perspectives offered by anthropomorphism theory (Epley et al., 2007). Doing so requires incorporating a variable that can explain the circumstances in which resemblance in appearance takes precedence over social functioning, or vice versa – namely, comfort.
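
To make this mapping concrete, the sketch below shows how such colour-coded gaze cues could be scripted, assuming SoftBank’s NAOqi Python SDK (its ALLeds service and the ‘FaceLeds’ LED group); the network address and helper function are hypothetical illustrations, not the setup used in this study.

```python
# A minimal sketch, assuming the NAOqi Python SDK: each turn-taking
# intention (Duncan, 1972) is mapped onto an eye colour, following the
# scheme described above (red = yielding, green = suppressing,
# blue = back-channelling).
from naoqi import ALProxy

GAZE_CUES = {
    "yielding": (1.0, 0.0, 0.0),          # red: interlocutor recognized
    "suppressing": (0.0, 1.0, 0.0),       # green: robot is speaking
    "back_channelling": (0.0, 0.0, 1.0),  # blue: robot awaits input
}

def display_gaze_cue(leds, intention, duration=0.3):
    """Fade the face LEDs to the colour signalling the given intention."""
    red, green, blue = GAZE_CUES[intention]
    leds.fadeRGB("FaceLeds", red, green, blue, duration)

if __name__ == "__main__":
    leds = ALProxy("ALLeds", "192.168.1.10", 9559)  # placeholder address
    display_gaze_cue(leds, "suppressing")           # robot takes the floor
```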

Perceived interaction comfort

According to the comfort thesis (DiSalvo and Gemperle, 2003), interaction comfort is the primary emotional motivation for anthropomorphism. Similarly, Berger and Calabrese’s (1974) uncertainty reduction theory predicts that during interactions, people need information about an interlocutor to reduce uncertainty about their future behaviour. Being unable to obtain this information causes discomfort and triggers the use of uncertainty reduction strategies (Berger, 1986), including an increased focus on cues that provide social information, such as eye contact. When experiencing discomfort during an interaction with non-humans, humans similarly search for social cues they can use to predict their behaviour (Mourey et al., 2017). Therefore, interaction discomfort appears to increase human motivations to anthropomorphize social functioning cues in robots. Conversely, when people do not feel discomfort, the uncertainty reduction strategies are not triggered; that is, they do not rely on social cues, such as eye contact, to obtain social information. Furthermore, psychological research on cognitive processing shows that when humans experience neutral to positive emotions, they process interaction cues less carefully and use more superficial or cursory styles of thinking than humans who experience negative, stress-related emotions such as discomfort (Baron et al., 1994; Bodenhausen et al., 1994). Therefore, we propose that the effect of the humanoid service robot’s gaze turn-taking cues on consumers’ perceived anthropomorphism is moderated by perceived interaction comfort, such that:

H1a.

When perceived interaction comfort is high, the effect of gaze turn-taking cues on anthropomorphism is attenuated.

H1b.

When perceived interaction comfort is low, the effect of gaze turn-taking cues on anthropomorphism is strengthened.

Trust

Services marketing research assesses trust in business-to-business contexts (Coulter and Coulter, 2002), retailing (Nguyen et al., 2014), financial services (Sekhon et al., 2013), healthcare (Auh, 2005) and e-commerce (Harris and Goode, 2010), but only a few studies examine trust in SSTs (Chowdhury et al., 2014; Kaushik and Rahman, 2015; Robertson et al., 2016). However, as research identifies trust as a strong determinant of intentions to use a service (Gefen et al., 2003), current literature highlights the need to understand drivers of trust in SSTs (Kaushik and Rahman, 2015) and service robots in particular (Wirtz et al., 2018). In the field of HRI, anthropomorphism has been identified as a strong determinant not only of user preference but also of perceived trust (Duffy, 2003; Brave et al., 2005; Kiesler et al., 2008; Luo et al., 2006; Richards and Bransky, 2014; Waytz et al., 2014). Trust is a multidimensional concept, reflecting the perceived competence, integrity and benevolence of another entity (Mayer et al., 1995). When humans ascribe human capabilities, such as rational thought and feelings, to a robot, perceptions of the robot’s competence to perform its intended function are enhanced (Duffy, 2003). For example, Gong (2008) shows that virtual characters with a more human-like appearance are perceived as more competent to make decisions and trusted more. Therefore, features related to the robot itself, such as its appearance and functionality, are important for establishing trust with the user (Hancock et al., 2011). Thus:

H2.

Consumers’ perceived anthropomorphism of a humanoid service robot has a positive effect on consumers’ perceived trust.

Perceived enjoyment and intention to use

Early e-commerce studies indicate that perceived enjoyment mediates the relationship between trust and intentions to use (Sukhu et al., 2015; Wu and Chang, 2005). Researching the intentions to use service robots is crucial to reap their full benefits, for both consumers and service providers (Curran and Meuter, 2005). In research on service robots, a lack of trust is frequently cited as the main barrier preventing consumers from intending to use the service robot (Everett et al., 2017; Morgan, 2017). Research in various services marketing contexts, such as business (e.g. Barry et al., 2008), retailing (Nguyen et al., 2014) and SSTs (Chowdhury et al., 2014; Kaushik and Rahman, 2015), demonstrates that trust drives behavioural intentions. Furthermore, trust is included in multiple versions of the technology acceptance model as a predictor of intentions to use (Gefen et al., 2003). Studies in robotics often explain this relationship through flow theory (Nakamura and Csikszentmihalyi, 2014), which proposes that if people experience a feeling of total involvement when interacting with a robot, characterised by high perceived trust and interactivity, they enter a state of flow, that is, a mental state of high enjoyment that intrinsically motivates people to continue using the service (El Shamy and Hassanein, 2017; Lee et al., 2007; Lu et al., 2009; Wu and Chang, 2005; Zhou et al., 2010). In parallel, multiple HRI studies indicate that enjoyment is an important driver of behavioural intentions (Heerink et al., 2008; Hong et al., 2008). Thus:

H3.

Consumers’ perceived enjoyment fully mediates the effect of perceived trust on consumers’ intentions to use humanoid service robots, such that (a) consumers’ perceived trust has a positive effect on perceived enjoyment and (b) their perceived enjoyment has a positive effect on consumers’ intentions to use humanoid service robots.

Figure 1 provides an overview of the full conceptual model.

Method

The conceptual model was tested with a field study in a public service setting. Humanoid service robots are increasingly being integrated into public service settings to enhance customer service experiences (Edwards, 2014; Fan et al., 2016). They welcome visitors and provide location-specific information (Gockley et al., 2005; Kanda et al., 2010; Leite et al., 2013; Pinillos et al., 2016; Severinson-Eklundh et al., 2003), which are crucial tasks in public spaces (Duranti, 1997; Lee and Makatchev, 2009; Pan et al., 2015).

Participants

An open innovation campus in the south-eastern part of The Netherlands provided the public service setting. Several start-ups and well-established companies are located on-site, including a leading global professional services company. Visitors to the campus enter through a main entrance, which features a reception desk. A Pepper humanoid service robot (Figure 2) was placed close to this reception desk, tasked with welcoming visitors and employees and offering directions to specific locations on the campus. A total of 116 respondents participated in the study, but missing values resulted in the exclusion of two respondents, leaving a final sample of 114 participants. The average age of the participants was 34 years, 35 per cent of them were women and 56 per cent had prior experience with robots.

Procedure

The humanoid service robot Pepper can display turn-taking cues with its eyes (Softbank Robotics, 2017). Several other features enable Pepper to interact with humans in a natural and intuitive way. First, a network of internal sensors, including ultrasound transmitters and receivers, laser sensors and obstacle detectors, provides the robot with information about objects within a range of 3 m. Second, four directional microphones in the robot’s head, together with speakers, allow it to recognize the interlocutor’s location and the emotions transmitted by voice. Third, a 3D camera and two HD cameras transmit images, processed by shape recognition software, so Pepper can identify faces, objects, movements and facial expressions of emotions. Fourth, tactile sensors in the humanoid service robot’s hands facilitate social interactions (Softbank Robotics, 2017).

Upon entry to the campus, participants were invited to interact with the humanoid service robot and asked for permission to record the interaction on camera. If permission was granted, the participant walked up to the robot. The interaction started with the robot welcoming the participant and asking how he or she was doing, to ensure a natural interaction. The features of the robot remained constant, and it offered appropriate responses to the participant’s comments. Subsequently, the robot asked the participant for the reason for the visit and where the participant wanted to go. In response to the participant’s answer, the robot provided appropriate information about the exact location while physically pointing in that direction. Finally, participants filled out a survey on a tablet provided by the researcher.

Design and measurement

This field experiment used a one-factor, between-subjects experimental design. In one condition, the robot’s eye colour was constant, resembling human appearance (i.e. static condition). In the other condition, the service robot sent gaze turn-taking cues by changing eye colour between red (robot noticed and/or recognized a person), blue (robot is ready to receive input), green (robot is speaking and/or moving) and white (robot is starting the application), which mimics human social functioning (i.e. dynamic condition). As the experiment was conducted in a real-life setting, the robot was not reprogrammed between participants. Therefore, participants were non-randomly assigned to conditions; however, the conditions did not differ in participants’ age, gender or previous experience, which are important drivers of anthropomorphism (Epley et al., 2007) and intentions to use robots (de Graaf and Allouch, 2013). Participants were not informed of the meaning of the colours, because in interactions between humans, turn-taking conventions are likewise learned during the conversation (Stivers et al., 2009). The experiment took place over a three-day period and yielded 60 valid responses in the static condition and 54 valid responses in the dynamic condition.
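
Because assignment to conditions was non-random, the comparability check mentioned above matters. As a rough illustration, the sketch below compares the two conditions on age (t-test) and on gender and prior experience (chi-square tests); the data file and column names are hypothetical stand-ins for the study’s survey data.

```python
# A minimal sketch of a balance check between the static and dynamic
# conditions; "participants.csv" and its columns are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("participants.csv")
static = df[df["condition"] == "static"]
dynamic = df[df["condition"] == "dynamic"]

# Age: independent-samples t-test
t_age, p_age = stats.ttest_ind(static["age"], dynamic["age"])

# Gender and prior robot experience: chi-square tests on contingency tables
_, p_gender, _, _ = stats.chi2_contingency(
    pd.crosstab(df["condition"], df["gender"]))
_, p_exp, _, _ = stats.chi2_contingency(
    pd.crosstab(df["condition"], df["experience"]))

print(f"age p={p_age:.3f}, gender p={p_gender:.3f}, experience p={p_exp:.3f}")
```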

The study measures were adapted from extant literature to suit the humanoid service robot context. Specifically, the measures for perceived anthropomorphism (five items) came from Bartneck et al. (2009); perceived interaction comfort (one item, ‘I felt comfortable interacting with the robot’) was derived from work by Evers et al. (2008); trust (four items) came from Mayer et al. (1995); perceived enjoyment (four items) was adapted from Kulviwat et al. (2007) and intentions to use (two items) were based on work by Jackson et al. (1997). The anthropomorphism and enjoyment items relied on five-point semantic differential scales (Table I). All other constructs were measured using five-point Likert scales in which 1 indicated ‘strongly disagree’ and 5 ‘strongly agree’. The survey also included questions about age, gender and previous experience with robots.

Results

Partial least squares structural equation modelling (PLS-SEM) – an iterative combination of principal components analysis and ordinary least squares path analysis (Chin, 1998) – was used to analyse the proposed model. For relatively small sample sizes, PLS-SEM is more robust, because the model parameters are estimated in blocks and multivariate normality is not required (Hair et al., 2012). The software package was SmartPLS 3.2 (Ringle et al., 2015), and the bootstrapping procedure used 10,000 resamples to generate robust standard errors and t-statistics (Hair et al., 2016).
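
For intuition about the resampling step, the sketch below mimics the bootstrapping logic on a single standardized path coefficient with synthetic data. SmartPLS applies the same idea to the full PLS path model, so this is an illustrative stand-in, not the software’s actual implementation.

```python
# A minimal sketch of bootstrapped standard errors and t-statistics:
# resample cases with replacement 10,000 times and re-estimate the
# coefficient on each resample.
import numpy as np

rng = np.random.default_rng(42)

def std_slope(x, y):
    """Standardized OLS slope (the correlation for z-scored variables)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

def bootstrap_t(x, y, n_boot=10_000):
    estimate = std_slope(x, y)
    n = len(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        boot[b] = std_slope(x[idx], y[idx])
    se = boot.std(ddof=1)
    return estimate, se, estimate / se

# Synthetic data standing in for latent variable scores (n = 114)
x = rng.normal(size=114)
y = 0.47 * x + rng.normal(scale=0.9, size=114)
est, se, t = bootstrap_t(x, y)
print(f"beta = {est:.3f}, SE = {se:.3f}, t = {t:.2f}")
```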

Evaluation of measurement model

Prior to evaluating the measurement model, the trust measure and the perceived interaction comfort measure were subjected to a square root transformation, because of their strong negative skewness (Freeman and Tukey, 1950; Tukey, 1957). The evaluation of the measurement model addressed its internal reliability, convergent validity and discriminant validity (Hair et al., 2016). First, in support of acceptable internal reliability, the composite reliability values for all multi-item constructs ranged from 0.82 to 0.92 (Table I), exceeding the recommended threshold value of 0.70 (Hair et al., 2011). Second, convergent validity was established (Table I), after omitting one item for anthropomorphism and one item for trust, because all average variance extracted (AVE) values exceed 0.50 (Fornell and Larcker, 1981). Third, the square root of the AVE exceeds the inter-construct correlations (Table II), in support of acceptable discriminant validity (Fornell and Larcker, 1981).
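
The composite reliability and AVE values reported above follow the standard Fornell and Larcker (1981) formulas. As a check, this short sketch reproduces the Table I figures for the anthropomorphism construct from its reported loadings:

```python
# Composite reliability: CR = (sum(lam))^2 / ((sum(lam))^2 + sum(1 - lam^2));
# average variance extracted: AVE = mean(lam^2).
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings)
    return (lam ** 2).mean()

anthropomorphism = [0.837, 0.744, 0.701, 0.649]  # loadings from Table I
print(f"CR  = {composite_reliability(anthropomorphism):.3f}")       # 0.824
print(f"AVE = {average_variance_extracted(anthropomorphism):.3f}")  # 0.542
```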

Evaluation of structural model

Prior to assessing the structural model and the hypothesized paths, the overall fit of the model was evaluated. As can be observed in Figure 3, the R² values for each inner latent construct range between 0.121 and 0.262, indicating small to medium values (Chin, 1998). Tenenhaus et al. (2005) instead propose a goodness-of-fit (GoF) index to examine the model fit, defined as GoF = √(average communality × average R²). The obtained GoF value of 0.42 indicates adequate fit (Wetzels et al., 2009).

First, the structural model results (Figure 3) indicate that H1a is statistically significant at p <0.05, whereas H1b is not. Moreover, the path coefficients for H2, H3a and H3b are statistically significant at p <0.01. The effect of gaze turn-taking cues on perceived anthropomorphism is moderated by perceived interaction comfort (β = −0.185, p <0.05; R2 = 0.121). Illustrating the moderation effect, Figure 4 shows that when perceived interaction comfort is high, perceived anthropomorphism is higher for a service robot without gaze turn-taking cues (M =3.69) than for a robot with gaze turn-taking cues (M =3.18). In contrast, when perceived interaction comfort is low, perceived anthropomorphism is higher for a robot with gaze turn-taking cues (M =3.06) than for a robot without this ability (M =2.98). Figure 5 shows where the conditional slope differs significantly from 0 for the standardized latent variable scores: above the point at which perceived interaction comfort is 0.49 SDs above the mean, the slope of gaze turn-taking cues is negative and significantly different from 0 (p <0.05). Although directionally as expected, the effect is not significant for low perceived interaction comfort. Furthermore, trust can be explained by perceived anthropomorphism (β = 0.475, p <0.01; R2 = 0.226), and trust does not drive intention to use directly (β = 0.08, p >0.10; R2 = 0.196); rather, its effect is fully mediated by perceived enjoyment (indirect effect: β = 0.187, p <0.01), such that more trust is associated with higher perceived enjoyment (β = 0.472, p <0.01; R2 = 0.262) and perceived enjoyment positively influences intentions to use (β = 0.396, p <0.01; R2 = 0.196).
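
To show how such a conditional effect can be probed, the sketch below illustrates the simple-slopes logic behind Figures 4 and 5 on synthetic stand-in data: an OLS regression with an interaction term, after which the conditional slope of the cue manipulation and its standard error are traced across levels of standardized comfort.

```python
# A minimal simple-slopes sketch, assuming synthetic stand-ins for the
# standardized latent variable scores (not the study's data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 114
cue = rng.integers(0, 2, n)      # 0 = static, 1 = dynamic condition
comfort = rng.normal(size=n)     # standardized interaction comfort
anthro = -0.185 * cue * comfort + rng.normal(size=n)

# Columns: constant, cue, comfort, cue x comfort
X = sm.add_constant(np.column_stack([cue, comfort, cue * comfort]))
fit = sm.OLS(anthro, X).fit()
b, V = fit.params, fit.cov_params()

# Conditional slope of the cue at comfort level c is b1 + b3*c, with
# variance Var(b1) + c^2 * Var(b3) + 2c * Cov(b1, b3).
for c in (-1.0, 0.0, 0.49, 1.0):
    slope = b[1] + b[3] * c
    se = np.sqrt(V[1, 1] + c ** 2 * V[3, 3] + 2 * c * V[1, 3])
    print(f"comfort = {c:+.2f} SD: slope = {slope:+.3f}, t = {slope/se:+.2f}")
```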

Discussion

In the reported experimental field study, a humanoid robot displays gaze turn-taking cues in one condition and none in the other, revealing whether a more human-like appearance or more human-like social functioning has the stronger effect on anthropomorphism and trust. The empirical evidence in Figures 3 and 4 highlights several key findings. First, the results support a moderating role of perceived interaction comfort in the effect of gaze turn-taking cues on perceived anthropomorphism. When perceived interaction comfort is high, perceived anthropomorphism is significantly higher for a service robot without gaze turn-taking cues; the increased human-like appearance outweighs social functioning in terms of encouraging anthropomorphism toward service robots. When perceived interaction comfort is low, perceived anthropomorphism is directionally but not significantly higher for a service robot with turn-taking cues, suggesting that social functioning outweighs appearance in prompting such anthropomorphism. Second, perceived anthropomorphism drives trust in humanoid service robots, in line with previous HRI research (de Visser et al., 2016; Hancock et al., 2011; Heerink et al., 2010; Waytz et al., 2014). Third, consistent with existing literature, trust correlates positively with perceived enjoyment (Lee et al., 2007; Wu and Chang, 2005; Zhou et al., 2010), which in turn enhances intentions to use humanoid service robots (de Graaf and Allouch, 2013; Heerink et al., 2008; Hong et al., 2008).

Theoretical implications

Existing research in services marketing (Collier et al., 2014; Curran and Meuter, 2005; Gelbrich and Sattler, 2014) highlights the need to understand the adoption of SSTs in a public service setting, and of service robots in particular (Bartneck and Forlizzi, 2004). The present study answers this call by empirically assessing antecedents (i.e. anthropomorphism) and outcomes (i.e. enjoyment and intention to use) of trust in a specific SST in a public service setting. The results have important implications for service marketing and HRI research.

In particular, the present study sheds light on the contrasting perspectives that follow from the elicited agent knowledge mechanism and the sociality motivation mechanism in anthropomorphism theory (Epley et al., 2007). These two perspectives can be explained by literature on cognitive processing (Bodenhausen et al., 1994) and uncertainty reduction theory (Berger, 1986). First, this study provides evidence that when humans perceive interaction comfort as high, they tend to anthropomorphize robots with more human-like appearances. This finding can be explained by psychological research on cognitive processing showing that when humans experience neutral to positive emotions, they process interaction cues less carefully and use more superficial or cursory styles of thinking than humans who experience negative, stress-related emotions such as discomfort (Baron et al., 1994; Bodenhausen et al., 1994). Therefore, humans who feel comfortable interacting with a service robot might attend solely to superficial cues, such as appearance, and not to behavioural cues, such as gaze. Second, when customers experience low interaction comfort, our findings seem to indicate that people anthropomorphize robots with human-like social functioning features more readily. This is in line with uncertainty reduction theory, which stipulates that people pay greater attention to social functioning cues when they feel uncomfortable during the interaction.

Although HRI research often posits that all kinds of human-like features incite anthropomorphism (Fink, 2012), the current study provides evidence that moderating variables ultimately determine their effect. Furthermore, by combining central concepts from HRI literature and emerging concepts from SST adoption literature (i.e. trust, interaction comfort and anthropomorphism), this study gains insight into consumers’ adoption of service robots (Collier et al., 2014; Fan et al., 2016; Hancock et al., 2011; Kaushik and Rahman, 2015). Therefore, this study contributes to the integrated research model called for in recent literature (Kaushik and Rahman, 2015; Wirtz et al., 2018).

Studies in HRI that rely on both flow theory (El Shamy and Hassanein, 2017; Lu et al., 2009) and technology acceptance (de Graaf and Allouch, 2013; Heerink et al., 2008; Hong et al., 2008; Shin and Kim, 2008) establish enjoyment as an important predictor of intentions to use, yet research on its mediating role between trust and intentions to use service robots is scarce. The present study picks up on the notion that more trust is associated with higher enjoyment (Koufaris, 2002; Lee et al., 2007; Wu and Chang, 2005; Zhou et al., 2010) and conveys this point to marketing research.

Managerial implications

Successfully integrating service robots in service interactions has the potential to benefit both service consumers and providers (Allmendinger and Lombreglia, 2005; Bitner, 2001; Meuter et al., 2000; Zhou et al., 2013). Trust appears fundamental to the adoption of service robots in this setting and can be triggered by human-like cues that resemble humans in both appearance and social functioning. Which of these features is more effective depends on the experienced interaction comfort of the consumer. Concretely, the findings of this study can help designers and managers reap these benefits and enhance public service settings.

First, for target groups for whom either the nature of the service (e.g. grocery service) or individual characteristics (e.g. experience) tend to make customers feel comfortable, equipping a robot with a human appearance will generally be more effective. Other adjustments could make the humanoid service robot appear more human-like, such as altering the service robot’s facial features, eyebrows or cheeks (Walters et al., 2008). If, on the other hand, a service provider targets a group for whom either the nature of the service (e.g. medical service) or individual characteristics (e.g. social anxiety) tend to make consumers feel uncomfortable during the interaction, equipping a robot with social functioning features might be a more effective solution. For such target groups, designers could experiment with other turn-taking cues, such as intonation, paralanguage, body motion, sociocentric sequences, pitch or syntax (Duncan, 1972).

Most humanoid service robots currently on the market resemble humans in appearance and not in social functioning (Fink, 2012), so managers might seek to engineer the physical surroundings of the service encounter to enhance people’s interaction comfort and thereby create superior HRI. Controllable factors that induce greater interaction comfort include general interior features (e.g. flooring, colour schemes, lighting, music, scent, temperature, cleanliness), layout and design factors (e.g. furniture, space design and allocation) and decorations (e.g. wall decorations, signs) (Turley and Milliman, 2000).

Limitations and further research

Although the present study offers insights into several antecedents and outcomes of trust in service robots, it also has some limitations. First, the experiment was conducted in an open innovation campus in the south-eastern part of The Netherlands. This represented a suitable service context but also meant that the sample was generally tech-savvy and accustomed to robotics (56 per cent indicated prior experience in interacting with robots). Additional research could increase the generalizability of the findings by using a more diverse sample.

Second, participants’ service encounters with the humanoid robot were relatively short (M =42.2 s), which might have precluded some participants from noticing or interpreting the gaze turn-taking cues. Longer interactions might make these cues, or other social functioning cues, more salient. Longer interactions in combination with the inclusion of other human-related moderators, such as emotional state (Ho et al., 2008), represent promising avenues for further research.

Third, the results indicate that participants perceived the turn-taking cues used in this experiment as less human-like when they felt comfortable during the interaction. Although humans use gaze cues to clarify their turn-taking intentions (Duncan, 1972), they are unable to change the colour of their eyes. Researchers in this field might attempt to combine robot-related and human-related factors to validate the relative importance of the elicited agent knowledge account or sociality motivation in Epley et al.’s (2007) anthropomorphism theory. One possibility is to study turn-taking cues that both humans and robots are able to perform in the same way, such as gestures or gazing (Chao and Thomas, 2010; Salem et al., 2013).

Figures

Figure 1. Conceptual model and hypotheses

Figure 2. Pepper humanoid service robot

Figure 3. Structural model results

Figure 4. Results of simple slopes analysis of the interaction between gaze turn-taking cues and perceived interaction comfort on perceived anthropomorphism

Figure 5. Conditional effect of the interaction between gaze turn-taking cues and perceived interaction comfort on perceived anthropomorphism

Table I. Factor loadings, composite reliability and average variance extracted of the constructs and their items

Components and manifest variables Loading (t-value)
Perceived anthropomorphism CR: 0.824, AVE: 0.542
How did you perceive the robot: Fake/Natural 0.837 (19.88)*
How did you perceive the robot: Machine-like/Human-like 0.744 (10.87)*
How did you perceive the robot: Unconscious/Conscious 0.701 (7.96)*
How did you perceive the robot: Moving rigidly/Moving elegantly 0.649 (7.16)*
Trust CR: 0.838, AVE: 0.635
I felt like the robot had my best interest at heart 0.720 (9.62)*
The robot provided accurate information 0.786 (13.78)*
I felt I could rely on the robot to do what it was supposed to do 0.877 (30.45)*
Perceived enjoyment CR: 0.887, AVE: 0.663
The interaction with the robot made me feel: Unhappy/Happy 0.860 (31.27)*
The interaction with the robot made me feel: Annoyed/Pleased 0.761 (9.34)*
The interaction with the robot made me feel: Unsatisfied/Satisfied 0.843 (29.35)*
The interaction with the robot made me feel: Bored/Relaxed 0.789 (15.56)*
Intention to use CR: 0.923, AVE: 0.857
I intend to use robots in the future 0.924 (42.92)*
Using robots is a good idea 0.927 (49.04)*

Notes: CR: composite reliability; AVE: average variance extracted;

*p <0.01

Table II. Correlations and square root of the AVE

Construct 1 2 3 4 5 6
1. Gaze turn-taking cues *
2. Perceived interaction comfort 0.055 **
3. Anthropomorphism −0.091 0.271 0.736
4. Trust −0.090 0.454 0.475 0.797
5. Enjoyment −0.014 0.483 0.548 0.484 0.814
6. Intention to use 0.016 0.303 0.359 0.267 0.408 0.926

Notes: Values down the diagonal are the square roots of the AVE; all others are correlation coefficients;

*manipulation;

**single-item scale

References

Aggarwal, P. and McGill, A. (2007), “Is that car smiling at me? Schema congruity as a basis for evaluating anthropomorphized products”, Journal of Consumer Research, Vol. 34 No. 4, pp. 468-479, doi: 10.1086/518544.

Allmendinger, G. and Lombreglia, R. (2005), “Four strategies for the age of smart services”, Harvard Business Review, Vol. 83 No. 10, pp. 131-143.

Ambasna-Jones, M. (2017), “How social robots are dispelling myths and caring for humans”, available at: www.theguardian.com/medianetwork/2016/may/09/robots-social-health-care-elderly-children (accessed 6 November 2017).

Auh, S. (2005), “The effects of soft and hard service attributes on loyalty: the mediating role of trust”, Journal of Services Marketing, Vol. 19 No. 2, pp. 80-92, doi: 10.1108/08876040510591394.

Baron, R., Logan, H., Lilly, J., Inman, M. and Brennan, M. (1994), “Negative emotion and message processing”, Journal of Experimental Social Psychology, Vol. 30 No. 2, pp. 181-181.

Barry, J.M., Dion, P. and Johnson, W. (2008), “A cross‐cultural examination of relationship strength in B2B services”, Journal of Services Marketing, Vol. 22 No. 2, pp. 114-135, doi: 10.1108/08876040810862868.

Bartneck, C. and Forlizzi, J. (2004), “A design-centred framework for social human-robot interaction”, in Robot and Human Interactive Communication, ROMAN 2004, 13th IEEE International Workshop on, Kurashiki, Japan, IEEE, New York, NY, pp. 591-594, doi: 10.1109/ROMAN.2004.1374827.

Bartneck, C., Kulić, D., Croft, E. and Zoghbi, S. (2009), “Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots”, International Journal of Social Robotics, Vol. 1 No. 1, pp. 71-81.

Berger, C.R. (1986), “Uncertain outcome values in predicted relationships: uncertainty reduction theory then and now”, Human Communication Research, Vol. 13 No. 1, pp. 34-38, doi: 10.1111/j.1468-2958.1986.tb00093.x.

Berger, C.R. and Calabrese, R.J. (1974), “Some explorations in initial interaction and beyond: toward a developmental theory of interpersonal communication”, Human Communication Research, Vol. 1 No. 2, pp. 99-112, doi: 10.1111/j.1468-2958.1975.tb00258.x.

Bitner, M.J. (2001), “Self-service technologies: what do customers expect?”, Marketing Management, Vol. 10 No. 1, pp. 10-11.

Bodenhausen, G.V., Kramer, G.P. and Süsser, K. (1994), “Happiness and stereotypic thinking in social judgment”, Journal of Personality and Social Psychology, Vol. 66 No. 4, pp. 621-632.

Brave, S., Nass, C. and Hutchinson, K. (2005), “Computers that care: investigating the effects of orientation of emotion exhibited by an embodied computer agent”, International Journal of Human-Computer Studies, Vol. 62 No. 2, pp. 161-178, doi: 10.1016/j.ijhcs.2004.11.002.

Bruce, A., Nourbakhsh, I. and Simmons, R. (2001), “The role of expressiveness and attention in human-robot interaction”, paper presented at AAAI Fall Symposium Emotional and Intelligent II: The Tangled Knot of Social Cognition, 11-15 May, Washington, DC, available at: http://ieeexplore.ieee.org/abstract/document/1014396/

Burgoon, J.K., Bonito, J.A., Bengtsson, B., Cederberg, C., Lundeberg, M. and Allspach, L. (2000), “Interactivity in human-computer interaction: a study of credibility, understanding, and influence”, Computers in Human Behavior, Vol. 16 No. 6, pp. 553-574, doi: 10.1016/S0747-5632(00)00029-7.

Chao, C. and Thomas, A.L. (2010), “Turn taking for human-robot interaction”, in AAAI Fall Symposium: Dialog with Robots, 11-13 November 2010, Arlington, VA, AAAI Press, Menlo Park, pp. 132-134.

Chin, W.W. (1998), “The partial least squares approach to structural equation modelling”, Modern Methods for Business Research, Vol. 295 No. 2, pp. 295-336.

Chowdhury, R.I., Patro, S., Venugopal, P. and Israel, D. (2014), “A study on consumer adoption of technology facilitated services”, Journal of Services Marketing, Vol. 28 No. 6, pp. 471-483, doi: 10.1108/JSM-04-2013-0095.

Collier, J.E., Breazeale, M. and White, A. (2017), “Giving back the ‘self’ in self-service: customer preferences in self-service failure recovery”, Journal of Services Marketing, Vol. 31 No. 6, pp. 604-617, doi: 10.1108/JSM-07-2016-0259.

Collier, J.E., Sherrell, D.L., Babakus, E. and Horky, A.B. (2014), “Understanding the differences of public and private self-service technology”, Journal of Services Marketing, Vol. 28 No. 1, pp. 60-70.

Coulter, K.S. and Coulter, R.A. (2002), “Determinants of trust in a service provider: the moderating role of length of relationship”, Journal of Services Marketing, Vol. 16 No. 1, pp. 35-50, doi: 10.1108/08876040210419406.

Curran, J.M. and Meuter, M.L. (2005), “Self-service technology adoption: comparing three technologies”, Journal of Services Marketing, Vol. 19 No. 2, pp. 103-113, doi: 10.1108/08876040510591411.

de Graaf, M.M. and Allouch, S.B. (2013), “Exploring influencing variables for the acceptance of social robots”, Robotics and Autonomous Systems, Vol. 61 No. 12, pp. 1476-1486, doi: 10.1016/j.robot.2013.07.007.

de Visser, E.J., Monfort, S.S., McKendrick, R., Smith, M.A., McKnight, P.E., Krueger, F. and Parasuraman, R. (2016), “Almost human: anthropomorphism increases trust resilience in cognitive agents”, Journal of Experimental Psychology: Applied, Vol. 22 No. 3, pp. 331-348, doi: 10.1037/xap0000092.

DiSalvo, C.F. and Gemperle, F. (2003), “From seduction to fulfillment: the use of anthropomorphic form in design”, in Proceedings of the 2003 International Conference on Designing Pleasurable Products and Interfaces Pittsburgh, USA, ACM, New York, NY, pp. 67-72.

DiSalvo, C.F., Gemperle, F., Forlizzi, J. and Kiesler, S. (2002), “All robots are not created equal: the design and perception of humanoid robot heads”, in Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, London, England, 2002, ACM, New York, NY, pp. 321-326.

Duffy, B.R. (2003), “Anthropomorphism and the social robot”, Robotics and Autonomous Systems, Vol. 42 Nos 3/4, pp. 177-190, doi: 10.1016/S0921-8890(02)00374-3.

Duncan, S. (1972), “Some signals and rules for taking speaking turns in conversations”, Journal of Personality and Social Psychology, Vol. 23 No. 2, pp. 283-292, doi: 10.1037/h0033031.

Duranti, A. (1997), “Universal and culture‐specific properties of greetings”, Journal of Linguistic Anthropology, Vol. 7 No. 1, pp. 63-97, doi: 10.1525/jlin.1997.7.1.63.

Edwards, R. (2014), “Robot butler piloted at California hotel”, available at: www.telegraph.co.uk/travel/destinations/northamerica/usa/11031447/Robot-butler-piloted-at-California-hotel.html (accessed 16 November 2017).

El Shamy, N. and Hassanein, K. (2017), “A meta-analysis of enjoyment effect on technology acceptance: the moderating role of technology conventionality”, in Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS), HI, 2017, pp. 4139-4147.

Epley, N., Waytz, A. and Cacioppo, J.T. (2007), “On seeing human: a three-factor theory of anthropomorphism”, Psychological Review, Vol. 114 No. 4, pp. 864-886, doi: 10.1037/0033-295X.114.4.864.

Everett, J., Pizarro, D. and Crockett, M. (2017), “Why are we reluctant to trust robots?”, available at: www.theguardian.com/science/head-quarters/2017/apr/24/why-are-we-reluctant-to-trust-robots (accessed 16 November 2017).

Evers, V., Maldonado, H., Brodecki, T. and Hinds, P. (2008), “Relational vs group self-construal: untangling the role of national culture in HRI”, in Human-Robot Interaction (HRI), 3rd ACM/IEEE International Conference on, Amsterdam, the Netherlands, 2008, IEEE, Piscataway, pp. 255-262.

Eyssel, F., Kuchenbrandt, D. and Bobinger, S. (2011), “Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism”, in Proceedings of the 6th International Conference on Human-Robot Interaction, ACM, New York, NY, pp. 61-68.

Fan, A., Wu, L. and Mattila, A.S. (2016), “Does anthropomorphism influence customers’ switching intentions in the self-service technology failure context?”, Journal of Services Marketing, Vol. 30 No. 7, pp. 713-723, doi: 10.1108/JSM-07-2015-0225.

Fink, J. (2012), “Anthropomorphism and human likeness in the design of robots and human-robot interaction”, in International Conference on Social Robotics (ICSR), Chengdu, China, 2012, Springer, Heidelberg, Berlin, pp. 199-208.

Fong, T., Nourbakhsh, I. and Dautenhahn, K. (2003), “A survey of socially interactive robots”, Robotics and Autonomous Systems, Vol. 42 Nos 3/4, pp. 143-166, doi: 10.1016/S0921-8890(02)00372-X.

Fornell, C. and Larcker, D.F. (1981), “Evaluating structural equation models with unobservable variables and measurement error”, Journal of Marketing Research, Vol. 18 No. 1, pp. 39-50, doi: 10.2307/3151312.

Freeman, M.F. and Tukey, J.W. (1950), “Transformations related to the angular and the square root”, The Annals of Mathematical Statistics, Vol. 21 No. 4, pp. 607-611, doi: 10.1214/aoms/1177729756.

Gartner (2011), “Gartner customer 360 summit 2011”, available at: www.gartner.com/imagesrv/summits/docs/na/customer-360/C360_2011_brochure_FINAL.pdf (accessed 10 November 2017).

Gefen, D., Karahanna, E. and Straub, D.W. (2003), “Trust and TAM in online shopping: an integrated model”, MIS Quarterly, Vol. 27 No. 1, pp. 51-90.

Gelbrich, K. and Sattler, B. (2014), “Anxiety, crowding, and time pressure in public self-service technology acceptance”, Journal of Services Marketing, Vol. 28 No. 1, pp. 82-94, doi: 10.1108/JSM-02-2012-0051.

Gockley, R., Bruce, A., Forlizzi, J., Michalowski, M., Mundell, A., Rosenthal, S., Sellner, B., Simmons, R., Snipes, K., Schultz, A.C. and Wang, J. (2005), “Designing robots for long-term social interaction”, in Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on, Edmonton, Canada, 2005, IEEE, Piscataway, pp. 1338-1343.

Goffman, E. (1979), “Footing”, Semiotica, Vol. 25 Nos 1/2, pp. 1-30, doi: 10.1515/semi.1979.25.1-2.1.

Gong, L. (2008), “How social is social responses to computers? The function of the degree of anthropomorphism in computer representations”, Computers in Human Behavior, Vol. 24 No. 4, pp. 1494-1509, doi: 10.1016/j.chb.2007.05.007.

Goodwin, C. (1980), “Restarts, pauses, and the achievement of mutual gaze at turn-beginning”, Sociological Inquiry, Vol. 50 Nos 3/4, pp. 272-302, doi: 10.1111/j.1475-682X.1980.tb00023.x.

Gross, H.M., Koenig, A., Boehme, H.J. and Schroeter, C. (2002), “Vision-based Monte Carlo self-localization for a mobile service robot acting as shopping assistant in a home store”, in Intelligent Robots and Systems, IEEE/RSJ International Conference on, Lausanne, Switzerland, 2002, IEEE, Piscataway, pp. 256-262.

Hair, J.F., Ringle, C.M. and Sarstedt, M. (2011), “PLS-SEM: indeed a silver bullet”, Journal of Marketing Theory and Practice, Vol. 19 No. 2, pp. 139-152, doi: 10.2753/MTP1069-6679190202.

Hair, J.F., Hult, G.T.M., Ringle, C.M. and Sarstedt, M. (2016), A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), Sage Publications, Thousand Oaks.

Hair, J.F., Sarstedt, M., Ringle, C.M. and Mena, J.A. (2012), “An assessment of the use of partial least squares structural equation modeling in marketing research”, Journal of the Academy of Marketing Science, Vol. 40 No. 3, pp. 414-433, doi: 10.1007/s11747-011-0261-6.

Hancock, P.A., Billings, D.R., Schaefer, K.E., Chen, J.Y., de Visser, E.J. and Parasuraman, R. (2011), “A meta-analysis of factors affecting trust in human-robot interaction”, Human Factors: The Journal of the Human Factors and Ergonomics Society, Vol. 53 No. 5, pp. 517-527, doi: 10.1177/0018720811417254.

Harris, L.C. and Goode, M.M.H. (2010), “Online servicescapes, trust and purchase intentions”, Journal of Services Marketing, Vol. 24 No. 3, pp. 230-243, doi: 10.1108/08876041011040631.

Heerink, M., Kröse, B., Evers, V. and Wielinga, B. (2008), “The influence of social presence on acceptance of a companion robot by older people”, Journal of Physical Agents (Jopha), Vol. 2 No. 2, pp. 33-39, doi: 10.14198/JoPha.2008.2.2.05.

Heerink, M., Kröse, B., Evers, V. and Wielinga, B. (2010), “Assessing acceptance of assistive social agent technology by older adults: the almere model”, International Journal of Social Robotics, Vol. 2 No. 4, pp. 361-375, doi: 10.1007/s12369-010-0068-5.

Ho, C.C., Macdorman, K.F. and Pramono, Z.D. (2008), “Human emotion and the uncanny valley: a GLM, MDS, and isomap analysis of robot video ratings”, in Human-Robot Interaction (HRI), 3rd ACM/IEEE International Conference on, Amsterdam, the Netherlands, 2008, IEEE, Piscataway, pp. 169-176.

Hong, S.T., Shin, J.C. and Kang, M.S. (2008), “A study on factors influencing consumers' intention to adopt intelligent home robot services-applying technology acceptance model and diffusion of innovation model”, Asia Marketing Journal, Vol. 9 No. 4, pp. 271-303.

IDC (2017), “Worldwide semiannual robotics and drones spending guide”, available at: www.idc.com/getdoc.jsp?containerId=IDC_P33201 (accessed 20 February 2018).

International Federation of Robotics (2017), “Trends in industrial and service robot application”, available at: https://ifr.org/free-downloads/ (accessed 10 February 2018).

Ivanov, S.H., Webster, C. and Berezina, K. (2017), “Adoption of robots and service automation by tourism and hospitality companies”, paper presented at the INVTUR Conference, 17-19 May 2017, Aveiro, Portugal, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2964308

Jackson, C.M., Chow, S. and Leitch, R.A. (1997), “Toward an understanding of the behavioral intention to use an information system”, Decision Sciences, Vol. 28 No. 2, pp. 357-389, doi: 10.1111/j.1540-5915.1997.tb01315.x.

Kanda, T., Shiomi, M., Miyashita, Z., Ishiguro, H. and Hagita, N. (2010), “A communication robot in a shopping mall”, IEEE Transactions on Robotics, Vol. 26 No. 5, pp. 897-913, doi: 10.1109/TRO.2010.2062550.

Kaushik, A.K. and Rahman, Z. (2015), “An alternative model of self-service retail technology adoption”, Journal of Services Marketing, Vol. 29 No. 5, pp. 406-420, doi: 10.1108/JSM-08-2014-0276.

Kiesler, S., Powers, A., Fussell, S.R. and Torrey, C. (2008), “Anthropomorphic interactions with a robot and robot-like agent”, Social Cognition, Vol. 26 No. 2, pp. 169-181.

Koufaris, M. (2002), “Applying the technology acceptance model and flow theory to online consumer behavior”, Information Systems Research, Vol. 13 No. 2, pp. 205-223, doi: 10.1287/isre.13.2.205.83.

Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F. and Kircher, T. (2008), “Can machines think? Interaction and perspective taking with robots investigated via fMRI”, PloS One, Vol. 3 No. 7, p. e2597, doi: 10.1371/journal.pone.0002597.

Kulviwat, S., Bruner, G.C. II, Kumar, A., Nasco, S.A. and Clark, T. (2007), “Toward a unified theory of consumer acceptance technology”, Psychology and Marketing, Vol. 24 No. 12, pp. 1059-1084, doi: 10.1002/mar.20196.

Lee, M.K. and Makatchev, M. (2009), “How do people talk with a robot? An analysis of human-robot dialogues in the real world”, in CHI'09 Extended Abstracts on Human Factors in Computing Systems, ACM, Boston, pp. 3769-3774, doi: 10.1145/1520340.1520569.

Lee, K.C., Kang, I. and McKnight, D.H. (2007), “Transfer from offline trust to key online perceptions: an empirical study”, IEEE Transactions on Engineering Management, Vol. 54 No. 4, pp. 729-741, doi: 10.1109/TEM.2007.906851.

Leite, I., Martinho, C. and Paiva, A. (2013), “Social robots for long-term interaction: a survey”, International Journal of Social Robotics, Vol. 5 No. 2, pp. 291-308, doi: 10.1007/s12369-013-0178-y.

Lu, Y., Zhou, T. and Wang, B. (2009), “Exploring Chinese users’ acceptance of instant messaging using the theory of planned behavior, the technology acceptance model, and the flow theory”, Computers in Human Behavior, Vol. 25 No. 1, pp. 29-39, doi: 10.1016/j.chb.2008.06.002.

Luo, J.T., McGoldrick, P., Beatty, S. and Keeling, K.A. (2006), “On‐screen characters: their design and influence on consumer trust”, Journal of Services Marketing, Vol. 20 No. 2, pp. 112-124, doi: 10.1108/08876040610657048.

Mayer, R.C., Davis, J.H. and Schoorman, F.D. (1995), “An integrative model of organizational trust”, Academy of Management Review, Vol. 20 No. 3, pp. 709-734, doi: 10.2307/258792.

Meuter, M.L., Ostrom, A.L., Roundtree, R.I. and Bitner, M.J. (2000), “Self-service technologies: understanding customer satisfaction with technology-based service encounters”, Journal of Marketing, Vol. 64 No. 3, pp. 50-64, doi: 10.1509/jmkg.64.3.50.18024.

Morgan, B. (2017), “10 things robots can't do better than humans”, available at: www.forbes.com/sites/blakemorgan/2017/08/16/10-things-robots-cant-do-better-than-humans/#5e4795a1c83d (accessed 14 November 2017).

Mourey, J.A., Olson, J.G. and Yoon, C. (2017), “Products as pals: engaging with anthropomorphic products mitigates the effects of social exclusion”, Journal of Consumer Research, Vol. 44 No. 2, pp. 414-431, doi: 10.1093/jcr/ucx038.

Mutlu, B., Shiwa, T., Kanda, T., Ishiguro, H. and Hagita, N. (2009), “Footing in human-robot conversations: how robots might shape participant roles using gaze cues”, in Proceedings of the 4th ACM/IEEE international conference on Human robot interaction, ACM, La Jolla, pp. 61-68, doi: 10.1145/1514095.1514109.

Nakamura, J. and Csikszentmihalyi, M. (2014), “The concept of flow”, in Flow and the Foundations of Positive Psychology, Springer, Dordrecht.

Nguyen, B., Klaus, P. and Simkin, L. (2014), “It’s just not fair: exploring the effects of firm customization on unfairness perceptions, trust and loyalty”, Journal of Services Marketing, Vol. 28 No. 6, pp. 484-497, doi: 10.1108/JSM-05-2013-0113.

Oh, H., Jeong, M. and Baloglu, S. (2013), “Tourists' adoption of self-service technologies at resort hotels”, Journal of Business Research, Vol. 66 No. 6, pp. 692-699, doi: 10.1016/j.jbusres.2011.09.005.

Pan, Y., Okada, H., Uchiyama, T. and Suzuki, K. (2015), “On the reaction to robot’s speech in a hotel public space”, International Journal of Social Robotics, Vol. 7 No. 5, pp. 911-920, doi: 10.1007/s12369-015-0320-0.

Parasuraman, A. (1996), “Understanding and leveraging the role of customer service in external, interactive and internal marketing”, paper presented at Frontiers in Services Conference, Nashville, TN.

Pinillos, R., Marcos, S., Feliz, R., Zalama, E. and Gómez-García-Bermejo, J. (2016), “Long-term assessment of a service robot in a hotel environment”, Robotics and Autonomous Systems, Vol. 79 No. 1, pp. 40-57, doi: 10.1016/j.robot.2016.01.014.

Qing-xiao, Y., Can, Y., Zhuang, F. and Yan-zheng, Z. (2010), “Research of the localization of restaurant service robot”, International Journal of Advanced Robotic Systems, Vol. 7 No. 3, pp. 227-238, doi: 10.5772/9706.

Richards, D. and Bransky, K. (2014), “ForgetMeNot: what and how users expect intelligent virtual agents to recall and forget personal conversational content”, International Journal of Human-Computer Studies, Vol. 72 No. 5, pp. 460-476, doi: 10.1016/j.ijhcs.2014.01.005.

Ringle, C.M., Wende, S. and Becker, J.M. (2015), “SmartPLS 3”, SmartPLS GmbH, Boenningstedt, available at: www.smartpls.com (accessed 1 December 2017).

Robertson, N., McDonald, H., Leckie, C. and McQuilken, L. (2016), “Examining customer evaluations across different self-service technologies”, Journal of Services Marketing, Vol. 30 No. 1, pp. 88-102, doi: 10.1108/JSM-07-2014-0263.

Sacks, H., Schegloff, E.A. and Jefferson, G. (1974), “A simplest systematics for the organization of turn-taking for conversation”, Language, Vol. 50 No. 4, pp. 696-735, doi: 10.2307/412243.

Salem, M., Eyssel, F., Rohlfing, K., Kopp, S. and Joublin, F. (2013), “To err is human (-like): effects of robot gesture on perceived anthropomorphism and likability”, International Journal of Social Robotics, Vol. 5 No. 3, pp. 313-323, doi: 10.1007/s12369-013-0196-9.

Sekhon, H., Roy, S., Shergill, G. and Pritchard, A. (2013), “Modelling trust in service relationships: a transnational perspective”, Journal of Services Marketing, Vol. 27 No. 1, pp. 76-86, doi: 10.1108/08876041311296392.

Severinson-Eklundh, K., Green, A. and Hüttenrauch, H. (2003), “Social and collaborative aspects of interaction with a service robot”, Robotics and Autonomous Systems, Vol. 42 Nos 3/4, pp. 223-234, doi: 10.1016/S0921-8890(02)00377-9.

Shin, D.H. and Kim, W.Y. (2008), “Applying the technology acceptance model and flow theory to Cyworld user behavior: implication of the Web 2.0 user acceptance”, CyberPsychology and Behavior, Vol. 11 No. 3, pp. 378-382, doi: 10.1089/cpb.2007.0117.

SoftBank Robotics (2017), “Who is Pepper?”, available at: www.ald.softbankrobotics.com/en/robots/pepper (accessed 15 December 2017).

Stivers, T., Enfield, N.J., Brown, P., Englert, C., Hayashi, M., Heinemann, T. and Levinson, S.C. (2009), “Universals and cultural variation in turn-taking in conversation”, Proceedings of the National Academy of Sciences, Vol. 106 No. 26, pp. 10587-10592, doi: 10.1073/pnas.0903616106.

Sukhu, A., Zhang, T. and Bilgihan, A. (2015), “Factors influencing information-sharing behaviors in social networking sites”, Services Marketing Quarterly, Vol. 36 No. 4, pp. 317-334, doi: 10.1080/15332969.2015.1076697.

Tenenhaus, M., Vinzi, V.E., Chatelin, Y.M. and Lauro, C. (2005), “PLS path modeling”, Computational Statistics and Data Analysis, Vol. 48 No. 1, pp. 159-205, doi: 10.1016/j.csda.2004.03.005.

Tukey, J.W. (1957), “On the comparative anatomy of transformations”, The Annals of Mathematical Statistics, Vol. 28 No. 3, pp. 602-632.

Turley, L.W. and Milliman, R.E. (2000), “Atmospheric effects on shopping behavior: a review of the experimental evidence”, Journal of Business Research, Vol. 49 No. 2, pp. 193-211, doi: 10.1016/S0148-2963(99)00010-7.

Walters, M.L., Syrdal, D.S., Dautenhahn, K., te Boekhorst, R. and Koay, K.L. (2008), “Avoiding the uncanny valley: robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion”, Autonomous Robots, Vol. 24 No. 2, pp. 159-178, doi: 10.1007/s10514-007-9058-3.

Waytz, A., Heafner, J. and Epley, N. (2014), “The mind in the machine: anthropomorphism increases trust in an autonomous vehicle”, Journal of Experimental Social Psychology, Vol. 52 No. 1, pp. 113-117, doi: 10.1016/j.jesp.2014.01.005.

Wetzels, M., Odekerken-Schröder, G. and Van Oppen, C. (2009), “Using PLS path modeling for assessing hierarchical construct models: guidelines and empirical illustration”, MIS Quarterly, Vol. 33 No. 1, pp. 177-195.

Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2018), “Brave new world: service robots in the frontline”, Journal of Service Management, Vol. 29 No. 5, pp. 907-931, doi: 10.1108/JOSM-04-2018-0119.

Wu, J.J. and Chang, Y.S. (2005), “Towards understanding members' interactivity, trust, and flow in online travel community”, Industrial Management & Data Systems, Vol. 105 No. 7, pp. 937-954.

Zhou, T., Li, H. and Liu, Y. (2010), “The effect of flow experience on mobile SNS users' loyalty”, Industrial Management & Data Systems, Vol. 110 No. 6, pp. 930-946, doi: 10.1108/02635571011055126.

Zhou, Y., Huang, M., Tsang, A. and Zhou, N. (2013), “Recovery strategy for group service failures”, European Journal of Marketing, Vol. 47 No. 8, pp. 1133-1156, doi: 10.1108/03090561311324255.

Złotowski, J., Proudfoot, D., Yogeeswaran, K. and Bartneck, C. (2015), “Anthropomorphism: opportunities and challenges in human–robot interaction”, International Journal of Social Robotics, Vol. 7 No. 3, pp. 347-360, doi: 10.1007/s12369-014-0267-6.

Acknowledgements

This research was supported by the Province of Limburg, The Netherlands, under grant number SAS-2014-02207. We would like to thank Accenture Technology for allowing us to make use of their humanoid service robot and for re-programming the robot’s software to facilitate our experiment. In addition, we would like to thank Nivard Beumers for constructive criticism of the manuscript.

Corresponding author

Michelle M.E. van Pinxteren can be contacted at: michelle.vanpinxteren@zuyd.nl
