With the launch of Siri, a conversational assistant presented by Apple in 2011, voice-enabled personal assistants became accessible to the masses (Hoy 2018). As speech is the main channel of communication between humans (Flanagan 1972; Schafer 1995) and is considered an innate human behavior (Pinker 1994), interacting with a voice interface is intuitive (Cohen, Giangola, and Balogh 2004). Studies conducted under the Computers Are Social Actors (CASA) paradigm indicate that speech output and interactivity are two main factors eliciting social reactions in users (Nass et al. 1993). Users apply human principles such as reciprocity and team affiliation when interacting with computers (Nass and Moon 2000). Because voice assistants are able to send social cues, we assumed that subjects would show social reactions towards an Amazon Echo. Focusing on the social norm of reciprocity, we measured whether people provide more help to the assistant after being told that they and the assistant are interdependent, compared to being independent of each other.
A laboratory experiment with 120 participants was conducted in which participants played an interactive game using an Amazon Echo. Team affiliation was manipulated by telling one group that their game performance would be rated individually, while the interdependent group was told that they would be assessed on their joint performance with the assistant. To operationalize reciprocity, we opted for a behavioral measure of participants' willingness to help the assistant: participants were asked to name potential cities in which the game could take place. They could name any number of cities; we assumed a relationship between the number of cities named and the level of cooperativeness towards the assistant.
Results show a significant main effect of interdependence on the evaluation of Alexa as well as on the number of cities named, indicating that participants showed reciprocal behavior towards the voice assistant.