Published in: International Journal of Social Robotics 1/2018

Open Access 02.11.2017

Long-Term Cohabitation with a Social Robot: A Case Study of the Influence of Human Attachment Patterns

Authors: Michał Dziergwa, Mirela Kaczmarek, Paweł Kaczmarek, Jan Kędzierski, Karolina Wadas-Szydłowska



Abstract

This paper presents the methodology, setup and results of a study involving long-term cohabitation with a fully autonomous social robot. During the experiment, three people with different attachment styles (as defined by John Bowlby) spent ten days each with an EMYS type robot installed in their own apartments. It was hypothesized that the attachment patterns represented by the test subjects influence the interaction. In order to provide engaging and non-schematic actions suitable for the experiment requirements, the existing robot control system was modified, which allowed EMYS to become an effective home assistant. Experiment data was gathered using the robot's state logging utility (during the cohabitation period) and in-depth interviews (after the study). Based on the analyzed data, it was concluded that the satisfaction stemming from prolonged cohabitation and the assessment of the robot's operation depend on the user's attachment style. The results lead to the first guidelines for personalizing a robot's behavior to different users' attachment patterns. The study confirmed the readiness of the EMYS robot for satisfying, autonomous, long-term cohabitation with users.

1 Introduction

It would seem that the growing importance of human–machine interaction and the corresponding decline in the intensity of direct personal contact that we have observed in recent years is an irreversible process. This is why a lot of effort is put into HRI (human–robot interaction) studies, where social robots play the roles of personal assistants, trainers, therapists or teachers. They have proved to be a useful tool in motivating children to exercise and eat healthy, reducing anxiety or subjectively perceived pain, and relieving symptoms of various disorders [15, 17, 20, 23, 24, 31, 35, 38–40, 42].
Universal access to the Internet has contributed to human addiction to the mass media, which now seem to be permanently etched into the landscape of modern society. They make up the foundation of a complex system of social communication and to a large extent organize and create our lives, affecting various aspects of them, such as our intellect, emotions, feelings, or social attitudes. The human need for constant access to information can be used to stimulate human–robot interaction by letting the machine become an interface to various data sources. The robot should deliver the information in a natural way and in line with human expectations. This can have a positive impact on the user's level of attachment, a factor that is highly desirable in social robotics.
The implementation of long-term experiments is very complex and therefore rare. The difficulty in such studies is to avoid schematic behavior of the robot and the loss of user engagement during interaction. Robot reliability is also an important issue in long-term HRI studies, since one critical failure can permanently destroy a study sample. In addition, preparation and extensive testing of complex experiment scenarios is extremely time consuming. One way to deal with these difficulties is the often overused Wizard of Oz method [30, 32], where a robot is controlled remotely. Unfortunately, depriving the robot of autonomy causes the loss of information about the technical aspects of the interaction, without which it is impossible to further the development of robots and verify their current capabilities [7]. Researchers continue to seek methods of establishing and maintaining long-term relations between humans and robots.
In this paper, we will describe an experiment devised to study how satisfaction stemming from contact with a robot during extended cohabitation depends on a person's attachment style. The theory of attachment, developed by John Bowlby and summarized between 1969 and 1980 in his "Attachment and Loss" trilogy [3–5], indicates that children are biologically predisposed to form attachments. Bowlby postulated that this kind of behavior is instinctive and triggered whenever proximity to the mother is threatened. Therefore, the relationship between a developing child and their primary caregivers has various repercussions that manifest themselves later in life. The styles are associated with the tendency to react in accordance with definite emotional (changeable emotions, variability of moods), cognitive (the way of interpreting the ambient world), and behavioral (reaction to other people and events) schemes. There are three attachment styles:
  • Secure style: individuals representing this style establish proper relations with others based on trust and a sense of security, and they are not afraid of losing a relationship. Such people are direct, empathetic, openly communicate their emotions, and respect the needs of others.
  • Anxious-ambivalent style: individuals representing this style tend to be overly dependent in their relationships and desperately try to maximize the time spent with their loved ones, whose absence causes fear of loss. They are prone to resentment when they do not receive enough attention from their partners.
  • Avoiding style: individuals representing this style avoid direct contact with people and are not particularly interested in creating relationships. They tend to be loners, more focused on practical things.
Further studies indicate that the style of attachment is reflected even in the treatment of, and attachment to, objects [29, 34]. A social robot possessing human characteristics and operating as a home assistant may therefore be a subject of human attachment. Bowlby's theory has already been considered in the context of social robotics [37]; however, the research conducted so far has been related to its implementation in the behavior of robots [21, 22, 25].
It is believed that the emotional equipment and behavior of a robot should depend on who the user is. Therefore, robots accompanying anxious-ambivalent people should be more eager to initiate contact and be more visible and present in the lives of their users. On the other hand, the robot should not initiate as much contact with avoidant individuals. It can be assumed that for people with a high level of security and trust, this aspect of the robot's behavior will be less important. It is also likely that anxious-ambivalent people are more prone to get angry at the robot's shortcomings (inadequate set of dialog options, limited number of functions, etc.) and irritated at its operation, because their need for attention is not satisfied.
The novel experiment methodology described in this paper allowed us to study two very important research subjects. The first is the verification of long-term, autonomous operation of a social robot. A positive outcome of this verification is the first step towards introducing social robots as useful home assistants. The data collected during the research allowed us to deepen our knowledge about designing, creating and controlling social robots. It also gave a lot of useful information about user expectations regarding living with a robotic personal assistant. The second aspect, rooted in psychology and social studies, is strongly tied to the robot's behavior. We were able to witness how people with different styles of attachment establish a relation with a social robot during cohabitation, and whether and how they can become emotionally attached.

2 Experiment Setup

The main research platform utilized in the experiment was the expressive EMYS type head [27] coupled with the underlying control system. The platform's capabilities underwent major upgrades in both hardware and software. The motivation for this was twofold. Firstly, the robot needed to be adapted to a home environment so that the various mechanical components (cables, speakers, etc.) would not distract the user from the robot's face during interaction. This had been found to be of great importance during previous studies [9]. Secondly, EMYS' capabilities had to be expanded to provide means of weaving non-repetitive information into the experiment scenario in order to make it more engaging for the study participants.

2.1 Robot

The EMYS (EMotive headY System) type head used as a basis for creating the hardware platform was developed during the FP7 LIREC project as a part of the anthropomorphic FLASH platform [16, 28]. The head consists of three horizontal disks (imitating forehead and chin movements) mounted on a neck. It also possesses a set of eyeballs that can rotate and pop out. Before the study described in this paper, both robots had been thoroughly verified in a series of studies which proved that they are engaging and widely accepted interaction partners [9]. EMYS' appearance allows him to bypass the uncanny valley issue while retaining the ability to recognizably express basic human emotions [27, 36].
When operating as a standalone platform, EMYS usually requires additional components (speakers, Kinect sensor, etc.). He therefore required modifications to be able to function properly in a home environment. Most importantly, the audio system of the robot was modified: a set of speakers was mounted inside the bottom disc of the robot. This required the robot's power supply system to be modified accordingly, so a dedicated DC/DC converter was installed along with a 6 W stereo audio amplifier. Additionally, a Kinect sensor was permanently attached to EMYS' base, and the base itself was made sturdier to increase the stability of the robot during rapid movements.
The master PC controller used during the study was a Mac mini (i7-3615QM, 16 GB RAM, 256 GB SSD). The whole hardware platform can be seen in Fig. 1.

2.2 Control System Overview

The control system that EMYS utilized during the study is open-source software based on Gostai Urbi [19] and compliant with the three-layer architecture paradigm [18].
The lowest layer of the control system consists of modules called UObjects, which enable the programmer to access motion controllers, sensors and external software. These modules, written in C++, can be dynamically loaded into the Urbi engine, which takes care of their orchestration (e.g. scheduling and parallelization of tasks).
The middle layer of the control architecture is responsible for the implementation of competencies, which define what tasks the robot will be able to perform. The competencies are usually implemented as sets of functions delivered by the low level modules. Urbi delivers a script language called urbiscript that gives the programmer a quick way of creating and synchronizing complex behaviors. The designed control system enables accessing the robot hardware and competencies in a unified manner—using a tree structure called robot. It makes using the API, implemented in the middle layer, more convenient and helps to maintain the modularity of software. Elements of the robot structure have been grouped based on their role, e.g. audio, video, head, emotions, network. The middle layer also contains the competency manager which decides how the specific behaviors should be realized, depending on the current physical setup of the robot (the head can operate as a standalone robot or be combined with a torso, arms and a mobile platform and every combination requires a different set of scripts in the competency manager).
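To illustrate how the middle layer wraps low-level modules, consider the following minimal urbiscript sketch. It is a hypothetical reconstruction: the slot and function names (robot.audio.say, the USpeech call) follow the naming conventions described above but are assumptions, not the actual EMYS sources.

    // Hypothetical sketch: exposing a low-level module through the unified
    // "robot" tree (names are illustrative assumptions).
    var Global.robot = Object.clone;
    var robot.audio = Object.clone;

    // Wrap the speech synthesis module in a middle-layer competency slot.
    function robot.audio.say(text)
    {
      USpeech.say(text);  // low-level call delivered by the UObject
    };

    // Any higher-layer script can now use the unified API:
    robot.audio.say("Hello, I am EMYS.");

Grouping competencies under one tree keeps the highest layer independent of which concrete modules happen to be loaded.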
Finally, the highest layer should contain a dedicated decision system responsible for utilizing the behaviors provided by the middle layer with respect to a specific application. This layer can contain anything from a simple remote controller to a complex system simulating human mind.
An overview of the three-layer control system implementation is presented in Fig. 2. A full list of all low-level modules can be found in Table 1. A more complete description of the control system can be found in [26, 28] and in the on-line documentation [11].

2.3 Competencies

In the middle layer, various low-level functions (provided by the modules loaded into the control system) are combined to form competencies. Competencies are then further organized inside the competency manager (a part of the middle layer). From a technical point of view, the manager comprises a set of scripts which utilize the existing software substructures of the robot structure (video, audio, head, etc.) in order to implement competencies that are application specific. Therefore, the manager needs to be extended whenever the existing competency set is not sufficient to fulfill the tasks required by the current scenario. However, the existing competencies should never be modified unless the physical setup of the robot changes.
Sending an e-mail is an example of a complex function implemented in the competency manager layer. A message is sent when such an action is requested by the user via the highest layer of the control system. The user declares whether to include a voice message and a photo. This request is then passed on to the competency manager, which uses the Kinect sensor to take a photo using a function from the video structure and then uses functions from the audio structure to record a voice message and convert it to mp3 format. The last step is to compose the e-mail, add the attachments, and address the message. These steps are realized by various functions from the network structure. A diagram of this competency is presented in Fig. 3.
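The flow of this competency can be sketched in urbiscript roughly as follows; every function name below is a hypothetical stand-in for the actual competency manager code, chosen to mirror the structures named in the text.

    // Hypothetical sketch of the e-mail competency described above.
    function robot.competency.sendMail(recipient, withPhoto, withVoice)
    {
      var attachments = [];
      if (withPhoto)
      {
        robot.video.takePhoto("photo.jpg");   // Kinect snapshot (video structure)
        attachments += ["photo.jpg"];
      };
      if (withVoice)
      {
        robot.audio.record("message.wav", 10);           // record a voice note
        robot.audio.toMp3("message.wav", "message.mp3"); // conversion (UMP3)
        attachments += ["message.mp3"];
      };
      // Compose, attach and address the message (network structure / UMail).
      robot.network.mail.send(recipient, "Message from EMYS", attachments);
    };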
Table 1
List of all low-level modules (asterisk marks modules that were used in the experiment scenario)

Communication
  UDynamixel*: Transfers data using the Dynamixel protocol, which enables controlling the actuators driving the head
  UAria: Enables controlling a mobile platform utilizing the ARIA library, which gives full compatibility with Mobile Robots products and offers support for popular laser scanners
  USerial: Transfers data via serial port

Auditory
  UPlayer: Simple module for playing prerecorded .wav files
  UPlayerNext*: Advanced module for sound playback, recording, BPM counting, and applying audio effects
  URecog*: Uses the Microsoft Speech Platform to recognize speech using available microphones
  USpeech*: Uses the Microsoft Speech Platform for real-time speech synthesis and viseme generation
  UMP3*: PCM (.wav) to MP3 converter

Video
  UCamera: Image capture functions and camera settings
  UImageTool*: Basic image processing such as blurring, thresholding, morphological operations, drawing basic shapes, color conversion, I/O operations, etc.
  UObjectDetector*: Detects objects, e.g. human faces or body parts, using a Haar classifier
  UColorDetector*: Color detection in HSV space
  UMoveDetector*: Movement detection
  UKinect*: Extracts data from a Kinect sensor: silhouette/skeleton tracking and detection, gesture and facial recognition, etc.

Machine learning
  UKNearest*: Data classification algorithm
  UEigenfaces: User recognition algorithm

Emotional
  UWASABI*: Implementation of the WASABI emotional system [2]
  UPAD: Implementation of a dynamic PAD-based model of emotion [33]

Network
  UBrowser*: Implements the functions of a web browser and an RSS reader
  UMail*: Serves as an e-mail client with the ability to check and read mails and send messages with various types of attachments
  UFacebook*: Module to handle Facebook's social networking services
  UGCalendar*: Connects EMYS with a personal Google Calendar and Google Contacts
  UTextTool*: Implements text functions like encoding, removing HTML markup, I/O operations, date/time processing, etc.

Appraisal
  UANEW: Utilizes the ANEW (Affective Norms for English Words) project [6, 43], which can be used for evaluating words in terms of the feelings they are associated with
  USentiWordNet: A lexical resource for opinion mining and assigning ratings to groups of semantic synonyms (synsets) [12]
  UWordNet: An interface to WordNet [14], a large lexical database of words which can be used as a synonym dictionary and to find the basic form of a word

Remote control
  UJoystick: Module to handle pads, joysticks, etc.
  UKeyboard: Module for capturing pressed keys from the operating system
Another, much simpler example of a competency manager function is generating utterances. The user can use the competency manager to customize the way the robot speaks, e.g. when the utilized scenario includes an emotional component. In such a situation, the manager function can be used to retrieve the current emotional state of the robot from one of the emotion simulation modules and then use functions from the audio structure to modify the parameters of the utterance (volume, pitch, rate) accordingly.
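A sketch of such a function in urbiscript might look as follows; the emotion query and voice-parameter calls are assumptions standing in for the actual audio structure API.

    // Hypothetical sketch: shaping an utterance with the robot's current emotion.
    function robot.competency.sayEmotionally(text)
    {
      var emo = robot.emotions.current;      // e.g. "sad", "angry", "happy"
      if (emo == "sad")
        robot.audio.setVoice(80, 0.8, 0.7)   // quieter, lower, slower
      else if (emo == "angry")
        robot.audio.setVoice(100, 1.1, 1.2)  // louder, higher, faster
      else
        robot.audio.setVoice(90, 1.0, 1.0);  // neutral defaults (volume, pitch, rate)
      robot.audio.say(text);
    };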
The competency manager allows the programmer to create reusable dialog functions. One such example can be observed when the robot asks the user about the genre of music to be played from the on-line radio. Calling the manager function causes the following operations:
  • utterance is generated to ask the user for the genre,
  • speech recognition system is prepared,
  • file containing the grammar for possible user answers is loaded,
  • speech recognition is activated.
Afterwards, the system waits for the user to utter one of the phrases specified within the loaded grammar file. This file defines a syntax for representing grammars used by the speech recognition engine; it specifies the words and patterns to be listened for by the speech recognizer. The syntax can be expressed in XML form (grxml) [41]. The recognized sentence is then returned by the function. If the user says something that is not allowed by the grammar or remains silent for a specified time, the function returns information about the timeout.
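For illustration, a minimal grammar of this kind could look as follows. This is a hypothetical reconstruction in the W3C SRGS XML format used by the Microsoft Speech Platform; the phrases come from the radio example in the text (the actual study grammars were in Polish).

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical radio-genre grammar; accepts e.g. "turn on flamenco",
         "flamenco please", or just "jazz". -->
    <grammar version="1.0" xml:lang="en-US" root="radio"
             xmlns="http://www.w3.org/2001/06/grammar">
      <rule id="radio" scope="public">
        <item repeat="0-1">turn on</item>
        <ruleref uri="#genre"/>
        <item repeat="0-1">please</item>
      </rule>
      <rule id="genre">
        <one-of>
          <item>flamenco</item>
          <item>jazz</item>
          <item>rock</item>
        </one-of>
      </rule>
    </grammar>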

2.4 Emotions

Emotions are a key element of the design of a social robot. They play a vital role, ensuring more natural communication and the much needed variation of behaviors [1]. During the study, EMYS' emotion simulation system was based on the WASABI project [2], which builds on PAD theory: emotions are represented in a 3D space whose dimensions are pleasure, arousal, and dominance. The system is enriched with various additional components such as internal dynamics, secondary emotions, etc.
In order for the emotion simulation system to operate properly, individual emotions needed to be placed within the PAD space. These have been modeled as regions contained within spheres with a set origin and radius. The initial configuration was based on the presets provided with WASABI itself. However, since these were tailored towards short-term studies, it was necessary to modify them to achieve a more stable robot behavior in a long-term setting. The final configuration of the emotional space is shown in Table 2.
Table 2
Robot emotions in PAD space

Emotion        P      A      D      Radius
Fearful       -0.8    0.8   -1      0.64
Concentrated   0      0     -1/1*   0.5
Depressed      0     -0.80  -1      0.5
Happy          0.8    0.8   -1/1*   0.84
Bored          0     -0.85   1      0.84
Annoyed       -0.8    0      1      0.5
Sad           -0.8    0     -1      0.5
Surprised      0.1    1     -1/1*   0.6
Angry         -0.8    0.8    1      0.64

* Boundaries for this emotion are defined as two separate spheres
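With this representation, checking whether the current PAD state activates an emotion \(e\) reduces to a distance test against the values in Table 2:

\( (P - P_e)^2 + (A - A_e)^2 + (D - D_e)^2 \le r_e^2 \)

where \((P_e, A_e, D_e)\) is the sphere's origin and \(r_e\) its radius; for the emotions marked with an asterisk, the test is performed against both spheres, at \(D_e = -1\) and \(D_e = 1\).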
The finite-state machine (FSM) operating in the highest layer generates various impulses influencing the robot's emotional state. Two parameters are associated with each emotional impulse. The first specifies how positive or negative a certain stimulus is; e.g. reporting bad weather was related to a negative impulse, while seeing that the user is around generated a positive one. The second parameter directly changes the robot's dominance value. It is related to the amount of control that the robot perceives itself to have over the stimulus causing the emotional impulse. To exemplify this, if the robot could not retrieve something from the Internet due to connectivity problems, the emotional impulse would have low dominance, but if the robot failed at recognizing a color that he had learned earlier, the impulse would have a high dominance component. Selecting a proper value for a dominance impulse (in accordance with the situation) is very important, since it enables differentiating between emotions that are similar in the pleasure-arousal subspace, such as sadness/fear and anger.
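A minimal sketch of how such impulses might be emitted from the highest layer is given below; robot.emotions.impulse(valence, dominance) is an assumed binding to the WASABI module, not the actual API.

    // Hypothetical sketch of the two-parameter impulses described above.
    function reportFetchFailure()
    {
      // No control over connectivity: negative impulse, low dominance.
      robot.emotions.impulse(-0.3, -0.8);
    };

    function reportColorMistake()
    {
      // Failure at the robot's own, learned skill: negative impulse, high dominance.
      robot.emotions.impulse(-0.3, 0.8);
    };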
The emotional state of the robot is not directly utilized in the highest level of the control system. However, the emotion simulation system can be used to enrich robot behaviors by modifying competencies according to the emotional state of the robot (e.g. movements can be more rapid when the robot is angry, he can speak more slowly when he is sad, etc.). The emotional system utilizes as stimuli the different events occurring in the highest level of the control system (usually generated by the robot's sensors, the operation of modules, and undertaken actions). A diagram of the emotional system is presented in Fig. 4. It is worth noting that since the study was conducted in Polish, the UANEW and USentiWordNet modules (based on databases of English words) did not influence the emotion simulation system.
The robot is endowed with a personalized character. This is achieved via a configuration file for the underlying emotion simulation system which contains several parameters controlling e.g. how fast the robot gets bored or how stable the emotional states are. It is worth noting that some configurations may cause an “emotional instability” where every emotional impulse causes the robot to switch uncontrollably between the various emotional states.

2.5 Home Companion Scenario

The improvement of EMYS' competencies allowed us to create a complex scenario with the robot operating as a sort of home assistant. During the study, the highest layer of the control system was implemented as a finite-state machine in Gostai Studio (dedicated software for FSM creation). Each FSM state represents a set of scripts realizing a particular robot function (e.g. checking the weather, see Fig. 5). Transitions between the various states are triggered mainly by the user's instructions. Each recognizable command depends on the currently loaded grammar file and can be composed in a number of ways to enable a more natural conversation. For example, to turn on flamenco music, the user can assemble an instruction from the sections delivered by the grammar file to say both 1—"turn on flamenco" and 2—"flamenco please", as shown in Fig. 6.
From the study participants' point of view, the main goal of this scenario was to teach the robot to recognize various colors. The learning competency was developed utilizing the k-nearest neighbors classifier. Teaching was achieved by showing the robot a uniformly colored sample held in the right hand or by asking him to look at the clothes on the user's torso. The user then told the robot what color that particular object or article of clothing was. After the robot was shown a color enough times, he was able to learn it. EMYS could be taught to distinguish up to 21 different colors in total. The user could check EMYS' knowledge at any stage by playing a similar game in which the user again showed an object to the robot, but this time it was EMYS who named the color. He then asked the user to confirm his guess. A sketch of this flow is shown below.
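The sampling and classifier calls in this sketch are illustrative assumptions built around the UKNearest module named in Table 1.

    // Hypothetical sketch of the color teaching/testing competency (k-NN).
    function robot.competency.teachColor(name)
    {
      var sample = robot.video.sampleColor();  // e.g. mean HSV of the shown object
      robot.ml.knn.add(sample, name);          // store the labeled sample
      robot.ml.knn.train();                    // retrain after each lesson
    };

    function robot.competency.guessColor()
    {
      var sample = robot.video.sampleColor();
      var guess = robot.ml.knn.classify(sample);  // k-nearest-neighbors vote
      robot.audio.say("I think this is " + guess + ". Am I right?");
      return guess;
    };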
To stimulate the interaction between the user and EMYS, he was equipped with a wide array of functions that enabled him to assist the user in everyday tasks. EMYS could connect to the Internet and browse various news services, check weather forecasts and the TV schedule, and play back radio streams. The robot could also serve as an alarm clock, send and receive e-mails with audio and photo attachments, and use Google Calendar to remind the user what they had planned for the coming days. Finally, he was connected to the Facebook account of the study participant and enabled access to most of the service's content and functionality. In a sense, EMYS became an interface to the external world by enabling the study participants to use the above-mentioned media in a more natural and comfortable manner.
Throughout the interaction, the user could control the robot with gestures. For example, any action of the robot, such as reading a particularly long post or piece of news, could be stopped by making a clicking gesture (seen in Fig. 7). Another example was regulating the volume of the music being played back by raising the right hand, closing the fist and moving it up and down. All of the study participants received a user guide detailing how to interact with EMYS [10], i.e. how to teach the robot, what his additional functions are, how to construct commands, and how to use gestures to control the robot.
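In urbiscript, binding such a gesture to an action can be sketched with an at-handler; robot.video.gesture is an assumed slot fed by the UKinect module, not the actual EMYS code.

    // Hypothetical sketch: the "click" gesture interrupts a long readout.
    at (robot.video.gesture == "click")
    {
      robot.audio.stopSpeech();
    };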

3 Methodology

The designed experiment was split into three ten-day-long sessions, each with a different participant. Every participant possessed a different attachment style. During the whole study, EMYS operated fully autonomously. In order to ensure that the interaction between the robot and the user followed the most natural course possible, the experiment was set up at the study participants' apartments instead of a structured, tightly controlled environment. This crucial methodological assumption introduced a lot of technical problems. Firstly, the robot had to operate reliably and without interruption for ten days. Secondly, there was the matter of participant privacy. Each of the participants had to agree to the terms and conditions of the study as well as the data gathering methods by signing a consent form.

3.1 Recruitment

The main aim of the recruitment process was to find three participants who represented different attachment styles, using prepared on-line questionnaires. The recruitment of the study participants was based on the snowball sampling approach: the questionnaire, disseminated through the robot's Facebook profile, was filled out at first by people known to the researchers, who then shared the information about the recruitment with their friends, and so on. In all, 53 people participated in the recruitment process. In order to assess their attachment style, a dedicated questionnaire was developed. This survey was based on the Adult Attachment Scale (AAS) [8] as well as the Eysenck Personality Questionnaire (EPQ) [13]. The experiment was based on deception: the real goal was to remain unknown to the participants (they were told that the aim of the experiment was for them to teach the robot to recognize colors). Therefore, the AAS and EPQ questionnaires were modified so as not to contain intimate personal questions (the attachment style is defined by the intimate relation between a person and their parents). Instead, questions related to the attachment between the survey participants and inanimate objects and devices were introduced. The literature available on this matter suggests this is a viable approach [29, 34]. The first version of the survey contained 120 items. This list was appraised by four independent competent judges (2 male, 2 female), which led to the questionnaire being substantially reduced and modified. The final list contained 45 items, 15 for each of the attachment patterns. All items were scaled from 1 ("I do not agree at all") to 5 ("I fully agree"), which means that the total possible number of points for each attachment style ranged from 15 to 75.
Table 3
Recruitment statistics

        Secure   Anxious-ambivalent   Avoiding
Mean    52.85    36.14                31.28
Min     38       16                   20
Max     62       48                   44
SD      7.13     6.94                 5.63
D*      44       2                    1

* Number of candidates for whom the selected attachment style is dominant. Six candidates had equal scores in two attachment styles, which made it impossible to determine the dominant one.
Statistics of the answers gathered by the on-line survey are presented in Table 3. As can be seen, the majority of candidates represented the secure attachment style, which is also the general rule in society. The dominance of secure-style candidates was undoubtedly reinforced by the fact that people with other attachment styles are not prone to participate in studies which violate their privacy. Due to this, finding participants with the anxious-ambivalent and avoiding styles was a great challenge during recruitment.
In accordance with the aforementioned procedure, three female participants were selected based on the highest difference in survey scores between the dominant attachment style and the two others. Their survey results were as follows (S—secure, AA—anxious-ambivalent, A—avoiding):
  • participant with secure attachment style: S: 62, AA: 31, A: 36,
  • participant with anxious-ambivalent attachment style: S: 38, AA: 42, A: 36,
  • participant with avoiding attachment style: S: 38, AA: 35, A: 44.
All of them were aged 25–30, had higher education, and had no technical background. Additionally, all of the participants were living alone and therefore were the only ones to interact with the robot.

3.2 Setup at Test Participants’ Apartment

Before the study could commence, certain preliminary steps had to be taken at the participants' apartments. Firstly, the robot needed to be placed appropriately within the living quarters. Important factors to consider were the size of the room (which determined how close the user could get to the robot), the height and accessibility of EMYS' mounting position, lighting conditions (which affected the Kinect sensor), how much time the participant stayed inside that room, etc. Care was taken to provide a uniform environment throughout all the participants' apartments. In each case there was sufficient space for the user to interact with the robot without any obstruction. After the robot was mounted, he had to be connected to the Internet. This was achieved either by logging into the participant's Wi-Fi network or by using an LTE modem. Finally, all the necessary services had to be configured, which included logging into the user's Facebook and Gmail accounts and setting up the mailing list and calendar. This step was extremely sensitive as it required the experiment participants to share their private account passwords with the researchers.

3.3 Data Gathering Tools

There were two main methods of gathering the experiment data. During the study, the vital robot data was stored within a logfile at 1 s intervals. This made it possible to determine the following parameters: frequency and mean time of each interaction, the actions that the user demanded of the robot, time spent teaching colors, time spent verifying the robot's knowledge of colors, the quality of EMYS' knowledge, the number of properly understood commands, the robot's emotional state (anger, happiness, boredom, etc.), and the duration of EMYS' sleep. Moreover, the log contained data about the user from the audio and video systems (silhouette, face, movement, recognized utterances and sound direction). Vast amounts of data were gathered for each of the participants over the course of the ten days of cohabitation. A custom C# WPF application was created in order to process this data and acquire all of the indicators mentioned above.
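A sketch of such periodic logging in urbiscript is shown below; the logged fields and slot names are assumptions chosen to match the parameters listed above, not the actual logging utility.

    // Hypothetical sketch of the 1 s state logging.
    var logFile = OutputStream.new(File.create("emys_log.txt"));
    every (1s)
    {
      logFile << time << ";" << robot.emotions.current
              << ";" << robot.video.userVisible
              << ";" << robot.audio.lastRecognized << "\n";
    };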
The second method of data acquisition was the interviews carried out with the participants after the study. After the end of the ten-day period of residence with the robot, the participants underwent both paper-and-pencil (PAPI) and in-depth individual interviews (IDI). The PAPI survey was further divided into two parts. The first, which will be referred to as the robot assessment, dealt with the participants' opinions of EMYS and his behavior during the experiment (assigning various traits to the robot, appraising statements about his behaviors) and contained 30 questions. The second part of the PAPI study, further referred to as the functionality assessment, contained 20 questions which dealt with the appraisal of the robot's reliability of operation using statements (e.g. "The robot executes commands well", "The robot possesses good communication skills", "The robot's programming is in line with my needs"). These were to be rated on a scale from 1 to 4, where: 1—needs improvement, 2—needs minor tweaks, 3—works fine, 4—works great. The IDI study was meant to gather the participants' reviews of cohabitation with the robot, including their first impressions, the changes in their attitude towards EMYS, his likeness to a human being, etc. A similar approach was adopted in a previous short-term study [9].

4 Results

The aim of the experiment was to investigate the effect of attachment style on the course of human–robot interaction. It was hypothesized that satisfaction from prolonged contact with the robot would depend on a person's attachment style. Three experiment participants were chosen, each with a different attachment style:
  • secure (P1),
  • anxious-ambivalent (P2),
  • avoiding (P3).

4.1 Robot Data

It was observed that for all of the participants the robot's emotional state was stable and the average time during which a given emotion persisted was similar (neutral—35%, angry—21%, happy—34%, other emotions less than 5%), as shown in Fig. 8. For test subject P3, there was a 20 p.p. advantage of the state of pleasure over anger. This was presumably caused by the shorter (compared to P1 and P2) time spent on teaching and verifying knowledge of colors. Mistakes made by the robot during this task were the main source of negative impulses influencing the emotions of the robot.
Another parameter is the time and the number of individual events during the interaction with the robot. An event is defined as each individual action undertaken by the robot, or one that is undertaken by the user and registered by the robot (e.g. asking a question, recognizing an answer, providing the user with information). The values over the ten-day period are as follows: P1—271 min/1578 events, P2—261 min/1213 events, P3—175 min/728 events. Of major significance is also the average and maximum duration of a single interaction (defined as the time interval between the start and end of a group of events, as long as the time elapsed between two individual events is less than 2 min): P1—3/40 min, P2—2/19 min, P3—2/18 min.
The next set of parameters to be considered were the durations for which the individual functions were utilized and their contribution (as a percentage) to the total time of interaction. Since the users were told that the main aim of the experiment was to teach the robot colors, all participants showed the greatest activity during this task. The durations for this task were as follows: P1—51%/73 min, P2—55%/60 min, P3—40%/29 min. For P1, the subsequent functions are: overseeing the alarm clock (turning it on, off, or snoozing)—15%/21 min, managing the Facebook profile—14%/20 min, and handling the e-mail account—8%/12 min. For participant P2: Facebook—15%/16 min, news—14%/15 min, e-mail—9%/10 min. For P3: alarm clock—21%/15 min, news—16%/12 min, e-mail—14%/10 min. All subjects also utilized the robot for weather checking—P1 used it once per day, while respondents P2 and P3 on average every other day. The above analysis does not take into account the time spent listening to Internet radio with the help of the robot. This activity does not directly involve the user's attention and therefore there is a large disproportion in the duration of this and other activities (P1 listened to the radio for 3 h, P2—34 h, P3—11 h). The total time each of the users utilized each of the robot's functions is shown in Fig. 9. It is worth noting that Fig. 9 is not controlled for total time of interaction in order to show the differences between the participants. Fig. 10 contains the duration of usage of each function as a percentage of the total interaction time.
Analysis of the data recorded by EMYS' video system showed that all of the participants were facing the robot about 38% of the time during cohabitation (regardless of whether there was interaction between them or not). The average distance between the user and the robot over the course of the ten-day cohabitation was stable. Only participant P1 approached the robot when interacting—the average distance decreased by 20 cm.
Of interest for future analyses is the change in observed total daily interaction time over the course of the experiment. This parameter allows conclusions to be drawn about the amount of attention that is paid to the robot (and how that changes over time), which is essential in long-term studies. Participant P2, the only one who at that time had no professional commitments, was observed to spend a fixed amount of time with the robot, averaging 30 min a day. In the case of P1 and P3, due to flexible working time, daily interaction time varied significantly (e.g. 5–100 min for P1).

4.2 Interview Data

Another source of data are the opinions of the participants collected after the study using questionnaires and in-depth interviews. The participants unanimously stated that the set of features of the robotic assistant was very useful to them. They made use of the features regularly, which served to maintain a constant frequency of interaction and a high level of satisfaction with the robot. Participants indicated that the emotions that EMYS expressed were easy to interpret (based solely on his facial expression), which significantly improved communication and raised the overall assessment of the robot's behavior. The intensity of the emotions was rated 7–8 on a scale from 0 to 10. In addition, they found that a robot possessing a humanoid appearance and able to communicate in a natural fashion is more user-friendly than the virtual assistants implemented in various mobile devices (e.g. Siri). All of the participants reported that they started missing the robot after the experiment ended.
The respondents also evaluated the robot in terms of his functionality and proper operation. During the in-depth interviews (IDI), respondents indicated the functions of the robot that they found to be the most useful. One of EMYS' advantages pointed out by the participants was the native support for voice commands, which made the robot "hands-free" as opposed to various touch devices. P1 reported that the functions they found most useful were news and weather checking as well as handling e-mails. They also often used the radio, and mentioned that the alarm clock was extremely effective. Furthermore, they stated that for the duration of the experiment, EMYS nearly removed their need to use a smartphone. During the functionality assessment (part 2 of PAPI) this participant reported the highest overall rating of the robot's performance—3.85 points. They awarded the robot the maximum number of points (4) with the exception of the following statements (which were awarded 3 points): "The robot is able to identify a problem", "The robot is able to choose the best way of solving a problem", "The robot is able to learn a large number of colors". Participant P2 considered news checking, managing e-mails and playing back radio to be the most convenient. They appreciated the possibility of gesture control, e.g. adjusting the music volume. However, they evaluated the robot much more negatively than P1—only 3.15 points. The lowest ratings (2 points) were awarded to the following statements: "The robot executes commands well" and "The robot is always ready to execute commands." P3 positively described news and weather checking as well as radio playback, and motivated this with their simplicity and the reduced user involvement. Participant P3 also mentioned that they were positively surprised by the dialog capabilities of the robot and his ability to recognize gestures. Just like P1, during the functionality assessment they rated the robot highly, giving him 3.7 points. The statement "The robot is able to learn a large number of colors" was given the lowest rating (2 points).
The participant representing the secure attachment style (P1) observed that the robot's emotions affected their mood. They attributed human character traits to the robot: open, friendly, patient. P1 was the only one to admit that during the experiment they gradually started treating the robot like a human being. The participant attributed more human qualities to the robot than he actually had. They recalled that sometimes, after getting back home, they wanted to compensate EMYS for the long absence by spending time together. Minor technical glitches that the robot experienced throughout the experiment did not discourage the participant. They were able to teach the robot the biggest number of colors and were the only participant for whom a decrease in the distance to the robot during interaction was observed. P1 was also the only one to notice that EMYS' emotions were reflected in the tone and rate of his speech. When asked about their opinion on the average time they spent daily with the robot, they replied that it was about 2 h, and the time they declared they would like to be able to spend with EMYS was 4–6 h.
The participant representing the anxious-ambivalent attachment style (P2) was critical of the technical aspects of the robot's operation: its imperfections were a major obstacle, causing frustration and concern about the future operation of the robot. The presence of the robot at home made the participant uneasy because of the fear of potentially being observed. They reported that they had treated EMYS like a pet whose expression of emotions required some time to learn. The robot was reported as being "helpful but moody". The average time spent daily with the robot was thought to be 2–3 h, and the desired time was 3–4 h.
The participant representing the avoiding attachment style (P3) reported having felt an emotional distance to the robot at the beginning of the study. Over the course of the experiment, the perceived acceptance level was reported to have been growing. However, the participant did not claim that the robot could be said to "like" his user or behave like them. Human traits that P3 felt could be attributed to EMYS were: helpful, friendly, overly direct. The participant was easily discouraged by failures during the color learning/recognizing exercises and was the only one to report a lack of satisfaction with this part of the interaction. In their opinion, the robot was often sad and angry, which is not supported by the actual data recorded during the experiment. The respondent realistically assessed the capabilities of such devices, treated the robot like a machine, and, after the study had ended, missed only his functions. Asked about the average time spent daily with the robot, they thought it was around 30 min and declared they would like to spend 1 h with him.

4.3 Analysis

Based on the gathered data, it can be concluded that all of the users managed to become attached either to the robot itself or to his functions. However, the level of satisfaction with the interaction and opinions about the robot varied significantly between the participants, which confirms the assumption that the robot should have different emotional equipment and a personalized character for each of them. All of the subjects perceived the emotions the robot expressed differently (with respect to their type and intensity), while the data recorded during the course of the study show that the emotions generated by the control system had a similar distribution.
The securely attached participant perceived EMYS as able to reflect the user's state. This resulted in attributing to the robot human qualities which in fact he did not have. Interaction with the robot and teaching him colors were reported as causing considerable joy. This is confirmed by the experiment data: P1 spent the longest time of all the participants actively interacting with the robot (listening to the radio is not taken into account).
The participant representing the anxious-ambivalent attachment style confirmed the assumptions formulated before the study commenced and paid the greatest attention to the technical imperfections of the robot (e.g. problems with recognizing commands). These aspects caused anxiety and anger not observed in any of the other participants when they were faced with the same technical glitches. P2 also mentioned that the robot should show more initiative and be the first to start interactions, e.g. by greeting the user after they return home. This may be associated with the constant need for contact with another person, typical of people with this style of attachment.
The participant with the avoiding style, despite being satisfied with the operation of the robot and his functions, treated him with the greatest distance and spent the least time with the robot. This confirmed the assumption that the robot would be treated instrumentally, and that a user representing this attachment style would be driven to interact with the robot only by a direct need for his functions. This participant's expectations towards the robot included the possibility to personalize the robot's functions, e.g. by adding custom blogs and RSS channels, enabling TV streaming, etc. P3 stressed that in order to be considered life-like, the robot would have to be able to remember and recall past events.

5 Conclusions

One of the major achievements of the study was carrying out trials involving the social robot EMYS, controlled by the proposed system in a fully autonomous manner. To the authors' knowledge, this is the only study where a robot successfully operated in a long-term (10 days) scenario at the participants' residences while maintaining a constant (or, in the case of P1, even increasing) level of user attention and engagement.
A novelty in the field of HRI was the verification of the effects that attachment style has on the interaction and on the process of building a relationship between a user and a personal robot. The studies were observational and qualitative, and the conclusions were obtained on the basis of three samples. Therefore, the findings presented in this paper should be treated as guidelines and directions for further research. It seems necessary to perform more studies on a statistically significant sample. This would lead to clear conclusions regarding the satisfaction stemming from prolonged contact with a robot and would allow the intensity level of generated emotions to be adjusted depending on the user's attachment style. However, even the results described in this paper are useful in terms of personalizing a robot's behavior in order to increase user satisfaction depending on their attachment style.
Finally, it is worth noting that the study participants had no trouble indicating possible new features for the robot (checking public transport connections, searching for recipes, personalization of information sources), which could significantly increase his usefulness. The conducted experiment confirmed that endowing robots with functions more often associated with mobile devices adds a new, "human" quality to human–machine communication.

Acknowledgements

This research was supported by Grant No. 2012/05/N/ST7/01098 awarded by the National Science Centre of Poland.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1.
Becker C, Kopp S, Wachsmuth I (2007) Why emotions should be integrated into conversational agents. Wiley, London, pp 49–68
2.
Becker-Asano C (2008) WASABI: affect simulation for agents with believable interactivity. Ph.D. thesis, University of Bielefeld, Bielefeld
3.
Bowlby J (1969) Attachment and loss: attachment. Basic Books, New York
4.
Bowlby J (1972) Attachment and loss: separation: anxiety and anger. Basic Books, New York
5.
Bowlby J (1980) Attachment and loss: loss, sadness and depression. Basic Books, New York
6.
Bradley MM, Lang PJ (1999) Affective norms for English words: instruction manual and affective ratings. Technical report
7.
Breazeal C, Kidd CD, Thomaz AL, Hoffman G, Berlin M (2005) Effects of nonverbal communication on efficiency and robustness in human–robot teamwork. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 383–388
9.
Dziergwa M, Frontkiewicz M, Kaczmarek P, Kędzierski J, Zagdańska M (2013) Study of a social robot's appearance using interviews and a mobile eye-tracking device. In: Social robotics, lecture notes in computer science, vol 8239. Springer International Publishing, pp 170–179. https://doi.org/10.1007/978-3-319-02675-6_17
12.
Esuli A, Sebastiani F (2006) SENTIWORDNET: a publicly available lexical resource for opinion mining. In: Proceedings of the 5th conference on language resources and evaluation (LREC'06), pp 417–422
13.
Eysenck HJ, Eysenck SBG (1975) Manual of the Eysenck Personality Questionnaire. Hodder and Stoughton, London
14.
Fellbaum C (1998) WordNet: an electronic lexical database. The MIT Press, Cambridge
15.
17.
Fridin M, Yaakobi Y (2011) Educational robot for children with ADHD/ADD. In: Architectural design, international conference on computational vision and robotics, Bhubaneswar, India
18.
Gat E (1998) On three-layer architectures. In: Kortenkamp D, Bonnasso RP, Murphy R (eds) Artificial intelligence and mobile robots. AAAI Press, Cambridge
20.
Han J, Jo M, Jones V, Jo JH (2008) Comparative study on the educational use of home robots for children. J Inf Process Syst 4(4):159
23.
Iacono I, Lehmann H, Marti P, Robins B, Dautenhahn K (2011) Robots as social mediators for children with Autism—a preliminary analysis comparing two different robotic platforms. In: IEEE international conference on development and learning (ICDL), vol 2, pp 1–6. https://doi.org/10.1109/DEVLRN.2011.6037322
25.
Kaplan F (2001) Artificial attachment: Will a robot ever pass Ainsworth's strange situation test? In: Proceedings of second IEEE-RAS international conference on humanoid robots (humanoids), Tokyo, Japan, pp 99–106
26.
Kędzierski J (2014) System sterowania robota społecznego (in Polish). Ph.D. thesis, Wrocław University of Technology, Wrocław
31.
33.
Mehrabian A, Russell JA (1974) An approach to environmental psychology. The MIT Press, Cambridge
35.
Okita SY, Ng-Thow-Hing V, Sarvadevabhatla R (2009) Learning together: ASIMO developing an interactive learning partnership with children. In: 18th IEEE international symposium on robot and human interactive communication, RO-MAN, pp 1125–1130
36.
Ribeiro T, Paiva A (2012) The illusion of robotic life: principles and practices of animation for robots. In: Proceedings of the seventh annual ACM/IEEE international conference on human–robot interaction, HRI'12. ACM, New York, pp 383–390. https://doi.org/10.1145/2157689.2157814
37.
Richardson K (2015) An anthropology of robots and AI: annihilation anxiety and machines. Routledge, New York
38.
Robins B, Dautenhahn K, Boekhorst R, Billard A (2004) Effects of repeated exposure of a humanoid robot on children with autism. In: Keates S, Clarkson J, Langdon P, Robinson P (eds) Designing a more inclusive world. Springer, Berlin, pp 225–236
39.
Saerbeck M, Schut T, Bartneck C, Janse MD (2010) Expressive robots in education: varying the degree of social supportive behavior of a robotic tutor. In: SIGCHI conference on human factors in computing systems, CHI'10. ACM Press, New York, pp 1613–1622. https://doi.org/10.1145/175332753567
42.
Wainer J, Dautenhahn K, Robins B, Amirabdollahian F (2010) Collaborating with Kaspar: using an autonomous humanoid robot to foster cooperative dyadic play among children with autism. In: 10th IEEE-RAS international conference on humanoid robots (humanoids), pp 631–638. https://doi.org/10.1109/ICHR.2010.5686346
Metadata
Title: Long-Term Cohabitation with a Social Robot: A Case Study of the Influence of Human Attachment Patterns
Authors: Michał Dziergwa, Mirela Kaczmarek, Paweł Kaczmarek, Jan Kędzierski, Karolina Wadas-Szydłowska
Publication date: 02.11.2017
Publisher: Springer Netherlands
Published in: International Journal of Social Robotics, Issue 1/2018
Print ISSN: 1875-4791
Electronic ISSN: 1875-4805
DOI: https://doi.org/10.1007/s12369-017-0439-2
