Published in: International Journal of Social Robotics 4/2016

Open Access 01.08.2016

Making New “New AI” Friends: Designing a Social Robot for Diabetic Children from an Embodied AI Perspective

Authors: Lola Cañamero, Matthew Lewis


Abstract

Robin is a cognitively and motivationally autonomous affective robot toddler with “robot diabetes” that we have developed to support perceived self-efficacy and emotional wellbeing in children with diabetes. Robin provides children with positive mastery experiences of diabetes management in a playful but realistic and natural interaction context. Underlying the design of Robin is an “Embodied” (formerly also known as “New”) Artificial Intelligence (AI) approach to robotics. In this paper we discuss the rationale behind the design of Robin to meet the needs of our intended end users (both children and medical staff), and how “New AI” provides a suitable approach to developing a friendly companion that fulfills the therapeutic and affective requirements of our end users beyond other approaches commonly used in assistive robotics and child–robot interaction. Finally, we discuss how our approach permitted our robot to interact with and provide suitable experiences of diabetes management to children with very different social interaction styles.

1 Introduction: Friends for Diabetic Children

Robin is a cognitively and motivationally autonomous affective robot toddler with “robot diabetes” that we have developed to support diabetes management in children aged 7–12 years, particularly perceived self-efficacy, a crucial element in improving diabetes self-management skills. More specifically, we have focused on supporting the more affective aspects of self-efficacy, namely self-confidence and the development of responsibility, by providing the children with positive mastery experiences of diabetes management in a playful but realistic and natural interaction context.
Type 1 diabetes mellitus (T1DM) is an incurable chronic disease caused by the loss of insulin-producing beta cells in the pancreas, and hence the inability of the body to produce insulin naturally. This leads to chronically raised blood glucose levels (hyper-glycemia) that need to be corrected artificially by injecting insulin. T1DM is often diagnosed in childhood and, if poorly managed, the high glucose levels lead to devastating complications such as blindness, limb amputations or organ failure. Diabetes is therefore a very challenging condition, not only physically but also psychologically, in terms of both cognitive demands (learning about complex diabetes treatment) and emotional demands. Concerning the latter, in addition to the anxiety and discomfort created by the condition itself, many challenges in daily life and social interaction can easily undermine self-esteem and self-confidence, particularly during pre- and early adolescence.
This work started as part of the EU-funded ALIZ-E project (www.aliz-e.org), which endeavored to develop companion robots for diabetic children as they learn about their diabetes, at the age at which they start transitioning towards managing their own diabetes. The request to use robots for this purpose with this age group (7–12 years) came from San Raffaele Hospital, linked to one of the project partners (the Fondazione Centro San Raffaele). They were inspired by two elements: on the one hand, the practice in their hospital of using toys as “patients” to teach younger children about their condition; on the other hand, the successful use of animal-like robots as substitutes for animal-assisted therapy [52], which pointed towards the potential of robots and seemed particularly appropriate for this age group. The intended use of the robots is to complement their current educational and therapeutic practice with diabetic children. Robin is therefore not intended as a replacement for any other therapy, but to help consolidate the knowledge of diabetes management gained through current practice, and to cover an existing gap: providing the children with a suitable context in which to apply their knowledge of diabetes management before they have to apply it in real life.
In this paper we discuss how we designed our friendly social agent as an autonomous robot toddler with “robot diabetes” to support the development of self-efficacy and emotional wellbeing in diabetic children. Our design thus pays particular attention to the affective and social aspects of the interaction. We follow an Embodied, or “New”, Artificial Intelligence (AI) approach to design [13, 16], combining principles of an embodied approach to cognition and interaction, developmental robotics, and the psychology of emotional development. In particular, we discuss the rationale behind the design of Robin to meet the needs of our intended end users (both children and medical staff), and how “New AI” provides a suitable approach to developing a friendly companion that fulfills the therapeutic and affective requirements of our end users beyond other approaches commonly used in assistive robotics and child–robot interaction. Finally, we discuss how our approach permitted our robot to interact with and provide positive mastery experiences to diabetic children with very different social interaction styles and diverse ways of showing engagement and friendship, both verbally and non-verbally.

2 Therapeutic and Educational Background

In this section we provide background information about diabetes, its treatment, and its consequences for children, as context for the design rationale of Robin.

2.1 Treatment of Type 1 Diabetes

Following diagnosis, diabetic children and their carers start attending clinics for treatment (which needs to be individually tailored) and to learn about proper management practice, in order to be able to carry out daily treatment at home. This is far from being an easy task, as it involves mastering many interacting variables and requires a very considerable dose of determination.
The current treatment involves monitoring and adjusting blood glucose through the provision of insulin—either through a pump or by injection—and glucose/carbohydrates—by eating appropriate foods. Self-management of diabetes is necessary in order to live independently. In addition to balancing the amounts and timing of three key variables—insulin, food (in particular its carbohydrate content) and physical activity—self-management involves being aware of the symptoms of high and low glucose levels, which can vary between individuals, and of how different foods and activities specifically affect one’s own blood glucose. The dose of insulin needs to be carefully balanced with food intake and physical activity levels. If too much insulin is present in the blood relative to carbohydrate intake and activity level, it can lead to low blood glucose (hypo-glycemia), associated with symptoms such as tiredness, headaches and, possibly, coma. If too little insulin is present, it conversely results in hyper-glycemia, with characteristic symptoms such as excess thirst and urination as well as severe long-term consequences. Some symptoms may be present in both hypo- and hyper-glycemia, depending on the individual, making the recognition of each condition more difficult. In order to confirm a hypo- or hyper-glycemia, a quick finger-prick blood test is used to measure the amount of glucose in the blood. Hypo-glycemia is treated by eating rapidly-absorbed carbohydrates (e.g. sugars), while hyper-glycemia is treated with insulin. Both corrections have to be made in appropriate amounts in order to avoid over-compensating and causing the opposite condition. Diabetes treatment therefore involves a great deal of education, but the ultimate medical aim of this education is behavior change: the acquisition of good diabetes self-management practice.
With young children, it is the carers who must do the work of managing the diabetes. As the children get older, they take increasing responsibility for their own treatment as they progress towards full self-management. This typically starts to happen in the age group that we target with Robin (7–12 years). Since each person is different in the way their body responds, people with diabetes need to “become an expert on their own diabetes” [26].

2.2 Emotional Aspects of Diabetes

Diabetes is very challenging emotionally. In addition to the anxiety and discomfort created by the condition itself, many challenges in daily life and social interaction can easily undermine self-esteem and self-confidence, particularly in the age group that we are concerned with. The complexities of diabetes and its management affect all aspects of life that are important to children, and facing those challenges requires affective skills that are not fully mature, but are still developing. As Anderson and Brackett [4, p. 10] note: “The primary developmental tasks of the child during the elementary school years include making a smooth adjustment from the home to the school setting; forming close friendships [...]; obtaining approval from this peer group; developing new intellectual, athletic and artistic skills and forming a positive sense of self.” All these aspects are negatively affected by diabetes. At a social level, diabetes thus singles children out from their peer group when social support is most needed to cope with the disease.
The need to support emotional aspects in children, adolescents and young adults with diabetes has been echoed by changes in the focus of treatment. This focus has increasingly shifted from the above-mentioned practical aspects—insulin, carbohydrates and physical activity—to stressing love, care, and knowledge in addition to insulin [26]. This focus emphasizes the emotional aspects of the condition, with knowledge being the tool to understand and take control of one’s own diabetes.
One of the most pressing needs in the early stages may be the emotional demands of diagnosis, which have been compared to those of grief [43]. This emotional element is an ongoing aspect of living with diabetes. Heller [27] writes: “It is difficult to conceive of a disease more likely to cause psychological problems than diabetes. Both Types 1 and 2 diabetes are lifelong incurable conditions with a strong heritable element, giving plenty of time for the development of guilt and recrimination within a family. Children who develop Type 1 diabetes are ‘punished’ by a series of injections and blood tests, a diet which forces them to eat when they don’t want to and the prohibition of chocolate and ice cream, previously used to reward them for being ‘good’.”
According to Diabetes UK (diabetes.org.uk), problems for children in our target age group include:
  • The acceptance of the diagnosis, the changes in the child’s life, and hence the loss of aspects of their old life (again referred to as a “grief process”).
  • Telling friends about their diabetes.
  • Bullying or teasing due to diabetes.
  • Fear of exclusion from activities that they enjoy and share with peers (e.g., physical activities, parties, school trips, and treats) due to diabetes.
  • The effect of diabetes on family life can be a source of anxiety for the entire family. For example, the “special” treatment that a child receives due to diabetes can produce jealousy or resentment from siblings.
Another problem that is particularly present in this age range is the fact that, even when the children correctly follow the prescribed treatment and behaviors, their bodies do not always respond as expected; this is exacerbated by the numerous changes that the body undergoes through puberty, and can be a highly frustrating and demotivating factor. In addition, treatment requires implementing changes in daily behavior that must be robust through adolescence (even though the way in which the body behaves changes) and into adulthood.
Diabetes thus places great demands on psychological skills related to emotion regulation that are not yet mature in pre-adolescent children [37, 50, 51]. Indeed, emotion regulation is still developing in this age group, and this is one of the points where children of this age need most support. These emotion regulation skills include controlling frustration, anger and other negative emotions, maintaining self-esteem and self-confidence, and sustaining the motivation to comply with a treatment that interferes with their life. Besides affecting their social and emotional development, poor emotion regulation can interfere with their diabetes treatment and the development of self-management skills. Robin aims to support primarily these emotional aspects, in particular self-esteem and self-confidence, with the aim of improving self-efficacy in their management of diabetes [3], as we discuss in the next section.

2.3 Behavior Change and Perceived Self-Efficacy

One of the main obstacles to achieving robust behavior change is the feeling of powerlessness that many children experience, as this can easily lead to a lack of motivation to comply with the prescribed treatment and behavior, and to poor self-esteem. Therefore, in addition to knowing, understanding and managing the aspects of treatment described above (Sect. 2.1), it is crucial that they are confident that they know what to do, and that they are doing it well. This is closely related to Bandura’s notion of perceived self-efficacy, a key element in his theories of successful behavior change [7, 8]. Measures of self-efficacy have been linked to improvements in HbA1c (glycated hemoglobin) levels, which give a measure of blood glucose levels over the previous two to three months, in adolescents [21] and young adults [31]. We summarize here the definitions that we have adopted, synthesized from the literature; we refer the reader to [34] for a more extensive discussion of these notions:
  • Perceived self-efficacy—a person’s beliefs about their own ability to successfully perform a specific task in a specific situation.
  • Self-confidence—general feelings about one’s own abilities. Self-confidence is thus more general than self-efficacy, and some authors talk of it in terms of generalized self-efficacy.
  • Self-esteem—feelings about one’s own worth. Self-esteem is related to self-efficacy, particularly to self-efficacy in those tasks that one holds as valuable.
The most powerful positive factors identified by Bandura as informing perceived self-efficacy are performance accomplishments, also called mastery experiences [9]. These are successful experiences of executing the behavior or task, or closely related tasks, and may take place in situations perceived as more or less challenging. They may be further reinforced by seeing the hoped-for results of one’s own actions.
This concept was a key idea in the design of our interaction scenario with Robin. Our aim was to give a mastery experience of diabetes management in a scenario that was challenging enough to be meaningful, but was “playful” and hence not stressful, and that permitted the children to see that their actions had the desired effect soon after they were performed.

3 Defining End-User Needs

The needs of the end users that were to provide requirements for the design of the ALIZ-E robot companion were defined iteratively throughout the project in discussions between an expert group and representatives of the ALIZ-E consortium from the Fondazione Centro San Raffaele (FCSR) team, the Italian project partner linked to the San Raffaele Hospital. This process is documented in the project deliverables [13]. The expert group was composed of members of the medical staff in the pediatric diabetes team at San Raffaele Hospital with a variety of roles: consultant diabetologists, psychologists, nutritionists, nurses, pediatricians, physiotherapists, and teachers; a representative (the president) of a diabetic patients’ association also joined the team of experts.
The needs that the expert group identified for the children and the needs of the staff looking after the children were both discussed. A set of potentially useful scenarios and activities for the robot, recommendations and requirements were subsequently given to us and the rest of the ALIZ-E consortium. Some of the “desiderata” issued by the expert group were beyond the capabilities of current social robots, and hence significant reinterpretation from us (the designers) was required in those cases. However, the expert group were very understanding of this issue and happy to see other options suggested that would also address the desired objectives.
As part of this iterative process, feedback from the children was also incorporated into the recommendations of the expert group and the design of the two prototypes. From very early on in the project, different prototypes of partial elements of the robot companions (most of them controlled by a Wizard-of-Oz operator) were developed by different groups in the project. These prototypes implemented specific features such as elements related to social interaction, affect expression, or games relevant to stress reduction, diabetes treatment or diabetes management. To avoid involving patients unnecessarily, only those systems specifically related to diabetes were tested by FCSR staff with Italian diabetic children. Feedback from the children was collected and analyzed in collaboration with the expert group. Given the limited availability of the medical staff, interaction between them and the ALIZ-E consortium was mediated by FCSR staff. Only when the “final” prototypes were mature enough did other partners interact directly with the medical staff and take part in the testing with the Italian diabetic children. In our case, these were the pilots of the Robin prototype that we mention in this paper (Sect. 5). At that point, we sought additional feedback from the children.
In this section we focus on those aspects of the end-user needs and “desiderata” that were relevant for the design of Robin and the interaction scenario, as well as our interpretation of them. We do not discuss requirements that relate only to the “Integrated System” also developed in the project [11, 22], since they are outside the scope of this paper.

3.1 End-User Needs from the Expert Group

We designed Robin with two different types of end users in mind: the diabetic children and the medical staff that could be using the robot in the future as part of their routine treatment. Their needs are related, but also distinct as we discuss below.

3.1.1 The Needs of the Diabetic Children

As mentioned in Sect. 2.1, following diagnosis, diabetic children and their carers need to attend clinics for treatment (which needs to be tailored to individual needs) and to learn about proper management practice in order to be able to carry out daily treatment at home. This usually involves an initial short period of hospitalization, followed by a much longer period during which children, often accompanied by a carer, attend regular clinics.
The above situations are a source of anxiety for the children, in addition to imposing high cognitive demands on them. The expert group very early on saw the potential role that a robot companion could play in reducing anxiety in these situations, and that was the first need that they recommended addressing.
In addition to improving the children’s emotional wellbeing, the expert group highlighted other aspects of the treatment that are particularly challenging for the children, and for which they could potentially benefit from the support of a companion robot:
(a) The acquisition of knowledge about diabetes and diabetes management.
(b) Improving their (perceived) self-efficacy and the different factors that strongly influence it, particularly self-confidence, their sense of responsibility toward themselves, and their efficacy in caring for others [3, p. 10]. Improving the children’s (perceived) self-efficacy would be expected to facilitate the formation of the intention and the initiation of an action toward behavior change, following social-cognitive models of health behavior change.
(c) Motivation to follow the doctor’s recommendations, i.e., adherence or compliance to prescribed treatments and behaviors (which often involve behavior change).
 
Three potential roles were thus proposed by the expert group for a companion robot [1]:
(a) A teacher of important medical concepts and general concepts related to health. This role would have an asymmetrical relationship between the robot and the child, as the robot would be the “knowledgeable” partner, explicitly helping the child to learn.
(b) An affective companion to help reduce anxiety and negative emotions and promote positive emotions and wellbeing. For this role, the expert group issued no recommendation regarding whether the relationship between the child and the robot should be symmetrical or asymmetrical.
(c) A coach or motivator for the children to comply and adhere to proper behaviors. For this role, the expert group issued no recommendation regarding whether the relationship between the child and the robot should be symmetrical or asymmetrical.
 
As we shall see in the following sections, Robin was designed to contribute to these three roles to different degrees.

3.1.2 The Needs of the Medical Staff

Discussions with the expert group also resulted in a list of areas in which the medical staff would welcome support from a companion robot [13], listed in order of increasing difficulty:
(a) Making the hospital environment more friendly.
(b) Improving the autonomy of the children in food selection and insulin management.
(c) Reinforcing and testing the knowledge that the children acquire about diabetes using the standard methods.
(d) Providing educational support in the explanation of important concepts.
(e) Improving children’s awareness and understanding through self-assessment.
(f) Improving the children’s (perceived) self-efficacy in diabetes management.
(g) Encouraging children to express and discuss their problems.
 
As we shall see in the following sections, Robin was designed to contribute to all these areas to different degrees.
In addition to these desiderata, the expert group suggested a number of potential scenarios and activities to achieve these objectives [3]. With Robin, we took up the challenge of creating a companion robot simultaneously implementing two of them, namely
  • A “robotic actor” that could act as a diabetic patient in different situations and “ask” the child how to behave and react accordingly. This scenario would support the goal of improving (perceived) self-efficacy in diabetes management from the above list.
  • An “insulin companion” that could improve children’s autonomy in food selection and insulin management.
Finally, the medical staff expressed a wish to have a robot that was autonomous at least for some of those activities. This would facilitate effective use in a hospital without requiring either the presence of technical staff or having medical staff spend valuable time controlling the robot.

3.2 Expert Advice Based on Early Prototypes

As part of the iterative design process, different prototypes of robot companions carrying out different activities with the diabetic children were developed over time, varying in the way they interacted with the children (e.g., verbally, non-verbally, or a combination of both), their mode of control (from a full Wizard-of-Oz model to a fully autonomous robot, with various cases in between) and the length of their interaction episodes.
The early prototypes implemented different games. Both the children and the robot were static—the robot either standing or crouching on top of a table, and the child sitting down or standing in front of the robot. When the project started, the Nao robot was not CE-marked; therefore, it was not possible to leave the children alone in the room with the robot or to let them touch it. This changed as the robot was awarded the certification, about one year into the project.

3.2.1 Initial Feedback

Although the interactions of the children with these early prototypes were very different from the interactions that we designed with Robin, the feedback that the expert group provided after observing different videos of the early prototypes proved very valuable to us. In particular, the following observations, relevant to supporting self-efficacy and promoting engagement, caught our attention:
  • All the children were interested in the robot, and although some children appeared to be more shy when interacting with the robot on their own, none of them seemed to be scared of the robot.
  • Interest in the different activities varied, as did the length of time the children appeared willingly engaged in the activity. For most activities, engagement did not last for the whole duration.
  • The more the robot moved and changed its behavior, the more engaged the children seemed.
  • The children seemed to want to touch the robot, and a few of them touched the robot unprompted.
  • Children naturally assigned a “will” to the robot, even in the case of those prototypes controlled by a Wizard.
  • The desk used as a platform for the robot and the restriction on touching the robot seemed to put up a barrier to the creation of a “friendship” bond.
  • The robot was too directive in many cases, leaving little room for the child to be spontaneous.

3.2.2 Expert Group Recommendations

When observing the early interactions, the expert group noted that, although the children were interested in the robot, they were not necessarily engaged in the interaction with it, nor were they treating it as an agent, still less as a friend. They offered some recommendations towards promoting a feeling of familiarity and friendly social interaction:
1. The head of the robot should be more or less at the same level as the head of the child, to facilitate “eye contact”.
2. The robot and the child should be as physically close as possible.
3. Since physical contact is very important in the establishment of positive bonds, the children should be allowed, or even encouraged, to touch the robot.
4. The early games were not affectively engaging. Interactions with higher emotional involvement should be attempted in order to assess to what extent the children perceived the robot as a believable and friendly social companion.
5. The activities should be engaging, in the sense of being both absorbing (demanding attention) and persistent (the child would not want to end them), and require active participation and self-determination on the part of the child.
6. The early prototypes provided too much explicit information. It would be interesting to see whether, if given fewer cues, the children would spontaneously come up with their own ways of interacting with the robot.
7. A one-way interaction, in which the robot was “the” knowledgeable partner directing the interaction, seemed to be less suitable for establishing a positive bond between the child and the robot.
8. Having the robot and the child “share” something of common interest could be a good way to encourage a more personal bond.
9. The robot should first be introduced in an environment that is well-known to the child.
 

3.3 Our Interpretation of User Needs

As previously mentioned in Sect. 3.1.2, we needed to fulfill the requirements of the two scenarios assigned to us, i.e., the “robot actor” and “insulin companion”:
  • Improving the children’s (perceived) self-efficacy in diabetes management.
  • Improving the autonomy of the children in food selection and insulin management.
In addition, since our group’s main research area is modeling affect, the end-user needs that we were in charge of addressing were those more closely related with affect, namely
  • Developing an “affective companion” to reduce anxiety and support wellbeing (Role ‘b’ for the companion robot from the list in Sect. 3.1.1).
  • Making the hospital environment more friendly (area ‘a’ from the list in Sect. 3.1.2).

3.3.1 Background to Our Approach

It is our view that a companion robot designed to support primarily affective aspects related to self-efficacy, autonomy and wellbeing would very likely also support the more “cognitive” objectives in the list in Sect. 3.1.2 (e.g., reinforcing or testing the knowledge the children acquire about diabetes as part of their training), given that affective and cognitive factors are highly intertwined [26].
Bringing these affective and cognitive aspects together, and following Bandura’s ideas outlined in Sect. 2.3, we decided to design our robot architecture and child–robot interaction scenario as a tool to increase perceived self-efficacy in the child primarily by giving them a mastery experience of diabetes management—in this case the child would manage the robot’s diabetes. We thought that an affective companion could also act as a motivator to promote adherence. We thus added these secondary goals to our “design requirements” list.
Regarding autonomy and the transition to self-management, which starts in our target age group, we agreed with [12] in thinking that “children need to be given more room to take responsibility for their illness and to take this responsibility early on and in a stepwise manner.” We took this view into account in designing an activity that gives the child some level of autonomy and responsibility, and in which they are expected to try out their knowledge of good diabetes management in an initially simple “task” that can be modified and made incrementally more complex.
The expert group recommendations summarized in Sect. 3.2.2 were all very much in line with, and could be interpreted in the light of, our Embodied AI approach to autonomous robots [16, 17], and our research on motivationally autonomous robots [15, 18, 23, 35] and on modeling the development of attachment [20, 29]. We thus decided to design a motivationally autonomous affective robot companion—named “Robin” after “Robot Infant”—to implement the “robot actor” and “insulin companion” scenarios that were given as part of the needs of the medical staff (Sect. 3.1.2).

3.3.2 Concrete Design Decisions

Our design decisions addressed the expert group recommendations (Sect. 3.2.2) as follows.
We decided to have Robin move around (“going about its business”) in a room, rather than placing it static on top of a desk. This would permit the child to see (or infer) what Robin was trying to do or wanted by observing the robot’s behavior, rather than by listening to its explanations and requests. This would portray Robin more clearly as an independent (and more believable) agent with its own “needs” and “desires”, and would also promote a more immediate and spontaneous interaction. The child would need to kneel down or sit on the floor to interact with Robin, follow it, hold it, move it around or help it to stand up when it had fallen over. This would make the interaction less formal and address recommendation 1 (robot and child at the same level), recommendation 2 (physical proximity) and recommendation 3 (physical contact).
To fulfill the requirements of the companion role that we had been assigned—the robot as affective companion, role ‘b’ in Sect. 3.1.1—we decided to reverse the roles of the robot and the child advised in the “robot as teacher” role. Here, we would thus give the child the role of the “grown up”, to foster an increased sense of responsibility. Robin would be a very young and friendly robot—a toddler—autonomous and independent but also in need of help and of being looked after. This addressed recommendation 7 (a “non-leading” robot would be more suitable to establish a positive bond), as well as recommendation 4 (emotional involvement related to Robin’s need to be looked after by the child and Robin’s “friendliness”) and recommendation 5 (looking after a very active toddler is certainly a very engaging activity, as defined above).
Being a toddler, Robin would not be able or expected to have complex speech and language understanding abilities. Robin would show its “desires” largely through behavior, occasionally using a few isolated words. This would prevent frustration due to poor language understanding and dialog capabilities, and also addressed recommendation 6—the robot should not always provide explicit explanations, to permit the child to explore different aspects of the interaction.
To implement the scenarios that we were given as part of the needs of the medical staff (Sect. 3.1.2), i.e., the “robotic actor” and “insulin companion”, we gave Robin “robot diabetes”—not simply “acting out” symptoms but actually having an internal model of diabetes that would affect its needs and behavior. This permitted us to design an interaction scenario to work on improving the autonomy of the children in food selection and insulin management and their self-efficacy. This additionally addressed recommendation 8—Robin and the child had something significant in common. Finally, we placed the interaction in a toddler’s playroom, following recommendation 9—a friendly environment familiar to the child.

4 Designing a “New AI” New Friend

In this section, we provide details of how we implemented Robin addressing the requirements described in Sect. 3 from the perspective of Embodied Artificial Intelligence, also known as “New AI”. Let us recall that we have implemented the Robin character using the hardware (but not the control software, which is our own) of a standard commercial Nao robot developed by Aldebaran (https://www.aldebaran.com/).
To achieve our assigned objectives, we thought it would be important that the interaction was unstructured and partly ambiguous and unpredictable, as this would make this “play” experience feel closer to the complexity of real diabetes self-management. As we shall see below, the use of a motivationally and cognitively autonomous robot [16, 34] (rather than, e.g., a scripted system) is instrumental to this end. It also makes each interaction unique, due to both the dynamics of the architecture in interaction with the physical and social environment (the robot never behaves in exactly the same way twice), and to the different ways in which each child treated the robot.

4.1 Design of the Robot Architecture

Robin’s decision-making architecture follows principles of Embodied AI [13, 44], also known as “New AI” in its earlier days. Drawing on our previous research [15, 16], our approach is built around a “physiology” of homeostatically controlled “survival-related” variables that Robin needs to keep within permissible values. We have also given the robot a simple model of Type 1 diabetes, comprising an internal blood glucose level that increases upon “eating” toy food, and decreases with “insulin”. Robin chooses how to behave as a function of these internal needs and the stimulation it gets from the environment. Elements of the environment are detected using vision (e.g., foods, faces) and tactile contact (e.g., collisions, strokes, hugs). Internal needs and environmental cues are mathematically combined in what we call motivations. Motivations lead Robin to autonomously select behaviors from its repertoire (e.g., walking, looking for a person, eating, resting) that best satisfy its needs (e.g., social contact, nutrition, resting, playing) in the present circumstances. For this reason, Robin is a motivationally and cognitively autonomous robot.
To foster appropriate behavior in the children in our scenario, Robin is not capable of fully attending to all its needs without human assistance. It can play on its own, eat, and, by resting, it can recover from tiredness caused by too much movement. However, Robin requires assistance from the children to satisfy its social needs (e.g., social presence, strokes, hugs), some of its nutritional needs (the child can “feed” the robot using toy food items), and to control its blood glucose level.
We describe below Robin’s control architecture in terms of how its main elements—sensors, actuators, diabetes model, and action selection loop—relate to our design requirements and decisions.

4.1.1 Sensors

Robin uses vision, tactile contact and interoception to detect relevant elements in its external and internal environment. The sonars fitted on the robot’s chest are used by the walking behaviors to prevent walking into objects, as well as to detect hugs from people. The foot bumpers are used to detect collisions. The head touch sensor is used to detect “strokes” (the robot makes a purring sound to give feedback). One of the head cameras is used to provide vision, detecting colored objects (e.g., the toy food objects used in our scenario) and faces. The gyroscopes are used to detect when the robot has fallen. Simulated sensors are used to detect the levels of the homeostatically-controlled essential internal variables.
Sensors can be very noisy but, following an Embodied AI approach, instead of trying to make perception “perfect” by adding complex pre- or post-processing, we keep the sensory information that the different behaviors use very simple, and exploit the noise by using it as part of the interaction scenario. For example, for Robin a face is perceived when two “eyes” and a “mouth” (or something with a similar abstract shape, located at roughly the appropriate distance) are detected. This happens when a human face is present, but it can also happen with other objects and with drawings of faces. We used this feature to modulate Robin’s sociability by placing pictures and drawings of Robin and its friends on the walls of the playroom, as we will discuss in Sect. 4.2.
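As an illustration of the kind of lightweight face heuristic this implies, the Python sketch below checks whether three detected blobs form a plausible “two eyes and a mouth” arrangement; the geometric thresholds and the function name are our own illustrative assumptions and do not reproduce the detector actually running on the Nao.

    # Illustrative "face-like" check over detected blobs: two "eyes" roughly level
    # and a "mouth" below, centred between them. Thresholds are assumptions.
    def looks_like_face(eye1, eye2, mouth, tol=0.4):
        """Each argument is the (x, y) image position of a detected blob."""
        (x1, y1), (x2, y2), (xm, ym) = eye1, eye2, mouth
        eye_dist = abs(x1 - x2)
        if eye_dist == 0:
            return False
        eyes_level = abs(y1 - y2) < tol * eye_dist            # eyes roughly level
        mouth_below = ym > max(y1, y2)                        # mouth under the eyes (y grows downward)
        mouth_centred = abs(xm - (x1 + x2) / 2) < tol * eye_dist
        return eyes_level and mouth_below and mouth_centred

A simple drawing of a face on the playroom wall passes such a check just as well as a real face, which is precisely the property we exploit.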

4.1.2 Actuators

Robin uses its effectors for two main purposes: to carry out actions related to its needs and behaviors, and to convey information about its internal state. Regarding the former, Robin uses its legs to walk and move around, moves its head to visually detect and track objects, and uses its hand to reach for and “consume” food and drink items. It uses bodily postures and movement, as well as simple vocalizations, to convey its internal states (e.g., tiredness, happiness, sadness).
Similarly to sensors, actuators can be very noisy and again, following an Embodied AI approach, instead of trying to make actuation “perfect”, we exploit the noise in the actuators by using it as part of Robin’s behavior. For example, the “clumsiness” of the walking (the movement and the falls) caused by the varying speed of the actuators, as well as some elements of noise related to active perception, were exploited as part of Robin’s behavior, increasing its believability as a toddler.

4.1.3 Diabetes Model

We have implemented in Robin a simple simulated glucose physiology that tracks what food has been eaten and is being digested (gradually raising the blood glucose), what insulin doses have been given (gradually being released into the blood, lowering blood glucose), and the amount of physical activity (determined by the current to the joint motors), which acts to slightly lower blood glucose. Hypo- and hyper-glycemia have associated symptoms, such as increased tiredness, resulting in a change in behavior and alerting the children to the potential presence of a problem. Using a Bluetooth “glucometer” device, the children can measure Robin’s glucose levels and provide insulin to lower its glucose (correcting hyper-glycemia). They can feed Robin high-glucose food to raise its glucose (correcting hypo-glycemia).
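To make the mechanism concrete, the following Python sketch shows one way such a simple glucose physiology could be structured; the class and method names, rate constants and thresholds are illustrative assumptions rather than the model actually implemented in Robin.

    # Sketch of a simulated "robot diabetes" physiology. Rate constants, units and
    # thresholds are illustrative assumptions, not Robin's actual parameters.
    class SimpleGlucoseModel:
        def __init__(self, glucose=100.0):
            self.glucose = glucose          # simulated blood glucose
            self.digesting_carbs = 0.0      # food eaten but not yet absorbed
            self.insulin_on_board = 0.0     # insulin given but not yet released

        def eat(self, carbs):
            self.digesting_carbs += carbs

        def give_insulin(self, units):
            self.insulin_on_board += units

        def update(self, dt, motor_current):
            # A fraction of the digesting food is absorbed each step, raising glucose.
            absorbed = 0.05 * dt * self.digesting_carbs
            self.digesting_carbs -= absorbed
            # A fraction of the insulin on board acts each step, lowering glucose.
            acting = 0.05 * dt * self.insulin_on_board
            self.insulin_on_board -= acting
            # Physical activity, approximated by joint-motor current, slightly lowers glucose.
            self.glucose += 2.0 * absorbed - 10.0 * acting - 0.01 * dt * motor_current

        def condition(self):
            if self.glucose < 70.0:
                return "hypo"
            if self.glucose > 180.0:
                return "hyper"
            return "ok"

In Robin’s scenario, a hypo- or hyper-glycemic state increases the need to rest (Sect. 4.1.4), so the child first notices a change in behavior and can then confirm the cause with the glucometer.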

4.1.4 Action Selection: Essential Variables, Motivations and Behaviors

An overview of Robin’s action selection architecture is shown in Fig. 1. The architecture is an improved version of that described in a previous paper [34].
Robin has a “physiology” comprising four internal essential variables that are controlled homeostatically—mostly through interaction with the world by executing behaviors. They give Robin four internal needs, corresponding to its homeostatic deficits, which grow with time when not attended to. They are the following:
  • Need for food: its intensity is determined by the foods currently being “digested”.
  • Need to socialize: the level of this deficit decreases with social interaction (seeing faces, having its head stroked or being hugged) and increases without this interaction.
  • Need to rest: its intensity increases with hot joint motors, or in high and low blood glucose situations.
  • Need to play: its intensity increases over time, but Robin can “dance” to decrease it.
These internal needs are mathematically combined with relevant perceptions from the external environment (e.g., the presence of food or faces) in what we call motivations. Motivations lead Robin to autonomously select behaviors from its repertoire (e.g., walking, looking for a person, eating, resting) that best satisfy its needs in the present circumstances.
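As a rough numerical illustration of this combination (the functional form and the weighting below are our own assumptions, not the published equations), a deficit can be amplified by the presence of a relevant external stimulus:

    # Illustrative combination of an internal deficit and an external cue into a
    # motivation intensity. The functional form and weight are assumptions.
    def motivation_intensity(deficit, cue, cue_weight=0.3):
        """deficit and cue are normalised to [0, 1]; higher means stronger."""
        return min(1.0, deficit + cue_weight * deficit * cue)

    # Example: the same hunger deficit is more pressing when toy food is visible.
    hunger_with_food = motivation_intensity(deficit=0.6, cue=1.0)   # 0.78
    hunger_no_food   = motivation_intensity(deficit=0.6, cue=0.0)   # 0.60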
To satisfy these motivations, we have provided Robin with a number of (appetitive and consummatory) behaviors designed to correct the relevant deficits when they are executed. Figure 1 shows the “high level” composite behaviors, while Table 1 shows the simpler behaviors making up those complex behaviors associated with each motivation.
Table 1  Simpler behaviors making up Robin’s complex behaviors

Motivation | Behaviors
Hunger     | Search for food items / eat
Socialize  | Search for person / solicit hug
Tiredness  | Sit down and rest / express tiredness
Play       | “Dance” / explore
Motivations and behaviors are dynamically assigned intensity levels that indicate how relevant their execution would be given the current situation (a combination of internal needs and external circumstances). Behaviors “inherit” their intensity levels from the motivation they relate to; they are hence more likely to be selected for execution when the intensity of the corresponding motivation is high.
To select the behavior(s) that Robin executes, our action selection loop, executed every 0.125 s, first checks the intensity levels of the behaviors; as a consequence, new behaviors can be selected for execution on any cycle. Multiple behaviors can be executed simultaneously; for example, Robin can speak while reaching for an object and walking towards it. The major restriction on this is that no two behaviors sharing an actuator group (e.g., each arm, the legs, the head, the voice) can run at the same time. This permits the generation of richer overall behavior from a smaller collection of simple behaviors.
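The sketch below illustrates this kind of selection loop: intensities are refreshed, behaviors are picked greedily by intensity, and no two selected behaviors may claim the same actuator group. The 0.125 s cycle and the actuator groups are taken from the text; the rest (class and function names, the greedy policy) is an illustrative assumption.

    import time

    # Sketch of a periodic action selection loop with mutually exclusive actuator
    # groups. Structure and names are illustrative assumptions.
    class Behavior:
        def __init__(self, name, groups):
            self.name = name
            self.groups = set(groups)   # actuator groups this behavior needs
            self.intensity = 0.0        # inherited from its motivation each cycle

    def select_behaviors(behaviors):
        """Pick behaviors in order of intensity, skipping any behavior whose
        actuator groups have already been claimed in this cycle."""
        claimed, selected = set(), []
        for b in sorted(behaviors, key=lambda b: b.intensity, reverse=True):
            if b.intensity > 0 and not (b.groups & claimed):
                selected.append(b)
                claimed |= b.groups
        return selected

    def run(behaviors, update_intensities, execute, cycle=0.125):
        while True:
            update_intensities(behaviors)       # from motivations and perception
            for b in select_behaviors(behaviors):
                execute(b)                      # start or continue the behavior
            time.sleep(cycle)

Under such a scheme, a “speak” behavior (voice group) can run alongside “reach” (one arm) and “walk towards” (legs), while two behaviors that both need the legs cannot.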
Our choice of the specific motivations and behaviors was intended to address key features of our scenario:
  • Hunger was included to make Robin partially dependent on the child, and to connect with our purpose of making a diabetes-related interaction.
  • Socialization was included to make the robot friendly to the child, and to cue interaction related to social bonding.
  • Tiredness was included as it is one of the main symptoms of diabetes, as well as to allow both robot and child to rest from activity from time to time.
  • Solo play was included to give Robin independence, with the dancing behavior contributing to a happy/playful personality.
To enhance the affective elements of Robin, the architecture includes simple pleasure and displeasure hormones based on the dynamics of the homeostatically controlled needs. The levels of these two hormones influence the intensity of a vocalization behavior, as well as which sounds are selected from a small repertoire of recordings of human voices with positive or negative valence.
A second type of very simple vocalizations—single words expressing some of Robin’s internal needs—are used to help the children interpret Robin’s needs more clearly.
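A minimal sketch of how the two hormone levels could bias vocalization is given below, assuming hypothetical sound-file names and a hypothetical probability rule; the actual selection mechanism and recordings are not reproduced here.

    import random

    # Illustrative valence-based vocalization choice driven by "pleasure" and
    # "displeasure" hormone levels in [0, 1]. File names and the probability
    # rule are hypothetical.
    POSITIVE_SOUNDS = ["happy_1.wav", "giggle_1.wav"]
    NEGATIVE_SOUNDS = ["sad_1.wav", "whine_1.wav"]

    def choose_vocalization(pleasure, displeasure):
        intensity = max(pleasure, displeasure)
        if random.random() > intensity:          # weak affect: vocalize rarely
            return None
        if pleasure >= displeasure:
            return random.choice(POSITIVE_SOUNDS)
        return random.choice(NEGATIVE_SOUNDS)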

4.2 Design of the Interaction Scenario

We designed the interaction scenario to be as natural and friendly as possible, in order to provide a positive experience to the children—not only a positive mastery experience of diabetes management but also a positive social and affective experience. Below we highlight some of the elements that show how this was approached in our design.

4.2.1 Physical Elements of the Environment

As already mentioned, the interaction is located in a toddler’s playroom, a familiar place for the children that provides the right environment for a playful, natural, unstructured interaction. The playroom (Fig. 2) has a floor of white rubber play tiles, so that the child can move around comfortably and the robot does not suffer damage when it falls over, and walls made of lightweight wooden frames covered with fabric in soothing colors. The playroom can be configured to have different shapes and sizes; for the interactions discussed in Sect. 5, we used an area of approximately six square meters.
We decorated the playroom with soft toys scattered around the floor and pictures of Robin and its “family” on the walls. We put elements important for diabetes management—the glucometer, water, and sugary food and drink (blue items) that can be used as corrections—on a low table. There are no specific places allocated to the robot and the child, who share the same space.

4.2.2 Social Elements

We present Robin to the children as a friendly but independent agent, who will both approach and move away from them, “engaging” and “disengaging” in the interaction. Since the impression given by the first encounter is very important, Robin is already actively moving around when the child arrives. This impression is reinforced by the pictures and toys in the playroom, which provide clues about Robin’s life beyond the interaction.
The role of the adult that introduces Robin to the child is also key to establishing the agency of Robin. To this end, rather than purely instructing the child in what to do and treating Robin as an artifact, the adult demonstrates what to do by interacting with Robin. Further, the adult can cue interaction between the child and the robot by engaging in certain types of interaction themselves; for example, if they praise Robin, or use the toys in the room to entertain it, then the children may try to do the same when they are interacting with Robin.
Other social elements of the playroom include the pictures and drawings of Robin and its “family” already mentioned; detected as faces by Robin, they contribute to satisfying its needs for sociability and provide a “distraction” causing Robin to move away from the child. This “disengaging” behavior is very important in social interaction as it makes engagement episodes more significant, and reinforces the independence of Robin.
Another element related to an Embodied AI design philosophy is the use of ambiguity. The behavior of Robin was deliberately designed to include some ambiguity so that children had to explore different options by “probing” Robin and various elements of the environment, harnessing their creativity and supporting the goal of helping them to apply their knowledge of diabetes and diabetes management in ways that are not externally imposed. An example related to diabetes management is the “tired” expression, which can be the result of moving around, or a symptom of hypo- or hyper-glycemia (low or high blood glucose). The children can distinguish between these three types of tiredness by using the Bluetooth glucometer device. An example of an ambiguous social signal is Robin’s “reaching” behavior (raising and opening one or both arms), which can happen under very different circumstances and can be socially interpreted in various ways—e.g., as pointing, as “greeting”, as a gesture of friendship, as an indication of looking for something.
Social interactions in this environment do not need to be for a fixed number of participants. In addition to the adult present at the beginning and the end, we can have one or more children without having to modify any element of either the robot or the environment. We have used Robin in both dyadic child–robot interactions and triadic interactions with two children.

5 Making New Friends in Their Natural Environment

In this section we discuss how our design of Robin gave rise to friendly and natural social behavior in interactions with diabetic children. Our purpose here is not to give an account of these interactions, but rather to relate some of our observations to the elements of our design and user requirements.

5.1 A Brief Overview of the Interactions with Robin

As part of the ALIZ-E project, we carried out pilot interactions with 17 Italian diabetic children within our target age range at a hospital and a diabetes summer camp, to assess whether the elements of the interaction were appropriate, believable and engaging with a variety of children in a real-world context. These studies were not (yet) intended to assess the effectiveness of Robin as a tool to support self-efficacy in diabetes management. A pre-pilot with Italian non-diabetic children provided initial feedback on the social aspects of our design. A small pilot with diabetic children at San Raffaele Hospital provided initial feedback regarding the diabetes-related elements of the robot and the scenario, as well as on social aspects of the interaction and engagement with diabetic children. A larger pilot at the SOStegno70 Summer Camp for diabetic children in Misano Adriatico (www.sostegno70.org) provided feedback from a wider variety of diabetic children. We refer the reader to [35] for a detailed description of these interactions. Here we briefly summarize their structure in order to give context for our discussion of Robin as a “New AI friend”.
The interactions of the children with Robin are unscripted, following our Embodied AI approach. They have a “high-level” structure comprising three phases, with a total duration of about half an hour:
1. Introductions (about 5 min) The diabetic child enters Robin’s playroom where an adult, who is interacting with Robin, is present. The adult introduces Robin and the child to each other, and shows the child how to feed Robin, how to use its glucometer to measure glucose and give insulin, and elements of social interaction with the robot.
2. Child & robot alone together (about 15–20 min) After the child has learned how to interact with Robin, the adult will ask if the child would mind looking after Robin while they leave the room for a while. Before leaving, the adult provides the child with a mobile phone which they can use to get help. All our children agreed to be left alone with the robot. The interaction is remotely monitored by the experimenters. During this phase of the interaction, Robin shows some symptoms of diabetes, such as stopping moving around and sitting down, being tired or sleepy. We then expect the child to check the robot’s blood glucose, and either give a dose of insulin, or feed the robot one of the high-sugar food items. They may also choose to provide some sort of comfort to Robin. The experimenters may choose to send an adult back if the child appears to be having any difficulties. Once the robot has recovered, it will stand and again start to walk around.
3. Return of the adult and debriefing (about 5–10 min) The adult returns to the playroom and asks the child how they and the robot are. The children have the opportunity to describe the interaction in their own words. Shortly afterwards, an experimenter arrives to pick up the child, debriefs them away from the playroom, and helps the child fill in a questionnaire about the interaction.
These interactions were very natural and Robin “made lots of friends”. As we discuss in Sect. 5.2, all the children showed concern for Robin, becoming socially, cognitively and emotionally invested in the interaction; this shows that our design gave rise to interaction in line with the expert group recommendations discussed in Sect. 3.2.2 towards promoting a feeling of familiarity and friendly social interaction. The children gave signs of willingness to continue the interaction after it was meant to end. Also, as we discuss in Sect. 5.3, all the children showed visible signs of treating Robin as a socially acceptable agent and interacted in a friendly manner, although the form these signs took varied enormously across individual children.

5.2 Interactions in Line with the Expert Group Recommendations

We first discuss some of the observations that show that our design gave rise to interactions in line with the nine recommendations of the expert group for promoting familiarity and a friendly social interaction discussed in Sect. 3.2.2.
During the interaction, the children willingly got on the floor, variously kneeling, squatting or sitting (occasionally lying) so they were at the same level as the robot. This was fully in line with recommendation 1, that the robot and the child should be at the same physical level.
Even when the children weren’t doing something that necessitated being close to the robot (such as hugs), the children and robot were often physically very close, with the robot’s feet almost touching the child, and with nothing in between them. Sometimes it was the robot that moved closer to the child, and sometimes it was the child moving towards the robot (in line with recommendation 2, physical proximity).
The amount and type of physical contact between the children and Robin varied greatly from child to child. Most children would pick up and move Robin around in different ways. Some of the children touched the robot in ways not anticipated by us, for example, holding its hand as it walked, or laying the sleepy Robin on their lap (in line with recommendation 3, physical contact).
All the children were very engaged (some more responsive, some more proactive) in helping Robin when it appeared to need it. For example, all the children willingly stroked Robin’s head and hugged it when requested, and helped it when it fell over. Many of them would also provide social comfort such as unprompted affective touch (e.g., lightly touching its “nose”), especially (although not only) when it appeared to be in trouble, such as during a hypo or hyper (in line with recommendation 4, emotional involvement).
Looking after Robin certainly requires attention, even at a basic level of simply responding to its verbal requests. Despite the relatively lengthy interactions, all the children were fully engaged throughout the interaction until they were asked to leave, and some tried to delay their departure from the playroom, or came back to help Robin while filling out the post-interaction questionnaire (in line with recommendation 5, engagement).
Since there are no rules about exactly what the children should do during their time alone with Robin, their behavior varied enormously. In addition to behavior aimed at looking after Robin’s needs, many children would try to entertain Robin using the toys in the room. We also saw more creative ways of interacting with Robin. Two examples of this: one child took a plastic spoon and held it near Robin’s mouth, as though giving it a spoonful of medicine; and in an interaction with two children, they arranged the soft toys as a bed and laid Robin gently on it (in line with recommendation 6, exploration/discovery).
The fact that the children themselves, rather than Robin, were the “knowledgeable grown-ups” in our scenario did seem to promote a positive bond well beyond the “task” of having to look after Robin (in line with recommendation 7, avoid “knowledgeable directive” robot). The children showed many signs of engagement, treating Robin in a friendly manner and as a believable social agent. This will be discussed in Sect. 5.3.
The fact that the behavior of Robin sometimes changed as a result of its diabetes, showing symptoms that were similar to symptoms experienced by the children, prompted empathic behavior towards the robot with references to, and reflections on, their own experience. For example, this often happened with the symptoms of tiredness and sleepiness, which the children were very quick and careful to attend to (either “medically” or “socially”), some of them telling Robin of a similar experience of theirs, some reporting the similarity of the situations in the debriefing (in line with recommendation 8, sharing something to promote a personal bond).
Finally, since the interaction is unstructured, there are no designated areas for the child and for the robot. In practice, the child and the robot were in close proximity for most of the interaction, and moved freely about the playroom, sharing the same space, sometimes approaching each other, sometimes one following the other around, sometimes spending time in one place doing a specific activity. The children looked very comfortable in the environment (in line with recommendation 9, friendly environment).

5.3 Robin Treated as a Social Agent

We now discuss some of the observed behaviors that, based on the literature, can be taken as signs that the children related to Robin as a socially acceptable agent, and a friendly one.
The social signals most broadly used in the literature to model and assess the social qualities of human-robot interaction are gaze [5], physical proximity [36, 40], tactile contact [19, 46], joint attention [5, 32, 48], mirroring [41, 49], imitation [24, 39, 48], synchrony [5, 25, 45] and coordination [28] of behavior, empathic behavior [33], and body posture [10, 47] and orientation [14]. Children varied enormously in the ways that they used these to indicate a positive connection with Robin.
In the previous section, we have already commented on the variety of physical (tactile) contact by the children and the fact that all the children remained in close physical proximity to the robot. We will thus primarily focus on other social signals in the remainder of this section.
Gaze was indeed used widely. The majority of children would look at Robin and follow it with their gaze for extended periods during the interaction, or look at Robin while it was playing on its own. However, other signals were also used. A particularly interesting case was a boy who rarely looked at Robin and appeared distracted, yet stayed in close proximity to the robot for most of the interaction, sometimes touching its hand while looking away. This child would also mirror the robot’s arm gestures, and showed various other subtle empathic behaviors, offering social comfort when Robin was tired before attending to its diabetes-related needs. Another potentially empathic behavior was this child’s checking of his own insulin pump before and after using Robin’s glucometer (see recommendation 8, sharing something). Moreover, although he superficially appeared distracted, he interrupted whatever he was doing to rush to attend to every single verbal request from Robin.
The mirroring of Robin’s gestures by this child was not an isolated episode, and mirroring of especially salient behaviors of the robot, such as arm gestures, clapping, dancing and vocalizations, was common.
We also observed empathic responses to flaws in Robin’s behavior, as we had already observed with the ALIZ-E Integrated System [42]. An example of this was when Robin fell over. In our scenario, falling was perceived as a natural aspect of being a toddler, and prompted in all children a willingness to help. However, again, we observed very different ways of doing this. While the majority of children would physically help Robin, some would stand nearby with their arms open, ready to help if Robin needed it. In a more extreme case, one child kept asking Robin whether it wanted to be helped (without actually helping, since Robin did not confirm), even after it had repeatedly failed to stand up on its own. This latter “respect” for Robin’s autonomy indicates that the robot was being treated as an independent agent.
Body orientation was also widely used. Most children would orient their body towards Robin when interacting with it, or would orient the robot towards them when it was “distracted” and they wanted it to interact with them.
Joint attention was also very common, for example when Robin paid attention to objects on the table or to pictures on the walls. The latter case was particularly interesting, as the children would often follow the robot on their knees and look at the pictures with it.
There was great variety in the children’s vocalizations. Some spoke very little, while others gave an almost continual commentary to Robin while they were alone with it, despite the fact that we had not programmed any responses to sound and the children knew that Robin would not understand them.
The use of greetings to signal a socially acceptable interaction has been less commonly investigated, a notable exception being [6]. In our interactions, children would commonly call Robin by its name, and said “Ciao, Robin” at the beginning and/or end of the interaction. Some of them followed the adult’s behavior in this, but some of them did it unprompted.
In addition to greetings, other signals of friendly social behavior towards an agent less commonly explored in the HRI literature, but that we have observed, include: divided attention when the adult was explaining things, asking Robin whether it wanted/needed things or what it was looking for, asking the robot specific questions about what it was feeling (“are you tired?”, “are you hungry?”), and telling it things relating to their own experience.
Finally, we wish to comment on the children’s responses to the disengagement that Robin showed from time to time. We want to highlight the importance of disengagement in social interaction since, although it is well known in developmental psychology (e.g., [30, 38]), most HRI studies focus on maintaining and measuring continued engagement. As already mentioned, Robin’s disengagement from interaction with the children to look for other things, wander off, or initiate other activities strongly signals that Robin is an independent agent with its own motivations and desires. All the children responded appropriately to this signal, showing behaviors that they would show with a real toddler they like interacting with. These behaviors nevertheless varied widely, ranging from trying to regain Robin’s attention with other objects such as toys, to directively picking up the robot and turning it around to face them, to respecting its independence while following it at a distance.

6 Conclusion

In this paper we have discussed how Robin, an autonomous affective robot toddler, was designed as a friendly social agent following Embodied AI principles in order to support self-efficacy and emotional wellbeing in children with diabetes. Our design pays particular attention to the affective and social aspects of the interaction.
Our aim in this paper has not merely been to report empirical results. Rather, our main focus has been on how the rationale behind the design of Robin meets the needs of our intended end users (both children and medical staff), and how an Embodied AI approach, largely ignored by the HRI and CRI communities, provides a suitable tool for developing a friendly companion that acts, and is perceived, as an independent and socially acceptable agent. The use of an autonomous robot with its own motivations, and one “having” diabetes, meant that Robin was a “life-like” character in unscripted interactions that were meaningful to diabetic children. The interactions differed as a function of the combination of Robin’s dynamically changing internal state and how the children interacted with it, providing personalized experiences to individual children without having to modify the software. Equally without modifying the software, Robin can scale up to triadic and small-group interactions. Finally, due to the characteristics of the Embodied AI design, Robin is very robust to unexpected elements of the environment and the interaction.
Initial pilot interactions with Italian diabetic children provided several positive outcomes. Robin acted as a believable and engaging social interaction partner that behaved, and was treated, as a friendly but independent agent. The pilots also showed the robustness of Robin in a real-world situation with a wide variety of children and interaction styles. Finally, they showed that Robin and the interaction were appropriately designed as tools to support self-efficacy in diabetic children. However, the effectiveness of Robin as a tool to improve self-efficacy in diabetes self-management still needs to be assessed. This will be a long process, likely requiring further development of the interaction and the robot software, as well as longer-term piloting and trialing in a clinical context.
Finally, we have discussed in detail specific signs of friendly and engaging social interaction observed in the children, including signs broadly used in the HRI and signal processing literature, as well as less commonly used indicators more closely linked to the fact that Robin is a motivationally autonomous agent, and is perceived and treated as such.

Acknowledgments

We would like to thank the teams at Fondazione Centro San Raffaele and the Misano Adriatico Summer Camp, in particular Marco Nalin, Ilaria Baroni, Elettra Oleari, Clara Pozzi, Marco Mosconi, Francesca Sacchitelli, Sara Bellini, Marco Moura, Mattia Zelati and Alberto Sanna. We would also like to thank all the children who took part in the interactions with Robin, as well as their families. The opinions expressed are solely the authors’. This work was supported partly by EC Grant FP7-ICT-248116 (ALIZ-E) and partly by the University of Hertfordshire.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Footnotes
1. See ALIZ-E deliverables D5.1.1 [1], D5.1.2 and D5.1.3 regarding the definition of clinical requirements and possible scenarios, and deliverables D5.2, D5.4, D5.7 and D5.8 regarding the expert group. These deliverables are available from the project website http://www.aliz-e.org.
2. The diabetic patients’ association SOStegno70, organizer of the Summer Camps in Misano Adriatico where the two ALIZ-E prototypes, a consortium-wide prototype known as “the Integrated System” [11, 22] and our robot Robin [34], were tested (Sect. 5.1).
3. See Sect. 2.3 for definitions of these terms.
4. The fact that motivation levels are partially determined by data from sensors means that Robin can respond rapidly to environmental cues and changes.
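Footnote 4 notes that motivation levels are partly driven by sensor data. Purely as an illustration of that general idea, and not as a description of Robin’s actual software, the minimal Python sketch below shows one common way a motivation’s intensity can combine a slowly varying internal deficit with a live sensory cue, so that a salient external stimulus can rapidly change which behavior is selected. All names, values and the specific combination formula are hypothetical.

```python
# Illustrative sketch only (not Robin's implementation): cue-modulated motivations.
# A motivation's intensity combines an internal deficit with an external sensory
# cue, so a salient stimulus detected by the sensors can quickly change which
# behavior wins the selection. All names and formulas are hypothetical.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Motivation:
    name: str        # e.g. "rest", "play"
    deficit: float   # internal homeostatic error, updated slowly over time
    cue_gain: float  # how strongly an external cue amplifies this motivation

    def intensity(self, cue: float) -> float:
        """Internal deficit amplified by the strength of an external cue (0..1)."""
        return self.deficit * (1.0 + self.cue_gain * cue)


def select_behavior(motivations: List[Motivation], cues: Dict[str, float]) -> Motivation:
    """Winner-takes-all selection over cue-modulated motivation intensities."""
    return max(motivations, key=lambda m: m.intensity(cues.get(m.name, 0.0)))


if __name__ == "__main__":
    motivations = [
        Motivation("rest", deficit=0.5, cue_gain=0.5),
        Motivation("play", deficit=0.3, cue_gain=3.0),
    ]
    # A toy detected by the camera raises the "play" cue: the selection flips quickly.
    print(select_behavior(motivations, {}).name)             # -> rest
    print(select_behavior(motivations, {"play": 0.9}).name)  # -> play
```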
 
References
4. Anderson BJ, Brackett J (2005) Diabetes in children. In: Snoek FJ, Skinner TC (eds) Psychology in diabetes care, 2nd edn. Wiley, Chichester, pp 1–25
5. Anzalone SM, Boucenna S, Ivaldi S, Chetouani M (2015) Evaluating the engagement with social robots. Int J Soc Robot 7(4):465–478
6. Baddoura R, Venture G (2015) This robot is sociable: close-up on the gestures and measured motion of a human responding to a proactive robot. Int J Soc Robot 7(4):489–496
7. Bandura A (1977) Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev 84(2):191–215
8. Bandura A (1997) Self-efficacy: the exercise of control. Worth Publishers, New York
9. Bandura A, Adams NE, Beyer J (1977) Cognitive processes mediating behavioral change. J Personal Soc Psychol 35:125–139
10. Beck A, Cañamero L, Hiolle A, Damiano L, Cosi P, Tesser F, Sommavilla G (2013) Interpretation of emotional body language displayed by a humanoid robot: a case study with children. Int J Soc Robot 5(3):325–334
11. Belpaeme T, Baxter P, Read R, Wood R, Cuayáhuitl H, Kiefer B, Racioppa S, Kruijff-Korbayová I, Athanasopoulos G, Enescu V, Looije R, Neerincx M, Demiris Y, Ros-Espinoza R, Beck A, Cañamero L, Hiolle A, Lewis M, Baroni I, Nalin M, Cosi P, Paci G, Tesser F, Sommavilla G, Humbert R (2013) Multimodal child-robot interaction: building social bonds. J Hum Robot Interact 1(2):33–53
12. Blanson Henkemans OA, Hoondert V, Schrama-Groot F, Looije R, Alpay LL, Neerincx MA (2012) I just have diabetes: children’s need for diabetes self-management support and how a social robot can accommodate their needs. Patient Intell 4:51–61
13. Brooks RA (1991) New approaches to robotics. Science 253(5025):1227–1232
14. Bruce A, Nourbakhsh I, Simmons R (2002) The role of expressiveness and attention in human-robot interaction. In: Proceedings of the 2002 IEEE international conference on robotics and automation (ICRA’02), vol 4, pp 4138–4142
15. Cañamero L (1997) Modeling motivations and emotions as a basis for intelligent behavior. In: Proceedings of Agents’97, ACM, pp 148–155
16. Cañamero L (2001) Emotions and adaptation in autonomous agents: a design perspective. Cybern Syst 32(5):507–529
17. Cañamero L (2005) Emotion understanding from the perspective of autonomous robots research. Neural Netw 18:445–455
18. Cañamero L, Avila-García O (2007) A bottom-up investigation of emotional modulation in competitive scenarios. In: Affective computing and intelligent interaction, second international conference (ACII 2007), Lisbon, Portugal, September 12–14, 2007, Proceedings, pp 398–409
19. Cañamero L, Fredslund J (2001) I show you how I like you—can you read it in my face? IEEE Trans Syst Man Cybern Part A 31(5):454–459
20. Cañamero L, Blanchard AJ, Nadel J (2006) Attachment bonds for human-like robots. Int J Humanoid Robot 03(03):301–320
22. Coninx A, Baxter P, Oleari E, Bellini S, Bierman B, Blanson Henkemans O, Cañamero L, Cosi P, Enescu V, Ros Espinoza R, Hiolle A, Humbert R, Kiefer B, Kruijff-Korbayova I, Looije R, Mosconi M, Neerincx M, Paci G, Patsis G, Pozzi C, Sacchitelli F, Sahli H, Sanna A, Sommavilla G, Tesser F, Demiris Y, Belpaeme T (2013) Towards long-term social child-robot interaction: using multi-activity switching to engage young users. J Hum Robot Interact 5:32–67
23. Cos-Aguilera I, Cañamero L, Hayes G, Gillies A (2013) Hedonic value: enhancing adaptation for motivated agents. Adapt Behav 21(6):465–483
24. Dautenhahn K (1994) Trying to imitate—a step towards releasing robots from social isolation. In: Proceedings of the From Perception to Action Conference, 1994, IEEE, pp 290–301
25. Delaherche E, Chetouani M, Mahdhaoui A, Saint-Georges C, Viaux S, Cohen D (2012) Interpersonal synchrony: a survey of evaluation methods across disciplines. IEEE Trans Affect Comput 3(3):349–365
26. Hanas R (2015) Type 1 diabetes in children, adolescents and young adults, 6th edn. Class Health
27. Heller S (2005) Foreword to the second edition. In: Snoek FJ, Skinner TC (eds) Psychology in diabetes care, 2nd edn. Wiley, Chichester, pp 15–16
28. Hiolle A, Cañamero L, Andry P, Blanchard A, Gaussier P (2010) Using the interaction rhythm as a natural reinforcement signal for social robots: a matter of belief. Lect Notes Comput Sci 6414:81–89
29. Hiolle A, Cañamero L, Ross M, Bard K (2012) Eliciting caregiving behavior in dyadic human-robot attachment-like interactions. ACM Trans Interact Intell Syst 2(1):3:1–3:24
30. Johnson M, Posner MI, Rothbart MK (1991) Components of visual orienting in early infancy: contingency learning, anticipatory looking, and disengaging. J Cognit Neurosci 3(4):335–344
31. Johnston-Brooks CH, Lewis MA, Garg S (2002) Self-efficacy impacts self-care and HbA1c in young adults with type I diabetes. Psychosom Med 64(1):43–51
32. Kaplan F, Hafner VV (2004) The challenges of joint attention. In: Berthouze L, Kozima H, Prince CG, Sandini G, Stojanov G, Metta G, Balkenius C (eds) Proceedings of the fourth international workshop on epigenetic robotics, Lund University Cognitive Studies, vol 117, pp 67–74
33. Leite I, Pereira A, Castellano G, Mascarenhas S, Martinho C, Paiva A (2012) Modelling empathy in social robotic companions. Adv User Model 7138:135–147
34. Lewis M, Cañamero L (2014) An affective autonomous robot toddler to support the development of self-efficacy in diabetic children. In: Proceedings of the 23rd annual IEEE international symposium on robot and human interactive communication (IEEE RO-MAN 2014), pp 359–364
35. Lewis M, Oleari E, Pozzi C, Cañamero L (2015) An embodied AI approach to individual differences: supporting self-efficacy in diabetic children with an autonomous robot. In: Tapus A, André E, Martin JC, Ferland F, Ammi M (eds) Proceedings of the 7th international conference on social robotics (ICSR-2015). Springer, Paris, pp 401–410
36. Mead R, Atrash A, Matarić MJ (2011) Proxemic feature recognition for interactive robots: automating metrics from the social sciences. Soc Robot 7072:52–61
37. Meerum Terwogt M, Olthof T (1989) Awareness and self-regulation of emotion in young children. In: Saarni C, Harris PL (eds) Children’s understanding of emotion. Cambridge University Press, New York, pp 209–237
39. Mohammad Y, Nishida T (2015) Why should we imitate robots? Effect of back imitation on judgment of imitative skill. Int J Soc Robot 7(4):497–512
40. Mumm J, Mutlu B (2011) Human-robot proxemics: physical and psychological distancing in human-robot interaction. In: Proceedings of the 6th international conference on human-robot interaction (HRI’11), ACM, New York, pp 331–338
41. Nadel J, Simon M, Canet P, Soussignan R, Blancard P, Cañamero L, Gaussier P (2006) Human responses to an expressive robot. In: Proceedings of the 6th international workshop on epigenetic robotics. Lund University Cognitive Studies, pp 79–86
42. Nalin M, Baroni I, Kruijff-Korbayova I, Cañamero L, Lewis M, Beck A, Cuayáhuitl H, Sanna A (2012) Children’s adaptation in multi-session interaction with a humanoid robot. In: IEEE RO-MAN 2012, pp 351–357
45. Prepin K, Gaussier P (2010) How an agent can detect and use synchrony parameter of its own interaction with a human? In: Esposito A, Campbell N, Vogel C, Hussain A, Nijholt A (eds) Development of multimodal interfaces: active listening and synchrony. Lecture notes in computer science, vol 5967. Springer, Berlin, pp 50–65
46. Salter T, Dautenhahn K, te Boekhorst R (2006) Learning about natural human-robot interaction styles. Robot Auton Syst 54(2):127–134 (Intelligent Autonomous Systems: 8th conference on intelligent autonomous systems, IAS-8)
47. Sanghvi J, Castellano G, Leite I, Pereira A, McOwan PW, Paiva A (2011) Automatic analysis of affective postures and body motion to detect engagement with a game companion. In: Proceedings of the 6th ACM/IEEE international conference on human-robot interaction (HRI 2011), IEEE, pp 305–311
48. Scassellati B (1999) Imitation and mechanisms of joint attention: a developmental structure for building social skills on a humanoid robot. In: Nehaniv CL (ed) Computation for metaphors, analogy, and agents. Lecture notes in computer science, vol 1562. Springer, Berlin, pp 176–195
49.
50. Stegge H, Meerum Terwogt M (2007) Awareness and regulation of emotion in typical and atypical development. In: Gross JJ (ed) Handbook of emotion regulation. The Guilford Press, New York, pp 269–286
51. Thompson RA, Meyer S (2007) Socialization of emotion regulation in the family. In: Gross JJ (ed) Handbook of emotion regulation. The Guilford Press, New York, pp 249–268
52. Wada K, Shibata T, Saito T, Sakamoto K, Tanie K (2005) Psychological and social effects of one year robot assisted activity on elderly people at a health service facility for the aged. In: Proceedings of the 2005 IEEE international conference on robotics and automation (ICRA 2005), IEEE, pp 2785–2790