Open Access. Published by De Gruyter, November 26, 2020. CC BY 4.0 license.

A probabilistic evaluation of human activity space for proactive approach behavior of a social robot

  • Chapa Sirithunge, H. M. Ravindu T. Bandara, A. G. Buddhika P. Jayasekara and D. P. Chandima

Abstract

Intelligent robot companions contribute significantly to improving the living standards of people in modern society. Therefore, humanlike decision-making skills are sought after in the design of such robots. On the one hand, such features enable the robot to be handled easily by its human user. On the other hand, they give the robot the capability of dealing with humans without disturbing them by its behavior. Perceiving the behavioral ontology of a situation prior to an interaction is an important aspect in this regard. Furthermore, humans make an instant evaluation of the task-related movements of others before approaching them. In this article, we present a mechanism that monitors how the activity space is utilized by a particular user on a temporal basis, as an ontological assessment of the situation. This evaluation is then used to determine an appropriate approach behavior for a proactive robot to initiate an interaction with its user. The usage of activity space varies depending on the task of an individual. We used a probabilistic approach to find the areas that are the most and least likely to be occupied within the activity space of a particular individual during various tasks. As the robot approaches its subject only after analyzing the subject's spatial behavior within his/her activity space, the spatial constraints that would otherwise be imposed on the robot's movement can be reduced. Hence, a more socially acceptable spatial behavior could be observed from the robot. In other words, an etiquette-based approach behavior is derived by considering the user's activity space. Experimental results used to validate the system are presented, and critical observations made during the study and their implications are discussed.

1 Introduction

Social robots not only work with us in collaborative workspaces but also accompany us into personal settings such as homes and health care [1,2]. Therefore, perceiving social settings as humans do is an emerging requirement with the rise of service robots in social domains. When robots are deployed in social environments, they are expected to abide by society's unspoken social rules [1]. Respecting human personal space is one such rule. Proxemics has been identified as a personality trait of robots, and people's reactions to changes in personal space differ across situations [3]. Familiarity between people reduces the personal space, and this remains the same for human–robot interaction (HRI), according to the study in ref. [4]. Displaying appropriate proxemic behavior has several aspects, such as following societal norms and establishing psychophysical distancing with people [5]. This further elaborates that human proxemic behavior is shaped by the individual's psychophysical closeness to the other. These two aspects have to be perceived in order to facilitate effective HRI. Personality attributes rank highly among the attention-seeking features and behaviors displayed by robots [6]. Furthermore, humans prefer robots with humanlike personality attributes in human–robot domains. On such occasions, robots are expected to follow simple etiquettes, or "robotiquettes," that humans adhere to in social environments [7].

As modern robots are expected to perceive the uncertainties of the world without supervision, limited access to prior knowledge has become an obstacle to better performance. To overcome this challenge, robotic systems have to grasp knowledge about specific scenarios, individual differences, and task-related or environment-specific uncertainties. Therefore, novel methods for perceiving these uncertainties and dynamic changes are required. A hierarchical analysis for comprehending such information from motion is presented in ref. [8], in which expressive, individual features of motion were analyzed. Yet encoding such features into human-friendly signatures remains a challenge and is equally important for the advancement of social robotics. The authors of ref. [8] made an effort to encode such features to solve tasks in selected human activities.

In order to respect the space acquired by a human [9], his/her behavior has to be observed. An activity space is formed by the human while engaged in a certain activity [10]. Therefore, the activity space can be used as an indicator of the physical behavior of an individual. Activity space is a major domain in the ergonomic study of humans during work [11,12]. Furthermore, the activity space is utilized differently during different activities. Thus, the activity space can be used as an observable cue to monitor human behavior and hence to determine appropriate proxemic behavior for interaction. Even though such ergonomic monitoring of human behavior is popular in the health sector, it is rarely used in designing the personality attributes of robots [13,14].

This study presents how the proxemic behavior of a social robot is determined through a probabilistic evaluation of the activity space of a person. Through this work, we address the problem of formulating a relationship between task-related human behavior and appropriate proxemics for his/her robot companion. This work includes an analysis of the usage of local areas within the activity space to determine probable areas from which to approach a human without causing any disturbance to that person. It thereby demonstrates a design space for proxemics development in social robotics.

2 Related work

Studies have been conducted to discover how social cues displayed by robots are interpreted by humans. Among these cues, those associated with proxemic behavior were found to significantly affect human perception of a robot's social presence and the emotional state of the encounter [15]. On the other hand, proxemics is one means of improving a robot's perception of its environment, in which the robot manipulates the interpretation of distance [16,17]. According to these studies, proxemic behavior has many emotional and psychophysiological aspects, such as gender, social norms, and personality. Therefore, such aspects have to be taken into consideration before long-term interaction. For example, according to the study in ref. [18], the social presence of the robot was more appealing to humans when its behavior was determined by gaze and proxemic aspects rather than by simply following its user. In ref. [19], it is explained how the rapport between a robot and its user is shaped through the robot's perception of proxemic behavior. It can therefore be seen how important it is to maintain appropriate proxemics during HRI, especially during robot-initiated interaction, when a user is least expecting the robot. According to ref. [20], proxemic behavior falls under an interdisciplinary taxonomy of social cues and signals in the service of engineered social intelligence in robots.

A framework based on affective spaces to model personality and affect in behavior-based autonomous systems is proposed in ref. [21]. However, the evaluation of affect was not used for any proxemic behavior of the robot. According to the experiment conducted in ref. [22], robotic social cues such as proxemic behavior have an effect on interpersonal attributions during HRI. Therefore, from the perspective of humans, consideration of proxemics by any agent in a social domain is important. A set of feature representations for analyzing human spatial behavior (proxemics), motivated by metrics used in the social sciences, is given in ref. [23]. There, a hidden Markov model was trained to recognize spatiotemporal behaviors that signify transitions into and out of a social interaction, i.e., the initiation and termination of the interaction. Such methods do not cover smaller movements, such as those of the hands or elbows, although most tasks involve such movements.

Umbrico et al. [24] present a cognitive approach for robots to personalize assistive tasks. This involves the integration of holistic knowledge contexts and perspectives to be used in adaptation. The approach autonomously recognizes different situations, profiles users according to their requirements, and decides upon appropriate tasks and how they are executed. A number of internal and external elements associated with the robot were taken into consideration in this approach. An ontological perceptive system takes observations of an "event" to determine the set of activities to be performed by the robot. Human anatomy and psychophysiological aspects have been taken into consideration when building the semantics of an event. These aspects have been used for monitoring adults' health but are not adequately utilized in cooperative human–robot collaboration in social environments. Therefore, to fill that gap, friendly robots that are aware of social norms and human behavior are required.

The work in ref. [25] puts forward a model for communication between human–robot doubles in public space. This work introduces a functional structure of cognitive agents, according to which knowledge mastering and the imitation of activity and behavior models play an important role in human–robot doubles. Khoshhal et al. [26] describe an approach to perceiving physical human behavior based on power spectrum-based feature extraction. Their probabilistic Laban movement analysis takes a sample of 3D acceleration data from six body parts, namely, the head, center, right hand, left hand, right foot, and left foot, to derive the behavior of the human subject by means of his/her actions. Even though various systems are available to derive human behavior, only limited work uses such findings to generate approach responses. Furthermore, existing work mainly focuses on continuing an interaction between a human and a robot once the interaction has started, not on the concerns prior to an interaction. Therefore, similar systems for behavior monitoring prior to an interaction are required.

A probabilistic framework for proxemic control in mobile HRI is used in ref. [27]. A set of sensory features experienced by an agent (robot/human) was evaluated to determine values for the interpersonal distance and the angle of orientation between a robot and its user. This requires inputs from different places in the setting, which can sometimes be inconvenient due to physical and technical constraints. A spatial approach for reaching a walking person has been proposed in ref. [28]. "User unaware" failures and confusion regarding the scenario were reduced by these two approaches. However, random human behaviors that were not preobserved by the system cannot be perceived by this method.

A proxemic planner based on user behavior was implemented in ref. [29]. This method involved decision-making regarding the approach direction and mutual distance based on an evaluation of the user's movements. The movements considered were the distance from the spine vertical to the farthest body joint and the highest joint speed recorded within the period of observation. These two variables were used as inputs to a fuzzy system, and the output determined the interpersonal distance. The approach direction was determined only as right or left, and the user behavior had no effect upon the robot's orientation. The positioning of body joints, which is important in approaching a person without invading his/her personal space, was not considered in this mechanism. Hence, the usage of activity space was considered there only in part.

The wrist plays a major role in most activities performed by humans [30]. In addition, it is a highly dynamic component during activities. Therefore, we selected wrist movements as an important observation in monitoring human behavior. We present a geometry-based evaluation of the wrist movements of a human in order to predict that person's movements and determine an appropriate approach behavior for a robot that intends to initiate an interaction. It was necessary to monitor the activity of the wrist joint because, according to ergonomics, its dynamic nature means it can be farthest from the human body during activities. Unlike most present methods, in this method a robot observes its user for a specific duration before making decisions regarding its approach behavior. The approach direction, orientation, and mutual distance to be kept are together referred to as the "approach behavior" in this method.

3 Analyzing activity space

3.1 Task vs space

Activities involve various movements of body parts. Of these, the hands are the most frequently used in most tasks. Hand movements also vary depending on the handedness of the person performing the task. Figure 1 illustrates four occasions encountered while making a call. The positions of the two wrist joints in space were different on each occasion. As in this example, the positioning of major body parts will differ depending on the activity the user engages in. Therefore, the spatial behavior of certain body joints can be used as a mediator to monitor human behavior. Among these, the wrist plays a dynamic role in most human activities and is a strong cue for understanding nonverbal communication [30,31].

Figure 1

An example situation in which an individual is making a call. The points in space to which the right and left wrists move within a period are marked in (a) to (d). Variation in the positions of two joints; right wrist ( H R ) and left wrist ( H L ) are joined for the ease of comparison of variation. Shown in yellow and blue are the positions of right and left wrist, respectively.

In our approach, wrist movements are observed for an adequate period of time, and the behavior is evaluated as a whole. To analyze a set of observed points within a single space, the space around the user is divided into smaller regions, and the observed points are analyzed in clusters. The concept of clustered points within the activity space was developed in a previous stage of this research [32].

In this approach, the front and side personal spaces of the human are divided into subspaces called "zones," as shown in Figure 2. To increase the accuracy of the evaluation, 50 zones were identified. On the one hand, when the number of zones is high, the capability of tracking even smaller movements increases. On the other hand, handling a large number of variables (zones) for decision-making is cumbersome. Therefore, a total of 50 zones was selected for the study as a compromise. The length of the personal space is divided into five regions: "far right," "right," "mid," "left," and "far left." Areas that do not fall under these five labels belong to zone "51," which is defined later in this paragraph. Similarly, the height is divided into levels from "level 1" to "level 5," and the width is divided into "front" and "far front." These divisions are shown in Figure 2(a) and (b) and together make 50 zones around the activity space; the space excluding all 50 zones belongs to zone 51. For example, the label (level 1, far right, front) describes zone 1, and (level 5, far left, far front) gives zone 50. A minimal sketch of how such a grid could be indexed is given below.
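For concreteness, the sketch below shows one way such a zone grid could be indexed in code. Only the corner assignments of zones 1 and 50 are fixed by the text above; the enumeration order of the remaining zones, and all grid dimensions other than the 0.2 m depths of the front and far front regions, are our assumptions (the actual grid is scaled to individual body parameters, as described in Section 3.2).

```python
# Minimal sketch: mapping a wrist position (in metres, in a user-centred
# frame) to one of the 50 activity zones, or to zone 51 when the point
# falls outside the grid. Boundary values other than the 0.2 m region
# depths are illustrative assumptions.

def zone_of(x, y, z, half_width=0.5, height=1.8):
    """Return a zone id in 1..51 for a point (x, y, z) relative to the user.

    x: lateral offset (negative = user's right), y: height above the
    floor, z: forward offset from the user's body centre line. The
    half_width and height defaults stand in for the body-scaled
    dimensions of Figure 4.
    """
    # Points outside the tracked box belong to zone 51.
    if not (-half_width <= x <= half_width
            and 0.0 <= y <= height
            and 0.0 <= z <= 0.4):
        return 51
    region = min(int((x + half_width) / (2 * half_width / 5)), 4)  # far right .. far left
    level = min(int(y / (height / 5)), 4)                          # level 1 .. level 5
    depth = 0 if z <= 0.2 else 1                                   # front / far front
    # Assumed enumeration: zone 1 = (level 1, far right, front),
    # zone 50 = (level 5, far left, far front), counted region-first.
    return depth * 25 + level * 5 + region + 1
```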

Figure 2

Division of activity space into regions and levels. (a) Front view of the activity space and (b) side view of the front space.

Hands are the most utilized body parts in many day-to-day activities. The handedness of a particular individual determines the extent to which he/she uses each zone during a certain task. Therefore, the activity zones occupied by the right and left hands during a certain period were monitored in this study. In the original research, this process of tracking the zones occupied by both hands during the period of observation, t, was called the "Activity Space Analyzer (ASA)." In ASA, all the zones occupied by a specific body joint within a period of observation are recorded. For instance, consider a walking person. The zones recorded during the walk according to the ASA were as follows.

  • Right wrist = {11, 51}

  • Left wrist = {14, 7}

This idea is illustrated in Figure 3(a). In that example, the zones recorded for a single hand were 20 and 51. These zones were visited at least once during the period of observation; the rest of the zones were not occupied.

Figure 3

How (a) ASA and (b) QASA record data for analysis is shown. Here in (a), 1 and 0 represent whether the corresponding zone is visited or not visited, respectively.

3.2 Quantitative ASA (QASA)

Sometimes, the same zone will be occupied a number of times, depending on the type of activity. This is common in repetitive tasks. In QASA, the frequency of visits recorded for each zone is considered, in contrast to ASA, where only the visited zones are considered, irrespective of how many times a particular zone is occupied. This is illustrated in Figure 3(b). The frequency of visits to each zone is counted at the end of the period, and the percentage frequency is calculated accordingly. In that example, of all the visits, zone 20 was visited 48% of the time and zone 51 was occupied 52% of the time. The frequency of obtaining data from a particular user is given in Section 5. The frequency of visits can be used to differentiate behaviors in tasks that occupy the same set of zones but at different frequencies. This approach further allows the robot to give less priority to random movements, which may record farther zones only a few times.

During the period of observation, t, a person might move both hands into similar zones repeatedly. Sometimes, he/she might not move the hands at all for the entire time. The zone containing the subject's wrist was recorded in each time unit; in this way, a record of the wrist positions, by means of zones, was kept throughout t. In both scenarios, a single zone can be visited more than once. All the zones visited throughout the period of observation are considered in ASA. At the end of t seconds, the number of visits to each zone by the two wrist joints (the frequencies) is calculated separately in QASA, as in Figure 3(b). The dimensions of the zones were chosen by trial and error to increase the accuracy of identifying distinct movements. These dimensions do not depend on the whole-body movement of the person, as the zones are positioned relative to the lengths between specific body joints, such as the right and left shoulder joints. The body joints and dimensions considered are illustrated in Figure 4. The front and far front regions are each 0.2 m wide and are measured toward the front of the individual. The other fixed lengths were chosen so that even movements made over a small range could be tracked. A sketch of this bookkeeping is given below.
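A minimal sketch of this ASA/QASA bookkeeping for a single joint is given below, assuming the hypothetical zone_of helper from the previous sketch supplies zone ids once per sample; the class and method names are ours.

```python
from collections import Counter

class QASARecorder:
    """Sketch of ASA/QASA bookkeeping for one wrist joint.

    ASA keeps only the set of zones visited during the observation
    period t; QASA additionally counts how often each zone was visited.
    """

    def __init__(self):
        self.visit_counts = Counter()  # zone id -> number of samples

    def record(self, zone_id):
        # Called once per sample, e.g. five times per second during t.
        self.visit_counts[zone_id] += 1

    def asa(self):
        # ASA view: the set of zones visited at least once.
        return set(self.visit_counts)

    def qasa(self):
        # QASA view: percentage of samples spent in each visited zone.
        total = sum(self.visit_counts.values())
        return {z: 100.0 * n / total for z, n in self.visit_counts.items()}
```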

Figure 4

Dimensions and joints used to separate zones are shown in the skeletal diagram. The distance between right and left shoulder joints is denoted as l and the height from the floor to face joint is denoted as h. Values used for length and width are marked.

3.3 Probabilistic analysis

After the frequency of visits to each zone is calculated at the end of t seconds, the probability of each zone being visited during the same activity under the same conditions – the probability of occupancy – is calculated using equation (1).

(1) \( P_{\mathrm{zone}\_i} = \dfrac{f_{\mathrm{zone}\_i}}{\sum_{j=1}^{51} f_{\mathrm{zone}\_j}} \)

where i = 1, 2,…,51. Here P_zone_i is the probability of occupancy of zone i at the end of t seconds, and f_zone_i and f_zone_j denote the frequencies of visits to zones i and j, respectively, during t. The values obtained for P_1 to P_51 are compared before making interaction decisions. Often, only a few zones are visited during a certain task, so the probability for the rest of the zones will be zero. Figure 5 shows the probabilities of occupancy for each zone visited while the user was standing. In this scenario, zones 11, 36, and 51 were occupied for 56, 23, and 21% of the time, respectively. Hence, P_zone_11 = 0.56, P_zone_36 = 0.23, and P_zone_51 = 0.21 according to (1). This probability captures the tendency of the user to occupy the same zone again during the task.
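Equation (1) is a plain normalization of the QASA frequencies. The sketch below reproduces it, using the standing example of Figure 5 as a check; the absolute counts are illustrative, since only the percentages are reported.

```python
def occupancy_probabilities(visit_counts):
    """Equation (1): P_zone_i = f_zone_i / sum of f_zone_j over j = 1..51.

    visit_counts maps zone id -> frequency of visits during t. Zones
    that were never visited simply receive probability zero.
    """
    total = sum(visit_counts.values())
    if total == 0:
        return {}
    return {zone: f / total for zone, f in visit_counts.items()}

# Standing example of Figure 5: zones 11, 36, and 51 occupied for
# 56%, 23%, and 21% of the samples (counts here are illustrative).
probs = occupancy_probabilities({11: 56, 36: 23, 51: 21})
assert abs(probs[11] - 0.56) < 1e-9
```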

Figure 5

Approach of QASA. It records the zones each joint occupies and finally calculates the percentage visit to each zone. The observation was made upon an individual who was standing, relaxed.

4 Decision-making criteria

The movements of an individual are recorded in the form of the probabilities calculated using (1), for ease of visual or quantitative comparison.

The distance between the robot and the user, d, is calculated from the numerical value of the highest probability of occupancy after observation. The equations used to calculate this distance are shown in (2) and (3). It is assumed that a minimum gap of 1.0 m has to be maintained in order to follow social norms; this marginal value for the interpersonal distance, 1.0 m, was chosen as in ref. [33]. If i is the zone with the maximum probability of occupancy and P_zone_i is the corresponding probability of occupancy,

then, if the highest occupancy is in the front region,

(2) \( d = (1.0 + P_{\mathrm{zone}\_i} \times 1.0)\ \mathrm{m} \)

and, if the highest occupancy is in the far front region,

(3) \( d = (0.2 + 1.0 + P_{\mathrm{zone}\_i} \times 1.0)\ \mathrm{m} \)

These equations were formulated so that an adequate mutual distance is maintained and the highly occupied zones are disturbed as little as possible.

The orientation of the robot after approaching the user is calculated in a similar manner. The orientation of the robot, θ, on two occasions is shown in Figure 6. In ordinary scenarios, maintaining an orientation close to 90° is rare. Therefore, we chose a range from 0° to 80° for θ:

(4) \( \theta = P_{i\_\max} \times 80^{\circ} \)

Here P_i_max is the highest probability of occupancy recorded during observation. This allows the robot to maintain an orientation proportional to the highest probability of occupancy but directed away from the most obstructed areas around the user. As the robot positions itself on the side (right/left) of least occupancy, it faces toward the zone with the highest occupancy from the side with the least occupancy. This is similar to human behavior when dealing with somebody who is highly engaged: the outsider remains cautious about the movements of the other party while continuing the interaction. The higher the occupancy of a region, the greater the robot's repulsion from that region, and the more the robot tends to choose the opposite region. θ is measured clockwise or anticlockwise from the horizontal drawn at 0° head orientation, as marked in Figure 6. Equation (4) scales the angle of deviation with respect to the occupancy; hence, the robot deviates far from highly occupied areas so as to disrupt the user the least. Using (2) and (3), (4), and Algorithm 1, the robot identifies the mutual distance, orientation, and approach direction, respectively, by considering the user behavior. In this way, the robot follows an etiquette-based behavior that respects its user's personal space. A sketch of these calculations follows.
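Continuing the Python sketches above, (2)-(4) translate directly into code; the DES row of Table 2 serves as a worked check.

```python
def approach_distance(p_max, far_front):
    """Equations (2) and (3): mutual distance d in metres.

    The 1.0 m social minimum [33] is increased in proportion to the
    highest occupancy probability; a further 0.2 m is added when the
    most occupied zone lies in the far front region.
    """
    d = 1.0 + p_max * 1.0
    return d + 0.2 if far_front else d

def approach_orientation(p_max):
    """Equation (4): orientation theta in degrees, within 0-80."""
    return p_max * 80.0

# Worked check against the DES row of Table 2: P_max = 0.76 in a front
# zone gives d = 1.76 m and theta = 60.8 degrees (reported as 61).
assert abs(approach_distance(0.76, far_front=False) - 1.76) < 1e-9
assert round(approach_orientation(0.76)) == 61
```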

Figure 6

The top view of the HRI scenario is shown. The orientation of the robot is marked as θ . θ is measured clockwise (if reaching user from his/her right) or anticlockwise (if reaching user from his/her left) from the vertical.

The criteria used for making major decisions regarding the approach direction, orientation, and mutual distance to keep between the user and robot are given in Algorithm 1.

When choosing the direction from which to approach the user according to the algorithm, a maximum occupancy in the far right region leads the robot to approach the user from the left. Similarly, if the maximum occupancy is recorded in the right, mid, left, or far left region, the robot approaches the user from the far left, far right, far right, or right, respectively; when the maximum falls in zone 51, the robot approaches from the right. An additional gap equal to the depth of the far front region (0.20 m) is kept if the highest number of movements was recorded in the far front region, as in (3). In this way, the robot avoids the highly engaged areas within the user's personal space, which makes the interaction adaptive and situation-cautious. A sketch of this mapping, as we read it, is given below.
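Since Algorithm 1 itself is not reproduced here, the table-driven sketch below encodes the direction rule as we read it from the prose above, cross-checked against the approach directions reported in Table 2; the zone 51 case in particular is our interpretation.

```python
# Approach-direction rule of Algorithm 1, as read from the prose and
# from Table 2 (e.g. LAB/CK: max in "left" -> approach from far right).
APPROACH_FROM = {
    "far right": "left",
    "right": "far left",
    "mid": "far right",
    "left": "far right",
    "far left": "right",
    "zone 51": "right",  # assumption based on the text above
}

def approach_direction(max_occupancy_region):
    """Return the side from which the robot approaches the user."""
    return APPROACH_FROM[max_occupancy_region]
```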

5 Results and discussion

5.1 Research platform

Experiments were conducted with the participation of 21 individuals aged 25–50 years (mean of 26.3 and standard deviation of 8.96). A service robot platform called MIRob was used for the implementation of the proposed system. MIRob comprises a Pioneer 3DX mobile robot platform fitted with a Cyton Gamma 300 manipulator for assistive tasks and a Kinect sensor for vision. The skeletal representation of the tracked body was extracted as the 3D coordinates of feature points from the Kinect. The experiment was carried out in an artificially created domestic environment.

5.2 Previous experiments

We conducted a human study on the nature of robot-initiated HRI in ref. [34]. We considered whether a robot could comprehend observable, nonverbal human cues prior to a virtual social behavior of the robot in a human–robot encounter. During the study, we found how important the robot's perception of the spatial constraints of an encounter is for an interaction to be less disturbing, or not disturbing at all, to users. In that study, three human cues, namely, the speed and positioning of selected body joints and the areas of space frequently occupied by those joints, were considered to generate appropriate responses toward a particular person. That work considered the movement of the elbow and ankle joints in addition to the wrist joint considered in our current work. For simplicity, the space around a user was divided into three parts: right, center, and left. A decision grid was created in order to define which spatial and verbal responses of the robot matched the observations well. The robot reaches its user from the least obstructed side (out of the right, center, and left regions), keeping an interpersonal distance calculated according to (5).

(5) \( \text{Interpersonal distance} = 1.2\ \mathrm{m} + \text{maximum fanning of a joint observed in the front region} \)

Here, "fanning" refers to the distance from the center line of the body to the considered joint along one coordinate axis. It is measured along the z direction, which extends from the center line toward the front region of the user. In addition to proxemics, the robot chose a conversational preference such as no interaction at all, greeting, delivering a service, or having a small talk.

These responses were generated by means of a Wizard-of-Oz experiment, considering how dynamic (inclined toward a busy state) or static (inclined toward a relaxed state) a person was. The same social robot used in this work was used to conduct the study; the platform was visually capable and equipped with a microphone and a speaker to listen and respond to users. A set of tasks was selected for the users, since their responses depend on the priority each user gives to the task. The robot was remotely navigated toward a user during observation and interaction.

Results of the experiment confirmed that the spatial and verbal behaviors of the robot were more socially acceptable when based on the situation awareness gained by observing human cues. In our set of experiments, situation awareness is confined to the behavior of the user, the robot, and the spatial constraints. It was observed that proxemic decisions taken so that highly engaged areas were least obstructed received higher feedback scores from users than a static approach behavior.

Furthermore, it could be seen that the decisions made by such adaptive systems substantially agree with those of the user. The experiment covered physical, social, and emotional aspects of human behavior using a limited number of cues. We extended this contribution by implementing such a robotic system to successfully engage with users without violating their expectations.

Furthermore, we tried to embed nonverbal-behavior-based situation awareness in the approach behavior of the robot. Therefore, the proxemic-cautious, lively behavior of the robot was replicated in our current set of experiments. For that, we selected only the critical joints of the human body that contribute significantly to nonverbal behavior, and the verbal responses generated by the robot were omitted, as we consider only the robot's approach behavior prior to interaction. The age and gender of users were not considered within the scope of this study.

5.3 Experiment

Tasks encountered in typical domestic and laboratory environments were selected for the study. The list of selected tasks is shown below; the task names are shortened for ease of future reference.

  1. DES – Engaged in desk activity

  2. CLN – Cleaning floor

  3. EXR – Exercising

  4. WTV – Watching television

  5. SEA – Seated, relaxing

  6. LAB – Engaged in lab work

  7. PHN – Making a phone call

  8. LAP – Working on laptop

  9. CK – Cooking

  10. LEA – Leaning against a wall

Each participant was asked to perform the ten tasks mentioned above; hence, 210 different scenarios were recorded during the experiment. First, MIRob was allowed to approach each user from direction A in Figure 6, where the orientation is 0° and d = 1.2 m. Then MIRob was allowed to observe each individual once the activity had started. After evaluating the user behavior according to the model, the robot navigated toward the user keeping the orientation, mutual distance, and approach direction determined by the proposed model. This approach behavior of the robot was rated by the participant in comparison with the approach from direction A in Figure 6; hence, approaching the user from the front as in A serves as the ground truth in this experiment. Users rated the robot's behavior based on the convenience or discomfort they felt as the robot approached. Here the period of observation, t, was taken as 10 s. This value was chosen so that an adequate amount of data was obtained for the analysis. Visual information, comprising the positions of the user's body joints, was extracted at a rate of five sets of data per second. The face orientation was obtained at the first instance only, in order to keep the algorithm from becoming computationally intensive. Proxemic decisions were taken so that highly engaged areas were least obstructed. To evaluate the performance of the system, we conducted the same experiment two more times, with gaps of 7 and 14 days, and the feedback scores received at each stage were analyzed. The robot was allowed to wander around in a given map; it stopped and started the observation once a human was tracked. A sketch of this observation stage is given below.
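Putting the earlier sketches together, the 10 s observation stage could look roughly as follows; get_wrist_positions is a hypothetical callback standing in for the Kinect skeleton stream, and QASARecorder and zone_of come from the sketches in Section 3.

```python
import time

OBSERVATION_PERIOD_S = 10  # t = 10 s, as used in the experiment
SAMPLE_RATE_HZ = 5         # five sets of joint data per second

def observe_user(get_wrist_positions):
    """Sample both wrists at 5 Hz for t seconds and accumulate zone
    visits for the QASA analysis. Returns one recorder per wrist.
    """
    right, left = QASARecorder(), QASARecorder()
    for _ in range(OBSERVATION_PERIOD_S * SAMPLE_RATE_HZ):  # 50 samples
        (rx, ry, rz), (lx, ly, lz) = get_wrist_positions()
        right.record(zone_of(rx, ry, rz))
        left.record(zone_of(lx, ly, lz))
        time.sleep(1.0 / SAMPLE_RATE_HZ)
    return right, left
```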

5.4 Observations and discussion

The zones recorded on a single occasion and the calculated probabilities for each zone during each task are shown in Table 1. Only the zones with a nonzero frequency of visits are given. The zone with the highest probability of occupancy was chosen, and the distance from the user was calculated according to the decision-making criteria. Figure 7 shows the distances calculated for the ten tasks in Table 1, considering the highest probability of occupancy. Except for the zones given in Table 1, all other zones received a zero occupancy probability, since they were never occupied during the activity. After selecting P_i_max for each task, the approach direction, orientation, and distance between the user and the robot were calculated. The results obtained for each occasion are given in Table 2, together with the mean feedback score received from the users at their first encounter with the robot. The position and orientation of the robot with respect to its user after implementing the model are marked in Figure 7; shown in dotted and dashed lines are the restricted areas, including the region with the maximum occupancy. Figure 8 shows the implementation of the model during PHN, as given in Table 2.

Table 1

The set of zones and their corresponding probabilities of occupancy obtained for each task

Task (i, P_zone_i)
DES (15, 0.76) (14, 0.24)
CLN (11, 0.01) (12, 0.03) (13, 0.39) (14, 0.19) (15, 0.1) (16, 0.05) (20, 0.06) (36, 0.04) (37, 0.08) (45, 0.05)
EXR (11, 0.06) (13, 0.05) (14, 0.01) (15, 0.06) (16, 0.32) (17, 0.11) (18, 0.05) (19, 0.11) (20, 0.23)
WTV (11, 0.07) (32, 0.1) (35, 0.02) (36, 0.67) (42, 0.09) (45, 0.05)
SEA (11, 0.04) (32, 0.12) (36, 0.76) (37, 0.03) (42, 0.05)
LAB (7, 0.1) (8, 0.11) (13, 0.3) (14, 0.34) (15, 0.1) (38, 0.05)
PHN (17, 0.12) (18, 0.18) (19, 0.50) (22, 0.03) (23, 0.17)
LAP (14, 0.20) (15, 0.80)
CK (7, 0.14) (8, 0.15) (14, 0.30) (15, 0.06) (38, 0.08) (39, 0.04)
LEA (11, 0.50) (20, 0.50)
Figure 7

Distances calculated considering the maximum occupancy regions of the selected set of tasks are marked here. The red triangle denotes the position of the user, and the other colored triangles denote the position and orientation of the robot during each task. The size of each triangle is not scaled to the size of the user or robot, but the orientation is drawn by measuring θ. Colored contours represent the restricted areas that the robot must not invade while approaching its user.

Table 2

Results of ten scenarios during the experiment and average feedback score received for each task

Task | P_i_max, region | Approach direction | d (m) | Orientation (°) | Average feedback score (of 10)
DES | 0.76, far left | Right | 1.76 | 61 | 8.17
CLN | 0.39, mid | Far right | 1.39 | 31 | 9.31
EXR | 0.32, far left | Right | 1.32 | 26 | 9.33
WTV | 0.67, far right | Left | 1.87 | 54 | 5.83
SEA | 0.76, far right | Left | 1.96 | 61 | 6.07
LAB | 0.34, left | Far right | 1.34 | 27 | 8.31
PHN | 0.50, right | Far left | 1.5 | 40 | 9.50
LAP | 0.80, far left | Right | 1.8 | 64 | 9.05
CK | 0.53, left | Far right | 1.53 | 42 | 9.30
LEA | 0.50, far right | Left | 1.5 | 40 | 8.33
Figure 8

Two occasions encountered during PHN in Table 2. (a) Positions of the robot and the user initially as the robot observes the user. (b) Positions of the robot and user after approaching the user. The orientation and the interpersonal distances during these two occasions are marked.

According to Tables 1 and 2, while the user was performing the desk activity DES, zone 15, which is in the far left region, recorded the maximum probability of occupancy. Therefore, after applying Algorithm 1, the approach direction was obtained as right. According to (2), d was calculated as (1.0 + 0.76 × 1.0) m, and, similarly, the orientation was 0.76 × 80°. Hence, the figures for d and θ were 1.76 m and 61°, respectively. A few occasions encountered during the experiment are illustrated in Figure 8. The robot's behavior in this situation received an average feedback score of 8.17 out of 10. The reason for the reduction of about 2 marks was that the distance the robot kept was too large if the user wanted to ask the robot for a service or talk to it. In all the other instances except WTV and SEA, the behavior of the robot received feedback scores above 8. In WTV, the distance kept by the robot was too large according to the user: while watching TV, the user preferred to speak only a few words to the robot or ask for some service, and the distance was too large for both preferences. During SEA, the user was relaxed and therefore liked to interact with the robot for a longer duration; however, as the robot was nearly 2 m away, the occasion reduced the user's inclination toward a conversation.

The feedback scores received when the experiment was repeated 7 and 14 days after the first experiment are given in a box-and-whisker plot in Figure 9. We intended to analyze how user acceptance of the behavior of the robot evolves with experience. The mean values show an increase over time as the users gained experience of this user-aware behavior of the robot. This can clearly be seen in the tasks DES, WTV, SEA, LAB, and LAP. CLN, EXR, PHN, and CK recorded high feedback scores from the beginning, and therefore only a slight difference was observed among the scores received for these tasks afterward.

Figure 9

This graph shows the mean values of feedback scores received from users for the acceptance of robot’s behavior in the three occasions: initial stage, after 7 days of initial stage, and after 14 days of initial stage. The error bars represent the minimum and maximum scores received.

In DES, the feedback scores increased from 8.17 to 8.36 and then to 8.50, an overall gain of 0.33 points. The pattern continued in a similar manner in LAB and LEA, with overall increments of 0.45 and 0.38. CLN, EXR, LAP, and CK recorded slight improvements in the overall scores but still remained the tasks that received the highest scores for the robot's behavior; these four tasks improved their feedback scores by 0.19, 0.14, 0.21, and 0.07 points, respectively. Even though PHN recorded an overall decrement of 0.20 points, it still obtained feedback scores above 9 on all three occasions; the scores were 9.5, 9.26, and 9.30, respectively. WTV and SEA recorded the highest improvements in user feedback scores, of 1.02 and 2.10 points. WTV improved its score from 5.83 to 6.90 and then, lastly, to 6.85. SEA improved its initial score of 6.07 to 7.31 and then to 8.17. A probable reason for this improvement is that the users at first disliked being disturbed by a robot during their relaxing times; later on, they were pleased by the fact that the robot learns not to invade their personal space in such relaxing situations.

Table 3 presents the results of a t test performed to analyze the differences between the feedback scores received by the robot for its approach behavior in the initial and final stages. The null hypothesis assumes that there is no significant difference between the compared groups. From these results, it can be seen that p < 0.05 for DES, WTV, SEA, LAB, LAP, and LEA. Hence, the null hypothesis cannot be accepted, which shows a significant improvement in the feedback scores for these tasks. In contrast, for the tasks with p > 0.05, namely, CLN, EXR, PHN, and CK, the null hypothesis is accepted; hence, there is no significant improvement in the feedback scores. However, the feedback scores received for CLN, EXR, PHN, and CK were already much higher. In all the cases considered here, the degrees of freedom (dof) and t critical were 20 and 1.724, respectively. A minimal sketch of such a test is given after Table 3.

Table 3

t test for the comparison of user feedback scores in initial and final attempts

Task | Mean (initial → final) | Variance (initial → final) | p
DES | 8.17 → 8.50 | 1.55 → 0.85 | 0.011
CLN | 9.31 → 9.404 | 0.84 → 0.51 | 0.081
EXR | 9.33 → 9.48 | 0.51 → 0.26 | 0.081
WTV | 5.83 → 6.86 | 0.98 → 1.6 | 0.0084
SEA | 6.07 → 8.17 | 0.98 → 1.68 | 0.0000016
LAB | 8.31 → 8.50 | 2.26 → 1.8 | 0.036
PHN | 9.50 → 9.31 | 0.325 → 0.886 | 0.179
LAP | 9.048 → 9.26 | 0.597 → 0.29 | 0.012
CK | 9.31 → 9.38 | 0.562 → 0.572 | 0.093
LEA | 8.33 → 8.71 | 1.28 → 0.964 | 0.049
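As a minimal sketch, the reported comparison can be reproduced with a paired t test over the 21 per-participant scores, which is consistent with the reported dof of 20; the score arrays below are placeholders, since the per-participant data are not published here.

```python
from scipy import stats

# Placeholder per-participant feedback scores for one task at the
# initial and final stages (n = 21, so a paired test has dof = 20).
initial_scores = [8, 9, 7, 8, 9, 8, 7, 9, 8, 8, 9, 7, 8, 9, 8, 8, 7, 9, 8, 8, 9]
final_scores = [9, 9, 8, 8, 9, 9, 8, 9, 8, 9, 9, 8, 8, 9, 9, 8, 8, 9, 9, 8, 9]

t_stat, p_value = stats.ttest_rel(final_scores, initial_scores)
if p_value < 0.05:
    print(f"Reject the null hypothesis (t = {t_stat:.2f}, p = {p_value:.3f})")
else:
    print(f"No significant difference (t = {t_stat:.2f}, p = {p_value:.3f})")
```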

In general, the minimum feedback scores increased with time, with the exceptions of PHN and CK. The maximum feedback scores either stayed constant or increased with time. From these trends, it can be concluded that a significant improvement was achieved in determining an appropriate approach behavior for a robot, and that the performance of the system can have a positive effect on the user's acceptance of robots as well.

6 Implications

6.1 Implications for theory

The findings of this study were mostly based on the premise that people prefer proxemic rules similar to those of human–human interaction when interacting with robots. Factors that influence personal space, such as the nature of gaze, gender, and familiarity with the robot, were not considered in this context. Therefore, this method does not replicate all aspects of human–human interaction in the HRI scenario. However, we could identify several relationships between preferred proxemic behavior and user experience with a robot's cognition that hold true for HRI as well. In this study, we assumed that an individual's state is affected by his/her task. However, numerous other factors were not considered during the study: social norms in addition to proxemics, cultural values, personality traits and principles, and health condition are a few such social and psychophysiological conditions. In addition, no theory combines all these aspects to model a certain situation. Therefore, the development of such an instinct in robots still lacks a conceptual and technological basis.

6.2 Implications for design

Based on the findings, we can state that this model offers a better means of determining a robot's proxemic behavior based on nonverbal human cues such as movements. As humans prefer not to be interrupted by the behavior of the robot, the first design guideline proposed by the study is embedding a sense of respect for personal space based on physical and emotional aspects. It is important to design social robots that can maintain human trust; otherwise, the designer has to burden the user with a set of rules to follow when using the robot. The feature of "maintaining appropriate proxemic behavior" can be used as an "etiquette" for future robots in human domains. Even though people have become accustomed to robots in their environment and will tolerate behaviors with limited perception capability, such behavior is not acceptable in dynamic environments such as shopping malls, hotels, and relaxation environments. Therefore, maintaining etiquettes in behavior can be stated as the second design guideline for robots. The third guideline is to consider as many human cues as possible. In this approach, only the usage of the activity space was considered. Although the activity space is a clear representation of the task, there are occasions on which it is a vague representation of the setting; therefore, considering factors in the environment can further improve perception of the surroundings. An important fact observed during the experiment was that some people were restricted to a few zones while others made extended movements covering a number of zones. A zone-based assessment is therefore advantageous in monitoring human behavior, especially for determining proxemics; this zone-based assessment is the fourth design guideline that can be highlighted from the results. Body proportions differ from human to human, which is why the dimensions of the zones, as well as the whole activity space grid shown in Figure 4, are determined from individual body parameters. This can be considered the fifth guideline for designing a proxemic-cautious robot companion.

6.3 Limitations

Involuntary responses of humans in the presence of the robot, such as changes in pose, and voluntary responses, such as verbal cues, were not evaluated in this study. However, these parameters may have an effect upon the emotional state of the situation, which would require corresponding adjustments in the proxemic behavior of the robot. Factors such as the age, gender, and race of the participants were not considered within the scope of this study. Therefore, in the future, we expect to expand the system to assess a situation in various aspects beyond the physical behavior of a human. In addition, cultural concerns and norms confined to certain communities can be considered in future improvements.

7 Conclusions

Various tasks utilize the activity space differently. This fact was exploited in this research to determine an appropriate approach behavior for a robot to suit a situation. The "approach behavior" comprises the approach direction, orientation, and mutual distance between the person and the robot. This study presents a model for the probabilistic evaluation of the human activity space. The probability of a certain region being utilized by a person in the future is assessed by the robot before it decides to approach the person from that side; the region with the highest probability of being occupied is identified and avoided when approaching. This probabilistic analysis of the human activity space was implemented on a social robot in order to determine the appropriate proxemic behavior for approaching humans. User feedback on the robot's behavior was collected to evaluate the functionality of the proposed mechanism, and results obtained in various situations in a social environment were used to validate the system. An etiquette for a robot based on proxemics is developed in this study around the feature called "activity zones." The empirical results of this study can be used to reveal the human response toward a robot with proxemics-based etiquettes. Finally, these results were used to propose several guidelines for deploying robots in social domains.

References

[1] A. Rossi, K. Dautenhahn, K. L. Koay, and J. Saunders, "Investigating human perceptions of trust in robots for safe HRI in home environments," in Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2017, pp. 375–376. doi: 10.1145/3029798.3034822.

[2] T. Rantanen, P. Lehto, P. Vuorinen, and K. Coco, "The adoption of care robots in home care – A survey on the attitudes of Finnish home care personnel," J. Clin. Nurs., vol. 27, no. 9–10, pp. 1846–1859, 2018. doi: 10.1111/jocn.14355.

[3] R. Mead and M. J. Mataric, "Autonomous human-robot proxemics: Socially aware navigation based on interaction potential," Auton. Robot., vol. 41, no. 5, pp. 1189–1201, 2017. doi: 10.1007/s10514-016-9572-2.

[4] M. L. Walters, "The design space for robot appearance and behaviour for social robot companions," PhD thesis, University of Hertfordshire, 2008.

[5] J. Mumm and B. Mutlu, "Human-robot proxemics: Physical and psychological distancing in human-robot interaction," in Proceedings of the 6th International Conference on Human-Robot Interaction, ACM, 2011, pp. 331–338. doi: 10.1145/1957656.1957786.

[6] M. L. Walters, D. S. Syrdal, K. Dautenhahn, R. Te Boekhorst, and K. L. Koay, "Avoiding the uncanny valley: Robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion," Auton. Robot., vol. 24, no. 2, pp. 159–178, 2008. doi: 10.1007/s10514-007-9058-3.

[7] P. Liu, D. F. Glas, T. Kanda, H. Ishiguro, and N. Hagita, "A model for generating socially-appropriate deictic behaviors towards people," Int. J. Soc. Robot., vol. 9, no. 1, pp. 33–49, 2017. doi: 10.1007/s12369-016-0348-9.

[8] L. C. G. F. dos Santos, "Laban movement analysis: A Bayesian computational approach to hierarchical motion analysis and learning," PhD thesis, University of Coimbra, 2014.

[9] J. Stark, R. R. Mota, and E. Sharlin, "Personal space intrusion in human-robot collaboration," in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2018, pp. 245–246. doi: 10.1145/3173386.3176998.

[10] T. E. Laatikainen, K. Hasanzadeh, and M. Kytta, "Capturing exposure in environmental health research: Challenges and opportunities of different activity space models," Int. J. Health Geogr., vol. 17, no. 1, art. 29, 2018. doi: 10.1186/s12942-018-0149-5.

[11] L. Delp, Z. Mojtahedi, H. Sheikh, and J. Lemus, "A legacy of struggle: The OSHA ergonomics standard and beyond, Part I," New Solut., vol. 24, no. 3, pp. 365–389, 2014. doi: 10.2190/NS.24.3.i.

[12] M. O. Melo, L. B. da Silva, and F. dos Santos Rebelo, "Ergonomics aspects and workload on the operators in the electric power control and operation centers: Multi-case studies in Portugal and Brazil," Iberoam. J. Ind. Eng., vol. 8, no. 16, pp. 35–55, 2017.

[13] S. O. Olabode, A. R. Adesanya, and A. A. Bakare, "Ergonomics awareness and employee performance: An exploratory study," Econ. Environ. Geol., vol. 17, no. 44, pp. 813–829, 2017. doi: 10.25167/ees.2017.44.11.

[14] S. Purnawati, N. Kawakami, A. Shimazu, D. P. Sutjana, and N. Adiputra, "Retraction: Effects of an ergonomics-based job stress management program on job strain, psychological distress, and blood cortisol among employees of a national private bank in Denpasar Bali," Ind. Health, 2016. doi: 10.2486/indhealth.2015-0260.

[15] F. Papadopoulos, D. Kuster, L. J. Corrigan, A. Kappas, and G. Castellano, "Do relative positions and proxemics affect the engagement in a human-robot collaborative scenario?," Interact. Stud., vol. 17, no. 3, pp. 321–347, 2017. doi: 10.1075/is.17.3.01pap.

[16] M. L. Walters, K. Dautenhahn, R. Te Boekhorst, K. L. Koay, D. S. Syrdal, and C. L. Nehaniv, "An empirical framework for human-robot proxemics," in Proceedings of New Frontiers in Human-Robot Interaction, 2009.

[17] M. Obaid, E. B. Sandoval, J. Zlotowski, E. Moltchanova, C. A. Basedow, and C. Bartneck, "Stop! That is close enough. How body postures influence human-robot proximity," in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), IEEE, 2016, pp. 354–361. doi: 10.1109/ROMAN.2016.7745155.

[18] T. J. Wiltshire, E. J. Lobato, A. V. Wedell, W. Huang, B. Axelrod, and S. M. Fiore, "Effects of robot gaze and proxemic behavior on perceived social presence during a hallway navigation scenario," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 57, no. 1, pp. 1273–1277, 2013. doi: 10.1177/1541931213571282.

[19] Y. Kim and B. Mutlu, "How social distance shapes human-robot interaction," Int. J. Hum. Comput. Stud., vol. 72, no. 12, pp. 783–795, 2014. doi: 10.1016/j.ijhcs.2014.05.005.

[20] T. J. Wiltshire, E. J. Lobato, J. Velez, F. Jentsch, and S. M. Fiore, "An interdisciplinary taxonomy of social cues and signals in the service of engineering robotic social intelligence," in Proc. SPIE 9084, Unmanned Systems Technology XVI, 2014, p. 90840F. doi: 10.1117/12.2049933.

[21] R. Mead and M. J. Mataric, "Proxemics and performance: Subjective human evaluations of autonomous sociable robot distance and social signal understanding," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2015, pp. 5984–5991. doi: 10.1109/IROS.2015.7354229.

[22] T. J. Wiltshire, E. J. Lobato, D. R. Garcia, S. M. Fiore, F. G. Jentsch, W. H. Huang, et al., "Effects of robotic social cues on interpersonal attributions and assessments of robot interaction behaviors," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 59, no. 1, pp. 801–805, 2015. doi: 10.1177/1541931215591245.

[23] R. Mead, A. Atrash, and M. J. Mataric, "Automated proxemic feature extraction and behavior recognition: Applications in human-robot interaction," Int. J. Soc. Robot., vol. 5, no. 3, pp. 367–378, 2013. doi: 10.1007/s12369-013-0189-8.

[24] A. Umbrico, A. Cesta, G. Cortellessa, and A. Orlandini, "A holistic approach to behavior adaptation for socially assistive robots," Int. J. Soc. Robot., vol. 12, pp. 617–637, 2020. doi: 10.1007/s12369-019-00617-9.

[25] E. Bryndin, "Collaboration robots as digital doubles of person for communication in public life and space," Am. J. Mech. Eng., vol. 4, no. 2, pp. 35–39, 2019. doi: 10.11648/j.ajmie.20190402.12.

[26] K. Khoshhal, H. Aliakbarpour, J. Quintas, P. Drews, and J. Dias, "Probabilistic LMA-based classification of human behaviour understanding using Power Spectrum technique," in 2010 13th International Conference on Information Fusion, Edinburgh, 2010, pp. 1–7. doi: 10.1109/ICIF.2010.5712107.

[27] R. Mead and M. J. Mataric, "A probabilistic framework for autonomous proxemic control in situated and mobile human-robot interaction," in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2012, pp. 193–194. doi: 10.1145/2157689.2157751.

[28] S. Satake, T. Kanda, D. F. Glas, M. Imai, H. Ishiguro, and N. Hagita, "How to approach humans?: Strategies for social robots to initiate interaction," in Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, ACM, 2009, pp. 109–116. doi: 10.1145/1514095.1514117.

[29] S. M. Bhagya, P. Samarakoon, H. P. Chapa Sirithunge, M. A. Viraj, J. Muthugala, A. G. Buddhika, et al., "Proxemics and approach evaluation by service robot based on user behavior in domestic environment," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 8192–8199. doi: 10.1109/IROS.2018.8593713.

[30] W. S. Marras and R. W. Schoenmarxlin, "Wrist motions in industry," Ergonomics, vol. 36, no. 4, pp. 341–351, 1993. doi: 10.1080/00140139308967891.

[31] L. Marusca, "What every body is saying. An ex-FBI agent's guide to speed-reading people," J. Media Res., vol. 7, no. 3, pp. 89–90, 2014.

[32] C. Sirithunge, B. Jayasekara, and C. Pathirana, "Effect of activity space on detection of human activities by domestic service robots," in TENCON 2017 – 2017 IEEE Region 10 Conference, 2017, pp. 344–349. doi: 10.1109/TENCON.2017.8227887.

[33] M. A. Yousuf, Y. Kobayashi, Y. Kuno, A. Yamazaki, and K. Yamazaki, "How to move towards visitors: A model for museum guide robots to initiate conversation," in RO-MAN 2013, IEEE, 2013, pp. 587–592. doi: 10.1109/ROMAN.2013.6628543.

[34] C. Sirithunge, H. M. R. T. Bandara, A. G. B. P. Jayasekara, D. P. Chandima, and H. M. H. S. Abeykoon, "A study on robot-initiated interaction: Towards virtual social behavior," in Social Robotics: Technological, Societal and Ethical Aspects of Human-Robot Interaction, Human-Computer Interaction Series, Springer, Cham, 2019, vol. 7, pp. 37–60. doi: 10.1007/978-3-030-17107-0_3.

Received: 2019-08-17
Revised: 2020-07-31
Accepted: 2020-09-11
Published Online: 2020-11-26

© 2021 Chapa Sirithunge et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
