
2017 | Book

Advances in Human Factors in Robots and Unmanned Systems

Proceedings of the AHFE 2016 International Conference on Human Factors in Robots and Unmanned Systems, July 27-31, 2016, Walt Disney World®, Florida, USA


About this book

This book focuses on the importance of human factors in the development of reliable and safe unmanned systems. It discusses current challenges such as improving the perceptual and cognitive abilities of robots, developing suitable synthetic vision systems, coping with the degraded reliability of unmanned systems, and predicting robotic behavior in the event of a loss of communication, as well as the vision for future soldier-robot teams, human-agent teaming, real-world implications for human-robot interaction, and approaches to standardizing both the display and control of technologies across unmanned systems. Based on the AHFE 2016 International Conference on Human Factors in Robots and Unmanned Systems, held July 27-31, 2016, at Walt Disney World®, Florida, USA, this book is expected to foster new discussion and stimulate new ideas towards the development of more reliable, safer, and more functional devices for carrying out automated and concurrent tasks.

Table of Contents

Frontmatter

A Vision for Future Soldier-Robot Teams

Frontmatter
Iterative Interface Design for Robot Integration with Tactical Teams

This research investigated mobile user interface requirements for robots used in tactical operations and evaluated user responses through an iterative participatory design process. A longitudinal observational study (five sessions across six months) was conducted for the iterative development of robot capabilities and a mobile user interface. Select members of the tactical team wore the mobile interface and performed operations with the robot. After each training session, an after-action review was conducted and feedback was collected about the training, the interface, robot capabilities, and desired modifications. Based on the feedback provided, iterative updates were made to the robotic system and the user interface. The field training studies presented difficulties in the interpretation of the responses due to complex interactions and external influences. Iterative designs, observations, and lessons learned related to the integration of robots with tactical teams are presented.

K. M. Ibrahim Asif, Cindy L. Bethel, Daniel W. Carruth
Context Sensitive Tactile Displays for Bidirectional HRI Communications

Tactile displays have shown high potential to support critical communications by relaying information, queries, and direction cues from the robot to the Soldier in a hands-free manner. Similarly, Soldiers can communicate with the robot through speech, gesture, or visual display. They could also respond to robot messages by pressing a button to acknowledge or request a repeat message. A series of studies has demonstrated the ability of Soldiers to interpret tactile direction and information cues during waypoint navigation in rough terrain, during day and night operations. The design of tactile display systems must result in reliable message detection, and we have proposed a framework to ensure salience of the tactile cues. In this presentation, we summarize research efforts that explore factors affecting the efficacy of tactile cues for bidirectional Soldier-robot communications, and we propose methods for changing tactile salience based on symbology and context.

Bruce Mortimer, Linda Elliott
An Initial Investigation of Exogenous Orienting Visual Display Cues for Dismounted Human-Robot Communication

The push to advance dismounted Soldier-robot teaming is toward more autonomous systems with effective bi-directional Soldier-robot dialogue, which in turn requires a strong understanding of the interface design factors that impact Soldier-robot communication. This experiment tested the effects of various exogenous orienting visual display cues on simulation-based reconnaissance and communication performance, perceived workload, and usability preference. A 2 × 2 design provided four exogenous orienting visual display designs, two for navigation route selection and two for building identification. Participants’ tasks included signal detection and response to visual prompts within a tactical multimodal interface (MMI). Within the novice non-military sample, results reveal that all display designs elicited low perceived workload, were highly accepted in terms of usability preference, and did not affect task performance regarding responses to robot assistance requests. Results suggest including other factors, such as individual differences (experience, ability, motivation), to enhance a predictive model of task performance.

Julian Abich IV, Daniel J. Barber, Linda R. Elliott
Five Requisites for Human-Agent Decision Sharing in Military Environments

Working with industry, universities, and other government agencies, the U.S. Army Research Laboratory has been engaged in multi-year programs to understand the role of humans working with autonomous and robotic systems. The purpose of this paper is to present an overview of the research themes in order to abstract five research requirements for effective human-agent decision-making. Supporting research for each of the five requirements is discussed to elucidate the issues involved and to make recommendations for future research. The requirements include: (a) a direct link between the operator and a supervisory agent, (b) interface transparency, (c) appropriate trust, (d) cognitive architectures to infer intent, and (e) a common language between humans and agents.

Michael Barnes, Jessie Chen, Kristin E. Schaefer, Troy Kelley, Cheryl Giammanco, Susan Hill
Initial Performance Assessment of a Control Interface for Unmanned Ground Vehicle Operation Using a Simulation Platform

The successful navigation of unmanned ground vehicles (UGVs) is important as UGVs are increasingly integrated into tactical and reconnaissance operations. Operating environments may involve not only winding routes but also narrow passages between obstacles. This study investigated participants’ ability to navigate a maze environment incorporating narrow hallways, with two different user interfaces, using human-in-the-loop simulation. Participants used a game controller and a customized user interface to navigate a simulated UGV through a simulated maze environment. Results indicated that the video-plus-map interface, displaying both video and LiDAR data, required more time to complete the maze than the interface displaying video data only.

Leif T. Jensen, Teena M. Garrison, Daniel W. Carruth, Cindy L. Bethel, Phillip J. Durst, Christopher T. Goodin

Confronting Human Factors Challenges

Frontmatter
Examining Human Factors Challenges of Sustainable Small Unmanned Aircraft System (sUAS) Operations

Small unmanned aircraft systems (sUAS) represent a significant instrument for improving task efficiency and effectiveness across numerous industries and operational environments. However, concern has grown regarding potentially irresponsible operation and public apprehension about potential privacy loss. These concerns, combined with unique sUAS human factors challenges, may lead to unwanted and dangerous results, including reduced safety, property damage, and loss of life. Such challenges include lack of command, control, and communication (C3) standardization; detection, tracking, and management of operations; and human perceptual and cognitive issues. These issues and concerns could be significant barriers to permitting routine and sustainable operations in the National Airspace System (NAS), but by closely examining these factors it may be possible to devise strategies to better support future applications. This exploratory study seeks to provide a review of the relevant extant literature and to condense the findings into sets of recommendations and guidelines for human factors in sUAS adoption and use.

Clint R. Balog, Brent A. Terwilliger, Dennis A. Vincenzi, David C. Ison
Mission Capabilities-Based Testing for Maintenance Trainers: Ensuring Trainers Support Human Performance

Historically, Unmanned Aerial Systems (UAS) Training Device (TD) testing emphasized meeting performance specifications, with little attention paid to mission capabilities-based requirements. Recent research focusing on the effectiveness and efficiency of UAS TDs in supporting the learning of critical skills has brought mission capabilities-based requirements testing to the forefront. This paper reviews the process and results of the MQ-8B Fire Scout Avionics Maintenance Trainer (AMT) mission capabilities-based testing (MCT) event. The Fire Scout AMT MCT event produced qualitative and quantitative data on the AMT’s ability to support the teaching of tasks/Learning Objectives (LOs) specific to maintenance. The multi-competency test team collected two types of data: one measuring the capability of the TD to provide the critical attributes supporting each task/LO, and the other measuring the capability of the TD to facilitate task/LO completion.

Alma M. Sorensen, Mark E. Morris, Pedro Geliga
Detecting Deictic Gestures for Control of Mobile Robots

For industrial environments, especially under “Industry 4.0” conditions, a mobile, hands-free interaction solution is necessary. Within this project, a mobile robot system for picking, lifting, and transporting small boxes in logistics domains was created. It consists of a gesture detection and recognition system based on Microsoft Kinect™ and gesture detection algorithms. To implement these algorithms, several studies on the intuitive use, execution, and understanding of mid-air gestures were conducted. The basis of detection was to determine whether a gesture is executed dynamically or statically and to derive a mathematical model for these different kinds of gestures. Suitable parameters describing the different gesture phases were identified and will be used for robust recognition. A first prototype implementing this technology is also presented in this paper.

Tobias Nowack, Stefan Lutherdt, Stefan Jehring, Yue Xiong, Sabine Wenzel, Peter Kurtz
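The static-versus-dynamic distinction described in the abstract above can be illustrated with a minimal sketch: a gesture is treated as static when hand velocity stays below a threshold for most of the observation window, and dynamic otherwise. The thresholds, sampling rate, and data layout below are illustrative assumptions, not the authors' published model.

```python
import numpy as np

def classify_gesture(hand_positions, dt=1.0 / 30.0,
                     vel_threshold=0.05, hold_fraction=0.8):
    """Label a tracked hand trajectory as 'static' or 'dynamic'.

    hand_positions: (N, 3) array of hand coordinates (e.g., from a Kinect
    skeleton stream); vel_threshold is in metres per second. All threshold
    values are placeholders for illustration only.
    """
    velocities = np.linalg.norm(np.diff(hand_positions, axis=0), axis=1) / dt
    still = np.mean(velocities < vel_threshold)
    return "static" if still >= hold_fraction else "dynamic"

# Example: a hand held (almost) still for one second at 30 Hz.
trajectory = np.zeros((30, 3)) + np.random.normal(0, 0.0002, (30, 3))
print(classify_gesture(trajectory))  # -> 'static'
```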
Effects of Time Pressure and Task Difficulty on Visual Search

The process of pilots constantly checking the information given by instruments was examined in this study to determine the effects of time pressure and task difficulty on visual search. Software was designed to simulate visual detection tasks in which time pressure and task difficulty were adjusted. Two-factor analysis of variance, simple main effect, and regression analyses were conducted on the accuracy and reaction time data obtained. Results showed that both time pressure and task difficulty significantly affected accuracy, and an interaction was apparent between the two factors. In addition, task difficulty had a significant effect on reaction time, which increased linearly with the number of stimuli. By contrast, the effect of time pressure on reaction time was less apparent when reaction accuracy was 90 % or above. In the ergonomic design of a human-machine interface, a good match between time pressure and task difficulty is key to achieving excellent search performance.

Xiaoli Fan, Qianxiang Zhou, Fang Xie, Zhongqi Liu
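As a hedged illustration of the analyses named in the abstract above (a two-factor ANOVA on accuracy and a linear fit of reaction time against the number of stimuli), the sketch below uses statsmodels on a small hypothetical long-format data frame; the column names and values are assumptions, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical trial-level data: one row per participant x condition.
df = pd.DataFrame({
    "accuracy":   [0.95, 0.90, 0.85, 0.72, 0.97, 0.88, 0.83, 0.70],
    "rt":         [0.61, 0.78, 0.95, 1.30, 0.58, 0.80, 0.99, 1.25],
    "pressure":   ["low", "low", "high", "high"] * 2,
    "difficulty": ["easy", "hard", "easy", "hard"] * 2,
    "n_stimuli":  [4, 12, 4, 12, 4, 12, 4, 12],
})

# Two-factor ANOVA with interaction on accuracy.
acc_model = smf.ols("accuracy ~ C(pressure) * C(difficulty)", data=df).fit()
print(anova_lm(acc_model, typ=2))

# Linear relationship between reaction time and number of stimuli.
rt_model = smf.ols("rt ~ n_stimuli", data=df).fit()
print(rt_model.params)
```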

Human-Agent Teaming

Frontmatter
Operator-Autonomy Teaming Interfaces to Support Multi-Unmanned Vehicle Missions

Advances in automation technology are leading to the development of operational concepts in which a single operator teams with multiple autonomous vehicles. This requires the design and evaluation of interfaces that support operator-autonomy collaborations. This paper describes interfaces designed to support a base defense mission performed by a human operator and heterogeneous unmanned vehicles. Flexible operator-autonomy teamwork is facilitated with interfaces that highlight the tradeoffs of autonomy-generated plans, support allocation of assets to tasks, and communicate mission progress. The interfaces include glyphs and video gaming type icons that present information in a concise, integrated manner and multi-modal controls that augment an adaptable architecture to enable seamless transition across control levels, from manual to fully autonomous. Examples of prototype displays and controls are provided, as well as usability data collected from multi-task simulation evaluations.

Gloria L. Calhoun, Heath A. Ruff, Kyle J. Behymer, Elizabeth M. Mersch
Shaping Trust Through Transparent Design: Theoretical and Experimental Guidelines

The current research discusses transparency as a means to enable trust of automated systems. Commercial pilots (N = 13) interacted with an automated aid for emergency landings. The automated aid provided decision support during a complex task where pilots were instructed to land several aircraft simultaneously. Three transparency conditions were used to examine the impact of transparency on pilot’s trust of the tool. The conditions were: baseline (i.e., the existing tool interface), value (where the tool provided a numeric value for the likely success of a particular airport for that aircraft), and logic (where the tool provided the rationale for the recommendation). Trust was highest in the logic condition, which is consistent with prior studies in this area. Implications for design are discussed in terms of promoting understanding of the rationale for automated recommendations.

Joseph B. Lyons, Garrett G. Sadler, Kolina Koltai, Henri Battiste, Nhut T. Ho, Lauren C. Hoffmann, David Smith, Walter Johnson, Robert Shively
A Framework for Human-Agent Social Systems: The Role of Non-technical Factors in Operation Success

We present a comprehensive framework that identifies a number of factors that impact human-agent team building, including human, agent, and environmental factors. This framework integrates existing empirical work in organizational behavior, non-technical training, and human-agent interaction to support successful human-agent operations. We conclude by discussing implications and next steps to evaluate and expand our framework, with the aim of guiding future attempts to create efficient human-agent teams and improve mission outcomes.

Monika Lohani, Charlene Stokes, Natalia Dashan, Marissa McCoy, Christopher A. Bailey, Susan E. Rivers
Insights into Human-Agent Teaming: Intelligent Agent Transparency and Uncertainty

This paper discusses two studies testing the effects of agent transparency in joint cognitive systems involving supervisory control and decision-making. Specifically, we examine the impact of agent transparency on operator performance (decision accuracy), response time, perceived workload, perceived usability of the agent, and operator trust in the agent. Transparency has a positive impact on operator performance, usability, and trust, yet the depiction of uncertainty has potentially negative effects on usability and trust. Guidelines and considerations for displaying transparency in joint cognitive systems are discussed.

Kimberly Stowers, Nicholas Kasdaglis, Michael Rupp, Jessie Chen, Daniel Barber, Michael Barnes
Displaying Information to Support Transparency for Autonomous Platforms

The purpose of this paper is to summarize display design techniques that are best suited for displaying information to support transparency of communication in autonomous systems interfaces. The principles include Ecological Interface Design, integrated displays, and pre-attentive cuing. Examples of displays from two recent experiments investigating how transparency affects operator trust, situational awareness, and workload, are provided throughout the paper as an application of these techniques. Specifically, these interfaces were formatted using the Situation awareness-based Agent Transparency model as a method of formatting the information in displays for an autonomous robot—the Autonomous Squad Member (ASM). Overall, these methods were useful in creating usable interfaces for the ASM display.

Anthony R. Selkowitz, Cintya A. Larios, Shan G. Lakhmani, Jessie Y.C. Chen
The Relevance of Theory to Human-Robot Teaming Research and Development

In many disciplines and fields, theories help organize the body of knowledge in the field and provide direction for research. In turn, research findings contribute to theory building. The field of human-robot teaming (HRT) is a relatively new one, spanning only the last two decades. Much of the research in this field has been driven by expediency rather than by theory, and relatively little effort has been invested in using HRT research to advance theory. As the field of HRT continues to expand rapidly, we find it increasingly necessary to relate theories to the research so that one can inform the other. As an initial effort, the current work will discuss and evaluate two broad research areas in human-robot teaming and identify theories relevant to each area. The areas are (i) human-robot interfaces, and (ii) specific factors that enable teaming. In identifying the relevant theories for each area, we will describe how the theories were used and whether the findings supported the theories.

Grace Teo, Ryan Wohleber, Jinchao Lin, Lauren Reinerman-Jones

From Theory to Application: UAV and Human-Robot Collaboration

Frontmatter
Classification and Prediction of Human Behaviors by a Mobile Robot

Robots interacting and collaborating with people need to comprehend and predict their movements. We present an approach to perceiving and modeling behaviors using a 3D virtual world. The robot’s visual data is registered with the virtual world to construct a model of the dynamics of the behavior and to predict future motions using a physics engine. This enables the robot to visualize alternative evolutions of the dynamics and to classify them. The goal of this work is to use this ability to interact more naturally with humans and to avoid potentially disastrous mistakes.

D. Paul Benjamin, Hong Yue, Damian Lyons
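The idea described in the abstract above, rolling several candidate dynamics forward and classifying the observed motion by the best-matching prediction, can be sketched minimally as follows. A hand-written constant-acceleration rollout stands in for the physics engine, and the behavior hypotheses, names, and parameters are illustrative assumptions rather than the authors' system.

```python
import numpy as np

def rollout(pos, vel, acc, steps=10, dt=0.1):
    """Forward-simulate a simple point-mass model (stand-in for a physics engine)."""
    traj = []
    for _ in range(steps):
        vel = vel + acc * dt
        pos = pos + vel * dt
        traj.append(pos.copy())
    return np.array(traj)

def classify_behavior(observed, pos0, vel0, hypotheses):
    """Pick the behavior hypothesis whose predicted trajectory best matches the observation."""
    errors = {}
    for name, acc in hypotheses.items():
        predicted = rollout(pos0.copy(), vel0.copy(), acc, steps=len(observed))
        errors[name] = np.mean(np.linalg.norm(predicted - observed, axis=1))
    return min(errors, key=errors.get), errors

# Hypothetical alternative evolutions of a person's planar motion.
hypotheses = {
    "keep_walking": np.array([0.0, 0.0]),
    "slow_to_stop": np.array([-0.5, 0.0]),
    "turn_left":    np.array([0.0, 0.4]),
}
pos0, vel0 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
observed = rollout(pos0.copy(), vel0.copy(), np.array([-0.5, 0.0]))
label, _ = classify_behavior(observed, pos0, vel0, hypotheses)
print(label)  # -> 'slow_to_stop'
```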
The Future of Human Robot Teams in the Army: Factors Affecting a Model of Human-System Dialogue Towards Greater Team Collaboration

Understanding of intent is one of the most complex traits of highly efficient teams. Combining elements of verbal and non-verbal communication with shared mental models about mission goals and team member capabilities, intent requires knowledge about both the task and the teammate. A revised model of team communication is needed, one that begins with traditional models of communication, accounts for teaming factors such as situation awareness, and incorporates the sensing, reasoning, and tactical capabilities available via autonomous systems, to accurately describe the unique interactions and understanding of intent that will occur in human-robot teams. This paper examines the issue from a system capability viewpoint, identifying which system capabilities can mirror the abilities of humans through the sensor and computing strengths of autonomous systems, thus creating a team environment that is robust and adaptable while maintaining focus on mission goals.

A. William Evans, Matthew Marge, Ethan Stump, Garrett Warnell, Joseph Conroy, Douglas Summers-Stay, David Baran
Human-Autonomy Teaming Using Flexible Human Performance Models: An Initial Pilot Study

Recent advances in autonomy have highlighted opportunities for tight coordination between humans and autonomous agents in many current and future applications. For operations involving cooperating humans and robots, autonomous teammates must have the flexibility to respond to the inherently unpredictable behavior of their human counterparts. To investigate this issue in detail, this paper uses an unmanned aerial vehicle (UAV) simulation to evaluate flexible human performance models against traditional static modeling approaches for multi-agent task allocation and scheduling. Additional comparisons are drawn between adaptive human models, which are adjusted in real time by the autonomous planner according to realized human performance, and adaptable human models, where the human operator is given sole authority over model adjustments. Results indicate that adaptive human performance models significantly increase total mission reward over both the baseline static modeling framework (p = 0.0012) and the adaptable modeling technique (p = 0.0028) for this system.

Christopher J. Shannon, David C. Horney, Kimberly F. Jackson, Jonathan P. How
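One simple way to realize the adaptive model contrasted in the abstract above is for the planner to update its estimate of an operator's task completion time from realized performance, for example with an exponentially weighted moving average; the adaptable variant would instead expose the same parameter for the operator to set. This is a sketch under assumed names and values, not the authors' planner.

```python
class AdaptiveHumanModel:
    """Planner-side estimate of how long a human takes to finish a task type."""

    def __init__(self, initial_estimate_s, learning_rate=0.3):
        self.estimate_s = initial_estimate_s   # a static model would keep this fixed
        self.learning_rate = learning_rate

    def record_completion(self, observed_duration_s):
        # Exponentially weighted update toward the realized duration.
        self.estimate_s += self.learning_rate * (observed_duration_s - self.estimate_s)

    def predicted_duration(self):
        return self.estimate_s


# The planner would call predicted_duration() when allocating and scheduling tasks.
model = AdaptiveHumanModel(initial_estimate_s=30.0)
for observed in (42.0, 45.0, 40.0):      # operator is slower than initially assumed
    model.record_completion(observed)
print(round(model.predicted_duration(), 1))  # estimate drifts from 30 s toward the observed durations
```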
Self-scaling Human-Agent Cooperation Concept for Joint Fighter-UCAV Operations

In this article, we describe human-automation integration concepts that allow the guidance and mission management of multiple UCAVs (Unmanned Combat Aerial Vehicles) from aboard a manned single-seat fighter aircraft. The conceptual basis of our approach is dual-mode cognitive automation. This concept uses two distinct modes of human-agent cooperation: a hierarchical relationship with agents working in delegation mode, and a heterarchical relationship with an agent working in assistance mode. For the hierarchical relationship we suggest three delegation modes (team-, intent-, and task-based). The agent in the heterarchical relationship, i.e., the assistant system, adapts the operator-assistant system cooperation and the guidance of the UCAVs according to the named delegation modes. The adaptation is shaped by the assessment of the operator’s mental state and external situation features. In this way, we aim to balance the operator’s activity and work demands. Future research at our institute will concentrate on developing a software prototype for human-in-the-loop experiments.

Florian Reich, Felix Heilemann, Dennis Mund, Axel Schulte
Experimental Analysis of Behavioral Workload Indicators to Facilitate Adaptive Automation for Fighter-UCAV Interoperability

In this article, we present an experimental study investigating the operationalization of behavioral indicators of pilots’ mental workload in a military manned-unmanned teaming scenario. To identify such behavioral workload indicators, we conducted an exploratory experimental campaign. We chose an air-to-ground low-level flight mission with multiple target engagements. To further increase the task load of the pilots, we introduced an embedded secondary task, i.e., the classification of target pictures delivered by remote UCAVs. This is a typical task that we expect in future manned-unmanned teaming setups. The examination of the subjective ratings shows that high individual workload states were achieved. In these high-workload situations, the subjects used various behavioral adaptations to maintain a high performance level while regulating their subjective workload. Because these behavioral adaptations occur prior to grave performance decrements, we consider using behavioral changes as an indicator of high workload and as a trigger for adaptive support.

Dennis Mund, Felix Heilemann, Florian Reich, Elisabeth Denk, Diana Donath, Axel Schulte

Supporting Sensor and UAV Users

Frontmatter
Model-Driven Sensor Operation Assistance for a Transport Helicopter Crew in Manned-Unmanned Teaming Missions: Selecting the Automation Level by Machine Decision-Making

One of the research fields at the Institute of Flight Systems (IFS) of the University of the Armed Forces (UniBwM) focuses on the integration of reconnaissance sensor operation support into manned-unmanned teaming (MUM-T) helicopter missions. The purposive deployment of mission sensors carried by a team of unmanned aerial vehicles (multi-UAV) in such missions is expected to introduce new and significant demands, especially in workload-intensive situations. Paradigms of variable automation in the sensor domain and cognitive assistant systems are intended to achieve an operationally manageable solution. This paper provides an overview of the sensor assistant system to be deployed in a MUM-T setup. To manage the sensor deployment automation functions, a machine decision-making process represented by an agent system is described. Depending on a workload state input, a suitable level of automation is chosen from a predefined set. A prototype of such an agent, with its capability to react to varied stimuli, is demonstrated in a reduced toy-problem setup.

Christian Ruf, Peter Stütz
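The selection of an automation level from a predefined set based on a workload state input, as described in the abstract above, can be sketched as a simple rule-based mapping. The level names and thresholds below are illustrative assumptions, not the paper's agent design.

```python
from enum import Enum

class SensorAutomationLevel(Enum):
    MANUAL = 1            # crew points and interprets the sensor directly
    CUED = 2              # system proposes sensor pointing, crew confirms
    MANAGED = 3           # system plans sensor tasks, crew supervises
    FULLY_AUTOMATED = 4   # system tasks all UAV sensors autonomously

def select_automation_level(workload: float) -> SensorAutomationLevel:
    """Map an estimated crew workload in [0, 1] to a level of automation.

    Thresholds are placeholders; a fielded system would derive them from
    workload assessment and mission context.
    """
    if workload < 0.3:
        return SensorAutomationLevel.MANUAL
    if workload < 0.6:
        return SensorAutomationLevel.CUED
    if workload < 0.85:
        return SensorAutomationLevel.MANAGED
    return SensorAutomationLevel.FULLY_AUTOMATED

print(select_automation_level(0.7))   # -> SensorAutomationLevel.MANAGED
```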
Using Natural Language to Enable Mission Managers to Control Multiple Heterogeneous UAVs

The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.

Anna C. Trujillo, Javier Puig-Navarro, S. Bilal Mehdi, A. Kyle McQuarry
Adaptive Interaction Criteria for Future Remotely Piloted Aircraft

There are technical trends and operational needs within the aviation domain toward adaptive behavior. This study focuses on adaptive interaction criteria for future remotely piloted aircraft: criteria that could be used to guide and evaluate design as well as to create a model for adaptive interaction used by autonomous functions and decision support. A scenario and guidelines from the literature, used as example criteria, were presented in a questionnaire to participants from academia/research, end users, and aircraft development engineers. Several guidelines had wide acceptance among the participants, but there were also aspects missing for supporting adaptive interaction for remotely piloted aircraft. That the various groups of participants contributed different aspects supports the idea of having various stakeholders contribute complementary views. Aspects the participants found missing include predictability, aviation domain specifics, risk analysis, complexity, and how people perceive autonomy and attribute intentions.

Jens Alfredson
Confidence-Based State Estimation: A Novel Tool for Test and Evaluation of Human-Systems

Test and evaluation (T&E) of complex human-in-the-loop systems has been a challenge for system developers. Traditional methods for T&E rely on questionnaires given periodically in combination with task performance measures to quantify the effectiveness of a given system. This approach is inherently obtrusive and interferes with natural system interaction. Here, we propose a method that leverages unobtrusive wearable technology to create a system for continuously assessing human state. Previous efforts at this type of assessment have often failed to generalize beyond controlled laboratory environments due to increased variability in signal quality from the wearable sensors and in human behavior. We propose a method to account for this variability using measures of confidence to create robust estimates of state capable of dynamically adapting to changes in behavior over time. We postulate that the confidence-based approach can provide high-resolution estimates of state that will augment T&E of complex systems.

Amar R. Marathe, Jonathan R. McDaniel, Stephen M. Gordon, Kaleb McDowell
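The confidence-based approach described above can be illustrated by weighting each sensor-derived estimate of operator state by a confidence score reflecting momentary signal quality, so that degraded channels contribute less. The sensor names, scales, and values below are hypothetical, offered only as a sketch of the weighting idea.

```python
def fuse_state_estimates(estimates):
    """Confidence-weighted fusion of per-sensor state estimates.

    estimates: list of (state_value, confidence) pairs, where state_value is,
    e.g., an estimated workload level in [0, 1] and confidence in [0, 1]
    reflects current signal quality.
    """
    total_conf = sum(conf for _, conf in estimates)
    if total_conf == 0:
        return None  # no trustworthy signal in this window
    return sum(value * conf for value, conf in estimates) / total_conf

# Hypothetical window: EEG channel is noisy (low confidence), heart rate is clean.
window = [
    (0.80, 0.2),   # EEG-based workload estimate, poor electrode contact
    (0.55, 0.9),   # heart-rate-variability-based estimate, good signal
    (0.60, 0.7),   # behavioral (response-time) estimate
]
print(round(fuse_state_estimates(window), 2))  # -> 0.6
```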
Human Robots Interactions: Mechanical Safety Data for Physical Contacts

In a world that relies heavily on technology, industry invests heavily in developing solutions that focus on positive interaction between people and machines and on the isolation of physical or immaterial infrastructure as a method of protection. To change the paradigm of human-machine interaction, a strategic approach must be followed: researchers must regard the machine as a co-worker, and as such it may not pose a risk to its colleagues. The challenge for designers and machine developers must therefore turn to “minimization of force” as the key to achieving machine/equipment safety. Considering these questions, and taking into account that existing data on this issue are scattered, focused on specific applications, and not easily transferable to different or more complex applications, the International Technical Committee ISO/TC 199 (Safety of machinery) decided to create a Study Group, ISO/TC 199 SG01, with the purpose of preparing an International Standard that will support the design, development, and use of machines that interact with people.

Alberto Fonseca, Claudia Pires

An Exploration of Real-World Implications for Human-Robot Interaction

Frontmatter
Droning on About Drones—Acceptance of and Perceived Barriers to Drones in Civil Usage Contexts

The word “drone” is commonly associated with the military. However, the same term is also used for multicopters that can be and are used by civilians for a multitude of purposes. Nowadays, drones are being tested for the commercial delivery of goods and for building inspections. A survey of 200 people, laypersons and active users, on their acceptance of and perceived barriers to drone use was conducted. In the present work, user requirements for civil drones in different usage scenarios with regard to appearance, routing, and autonomy were identified. User diversity strongly influences both acceptance and perceived barriers. It was found that laypeople feared the violation of their privacy, whereas active drone pilots saw more of a risk in possible accidents. Drones deployed for emergency scenarios should be clearly recognizable by their outward appearance. Also, participants had clear expectations regarding the routes drones should and should not be allowed to use.

Chantal Lidynia, Ralf Philipsen, Martina Ziefle
Factors Affecting Performance of Human-Automation Teams

Automated systems continue to increase in both complexity and capacity. As such, there is an increasing need to understand the factors that affect the performance of human-automation (H-A) teams. This high-level review examines several such factors: we discuss levels and degrees of automation, the reliability of the automated system, human trust of automation, and workload transitions in the H-A system due to off-nominal events. The influence that each of these factors has on the H-A team dynamic must be more completely understood in order to ensure that the team can perform to its maximum potential. Thorough understanding of this dynamic is especially important to ensuring that H-A teams can succeed safely and effectively in critical contexts.

Anthony L. Baker, Joseph R. Keebler
A Neurophysiological Examination of Multi-robot Control During NASA’s Extreme Environment Mission Operations Project

Previous research has explored the use of an external or “3rd person” view in the context of augmented reality, video gaming, and robot control. Few studies, however, involve the use of a mobile robot to provide that viewpoint, and fewer still do so in dynamic, unstructured, high-stress environments. This study examined the cognitive state of robot operators performing complex search and rescue tasks in a simulated crisis scenario. A solo robot control paradigm was compared with a dual condition in which an alternate (surrogate) perspective was provided via voice commands to a second robot employed as a highly autonomous teammate. Subjective and neurophysiological measurements indicate that an increased level of situational awareness was achieved in the dual condition along with a reduction in workload and decision-oriented task engagement. These results are discussed in the context of mitigation potential for cognitive overload in complex and unstructured task environments.

John G. Blitch
A Comparison of Trust Measures in Human–Robot Interaction Scenarios

When studying Human–Robot Interaction (HRI), we often employ measures of trust. Trust is essential in HRI, as inappropriate levels of trust result in misuse, abuse, or disuse of that robot. Some measures of trust specifically target automation, while others specifically target HRI. Although robots are a type of automation, it is unclear which of the broader factors that define automation are shared by robots. However, measurements of trust in automation and trust in robots should theoretically still yield similar results. We examined an HRI scenario using (1) an automation trust scale and (2) a robotic trust scale. Findings indicated conflicting results coming from these respective trust scales. It may well be that these two trust scales examine separate constructs and are therefore not interchangeable. This discord shows us that future evaluations are required to identify scale appropriate context applications for either automation or robotic operations.

Theresa T. Kessler, Cintya Larios, Tiffani Walker, Valarie Yerdon, P. A. Hancock
Human-Robot Interaction: Proximity and Speed—Slowly Back Away from the Robot!

This experiment was designed to evaluate the effects of proximity and speed of approach on trust in human-robot interaction (HRI). The experiment used a 2 (Speed) × 2 (Proximity) mixed factorial design, and trust levels were measured by self-report on the Human Robot Trust Scale and the Trust in Automation Scale. Data analyses indicate proximity [F(2, 146) = 6.842, p < 0.01, partial η2 = 0.086] and speed of approach [F(2, 146) = 2.885, p = 0.059, partial η2 = 0.038] are significant factors contributing to changes in trust levels.

Keith R. MacArthur, Kimberly Stowers, P. A. Hancock
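As a hedged illustration of the reported analysis above (a 2 × 2 mixed factorial with trust as the dependent variable), a mixed-design ANOVA can be computed with the pingouin package on a long-format data frame; the factor assignment, column names, and data below are hypothetical, not the study's dataset.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: each participant rates trust at two robot
# approach speeds (within factor); proximity is varied between participants.
df = pd.DataFrame({
    "subject":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "speed":     ["slow", "fast"] * 6,
    "proximity": ["near"] * 6 + ["far"] * 6,
    "trust":     [5.1, 3.8, 4.9, 3.5, 5.4, 4.0,
                  6.2, 5.8, 6.0, 5.6, 6.3, 5.9],
})

# 2 (speed, within) x 2 (proximity, between) mixed-design ANOVA on trust.
aov = pg.mixed_anova(data=df, dv="trust", within="speed",
                     subject="subject", between="proximity")
print(aov[["Source", "F", "p-unc", "np2"]])
```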
Human Factors Issues for the Design of a Cobotic System

We present a new approach for the design of cobotic systems. It is based on several steps of increasing complexity: activity analysis, basic design, detailed design, and realization. Particular attention is paid to human factors and human-system interactions. Different simulation levels are required to provide flexibility and adaptability.

Théo Moulières-Seban, David Bitonneau, Jean-Marc Salotti, Jean-François Thibault, Bernard Claverie
A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their accessibility to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like the ability to deploy an available fleet of UAVs to traverse a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from the constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.

Meghan Chandarana, Anna Trujillo, Kenji Shimada, B. Danette Allen
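The idea described above of composing a flight path from a library of recognized gestures can be sketched as a lookup from gesture labels to parameterized path segments that are chained into a waypoint list. The gesture names, segment geometry, and lengths below are illustrative assumptions, not the paper's gesture library or trajectory generator.

```python
import math

def segment_waypoints(gesture, start, heading_deg, length_m=50.0):
    """Translate one recognized gesture into waypoints continuing from `start`.

    Supported (hypothetical) gesture labels: 'straight', 'left_turn',
    'right_turn'. Returns the new waypoints, end position, and end heading.
    """
    x, y = start
    if gesture == "straight":
        headings = [heading_deg]
    elif gesture == "left_turn":
        headings = [heading_deg + 45, heading_deg + 90]
    elif gesture == "right_turn":
        headings = [heading_deg - 45, heading_deg - 90]
    else:
        raise ValueError(f"unknown gesture: {gesture}")

    waypoints = []
    step = length_m / len(headings)
    for h in headings:
        x += step * math.cos(math.radians(h))
        y += step * math.sin(math.radians(h))
        waypoints.append((round(x, 1), round(y, 1)))
    return waypoints, (x, y), headings[-1]

# A user signs: fly straight, turn left, fly straight.
path, pos, hdg = [], (0.0, 0.0), 0.0
for g in ["straight", "left_turn", "straight"]:
    wps, pos, hdg = segment_waypoints(g, pos, hdg)
    path.extend(wps)
print(path)   # chained waypoint list the GCS would verify with the user
```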

Optimizing Human-Systems Performance Through System Design

Frontmatter
An Analysis of Displays for Probabilistic Robotic Mission Verification Results

An approach for the verification of autonomous behavior-based robotic missions has been developed in a collaborative effort between Fordham University and Georgia Tech. This paper addresses the step after verification: how to present this information to users. The verification of robotic missions is inherently probabilistic, opening the possibility of misinterpretation by operators. A human study was performed to test three different displays (numeric, graphic, and symbolic) for summarizing the verification results. The displays varied by format and specificity. Participants made decisions about high-risk robotic missions using a prototype interface. Consistent with previous work, the type of display had no effect. The displays did not reduce the time participants took compared to a control group with no summary, but did improve the accuracy of their decisions. Participants showed a strong preference for more specific data, making heavy use of the full verification results. Based on these results, a different display paradigm is suggested.

Matthew O’Brien, Ronald Arkin
A Neurophysiological Assessment of Multi-robot Control During NASA’s Pavilion Lake Research Project

A number of previous studies have explored the value of an external or “3rd person” view in the realm of video gaming and augmented reality. Few studies, however, actually utilize a mobile robot to provide that viewpoint, and fewer still do so in dynamic, unstructured environments. This study examined the cognitive state of robot operators performing complex survey and sample collection tasks in support of a time-sensitive, high-profile science expedition. A solo robot control paradigm was compared with a dual condition in which an alternate (surrogate) perspective was provided via voice commands to a second robot employed as a highly autonomous teammate. Subjective and neurophysiological measurements indicate that an increased level of situational awareness was achieved in the dual condition along with a reduction in decision-oriented task engagement. These results are discussed in the context of mitigation potential for cognitive overload and automation-induced complacency in complex and unstructured task environments.

John G. Blitch
A Method for Neighborhood Gesture Learning Based on Resistance Distance

Multimodal forms of human-robot interaction (HRI), including non-verbal forms, promise easily adopted and intuitive use models for assistive devices. The research described in this paper targets an assistive robotic appliance that learns a user’s gestures for activities performed in a healthcare or aging-in-place setting. The proposed approach uses the Growing Neural Gas (GNG) algorithm in combination with the Q-learning paradigm of reinforcement learning to shape robotic motions over time. Neighborhoods of nodes in the GNG network are combined to collectively leverage past learning by the group. Connections between nodes are assigned weights based on frequency of use, which can be viewed as measures of electrical resistance. In this way, the GNG network may be traversed based on distances computed in the same manner as resistance in an electrical circuit. It is shown that this distance metric provides faster convergence of the algorithm compared to shortest-path neighborhood learning.

Paul M. Yanik, Anthony L. Threatt, Jessica Merino, Joe Manganelli, Johnell O. Brooks, Keith E. Green, Ian D. Walker
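The resistance-distance metric named in the abstract above is a standard graph quantity: for a weighted undirected graph with edge weights interpreted as conductances (here, frequency of use), it can be computed from the Moore-Penrose pseudoinverse G of the graph Laplacian as R_ij = G_ii + G_jj - 2*G_ij. The small network below is a hypothetical neighborhood of GNG nodes, not the authors' data.

```python
import numpy as np

def resistance_distance(weights):
    """Pairwise resistance distances for a weighted undirected graph.

    weights: symmetric (n, n) matrix of edge weights (frequency of use,
    interpreted as electrical conductance; 0 means no edge).
    """
    degrees = np.diag(weights.sum(axis=1))
    laplacian = degrees - weights
    gamma = np.linalg.pinv(laplacian)          # Moore-Penrose pseudoinverse
    diag = np.diag(gamma)
    return diag[:, None] + diag[None, :] - 2 * gamma

# Hypothetical neighborhood of four GNG nodes; heavier use = lower resistance.
W = np.array([
    [0, 5, 1, 0],
    [5, 0, 2, 0],
    [1, 2, 0, 3],
    [0, 0, 3, 0],
], dtype=float)
R = resistance_distance(W)
print(np.round(R, 3))   # e.g., R[0, 1] is small because the 0-1 edge is used often
```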
Metadata
Title
Advances in Human Factors in Robots and Unmanned Systems
Edited by
Pamela Savage-Knepshield
Jessie Chen
Copyright Year
2017
Electronic ISBN
978-3-319-41959-6
Print ISBN
978-3-319-41958-9
DOI
https://doi.org/10.1007/978-3-319-41959-6