
2011 | Book

Mixed Reality and Human-Robot Interaction


About this book

Mixed Reality (MR) technologies play an increasing role in many aspects of human-robot interaction. The visual combination of digital content with real working spaces creates a simulated environment designed to enhance these interactions. This book presents and discusses fundamental scientific issues, technical implementations, lab testing, and industrial applications and case studies of Mixed Reality in Human-Robot Interaction. It is a reference book that not only defines and frames the use of Mixed Reality in Human-Robot Interaction, but also addresses upcoming trends and emerging directions of the field.

This volume offers a comprehensive reference to the state of the art in MR for Human-Robot Interaction, with contributions from leading researchers and experts in multiple disciplines across academia and industry. All authors are experts or top researchers in their respective areas, and each chapter has been rigorously reviewed for intellectual content by the editorial team to ensure high quality. The book provides up-to-date insight into current research topics in this field, as well as the latest technological advancements and the best working examples.

Table of Contents

Frontmatter
What Is Mixed Reality, Anyway? Considering the Boundaries of Mixed Reality in the Context of Robots
Abstract
Mixed reality, as an approach in human-computer interaction, is often implicitly tied to particular implementation techniques (e.g., see-through devices) and modalities (e.g., visual, graphical displays). In this chapter we attempt to clarify the definition of mixed reality as a more abstract concept of combining the real and virtual worlds: mixed reality is not a given technology but a concept that considers how the virtual and real worlds can be combined. Further, we use this discussion to posit robots as mixed-reality devices, and present a set of implications and open questions for mixed-reality interaction with robots.
J. Young, E. Sharlin, T. Igarashi
User-Centered HRI: HRI Research Methodology for Designers
Abstract
This chapter introduces the field of user-centered HRI, which differs from the prevailing technology-driven approach in HRI research, an approach that emphasizes the technological improvement of robots. It proposes a basic framework for user-centered HRI research comprising three elements: “aesthetic”, “operational”, and “social” contextuability. The framework is intended for robot product designers seeking to incorporate user perspectives and needs, and is meant to allow easy identification and efficient study of issues in user-centered HRI design. The case studies introduced in this chapter, all based on this framework, will facilitate understanding of user-centered HRI and create new research opportunities for designers, non-experts, and robot engineers.
M. Kim, K. Oh, J. Choi, J. Jung, Y. Kim
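
Purely as an illustration of how the framework's three contextuability elements might be operationalized, here is a minimal sketch; the dataclass, the rating scale, and the scores are hypothetical, not the chapter's instrument:

```python
from dataclasses import dataclass

@dataclass
class ContextuabilityScores:
    """Hypothetical per-element ratings (1-5) for one candidate robot design."""
    aesthetic: float     # form, appearance, expressive quality
    operational: float   # usability of controls and tasks
    social: float        # fit with social norms and user relationships

    def weakest(self) -> str:
        """Name the element most in need of design attention."""
        scores = {"aesthetic": self.aesthetic,
                  "operational": self.operational,
                  "social": self.social}
        return min(scores, key=scores.get)

design = ContextuabilityScores(aesthetic=4.1, operational=3.2, social=3.8)
print(f"focus next iteration on: {design.weakest()}")  # -> operational
```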
Mental Transformations in Human-Robot Interaction
Abstract
Human-robot interfaces can be challenging and tiresome because of misalignments in the control and view relationships. The human user must mentally transform (e.g., rotate or translate) desired robot actions into the required inputs at the interface. These mental transformations can increase task difficulty and decrease task performance. This chapter discusses how to improve task performance by reducing the mental transformations in a human-robot interface. It presents a mathematical framework, reviews relevant background, analyzes both single- and multiple-camera-display interfaces, and describes the implementation of a mentally efficient interface.
B. P. DeJong, J. E. Colgate, M. A. Peshkin
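
As a concrete, hypothetical illustration of the kind of misalignment the chapter analyzes: assume the camera viewing the robot is yawed relative to the robot's world frame, and let the interface, rather than the operator, perform the compensating rotation. The sign convention and the 90-degree example below are my assumptions, not the chapter's framework:

```python
import numpy as np

def rotation_2d(theta: float) -> np.ndarray:
    """Planar rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def align_command(view_input: np.ndarray, camera_yaw: float) -> np.ndarray:
    """Rotate an operator command given in the camera's view frame back into
    the robot's world frame, so the interface absorbs the mental rotation
    the operator would otherwise have to perform."""
    return rotation_2d(camera_yaw) @ view_input

# Example: the camera views the workspace rotated 90 degrees from the robot
# frame; pushing the joystick "up" on screen becomes the correct world motion.
print(align_command(np.array([0.0, 1.0]), camera_yaw=np.pi / 2))
# -> approximately [-1, 0] under this sign convention
```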
Computational Cognitive Modeling of Human-Robot Interaction Using a GOMS Methodology
Abstract
The goal of this study was to use computational cognitive modeling to further understand human behavior and strategy in robotic rover control. To this end, GOMS (Goals, Operators, Methods, Selection Rules) Language models of rover control were constructed based on a task analysis and observations during human rover control trials. For the first model, we hypothesized that control would be characterized by actions to prevent deviations from exact path following. The second model was developed based on an alternate hypothesis that operators commanded ballistic rover movements to approximate path direction. In manual trials, an operator was required to navigate a commercially available micro-rover along a defined path using a computer interface (providing remote environment information through a camera view) located in a room separate from the rover. The computational cognitive model was executed with a pseudo system interface (Java device) in real-time. Time-to-navigation completion and path tracking accuracy were recorded during the human and cognitive model trials with navigation circumstances being identical. Comparison of the GOMSL model outputs with human performance demonstrated the first model to be more precise than actual human control, but at the cost of time. The second model with the new navigation criteria appeared to be more plausible for representing operator behavior; however, model navigation times were consistently longer than the human's. This was attributable to limitations of the modeling approach in representing human parallel processing and continuous control. Computational GOMS modeling approaches appear to have potential for describing interactive closed-loop rover control with continuous monitoring of feedback and corresponding control actions. Humans exhibit satisficing behavior in terms of rover direction and movement control versus minimizing errors from optimal navigation performance. Certain GOMSL modeling issues remain for human-robot interaction applications, and this research provides a first empirical insight into them.
D. B. Kaber, S. H. Kim, X. Wang
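
To make the GOMS vocabulary concrete, the sketch below pairs timed operators with two methods mirroring the chapter's two hypothesized strategies, plus a selection rule. All operator timings and the tolerance threshold are invented for illustration; they are not the chapter's GOMSL parameters:

```python
OPERATOR_TIME = {        # hypothetical per-operator execution times (seconds)
    "perceive_camera_view": 0.3,
    "judge_deviation": 0.5,
    "press_arrow_key": 0.2,
    "issue_ballistic_move": 0.4,
}

METHODS = {
    # Model 1: tight closed-loop path following, correcting every deviation.
    "exact_path_following": ["perceive_camera_view", "judge_deviation",
                             "press_arrow_key"],
    # Model 2: coarse ballistic moves that only approximate the path direction.
    "ballistic_approximation": ["perceive_camera_view", "issue_ballistic_move"],
}

def select_method(path_deviation: float, tolerance: float = 0.15) -> str:
    """Selection rule: correct precisely only when deviation exceeds tolerance."""
    return ("exact_path_following" if path_deviation > tolerance
            else "ballistic_approximation")

def method_duration(method: str) -> float:
    """Serial (no parallelism) time estimate for one method execution."""
    return sum(OPERATOR_TIME[op] for op in METHODS[method])

for dev in (0.05, 0.30):
    m = select_method(dev)
    print(f"deviation={dev:.2f} -> {m}, est. cycle time {method_duration(m):.1f}s")
```

Note that summing operator times serially reproduces the limitation the abstract mentions: such models cannot capture the parallel processing and continuous control that let human operators finish faster.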
A Mixed Reality Based Teleoperation Interface for Mobile Robot
Abstract
The human-robot interface system is key to extending the range of applications for next-generation robot systems. Conventional interfaces for robots have been widely used to perform tasks such as reconnaissance, surveillance, and target acquisition. However, they are often too complex and difficult to use for operators who lack sufficient situation awareness, and constructing mental models of remote environments is known to be difficult for human operators. This chapter proposes a mixed reality interface for remote robots that uses both real and virtual data acquired by a mobile robot equipped with an omnidirectional camera and a laser scanner. The MR interface can enhance the conventional visual interface for remote robot teleoperation by combining the real environment and virtual information on a single display, in order to improve situation awareness, facilitate understanding of the surrounding environment, and support prediction of future status. A computational model that describes the triangular relationship among the mobile robot, the operator, and the intelligent environment is also discussed.
X. Wang, J. Zhu
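
A minimal sketch of the overlay idea: project laser scan points into the camera image so virtual range data and real video share one display. It assumes an ordinary pinhole camera rather than the chapter's omnidirectional system, and the intrinsics, mounting height, and scan values are all invented:

```python
import numpy as np

FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0   # assumed camera intrinsics
CAM_HEIGHT = 0.4                              # metres above the scan plane

def scan_to_pixels(angles: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Convert laser hits (angle, range) in the camera frame to pixel (u, v)."""
    x = ranges * np.sin(angles)          # lateral offset
    z = ranges * np.cos(angles)          # depth along the optical axis
    y = np.full_like(x, CAM_HEIGHT)      # scan plane sits below the camera
    u = FX * x / z + CX                  # pinhole projection
    v = FY * y / z + CY
    return np.stack([u, v], axis=1)

angles = np.linspace(-0.5, 0.5, 5)       # a few beams around straight ahead
ranges = np.array([2.0, 2.2, 2.5, 2.2, 2.0])
print(scan_to_pixels(angles, ranges))    # pixels where range hits overlay the video
```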
Evaluating the Usability of Virtual Environment by Employing Affective Measures
Abstract
In this chapter, a new approach to evaluating the performance and design quality of virtual environments, based on exploring affective states and cues, is proposed. Five individual experiments were performed to analyse the effect of the proposed affective computing approach on design schemes. The results show that by incorporating users' emotional states into the design paradigm, better results can be achieved.
I. M. Rezazadeh, M. Firoozabadi, X. Wang
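
As a toy illustration of comparing designs by an affective measure (the sensor, its scale, and all readings below are invented; this is not the chapter's protocol):

```python
import statistics

SENSOR_MAX = 100.0   # assumed full-scale reading of a hypothetical stress sensor

def affective_index(samples: list[float]) -> float:
    """Mean stress level over a trial, normalized to [0, 1]; lower is better."""
    return statistics.mean(samples) / SENSOR_MAX

design_a = [42.0, 55.0, 61.0, 58.0]   # made-up readings while using design A
design_b = [23.0, 30.0, 28.0, 26.0]   # made-up readings while using design B

better = "A" if affective_index(design_a) < affective_index(design_b) else "B"
print(f"A={affective_index(design_a):.2f}  B={affective_index(design_b):.2f}  "
      f"-> design {better} induces less stress")
```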
Security Robot Simulator
Abstract
Building intelligent behaviors is an important aspect of developing a robot for use in security monitoring services. Simulating and testing the robot's behavior in a virtual environment before producing the robot and conducting practical experiments can greatly reduce the cost and duration of the testing process. This research proposes a framework for the simulation of security robots, called the security robot simulator (SRS), which aims to provide developers with a fully inclusive simulation environment, from fundamental physics behaviors to high-level robot scenarios. Human simulation is also integrated into the robot simulator to simulate interactions between the security robot and human personnel. The simulator was implemented on Microsoft Robotics Developer Studio (MSRDS), a service-oriented robotics platform, and is composed of a simulation core and four decentralized modules: scenario event, patrol planner, robot unit, and civilian modules. The results show that the four modules fulfill the requirements of a security robot.
W. H. Hung, P. Liu, S. C. Kang
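
To illustrate the four-module decomposition, here is a rough sketch only: the real SRS runs these as decoupled MSRDS services, whereas below a shared queue stands in for the service messaging and every behavior is a placeholder:

```python
from queue import Queue

bus: Queue = Queue()   # stand-in for the platform's service messaging

def scenario_event_module():
    """Inject a security event into the simulation."""
    bus.put(("event", "intruder at zone 3"))

def patrol_planner_module():
    """Turn events into navigation goals for the robot unit."""
    kind, payload = bus.get()
    if kind == "event":
        bus.put(("goal", "zone 3"))

def robot_unit_module():
    """Execute navigation goals (placeholder for physics-level simulation)."""
    kind, goal = bus.get()
    if kind == "goal":
        print(f"robot: navigating to {goal}")

def civilian_module():
    """Civilians move independently and may trigger further events."""
    print("civilian: walking corridor A")

for step in (scenario_event_module, patrol_planner_module,
             robot_unit_module, civilian_module):
    step()
```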
Companion Migration – Initial Participants’ Feedback from a Video-Based Prototyping Study
Abstract
This chapter presents findings from a user study that investigated users' perceptions and acceptance of a Companion, with an associated 'personality', which migrated between different embodiments (i.e. avatar and robot) to accomplish its tasks. Various issues are discussed, such as the Companion's migration decisions, retention of the Companion's identity across embodiments, personalisation of the Companion, and users' privacy and control over the technology. Authorisation guidelines for Companions, covering migration, access to an embodiment, and the data stored in the embodiment, are proposed and discussed for the future design of migrating Companions.
K. L. Koay, D. S. Syrdal, K. Dautenhahn, K. Arent, Ł. Małek, B. Kreczmer
Backmatter
Metadata
Title
Mixed Reality and Human-Robot Interaction
Editor
Xiangyu Wang
Copyright Year
2011
Publisher
Springer Netherlands
Electronic ISBN
978-94-007-0582-1
Print ISBN
978-94-007-0581-4
DOI
https://doi.org/10.1007/978-94-007-0582-1