
About this book

This book explores how Artificial Intelligence (AI), by increasing the autonomy of machines and robots, creates expanded but uncertain opportunities for humans, machines, and robots to shape society. To help readers better understand the relationships among AI, autonomy, humans, and machines, and how those relationships can help society reduce human error in the use of advanced technologies (e.g., airplanes, trains, cars), this edited volume presents a wide selection of the underlying theories, computational models, experimental methods, and field applications. While other literature treats these topics individually, this book unifies the fields of autonomy and AI, framing them in the broader context of effective integration for human-autonomous machine and robotic systems.

The contributions, written by world-class researchers and scientists, elaborate on key research topics at the heart of effective human-machine-robot systems integration. These topics include, for example, computational support for intelligence analyses; the challenge of verifying today's and future autonomous systems; comparisons between today's machines and autistic reasoning; implications of human information interaction for artificial intelligence and errors; systems that reason; the autonomy of machines, robots, and buildings; and hybrid teams, where "hybrid" denotes arbitrary combinations of humans, machines, and robots.

The contributors span the field of autonomous systems research, ranging from industry and academia to government. Given the broad diversity of the research in this book, the editors strove to thoroughly examine the challenges and trends of systems that implement and exhibit AI; the social implications of present and future autonomous systems; AI systems that seek to develop trusted relationships among humans, machines, and robots; and the effective human-systems integration needed for trust in these new systems and their applications to grow and be sustained.



Chapter 1. Introduction

Two Association for the Advancement of Artificial Intelligence (AAAI) symposia, organized and held at Stanford in 2015 and 2016, are reviewed separately. After the second of these symposia, the conference organizers solicited chapters from symposium participants and from the wider community, framed by the themes of the two symposia. In this introduction, we briefly review the two symposia and then individually introduce the contributing chapters that follow.
W. F. Lawless, Ranjeev Mittu, Stephen Russell, Donald Sofge

Chapter 2. Reexamining Computational Support for Intelligence Analysis: A Functional Design for a Future Capability

We explore the technological bases for combining argumentation with information-fusion techniques to improve intelligence analyses. We review various tools, framed by several examples of modern intelligence analyses drawn from different environments. Current tools fail to support the computational associations needed to fuse relations among entities into an integrated situational picture. Most tools draw entity streams from a single source, automatically linking analyses between bounded entity pairs and enabling some level of "data fusion", but with limited rigor. These tools often accept pre-processed entity extractions as correct; they can identify intuitive associations among entities, but mostly as if uncertainty did not exist. Because they attempt to discover relations among entities while admitting little uncertainty and few entity associations, the remaining complexities are left to human analysts to resolve. This situation cognitively overloads analysts, who must manually assemble selected situational interpretations into a comprehensive narrative. Our goal is to automate the integration of complex hypotheses. We review the literature on computational support for argumentation and, as part of a combined approach toward an integrated functional design, we nominate a unique belief- and story-based subsystem designed to support hybrid argumentation. To deal with the largely textual data underlying these intelligence analyses, we describe how a "hard plus soft" information fusion system previously developed by the authors (combining sensor/hard and textual/soft information) could be incorporated into the functional design. Combining these two capabilities yields a scheme that arguably overcomes many of the deficiencies we cite and offers considerable improvement in the efficiency and effectiveness of intelligence analyses.
James Llinas, Galina Rogova, Kevin Barry, Rachel Hingst, Peter Gerken, Alicia Ruvinsky

Chapter 3. Task Allocation Using Parallelized Clustering and Auctioning Algorithms for Heterogeneous Robotic Swarms Operating on a Cloud Network

In this chapter, a novel centralized robotic swarm of heterogeneous unmanned vehicles, consisting of autonomous surface vehicles and micro-aerial vehicles, is presented. The swarm robots operate in an outdoor environment and are equipped with cameras and Global Positioning Systems (GPS). Manipulations of the swarm demonstrate how aspects of individual robotic platforms can be controlled cooperatively to accomplish a group task efficiently. We demonstrate the use of air-based robots to build a map of important features of the local environment, such as the locations of targets. The map is then sent to a cloud-based cluster on a remote network. The cloud runs clustering algorithms on the map to compute optimal clusters of the targets, then runs an auctioning algorithm to assign the clusters to the surface-based robots based on several factors, such as relative position and capacities. The surface robots then travel to their assigned clusters to complete the allocated tasks. Lastly, we present the results of simulating our cooperative swarm in both software and hardware, demonstrating the effectiveness of our proposed algorithm.
Jonathan Lwowski, Patrick Benavidez, John J. Prevost, Mo Jamshidi
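The two-stage pipeline this abstract describes (cluster the mapped targets, then auction the clusters to surface vehicles) can be sketched in a few lines. The following is a minimal illustration and not the authors' implementation: it uses plain k-means with a farthest-point initialization and a single-round greedy auction in which each cluster goes to the nearest still-unassigned robot; the names, the bid rule, and the initialization scheme are all assumptions.

```python
import math

def kmeans(points, k, iters=20):
    """Plain k-means over 2-D points with farthest-point initialization."""
    centroids = [points[0]]
    while len(centroids) < k:  # seed centroids far apart for stable results
        centroids.append(max(points,
                             key=lambda p: min(math.dist(p, c) for c in centroids)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:  # assign each target to its nearest centroid
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            groups[nearest].append(p)
        centroids = [  # move each centroid to the mean of its group
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups

def auction(robots, centroids):
    """Single-round greedy auction: each cluster is won by the closest
    robot not yet assigned (bid = negative travel distance).
    Assumes len(robots) >= len(centroids)."""
    assignment, free = {}, set(robots)
    for ci, c in enumerate(centroids):
        winner = min(free, key=lambda name: math.dist(robots[name], c))
        assignment[winner] = ci
        free.discard(winner)
    return assignment
```

For example, with two surface robots at (0, 0) and (10, 10) and six targets split between those corners, the auction assigns each robot the cluster nearest to it.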

Chapter 4. Human Information Interaction, Artificial Intelligence, and Errors

In a time of pervasive and increasingly transparent computing, humans will interact more with information objects, and less with the computing devices that define them. Artificial Intelligence (AI) will be the proxy for humans’ interaction with information. Because interaction creates opportunities for error, the trend towards AI-augmented human information interaction (HII) will mandate an increased emphasis on cognition-oriented information science research, and new ways of thinking about errors and error handling. In this chapter, a review of HII and its relationship to AI is presented, with a focus on errors in this context.
Stephen Russell, Ira S. Moskowitz, Adrienne Raglin

Chapter 5. Verification Challenges for Autonomous Systems

In this chapter, some research challenges in the verification of autonomous systems are outlined. The objective is to identify existing verification tools and their gaps, additional challenges for which no tools yet exist, and directions in which progress may profitably be made. The chapter briefly touches on existing research that begins to address these problems, but there are more unexplored research challenges than there are programs underway to explore them. It concludes with an enumeration of those unexplored challenges.
Signe A. Redfield, Mae L. Seto

Chapter 6. Conceptualizing Overtrust in Robots: Why Do People Trust a Robot That Previously Failed?

In this chapter, we present work suggesting that people tend to be overly trusting and overly forgiving of robots in certain situations. In keeping with the theme of this book, where intelligent systems help humans recover from errors, our work so far has focused on robots as guides in emergency situations. Our experiments show that, at best, human participants in our simulated emergencies focus on guidance provided by robots, regardless of a robot's prior performance or other guidance information, and, at worst, believe that the robot is more capable than other sources of information. Even when the robots do break trust, a properly timed statement can convince a participant to continue following the robot. Based on this evidence, we have conceptualized overtrust of robots using our previous framework of situational trust. We define two mechanisms by which people can overtrust robots: misjudging the abilities or intentions of the robot, and misjudging the risk in the scenario. We discuss our prior work in light of this reconceptualization to explain our previous results and to encourage future work.
Paul Robinette, Ayanna Howard, Alan R. Wagner

Chapter 7. Research Considerations and Tools for Evaluating Human-Automation Interaction with Future Unmanned Systems

Advances in automation will soon enable a single operator to supervise multiple unmanned aerial vehicles. Successfully implementing this new supervisory control paradigm requires not only improvements in automation capability and reliability, but also an understanding of the human performance issues associated with concurrent management of several automated systems. Research in this area has generally focused on topics such as trust, reliability, and levels of automation. The goal of automating systems is generally to minimize the human's need to directly interact with the system; despite this objective, the majority of current supervisory control research emphasizes situations in which the human must frequently interact with the automation. This is typically done to provide researchers with a clear means of assessing human performance, but it ultimately limits the generalizability of the research, since it applies only to a limited mission context. The current chapter discusses a model for assessing human-automation interaction that emphasizes not only the traditional outcome-based measures of performance (e.g., speed and accuracy) but also measures of operator state. Such measures include those obtained from subjective workload and fatigue probes, situation awareness (SA) probes, and continuous measures from eye tracking systems. The chapter closes by discussing a new testbed developed by the authors that enables the assessment of human-automation interaction across a broad range of mission contexts.
Ciara Sibley, Joseph Coyne, Sarah Sherwood

Chapter 8. Robots Autonomy: Some Technical Issues

Imbuing a robot with decisional autonomy raises many technical issues that must be dealt with in order to let robots out of the labs. In the framework of authority shared between the robot itself and a human being, what is at stake is mitigating the weaknesses of both agents (i.e., the robot and the human) through appropriate control handovers. This chapter focuses on the technical issues of autonomy, on some human weak spots when interacting with a robot, and on human-robot interaction issues. Moreover, a section focuses on some of the ethical challenges raised by autonomy. The conclusion shows that many questions have yet to be answered through research and technical work to ensure that robots and autonomous machines will benefit society.
Catherine Tessier

Chapter 9. How Children with Autism and Machines Learn to Interact

We explore how children with autism (CwA) learn to interact and the kinds of difficulties they experience. Autistic reasoning is a useful lens for exploring team formation because, on one hand, it is rather simple compared to the reasoning of controls and of software systems and, on the other, it allows exploration of human behavior in real-world environments. We find that reasoning about the mental world, impaired to varying degrees in autistic patients, is the key parameter limiting the capability to form teams and cooperate. Whereas teams of humans, robots, and software agents face manifold limitations on team formation, including resources, conflicting desires, uncertainty, and environmental constraints, children with autism face a single limitation: reduced reasoning about the mental world. We correlate the complexity of the mental-state expressions that children are capable of operating with their ability to form teams. A reasoning-rehabilitation methodology is described, along with its implications for children's real-world behavior involving interaction, cooperation, and team formation.
Boris A. Galitsky, Anna Parnis

Chapter 10. Semantic Vector Spaces for Broadening Consideration of Consequences

Reasoning systems with too simple a model of the world and human intent are unable to consider potential negative side effects of their actions and modify their plans to avoid them (e.g., avoiding potential errors). However, hand-encoding the enormous and subtle body of facts that constitutes common sense into a knowledge base has proved too difficult despite decades of work. Distributed semantic vector spaces learned from large text corpora, on the other hand, can learn representations that capture shades of meaning of common-sense concepts and perform analogical and associational reasoning in ways that knowledge bases are too rigid to perform, by encoding concepts and the relations between them as geometric structures. These have, however, the disadvantage of being unreliable, poorly understood, and biased in their view of the world by the source material. This chapter will discuss how these approaches may be brought together in a way that combines the best properties of each for understanding the world and human intentions in a richer way.
Douglas Summers-Stay
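The analogical reasoning this abstract attributes to semantic vector spaces rests on their geometry: relations between concepts appear as offsets between their vectors. A toy sketch follows; the four-dimensional "embeddings" below are hand-picked purely for illustration, whereas real spaces are learned from large corpora and carry hundreds of dimensions.

```python
import math

# Hand-picked toy vectors: dimensions loosely encode
# (royalty, male, female, person-ness) for illustration only.
vec = {
    "king":  [0.9, 0.8, 0.1, 0.7],
    "queen": [0.9, 0.1, 0.8, 0.7],
    "man":   [0.1, 0.9, 0.1, 0.2],
    "woman": [0.1, 0.1, 0.9, 0.2],
    "apple": [0.0, 0.1, 0.1, 0.0],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def analogy(a, b, c):
    """Solve a : b :: c : ? by the offset method:
    nearest neighbour to vec[b] - vec[a] + vec[c]."""
    target = [vb - va + vc for va, vb, vc in zip(vec[a], vec[b], vec[c])]
    candidates = set(vec) - {a, b, c}
    return max(candidates, key=lambda w: cosine(vec[w], target))
```

With these toy vectors, `analogy("man", "king", "woman")` recovers "queen": subtracting the "man" offset and adding the "woman" offset moves the point into the neighbourhood of the female-royalty vector.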

Chapter 11. On the Road to Autonomy: Evaluating and Optimizing Hybrid Team Dynamics

This chapter explores the potential of neuroscience methods to supplement traditional psychometric approaches by unobtrusively measuring team dynamics and quantifying a team's cognitive and emotional state in real time. A novel teaming platform is reviewed, and a number of studies conducted with this emerging technology are presented (e.g., monitoring team neurodynamics, leadership emergence, neural responses to narrative storytelling, tutoring processes, and surgical skills training). Lastly, the implications of using this new technology in team studies and possible directions for future work are reviewed.
Chris Berka, Maja Stikic

Chapter 12. Cybersecurity and Optimization in Smart “Autonomous” Buildings

Significant resources have been invested in making buildings "smart" by digitizing, networking, and automating key systems and operations. Smart autonomous buildings create new energy-efficiency, economic, and environmental opportunities. But as buildings become increasingly networked to the Internet, they can also become more vulnerable to various cyber threats. Automated and Internet-connected building systems, equipment, controls, and sensors can significantly increase the cyber and physical vulnerabilities that threaten the confidentiality, integrity, and availability of critical systems in organizations. Securing smart autonomous buildings presents a national security and economic challenge. Ignoring this challenge threatens business continuity and the availability of the critical infrastructures that smart buildings enable. In this chapter, the authors address challenges and explore new opportunities in securing smart buildings that are enhanced by machine learning, cognitive sensing, artificial intelligence (AI), and smart-energy technologies. The chapter begins by identifying cyber threats and challenges to smart autonomous buildings. It then provides recommendations on how AI-enabled solutions can help smart buildings and facilities better protect against, detect, and respond to cyber-physical threats and vulnerabilities. Next, it provides case studies examining how combining AI with innovative smart-energy technologies can increase both cybersecurity and energy-efficiency savings in buildings. The chapter concludes by proposing directions for future research on cybersecurity and energy optimization with AI-enabled smart-energy technology.
Michael Mylrea, Sri Nikhil Gupta Gourisetti

Chapter 13. Evaluations: Autonomy and Artificial Intelligence: A Threat or Savior?

We first review and evaluate our own research presented at AAAI-2015 (computational autonomy) and AAAI-2016 (reducing human errors). Then we evaluate each of the other contributed chapters on their own terms, as they more or less mesh with the two parts of this book. To begin, after recent successes with Artificial Intelligence (AI; e.g., driverless cars), claims have surfaced that autonomous robots in society may one day threaten human existence. These extraordinary claims follow earlier ones: by macroeconomists in 2003, that their economic tools would prevent future financial collapses; and by social scientists in 2015, that humans can learn the skills to forecast political winners. We now know that these claims were overstated, wishful, or fanciful. Theoretically, how can AI threaten human society when the key characteristic of social interaction, interdependence, has not been satisfactorily modeled with AI or economics, or when interdependence "invalidates" experimental social science? We have found that the incorrect understanding of interdependence (mutual information) by traditional scientists precludes successful social predictions, and that its mystery has led economists to believe that aggregated social preferences are meaningless. If AI cannot model interdependence satisfactorily, AI robots may be able to communicate among themselves or with humans, or defeat humans at board games, but AI will not be the wellspring of innovation that interdependence has been for humans, nor will AI ever be able to model human interaction effectively. Thus, we need to know whether interdependence can be modeled satisfactorily for robots, what limits interdependence entails, and what forecasts interdependence permits. In contrast to traditional perspectives, with our model we have concluded that interdependence is a resource used by humans to innovate and to solve intractable problems.
Finally, we evaluate our two themes in this chapter and the chapters in this book to offer a path forward for research in AI. In the first section of this chapter (13.1), we discuss the use of AI in the development of autonomy for individual machines, robots and hybrid teams that include humans in states of social interdependence; then, on their own terms, we evaluate the other chapters associated with autonomy. In the second section (13.2), we discuss the use of AI in reducing, preventing or mitigating human error in society; this second section is followed by an evaluation of the remaining chapters, again, on their own terms.
W. F. Lawless, Donald A. Sofge
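The chapter's identification of interdependence with mutual information can be made concrete: mutual information I(X;Y) measures how far the joint distribution of two agents' actions departs from independence. The sketch below is a minimal illustration, not the authors' model; the two example pmfs are invented for the demonstration.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits for a joint pmf given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():  # marginalize to p(x) and p(y)
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two agents acting independently share no information ...
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
# ... while two perfectly coordinated agents share one full bit.
coordinated = {(0, 0): 0.5, (1, 1): 0.5}
```

Here `mutual_information(independent)` is 0 bits and `mutual_information(coordinated)` is 1 bit, quantifying the degree of coordination between the two agents.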