
2021 | Book

Systems Engineering and Artificial Intelligence

Editors: William F. Lawless, Ranjeev Mittu, Donald A. Sofge, Thomas Shortell, Thomas A. McDermott

Publisher: Springer International Publishing


About this book

This book provides a broad overview of the benefits of a Systems Engineering design philosophy for architecting complex systems composed of artificial intelligence (AI), machine learning (ML), and humans situated in chaotic environments. The major topics include emergence; verification and validation of systems using AI/ML; and human systems integration to develop robust and effective human-machine teams, where the machines may have varying degrees of autonomy due to the sophistication of their embedded AI/ML. The chapters not only describe what has been learned, but also raise questions that must be answered to further advance the general Science of Autonomy.

The science of how humans and machines operate as a team requires insights from disciplines such as the social sciences, national and international jurisprudence, ethics and policy, and sociology and psychology, among others. The social sciences inform how context is constructed, how trust is affected when humans and machines depend upon each other, and how human-machine teams need a shared language of explanation. National and international jurisprudence determines legal responsibilities for non-trivial human-machine failures, ethical standards shape global policy, and sociology provides a basis for understanding team norms across cultures. Insights from psychology may help us to understand the negative impact on humans if AI/ML based machines begin to outperform their human teammates and consequently diminish their value or importance. This book invites professionals and the curious alike to witness a new frontier open as the Science of Autonomy emerges.

Table of Contents

Frontmatter
Chapter 1. Introduction to “Systems Engineering and Artificial Intelligence” and the Chapters
Abstract
In this introductory chapter, we first review the science behind the two Association for the Advancement of Artificial Intelligence (AAAI) Symposia that we held in 2020 (“AI welcomes Systems Engineering. Towards the science of interdependence for autonomous human-machine teams”). Second, we provide a brief introduction to each of the chapters in this book.
William F. Lawless, Ranjeev Mittu, Donald A. Sofge, Thomas Shortell, Thomas A. McDermott
Chapter 2. Recognizing Artificial Intelligence: The Key to Unlocking Human AI Teams
Abstract
This chapter covers work and corresponding insights gained while building an artificially intelligent coworker, named Charlie. Over the past year, Charlie first participated in a panel discussion and then advanced to speak during multiple podcast interviews, contribute to a rap battle, catalyze a brainstorming workshop, and even write collaboratively (see the author list above). To explore the concepts and overcome the challenges of engineering human–AI teams, Charlie was built on cutting-edge language models, a strong sense of embodiment, deep learning speech synthesis, and powerful visuals. However, the real differentiator in our approach is that of recognizing artificial intelligence (AI). The act of “recognizing” Charlie can be seen when we give her a voice and expect her to be heard, in a way that shows we acknowledge and appreciate her contributions; and when our repeated interactions create a comfortable awareness between her and her teammates. In this chapter, we present our approach to recognizing AI, discuss our goals, and describe how we developed Charlie’s capabilities. We also present some initial results from an innovative brainstorming workshop in which Charlie participated with four humans, which showed that she could not only take part in a brainstorming exercise but also contribute to and influence the brainstorming discussion across a space of ideas. Furthermore, Charlie helped us formulate ideas for, and even wrote sections of, this chapter.
Patrick Cummings, Nathan Schurr, Andrew Naber, Charlie, Daniel Serfaty
Chapter 3. Artificial Intelligence and Future of Systems Engineering
Abstract
Systems Engineering (SE) is in the midst of a digital transformation driven by advanced modeling tools, data integration, and resulting “digital twins.” Like many other domains, the engineering disciplines will see transformational advances in the use of artificial intelligence (AI) and machine learning (ML) to automate many routine engineering tasks. At the same time, applying AI, ML, and automation to complex and critical systems requires holistic, system-oriented approaches. This will encourage new systems engineering methods, processes, and tools. It is imperative that the SE community deeply understand emerging AI and ML technologies and applications, incorporate them into methods and tools, and ensure that appropriate SE approaches are used to make AI systems ethical, reliable, safe, and secure. This chapter presents a road mapping activity undertaken by the Systems Engineering Research Center (SERC). The goal is to broadly identify opportunities and risks that might appear as this evolution proceeds, as well as to provide information that guides further research in both SE and AI/ML.
Thomas A. McDermott, Mark R. Blackburn, Peter A. Beling
Chapter 4. Effective Human–Artificial Intelligence Teaming
Abstract
In 1998, the great social psychologist Jones (in Gilbert, Fiske, & Lindzey (Eds.), The Handbook of Social Psychology, McGraw-Hill, 1998) asserted that interdependence was present in every social interaction and was the key to unlocking the social life of humans, but this key, he also declared, had produced effects in the laboratory that were “bewildering” and too difficult to control. Since then, along with colleagues and students, we have brought the effects of interdependence into the laboratory for detailed studies, where we have successfully explored many aspects of interdependence and its implications. In addition, in a review led by the first author and a colleague, the National Academy of Sciences reported that interdependence in a team enhances the performance of the individual (Cooke & Hilton (Eds.), Enhancing the Effectiveness of Team Science, Committee on the Science of Team Science; Board on Behavioral, Cognitive, and Sensory Sciences; Division of Behavioral and Social Sciences and Education; National Research Council, National Academies Press, 2015). This book chapter allows us to review the considerable research experience we have gained from our studies over the years and to consider the situations in which an artificial intelligence (AI) agent or machine begins to assist, and possibly replace, a human teammate on a team in the future.
Nancy J. Cooke, William F. Lawless
Chapter 5. Toward System Theoretical Foundations for Human–Autonomy Teams
Abstract
Both human–autonomy teaming, specifically, and intelligent autonomous systems, more generally, raise new challenges in considering how best to specify, model, design, and verify correctness at a system level. Also important is extending this to monitoring and repairing systems in real time and over lifetimes to detect problems and restore desired properties when they are lost. Systems engineering methods that address these issues are typically based around a level of modeling that involves a broader focus on the life cycle of the system and much higher levels of abstraction and decomposition than some common ones used in disciplines concerned with the design and development of individual elements of intelligent autonomous systems. Nonetheless, many of the disciplines associated with autonomy do have reasons for exploring higher level abstractions, models, and ways of decomposing problems. Some of these may match well or be useful inspirations for systems engineering and related problems like system safety and human system integration. This chapter will provide a sampling of perspectives across scientific fields such as biology, neuroscience, economics/game theory, and psychology, methods for developing and assessing complex socio-technical systems from human factors and organizational psychology, and methods for engineering teams from computer science, robotics, and engineering. Areas of coverage will include considerations of team organizational structure; allocation of roles, functions, and responsibilities; theories for how teammates can work together on tasks; teaming over longer time durations; and formally modeling and composing complex human–machine systems.
Marc Steinberg
Chapter 6. Systems Engineering for Artificial Intelligence-based Systems: A Review in Time
Abstract
Drawing on the authors’ backgrounds in the science of information fusion and in information technology, this chapter reviews Systems Engineering (SE) for Artificial Intelligence (AI)-based systems across time, first with a brief history of AI and then from the systems perspective based on the lead author’s experience with information fusion processes. The different types of AI are reviewed, such as expert systems and machine learning. SE is then introduced, along with how it has evolved and must evolve further to become fully integrated with AI, such that both disciplines can help each other move into the future and evolve together. Several SE issues are reviewed, including risk, technical debt, software engineering, test and evaluation, emergent behavior, safety, and explainable AI.
James Llinas, Hesham Fouad, Ranjeev Mittu
Chapter 7. Human-Autonomy Teaming for the Tactical Edge: The Importance of Humans in Artificial Intelligence Research and Development
Abstract
The U.S. Army is currently working to integrate artificial intelligence (AI)-enabled systems into military working teams, in the form of both embodied (i.e., robotic) and embedded (i.e., computer or software) intelligent agents, with the express purpose of improving performance during all phases of the mission. However, this is largely uncharted territory, making it unclear how to perform this integration effectively for human-AI teams. This chapter provides an overview of the Combat Capabilities Development Command (DEVCOM) Army Research Laboratory’s effort to address the human as a critical gap, with associated implications for effective teaming. This chapter articulates four major research thrusts critical to integrating AI-enabled systems into military operations, giving examples within these broader thrusts that are currently addressing specific research gaps. The four major research thrusts include: (1) Enabling Soldiers to predict AI; (2) Quantifying Soldier understanding for AI; (3) Soldier-guided AI adaptation; and (4) Characterizing Soldier-AI performance. These research thrusts are the organizing basis for explaining a path toward integration and effective human-autonomy teaming at the tactical edge.
Kristin E. Schaefer, Brandon Perelman, Joe Rexwinkle, Jonroy Canady, Catherine Neubauer, Nicholas Waytowich, Gabriella Larkin, Katherine Cox, Michael Geuss, Gregory Gremillion, Jason S. Metcalfe, Arwen DeCostanza, Amar Marathe
Chapter 8. Re-orienting Toward the Science of the Artificial: Engineering AI Systems
Abstract
AI-enabled systems are becoming more pervasive, yet systems engineering techniques still face limitations in how AI systems are deployed. This chapter discusses the implications of hierarchical component composition and the importance of data in bounding AI system performance and stability. Issues of interoperability and uncertainty are introduced, and their impact on the emergent behaviors of AI systems is illustrated through the presentation of a natural language processing (NLP) system used to provide similarity comparisons of organizational corpora. Within the bounds of this discussion, we examine how concepts from Design science can introduce additional rigor to AI complex system engineering.
Stephen Russell, Brian Jalaian, Ira S. Moskowitz
Chapter 9. The Department of Navy’s Digital Transformation with the Digital System Architecture, Strangler Patterns, Machine Learning, and Autonomous Human–Machine Teaming
Abstract
The Department of the Navy (DoN) is rapidly adopting mature technologies, products, and methods used within the software development community due to the proliferation of machine learning (ML) capabilities required to complete warfighting missions. One of the areas where ML algorithms, their applications, and capabilities will have the greatest impact on warfighting is autonomous human–machine teaming (AHMT). However, stakeholders tasked with implementing AHMT solutions enterprise-wide are finding current DoN system architectures and platform infrastructures inadequate to facilitate deployment at scale. In this chapter, the authors discuss the DoN’s goal, the barriers to, and a potential path to success in implementing AHMT solutions fleet- and force-wide.
Matthew Sheehan, Oleg Yakimenko
Chapter 10. Digital Twin Industrial Immune System: AI-driven Cybersecurity for Critical Infrastructures
Abstract
Innovative advances in machine learning (ML) and artificial intelligence (AI)-driven cyber-physical anomaly detection will help to improve the security, reliability, and resilience of the United States' power grid. These advances are timely, as sophisticated cyber adversaries are increasingly deploying innovative tactics, techniques, and technology to attack critical energy infrastructures. Defenders of these modern infrastructures need to better understand how to combine innovative technology in a way that enables their teams to detect, protect against, respond to, and endure attacks from complex, nonlinear, and rapidly evolving cyber threats. This chapter (i) explores how AI is being combined with advances in physics to develop a next-generation industrial immune system to defend against sophisticated cyber-physical attacks on critical infrastructure; (ii) provides an overview of the technology and explores its applicability to the needs of the cyber defenders of critical energy infrastructures, examined through opportunities and challenges related to human–machine teams as well as the process and technology; (iii) includes validation and verification of findings when the technology was tested defending against stealthy attacks on the world's largest gas turbines; (iv) explores how the AI algorithms are being developed to provide cyber defenders with improved cyber situation awareness to rapidly detect, locate, and neutralize the threat; and (v) concludes with future research to overcome human–machine challenges in neutralizing threats from all hazards.
Michael Mylrea, Matt Nielsen, Justin John, Masoud Abbaszadeh
Chapter 11. A Fractional Brownian Motion Approach to Psychological and Team Diffusion Problems
Abstract
In this chapter we discuss drift diffusion and extensions to fractional Brownian motion. We include some Artificial Intelligence (AI) motivated issues in fractional Brownian motion. We also discuss how fractional Brownian motion may be used as a metric for interdependence in Team science.
Ira S. Moskowitz, Noelle L. Brown, Zvi Goldstein
Chapter 12. Human–Machine Understanding: The Utility of Causal Models and Counterfactuals
Abstract
Trust is a human condition. For a human to trust a machine, the human must understand the capabilities and functions of the machine in a context spanning the domain of trust so that the actions of the machine are predictable for a given set of inputs. In general, we would like to expand the domain of trust so that a human–machine system can be optimized for the widest range of operating scenarios. This reasoning motivates the desire to cast the operations of the machine into a knowledge structure that is tractable to the human. Since the machine is deterministic, for every action, there is a reaction and the dynamics of the machine can be described through a structural causal model to enable the formulation of the counterfactual queries upon which human trust may be anchored.
Paul Deignan
Chapter 13. An Executive for Autonomous Systems, Inspired by Fear Memory Extinction
Abstract
We explore an executive function that performs adaptive, introspective reasoning for autonomous systems in challenging situations. This chapter presents a definition of the problem using cartoon examples for electronic warfare and submarine surveillance. A case study of neural processes in therapy for Post-traumatic Stress Disorder (PTSD) is discussed; PTSD provides both a second modelling challenge and an architectural inspiration for executive reasoning. The main body of the chapter works toward a technique for working with virtual and physical agent models in mixed human/machine systems. The architecture supposes a second-sorted reasoning system with complementary reasoning power over situations, influences, and unknowns.
Matt Garcia, Ted Goranson, Beth Cardier
Chapter 14. Contextual Evaluation of Human–Machine Team Effectiveness
Abstract
The adoption of human-machine teams is rapidly expanding in many domains such as healthcare and disaster relief. Fueled by novel advances in robotics, artificial intelligence, and other technologies, machines with relatively high degrees of autonomy and self-awareness are being developed to improve efficiency and productivity in complex dynamic environments. The traditional role of machines as human tools is shifting to one where they now serve as human collaborative team partners. Despite this progression, evaluation of human-machine team performance remains ill-defined. In many human-machine team settings, end-users rely on metrics that are insufficient for explaining a team’s performance. Explanations are crucial because they help users understand a team’s operational dynamics and identify the shortcomings that individual agents (human or machine) introduce to the team. To address this explanation gap, we introduce a context-specific interference-based methodology to evaluate human-machine team effectiveness. Interference provides a measure that reflects the cohesiveness and compatibility between the goals of the human and the machine agents. Context is essential, as human-machine teams are deployed in various settings. Our methodology relies on a classifier that is trained to map human-machine team behavior to a set of behavioral attributes that are directly linked to the team’s performance. These behavioral attributes provide high-level explanations of the team’s observed performance outcome and insights into the mechanism of team interference. To test our methodology, we conduct experiments involving the teaming of humans and scripted bots (machines) in a StarCraft 2 game domain. From these experiments, our classifier achieves an accuracy of 84% in predicting agent behavioral attributes from a set of 18 unique classes.
To validate the use of this classifier in our evaluation approach, we compare the Pearson correlation between predicted team win-ratios and observed win-ratios, and we achieve a statistically significant score of 0.76. These results suggest that predicted team attributes reflect the actual team behaviors; hence, we can confidently apply the predicted team attributes to evaluate and prescribe human-machine teams.
Eugene Santos Jr, Clement Nyanhongo, Hien Nguyen, Keum Joo Kim, Gregory Hyde
Chapter 15. Humanity in the Era of Autonomous Human–machine Teams
Abstract
In this chapter, we address the meaning of the development of autonomous human–machine teams undergirded by the trio of data, the Internet, and algorithms. We first review and examine this issue against a general background related to the philosophy and history of science and technology, symbiosis and cyborgs, and an evolutionary viewpoint from the Anthropocene and Novacene. We then argue that the meaning for humanity in this increasingly intensive autonomous human–machine interaction environment is two-fold, namely, individuality and the democratization of individuality (capability development). Nevertheless, if the future of humanity is not to be dominated and solely determined by machines (the trio), humanistic scholars have to involve themselves in autonomous human–machine teams. In fact, some of their earlier actions have already taken place and have contributed to the changing face of the humanities, which will also be highlighted in this chapter.
Shu-Heng Chen
Chapter 16. Transforming the System of Military Medical Research: An Institutional History of the Department of Defense’s (DoD) First Electronic Institutional Review Board Enterprise IT System
Abstract
This unusual history of how a small team transformed the global system composed of Army and then DoD medical research processes has been unrecorded until now. It offers guidance to others attempting to transform similarly large systems. It begins with an evaluation of the Department of Clinical Investigation (DCI) at the US Army Medical Center (MEDCEN) in 2005, the formation of a collaboration team in 2006, and the team’s vision of an electronic records management tool (ERMT) for its documents in 2007. From this small beginning, these disparate efforts combined to transform the management of research protocol submission, review, and approval processes, as well as research protocols and supporting documents, at all DoD MEDCENs. Before this history began, the Army’s MEDCENs used a paper-based research protocol submission and review process by the Institutional Review Board (IRB) for the approval of medical research on human subjects (and animals). The team’s evaluation of the existing processes added metrics that enabled the design of an electronic system to measure the performance of the Army’s medical research mission. Merging the evaluation and the team’s vision to replace the Army’s paper-based IRB occurred with the purchase of a commercial electronic IRB system. It took until 2008 for the eIRB to become funded and another year to begin operations, but within 2 years of start-up, it was rapidly adopted across DoD’s global research community to become the largest enterprise eIRB in the world. In 2011, a formal evaluation project was proposed to measure the impact of the eIRB’s unexpected success across DoD; the impact study was funded in 2012, begun in 2013, and finished in 2014, where we end this history; subsequently, the team was disbanded. Although not a part of this history, we briefly address a few of the statistical results of the eIRB’s impact now, leaving a fuller treatment for a later time.
We close with a postscript to update readers on the unexpected closure of the eIRB and its reincarnation.
J. Wood, William F. Lawless
Chapter 17. Collaborative Communication and Intelligent Interruption Systems
Abstract
Within collaborative environments, humans are not only tasked with interacting with technology, but also with other humans. The interruption management systems literature is dedicated to alleviating the ill-effects of interruptions specifically within single-user, multitasking interactions by proposing temporal presentations of interruptions in the main task that are least disruptive to the entire interaction. There is less work focused on this concept within multi-user, multitasking environments. In this chapter we propose various temporal presentations of information at low cognitive workloads and evaluate how these timings affect human performance. In measuring objective and subjective individual and team metrics within a dual-user, dual-task paradigm, performance is optimized for low cognitive workload interruption timings compared to high cognitive ones. This work contributes to the overall body of literature by proposing temporal presentations of information within multi-user, multitasking interactions that circumvent the disruptiveness of disturbances in these domains.
Nia Peters, Margaret Ugolini, Gregory Bowers
Chapter 18. Shifting Paradigms in Verification and Validation of AI-Enabled Systems: A Systems-Theoretic Perspective
Abstract
There is a fundamental misalignment between current approaches to designing and executing verification and validation (V&V) strategies and the nature of AI-enabled systems. Current V&V approaches rely on the assumption that system behavior is preserved during a system’s lifetime. However, AI-enabled systems are developed so that they evolve their own behavior during their lifetime; this is the consequence of learning by the AI-enabled system. This misalignment makes existing approaches to designing and executing V&V strategies ineffective. In this chapter, we will provide a systems-theoretic explanation for (1) why learning capabilities give rise to a unique and unprecedented family of systems, and (2) why current V&V methods and processes are not fit for purpose. AI-enabled systems necessitate a paradigm shift in V&V activities. To enable this shift, we will delineate a set of theoretical advances and process transformations that could support it.
Niloofar Shadab, Aditya U. Kulkarni, Alejandro Salado
Chapter 19. Toward Safe Decision-Making via Uncertainty Quantification in Machine Learning
Abstract
The automation of safety-critical systems is becoming increasingly prevalent as machine learning approaches become more sophisticated and capable. However, approaches that are safe to use in critical systems must account for uncertainty. Most real-world applications currently use deterministic machine learning techniques that cannot incorporate uncertainty. In order to place systems in critical infrastructure, we must be able to understand and interpret how machines make decisions, so that they can support human decision-making as well as potentially operate autonomously. As such, we highlight the importance of incorporating uncertainty into the decision-making process and present the advantages of Bayesian decision theory. We showcase an example of classifying vehicles from their acoustic recordings, where certain classes have significantly higher threat levels. We show how carefully adopting the Bayesian paradigm not only leads to safer decisions, but also provides a clear distinction between the roles of the machine learning expert and the domain expert.
Adam D. Cobb, Brian Jalaian, Nathaniel D. Bastian, Stephen Russell
Chapter 20. Engineering Context from the Ground Up
Abstract
We are engineering a system that is designed for a human and a robot to solve problems in a shared space. This system uses context to manage interactions with a human collaborator as well as to manage more mundane aspects of context, such as combining speech and gesture input. Our system is highly modular so as to facilitate good engineering practice. It uses a blackboard-type architecture to represent and maintain information about different aspects of the problem-solving process and to maintain context. We give an overview of the current status of our system. We explain the components of our system and provide details of the information produced by the various system components. Additionally, we explain how information is accumulated on the blackboard, and we discuss and evaluate how various aspects of context are addressed in our system.
Michael Wollowski, Lilin Chen, Xiangnan Chen, Yifan Cui, Joseph Knierman, Xusheng Liu
Chapter 21. Meta-reasoning in Assembly Robots
Abstract
As robots become increasingly pervasive in human society, there is a need for developing theoretical frameworks for “human–machine shared contexts.” In this chapter, we develop a framework for endowing robots with a human-like capacity for meta-reasoning. We consider the case of an assembly robot that is given a task slightly different from the one for which it was preprogrammed. In this scenario, the assembly robot may fail to accomplish the novel task. We develop a conceptual framework for using meta-reasoning to recover and learn from the robot failure, including a specification of the problem, a taxonomy of failures, and an architecture for meta-reasoning. Our framework for robot learning from failure grounds meta-reasoning in action and perception.
Priyam Parashar, Ashok K. Goel
Chapter 22. From Informal Sketches to Systems Engineering Models Using AI Plan Recognition
Abstract
The transition to Computer-Aided Systems Engineering (CASE) changed engineers’ day-to-day tasks in many disciplines, such as mechanical and electronic engineering. System engineers are still looking for the right set of tools to embrace this opportunity. Indeed, they deal with many kinds of data that evolve substantially over the development life cycle. Model-Based Systems Engineering (MBSE) should be an answer to that, but it has so far failed to convince system engineers and architects and to gain their acceptance. The complexity of creating, editing, and annotating systems engineering models has several roots: high abstraction levels, static representations, complex interfaces, and the time-consuming activities required to keep a model and its associated diagrams consistent. As a result, system architects still rely heavily on traditional methods (whiteboards, paper, and pens) to outline a problem and its solution, and then turn to expert modelers to digitize the informal data in modeling tools. In this chapter, we present an approach based on automated plan recognition to capture sketches of systems engineering models and to incrementally formalize them using specific representations. We present a first implementation of our approach with AI plan recognition, and we detail an experiment applying plan recognition to systems engineering.
Nicolas Hili, Alexandre Albore, Julien Baclet
Chapter 23. An Analogy of Sentence Mood and Use
Abstract
Interpreting the force of an utterance, be it an assertion, command, or question, remains a task for achieving joint action in artificial intelligence. It is not an easy task. An interpretation of force depends on a speaker’s use of words for a hearer at the moment of utterance. As a result, grammatical mood is an uncertain indicator of force. Navigating the break between sentence use and mood reveals how people get things done with language; that is, the fact that meaning comes from the act of uttering. The main goal of this chapter is to motivate research into the relation between mood and use. Past theories, I argue, underestimate the evasiveness of force in interpretations (formal or otherwise). Making their relation explicit and precise expands the use of argumentation schemes in language processing and joint action. Building from prior work, I propose a model for conceiving the mood/force relation and offer questions for future research.
Ryan Phillip Quandt
Chapter 24. Effective Decision Rules for Systems of Public Engagement in Radioactive Waste Disposal: Evidence from the United States, the United Kingdom, and Japan
Abstract
For large decision-making systems, radioactive waste is one of the most contentious of technological risks, associated with perceptions of “dread” and deep social stigma. These characteristics contribute to the intractable nature of the radioactive waste problem throughout systems in western democracies. The disposal and long-term management of radioactive waste is an issue entangled in technical, environmental, societal and ethical quandaries. The present study asks how different systems in societies address these multifaceted quandaries. Drawing on formal decision-making theory, it identifies a decision rule that facilitates the approval of deep geological disposal plans while achieving a successful outcome in social and technological terms, with the perception of fairness and legitimacy. We compare two decision rules, the consensus rule and the majority rule, and argue that the principle of majority rule maximizes information processing across a system and increases the likelihood of reaching lasting decisions. We also note positive effects of early public participation in the decision process. This conclusion is reached by a comparative analysis across three societies: The United States, the United Kingdom, and Japan. One remarkable finding is the actual and potential effectiveness of majority rule in these countries despite different policy priorities and cultures. This study reached its conclusion through a synthesis of multiple methods: case studies in the United States and the United Kingdom, and a survey and simulated workshop in Japan.
Mito Akiyoshi, John Whitton, Ioan Charnley-Parry, William F. Lawless
Chapter 25. Outside the Lines: Visualizing Influence Across Heterogeneous Contexts in PTSD
Abstract
Open-world processes generate information that cannot be captured in a single data set. In fields such as medicine and defense, where precise information can be life-saving, a modeling paradigm is needed in which multiple media and contexts can be logically and visually integrated, in order to inform the engineering of large systems. One barrier is the underlying ontological heterogeneity that multiple contexts can exhibit, along with the need for those facts to be compatible with, or translated between, domains and situations. Another barrier is the dynamism and influence of context, which has traditionally been difficult to represent. This chapter describes a method for modeling the changes of interpretation that occur when facts cross over context boundaries, whether those contexts are differentiated by discipline, time, or perspective (or all three). Here, a new modeling environment is developed in which those transitions can be visualized. Our prototype modeling platform, Wunderkammer, can connect video, text, image, and data while representing the context from which these artifacts were derived. It can also demonstrate transfers of information among situations, enabling the depiction of influence. Our example focuses on post-traumatic stress disorder (PTSD), combining psychological, neurological, and physiological information, with a view to informing the aggregation of information in intelligent systems. These different forms of information are connected in a single modeling space using a narrative-based visual grammar. The goal is to develop a method and tool that support the integration of information from different fields in order to model changing phenomena in an open world, with a focus on detecting emerging disorders. In turn, this will ultimately support more powerful knowledge systems for fields such as neurobiology, autonomous systems, and artificial intelligence (AI).
Beth Cardier, Alex C. Nieslen, John Shull, Larry D. Sanford
Metadata
Title
Systems Engineering and Artificial Intelligence
Editors
William F. Lawless
Ranjeev Mittu
Donald A. Sofge
Thomas Shortell
Thomas A. McDermott
Copyright Year
2021
Electronic ISBN
978-3-030-77283-3
Print ISBN
978-3-030-77282-6
DOI
https://doi.org/10.1007/978-3-030-77283-3
