
About this Book

Medical Imaging Informatics provides an overview of this growing discipline, which stems from an intersection of biomedical informatics, medical imaging, computer science and medicine. Supporting two complementary views, this volume explores the fundamental technologies and algorithms that comprise this field, as well as the application of medical imaging informatics to subsequently improve healthcare research. Clearly written in a four-part structure, this introduction follows natural healthcare processes, illustrating the roles of data collection and standardization, context extraction and modeling, and medical decision making tools and applications.

Medical Imaging Informatics identifies core concepts within the field, explores research challenges that drive development, and includes current state-of-the-art methods and strategies.

Table of Contents

Frontmatter

Performing the Imaging Exam

Frontmatter

Chapter 1. Introduction

Medical imaging informatics is the rapidly evolving field that combines biomedical informatics and imaging, developing and adapting core methods in informatics to improve the usage and application of imaging in healthcare and to derive new knowledge from imaging studies. This chapter introduces the ideas and motivation behind medical imaging informatics. Starting with an illustration of the importance of imaging in today’s patient care, we demonstrate imaging informatics’ potential in enhancing clinical care and biomedical research. From this perspective, we provide an example of how different aspects of medical imaging informatics can impact the process of selecting an imaging protocol. To help readers appreciate this growing discipline, a brief history is given of different efforts that have contributed to its development over several decades, leading to its current challenges.
Alex A. T. Bui, Ricky K. Taira, Hooshang Kangarloo

Chapter 2. A Primer on Imaging Anatomy and Physiology

An understanding of medical imaging informatics begins with knowledge of medical imaging and its application toward diagnostic and therapeutic clinical assessment. This chapter is divided into two sections: a review of current imaging modalities; and a primer on imaging anatomy and physiology. In the first half, we introduce the major imaging modalities that are in use today: projectional imaging, computed tomography, magnetic resonance, and ultrasound. The core physics concepts behind each modality; the parameters and algorithms driving image formation; and variants and newer advances in each of these areas are briefly covered to familiarize the reader with the capabilities of each technique. From this foundation, in the second half of the chapter we describe several anatomical and physiologic systems from the perspective of imaging. Three areas are covered in detail: 1) the respiratory system; 2) the brain; and 3) breast imaging. Additional coverage of musculoskeletal, cardiac, urinary, and upper gastrointestinal systems is included. Each anatomical section begins with a general description of the anatomy and physiology, discusses the use of different imaging modalities, and concludes with a description of common medical problems/conditions and their appearance on imaging. From this chapter, the utility of imaging and its complexities become apparent and will serve to ground discussion in future chapters.
Denise Aberle, Suzie El-saden, Pablo Abbona, Ana Gomez, Kambiz Motamedi, Nagesh Ragavendra, Lawrence Bassett, Leanne Seeger, Matthew Brown, Kathleen Brown, Alex A. T. Bui, Hooshang Kangarloo

Integrating Imaging into the Patient Record

Frontmatter

Chapter 3. Information Systems & Architectures

Since the advent of computers in medicine, the objective of creating an electronic medical record (EMR) has been to transcend the traditional limitations of paper-based charts through a digital repository capable of quickly organizing patient data, and ultimately, aiding physicians with medical decision-making tasks. This chapter introduces concepts related to the EMR, and covers the development of information systems seen in today's clinical settings. The data and communication standards used by these systems are described (e.g., Digital Imaging and Communications in Medicine, DICOM; Health Level 7, HL7). But as healthcare progressively moves from a centralized practice to a more distributed environment involving multiple sites, providers, and an array of different tasks (both clinical and research), the underlying information architectures must also change. A new generation of informatics challenges has arisen, with different frameworks such as peer-to-peer (P2P) and grid computing being explored to create large-scale infrastructures to link operations. We highlight several ongoing projects and solutions in creating medical information architectures, including teleradiology/telemedicine, the integrated healthcare enterprise, and collaborative clinical research involving imaging.
Alex A. T. Bui, Craig Morioka
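
As a small illustration of the pipe-delimited messaging touched on in this chapter's discussion of standards, the sketch below parses a toy HL7 v2-style order message with plain string handling. The sample message, its contents, and the helper function are hypothetical simplifications for illustration only; real systems rely on full HL7 libraries and site-specific conformance profiles.

```python
# Minimal, illustrative parser for an HL7 v2-style pipe-delimited message.
# The sample message and the field positions used below are simplified for
# demonstration; production systems use full HL7 libraries and profiles.

SAMPLE_HL7 = "\r".join([
    "MSH|^~\\&|RIS|HOSPITAL|PACS|HOSPITAL|202401011200||ORM^O01|123|P|2.3",
    "PID|1||MRN0001^^^HOSPITAL||DOE^JANE||19800101|F",
    "OBR|1||ACC123|71020^CHEST XRAY 2 VIEWS^CPT",
])

def parse_hl7(message: str) -> dict:
    """Group an HL7 v2 message into {segment id: list of field lists}."""
    segments = {}
    for raw in filter(None, message.split("\r")):
        fields = raw.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

msg = parse_hl7(SAMPLE_HL7)
print(msg["PID"][0][5])   # PID-5, patient name (family^given): DOE^JANE
print(msg["OBR"][0][4])   # OBR-4, ordered procedure: 71020^CHEST XRAY 2 VIEWS^CPT
```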

Chapter 4. Medical Data Visualization: Toward Integrated Clinical Workstations

As our ability to access the abundance of clinical data grows, it is imperative that methods to organize and to visualize this information be in place so as not to overwhelm users, who increasingly face information overload. Moreover, the manner of presentation is fundamental to how such information is interpreted, and can be the turning point in uncovering new insights and knowledge about a patient or a disease. And of course, medical imaging is itself an inherently visual medium. This chapter presents work related to the visualization of medical data, focusing on issues related to navigation and presentation by drawing upon imaging and other disciplines for examples of display and integration methods. We first cover different visual paradigms that have been developed (e.g., icons, graphs), grouped along dimensions that emphasize the different types of data relationships and workflow. Subsequently, issues related to combining these visualizations are examined. As no single graphical user interface (GUI) can accommodate all users and the spectrum of tasks seen in the healthcare environment, the ultimate goal is to create an adaptive graphical interface that integrates clinical information so as to be conducive to a given user's objectives: efforts in this direction are discussed. Throughout, we describe applications that illustrate the many open issues revolving around medical data visualization.
Alex A. T. Bui, William Hsu

Documenting Imaging Findings

Frontmatter

Chapter 5. Characterizing Imaging Data

Imaging represents a frequent, non-invasive, longitudinal, in vivo sampling technique for acquiring objective insight into normal and disease phenomena. Imaging is increasingly used to document complex patient conditions, for diagnostic purposes as well as for assessment of therapeutic interventions (e.g., drug, surgery, radiation therapy) [81]. Imaging can capture structural, compositional, and functional information across multiple scales of evidence, including manifestations of disease processes at the molecular, genetic, cellular, tissue, and organ level [47]. Imaging allows both global assessment of disease extent and the characterization of disease micro-environments. Advances in imaging during the past decade have provided an unparalleled view into the human body; and in all likelihood these advances will continue in the foreseeable future. There has been considerable research directed to developing imaging biomarkers, defined as, “…anatomic, physiologic, biochemical, or molecular parameters detectable with imaging methods used to establish the presence or severity of disease which offers the prospect of improved early medical product development and preclinical testing” [188]. Yet the full utility of image data is not realized, as prevailing methods rely almost entirely on conventional, subjective interpretation of images. Quantitative methods to extract the underlying tissue-specific parameters that change with pathology will provide a better understanding of pathological processes. The interdisciplinary field of imaging informatics addresses many issues that have prevented the systematic, scientific understanding of radiological evidence and the creation of comprehensive diagnostic models from which the most plausible explanation can be considered for decision-making tasks.
In this chapter, we explore issues and approaches directed to understanding the process of extracting information from imaging data. We will cover methods for improving procedural information, improving patient assessment, and creating statistical models of normality and disease. Specifically, we want to ascertain what type of knowledge a medical image represents, and what its constituent elements mean. What do contrast and brightness represent in an image? Why are there different presentations of images even when the patient state has not changed? How do we ground a particular pixel measurement to an originating (biological) process? Understanding of the data generation process will permit more effective top-down and bottom-up processing approaches to image analysis.
Ricky K. Taira, Juan Eugenio Iglesias, Neda Jahanshad
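
To make concrete the abstract's question of what contrast and brightness represent, the following sketch applies a simplified linear window/level (intensity windowing) transform to raw CT-style values. The window settings and sample Hounsfield values are illustrative assumptions, not figures from the chapter.

```python
import numpy as np

def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Simplified linear window/level mapping of raw intensities to 8-bit display.

    Values below (center - width/2) render black, values above (center + width/2)
    render white, and values in between are scaled linearly.
    """
    lower = center - width / 2.0
    scaled = (pixels - lower) / float(width)            # 0..1 inside the window
    return np.uint8(np.clip(scaled, 0.0, 1.0) * 255)

# Example: CT values in Hounsfield units with an illustrative wide "lung" window.
hu = np.array([-1000, -700, -500, 0, 400])              # air ... soft tissue ... dense
print(apply_window(hu, center=-600.0, width=1500.0))    # approx. [ 59 110 144 229 255]
```

Changing the center shifts apparent brightness, while narrowing the width increases apparent contrast over a smaller intensity range, which is why the same underlying pixel data can be presented very differently even when the patient state has not changed.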

Chapter 6. Natural Language Processing of Medical Reports

A significant amount of information regarding the observations, assessments, and recommendations related to a patient's case is documented within free-text medical reports. The ability to structure and standardize clinical patient data has been a grand goal of medical informatics since the inception of the field - especially if this structuring can be (automatically) achieved at the patient bedside and within the modus operandi of current medical practice. A computational infrastructure that transforms the process of clinical data collection from an uncontrolled to a highly controlled operation (i.e., precise, completely specified, standard representation) can facilitate medical knowledge acquisition and its application to improve healthcare. Medical natural language processing (NLP) systems attempt to interpret free-text to facilitate a clinical, research, or teaching task. An NLP system translates a source language (e.g., free-text) to a target surrogate, computer-understandable representation (e.g., first-order logic), which in turn can support the operations of a driving application. NLP is then essentially a transformation from a representational form that is not very useful from the perspective of a computer (a sequence of characters) to a form that is useful (a logic-based representation of the text meaning). In general, the accuracy and speed of translation is heavily dependent on the end application. This chapter presents work related to natural language processing of clinical reports, covering issues related to representation, computation, and evaluation. We first summarize a number of typical clinical applications. We then present a high-level formalization of the medical NLP problem in order to provide structure as to how various aspects of NLP fit and complement one another. Examples of approaches that target various forms of representations and degrees of potential accuracy are discussed. Individual NLP subtasks are subsequently discussed. We conclude this chapter with evaluation methods and a discussion of the directions expected in the processing of clinical medical reports. Throughout, we describe applications illustrating the many open issues revolving around medical natural language processing.
Ricky K. Taira
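
As a toy illustration of the report-processing problem the chapter formalizes, the sketch below extracts finding terms from report sentences and flags simple negations. The term list, negation cues, and sample report are hypothetical and far cruder than the NLP pipeline described in the chapter.

```python
import re

# Illustrative finding terms and negation cues (hypothetical, not exhaustive).
FINDINGS = ["pneumothorax", "pleural effusion", "nodule"]
NEGATION_CUES = ["no evidence of", "no ", "without ", "negative for "]

def extract_findings(sentence: str) -> list:
    """Return (finding, is_negated) pairs found in a single report sentence."""
    s = sentence.lower()
    results = []
    for term in FINDINGS:
        for match in re.finditer(re.escape(term), s):
            # Look for a negation cue in a short window preceding the term.
            prefix = s[max(0, match.start() - 40):match.start()]
            negated = any(cue in prefix for cue in NEGATION_CUES)
            results.append((term, negated))
    return results

report = "There is a small right pleural effusion. No evidence of pneumothorax."
for sentence in report.split("."):
    print(extract_findings(sentence))
# [('pleural effusion', False)]
# [('pneumothorax', True)]
# []
```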

Chapter 7. Organizing Observations: Data Models

Thus far, discussion has focused on issues related to collecting and analyzing clinical data. Yet central to the challenge of informatics is the organization of all of this information to enable a continuum of healthcare and research applications: the type of attributes supported in characterizing an entity within a data model and the scope of relationships defined between these objects determine the ease with which we can retrieve information and ultimately drive how we come to perceive and work with the data. This chapter overviews several data models that have been proposed over the years to address representational issues inherent to medical information. Three categories of data models are covered: spatial models, which are concerned with representing physical and anatomical relations between objects; temporal models that embody a chronology and/or other time-based sequences/patterns; and clinically-oriented models, which systematically arrange information around a healthcare abstraction or process. Notably, these models no longer serve the sole purpose of being data structures, but are also foundations upon which rudimentary logical reasoning and inference can occur. Finally, as translational informatics begins to move toward the use of large clinical datasets, the context under which such data are captured is important to consider; this chapter thus concludes by introducing the idea of the phenomenon-centric data model (PCDM) that explicitly embeds the principles of scientific investigation and hypotheses with clinical observations.
Alex A. T. Bui, Ricky K. Taira
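
The sketch below suggests how spatial (anatomic) and temporal relations might be rendered as simple observation-centered data structures. The class and attribute names are hypothetical illustrations, not the chapter's phenomenon-centric data model.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Finding:
    description: str                  # e.g., "nodule"
    anatomic_site: str                # spatial relation: where the finding is
    observed_on: date                 # temporal anchor for trending over time
    size_mm: Optional[float] = None

@dataclass
class ImagingStudy:
    modality: str                     # e.g., "CT", "MR"
    study_date: date
    findings: List[Finding] = field(default_factory=list)

@dataclass
class PatientRecord:
    patient_id: str
    studies: List[ImagingStudy] = field(default_factory=list)

    def history_of(self, site: str) -> List[Finding]:
        """Chronological list of findings at one anatomic site across studies."""
        found = [f for s in self.studies for f in s.findings if f.anatomic_site == site]
        return sorted(found, key=lambda f: f.observed_on)

# Usage: two hypothetical CT studies showing interval growth of a nodule.
record = PatientRecord("p001", [
    ImagingStudy("CT", date(2009, 1, 5),
                 [Finding("nodule", "right upper lobe", date(2009, 1, 5), 6.0)]),
    ImagingStudy("CT", date(2009, 7, 8),
                 [Finding("nodule", "right upper lobe", date(2009, 7, 8), 9.0)]),
])
for f in record.history_of("right upper lobe"):
    print(f.observed_on, f.size_mm)
```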

Toward Medical Decision Making

Frontmatter

Chapter 8. Disease Models, Part I: Graphical Models

Scientists building models of the world by necessity abstract away features not directly relevant to their line of inquiry. Furthermore, complete knowledge of relevant features is not generally possible. The mathematical formalism that has proven to be the most successful at simultaneously abstracting the irrelevant, while effectively summarizing incomplete knowledge, is probability theory. First studied in the context of analyzing games of chance, probability theory has flowered into a mature mathematical discipline today whose tools, methods, and concepts permeate statistics, engineering, and social and empirical sciences. A key insight, discovered multiple times independently during the 20th century, but refined, generalized, and popularized by computer scientists, is that there is a close link between probabilities and graphs. This link allows numerical, quantitative relationships such as conditional independence found in the study of probability to be expressed in a visual, qualitative way using the language of graphs. As human intuitions are more readily brought to bear in visual rather than algebraic and computational settings, graphs aid human comprehension in complex probabilistic domains. This connection between probabilities and graphs has other advantages as well - for instance the magnitude of computational resources needed to reason about a particular probabilistic domain can be read from a graph representing this domain. Finally, graphs provide a concise and intuitive language for reasoning about causes and effects. In this chapter, we explore the basic laws of probability, the relationship between probability and causation, the way in which graphs can be used to reason about probabilistic and causal models, and finally how such graphical models can be learned from data. The application of these graphs to formalize observations and knowledge about disease are provided.
Ilya Shpitser
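
A minimal worked example of the probability-graph link the chapter describes: for a toy three-node network (Disease with a Symptom and a Test as children), the joint distribution factorizes along the graph, and Symptom and Test are conditionally independent given Disease. The probability values below are illustrative assumptions, not figures from the chapter.

```python
from itertools import product

# Illustrative conditional probability tables for a toy network:
#   Disease -> Symptom, Disease -> Test
# The graph structure implies P(d, s, t) = P(d) * P(s | d) * P(t | d).
P_disease = {True: 0.01, False: 0.99}
P_symptom_given_d = {True: {True: 0.70, False: 0.30},
                     False: {True: 0.10, False: 0.90}}
P_test_given_d = {True: {True: 0.90, False: 0.10},
                  False: {True: 0.05, False: 0.95}}

def joint(d: bool, s: bool, t: bool) -> float:
    """Joint probability read directly off the graph factorization."""
    return P_disease[d] * P_symptom_given_d[d][s] * P_test_given_d[d][t]

# Sanity check: the factorized joint sums to 1 over all assignments.
total = sum(joint(d, s, t) for d, s, t in product([True, False], repeat=3))
print(round(total, 10))   # 1.0
```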

Chapter 9. Disease Models, Part II: Querying & Applications

In the previous chapter, the mathematical formalisms that allow us to encode medical knowledge into graphical models were described. Here, we focus on how users can interact with these models (specifically, belief networks) to pose a wide range of questions and understand inferred results - an essential part of the healthcare process as patients and healthcare providers make decisions. Two general classes of queries are explored: belief updating, which computes the posterior probability of the network variables in the presence of evidence; and abductive reasoning, which identifies the most probable instantiation of network variables given some evidence. Many diagnostic, prognostic, and therapeutic questions can be represented in terms of these query types. For models that are complex, exact inference techniques are computationally intractable; instead, approximate inference methods can be leveraged. We also briefly cover special classes of belief networks that are relevant in medicine: probabilistic relational models, which provide a compact representation of a large number of propositional variables through the use of first-order logic; influence diagrams, which provide a means of selecting optimal plans given cost/preference constraints; and naïve Bayes classifiers. Importantly, the question of how to validate the accuracy of belief networks is explored through cross validation and sensitivity analysis. Finally, we explore how the intrinsic properties of a graphical model (e.g., variable selection, structure, parameters) can assist users in interacting with and understanding the results of a model through feedback. Applications of Bayesian belief networks in image processing, querying, and case-based retrieval from large imaging repositories are demonstrated.
William Hsu, Alex A. T. Bui
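
Continuing the toy network from the previous sketch, the example below performs belief updating by exact enumeration, computing the posterior probability of disease given a positive test. For larger networks this brute-force summation would give way to the exact or approximate inference methods the chapter surveys; all numbers are illustrative assumptions.

```python
from itertools import product

# Same illustrative toy network as before: P(d, s, t) = P(d) * P(s | d) * P(t | d).
P_d = {True: 0.01, False: 0.99}
P_s = {True: {True: 0.70, False: 0.30}, False: {True: 0.10, False: 0.90}}
P_t = {True: {True: 0.90, False: 0.10}, False: {True: 0.05, False: 0.95}}

def joint(d, s, t):
    return P_d[d] * P_s[d][s] * P_t[d][t]

def posterior_disease(test_positive: bool) -> float:
    """Belief updating by enumeration: P(Disease=True | Test=test_positive)."""
    # Sum the joint over the unobserved variable (Symptom), then normalize.
    num = sum(joint(True, s, test_positive) for s in (True, False))
    den = sum(joint(d, s, test_positive) for d, s in product((True, False), repeat=2))
    return num / den

print(round(posterior_disease(True), 3))   # ~0.154 with these illustrative numbers
```

With these numbers the posterior rises from a 1% prior to roughly 15%, illustrating how observed evidence propagates through the network to update belief in an unobserved variable.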

Chapter 10. Evaluation

Evaluation is a cornerstone of informatics, allowing us to objectively assess the strengths and weaknesses of a given tool. These assessments ultimately provide insight and feedback for improving a system and its approach in the future. Thus, this final chapter aims to provide an overview of the fundamental techniques that are used in informatics evaluations. Any quantitative evaluation begins with statistics and formal study design. A review of inferential statistical concepts is provided from the perspective of biostatistics (confidence intervals; hypothesis testing; error assessment including sensitivity/specificity and receiver operating characteristics). Under study design, differences between observational investigations and controlled experiments are covered. Issues pertaining to population selection and study errors are briefly introduced. With these general tools, we then look to more specific informatics evaluations, using information retrieval (IR) systems and usability studies as examples to motivate further discussion. Methods for designing both types of evaluations and endpoint metrics are described in detail.
Emily Watt, Corey Arnold, James Sayre
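
As a small companion to the statistics review, the sketch below computes sensitivity and specificity from a 2x2 confusion table along with a normal-approximation (Wald) 95% confidence interval. The counts are hypothetical and the choice of interval is an assumption for illustration.

```python
import math

def sens_spec(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

def wald_ci(p: float, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Illustrative counts from a hypothetical detection study.
tp, fp, fn, tn = 45, 10, 5, 140
sensitivity, specificity = sens_spec(tp, fp, fn, tn)
print(f"sensitivity = {sensitivity:.2f}, 95% CI {wald_ci(sensitivity, tp + fn)}")
print(f"specificity = {specificity:.2f}, 95% CI {wald_ci(specificity, tn + fp)}")
```

Each (sensitivity, 1 - specificity) pair produced at a different decision threshold corresponds to one operating point on a receiver operating characteristic curve, connecting this calculation to the ROC analysis covered in the chapter.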

Backmatter
