
About this Book

This book offers the first comprehensive overview of artificial intelligence (AI) technologies in decision support systems for diagnosis based on medical images, presenting cutting-edge insights from thirteen leading research groups around the world.

Medical imaging provides essential information on a patient’s condition and clues to the causes of symptoms and diseases. Modern imaging modalities, however, also produce large numbers of images that physicians must interpret accurately. This can lead to an “information overload” for physicians and complicate their decision-making. As such, intelligent decision support systems have become a vital element in medical-image-based diagnosis and treatment.

Presenting extensive information on this growing field of AI, the book offers a valuable reference guide for professors, students, researchers and professionals who want to learn about the most recent developments and advances in the field.



Advanced Machine Learning in Computer-Aided Systems


Multi-modality Feature Learning in Diagnoses of Alzheimer’s Disease

Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Recently, multi-task feature selection methods have typically been used for the joint selection of common features across multiple modalities. In this chapter, we review several recent multi-modality feature learning methods for the diagnosis of AD. Specifically, multi-task feature selection (MTFS) is proposed to jointly select a common subset of relevant features for multiple variables from each modality. Building on MTFS, a manifold regularized multi-task feature learning method (M2TFS) is used to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution within each modality. However, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationships across different subjects. To overcome this issue, label-aligned multi-task feature selection (LAMTFS), which can fully explore the relationships across both modalities and subjects, is proposed. A discriminative multi-task feature selection method is then proposed to select the most discriminative features for multi-modality based classification. Experimental results on the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database demonstrate the effectiveness of the methods described above.
Daoqiang Zhang, Chen Zu, Biao Jie, Tingting Ye
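The joint-selection idea behind MTFS can be sketched with an off-the-shelf l2,1-penalized model. The sketch below uses scikit-learn’s MultiTaskLasso as a stand-in for the chapter’s MTFS formulation; the function name, penalty weight, and zero threshold are illustrative assumptions, not values from the chapter.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

def select_common_features(X, Y, alpha=0.1):
    """Keep a feature only if its coefficient row (across all tasks) is nonzero.

    The l2,1 penalty of MultiTaskLasso zeroes entire rows of the coefficient
    matrix, so a feature is selected or discarded jointly for all tasks --
    the core mechanism of multi-task feature selection.
    """
    model = MultiTaskLasso(alpha=alpha).fit(X, Y)   # coef_ has shape (n_tasks, n_features)
    return np.any(np.abs(model.coef_) > 1e-8, axis=0)
```

A feature surviving this test is then retained in every modality-specific model, which is what makes the selection “common” across tasks.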

A Comparative Study of Modern Machine Learning Approaches for Focal Lesion Detection and Classification in Medical Images: BoVW, CNN and MTANN

Two dominant classes of modern approaches to the detection and classification of focal lesions are bags of visual words (BoVW) and end-to-end learning machines. In this study, we reviewed and compared these approaches for lung nodule detection, colorectal polyp detection, and lung nodule classification in CT images. Specifically, we considered massive-training artificial neural networks (MTANNs) and convolutional neural networks (CNNs) as representatives of end-to-end learning machines, and Fisher vectors as a representative of the bag of visual words. We first compared CNNs with Fisher vectors in nodule detection, nodule classification, and polyp detection, finding that the best-performing CNN model achieved performance comparable to that of Fisher vectors. We also analyzed the performance of CNNs of varying depth for the three applications. Our experiments showed that CNN architectures with three or four convolutional layers were more effective than shallower architectures, but we did not observe a further performance gain from deeper architectures. We then compared CNNs with MTANNs, concluding that MTANNs outperformed CNNs for nodule detection and classification, particularly given limited training data. Specifically, for nodule detection, the MTANNs generated 0.08 false positives per section at 100% sensitivity, significantly (p < 0.05) fewer than the best-performing CNN model’s 0.67 false positives per section at the same sensitivity.
Nima Tajbakhsh, Kenji Suzuki
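As a concrete reference point for the BoVW side of this comparison, a Fisher vector encodes a set of local descriptors by the gradients of a Gaussian mixture model’s log-likelihood with respect to its parameters. The sketch below is a minimal, simplified encoder (diagonal-covariance GMM, mean and variance gradients only, with the commonly used power and L2 normalizations); it is an illustration of the technique, not the authors’ implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode local descriptors as gradients of a diagonal-covariance GMM.

    Returns the concatenated mean and variance gradients (2*K*D values),
    with the power and L2 normalizations commonly applied in practice.
    """
    q = gmm.predict_proba(descriptors)            # soft assignments, shape (N, K)
    n = descriptors.shape[0]
    sigma = np.sqrt(gmm.covariances_)             # (K, D) for the diagonal model
    parts = []
    for k in range(gmm.n_components):             # gradients w.r.t. the means
        diff = (descriptors - gmm.means_[k]) / sigma[k]
        parts.append((q[:, k:k + 1] * diff).sum(0) / (n * np.sqrt(gmm.weights_[k])))
    for k in range(gmm.n_components):             # gradients w.r.t. the variances
        diff = (descriptors - gmm.means_[k]) / sigma[k]
        parts.append((q[:, k:k + 1] * (diff ** 2 - 1)).sum(0)
                     / (n * np.sqrt(2 * gmm.weights_[k])))
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))        # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)      # L2 normalization
```

The resulting fixed-length vector is then fed to a conventional classifier, in contrast to the end-to-end pipelines of CNNs and MTANNs.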

Introduction to Binary Coordinate Ascent: New Insights into Efficient Feature Subset Selection for Machine Learning

Feature selection has been an active area of research in machine learning, and a number of techniques have been developed for selecting an optimal or sub-optimal subset of features, because the chosen subset is a major factor in determining the performance of a machine-learning technique. In this study, we propose and develop a novel optimization technique, a binary coordinate ascent (BCA) algorithm inspired by the coordinate descent algorithm. The BCA algorithm is an iterative, deterministic local optimization approach that can be coupled with wrapper, filter, or hybrid feature selection (FS) techniques. The algorithm searches the space of binary-coded input variables by iteratively optimizing the objective function in one dimension at a time. We investigated our BCA approach in a wrapper-based FS framework for classification, in which the area under the receiver-operating-characteristic (ROC) curve (AUC) is the criterion for finding the best subset of features. We evaluated our BCA-based FS in optimizing features for support vector machine, multilayer perceptron, and naïve Bayes classifiers on five publicly available datasets. These datasets differ in the number of attributes (ranging from 18 to 60) and the number of classes (binary or multi-class classification). The efficiency in terms of the number of subset evaluations improved substantially (by factors of 5–40) compared with two popular FS meta-heuristics, sequential forward selection (SFS) and sequential floating forward selection (SFFS), while classification performance on unseen data was maintained.
Amin Zarshenas, Kenji Suzuki
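The BCA search itself is short to state: flip one bit of the feature mask at a time, keep the flip only if the wrapper objective improves, and sweep until no single flip helps. A minimal sketch under assumed details (3-fold cross-validated AUC as the objective, a random initial mask; the chapter’s exact initialization and evaluation protocol may differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def bca_feature_selection(X, y, clf, max_sweeps=10, seed=0):
    """Binary coordinate ascent over feature-subset masks.

    One sweep flips each bit in turn and keeps a flip only if the wrapper
    objective (cross-validated AUC here) improves; sweeps repeat until no
    single flip helps, i.e. a deterministic local optimum is reached.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    mask = rng.integers(0, 2, d).astype(bool)
    if not mask.any():
        mask[0] = True                           # avoid an empty starting subset

    def score(m):
        if not m.any():
            return 0.0
        return cross_val_score(clf, X[:, m], y, cv=3, scoring="roc_auc").mean()

    best = score(mask)
    for _ in range(max_sweeps):
        improved = False
        for j in range(d):                       # one binary coordinate at a time
            mask[j] = ~mask[j]
            s = score(mask)
            if s > best:
                best, improved = s, True         # keep the flip
            else:
                mask[j] = ~mask[j]               # revert the flip
        if not improved:
            break
    return mask, best
```

Each sweep costs exactly d subset evaluations, which is where the reported efficiency gain over SFS/SFFS comes from.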

Computer-Aided Detection


Automated Lung Nodule Detection Using Positron Emission Tomography/Computed Tomography

Lung cancer is a leading cause of death worldwide. Owing to the low survival rates among lung cancer patients, it is essential to detect and treat the cancer at an early stage. In some countries, positron emission tomography (PET)/X-ray computed tomography (CT) examination is used for cancer screening in addition to diagnosis and treatment follow-up. PET/CT images provide both anatomical and functional information about lung cancer. However, radiologists must examine a large number of these images, so support tools for localizing lung nodules are desirable. This chapter highlights our recent contributions to a hybrid scheme for detecting lung nodules in PET/CT images. In the CT image, massive regions are first detected using a cylindrical nodule enhancement filter (CNEF), a contrast enhancement filter with a cylindrical kernel. Subsequently, high-uptake regions detected in the PET images are merged with the regions detected in the CT image. False positives (FPs) among the candidates are eliminated by a rule-based classifier and three support vector machines using characteristic features obtained from the CT and PET images. The detection capability was evaluated on 100 PET/CT cases, achieving a sensitivity of 83% in candidate detection with 5 FPs per case. These results indicate that the proposed hybrid method may be useful for computer-aided detection of lung cancer in clinical practice.
Atsushi Teramoto, Hiroshi Fujita
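The candidate-merging and rule-based FP-rejection steps can be illustrated schematically. The thresholds and size limits below are hypothetical placeholders, not the chapter’s tuned values, and the size rule stands in for the chapter’s full rule-based classifier:

```python
import numpy as np
from scipy import ndimage

def detect_candidates(ct_response, pet_suv, ct_thresh=0.5, suv_thresh=2.5,
                      min_voxels=4, max_voxels=500):
    """Merge CT filter responses with PET high-uptake voxels, then apply a
    size rule to reject implausibly small or large candidate regions."""
    mask = (ct_response > ct_thresh) | (pet_suv > suv_thresh)
    labels, n = ndimage.label(mask)              # connected candidate regions
    keep = np.zeros_like(mask)
    for i in range(1, n + 1):
        region = labels == i
        if min_voxels <= region.sum() <= max_voxels:
            keep |= region                       # passes the rule-based filter
    return keep
```

In the full system, the surviving candidates would then be passed to the feature-based SVM stage for further FP reduction.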

Detecting Mammographic Masses via Image Retrieval and Discriminative Learning

During the past half century, numerous computer-aided diagnosis (CAD) approaches have been proposed to assist the detection of masses in mammograms. Most of these methods are based on either machine learning or content-based image retrieval (CBIR) techniques, and each category has its limitations. Learning-based methods are hampered by the fact that masses vary greatly in shape and size and are often indistinguishable from surrounding tissue. CBIR-based methods, on the other hand, rely heavily on radiologist-specified suspicious regions and cannot work fully automatically. To overcome the drawbacks of both, we introduce an automatic CAD approach that integrates image retrieval and discriminative learning. A large set of previously diagnosed mammographic masses is collected to form an exemplar database. A query mammogram is first matched against all the exemplar masses, yielding a series of similarity maps. Discriminatively learned thresholds are then subtracted from these maps to eliminate noise. Finally, the individual similarity maps are aggregated, and local maxima in the final map are selected as masses. For each detected mass, the most similar exemplar masses are also presented to the radiologist. Compared with learning-based methods, our approach can achieve better mass detection accuracy since it utilizes rare exemplar masses to detect “unusual” query masses. Moreover, it provides radiologists with relevant diagnosed cases as decision support. Compared with CBIR-based methods, our approach serves as a fully automated “double reading” aid that requires no radiologist labeling of suspicious regions. Our approach is validated on a dataset constructed from the Digital Database for Screening Mammography (DDSM), consisting of 2,021 exemplar masses, 500 mammograms containing masses, and 500 mammograms depicting healthy breasts.
The proposed approach achieves high mass detection accuracy and retrieval precision, comparing favorably with traditional methods.
Menglin Jiang, Shaoting Zhang, Dimitris N. Metaxas
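The match–threshold–aggregate pipeline can be sketched as follows. Zero-mean correlation stands in for the chapter’s similarity measure, and the thresholds are assumed to be supplied by the discriminative learning step; both are illustrative choices, not the authors’ method.

```python
import numpy as np
from scipy.ndimage import correlate, maximum_filter

def detect_masses(query, exemplars, thresholds, peak_size=9):
    """Match a query image against exemplar masses, threshold and aggregate
    the per-exemplar similarity maps, and report local maxima as masses."""
    agg = np.zeros(query.shape, dtype=float)
    for ex, t in zip(exemplars, thresholds):
        sim = correlate(query.astype(float), ex - ex.mean(), mode="constant")
        agg += np.maximum(sim - t, 0.0)          # learned threshold removes noise
    peaks = (agg == maximum_filter(agg, size=peak_size)) & (agg > 0)
    return np.argwhere(peaks)                    # (row, col) coordinates
```

Each surviving peak can then be linked back to the exemplars that produced it, which is how the most similar diagnosed cases are retrieved for the radiologist.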

Computer-Aided Diagnosis


High-Order Statistics of Micro-Texton for HEp-2 Staining Pattern Classification

This study addresses the classification of HEp-2 cells in indirect immunofluorescence (IIF) image analysis, which can indicate the presence of autoimmune diseases by detecting antibodies in patient serum. In general, IIF analysis remains subjective and depends heavily on the experience and expertise of the physician. Recent studies have shown that cell patterns can be identified using IIF image analysis and machine learning techniques; however, a large gap remains between their recognition rates and those of human experts. This chapter explores an approach in which discriminative features of HEp-2 cell images in IIF are extracted and the patterns of the HEp-2 cells are then identified using machine learning techniques. The aim is to extract highly discriminant features from HEp-2 cell images by exploring a robust local descriptor inspired by Weber’s law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on its relative variance. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber’s law, and then explore a local patch, called a micro-texton, in the transformed domain as a Weber local descriptor. Furthermore, we propose to model the Weber local descriptors with a parametric probabilistic process and to extract higher-order statistics of the model parameters for image representation. The proposed strategy can adaptively characterize the Weber local descriptor space using a generative probability model and learn parameters that better fit the training data, leading to a more discriminant representation of HEp-2 cell images.
A simple linear support vector machine is used for cell pattern identification because of its low computational cost, particularly on large-scale datasets. Experiments on the open HEp-2 cell dataset used in the ICIP 2013 contest validate that the proposed strategy achieves much better performance than the widely used local binary pattern (LBP) histogram and its extensions, Rotation Invariant Co-occurrence LBP (RICLBP) and Pairwise Rotation Invariant Co-occurrence LBP (PRICoLBP), and that the achieved recognition error rate is even significantly below the observed intra-laboratory variability.
Xian-Hua Han, Yen-Wei Chen
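The differential excitation transform at the heart of the Weber local descriptor is easy to state: for each pixel, sum the intensity differences to its 8 neighbors, divide by the center intensity, and take the arctangent. A minimal sketch (the edge padding and the epsilon guard against division by zero are implementation details assumed here, not specified in the chapter):

```python
import numpy as np

def differential_excitation(img, eps=1e-6):
    """Weber differential excitation: arctan of the summed relative intensity
    differences between each pixel and its 8 neighbors."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    center = p[1:-1, 1:-1]
    total = np.zeros_like(center)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            total += p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - center
    return np.arctan(total / (center + eps))
```

Local patches (micro-textons) are then sampled in this transformed domain rather than in the raw intensity image.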

Intelligent Diagnosis of Breast Cancer Based on Quantitative B-Mode and Elastography Features

Early diagnosis of breast cancer improves patient prognosis. However, the interpretation of ultrasound images is operator-dependent. Computer-aided diagnosis (CAD) systems have therefore been proposed to assist radiologists, reducing oversight errors and increasing the cancer diagnosis rate.
Chung-Ming Lo, Ruey-Feng Chang

Categorization of Lung Tumors into Benign/Malignant, Solid/GGO, and Typical Benign/Others

In this chapter, various categorization methods for lung tumors in chest X-ray CT images are described. These include not only a method for benign/malignant classification, but also methods for solid/ground-glass opacity (GGO) classification and typical benign/others classification. Furthermore, extraction methods for lung tumors and lung blood vessels are also described as fundamental techniques for the structural analysis underlying these categorization methods.
Yasushi Hirano

Fuzzy Object Growth Model for Neonatal Brain MR Understanding

This chapter summarizes a brain region segmentation method for newborns using magnetic resonance (MR) images. The method employs a fuzzy object growth model (FOGM), an extension of the fuzzy object model. It is a four-dimensional model that provides prior knowledge of brain shape and position at any stage of growth. First, the fourth dimension of the FOGM, called the growth index in this chapter, is calculated. Because the growth index differs from person to person even within the same age group, the method estimates it from cerebral shape using manifold learning. Using the growth index, the FOGM is constructed from the training dataset. To recognize the brain region in a test subject, the method first estimates the subject’s growth index; the brain region is then segmented using fuzzy connected image segmentation with the FOGM matched to that growth index. To evaluate the method, this study segments the parenchymal region of 16 subjects (corrected age: 0–2 years) using the synthesized FOGM.
Saadia Binte Alam, Syoji Kobashi, Jayaram K. Udupa
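The growth-index estimation step can be sketched with a generic manifold-learning tool. The sketch below uses scikit-learn’s Isomap to embed per-subject shape feature vectors on a one-dimensional manifold, purely as a stand-in for whichever manifold learner the chapter actually employs; the feature vectors and neighbor count are assumptions.

```python
import numpy as np
from sklearn.manifold import Isomap

def growth_index(shape_features, n_neighbors=5):
    """Embed per-subject cerebral shape feature vectors on a one-dimensional
    manifold; the 1-D coordinate serves as a data-driven growth index."""
    iso = Isomap(n_neighbors=n_neighbors, n_components=1)
    return iso.fit_transform(shape_features).ravel()
```

For a new subject, the same embedding assigns a growth index, which then selects the matching time point of the FOGM.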

Computer-Aided Prognosis


Computer-Aided Prognosis: Accurate Prediction of Patients with Neurologic and Psychiatric Diseases via Multi-modal MRI Analysis

Multi-modal magnetic resonance imaging (MRI) is increasingly used in neuroscience research, as it allows non-invasive investigation of the structure and function of the human brain in health and disease. One of the most important applications of multi-modal MRI is the provision of vital diagnostic data for neurologic and psychiatric disorders. Traditional MRI studies using univariate analyses can only reveal disease-related structural and functional alterations at the group level, which limits clinical application; attention has therefore turned toward integrating multi-modal neuroimaging with computer-aided prognosis (CAD) technology, especially machine learning, to assist clinical diagnosis. Research in this area is growing exponentially, so it is worthwhile to review the current state and future development of this emerging field. Hence, in this chapter, based on our own studies and contributions, we review recent advances in multi-modal MRI and CAD technologies and their application to the clinical diagnosis of three common neurologic and psychiatric disorders: Alzheimer’s disease, attention deficit/hyperactivity disorder, and Tourette syndrome. We extracted multi-modal features from structural, diffusion, and resting-state functional MRI, and then applied different feature selection methods and classifiers. In addition, we applied different feature fusion schemes (e.g., multiple kernel learning) to combine multi-modal features for classification. Our experiments show that using feature fusion techniques to integrate multi-modal features yields better classification results for disease prediction, which may outline future directions in which researchers can design more advanced methods and models for neurologic and psychiatric research.
Huiguang He, Hongwei Wen, Dai Dai, Jieqiong Wang
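A fixed-weight kernel combination illustrates the multiple kernel learning idea of fusing modalities at the kernel level. Real MKL learns the weights; here they are assumed given, and the RBF kernel and its gamma are illustrative choices.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(modalities, weights, gamma=0.1):
    """Fuse modalities at the kernel level: a nonnegative weighted sum of
    per-modality RBF Gram matrices is itself a valid kernel."""
    return sum(w * rbf_kernel(X, X, gamma=gamma)
               for w, X in zip(weights, modalities))
```

An `SVC(kernel="precomputed")` can then be trained directly on the combined Gram matrix, so each modality contributes according to its weight.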

Radiomics in Medical Imaging—Detection, Extraction and Segmentation

Radiomics, a newly emerging technology, converts medical images into high-dimensional data via high-throughput extraction of quantitative features, followed by data analysis for decision support. It identifies general diagnostic or prognostic phenotypes that meet a target clinical need, providing an unprecedented opportunity to improve individualized cancer treatment at low cost. In this chapter, we introduce radiomics from its development to its clinical applications. We divide the clinical applications into three sections based on the three most common imaging modalities, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), giving a comprehensive introduction to how radiomics works, illustrated with a typical cancer type in each case. The workflow and detailed techniques are described in each section.
Jie Tian, Di Dong, Zhenyu Liu, Yali Zang, Jingwei Wei, Jiangdian Song, Wei Mu, Shuo Wang, Mu Zhou
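High-throughput feature extraction typically starts with first-order statistics of the segmented region. The sketch below computes a small illustrative subset; the bin count and the particular feature set are arbitrary choices for illustration, not a radiomics standard.

```python
import numpy as np

def first_order_features(image, mask, bins=32):
    """A small illustrative subset of first-order radiomic features
    computed over the voxels inside a segmented region of interest."""
    vals = image[mask > 0].astype(float)
    counts, _ = np.histogram(vals, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    std = vals.std()
    return {
        "mean": vals.mean(),
        "std": std,
        "skewness": ((vals - vals.mean()) ** 3).mean() / (std ** 3 + 1e-12),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

A full radiomics pipeline adds shape and texture families on top of these and feeds the resulting high-dimensional vector into the downstream decision-support analysis.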

Computer-Aided Therapy and Surgery


Markerless Tumor Gating and Tracking for Lung Cancer Radiotherapy based on Machine Learning Techniques

Respiratory motion of lung tumors poses a great challenge for radiation therapy of lung cancer patients. Traditional methods leverage external surrogates or implanted markers to indicate the position of the tumor, but these methods suffer from inaccuracy or the risk of pneumothorax. In this chapter, fluoroscopic images are employed to indicate the tumor position, and we show how machine learning techniques can be used for tumor gating and tracking. Experimental results demonstrate the effectiveness of this method without external or implanted markers. We also discuss open problems with the method and point out promising research frontiers.
Tong Lin, Yucheng Lin
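Gating reduces to a per-frame binary decision: is the tumor inside the gating window, so the beam may stay on? A minimal sketch of such a classifier on raw fluoroscopic frames; PCA plus logistic regression is an assumed stand-in for the chapter’s learning machinery, and the function name is hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_gating_classifier(frames, labels, n_components=15):
    """Classify each fluoroscopic frame as 'tumor inside the gating window'
    (beam on) or not (beam off) from the raw pixels."""
    X = frames.reshape(len(frames), -1)
    model = make_pipeline(PCA(n_components=n_components),
                          LogisticRegression(max_iter=1000))
    model.fit(X, labels)
    return model
```

Tracking extends the same idea from a binary decision to regressing the tumor coordinates frame by frame.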

Image Guided and Robot Assisted Precision Surgery

Computer-aided surgery (CAS), which integrates image guidance with surgical robot technologies, is well accepted worldwide because of its improved precision and small incisions. In this chapter, three key technologies in CAS are introduced: (1) image-processing-based guidance, which involves the analysis and integration of multimodality medical information; (2) 3D augmented-reality-based image guidance, which focuses on intuitive visualization of medical images; and (3) various surgical robots that can precisely complete complex tasks. Finally, we discuss future developments of image guidance and surgical robots in precision CAS.
Fang Chen, Jia Liu, Hongen Liao