
2012 | Book

Medical Content-Based Retrieval for Clinical Decision Support

Second MICCAI International Workshop, MCBR-CDS 2011, Toronto, ON, Canada, September 22, 2011, Revised Selected Papers

Edited by: Henning Müller, Hayit Greenspan, Tanveer Syeda-Mahmood

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the Second MICCAI Workshop on Medical Content-Based Retrieval for Clinical Decision Support, MCBR-CDS 2011, held in Toronto, Canada, in September 2011. The 11 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 17 submissions. The papers are organized in topical sections on medical image retrieval with textual approaches, visual-word-based approaches, applications, and multidimensional retrieval.

Table of Contents

Frontmatter

Workshop Overview

Overview of the Second Workshop on Medical Content–Based Retrieval for Clinical Decision Support
Abstract
The second workshop on Medical Content–Based Retrieval for Clinical Decision Support took place at the MICCAI conference in Toronto, Canada on September 22, 2011. The workshop brought together more than 40 registered researchers interested in the field of medical content–based retrieval. Eleven papers were accepted and presented at the workshop. Two invited speakers gave overviews on state–of–the–art academic research and industrial perspectives. The program was completed with a panel discussion on the role of content–based retrieval in clinical decision support. This overview introduces the main highlights and discussions in the workshop, summarizes the novelties and introduces the presented papers, which are provided in these proceedings.
Adrien Depeursinge, Hayit Greenspan, Tanveer Syeda-Mahmood, Henning Müller

Invited Speech

Content-Based Retrieval in Endomicroscopy: Toward an Efficient Smart Atlas for Clinical Diagnosis
Abstract
In this paper we present the first Content-Based Image Retrieval (CBIR) framework in the field of in vivo endomicroscopy, with applications ranging from training support to diagnosis support. We propose to adjust the standard Bag-of-Visual-Words method for the retrieval of endomicroscopic videos. Retrieval performance is evaluated both indirectly from a classification point-of-view, and directly with respect to a perceived similarity ground truth. The proposed method significantly outperforms, on two different endomicroscopy databases, several state-of-the-art methods in CBIR. With the aim of building a self-training simulator, we use retrieval results to estimate the interpretation difficulty experienced by the endoscopists. Finally, by incorporating clinical knowledge about perceived similarity and endomicroscopy semantics, we are able: 1) to learn an adequate visual similarity distance and 2) to build visual-word-based semantic signatures that extract, from low-level visual features, a higher-level clinical knowledge expressed in the endoscopists' own language.
Barbara André, Tom Vercauteren, Nicholas Ayache

Medical Image Retrieval with Textual Approaches

Biomedical Image Retrieval Using Multimodal Context and Concept Feature Spaces
Abstract
This paper presents a unified medical image retrieval method that integrates visual features and text keywords using multimodal classification and filtering. For content-based image search, concepts derived from visual features are modeled using support vector machine (SVM)-based classification of patches from local image regions. Text keywords from associated metadata provide the context and are indexed using the vector space model of information retrieval. The concept and context vectors are combined and trained for SVM classification at a global level for image modality (e.g., CT, MR, x-ray, etc.) detection. In this method, the probabilistic outputs from the modality categorization are used to filter images so that the search can be performed only on a candidate subset. An evaluation of the method on the ImageCLEFmed 2010 dataset of 77,000 images, XML annotations and topics yields a mean average precision (MAP) score of 0.1125. It demonstrates the effectiveness and efficiency of the proposed multimodal framework compared to using only a single modality or not using any classification information.
Md. Mahmudur Rahman, Sameer K. Antani, Dina Demner Fushman, George R. Thoma
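The concept-plus-context fusion and modality filtering described above can be sketched in a few lines. All data, names, and the nearest-centroid classifier below are illustrative stand-ins (the paper uses SVMs and TF-IDF), chosen to keep the sketch dependency-free:

```python
import math

# Hypothetical toy collection: each image has a visual concept histogram
# and text keywords from its metadata, plus a known modality label.
images = {
    "img1": {"visual": [0.8, 0.1, 0.1], "text": ["chest", "xray"], "modality": "XR"},
    "img2": {"visual": [0.1, 0.7, 0.2], "text": ["brain", "mr"],   "modality": "MR"},
    "img3": {"visual": [0.7, 0.2, 0.1], "text": ["hand", "xray"],  "modality": "XR"},
}

vocab = sorted({t for v in images.values() for t in v["text"]})

def text_vector(tokens):
    # Term-frequency vector over the shared vocabulary (stand-in for TF-IDF).
    return [tokens.count(t) for t in vocab]

def fused(img):
    # Concatenate the concept (visual) and context (text) feature spaces.
    return img["visual"] + text_vector(img["text"])

# Nearest-centroid modality classifier (stand-in for the SVM of the paper).
groups = {}
for img in images.values():
    groups.setdefault(img["modality"], []).append(fused(img))
centroids = {m: [sum(c) / len(vs) for c in zip(*vs)] for m, vs in groups.items()}

def predict_modality(img):
    return min(centroids, key=lambda m: math.dist(fused(img), centroids[m]))

query = {"visual": [0.75, 0.15, 0.1], "text": ["xray"]}
# Filter the collection to the predicted modality before ranking the search.
modality = predict_modality(query)
candidates = [k for k, v in images.items() if v["modality"] == modality]
```

Restricting the search to `candidates` is the filtering step: distance ranking then runs only over the candidate subset.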
Using MeSH to Expand Queries in Medical Image Retrieval
Abstract
The presence of huge collections of medical images in scientific repositories and hospital databases has given rise to increasing interest in access to this information. This paper addresses the issue, focusing on image retrieval based on textual information related to the image. The initial hypothesis is that query expansion could improve the effectiveness of image retrieval systems. In this proposal, several information elements contained in the MeSH ontology were used. The ImageCLEF 2009 and 2010 document collections were used for the experiment. Results showed a slight increase in MAP and a more significant difference when the evaluation was performed using the F-measure on the 2009 collection. The final conclusion is that query expansion alone is not sufficient to achieve a substantial improvement in the efficacy of this type of information retrieval system.
Jacinto Mata, Mariano Crespo, Manuel J. Maña
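A minimal sketch of the kind of MeSH-based query expansion studied above. The two headings and their entry terms below are a hand-made illustrative excerpt, not taken from the actual MeSH files:

```python
# Hypothetical excerpt of MeSH entry terms (synonyms) keyed by preferred heading.
mesh_entry_terms = {
    "myocardial infarction": ["heart attack", "cardiac infarction"],
    "neoplasms": ["tumors", "cancer"],
}

def expand_query(query):
    """Append MeSH entry terms for any heading found in the query."""
    terms = [query]
    for heading, synonyms in mesh_entry_terms.items():
        if heading in query.lower():
            terms.extend(synonyms)
    return " OR ".join(terms)

expanded = expand_query("Myocardial infarction CT")
```

The expanded query then goes to the ordinary text index; the paper's finding is that this broadening alone yields only modest MAP gains.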
Building Implicit Dictionaries Based on Extreme Random Clustering for Modality Recognition
Abstract
Modality recognition of a medical image based on its content only was introduced as a new subtask of the ImageCLEF 2010 challenge. To address it, we rely on a representation of images in terms of words from a visual dictionary. We introduce a very fast approach that learns implicit dictionaries, permitting the construction of compact and discriminative bags of visual words. Instead of a single computationally expensive clustering to create the dictionary, we propose a multiple random partitioning method based on Extreme Random Subspace Projection Ferns. By concatenating these multiple partitions, we can very efficiently create an implicit global quantization of the feature space and build a dictionary of visual words. Taking advantage of extreme randomization, our approach achieves very good speed on a real medical database, with better accuracy than K-means clustering.
Olivier Pauly, Diana Mateus, Nassir Navab
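The fern idea above can be sketched compactly: each fern is a handful of random binary tests, and the pattern of outcomes indexes a cell of an implicit partition, with one small histogram per fern concatenated into the bag of words. Dimensions, depths, and random thresholds below are illustrative, and the tests here are simple axis-aligned thresholds rather than the paper's random subspace projections:

```python
import random

random.seed(0)
DIM, DEPTH, N_FERNS = 8, 3, 4  # feature dim, tests per fern, number of ferns

# Each fern is a list of (feature index, threshold) binary tests.
ferns = [[(random.randrange(DIM), random.random()) for _ in range(DEPTH)]
         for _ in range(N_FERNS)]

def fern_word(feature, fern):
    # The binary test outcomes form the index of an implicit cluster cell.
    code = 0
    for idx, thr in fern:
        code = (code << 1) | (feature[idx] > thr)
    return code

def bag_of_words(features):
    # Concatenate one small histogram (2**DEPTH bins) per fern.
    hist = [0] * (N_FERNS * 2 ** DEPTH)
    for f in features:
        for i, fern in enumerate(ferns):
            hist[i * 2 ** DEPTH + fern_word(f, fern)] += 1
    return hist

patches = [[random.random() for _ in range(DIM)] for _ in range(50)]
bow = bag_of_words(patches)
```

No iterative clustering is ever run: the dictionary exists only implicitly through the random tests, which is the source of the speed advantage over K-means.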

Visual Word Based Approaches

Superpixel-Based Interest Points for Effective Bags of Visual Words Medical Image Retrieval
Abstract
The present work introduces a 2D medical image retrieval system which employs interest points derived from superpixels in a bags of visual words (BVW) framework. BVWs rely on stable interest points so that the local descriptors can be clustered into representative, discriminative prototypes (the visual words). We show that using the centers of mass of superpixels as interest points yields higher retrieval accuracy when compared to using Difference of Gaussians (DoG) or a dense grid of interest points. Evaluation is performed on two data sets. The ImageCLEF 2009 data set of 14,400 radiographs is used in a categorization setting and the results compare favorably with more specialized methods. The second set contains 13 thorax CTs and is used in a hybrid 2D/3D localization task, localizing the axial position of the lung through the retrieval of representative 2D slices.
Sebastian Haas, René Donner, Andreas Burner, Markus Holzer, Georg Langs
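The interest-point extraction above reduces to computing the center of mass of each superpixel. A minimal sketch, with a toy 4x4 label map standing in for a real over-segmentation (such as one produced by SLIC or a watershed):

```python
# Toy superpixel label map (stand-in for a real over-segmentation).
labels = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
]

def superpixel_centroids(label_map):
    """Center of mass of each superpixel, used as a stable interest point."""
    sums = {}
    for y, row in enumerate(label_map):
        for x, lab in enumerate(row):
            sy, sx, n = sums.get(lab, (0, 0, 0))
            sums[lab] = (sy + y, sx + x, n + 1)
    return {lab: (sy / n, sx / n) for lab, (sy, sx, n) in sums.items()}

points = superpixel_centroids(labels)
```

Local descriptors are then computed at these `points` instead of at DoG extrema or on a dense grid, before the usual clustering into visual words.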
Using Multiscale Visual Words for Lung Texture Classification and Retrieval
Abstract
Interstitial lung diseases (ILDs) comprise over 150 heterogeneous disorders of the lung parenchyma. High–Resolution Computed Tomography (HRCT) plays an important role in diagnosis, as standard chest x–rays are often non–specific for ILDs. Assessment of ILDs is considered difficult for clinicians because the diseases are rare, patterns often look visually similar and various clinical data need to be integrated. An image retrieval system to support interpretation of HRCT images by retrieving similar images is presented in this paper. The system uses a wavelet transform based on Difference of Gaussians (DoG) in order to extract texture descriptors from a set of 90 image series containing 1679 manually annotated regions corresponding to various ILDs. Visual words are used for feature aggregation and to describe tissue patterns. The optimal scale–progression scheme, number of visual words, as well as distance measure for clustering to generate visual words are investigated. A sufficiently high number of visual words is required to accurately describe patterns with high intra–class variations such as healthy tissue. Scale progression has less influence and the Euclidean distance performs better than other distances. The results show that the system is able to learn the wide intra–class variations of healthy tissue and the characteristics of abnormal lung tissue to provide reliable assistance to clinicians.
Antonio Foncubierta-Rodríguez, Adrien Depeursinge, Henning Müller
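Once a visual-word dictionary has been learned, describing a tissue region amounts to quantizing its texture descriptors against the dictionary and normalizing the resulting histogram. A minimal sketch with hypothetical 2D descriptors and three hand-picked visual words (real descriptors here are wavelet coefficients with many dimensions):

```python
import math

# Hypothetical learned visual words (cluster centres of texture descriptors).
visual_words = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]

def signature(descriptors, words):
    """Normalized histogram of nearest visual words: the tissue-pattern signature."""
    hist = [0] * len(words)
    for d in descriptors:
        nearest = min(range(len(words)), key=lambda i: math.dist(d, words[i]))
        hist[nearest] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Toy descriptors from one annotated lung region.
region = [[0.1, 0.0], [0.9, 0.1], [0.05, 0.02], [0.1, 0.95]]
sig = signature(region, visual_words)
```

Retrieval then compares region signatures with a histogram distance; the paper's experiments vary the dictionary size, the scale progression of the wavelet, and the clustering distance used to learn `visual_words`.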

Applications

Histology Image Indexing Using a Non-negative Semantic Embedding
Abstract
Large on-line collections of biomedical images are becoming more common and may be a potential source of knowledge. An important unsolved issue that is actively investigated is efficient and effective access to these repositories. A good access strategy demands an appropriate indexing of the collection. This paper presents a new method for indexing histology images using multimodal information, taking advantage of two kinds of data: visual data extracted directly from images and available text data from annotations performed by experts. The new strategy, called Non-negative Semantic Embedding, defines a mapping between visual and text data, assuming that the latent space spanned by text annotations is a good enough representation of the image semantics. Evaluation of the proposed method is carried out by comparing it with other strategies, showing a remarkable image search improvement, since the proposed approach effectively exploits the semantic relationships among images.
Jorge A. Vanegas, Juan C. Caicedo, Fabio A. González, Eduardo Romero
A Discriminative Distance Learning–Based CBIR Framework for Characterization of Indeterminate Liver Lesions
Abstract
In this paper we propose a novel learning–based CBIR method for fast content–based retrieval of similar 3D images based on the intrinsic Random Forest (RF) similarity. Furthermore, we allow the combination of flexible user–defined semantics (in the form of retrieval contexts and high–level concepts) and appearance–based (low–level) features in order to yield search results that are both meaningful to the user and relevant in the given clinical case. Due to the complexity and clinical relevance of the domain, we have chosen to apply the framework to the retrieval of similar 3D CT hepatic pathologies, where search results based solely on similarity of low–level features would be rarely clinically meaningful. The impact of high–level concepts on the quality and relevance of the retrieval results has been measured and is discussed for three different set–ups. A comparison study with the commonly used canonical Euclidean distance is presented and discussed as well.
María Jimena Costa, Alexey Tsymbal, Matthias Hammon, Alexander Cavallaro, Michael Sühling, Sascha Seifert, Dorin Comaniciu
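The intrinsic Random Forest similarity above is commonly defined as the fraction of trees in which two samples fall into the same leaf. The leaf indices below are hypothetical (in practice each case is passed down each trained tree to obtain them), so this sketch shows only the similarity computation and the resulting ranking:

```python
# Hypothetical leaf indices for three cases across four trees of a trained forest.
leaves = {
    "case_a": [3, 1, 7, 2],
    "case_b": [3, 1, 5, 2],
    "case_c": [0, 4, 6, 9],
}

def rf_similarity(x, y):
    """Fraction of trees in which both cases land in the same leaf."""
    same = sum(a == b for a, b in zip(leaves[x], leaves[y]))
    return same / len(leaves[x])

# Rank database cases by decreasing similarity to the query case.
ranked = sorted(["case_b", "case_c"],
                key=lambda c: rf_similarity("case_a", c), reverse=True)
```

Because the forest is trained with labels (or user-defined retrieval contexts), this similarity is discriminative rather than purely appearance-based, which is the point of the paper's framework.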
Computer–Aided Diagnosis of Pigmented Skin Dermoscopic Images
Abstract
Diagnosis of benign and malignant skin lesions currently relies mostly on visual assessment and frequent biopsies performed by dermatologists. As the timely and correct diagnosis of these skin lesions is one of the most important factors in the therapeutic outcome, leveraging new technologies to assist the dermatologist seems natural. In this paper we propose a machine learning approach to classify melanocytic lesions into malignant and benign from dermoscopic images. The dermoscopic image database is composed of 4240 benign lesions and 232 malignant melanomas. For segmentation we use multiphase soft segmentation with total variation and H1 regularization. Then, each lesion is characterized by a feature vector that contains shape, color and texture information, as well as local and global parameters that try to reflect structures used in medical diagnosis. The learning and classification stage is performed using SVMs with polynomial kernels. The classification delivered an accuracy of 98.57% with a true positive rate of 0.991 and a false positive rate of 0.019.
Asad Safi, Maximilian Baust, Olivier Pauly, Victor Castaneda, Tobias Lasser, Diana Mateus, Nassir Navab, Rüdiger Hein, Mahzad Ziai

Multidimensional Retrieval

Texture Bags: Anomaly Retrieval in Medical Images Based on Local 3D-Texture Similarity
Abstract
Providing efficient access to the huge amounts of existing medical imaging data is a highly relevant but challenging problem. In this paper, we present an effective method for content-based image retrieval (CBIR) of anomalies in medical imaging data, based on similarity of local 3D texture. During learning, a texture vocabulary is obtained from training data in an unsupervised fashion by extracting the dominant structure of texture descriptors. It is based on a 3D extension of the Local Binary Pattern operator (LBP), and captures texture properties via descriptor histograms of supervoxels, or texture bags. For retrieval, our method computes a texture histogram of a query region marked by a physician, and searches for similar bags via diffusion distance. The retrieval result is a ranked list of cases based on the occurrence of regions with similar local texture structure. Experiments show that the proposed local texture retrieval approach outperforms analogous global similarity measures.
Andreas Burner, René Donner, Marius Mayerhoefer, Markus Holzer, Franz Kainberger, Georg Langs
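The texture-bag construction above rests on Local Binary Pattern histograms. A drastically simplified 2D sketch (the paper uses a 3D LBP extension over supervoxels, and diffusion distance rather than a plain histogram comparison); the toy region values are illustrative:

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code of pixel (y, x)."""
    c = img[y][x]
    neigh = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
             img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, v in enumerate(neigh):
        code |= (v >= c) << bit
    return code

def texture_bag(img):
    """256-bin LBP histogram of an image region -- a 'texture bag'."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

# Toy region: a bright blob on a dark background.
region = [
    [10, 10, 10, 10],
    [10, 90, 90, 10],
    [10, 90, 90, 10],
    [10, 10, 10, 10],
]
bag = texture_bag(region)
```

At query time the physician marks a region, its texture bag is computed the same way, and cases are ranked by how many of their regions have similar bags.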
Evaluation of Fast 2D and 3D Medical Image Retrieval Approaches Based on Image Miniatures
Abstract
The present work evaluates four medical image retrieval approaches based on features derived from image miniatures. We argue that due to the restricted domain of medical image data, the standardized acquisition protocols and the absence of a potentially cluttered background, a holistic image description is sufficient to capture high-level image similarities. We compare four different miniature 2D and 3D descriptors and corresponding metrics, in terms of their retrieval performance: (A) plain miniatures together with Euclidean distances in a k-Nearest-Neighbor-based retrieval backed by kD-trees; (B) correlations of rigidly aligned miniatures, initialized using the kD-tree; (C) distribution fields together with the ℓ1-norm; (D) SIFT-like histograms of gradients using the χ²-distance. We evaluate the approaches on two data sets: the ImageCLEF 2009 benchmark of 2D radiographs with the aim to categorize the images, and a large set of 3D CTs representing a realistic sample of a hospital PACS with the objective to estimate the location of the query volume.
René Donner, Sebastian Haas, Andreas Burner, Markus Holzer, Horst Bischof, Georg Langs
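Variant (A) above is the simplest of the four: block-average each image to a tiny miniature and retrieve nearest neighbours in Euclidean distance. A minimal sketch with toy 4x4 "images" and brute-force search standing in for the kD-tree of the paper:

```python
import math

def miniature(img, size=2):
    """Block-average an image down to size x size -- the holistic descriptor."""
    h, w = len(img), len(img[0])
    by, bx = h // size, w // size
    out = []
    for i in range(size):
        for j in range(size):
            block = [img[y][x] for y in range(i * by, (i + 1) * by)
                               for x in range(j * bx, (j + 1) * bx)]
            out.append(sum(block) / len(block))
    return out

# Toy database of two 4x4 images with different intensity layouts.
database = {
    "bright_top":  [[9] * 4] * 2 + [[1] * 4] * 2,
    "bright_left": [[9, 9, 1, 1]] * 4,
}
db_vecs = {k: miniature(v) for k, v in database.items()}

def knn(query, k=1):
    q = miniature(query)
    # Brute-force nearest neighbours (a kD-tree accelerates this, as in the paper).
    return sorted(db_vecs, key=lambda name: math.dist(q, db_vecs[name]))[:k]

best = knn([[8] * 4] * 2 + [[2] * 4] * 2)
```

Variants (B) through (D) replace the plain miniature and Euclidean metric with aligned correlations, distribution fields, or gradient histograms, but the retrieval loop stays the same.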
Semantic Analysis of 3D Anatomical Medical Images for Sub-image Retrieval
Abstract
Voluminous medical images are critical assets for clinical decision support systems. Retrieval based on the image content can help the clinician in mining images relevant to the current case from a large database. In this paper we address the problem of retrieving relevant sub-images with similar anatomical structures as that of the query image across modalities. The images in the database are automatically annotated with information regarding the body region depicted in the scan and the organs present, along with their localizing bounding boxes. For this purpose, initially a coarse localization of body regions is done in the 2D space taking contextual information into account. Following this, finer localization and verification of organs is done using a novel, computationally efficient fuzzy approximation method for constructing 3D texture signatures of organs of interest. They are then indexed using an inverted-file data structure which helps in ranked retrieval of relevant images. Apart from retrieving sub-images across modalities by image example, automatic annotation and efficient indexing allow query by text, limited only by the semantic vocabulary. The algorithm was tested on a database of non-contrast CT and T1-weighted MR volumes. Quantitative assessment was performed using a ground-truth database validated by medical experts.
Vikram Venkatraghavan, Sohan Ranjan
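The inverted-file indexing and text query described above can be sketched as follows. Volume identifiers and organ annotations are hypothetical; in the paper the annotations are produced automatically by the localization pipeline:

```python
from collections import defaultdict

# Hypothetical automatic annotations: organs detected in each volume.
annotations = {
    "ct_001": ["liver", "kidney"],
    "mr_007": ["liver", "spleen"],
    "ct_015": ["lung"],
}

# Inverted file: organ label -> set of volumes containing it.
inverted = defaultdict(set)
for vol, organs in annotations.items():
    for organ in organs:
        inverted[organ].add(vol)

def query_by_text(*organs):
    """Ranked retrieval: volumes matching the most query terms come first."""
    scores = defaultdict(int)
    for organ in organs:
        for vol in inverted.get(organ, ()):
            scores[vol] += 1
    return sorted(scores, key=lambda v: (-scores[v], v))

hits = query_by_text("liver", "kidney")
```

Query-by-example works the same way once the query volume has been annotated by the pipeline: its detected organ labels are looked up in the same inverted file.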
Backmatter
Metadata
Title
Medical Content-Based Retrieval for Clinical Decision Support
Edited by
Henning Müller
Hayit Greenspan
Tanveer Syeda-Mahmood
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-28460-1
Print ISBN
978-3-642-28459-5
DOI
https://doi.org/10.1007/978-3-642-28460-1