
2015 | Book

Patch-Based Techniques in Medical Imaging

First International Workshop, Patch-MI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 9, 2015, Revised Selected Papers

Edited by: Guorong Wu, Pierrick Coupé, Yiqiang Zhan, Brent Munsell, Daniel Rueckert

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the First International Workshop on Patch-based Techniques in Medical Imaging, Patch-MI 2015, which was held in conjunction with MICCAI 2015 in Munich, Germany, in October 2015.

The 25 full papers presented in this volume were carefully reviewed and selected from 35 submissions. Topics covered include: image segmentation of anatomical structures or lesions; image enhancement; computer-aided prognosis and diagnosis; multi-modality fusion; mono- and multi-modal image synthesis; image retrieval; dynamic, functional, physiologic, and anatomic imaging; super-pixels/voxels in medical image analysis; sparse dictionary learning and sparse coding; and analysis of 2D, 2D+t, 3D, 3D+t, 4D, and 4D+t data.

Table of Contents

Frontmatter
A Multi-level Canonical Correlation Analysis Scheme for Standard-Dose PET Image Estimation
Abstract
In order to obtain a positron emission tomography (PET) image with diagnostic quality, we seek to estimate a standard-dose PET (S-PET) image from its low-dose counterpart (L-PET), instead of obtaining the S-PET image directly by injecting a standard-dose radioactive tracer into the patient. The risk of radiation exposure can thereby be significantly reduced. To achieve this goal, one possible way is to first map both S-PET and L-PET data into a common space and then perform a patch-based estimation of S-PET from L-PET patches. However, using all training data to globally learn the common space may not lead to an optimal estimation of a particular target S-PET patch. In this paper, we introduce a data-driven multi-level Canonical Correlation Analysis (m-CCA) scheme to tackle this problem. Specifically, a subset of training data that is most useful in estimating a target S-PET patch is identified in each level, and using this selected training data in the subsequent level leads to more accurate common-space mapping and improved estimation. In addition, we leverage multi-modal magnetic resonance (MR) images to provide complementary information to the estimation from L-PET. Validation on a real human brain dataset demonstrates the advantage of our method compared to other techniques.
Le An, Pei Zhang, Ehsan Adeli-Mosabbeb, Yan Wang, Guangkai Ma, Feng Shi, David S. Lalush, Weili Lin, Dinggang Shen
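The common-space idea underlying m-CCA can be illustrated with plain, single-level CCA on synthetic paired patch vectors. This is only a sketch of the building block, not the authors' multi-level implementation; the data, dimensions, and regularization below are illustrative assumptions.

```python
import numpy as np

def cca_projections(X, Y, n_components=1, reg=1e-3):
    """Classical CCA via SVD of the whitened cross-covariance.

    X, Y: (n_samples, n_features) paired patch matrices
    (e.g. L-PET and S-PET patches). Returns projection
    matrices Wx, Wy mapping each view into a common space."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):  # symmetric inverse square root for whitening
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return Kx @ U[:, :n_components], Ky @ Vt.T[:, :n_components]

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))             # shared structure across views
X = latent @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))
Y = latent @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(200, 4))
Wx, Wy = cca_projections(X, Y)
u = (X - X.mean(0)) @ Wx                       # both views now live in
v = (Y - Y.mean(0)) @ Wy                       # a comparable common space
corr = np.corrcoef(u.ravel(), v.ravel())[0, 1]
print(round(abs(corr), 2))
```

In the common space, a target L-PET patch can then be matched to training pairs and the corresponding S-PET patches averaged to form the estimate.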
Image Super-Resolution by Supervised Adaption of Patchwise Self-similarity from High-Resolution Image
Abstract
Image super-resolution is of great interest in the medical imaging field. However, unlike the natural images studied in computer vision, low-resolution (LR) medical imaging data is often a stack of high-resolution (HR) 2D slices with large slice thickness. Consequently, the goal of super-resolution for medical imaging data is to reconstruct the missing slice(s) between any two consecutive slices. Since some modalities (e.g., T1-weighted MR images) are often acquired at high resolution, it is intuitive to harness the prior self-similarity information in the HR image to guide the super-resolution of the LR image (e.g., a T2-weighted MR image). The conventional way is to find the profile of patchwise self-similarity in the HR image and then use it to reconstruct the missing information at the same location in the LR image. However, local morphological patterns can vary significantly across the LR and HR images, due to the use of different imaging protocols. Therefore, such direct (unsupervised) adaption of the self-similarity profile from the HR image is often not effective in revealing the actual information in the LR image. To this end, we propose to employ the existing image information in the LR image to supervise the estimation of the self-similarity profile, by requiring it not only to optimally represent patches in the HR image but also to produce small reconstruction errors for the existing image information in the LR image. Moreover, to make the anatomical structures spatially consistent in the reconstructed image, we simultaneously estimate the self-similarity profiles for a stack of patches across consecutive slices by solving a group sparse patch representation problem. We have evaluated our proposed super-resolution method on both simulated brain MR images and real patient images with multiple sclerosis lesions, achieving promising results with more anatomical detail and sharpness.
Guorong Wu, Xiaofeng Zhu, Qian Wang, Dinggang Shen
Automatic Hippocampus Labeling Using the Hierarchy of Sub-region Random Forests
Abstract
In this paper, we propose a multi-atlas-based framework for labeling hippocampus regions in MR images. Our work extends random forest techniques for better performance and contains two novel contributions. First, we design a novel strategy for training forests which ensures that each forest is specialized in labeling a certain sub-region of the hippocampus. In the testing stage, a novel approach is presented for automatically finding the forests relevant to the corresponding sub-regions of the test image. Second, we present a novel localized registration strategy, which further reduces the shape variations of the hippocampus region in each atlas and thus provides better support for the proposed sub-region random forest approach. We validate the proposed framework on the ADNI dataset, in which atlases from NC, MCI, and AD subjects are randomly selected for the experiments. The results demonstrate the validity of the proposed framework, showing that it yields better performance than conventional random forest techniques.
Lichi Zhang, Qian Wang, Yaozong Gao, Guorong Wu, Dinggang Shen
Isointense Infant Brain Segmentation by Stacked Kernel Canonical Correlation Analysis
Abstract
Segmentation of isointense infant brain (at ~6 months of age) MR images is challenging due to the ongoing maturation and myelination processes in the first year of life. In particular, the signal contrast between white and gray matter inverts around 6 months of age, when brain tissues appear isointense and exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address these challenges based on stacked kernel canonical correlation analysis (KCCA). Our main idea is to utilize the 12-month-old brain image with high tissue contrast to guide the segmentation of 6-month-old brain images with extremely low contrast. Specifically, we use KCCA to learn common feature representations for the 6-month-old and subsequent 12-month-old brain images of the same subjects, making their features comparable in the common space. Note that the longitudinal 12-month-old brain images are not required in the testing stage; they are needed only in the KCCA-based training stage, to provide a set of longitudinal 6- and 12-month-old image pairs for training. Moreover, to optimize the common feature representations, we propose a stacked KCCA mapping, instead of only the conventional one-step KCCA mapping. In this way, we can better use the 12-month-old brain images as multiple atlases to guide the segmentation of isointense brain images. Specifically, sparse patch-based multi-atlas labeling is used to propagate tissue labels from the (12-month-old) atlases and segment isointense brain images by measuring patch similarity between testing and atlas images with their learned common features. The proposed method was evaluated on 20 isointense brain images via leave-one-out cross-validation, showing much better performance than state-of-the-art methods.
Li Wang, Feng Shi, Yaozong Gao, Gang Li, Weili Lin, Dinggang Shen
Improving Accuracy of Automatic Hippocampus Segmentation in Routine MRI by Features Learned from Ultra-High Field MRI
Abstract
Ultra-high-field MR imaging (e.g., 7T) provides unprecedented spatial resolution and superior signal-to-noise ratio compared to routine MR imaging (e.g., 1.5T and 3T). It allows precise depiction of small anatomical structures such as the hippocampus and further benefits the diagnosis of neurodegenerative diseases. However, routine MR imaging is still the one mainly used in research and clinical studies, where accurate hippocampus segmentation is desired. In this paper, we present an automatic method for segmenting the hippocampus from routine MR images by learning 7T-like features from training 7T MR images. Our main idea is to map features of the routine MR image to be similar to 7T image features, thus increasing their discriminability for hippocampus segmentation. Specifically, we propose a patch-based mapping method that maps image patches of the routine MR images to the space of image patches of the 7T MR images. Thus, for each patch in the routine MR image, we can generate a new mapped patch with a 7T-like pattern. Using those mapped patches, we then train a sequence of random forest classifiers for hippocampus segmentation based on the appearance, texture, and context features of the mapped patches. Finally, hippocampi in a test image can be segmented by applying the learned image patch mapping and the trained classifiers. Experimental results show that the accuracy of hippocampus segmentation can be significantly improved by using our learned 7T-like image features, in comparison to the direct use of features extracted from routine MR images.
Shuyu Li, Feng Shi, Guangkai Ma, Minjeong Kim, Dinggang Shen
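The patch-space mapping step can be sketched under the simplifying assumption of a single linear map learned by ridge regression over paired patch vectors; the paper's actual mapping may differ, and all data below is synthetic.

```python
import numpy as np

def learn_patch_mapping(P_routine, P_7t, lam=1e-2):
    """Learn a linear map M from routine-MRI patch space to
    7T-like patch space by ridge regression:
        minimize ||P_routine @ M - P_7t||^2 + lam * ||M||^2
    P_routine, P_7t: (n_patches, patch_dim) paired training patches."""
    d = P_routine.shape[1]
    A = P_routine.T @ P_routine + lam * np.eye(d)
    return np.linalg.solve(A, P_routine.T @ P_7t)

rng = np.random.default_rng(1)
M_true = rng.normal(size=(9, 9))        # hypothetical ground-truth mapping
P_lo = rng.normal(size=(500, 9))        # flattened 3x3 routine-MRI patches
P_hi = P_lo @ M_true + 0.05 * rng.normal(size=(500, 9))  # paired 7T patches
M = learn_patch_mapping(P_lo, P_hi)

# Apply the learned map to unseen routine patches -> "7T-like" patches
P_test = rng.normal(size=(10, 9))
err = np.linalg.norm(P_test @ M - P_test @ M_true) / np.linalg.norm(P_test @ M_true)
print(round(err, 3))
```

The mapped patches would then feed the segmentation classifiers in place of the raw routine-MRI features.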
Dual-Layer \(\ell _1\)-Graph Embedding for Semi-supervised Image Labeling
Abstract
In non-local patch-based (NLPB) labeling, a target voxel fuses its label from the manual labels of atlas voxels in accordance with patch-based voxel similarities. While state-of-the-art NLPB methods mainly focus on labeling a single target image using many atlases, we propose a novel semi-supervised strategy to address the realistic case of only a few atlases yet many unlabeled targets. Specifically, we create an \(\ell _1\)-graph of voxels, such that each target voxel can fuse its label not only from atlas voxels but also from other target voxels. Meanwhile, each atlas voxel can utilize feedback from the graph to check whether its expert labeling needs to be corrected. The \(\ell _1\)-graph is built by applying (dual-layer) sparsity learning to all target and atlas voxels, represented by their surrounding patches. By embedding the voxel labels in the graph, the target voxels can jointly compute their labels. In our experiments, the method's capabilities of (1) joint labeling and (2) atlas label correction significantly enhanced the accuracy of NLPB labeling.
Qian Wang, Guorong Wu, Dinggang Shen
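The sparse coding that underlies such an \(\ell _1\)-graph can be sketched with a basic ISTA solver: each voxel's patch is reconstructed from a dictionary of other patches, and the nonzero coefficients become graph edge weights. Everything below (dictionary size, penalty, data) is an illustrative assumption, not the paper's dual-layer formulation.

```python
import numpy as np

def sparse_code_ista(D, y, lam=0.05, n_iter=1500):
    """Solve min_w 0.5*||y - D w||^2 + lam*||w||_1 with ISTA.
    D: (dim, n_atoms) dictionary of neighbouring patches;
    the sparse w gives the l1-graph edge weights for patch y."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ w - y)              # gradient of the quadratic term
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(2)
D = rng.normal(size=(16, 40))              # 40 candidate patches, dim 16
w_true = np.zeros(40)
w_true[[3, 17]] = [1.0, -0.5]              # y really depends on two patches
y = D @ w_true + 0.01 * rng.normal(size=16)
w = sparse_code_ista(D, y)
print(np.flatnonzero(np.abs(w) > 0.1))     # dominant supports
```

Repeating this for every voxel yields the sparse adjacency over which labels are jointly propagated.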
Automatic Liver Tumor Segmentation in Follow-Up CT Scans: Preliminary Method and Results
Abstract
We present a new, fully automatic algorithm for liver tumor segmentation in follow-up CT studies. The inputs are a baseline CT scan, a delineation of the tumors in it, and a follow-up scan; the outputs are the tumor delineations in the follow-up CT scan. The algorithm starts by defining a region of interest using a deformable registration of the baseline scan and tumor delineations to the follow-up CT scan, together with automatic liver segmentation. It then constructs a voxel classifier by training a Convolutional Neural Network (CNN). Finally, it segments the tumors in the follow-up study with the learned classifier. The main novelty of our method is the combination of follow-up-based detection with CNN-based segmentation. Our experimental results on 67 tumors from 21 patients, with ground-truth segmentations approved by a radiologist, yield a success rate of 95.4% and an average overlap error of 16.3% (std = 10.3).
Refael Vivanti, Ariel Ephrat, Leo Joskowicz, Naama Lev-Cohain, Onur A. Karaaslan, Jacob Sosna
Block-Based Statistics for Robust Non-parametric Morphometry
Abstract
Automated algorithms designed for comparison of medical images generally depend on a sufficiently large dataset and highly accurate registration, as they implicitly assume that the comparison is made across a set of images with locally matching structures. However, the sample size is often limited, and registration methods are not perfect and may be prone to errors due to noise, artifacts, and complex variations of brain topology. In this paper, we propose a novel statistical group comparison algorithm, called block-based statistics (BBS), which reformulates the conventional comparison framework from a non-local means perspective in order to learn what the statistics would have been given perfect correspondence. Through this formulation, BBS (1) explicitly considers image registration errors to reduce reliance on high-quality registrations, (2) increases the number of samples for statistical estimation by collapsing measurements from similar signal distributions, and (3) diminishes the need for large image sets. BBS is based on a permutation test, and hence no distributional assumption, such as Gaussianity, is imposed. Experimental results indicate that BBS yields markedly improved lesion detection accuracy, especially with limited sample size, is more robust to sample imbalance, and converges faster to the results expected for a large sample size.
Geng Chen, Pei Zhang, Ke Li, Chong-Yaw Wee, Yafeng Wu, Dinggang Shen, Pew-Thian Yap
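The permutation-test backbone that BBS builds on can be sketched as follows; the two synthetic groups stand in for block-wise measurements and are not from the paper.

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, rng=None):
    """Two-sample permutation test on the difference of means.
    Group labels are repeatedly shuffled to build the null
    distribution, so no Gaussianity assumption is needed."""
    rng = rng if rng is not None else np.random.default_rng(0)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(perm[:a.size].mean() - perm[a.size:].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)      # small-sample-safe p-value

rng = np.random.default_rng(3)
group1 = rng.normal(0.0, 1.0, size=50)     # e.g. control measurements
group2 = rng.normal(1.2, 1.0, size=50)     # e.g. patient measurements
p = permutation_test(group1, group2)
print(p)
```

BBS would run such a test over measurements pooled from similar blocks rather than single voxels, which is where the gain in effective sample size comes from.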
Automatic Collimation Detection in Digital Radiographs with the Directed Hough Transform and Learning-Based Edge Detection
Abstract
Collimation is widely used in X-ray examinations to reduce the overall radiation exposure to the patient and improve the contrast resolution in the region of interest (ROI), which has been exposed directly to X-rays. It is desirable to detect the region of interest and exclude the unexposed area to optimize the image display. Although we focus only on X-ray images generated with a rectangular collimator, the task remains challenging because of the large variability of collimated images. In this study, we detect the region of interest as an optimal quadrilateral, defined as the intersection of an optimal group of four half-planes, where each half-plane is the positive side of a directed straight line. We develop an extended Hough transform for directed straight lines on a model-aware gray-level edge map, which is estimated with random forests [1] on features of pairs of superpixels. Experiments show that our algorithm can extract the region of interest quickly and accurately, despite variations in size, shape, and orientation, and despite incomplete boundaries.
Liang Zhao, Zhigang Peng, Klaus Finkler, Anna Jerebko, Jason J. Corso, Xiang (Sean) Zhou
Efficient Lung Cancer Cell Detection with Deep Convolution Neural Network
Abstract
Lung cancer cell detection serves as an important step in the automation of cell-based lung cancer diagnosis. In this paper, we propose a robust and efficient lung cancer cell detection method based on an accelerated Deep Convolutional Neural Network (DCNN) framework. The efficiency of the proposed method is demonstrated in two aspects: (1) We adopt a training strategy that learns the DCNN model parameters from only weakly annotated cell information (one click near the nucleus location). This technique significantly reduces both the manual annotation cost and the training time. (2) We introduce a novel DCNN forward acceleration technique into our method, which speeds up the cell detection process by several hundred times compared to the conventional sliding-window-based DCNN. In the reported experiments, state-of-the-art accuracy and impressive efficiency are demonstrated on a lung cancer histopathological image dataset.
Zheng Xu, Junzhou Huang
An Effective Approach for Robust Lung Cancer Cell Detection
Abstract
As lung cancer is one of the most frequent and serious diseases causing death for both men and women, early diagnosis and differentiation of lung cancers is clinically important. Lung cancer cell detection is the most basic step among computer-aided histopathological lung image analysis applications. We propose an automatic lung cancer cell detection method based on a deep convolutional neural network. In this method, only weakly annotated images are needed to obtain the image patches used as the training set. The detection problem is formulated in a deep learning framework using these patches efficiently, and feature extraction is performed through the training of the deep convolutional neural network. A challenging clinical use case including hundreds of patients' lung cancer histopathological images is used in our experiments. Our method achieves promising performance on lung cancer cell detection in terms of both accuracy and efficiency.
Hao Pan, Zheng Xu, Junzhou Huang
Laplacian Shape Editing with Local Patch Based Force Field for Interactive Segmentation
Abstract
Segmenting structures of interest is a fundamental problem in medical image analysis, and numerous automatic segmentation algorithms have been extensively studied for the task. However, misleading image information and complex organ structures with high-curvature boundaries may cause under- or over-segmentation for deformable models. Learning-based approaches can alleviate this issue, but they usually require a large number of representative training samples for each use case, which may not be available in practice. On the other hand, manually correcting segmentation errors produces good results, and doctors would like such tools to improve accuracy in local areas. Therefore, we propose a 3D editing framework to interactively and efficiently refine segmentation results by editing the mesh directly. Specifically, the shape editing framework is modeled by integrating the Laplacian coordinates, image context information, and user-specified control points. We employ a local patch-based optimization to enhance the supplemental force field near the control points and improve correction accuracy. Our method requires few (and intuitive) user inputs, and the experimental results show competitive performance of our interactive refinement compared to other state-of-the-art methods.
Chaowei Tan, Zhennan Yan, Kang Li, Dimitris Metaxas, Shaoting Zhang
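The Laplacian-coordinate editing at the core of such a framework can be illustrated on a toy 2D contour: the Laplacian coordinates encode local shape detail, and edited vertex positions are found by least squares under soft positional constraints. The connectivity, constraint weight, and handle position below are illustrative choices, not the paper's settings.

```python
import numpy as np

# Closed polygon (a stand-in for a surface mesh): each vertex's
# Laplacian coordinate encodes local shape detail.
n = 12
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
V = np.c_[np.cos(t), np.sin(t)]                # original circular contour

A = np.zeros((n, n))
for i in range(n):                              # ring connectivity
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1
L = np.diag(A.sum(1)) - A                       # graph Laplacian
delta = L @ V                                   # detail-preserving coordinates

# Soft position constraints: drag vertex 0 outward, pin vertex 6.
w = 10.0
C = np.zeros((2, n))
C[0, 0] = C[1, 6] = 1.0
targets = np.array([[1.8, 0.0], V[6]])

# Minimize ||L V' - delta||^2 + w^2 ||C V' - targets||^2
A_sys = np.vstack([L, w * C])
b_sys = np.vstack([delta, w * targets])
V_new, *_ = np.linalg.lstsq(A_sys, b_sys, rcond=None)
print(np.round(V_new[0], 2))                    # vertex 0 follows the handle
```

The interactive system adds image-derived force terms to this least-squares system, but the mechanics of the edit are as above.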
Hippocampus Segmentation Through Distance Field Fusion
Abstract
In this paper, we propose a novel automatic hippocampus segmentation framework called distance field fusion (DFF). We perform a distance transformation on each label image of the training subjects and obtain a distance field (DF), which contains the shape prior information of the hippocampus. Magnetic resonance (MR) and DF dictionaries are constructed for each voxel in the testing MR image. We use the MR dictionary with a local linear representation to represent the testing sample, and then use the corresponding coefficients with the DF dictionary to predict the DF of the testing sample. The label of the testing image is predicted by thresholding the DF. The proposed method was evaluated on brain images from the MICCAI 2013 challenge dataset of 35 subjects. The proposed method is shown to be effective, yielding mean Dice similarity coefficients of 0.8822 and 0.8710 for the right and left hippocampi, respectively.
Shumao Pang, Zhentai Lu, Wei Yang, Yao Wu, Zixiao Lu, Liming Zhong, Qianjin Feng
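The distance transformation that produces the DF shape prior can be sketched with a brute-force Euclidean distance transform on a toy label image; a real pipeline would use a fast EDT such as scipy.ndimage.distance_transform_edt.

```python
import numpy as np

def distance_field(mask):
    """Brute-force Euclidean distance transform: for every voxel,
    the distance to the nearest foreground (label) voxel.
    Fine for tiny label images; O(N * |foreground|) in general."""
    fg = np.argwhere(mask)
    coords = np.indices(mask.shape).reshape(mask.ndim, -1).T
    d = np.sqrt(((coords[:, None, :] - fg[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape)

label = np.zeros((7, 7), dtype=bool)    # toy hippocampus label image
label[2:5, 2:5] = True
df = distance_field(label)
print(df[3, 3], round(df[0, 0], 3))     # zero inside the label, large far away
```

Thresholding a predicted DF at zero (or a small value) recovers a binary label, which is the final step of the DFF pipeline.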
Learning a Spatiotemporal Dictionary for Magnetic Resonance Fingerprinting with Compressed Sensing
Abstract
Magnetic resonance fingerprinting (MRF) is a novel technique that allows for the fast and simultaneous quantification of multiple tissue properties, progressing from qualitative images, such as T1- or T2-weighted images commonly used in clinical routines, to quantitative parametric maps. MRF consists of two main elements: accelerated pseudorandom acquisitions that create unique signal evolutions over time and the voxel-wise matching of these signals to a dictionary simulated using the Bloch equations. In this study, we propose to increase the performance of MRF by not only considering the simulated temporal signal, but a full spatiotemporal neighborhood for parameter reconstruction. We achieve this goal by first training a dictionary from a set of spatiotemporal image patches and subsequently coupling the trained dictionary with an iterative projection algorithm consistent with the theory of compressed sensing (CS). Using data from BrainWeb, we show that the proposed patch-based reconstruction can accurately recover T1 and T2 maps from highly undersampled k-space measurements, demonstrating the added benefit of using spatiotemporal dictionaries in MRF.
Pedro A. Gómez, Cagdas Ulas, Jonathan I. Sperl, Tim Sprenger, Miguel Molina-Romero, Marion I. Menzel, Bjoern H. Menze
Fast Regions-of-Interest Detection in Whole Slide Histopathology Images
Abstract
In this paper, we present a novel superpixel-based Region of Interest (ROI) search and segmentation algorithm. The proposed superpixel generation method differs from earlier work in its combination of boundary update and coarse-to-fine refinement for superpixel clustering. The former maintains segmentation accuracy while avoiding many unnecessary revisits to 'non-boundary' pixels; the latter reduces complexity by localizing boundary blocks faster. The paper introduces this superpixel algorithm [10] to the problem of ROI detection and segmentation, along with a coarse-to-fine refinement scheme over a set of images at different magnifications. Extensive experiments indicate that the proposed method achieves better accuracy and efficiency than other superpixel-based methods on lung cancer cell images. Moreover, the block-wise coarse-to-fine scheme enables quick search and segmentation of ROIs in whole slide images, which other methods still cannot achieve.
Ruoyu Li, Junzhou Huang
Reliability Guided Forward and Backward Patch-Based Method for Multi-atlas Segmentation
Abstract
Label fusion is an important step in multi-atlas-based segmentation: it uses label propagation from multiple atlases to predict the final label. However, most current label fusion methods treat each voxel equally and independently during label fusion. In general, misclassified voxels lie at the edges of ROIs, whereas voxels labeled correctly with high reliability lie far from the edges. In light of this, we propose a novel framework for multi-atlas-based image segmentation that uses voxels of the target image with high reliability to guide the labeling of voxels with low reliability, affording more accurate label fusion. Specifically, we first measure the labeling reliability of each voxel based on a traditional label fusion result, i.e., the non-local mean weighted voting method. In the second step, we use the voxels with high reliability to guide the label fusion process while taking the spatial relationships between voxels into account: we propagate not only labels from the atlases but also labels from neighboring target voxels with high reliability. We also supply an alternative method that uses the backward non-local mean patch-based method for reliability estimation. We compare our method with the non-local mean patch-based method by applying all methods to the segmentation of 32 regions of interest in the NIREP dataset. Experimental results show that our method improves on the performance of the non-local mean patch-based method.
Liang Sun, Chen Zu, Daoqiang Zhang
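The non-local mean weighted voting that the reliability measure starts from can be sketched as follows; the patch sizes, the bandwidth h, and the vote-margin reading of reliability are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def nonlocal_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Non-local means weighted voting: each atlas patch votes for
    its centre label with weight exp(-||patch diff||^2 / h^2).
    Returns the fused label and the vote margin, which can serve
    as a simple per-voxel reliability score."""
    d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)
    w = np.exp(-d2 / h ** 2)
    score = np.bincount(atlas_labels, weights=w, minlength=2)
    score /= score.sum()
    return int(score.argmax()), float(abs(score[1] - score[0]))

rng = np.random.default_rng(4)
fg = rng.normal(1.0, 0.1, size=(20, 9))    # atlas patches centred on label 1
bg = rng.normal(0.0, 0.1, size=(20, 9))    # atlas patches centred on label 0
patches = np.vstack([fg, bg])
labels = np.array([1] * 20 + [0] * 20)
target = rng.normal(1.0, 0.1, size=9)      # target patch resembling foreground
label, reliability = nonlocal_label_fusion(target, patches, labels)
print(label, round(reliability, 2))
```

Voxels with a large margin would be trusted first, and their labels would then join the atlas votes for their low-reliability neighbours.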
Correlating Tumour Histology and ex vivo MRI Using Dense Modality-Independent Patch-Based Descriptors
Abstract
Histological images provide reliable information on tissue characteristics which can be used to validate and improve our understanding for developing radiological imaging analysis methods. However, due to the large amount of deformation in histology stemming from resected tissues, estimating spatial correspondence with other imaging modalities is a challenging image registration problem. In this work we develop a three-stage framework for nonlinear registration between ex vivo MRI and histology of rectal cancer. For this multi-modality image registration task, two similarity metrics from patch-based feature transformations were used: the dense Scale Invariant Feature Transform (dense SIFT) and the Modality Independent Neighbourhood Descriptor (MIND). The potential of our method is demonstrated on a dataset of eight rectal histology images from two patients using annotated landmarks. The mean registration error was 1.80 mm after the rigid registration steps which improved to 1.08 mm after nonlinear motion correction using dense SIFT and to 1.52 mm using MIND.
Andre Hallack, Bartłomiej W. Papież, James Wilson, Lai Mun Wang, Tim Maughan, Mark J. Gooding, Julia A. Schnabel
Multi-atlas Segmentation Using Patch-Based Joint Label Fusion with Non-Negative Least Squares Regression
Abstract
This work presents a patch-based multi-atlas segmentation approach based on non-negative least squares regression. Our approach finds a weighted linear combination of local image patches that best models the target patch, jointly for all considered atlases. The local coefficients are optimised under the constraint of being positive or zero and serve as weights of the underlying segmentation patches in a multi-atlas voting. Avoiding negative weights is shown to reduce the negative influence of erroneous local registration outcomes. For challenging abdominal MRI, the segmentation accuracy is significantly improved compared to standard joint least squares regression and independent similarity-based weighting. Our experiments show that restricting the weights to be non-negative yields significantly better segmentation results than a sparsity-promoting \(\ell _1\) penalty. We present an efficient numerical implementation that rapidly calculates correlation matrices for all overlapping image patches and atlases in a few seconds.
Mattias P. Heinrich, Matthias Wilms, Heinz Handels
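The non-negative least squares fit at the heart of this fusion can be sketched with a simple projected-gradient solver. This is a stand-in for the paper's efficient implementation (scipy.optimize.nnls is another common choice); the data and sizes are synthetic.

```python
import numpy as np

def nnls_pg(A, y, n_iter=2000):
    """Non-negative least squares by projected gradient descent:
        min_w ||A w - y||^2   subject to   w >= 0
    Each gradient step is followed by clipping at zero."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step from spectral norm
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = np.maximum(w - step * A.T @ (A @ w - y), 0.0)
    return w

rng = np.random.default_rng(5)
A = rng.normal(size=(25, 8))                   # columns: candidate atlas patches
w_true = np.array([0.6, 0.0, 0.3, 0.0, 0.1, 0.0, 0.0, 0.0])
y = A @ w_true                                 # target patch
w = nnls_pg(A, y)
print(np.round(w, 2))                          # non-negative fusion weights
```

The recovered non-negative weights then act directly as voting weights for the corresponding atlas segmentation patches.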
A Spatially Constrained Deep Learning Framework for Detection of Epithelial Tumor Nuclei in Cancer Histology Images
Abstract
Detection of epithelial tumor nuclei in standard Hematoxylin & Eosin stained histology images is an essential step for the analysis of tissue architecture. The problem is quite challenging due to the high chromatin texture of the tumor nuclei and their irregular size and shape. In this work, we propose a spatially constrained convolutional neural network (CNN) for the detection of malignant epithelial nuclei in histology images. Given an input patch, the proposed CNN is trained to regress, for every pixel in the patch, the probability of being the center of an epithelial tumor nucleus. The estimated probability values are topologically constrained such that high probability values are concentrated in the vicinity of the center of nuclei. The location of local maxima is then used as a cue for the final detection. Experimental results show that the proposed network outperforms the conventional CNN with center-pixel-only regression for the task of epithelial tumor nuclei detection.
Korsuk Sirinukunwattana, Shan E. Ahmed Raza, Yee-Wah Tsang, David Snead, Ian Cree, Nasir Rajpoot
3D MRI Denoising Using Rough Set Theory and Kernel Embedding Method
Abstract
In this paper, we present a manifold embedding based method for denoising volumetric MRI data. The proposed method uses kernel mapping to find linearity among the data in the projection/feature space. Prior to kernel mapping, a Rough Set Theory (RST) based clustering technique, extended to volumetric data, is applied. The RST clustering method groups similar voxels (3D cubes) using class and edge information. The basis vector representation of each cluster is then explored in the kernel space via Principal Component Analysis (known as KPCA). The work is compared with state-of-the-art methods under various measures on synthetic and real databases.
Ashish Phophalia, Suman K. Mitra
A Novel Cell Orientation Congruence Descriptor for Superpixel Based Epithelium Segmentation in Endometrial Histology Images
Abstract
Recurrent miscarriage can be caused by an abnormally high number of Uterine Natural Killer (UNK) cells in the lining of the human uterus. Recently, a diagnosis protocol has been developed based on the ratio of UNK cells to stromal cells in endometrial biopsy slides immunohistochemically stained with Haematoxylin for all cells and with CD56 as a marker for the UNK cells. Counting UNK cells and stromal cells is an essential step in the protocol. However, the cell counts must not include epithelial cells from glandular structures or UNK cells from the epithelium. In this paper, we propose a novel superpixel-based epithelium segmentation algorithm based on the observation that neighbouring epithelial cells packed at the boundary of glandular structures or background tend to have similar local orientations. Our main contribution is a novel cell orientation congruence descriptor in a machine learning framework to differentiate between epithelial and non-epithelial cells.
Guannan Li, Shan E. Ahmed Raza, Nasir Rajpoot
Patch-Based Segmentation from MP2RAGE Images: Comparison to Conventional Techniques
Abstract
In structural and functional MRI studies there is a need for robust and accurate automatic segmentation of various brain structures. We present a comparison study of three automatic segmentation methods based on the new T1-weighted MR sequence called MP2RAGE, which has superior soft tissue contrast. Automatic segmentations of the thalamus and hippocampus are compared to manual segmentations. In addition, we qualitatively evaluate the segmentations when warped to co-registered maps of the fractional anisotropy (FA) of water diffusion. Compared to manual segmentation, the best results were obtained with a patch-based segmentation method (volBrain) using a library of images from the same scanner (local), followed by volBrain using an external library (external), FSL and Freesurfer. The qualitative evaluation showed that volBrain local and volBrain external produced almost no segmentation errors when overlaid on FA maps, while both FSL and Freesurfer segmentations were found to overlap with white matter tracts. These results underline the importance of applying accurate and robust segmentation methods and demonstrate the superiority of patch-based methods over more conventional methods.
Erhard T. Næss-Schmidt, Anna Tietze, Irene K. Mikkelsen, Mikkel Petersen, Jakob U. Blicher, Pierrick Coupé, José V. Manjón, Simon F. Eskildsen
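The segmentation comparisons throughout this volume rest on the Dice similarity coefficient, which can be computed as follows (toy masks for illustration):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentations:
    2|A ∩ B| / (|A| + |B|), the standard overlap measure for
    comparing automatic and manual segmentations."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), dtype=bool)
auto[2:7, 2:7] = True                      # automatic mask, 25 voxels
manual = np.zeros((10, 10), dtype=bool)
manual[3:8, 3:8] = True                    # manual mask, 25 voxels, shifted
print(round(dice(auto, manual), 2))        # 16 overlapping voxels -> 0.64
```

A value of 1 means perfect overlap; the hippocampus scores reported above (e.g. DSC = 0.892) are means of this quantity over a test set.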
Multi-atlas and Multi-modal Hippocampus Segmentation for Infant MR Brain Images by Propagating Anatomical Labels on Hypergraph
Abstract
Accurate segmentation of the hippocampus from infant magnetic resonance (MR) images is very important in the study of early brain development and neurological disorders. Recently, multi-atlas patch-based label fusion methods have shown great success in segmenting anatomical structures from medical images. However, the dramatic appearance change from birth to one year of age and the poor image contrast make existing label fusion methods less competitive for infant brain images. To alleviate these difficulties, we propose a novel multi-atlas, multi-modal label fusion method, which can unanimously label all voxels by propagating the anatomical labels on a hypergraph. Specifically, we consider not only all voxels within the target image but also voxels across the atlas images as vertexes of the hypergraph. Each hyperedge encodes a high-order correlation among a set of vertexes from different perspectives, incorporating (1) feature affinity within the multi-modal feature space, (2) spatial coherence within the target image, and (3) population heuristics from multiple atlases. In addition, our label fusion method allows reliable voxels to supervise the label estimation on other difficult-to-label voxels, based on the established hyperedges, until all target image voxels reach a unanimous labeling result. We evaluate our proposed label fusion method on segmenting the hippocampus from T1- and T2-weighted MR images acquired at 2 weeks, 3 months, 6 months, 9 months, and 12 months of age. Our segmentation results achieve improved labeling accuracy over conventional state-of-the-art label fusion methods, showing great potential to facilitate early infant brain studies.
Pei Dong, Yanrong Guo, Dinggang Shen, Guorong Wu
Prediction of Infant MRI Appearance and Anatomical Structure Evolution Using Sparse Patch-Based Metamorphosis Learning Framework
Abstract
Magnetic resonance imaging (MRI) of the pediatric brain provides invaluable information about early normal and abnormal brain development. Longitudinal neuroimaging has spanned various research works examining infant brain development patterns. However, studies on predicting postnatal brain image evolution remain scarce; prediction is very challenging due to the dynamic tissue contrast change, and even inversion, in postnatal brains. In this paper, we propose a dual image intensity and anatomical structure (label) prediction framework that links the geodesic image metamorphosis model with sparse patch-based image representation, thereby defining spatiotemporal metamorphic patches encoding both image photometric and geometric deformation. In the training stage, we learn the 4D metamorphosis trajectories for each training subject. In the prediction stage, we define various strategies to sparsely represent each patch in the testing image using the training metamorphosis patches, while progressively incrementing the richness of the patch (from appearance-based to multimodal kinetic patches). We used the proposed framework to predict 6-, 9-, and 12-month brain MR image intensity and structure (white and gray matter maps) from 3 months of age in 10 infants. The proposed framework showed promising preliminary prediction results for these spatiotemporally complex, drastically changing brain images.
Islem Rekik, Gang Li, Guorong Wu, Weili Lin, Dinggang Shen
Efficient Multi-scale Patch-Based Segmentation
Abstract
The objective of this paper is to devise an efficient and accurate patch-based method for image segmentation. The method presented here builds on the work of Wu et al. [14] by introducing a compact multi-scale feature representation and heuristics to speed up the process. A smaller patch representation along with hierarchical pruning allows the inclusion of more prior knowledge, resulting in a more accurate segmentation. We also propose an intuitive way of optimizing the search strategy for finding similar voxels, making the method computationally efficient. An additional approach to improving speed was explored by integrating our method with Optimised PatchMatch [11]. The proposed method was validated using 100 hippocampus images with ground-truth segmentations from ADNI-1 (mean DSC = 0.892) and the MICCAI SATA segmentation challenge dataset (mean DSC = 0.8587).
Abinash Pant, David Rivest-Hénault, Pierrick Bourgeat
Backmatter
Metadata
Title
Patch-Based Techniques in Medical Imaging
Edited by
Guorong Wu
Pierrick Coupé
Yiqiang Zhan
Brent Munsell
Daniel Rueckert
Copyright year
2015
Electronic ISBN
978-3-319-28194-0
Print ISBN
978-3-319-28193-3
DOI
https://doi.org/10.1007/978-3-319-28194-0