
About this Book

This book constitutes the refereed proceedings of the 4th International Workshop on Patch-Based Techniques in Medical Images, Patch-MI 2018, held in conjunction with MICCAI 2018, in Granada, Spain, in September 2018.

The 15 full papers presented were carefully reviewed and selected from 17 submissions. The papers are organized in the following topical sections: Image Denoising, Image Registration and Matching, Image Classification and Detection, Brain Image Analysis, and Retinal Image Analysis.

Table of Contents

Frontmatter

Image Denoising

Frontmatter

Learning Real Noise for Ultra-Low Dose Lung CT Denoising

Abstract
Neural image denoising is a promising approach for quality enhancement of ultra-low dose (ULD) CT scans after image reconstruction. The availability of high-quality training data is instrumental to its success. Still, synthetic noise is generally used to simulate the ULD scans required for network training in conjunction with corresponding normal-dose scans. This reductive approach may be practical to implement but ignores any departure of the real noise from the assumed model. In this paper, we demonstrate the training of denoising neural networks with real noise. For this purpose, a special training set is created from a pair of ULD and normal-dose scans acquired on each subject. Accurate deformable registration is computed to ensure the required pixel-wise overlay between corresponding ULD and normal-dose patches. To our knowledge, this is the first time real CT noise is used for the training of denoising neural networks. The benefits of the proposed approach in comparison to synthetic noise training are demonstrated both qualitatively and quantitatively for several state-of-the-art denoising neural networks. The obtained results confirm the feasibility and applicability of real noise learning as a way to improve neural denoising of ULD lung CT.
Michael Green, Edith M. Marom, Eli Konen, Nahum Kiryati, Arnaldo Mayer
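To make the data-preparation step above concrete, here is a minimal Python sketch (not the authors' code) that samples spatially corresponding patches from a pair of ULD and normal-dose volumes assumed to be already in voxel-wise alignment after deformable registration; the array and parameter names are illustrative assumptions.

    import numpy as np

    def sample_aligned_patch_pairs(uld_vol, nd_vol, patch_size=64,
                                   n_patches=1000, seed=0):
        # Both volumes must already be deformably registered so that
        # identical indices address the same anatomy.
        assert uld_vol.shape == nd_vol.shape
        rng = np.random.default_rng(seed)
        nz, ny, nx = uld_vol.shape
        pairs = []
        for _ in range(n_patches):
            z = rng.integers(0, nz)
            y = rng.integers(0, ny - patch_size + 1)
            x = rng.integers(0, nx - patch_size + 1)
            noisy = uld_vol[z, y:y + patch_size, x:x + patch_size]
            clean = nd_vol[z, y:y + patch_size, x:x + patch_size]
            pairs.append((noisy, clean))  # (network input, training target)
        return pairs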

MRI Denoising Using Deep Learning

Abstract
MRI denoising is a classical preprocessing step which aims at reducing the noise naturally present in MR images. In this paper, we present a new method for MRI denoising that combines recent advances in deep learning with classical approaches for noise reduction. Specifically, the proposed method follows a two-stage strategy. The first stage is based on an overcomplete patch-based convolutional neural network that blindly removes the noise without estimation of local noise level present in the images. The second stage uses this filtered image as a guide image within a rotationally invariant non-local means filter. The proposed approach has been compared with related state-of-the-art methods and showed competitive results in all the studied cases.
José V. Manjón, Pierrick Coupé
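As a hedged sketch of the second stage described above, the following guided non-local means uses the first-stage CNN output as the guide image: patch similarities are computed on the guide and the resulting weights average the noisy input. The rotational invariance of the actual filter is omitted, and all names are assumptions rather than the authors' implementation.

    import numpy as np

    def guided_nlm(noisy, guide, patch_radius=1, search_radius=5, h=0.05):
        # Weights come from the (pre-filtered) guide image; the weighted
        # average is taken over the original noisy image.
        pad = patch_radius + search_radius
        noisy_p = np.pad(noisy, pad, mode='reflect')
        guide_p = np.pad(guide, pad, mode='reflect')
        out = np.zeros(noisy.shape, dtype=float)
        for i in range(noisy.shape[0]):
            for j in range(noisy.shape[1]):
                ci, cj = i + pad, j + pad
                ref = guide_p[ci - patch_radius:ci + patch_radius + 1,
                              cj - patch_radius:cj + patch_radius + 1]
                weights, values = [], []
                for di in range(-search_radius, search_radius + 1):
                    for dj in range(-search_radius, search_radius + 1):
                        ni, nj = ci + di, cj + dj
                        cand = guide_p[ni - patch_radius:ni + patch_radius + 1,
                                       nj - patch_radius:nj + patch_radius + 1]
                        w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                        weights.append(w)
                        values.append(noisy_p[ni, nj])
                out[i, j] = np.dot(weights, values) / np.sum(weights)
        return out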

MRI Denoising and Artefact Removal Using Self-Organizing Maps for Fast Global Block-Matching

Abstract
Image noise and motion degrade the quality of MR images. Block-matching methods are a well-demonstrated means of improving signal-to-noise ratios in such images. Ideally, block-matching methods would search within the entire image for matching patches to a target, leveraging an image’s full informational redundancy, but this carries impractical computational costs. A well-known workaround, implemented in the traditional Non-Local Means (NLM) filter, is to search for matching patches only within a local neighborhood. Here, we detail a Global Approximate Block-matching (GAB) method that, via a self-organizing map, rapidly searches an entire image for patches similar to a target. Four sets of five T1 + five FLAIR images were acquired. GAB and NLM both denoised the T1s; the results were compared to subject-wise mean images with very low noise. GAB reliably produced images that were more similar to these ‘templates’ than NLM. This was repeated for the same images with motion-like artefacts artificially added. GAB, again, outperformed NLM. For this task, GAB further improved with multichannel inputs, even if the FLAIR image contained artefacts. GAB’s competitive performance appeared to be due to a better balance between preserving image features and removing noise/artefacts. The performance of GAB and NLM variants hinted that GAB’s advantage was not brute-force processing, but its ability to effectively search the whole image.
Lee B. Reid, Ashley Gillman, Alex M. Pagnozzi, José V. Manjón, Jurgen Fripp
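The sketch below conveys the whole-image approximate block-matching idea: every patch is assigned to a cell of a learned codebook, and candidate matches for a target are looked up from its cell anywhere in the image instead of within a local search window. It uses mini-batch k-means as a stand-in for the paper's self-organizing map and is not the authors' implementation.

    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view
    from sklearn.cluster import MiniBatchKMeans

    def build_global_matcher(image, patch_size=5, n_clusters=256, k=16, seed=0):
        # Collect every patch of the 2D image as a flat feature vector.
        patches = sliding_window_view(image, (patch_size, patch_size))
        n_rows, n_cols = patches.shape[:2]
        flat = patches.reshape(n_rows * n_cols, -1).astype(np.float32)
        # Quantize patch space (k-means here, a SOM in the paper).
        km = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed).fit(flat)
        cluster_of = km.labels_
        members_of = {c: np.where(cluster_of == c)[0] for c in range(n_clusters)}

        def similar_patches(target_idx):
            # Return indices of the k most similar patches, searched only
            # within the target's cluster but anywhere in the image.
            members = members_of[cluster_of[target_idx]]
            d = np.linalg.norm(flat[members] - flat[target_idx], axis=1)
            return members[np.argsort(d)[:k]]

        return similar_patches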

A Monte Carlo Framework for Denoising and Missing Wedge Reconstruction in Cryo-electron Tomography

Abstract
We propose a statistical method to address an important issue in cryo-electron tomography image analysis: the reduction of the high amount of noise and of the artifacts due to the presence of a missing wedge (MW) in the spectral domain. The method takes as input a 3D tomogram derived from limited-angle tomography, and gives as output a 3D denoised and artifact-compensated tomogram. The artifact compensation is achieved by filling up the MW with meaningful information. The method can be used to enhance visualization or as a pre-processing step for image analysis, including segmentation and classification. Results are presented for both synthetic and experimental data.
E. Moebel, C. Kervrann
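As a small illustration of the missing-wedge artifact itself (not of the paper's Monte Carlo reconstruction), the sketch below zeroes all 2D spatial frequencies whose direction lies within a chosen half-angle of one frequency axis; the geometry is a simplified assumption.

    import numpy as np

    def wedge_mask(shape, half_angle_deg=30.0):
        # True where a frequency is kept, False inside the wedge of
        # directions within half_angle_deg of the vertical frequency axis.
        ny, nx = shape
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        angle = np.degrees(np.arctan2(np.abs(kx), np.abs(ky) + 1e-12))
        return angle > half_angle_deg

    def simulate_missing_wedge(image, half_angle_deg=30.0):
        mask = wedge_mask(image.shape, half_angle_deg)
        mask[0, 0] = True                      # keep the mean intensity
        spec = np.fft.fft2(image)
        spec[~mask] = 0.0
        return np.real(np.fft.ifft2(spec))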

Image Registration and Matching

Frontmatter

Robust Supervoxel Matching Combining Mid-Level Spectral and Context-Rich Features

Abstract
This paper presents an innovative way to reach accurate semi-dense registration between images based on robust matching of structural entities. The proposed approach relies on a decomposition of images into visual primitives called supervoxels generated by aggregating adjacent voxels sharing similar characteristics. Two new categories of features are estimated at the supervoxel extent: mid-level spectral features relying on a spectral method applied on supervoxel graphs to capture the non-linear modes of intensity displacements, and mid-level context-rich features describing the broadened spatial context on the resulting spectral representations. Accurate supervoxel pairings are established by nearest neighbor search on these newly designed features. The effectiveness of the approach is demonstrated against state-of-the-art methods for semi-dense longitudinal registration of abdominal CT images, relying on liver label propagation and consistency assessment.
Florian Tilquin, Pierre-Henri Conze, Patrick Pessaux, Mathieu Lamard, Gwenolé Quellec, Vincent Noblet, Fabrice Heitz
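Once the mid-level spectral and context-rich features have been computed per supervoxel, the pairing step described above amounts to a nearest-neighbour search in feature space. The sketch below shows only that final step, with hypothetical array names and an optional distance cutoff standing in for the paper's robustness mechanisms.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def match_supervoxels(features_src, features_tgt, max_dist=None):
        # features_* : (n_supervoxels, n_features) arrays of per-supervoxel
        # feature vectors in the source and target images.
        nn = NearestNeighbors(n_neighbors=1).fit(features_tgt)
        dist, idx = nn.kneighbors(features_src)
        dist, idx = dist.ravel(), idx.ravel()
        return [(s, int(t)) for s, (t, d) in enumerate(zip(idx, dist))
                if max_dist is None or d <= max_dist]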

Stereo Matching for Wireless Capsule Endoscopy Using Direct Attenuation Model

Abstract
We propose a robust approach to depth map estimation designed for stereo camera-based wireless capsule endoscopy. Since there is no external light source other than the ones attached to the capsule, we employ the direct attenuation model to estimate a depth map up to a scale factor. Afterward, we estimate the scale factor by using sparse feature correspondences. Finally, the estimated depth map is used to guide stereo matching to recover the detailed structure of the captured scene. We experimentally verify the proposed method on various images captured by stereo-type endoscopic capsules in the gastrointestinal tract.
Min-Gyu Park, Ju Hong Yoon, Youngbae Hwang
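A hedged sketch of the depth recovery described above: assuming brightness decays exponentially with distance from the capsule's own light source, depth is recovered up to an affine ambiguity and then anchored with a few sparse depths from feature correspondences. The attenuation form and all names are illustrative assumptions, not the paper's exact model.

    import numpy as np

    def relative_depth(intensity, beta=1.0, eps=1e-6):
        # I = I0 * exp(-beta * d)  =>  -log(I) / beta = d + constant,
        # so this is depth up to an unknown offset/scale.
        return -np.log(np.clip(intensity, eps, None)) / beta

    def absolute_depth(intensity, sparse_uv, sparse_depth, beta=1.0):
        # Fit depth ~ a * relative + b using sparse (u, v) points whose
        # metric depth is known from feature correspondences.
        rel = relative_depth(intensity, beta)
        samples = rel[sparse_uv[:, 1], sparse_uv[:, 0]]   # rows = v, cols = u
        A = np.stack([samples, np.ones_like(samples)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, sparse_depth, rcond=None)
        return a * rel + b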

Image Classification and Detection

Frontmatter

Liver Tissue Classification Using an Auto-context-based Deep Neural Network with a Multi-phase Training Framework

Abstract
In this project, our goal is to classify different types of liver tissue on 3D multi-parameter magnetic resonance images in patients with hepatocellular carcinoma. In these cases, 3D fully annotated segmentation masks from experts are expensive to acquire, so the dataset available for training a predictive model is usually small. To this end, we designed a novel deep convolutional neural network that incorporates auto-context elements directly into a U-net-like architecture. We used a patch-based strategy with a weighted sampling procedure in order to train on a sufficient number of samples. Furthermore, we designed a multi-resolution and multi-phase training framework to reduce the learning space and to increase the regularization of the model. Our method was tested on images from 20 patients and yielded promising results, outperforming standard neural network approaches as well as a benchmark method for liver tissue classification.
Fan Zhang, Junlin Yang, Nariman Nezami, Fabian Laage-Gaupp, Julius Chapiro, Ming De Lin, James Duncan
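One reading of the auto-context element mentioned above, shown as a data-flow sketch only: the probability map predicted in a previous pass is concatenated to the image channels and fed back into the next pass. The model callable and names are placeholders, not the authors' network or training framework.

    import numpy as np

    def auto_context_inference(image, model, n_iterations=3, n_classes=4):
        # image : (C, H, W); model : callable mapping a (C + n_classes, H, W)
        # array to a (n_classes, H, W) probability map.
        _, h, w = image.shape
        probs = np.full((n_classes, h, w), 1.0 / n_classes)  # uninformative start
        for _ in range(n_iterations):
            context_input = np.concatenate([image, probs], axis=0)
            probs = model(context_input)
        return probs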

Using 1D Patch-Based Signatures for Efficient Cascaded Classification of Lung Nodules

Abstract
In recent years, convolutional neural networks (CNN) have been widely used to address a broad range of image analysis problems. In medical imaging, their importance has increased dramatically despite the known difficulties of building large annotated training datasets in medicine. For the analysis of 3D imaging exams, 3D convolutional networks commonly represent the state of the art, but they can easily become computationally prohibitive due to the massive amount of data and processing involved. This scenario creates opportunities for methods that deliver competitive results while promoting efficiency in data usage and processing time. In this context, this paper proposes a comprehensive 1D patch-based data representation model to be used in an efficient cascaded approach to lung nodule false positive reduction. The proposed pipeline combines three convolutional networks: a 3D network that uses regular multi-scale volumetric patches, a 2D network that uses a trigonometric bi-dimensional representation of these patches, and a 1D network that uses a very compact 1D patch representation for filtering obvious cases. We run our experiments on the publicly available LUNA challenge dataset and demonstrate that the proposed cascaded approach achieves very competitive results while using up to 55 times less data on average and running around 3.5 times faster on average than regular 3D CNNs.
Dario Augusto Borges Oliveira, Matheus Palhares Viana
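The efficiency claim above rests on a cascade in which a very cheap 1D-signature classifier rejects obvious negatives so that only surviving candidates reach the heavier 2D and 3D networks. The following control-flow sketch uses placeholder classifier callables and thresholds; it is not the paper's pipeline.

    def cascaded_false_positive_reduction(candidates, clf_1d, clf_2d, clf_3d,
                                          thr_1d=0.1, thr_2d=0.3):
        # Each clf_* maps one candidate to a nodule probability; cheaper
        # stages discard easy negatives before the expensive 3D stage runs.
        scores = {}
        for idx, cand in enumerate(candidates):
            p1 = clf_1d(cand)               # very cheap 1D-signature check
            if p1 < thr_1d:
                scores[idx] = p1            # confidently rejected, stop here
                continue
            p2 = clf_2d(cand)               # mid-cost 2D representation
            if p2 < thr_2d:
                scores[idx] = p2
                continue
            scores[idx] = clf_3d(cand)      # full 3D network only for survivors
        return scores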

Predicting Future Bone Infiltration Patterns in Multiple Myeloma

Abstract
Multiple Myeloma (MM) is a bone marrow malignancy affecting the generation pathway of plasma cells and B-lymphocytes. It results in their uncontrolled proliferation and malignant transformation and can ultimately lead to osteolytic lesions first visible in MRI. The earliest possible reliable detection of these lesions is critical, since they are a prime marker of disease advance and a trigger for treatment. However, their detection is difficult. Here, we present and evaluate a methodology to predict future lesion emergence based on T1-weighted Magnetic Resonance Imaging (MRI) patch data. We train a predictor to identify early signatures of emerging lesions before they reach the thresholds for reporting. The proposed algorithm uses longitudinal training data and visualises high-risk locations in the bone structure.
Roxane Licandro, Johannes Hofmanninger, Marc-André Weber, Bjoern Menze, Georg Langs
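A hedged sketch of how longitudinal training pairs of the kind described above might be assembled: each patch is cut from the baseline T1-weighted scan, while its label comes from the follow-up lesion annotation at the same (co-registered) location, so a classifier trained on such pairs learns early signatures of emergence. The names and patch geometry are assumptions, not the authors' procedure.

    import numpy as np

    def build_longitudinal_patches(baseline_img, followup_lesion_mask,
                                   centers, radius=8):
        # baseline_img and followup_lesion_mask must be co-registered.
        X, y = [], []
        for (z, r, c) in centers:
            patch = baseline_img[z - radius:z + radius,
                                 r - radius:r + radius,
                                 c - radius:c + radius]
            if patch.shape != (2 * radius,) * 3:
                continue  # center too close to the volume border
            X.append(patch)
            y.append(int(followup_lesion_mask[z, r, c] > 0))  # future lesion?
        return np.asarray(X), np.asarray(y)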

A Fast Automatic Juxta-pleural Lung Nodule Detection Framework Using Convolutional Neural Networks and Vote Algorithm

Abstract
Lung nodule detection in CT scans is crucial for the early detection of lung cancer, yet performing the detection automatically remains highly difficult. In this paper, we propose a fast automatic voting-based framework using a Convolutional Neural Network to detect juxta-pleural nodules, which are pulmonary (lung) nodules attached to the chest wall and hard to detect even for human experts. The detection result for each region in the CT scan is voted by the detection results of the candidates extracted from the region, which we formulate as a generative model. We perform two sets of experiments: one to validate our framework, and the other to compare different convolutional neural network settings under our framework. The results show that our framework is capable of detecting juxta-pleural lung nodules especially when only a weak classifier trained on noisy data is available. Meanwhile, we overcome the problem of determining the proper input size for nodules with highly variable diameters.
Jiaxing Tan, Yumei Huo, Zhengrong Liang, Lihong Li
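The voting step can be illustrated independently of the network: every candidate extracted from a region casts a vote, and the region-level decision aggregates them. The simple fraction-of-positive-votes rule below is a stand-in for the paper's generative formulation.

    def region_vote(candidate_probs, prob_threshold=0.5, vote_threshold=0.3):
        # candidate_probs : nodule probabilities, one per candidate
        # extracted from the region by the (possibly weak) classifier.
        if not candidate_probs:
            return False
        votes = [p >= prob_threshold for p in candidate_probs]
        return sum(votes) / len(votes) >= vote_threshold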

Brain Image Analysis

Frontmatter

LesionBrain: An Online Tool for White Matter Lesion Segmentation

Abstract
In this paper, we present a new tool for white matter lesion segmentation called lesionBrain. Our method is based on a three-stage strategy comprising multimodal patch-based segmentation, patch-based regularization of the probability map, and patch-based error correction using an ensemble of shallow neural networks. Its robustness and accuracy have been evaluated on the MSSEG challenge 2016 datasets. In our validation, the performance obtained by lesionBrain was competitive with recent deep learning methods. Moreover, lesionBrain provides automatic lesion categorization according to location. Finally, complementary information on gray matter atrophy is included in the generated report. LesionBrain follows a software-as-a-service model in full open access.
Pierrick Coupé, Thomas Tourdias, Pierre Linck, José E. Romero, José V. Manjón

Multi-atlas Parcellation in the Presence of Lesions: Application to Multiple Sclerosis

Abstract
Intensity-based multi-atlas strategies have shown leading performance in segmenting healthy subjects, but when lesions are present, the abnormal lesion intensities affect the fusion result. Here, we propose a reformulated statistical fusion approach for multi-atlas segmentation that is applicable to both healthy and injured brains. This method avoids the interference of lesion intensities with the segmentation by incorporating two a priori masks into the Non-Local STAPLE statistical framework. First, we extend the theory to include a lesion mask, which improves the voxel correspondence between the target and the atlases. Second, we extend the theory to include a known-label mask, which forces the label decision where it is known beforehand and enables seamless integration of manual edits. We evaluate our method on simulated and MS patient images and compare our results with those of other state-of-the-art multi-atlas strategies: majority vote, Non-local STAPLE, Non-local Spatial STAPLE and Joint Label Fusion. Quantitative and qualitative results demonstrate the improvement in the lesion areas.
Sandra González-Villà, Yuankai Huo, Arnau Oliver, Xavier Lladó, Bennett A. Landman
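To make the role of the known-label mask concrete, the sketch below grafts it onto plain majority voting rather than onto the Non-Local STAPLE framework of the paper: voxels covered by the mask are forced to their given label, mirroring the second extension above. The lesion mask acts on the intensity-based correspondence step, which a plain majority vote does not have, so it is not shown.

    import numpy as np

    def fuse_with_known_labels(atlas_labels, known_labels):
        # atlas_labels : (n_atlases, ...) integer label volumes registered
        # to the target; known_labels : integer volume with -1 where no
        # label is imposed a priori.
        n_labels = int(atlas_labels.max()) + 1
        votes = np.stack([np.sum(atlas_labels == lab, axis=0)
                          for lab in range(n_labels)])
        fused = votes.argmax(axis=0)
        fixed = known_labels >= 0
        fused[fixed] = known_labels[fixed]   # force beforehand-known labels
        return fused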

A Patch-Based Segmentation Approach with High Level Representation of the Data for Cortical Sulci Recognition

Abstract
Because of the strong variability of the cortical sulci, their automatic recognition is still a challenging problem. The latest algorithm developed in our laboratory for 125 sulci reaches an average recognition rate of around 86%. It has been applied to thousands of brains for morphometric studies (www.brainvisa.info). A weak point of this approach is the modeling of the training dataset as a single template of sulcus-wise probability maps, which loses information about the alternative patterns of each sulcus. To overcome this limit, we propose a different strategy inspired by Multi-Atlas Segmentation (MAS) and more particularly by patch-based approaches. As the standard way of extracting patches does not seem capable of exploiting the sulci geometry and the relations between them, which we believe to be the discriminative features for recognition, we propose a new patch generation strategy based on a high-level representation of the sulci. We show that our new approach is slightly, but significantly, better than the reference one, while we still have an avenue of potential refinements that were beyond reach for a single-template strategy.
Léonie Borne, Jean-François Mangin, Denis Rivière

Tumor Delineation for Brain Radiosurgery by a ConvNet and Non-uniform Patch Generation

Abstract
Deep learning methods are actively used for brain lesion segmentation. One of the most popular models is DeepMedic, which was developed for the segmentation of relatively large lesions such as gliomas and ischemic strokes. In our work, we consider the segmentation of brain tumors amenable to stereotactic radiosurgery, which limits the typical lesion sizes. These differences in target volumes lead to a large number of false negatives (especially for small lesions) as well as to an increased number of false positives for DeepMedic. We propose a new patch-sampling procedure to increase network performance for small lesions. We used a 6-year dataset from a stereotactic radiosurgery center. To evaluate our approach, we conducted experiments with the three most frequent brain tumors: metastasis, meningioma, and schwannoma. In addition to cross-validation, we estimated quality on a hold-out test set collected several years later than the training set. The experimental results show solid improvements in both cases.
Egor Krivov, Valery Kostjuchenko, Alexandra Dalechina, Boris Shirokikh, Gleb Makarchuk, Alexander Denisenko, Andrey Golanov, Mikhail Belyaev
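The patch-sampling change motivated above can be pictured as drawing training patch centres with probabilities that over-weight small lesions. The inverse-size weighting below is one simple such scheme, given here as an assumption for illustration rather than the procedure proposed in the paper.

    import numpy as np
    from scipy import ndimage

    def sample_lesion_centers(lesion_mask, n_samples, seed=0):
        # Give every connected lesion the same total sampling mass, so
        # small lesions contribute as many patch centres as large ones.
        rng = np.random.default_rng(seed)
        labels, n_lesions = ndimage.label(lesion_mask)
        if n_lesions == 0:
            return np.empty((0, lesion_mask.ndim), dtype=int)
        sizes = ndimage.sum(lesion_mask, labels, index=range(1, n_lesions + 1))
        weights = np.zeros(labels.shape)
        for lesion_id, size in zip(range(1, n_lesions + 1), sizes):
            weights[labels == lesion_id] = 1.0 / size
        p = weights[lesion_mask]
        p /= p.sum()
        coords = np.argwhere(lesion_mask)
        picked = rng.choice(len(coords), size=n_samples, replace=True, p=p)
        return coords[picked]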

Retinal Image Analysis

Frontmatter

Iterative Deep Retinal Topology Extraction

Abstract
This paper tackles the task of estimating the topology of filamentary networks such as retinal vessels. Building on top of a global model that performs a dense semantic classification of the pixels of the image, we design a Convolutional Neural Network (CNN) that predicts the local connectivity between the central pixel of an input patch and its border points. By iterating this local connectivity we sweep the whole image and infer the global topology of the filamentary network, inspired by a human delineating a complex network with the tip of their finger. We perform a qualitative and quantitative evaluation of retinal vein and artery topology extraction on the DRIVE dataset, where we show superior performance over very strong baselines.
Carles Ventura, Jordi Pont-Tuset, Sergi Caelles, Kevis-Kokitsi Maninis, Luc Van Gool
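The iterative procedure described above boils down to: predict, from the patch around the current point, which of its patch-border points it connects to, then continue from those points until the whole network is swept. The loop below shows that outer control flow only, with the CNN abstracted into a callable; it is a hedged reconstruction, not the authors' code.

    def trace_topology(seed_points, predict_connections, max_steps=100000):
        # predict_connections : callable taking one (row, col) point and
        # returning the patch-border points it is connected to.
        visited, edges = set(), []
        frontier = list(seed_points)
        steps = 0
        while frontier and steps < max_steps:
            point = frontier.pop()
            if point in visited:
                continue
            visited.add(point)
            for nxt in predict_connections(point):
                edges.append((point, nxt))
                if nxt not in visited:
                    frontier.append(nxt)
            steps += 1
        return visited, edges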

Backmatter
