
2018 | Book

Machine Learning for Medical Image Reconstruction

First International Workshop, MLMIR 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings


About this book

This book constitutes the refereed proceedings of the First International Workshop on Machine Learning for Medical Image Reconstruction, MLMIR 2018, held in conjunction with MICCAI 2018, in Granada, Spain, in September 2018.

The 17 full papers presented were carefully reviewed and selected from 21 submissions. The papers are organized in the following topical sections: deep learning for magnetic resonance imaging; deep learning for computed tomography; and deep learning for general image reconstruction.

Table of Contents

Frontmatter

Deep Learning for Magnetic Resonance Imaging

Frontmatter
Deep Learning Super-Resolution Enables Rapid Simultaneous Morphological and Quantitative Magnetic Resonance Imaging
Abstract
Obtaining magnetic resonance images (MRI) with high resolution and generating quantitative image-based biomarkers for assessing tissue biochemistry is crucial in clinical and research applications. However, acquiring quantitative biomarkers requires a high signal-to-noise ratio (SNR), which is at odds with high resolution in MRI, especially in a single rapid sequence. In this paper, we demonstrate how super-resolution (SR) can be utilized to maintain adequate SNR for accurate quantification of the T\(_2\) relaxation time biomarker, while simultaneously generating high-resolution images. We compare the efficacy of resolution enhancement using metrics such as peak SNR and structural similarity. We assess the accuracy of cartilage T\(_2\) relaxation times by comparing against a standard reference method. Our evaluation suggests that SR can successfully maintain high resolution and generate accurate biomarkers for accelerating MRI scans and enhancing the value of clinical and research MRI.
Akshay Chaudhari, Zhongnan Fang, Jin Hyung Lee, Garry Gold, Brian Hargreaves
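The T\(_2\) quantification this abstract refers to is commonly based on a mono-exponential decay model, \(S(TE) = S_0 e^{-TE/T_2}\). As a minimal sketch of that standard model (not the authors' pipeline; the function name and synthetic data are illustrative assumptions), a log-linear least-squares fit recovers T\(_2\) from a handful of echo times:

```python
import numpy as np

def fit_t2(te, signal):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2).

    te     : echo times in ms
    signal : measured magnitudes at those echo times
    Returns the estimated T2 in ms.
    """
    # Taking logs turns the decay into a line: log S = log S0 - TE / T2
    slope, _ = np.polyfit(te, np.log(signal), 1)
    return -1.0 / slope

# Synthetic noiseless decay with a known T2 of 40 ms
te = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
signal = 1000.0 * np.exp(-te / 40.0)
print(round(fit_t2(te, signal), 1))  # -> 40.0
```

With noisy magnitude data, a nonlinear fit or noise-floor correction is usually preferred over the log-linear shortcut.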
ETER-net: End to End MR Image Reconstruction Using Recurrent Neural Network
Abstract
Recently, an end-to-end MR image reconstruction technique, called AUTOMAP, was introduced to simplify the complicated reconstruction process of MR images and to improve the quality of reconstructed MR images using deep learning. Despite the benefits of its end-to-end architecture and the superior quality of its reconstructed MR images, AUTOMAP suffers from the large number of training parameters required by multiple fully connected layers. In this work, we propose a new end-to-end MR image reconstruction technique based on the recurrent neural network (RNN) architecture, which can be used more efficiently for magnetic resonance (MR) image reconstruction than the convolutional neural network (CNN). We modified the RNN architecture of ReNet for image domain data to reconstruct an MR image from k-space data by utilizing recurrent cells. The proposed network reconstructs images from the k-space data with a reduced number of parameters compared with fully connected architectures. We present a quantitative evaluation of the proposed method for Cartesian trajectories using nMSE and SSIM. We also present preliminary images reconstructed from k-space data acquired along a radial trajectory.
Changheun Oh, Dongchan Kim, Jun-Young Chung, Yeji Han, HyunWook Park
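The parameter argument above can be made concrete with a back-of-envelope count: a single dense layer mapping an \(N \times N\) complex k-space (real and imaginary channels) to an \(N \times N\) image already needs \(2N^4\) weights. This is an illustrative lower bound only, ignoring bias terms and the fact that AUTOMAP stacks several such layers:

```python
# Parameter count for one dense layer mapping an N x N complex k-space
# (real and imaginary parts stacked) to an N x N image, as in fully
# connected reconstruction architectures.
def dense_params(n):
    in_features = 2 * n * n   # real + imaginary k-space values
    out_features = n * n      # reconstructed image pixels
    return in_features * out_features

for n in (64, 128, 256):
    print(n, f"{dense_params(n):,}")
# 64  -> 33,554,432
# 128 -> 536,870,912
# 256 -> 8,589,934,592  (~8.6 billion weights for a single layer)
```

Recurrent or convolutional layers, by contrast, reuse a small weight set across spatial positions, which is the efficiency the abstract appeals to.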
Cardiac MR Motion Artefact Correction from K-space Using Deep Learning-Based Reconstruction
Abstract
Incorrect ECG gating of cardiac magnetic resonance (CMR) acquisitions can lead to artefacts, which hamper the accuracy of diagnostic imaging. Therefore, there is a need for robust reconstruction methods to ensure high image quality. In this paper, we propose a method to automatically correct motion-related artefacts in CMR acquisitions during reconstruction from k-space data. Our method is based on the AUTOMAP reconstruction method, which directly reconstructs high quality MR images from k-space using deep learning. Our main methodological contribution is the addition of an adversarial element to this architecture, in which the quality of image reconstruction (the generator) is increased by using a discriminator. We train the reconstruction network to automatically correct for motion-related artefacts using synthetically corrupted CMR k-space data and uncorrupted reconstructed images. Using 25000 images from the UK Biobank dataset, we achieve good image quality in the presence of synthetic motion artefacts, although some structural information is lost. We quantitatively compare our method to a standard inverse Fourier reconstruction. In addition, we qualitatively evaluate the proposed technique using k-space data containing real motion artefacts.
Ilkay Oksuz, James Clough, Aurelien Bustin, Gastao Cruz, Claudia Prieto, Rene Botnar, Daniel Rueckert, Julia A. Schnabel, Andrew P. King
Complex Fully Convolutional Neural Networks for MR Image Reconstruction
Abstract
Undersampling the k-space data is widely adopted for acceleration of Magnetic Resonance Imaging (MRI). Current deep learning based approaches for supervised learning of MRI image reconstruction employ real-valued operations and representations by treating complex valued k-space/spatial-space as real values. In this paper, we propose complex dense fully convolutional neural network (\(\mathbb {C}\)DFNet) for learning to de-alias the reconstruction artifacts within undersampled MRI images. We fashioned a densely-connected fully convolutional block tailored for complex-valued inputs by introducing dedicated layers such as complex convolution, batch normalization, non-linearities etc. \(\mathbb {C}\)DFNet leverages the inherently complex-valued nature of input k-space and learns richer representations. We demonstrate improved perceptual quality and recovery of anatomical structures through \(\mathbb {C}\)DFNet in contrast to its real-valued counterparts.
Muneer Ahmad Dedmari, Sailesh Conjeti, Santiago Estrada, Phillip Ehses, Tony Stöcker, Martin Reuter
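Complex convolution, as used in \(\mathbb {C}\)DFNet, can be built from four real convolutions via \((x_r + i x_i) * (w_r + i w_i) = (x_r * w_r - x_i * w_i) + i(x_r * w_i + x_i * w_r)\). A minimal 1D NumPy sketch (an illustration of the identity, not the paper's implementation) checks this against NumPy's native complex convolution:

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution assembled from four real convolutions:
    (xr + i*xi) * (wr + i*wi) = (xr*wr - xi*wi) + i*(xr*wi + xi*wr)
    """
    real = np.convolve(x.real, w.real) - np.convolve(x.imag, w.imag)
    imag = np.convolve(x.real, w.imag) + np.convolve(x.imag, w.real)
    return real + 1j * imag

rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)
# Matches NumPy's built-in complex convolution
print(np.allclose(complex_conv1d(x, w), np.convolve(x, w)))  # -> True
```

In a deep learning framework the same decomposition is applied to 2D convolution layers, with the real and imaginary parts carried as separate channels.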
Magnetic Resonance Fingerprinting Reconstruction via Spatiotemporal Convolutional Neural Networks
Abstract
Magnetic resonance fingerprinting (MRF) quantifies multiple nuclear magnetic resonance parameters in a single and fast acquisition. Standard MRF reconstructs parametric maps using dictionary matching, which lacks scalability due to computational inefficiency. We propose to perform MRF map reconstruction using a spatiotemporal convolutional neural network, which exploits the relationship between neighboring MRF signal evolutions to replace the dictionary matching. We evaluate our method on multiparametric brain scans and compare it to three recent MRF reconstruction approaches. Our method achieves state-of-the-art reconstruction accuracy and yields qualitatively more appealing maps compared to other reconstruction methods. In addition, the reconstruction time is significantly reduced compared to a dictionary-based approach.
Fabian Balsiger, Amaresha Shridhar Konar, Shivaprasad Chikop, Vimal Chandran, Olivier Scheidegger, Sairam Geethanath, Mauricio Reyes
Improved Time-Resolved MRA Using k-Space Deep Learning
Abstract
In dynamic contrast enhanced (DCE) MRI, temporal and spatial resolution can be improved by time-resolved angiography with interleaved stochastic trajectories (TWIST) thanks to its highly accelerated acquisitions. However, due to limited k-space samples, the periphery of the k-space data from several adjacent frames must be combined to reconstruct one temporal frame, which limits the temporal resolution of TWIST. Furthermore, the k-space sampling patterns of TWIST imaging have been specially designed for generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction. Therefore, the number of shared frames cannot be reduced to provide a reconstructed image with better temporal resolution. The purpose of this study is to improve the temporal resolution of TWIST using a novel k-space deep learning approach. Direct k-space interpolation is performed simultaneously for multiple coils by exploiting spatial domain redundancy and multi-coil diversity. Furthermore, the proposed method can provide reconstructed images with various numbers of view sharing. Experimental results using an in vivo TWIST data set show the accuracy and flexibility of the proposed method.
Eunju Cha, Eung Yeop Kim, Jong Chul Ye
Joint Motion Estimation and Segmentation from Undersampled Cardiac MR Image
Abstract
Accelerating the acquisition of magnetic resonance imaging (MRI) is a challenging problem, and many works have been proposed to reconstruct images from undersampled k-space data. However, if the main purpose is to extract certain quantitative measures from the images, perfect reconstructions may not always be necessary as long as the images enable the extraction of the clinically relevant measures. In this paper, we work on jointly predicting cardiac motion estimation and segmentation directly from undersampled data, two important steps in quantitatively assessing cardiac function and diagnosing cardiovascular diseases. In particular, a unified model consisting of both a motion estimation branch and a segmentation branch is learned by optimising the two tasks simultaneously. Corresponding fully-sampled images are additionally incorporated into the network as a parallel sub-network to enhance and guide the learning during the training process. Experimental results using cardiac MR images from 220 subjects show that the proposed model is robust to undersampled data and is capable of predicting results close to those from fully-sampled data, while bypassing the usual image reconstruction stage.
Chen Qin, Wenjia Bai, Jo Schlemper, Steffen E. Petersen, Stefan K. Piechnik, Stefan Neubauer, Daniel Rueckert
Bayesian Deep Learning for Accelerated MR Image Reconstruction
Abstract
Recently, many deep learning (DL) based MR image reconstruction methods have been proposed with promising results. However, only a handful of works have focused on characterising the behaviour of deep networks, such as investigating when the networks may fail to reconstruct. In this work, we explore the applicability of Bayesian DL techniques to model the uncertainty associated with DL-based reconstructions. In particular, we apply MC-dropout and a heteroscedastic loss to the reconstruction networks to model epistemic and aleatoric uncertainty. We show that the proposed Bayesian methods achieve competitive performance when the test images are relatively far from the training data distribution, and outperform the baseline when it is over-parametrised. In addition, we qualitatively show an apparent correlation between the magnitude of the produced uncertainty maps and the error maps, demonstrating the potential utility of Bayesian DL methods for assessing the reliability of reconstructed images.
Jo Schlemper, Daniel C. Castro, Wenjia Bai, Chen Qin, Ozan Oktay, Jinming Duan, Anthony N. Price, Jo Hajnal, Daniel Rueckert
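MC-dropout, mentioned above, estimates epistemic uncertainty by keeping dropout active at test time and aggregating several stochastic forward passes. A toy NumPy sketch of the idea, in which a single random linear layer stands in for the reconstruction network and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_forward(x, w, p=0.5):
    """One forward pass with dropout left ON at test time (MC-dropout).
    x : input vector, w : weight matrix, p : keep probability.
    """
    mask = rng.random(x.shape) < p
    return (x * mask / p) @ w  # inverted-dropout scaling

x = rng.standard_normal(16)
w = rng.standard_normal((16, 4))

# T stochastic passes; the mean plays the role of the reconstruction,
# the per-output standard deviation is the epistemic uncertainty map.
samples = np.stack([stochastic_forward(x, w) for _ in range(200)])
prediction = samples.mean(axis=0)
uncertainty = samples.std(axis=0)
print(prediction.shape, uncertainty.shape)  # (4,) (4,)
```

The heteroscedastic (aleatoric) part, by contrast, is modelled by giving the network a second output head that predicts a per-pixel noise variance used to weight the loss.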

Deep Learning for Computed Tomography

Frontmatter
Sparse-View CT Reconstruction Using Wasserstein GANs
Abstract
We propose a 2D computed tomography (CT) slice image reconstruction method from a limited number of projection images using Wasserstein generative adversarial networks (wGAN). Our wGAN optimizes the 2D CT image reconstruction by utilizing an adversarial loss to improve the perceived image quality as well as an \(L_1\) content loss to enforce structural similarity to the target image. We evaluate our wGANs using different weight factors between the two loss functions and compare them to a convolutional neural network (CNN) optimized on \(L_1\) and to the Filtered Backprojection (FBP) method. The evaluation shows that the results generated by the machine learning based approaches are substantially better than those from the FBP method. In contrast to the blurrier-looking images generated by the CNNs trained on \(L_1\), the wGAN results appear sharper and seem to contain more structural information. We show that a certain amount of projection data is needed to obtain a correct representation of the anatomical correspondences.
Franz Thaler, Kerstin Hammernik, Christian Payer, Martin Urschler, Darko Štern
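The weighted combination of adversarial and \(L_1\) content losses described above can be sketched as follows. This is a toy illustration of the objective only, not the authors' training code, and the weight \(\lambda = 100\) is an assumed example value:

```python
import numpy as np

def generator_loss(critic_fake, fake, target, lam=100.0):
    """Toy sketch of a weighted WGAN generator objective:
    an adversarial term (raise the critic's score on generated slices)
    plus a lambda-weighted L1 content term toward the target image.
    critic_fake : critic scores for the generated images
    """
    adv = -np.mean(critic_fake)          # Wasserstein generator loss
    l1 = np.mean(np.abs(fake - target))  # content / structure term
    return adv + lam * l1

fake = np.zeros((4, 4))
target = np.full((4, 4), 0.5)
print(generator_loss(np.array([0.2, -0.1]), fake, target))  # -> 49.95
```

Sweeping `lam` trades off perceptual sharpness (adversarial term) against pixel-wise fidelity (\(L_1\) term), which is exactly the weight-factor study the abstract describes.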
Detecting Anatomical Landmarks for Motion Estimation in Weight-Bearing Imaging of Knees
Abstract
Patient motion is one of the major challenges in cone-beam computed tomography (CBCT) scans acquired under weight-bearing conditions, since it leads to severe artifacts in reconstructions. In knee imaging, a state-of-the-art approach to compensate for patient motion uses fiducial markers attached to the skin. However, marker placement is a tedious and time-consuming procedure for both the physician and the patient. In this manuscript, we investigate the use of anatomical landmarks in an attempt to replace externally attached fiducial markers. To this end, we devise a method to automatically detect anatomical landmarks in projection domain X-ray images irrespective of the viewing direction. To overcome the need for annotation of every X-ray image and to assure consistent annotation across images from the same subject, annotations and projection images are generated from 3D CT data. Twelve landmarks are annotated in supine CBCT reconstructions of the knee joint and then propagated to synthetically generated projection images. Then, a sequential Convolutional Neural Network is trained to predict the desired landmarks in projection images. The network is evaluated on synthetic images and real clinical data. On synthetic data, promising results are achieved with a mean prediction error of \(8.4 \pm 8.2\) pixels. The network generalizes to real clinical data without the need for re-training. However, practical issues, such as the second leg entering the field of view, limit the performance of the method at this stage. Nevertheless, our results are promising and encourage further investigations on the use of anatomical landmarks for motion management.
Bastian Bier, Katharina Aschoff, Christopher Syben, Mathias Unberath, Marc Levenston, Garry Gold, Rebecca Fahrig, Andreas Maier
A U-Nets Cascade for Sparse View Computed Tomography
Abstract
We propose a new convolutional neural network architecture for image reconstruction in sparse view computed tomography. The proposed network consists of a cascade of U-nets and data consistency layers. While the U-nets address the undersampling artifacts, the data consistency layers model the specific scanner geometry and make direct use of measured data. We train the network cascade end-to-end on sparse view cardiac CT images. The proposed network’s performance is evaluated according to different quantitative measures and compared to that of a cascade of fully convolutional neural networks with residual connections and to that of a single U-net with approximately the same number of trainable parameters. While in both experiments the methods show similar performance in terms of quantitative measures, our proposed U-nets cascade yields superior visual results and better preserves the overall image structure as well as fine diagnostic details, e.g. the coronary arteries. The latter is also confirmed by a statistically significant increase of the Haar-wavelet-based perceptual similarity index measure in all experiments.
Andreas Kofler, Markus Haltmeier, Christoph Kolbitsch, Marc Kachelrieß, Marc Dewey
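A data consistency layer, in its generic form, nudges the current estimate toward agreement with the measured data under the forward operator, e.g. via a gradient step on \(\Vert Ax - y\Vert^2\). The toy NumPy sketch below uses a small random matrix as a stand-in for the scanner geometry; it illustrates the principle only, not the paper's CT-specific layer:

```python
import numpy as np

def data_consistency_step(x, A, y, eta=0.01):
    """One gradient step on ||Ax - y||^2: the generic form of a data
    consistency update, pulling the estimate x toward agreement with
    the measured data y under the forward operator A."""
    return x - eta * A.T @ (A @ x - y)

rng = np.random.default_rng(1)
A = rng.standard_normal((12, 8))   # toy stand-in for the scanner geometry
x_true = rng.standard_normal(8)
y = A @ x_true                     # "measured" data

x = np.zeros(8)                    # e.g. an intermediate network output
r0 = np.linalg.norm(A @ x - y)
for _ in range(300):
    x = data_consistency_step(x, A, y)
print(np.linalg.norm(A @ x - y) < r0)  # residual shrinks -> True
```

In the cascade, such updates are interleaved with U-net blocks, so the learned denoising never drifts away from what was actually measured.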

Deep Learning for General Image Reconstruction

Frontmatter
Approximate k-Space Models and Deep Learning for Fast Photoacoustic Reconstruction
Abstract
We present a framework for accelerated iterative reconstructions using a fast and approximate forward model that is based on k-space methods for photoacoustic tomography. The approximate model introduces aliasing artefacts in the gradient information for the iterative reconstruction, but these artefacts are highly structured, and we can train a CNN that uses the approximate information to perform an iterative reconstruction. We show the feasibility of the method for human in-vivo measurements in a limited-view geometry. The proposed method is able to produce results superior to total variation reconstructions, with a 32-fold speed-up.
Andreas Hauptmann, Ben Cox, Felix Lucka, Nam Huynh, Marta Betcke, Paul Beard, Simon Arridge
Deep Learning Based Image Reconstruction for Diffuse Optical Tomography
Abstract
Diffuse optical tomography (DOT) is a relatively new imaging modality that has demonstrated its clinical potential for probing tumors in a non-invasive and affordable way. Image reconstruction is an ill-posed, challenging task because knowledge of the exact analytic inverse transform does not exist a priori, especially in the presence of sensor non-idealities and noise. Standard reconstruction approaches involve approximating the inverse function and often require expert parameter tuning to optimize reconstruction performance. In this work, we evaluate the use of a deep learning model to reconstruct images directly from their corresponding DOT projection data. The inverse problem is solved by training the model on pairs created using physics-based simulation. Both quantitative and qualitative results indicate the superiority of the proposed network compared to an analytic technique.
Hanene Ben Yedder, Aïcha BenTaieb, Majid Shokoufi, Amir Zahiremami, Farid Golnaraghi, Ghassan Hamarneh
Image Reconstruction via Variational Network for Real-Time Hand-Held Sound-Speed Imaging
Abstract
Speed-of-sound is a biomechanical property for quantitative tissue differentiation, with great potential as a new ultrasound-based image modality. A conventional ultrasound array transducer can be used together with an acoustic mirror, or so-called reflector, to reconstruct sound-speed images from time-of-flight measurements to the reflector collected between transducer element pairs, which constitutes a challenging limited-angle computed tomography problem. For this problem, we herein present a variational network based image reconstruction architecture built on optimization loop unrolling, and provide an efficient protocol for training this network on fully synthetic inclusion data. Our results indicate that the learned model generalizes well, being able to reconstruct images with significantly different statistics compared to the training set. Complex inclusion geometries were successfully reconstructed, improving over the prior art by 23% in reconstruction error and by 10% in contrast on synthetic data. In a phantom study, we demonstrated the detection of multiple inclusions that were not distinguishable by prior-art reconstruction, while improving the contrast by 27% for a stiff inclusion and by 219% for a soft inclusion. Our reconstruction algorithm takes approximately 10 ms, enabling its use as a real-time imaging method on an ultrasound machine, for which we demonstrate an example preliminary setup herein.
Valery Vishnevskiy, Sergio J. Sanabria, Orcun Goksel
Towards Arbitrary Noise Augmentation—Deep Learning for Sampling from Arbitrary Probability Distributions
Abstract
Accurate noise modelling is important for training deep learning reconstruction algorithms. While noise models are well known for traditional imaging techniques, the noise distribution of a novel sensor may be difficult to determine a priori. Therefore, we propose to learn arbitrary noise distributions. To do so, we introduce a fully connected neural network model that maps samples from a uniform distribution to samples of any explicitly known probability density function. During training, the Jensen-Shannon divergence between the distribution of the model’s output and the target distribution is minimized.

We experimentally demonstrate that our model converges towards the desired state. It provides an alternative to existing sampling methods such as inversion sampling, rejection sampling, Gaussian mixture models, and Markov chain Monte Carlo. Our model has high sampling efficiency and is easily applied to any probability distribution, without the need for further analytical or numerical calculations.
Felix Horger, Tobias Würfl, Vincent Christlein, Andreas Maier
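For contrast with the learned sampler above, inversion sampling, one of the classical baselines the abstract names, pushes uniform samples through the target distribution's inverse CDF. A minimal NumPy sketch using the exponential distribution as an illustrative target:

```python
import numpy as np

def inversion_sample(inv_cdf, n, rng):
    """Classical inversion sampling: draw U ~ Uniform(0, 1) and
    return inv_cdf(U), which is distributed according to the CDF."""
    u = rng.random(n)
    return inv_cdf(u)

rng = np.random.default_rng(0)
# Exponential(1): CDF F(x) = 1 - exp(-x), so F^{-1}(u) = -log(1 - u)
samples = inversion_sample(lambda u: -np.log1p(-u), 100_000, rng)
print(round(samples.mean(), 1))  # Exponential(1) has mean 1 -> 1.0
```

The catch, which motivates the learned approach, is that a closed-form inverse CDF is unavailable for most real sensor noise distributions, whereas the trained network only needs the density to be known explicitly.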
Left Atria Reconstruction from a Series of Sparse Catheter Paths Using Neural Networks
Abstract
Modeling and reconstructing the shape of a heart chamber from partial or noisy data is useful in many (minimally) invasive heart procedures. We propose a method to reconstruct the shape of the left atria during the electrophysiology procedure from a series of simple catheter maneuvers. We use left atria shapes generated from a statistically-based physical model and approximate traversal locations of catheter maneuvers inside the left atria. These paths mimic realistic ones achievable in a lab phantom. We demonstrate the ability of a deep neural network to approximate the atria shape based solely on the given paths. We compare the results against training from partial data generated by the intersection of a randomly generated sphere and the atria. We test the presented network on actual lab phantoms and show promising results.
Alon Baram, Moshe Safran, Avi Ben-Cohen, Hayit Greenspan
High Quality Ultrasonic Multi-line Transmission Through Deep Learning
Abstract
Frame rate is a crucial consideration in cardiac ultrasound imaging and 3D sonography. Several methods have been proposed in the medical ultrasound literature aiming at accelerating the image acquisition. In this paper, we consider one such method called multi-line transmission (MLT), in which several evenly separated focused beams are transmitted simultaneously. While MLT reduces the acquisition time, it comes at the expense of a heavy loss of contrast due to the interactions between the beams (cross-talk artifact). In this paper, we introduce a data-driven method to reduce the artifacts arising in MLT. To this end, we propose to train an end-to-end convolutional neural network consisting of correction layers followed by a constant apodization layer. The network is trained on pairs of raw data obtained through MLT and the corresponding single-line transmission (SLT) data. Experimental evaluation demonstrates significant improvement both in the visual image quality and in objective measures such as contrast ratio and contrast-to-noise ratio, while preserving resolution unlike traditional apodization-based methods. We show that the proposed method is able to generalize well across different patients and anatomies on real and phantom data.
Sanketh Vedula, Ortal Senouf, Grigoriy Zurakhov, Alex Bronstein, Michael Zibulevsky, Oleg Michailovich, Dan Adam, Diana Gaitini
Backmatter
Metadata
Title
Machine Learning for Medical Image Reconstruction
Edited by
Florian Knoll
Prof. Dr. Andreas Maier
Daniel Rueckert
Copyright Year
2018
Electronic ISBN
978-3-030-00129-2
Print ISBN
978-3-030-00128-5
DOI
https://doi.org/10.1007/978-3-030-00129-2
