
2017 | Book

Simulation and Synthesis in Medical Imaging

Second International Workshop, SASHIMI 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 10, 2017, Proceedings

Edited by: Sotirios A. Tsaftaris, Ali Gooya, Prof. Alejandro F. Frangi, Jerry L. Prince

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the Second International Workshop on Simulation and Synthesis in Medical Imaging, held in conjunction with MICCAI 2017, in Québec City, Canada, in September 2017.

The 11 revised full papers presented were carefully reviewed and selected from 14 submissions. The contributions span the following broad categories: cross-modality (PET/MR, PET/CT, CT/MR, etc.) image synthesis, simulation and synthesis from large-scale image databases, automated techniques for image quality assessment, and several applications of image synthesis and simulation in medical imaging, such as image interpolation and segmentation, image reconstruction, cell imaging, and blood flow.

Table of Contents

Frontmatter

Synthesis and Its Applications in Computational Medical Imaging

Frontmatter
Adversarial Image Synthesis for Unpaired Multi-modal Cardiac Data
Abstract
This paper demonstrates the potential for synthesis of medical images in one modality (e.g. MR) from images in another (e.g. CT) using a CycleGAN [24] architecture. The synthesis can be learned from unpaired images and applied directly to expand the quantity of training data available for a given task. We demonstrate this approach by synthesising cardiac MR images from CT images, using a dataset of MR and CT images from different patients. Since no ground truth images exist, the synthetic images cannot be evaluated directly; we therefore demonstrate their utility by leveraging them to achieve improved segmentation results. Specifically, we show that training on both real and synthetic data increases accuracy by 15% compared to training on real data alone. Moreover, the synthetic data is of sufficient quality to train a segmentation neural network on its own, achieving 95% of the accuracy of the same model trained on real data.
Agisilaos Chartsias, Thomas Joyce, Rohan Dharmakumar, Sotirios A. Tsaftaris
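To make the cycle-consistency idea behind this and the following paper concrete, here is a minimal PyTorch-style sketch; the generator modules, names, and weighting are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the cycle-consistency loss in CycleGAN-style
# unpaired synthesis; G_mr2ct and G_ct2mr are any generator modules
# (hypothetical names). lam = 10 follows the common CycleGAN choice.
import torch

def cycle_loss(G_mr2ct, G_ct2mr, real_mr, real_ct, lam=10.0):
    fake_ct = G_mr2ct(real_mr)          # MR -> synthetic CT
    fake_mr = G_ct2mr(real_ct)          # CT -> synthetic MR
    rec_mr = G_ct2mr(fake_ct)           # MR -> CT -> MR round trip
    rec_ct = G_mr2ct(fake_mr)           # CT -> MR -> CT round trip
    # L1 reconstruction: a round trip through both generators
    # should return the input image
    return lam * (torch.mean(torch.abs(rec_mr - real_mr)) +
                  torch.mean(torch.abs(rec_ct - real_ct)))
```

During training, this term is added to the adversarial losses from the two modality-specific discriminators.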
Deep MR to CT Synthesis Using Unpaired Data
Abstract
MR-only radiotherapy treatment planning requires accurate MR-to-CT synthesis. Current deep learning methods for MR-to-CT synthesis depend on pairwise aligned MR and CT training images of the same patient. However, misalignment between paired images can lead to errors in the synthesized CT images. To overcome this, we propose to train a generative adversarial network (GAN) with unpaired MR and CT images. A GAN consisting of two synthesis convolutional neural networks (CNNs) and two discriminator CNNs was trained with cycle consistency to transform 2D brain MR image slices into 2D brain CT image slices and vice versa. Brain MR and CT images of 24 patients were analyzed. A quantitative evaluation showed that the model synthesized CT images that closely approximate the reference CT images, and that it outperformed a GAN model trained with paired MR and CT images.
Jelmer M. Wolterink, Anna M. Dinkla, Mark H. F. Savenije, Peter R. Seevinck, Cornelis A. T. van den Berg, Ivana Išgum
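As a reading aid, here is a small sketch of the kind of quantitative comparison such evaluations rest on; the metrics and the assumed intensity range are common conventions, not necessarily this paper's exact protocol.

```python
# Comparing a synthesized CT against a co-registered reference CT
# (array names are hypothetical; both are arrays of HU values).
import numpy as np

def mae_hu(syn_ct, ref_ct):
    """Mean absolute error in Hounsfield units."""
    return float(np.mean(np.abs(syn_ct - ref_ct)))

def psnr(syn_ct, ref_ct, data_range=4095.0):
    """Peak signal-to-noise ratio over an assumed CT intensity range."""
    mse = np.mean((syn_ct - ref_ct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```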
Synthesizing CT from Ultrashort Echo-Time MR Images via Convolutional Neural Networks
Abstract
With the increasing popularity of PET-MR scanners in clinical applications, synthesis of CT images from MR has become an important research topic. Accurate PET image reconstruction requires attenuation correction, which is based on the electron density of tissues and can be obtained from CT images. While CT measures electron density information for x-ray photons, MR images convey information about the magnetic properties of tissues. Therefore, with the advent of PET-MR systems, the attenuation coefficients need to be estimated indirectly from MR images. In this paper, we propose a fully convolutional neural network (CNN) based method to synthesize head CT from ultra-short echo-time (UTE) dual-echo MR images. Unlike traditional \(T_1\)-w images, which do not have any bone signal, UTE images show some signal for bone, which makes them good candidates for MR to CT synthesis. A notable advantage of our approach is that accurate results were achieved with a small training data set. Using an atlas of a single CT and dual-echo UTE pair, we train a deep neural network model to learn the transform of MR intensities to CT using patches. We compared our CNN based model with a state-of-the-art registration based method as well as a Bayesian model based CT synthesis method, and showed that the proposed CNN model outperforms both. We also evaluated the proposed model when only \(T_1\)-w images are available instead of UTE, and showed that UTE images produce better synthesis than \(T_1\)-w images alone.
Snehashis Roy, John A. Butman, Dzung L. Pham
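For orientation, the following sketch shows how patch-based training data for such a regression might be assembled; the array shapes, patch size, and sampling scheme are assumptions, not the authors' settings.

```python
# Sampling 2D patches from co-registered dual-echo UTE volumes, with
# the CT value at each patch centre as the regression target.
import numpy as np

def sample_patches(ute1, ute2, ct, n=1000, size=25, rng=None):
    """ute1, ute2, ct: co-registered 3D volumes of equal shape."""
    rng = rng or np.random.default_rng(0)
    h = size // 2
    X, y = [], []
    for _ in range(n):
        z = rng.integers(0, ute1.shape[0])
        r = rng.integers(h, ute1.shape[1] - h)
        c = rng.integers(h, ute1.shape[2] - h)
        # stack both echoes as channels of one training patch
        patch = np.stack([ute1[z, r-h:r+h+1, c-h:c+h+1],
                          ute2[z, r-h:r+h+1, c-h:c+h+1]])
        X.append(patch)
        y.append(ct[z, r, c])
    return np.asarray(X), np.asarray(y)
```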
A Supervoxel Based Random Forest Synthesis Framework for Bidirectional MR/CT Synthesis
Abstract
Synthesizing magnetic resonance (MR) and computed tomography (CT) images (from each other) has important implications for clinical neuroimaging. The MR to CT direction is critical for MRI-based radiotherapy planning and dose computation, whereas the CT to MR direction can provide an economic alternative to real MRI for image processing tasks. Additionally, synthesis in both directions can enhance MR/CT multi-modal image registration. Existing approaches have focused on synthesizing CT from MR. In this paper, we propose a multi-atlas based hybrid method to synthesize T1-weighted MR images from CT and CT images from T1-weighted MR images using a common framework. The task is carried out by: (a) computing a label field based on supervoxels for the subject image using joint label fusion; (b) correcting this result using a random forest classifier (RF-C); (c) spatial smoothing using a Markov random field; (d) synthesizing intensities using a set of RF regressors, one trained for each label. The algorithm is evaluated using a set of six registered CT and MR image pairs of the whole head.
Can Zhao, Aaron Carass, Junghoon Lee, Amod Jog, Jerry L. Prince
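Step (d) of the framework can be pictured with a short scikit-learn sketch; the feature design and forest settings here are placeholders, not the paper's configuration.

```python
# One random-forest regressor per tissue label: train on voxels of
# that label, then synthesize intensities label by label.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_per_label_regressors(features, intensities, labels, n_trees=50):
    """features: (n_voxels, n_feats); intensities: target-modality
    values; labels: the label field from steps (a)-(c)."""
    regressors = {}
    for lab in np.unique(labels):
        mask = labels == lab
        rf = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
        rf.fit(features[mask], intensities[mask])
        regressors[lab] = rf
    return regressors

def synthesize(features, labels, regressors):
    out = np.empty(len(features))
    for lab, rf in regressors.items():
        mask = labels == lab
        out[mask] = rf.predict(features[mask])
    return out
```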
Region-Enhanced Joint Dictionary Learning for Cross-Modality Synthesis in Diffusion Tensor Imaging
Abstract
Diffusion tensor imaging (DTI) has notoriously long acquisition times, and the sensitivity of the tensor computation often makes this technique vulnerable to various interferences, for example physiological motion, limited scanning time, and patients' differing medical conditions. Neuroimaging studies usually involve multiple modalities, and we consider the problem of inferring key information in DTI from other modalities. To address this problem, several cross-modality image synthesis approaches have been proposed recently, in which the content of one image modality is reproduced based on that of another. However, these methods typically focus on two modalities of the same complexity. In this work we propose a region-enhanced joint dictionary learning method that incorporates region-specific information in a joint learning manner. The proposed method encodes intrinsic differences among modalities, while the jointly learned dictionaries preserve common structures among them. Experimental results show that our approach has desirable properties for cross-modality image synthesis in diffusion tensor imaging.
Danyang Wang, Yawen Huang, Alejandro F. Frangi
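One simple way to picture joint dictionary learning is to stack paired patch features from both modalities so that a single dictionary couples them. The scikit-learn sketch below shows only that simplification, not the region-enhanced method itself; the dimensions and data are illustrative.

```python
# Joint dictionary over stacked modalities: each row couples a pair
# of corresponding patches, so the learned atoms split into two
# aligned sub-dictionaries.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X_a = rng.random((200, 64))      # placeholder patches, modality A
X_b = rng.random((200, 64))      # corresponding patches, modality B
X_joint = np.hstack([X_a, X_b])  # one row couples both modalities

dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20)
codes = dico.fit_transform(X_joint)   # shared sparse codes
D_a = dico.components_[:, :64]        # sub-dictionary for modality A
D_b = dico.components_[:, 64:]        # sub-dictionary for modality B
# At test time: sparse-code a patch of modality A against D_a, then
# reconstruct its counterpart in modality B with the same codes and D_b.
```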
Virtual PET Images from CT Data Using Deep Convolutional Networks: Initial Results
Abstract
In this work we present a novel system for PET estimation from CT scans. We explore the use of fully convolutional networks (FCN) and conditional generative adversarial networks (GAN) to estimate PET data from CT data. Our dataset includes 25 pairs of PET and CT scans, of which 17 were used for training and 8 for testing. The system was tested for detection of malignant tumors in the liver region. Initial results look promising, showing high detection performance with a TPR of 92.3% and an FPR of 0.25 per case. Future work entails expanding the current system to the entire body using a much larger dataset. Such a system could be used for tumor detection and drug treatment evaluation in a CT-only environment, instead of the expensive and radioactive PET-CT scan.
Avi Ben-Cohen, Eyal Klang, Stephen P. Raskin, Michal Marianne Amitai, Hayit Greenspan
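For clarity, the reported numbers correspond to metrics of roughly the following form; the per-case bookkeeping is an assumption, not the paper's exact matching protocol.

```python
# Lesion-detection metrics: true-positive rate over all lesions and
# false positives averaged per case (scan).
def detection_metrics(cases):
    """cases: list of (n_true_lesions, n_detected, n_false_pos) per scan."""
    n_true = sum(c[0] for c in cases)
    n_hit = sum(c[1] for c in cases)
    n_false = sum(c[2] for c in cases)
    return n_hit / n_true, n_false / len(cases)  # TPR, FP per case
```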

Simulation and Processing Approaches for Medical Imaging

Frontmatter
Semi-supervised Assessment of Incomplete LV Coverage in Cardiac MRI Using Generative Adversarial Nets
Abstract
Cardiac magnetic resonance (CMR) images play a growing role in diagnostic imaging of cardiovascular diseases. Ensuring full coverage of the left ventricle (LV) is a basic criterion of CMR image quality: complete LV coverage, from base to apex, is a prerequisite for accurate cardiac volume and functional assessment. Incomplete coverage of the LV is identified through visual inspection, which is time-consuming and usually done retrospectively in large imaging cohorts. In this paper, we propose a novel semi-supervised method, which we call Semi-Coupled GANs (SCGANs), to check LV coverage in CMR images using generative adversarial networks (GANs). To identify missing basal and apical slices in a CMR volume, a two-stage framework is proposed. First, the SCGANs generate adversarial examples and extract high-level features from the CMR images; then these image attributes are used to detect missing basal and apical slices. We conducted extensive experiments to validate the proposed method on more than 6000 independent volumetric MR scans from the UK Biobank, achieving accurate and robust missing-slice detection comparable with state-of-the-art deep learning methods. The proposed method can, in principle, be adapted to other CMR image data for LV coverage assessment.
Le Zhang, Ali Gooya, Alejandro F. Frangi
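A toy version of the second stage only, assuming feature vectors from the GAN stage are already available; the classifier choice and feature layout are assumptions for illustration.

```python
# Detecting missing basal/apical slices from per-volume feature
# vectors (hypothetical inputs extracted by the first stage).
from sklearn.linear_model import LogisticRegression

def fit_detector(features, present):
    """features: (n_volumes, n_feats); present: 1 if the slice exists."""
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(features, present)

def full_coverage(clf_base, clf_apex, f_base, f_apex):
    # a volume has full LV coverage only if both ends are detected
    return bool(clf_base.predict(f_base[None])[0]) and \
           bool(clf_apex.predict(f_apex[None])[0])
```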
High Order Slice Interpolation for Medical Images
Abstract
In this paper we introduce a high-order object- and intensity-based method for slice interpolation. Similar structures along the slices are registered using a symmetric similarity measure to calculate displacement fields between neighboring slices. The intensity-based, curvature-regularized registration needs no manual landmarks, but the structures in two subsequent slices have to be similar. The set of displacement fields is used to calculate a natural spline interpolation of the structural motion that avoids kinks. Along every correspondence-point trajectory, high-order interpolating splines are likewise calculated for the gray values. We test our method on an artificial scenario and on real MR images. Leave-one-slice-out evaluations show that the proposed method improves slice estimation compared to piecewise linear registration-based slice interpolation and cubic interpolation.
Antal Horváth, Simon Pezold, Matthias Weigel, Katrin Parmar, Philippe Cattin
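The trajectory-spline step can be sketched with SciPy's natural cubic splines; the sample trajectory and gray values below are made up, and the paper's registration machinery is omitted.

```python
# A point's registered positions across slices, and its gray values,
# interpolated with natural cubic splines (no kinks at the knots).
import numpy as np
from scipy.interpolate import CubicSpline

z = np.arange(5.0)                               # slice positions
xs = np.array([10.0, 10.4, 11.1, 11.5, 11.6])    # x-position per slice
gs = np.array([80.0, 95.0, 120.0, 110.0, 90.0])  # gray value per slice

x_of_z = CubicSpline(z, xs, bc_type='natural')   # structural motion
g_of_z = CubicSpline(z, gs, bc_type='natural')   # intensity along trajectory
print(x_of_z(2.5), g_of_z(2.5))  # values for a slice halfway between 2 and 3
```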
A Monte Carlo Framework for Low Dose CT Reconstruction Testing
Abstract
We propose a framework using freely available tools for the synthesis of physically realistic CT measurements for low dose reconstruction development and validation, using a fully sampled Monte Carlo method. This allows the generation of test data that has artefacts such as photon starvation, beam-hardening and scatter, that are both physically realistic and not unfairly biased towards model-based iterative reconstruction (MBIR) algorithms. Using the open source Monte Carlo tool GATE and spectrum simulator SpekCalc, we describe how physical elements such as source, specimen and detector may be modelled, and demonstrate the construction of fan-beam and cone-beam CT systems. We then show how this data may be consolidated and used with image reconstruction tools. We give examples with a low dose polyenergetic source, and quantitatively analyse reconstructions against the numerical ground-truth for MBIR with simulated and ‘inverse crime’ data. The proposed framework offers a flexible and easily reproducible tool to aid MBIR development, and may reduce the gap between synthetic and clinical results.
Jonathan H. Mason, William H. Nailon, Mike E. Davies
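The paper's measurements come from full Monte Carlo simulation with GATE; for context, the sketch below shows the simpler analytic low-dose model that such simulations generalize, where detected photon counts follow a Poisson law. The incident photon count and names are illustrative.

```python
# Analytic low-dose measurement model: I ~ Poisson(I0 * exp(-p)),
# then log-transform back to noisy line integrals.
import numpy as np

def noisy_line_integrals(p, I0=1e4, rng=None):
    """p: noiseless line integrals (a sinogram array)."""
    rng = rng or np.random.default_rng(0)
    counts = rng.poisson(I0 * np.exp(-p))
    counts = np.maximum(counts, 1)   # photon starvation: guard the log
    return -np.log(counts / I0)
```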
Multimodal Simulations in Live Cell Imaging
Abstract
During the last two decades, a large number of new simulation frameworks have emerged in the field of cell imaging. They were expected to serve as performance assessment tools for newly developed as well as already existing cell segmentation and tracking algorithms. These simulators have typically been designed as single-purpose tools, generating synthetic image data for one particular modality and one particular cell type. In this study, we introduce a novel multipurpose simulation framework that produces synthetic time-lapse image sequences of living endothelial cells for two different modalities, fluorescence and phase contrast microscopy, each in either widefield or confocal mode. This may help in evaluating a wider range of image processing algorithms across multiple modalities.
David Svoboda, Michal Kozubek
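As a rough illustration of what such a simulator produces for the fluorescence modality, here is a generic image-formation chain; the framework's actual models are modality- and cell-type-specific, and all parameters below are assumptions.

```python
# Generic fluorescence simulation: ground-truth phantom -> PSF blur ->
# photon (Poisson) noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_fluorescence(phantom, psf_sigma=2.0, photons=200.0, rng=None):
    """phantom: nonnegative 2D array of fluorophore density in [0, 1]."""
    rng = rng or np.random.default_rng(0)
    blurred = gaussian_filter(phantom.astype(float), sigma=psf_sigma)
    return rng.poisson(blurred * photons) / photons   # noisy, rescaled
```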
Medical Image Processing and Numerical Simulation for Digital Hepatic Parenchymal Blood Flow
Abstract
This paper deals with personalized simulation of blood flow within the liver parenchyma, considering a complete pipeline of medical image segmentation, organ volume reconstruction, and numerical simulation of blood diffusion. To do so, we employ model-based segmentation algorithms developed with the ITK/VTK libraries, the CATIA software for NURBS-based volumetric reconstruction, and the Abaqus solver for simulation of Darcy's law. After presenting experimental results for each step, we explore the scientific and technical bottlenecks that must be overcome so that a valid digital hepatic blood flow phantom can be developed in our future research, in direct relation to current open challenges in this domain.
Marie-Ange Lebre, Khaled Arrouk, Anh-Khoa Võ Văn, Aurélie Leborgne, Manuel Grand-Brochier, Pierre Beaurepaire, Antoine Vacavant, Benoît Magnin, Armand Abergel, Pascal Chabrot
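For readers new to Darcy's law: the flux is \(q = -(k/\mu)\nabla p\), and incompressible flow gives \(\nabla \cdot (k \nabla p) = 0\), which for homogeneous permeability reduces to Laplace's equation. The toy solve below uses strong simplifying assumptions (homogeneous k, 2D grid, illustrative boundary values) and is not the paper's Abaqus setup.

```python
# Toy pressure solve for Darcy flow with homogeneous permeability:
# Jacobi iteration on Laplace's equation, inlet left, outlet right.
import numpy as np

def solve_pressure(shape=(64, 64), p_in=1.0, p_out=0.0, iters=5000):
    p = np.zeros(shape)
    p[:, 0], p[:, -1] = p_in, p_out       # Dirichlet boundaries
    for _ in range(iters):
        # average of the four neighbours at every interior point
        p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                p[1:-1, :-2] + p[1:-1, 2:])
    return p

p = solve_pressure()
qy, qx = np.gradient(-p)   # flow direction; Darcy flux is q = -(k/mu) grad p
```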
Backmatter
Metadata
Title
Simulation and Synthesis in Medical Imaging
Edited by
Sotirios A. Tsaftaris
Ali Gooya
Prof. Alejandro F. Frangi
Jerry L. Prince
Copyright Year
2017
Electronic ISBN
978-3-319-68127-6
Print ISBN
978-3-319-68126-9
DOI
https://doi.org/10.1007/978-3-319-68127-6