
2018 | Book

Data Driven Treatment Response Assessment and Preterm, Perinatal, and Paediatric Image Analysis

First International Workshop, DATRA 2018 and Third International Workshop, PIPPI 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings

Edited by: Andrew Melbourne, Roxane Licandro, Matthew DiFranco, Dr. Paolo Rota, Melanie Gau, Dr. Martin Kampel, Rosalind Aughwane, Pim Moeskops, Ernst Schwartz, Emma Robinson, Antonios Makropoulos

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed joint proceedings of the First International Workshop on Data Driven Treatment Response Assessment, DATRA 2018, and the Third International Workshop on Preterm, Perinatal and Paediatric Image Analysis, PIPPI 2018, held in conjunction with the 21st International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2018, in Granada, Spain, in September 2018.
The 5 full papers presented at DATRA 2018 and the 12 full papers presented at PIPPI 2018 were carefully reviewed and selected.
The DATRA papers cover a wide range of pattern recognition technologies for tackling clinical issues in the follow-up analysis of medical data, with a focus on malignancy progression analysis, computer-aided models of treatment response, and anomaly detection in recovery feedback.
The PIPPI papers cover advanced image analysis approaches for studying growth and development in the fetal, infant, and paediatric period.

Table of Contents

Frontmatter

DATRA

Frontmatter
DeepCS: Deep Convolutional Neural Network and SVM Based Single Image Super-Resolution
Abstract
Computer-based patient monitoring systems help keep track of patients' responsiveness over the course of treatment. The development of healthcare systems that require minimal or no human intervention is one of the most essential elements of smart cities, and computer vision and machine learning techniques provide numerous ways to improve the efficiency of such automated systems. Image super-resolution (SR) has been an active area of research in computer vision for the past couple of decades. SR algorithms are offline and independent of image capturing devices, making them suitable for applications such as video surveillance, medical image analysis, and remote sensing. This paper proposes a learning-based SR algorithm for generating high resolution (HR) images from low resolution (LR) images. The proposed approach fuses a deep convolutional neural network (CNN) with support vector machines (SVM) with regression for learning and reconstruction: deep neural networks exhibit better approximation, while support vector machines work well in decision making. Experiments with retinal images from RIMONE and CHASEDB show that the proposed approach outperforms existing image super-resolution approaches in terms of peak signal to noise ratio (PSNR) as well as mean squared error (MSE).
Jebaveerasingh Jebadurai, J. Dinesh Peter
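The fusion described above, a CNN for feature learning feeding an SVM with regression for reconstruction, can be sketched as follows. This is a minimal illustration under assumed patch shapes and hyperparameters, not the authors' implementation:

```python
import torch
import torch.nn as nn
from sklearn.svm import SVR

class PatchFeatures(nn.Module):
    """Small CNN used purely as a learned feature extractor for
    low-resolution patches (the architecture is an illustrative guess)."""
    def __init__(self, n_feats=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, n_feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_feats, n_feats, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # one value per feature map
        )

    def forward(self, x):                       # x: (N, 1, p, p)
        return self.net(x).flatten(1)           # (N, n_feats)

def fit_sr_regressor(lr_patches, hr_centres, extractor):
    """Fit an SVR mapping CNN features of an LR patch to the HR
    intensity at the patch centre; applied patch by patch at test
    time, this yields the HR reconstruction."""
    with torch.no_grad():
        feats = extractor(torch.from_numpy(lr_patches).float()).numpy()
    svr = SVR(kernel="rbf", C=1.0, epsilon=0.01)
    svr.fit(feats, hr_centres)
    return svr
```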
Automatic Segmentation of Thigh Muscle in Longitudinal 3D T1-Weighted Magnetic Resonance (MR) Images
Abstract
The quantification of muscle mass is important in clinical populations with chronic paralysis, cachexia, and sarcopenia. This is especially true when testing interventions designed to maintain or improve muscle mass. The purpose of this paper is to report on an automated MRI-based thigh muscle segmentation framework that minimizes longitudinal deviation by using femur segmentation as a reference in a two-phase registration. Imaging data from seven patients with severe multiple sclerosis who had undergone MRI scans at multiple time points were used to develop and validate our method. The proposed framework results in robust, automated co-registration between baseline and follow-up scans, and generates a reliable thigh muscle mask that excludes intramuscular fat.
Zihao Tang, Chenyu Wang, Phu Hoang, Sidong Liu, Weidong Cai, Domenic Soligo, Ruth Oliver, Michael Barnett, Ché Fornusek
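The two-phase idea, rigid alignment driven by the femur segmentation followed by deformable refinement of the whole thigh, might look like this in SimpleITK; the metrics, optimizers, and mesh size below are assumptions for illustration, not the authors' pipeline:

```python
import SimpleITK as sitk

def two_phase_register(base_img, follow_img, base_femur, follow_femur):
    """Phase 1: femur-driven rigid alignment; phase 2: whole-thigh
    deformable refinement initialised by phase 1. A sketch only."""
    base = sitk.Cast(base_img, sitk.sitkFloat32)
    follow = sitk.Cast(follow_img, sitk.sitkFloat32)
    femur_f = sitk.Cast(base_femur, sitk.sitkFloat32)
    femur_m = sitk.Cast(follow_femur, sitk.sitkFloat32)

    # Phase 1: rigid registration of the femur masks.
    init = sitk.CenteredTransformInitializer(
        femur_f, femur_m, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    r1 = sitk.ImageRegistrationMethod()
    r1.SetMetricAsMeanSquares()
    r1.SetInterpolator(sitk.sitkLinear)
    r1.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
    r1.SetInitialTransform(init, inPlace=False)
    rigid = r1.Execute(femur_f, femur_m)

    # Phase 2: B-spline refinement of the T1-weighted images,
    # starting from the femur-based rigid alignment.
    moved = sitk.Resample(follow, base, rigid)
    r2 = sitk.ImageRegistrationMethod()
    r2.SetMetricAsMattesMutualInformation(50)
    r2.SetInterpolator(sitk.sitkLinear)
    r2.SetOptimizerAsLBFGSB()
    r2.SetInitialTransform(
        sitk.BSplineTransformInitializer(base, [8, 8, 8]), inPlace=False)
    deform = r2.Execute(base, moved)
    return rigid, deform
```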
Detecting Bone Lesions in Multiple Myeloma Patients Using Transfer Learning
Abstract
The detection of bone lesions is important for the diagnosis and staging of multiple myeloma patients. The scarce availability of annotated data renders training of automated detectors challenging. Here, we present a transfer learning approach using convolutional neural networks to detect bone lesions in computed tomography imaging data. We compare different learning approaches and demonstrate that pretraining a convolutional neural network on natural images improves detection accuracy. We also describe a patch extraction strategy that encodes different information into each input channel of the network. We train and evaluate our approach on a dataset with 660 annotated bone lesions, and show how the resulting marker map highlights lesions in computed tomography imaging data.
Matthias Perkonigg, Johannes Hofmanninger, Björn Menze, Marc-André Weber, Georg Langs
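One plausible instantiation of the channel-encoding strategy is to place different CT intensity windowings of the same patch into the three input channels expected by an ImageNet-pretrained network; the specific windows below are assumptions, not the paper's choice:

```python
import numpy as np

def hu_window(patch, center, width):
    """Clip a CT patch (in Hounsfield units) to a window and scale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((patch - lo) / (hi - lo), 0.0, 1.0)

def three_channel_patch(ct_patch):
    """Encode three windowings of one patch as the RGB channels of a
    pretrained CNN input (illustrative windows)."""
    bone = hu_window(ct_patch, center=300, width=1500)   # bone detail
    soft = hu_window(ct_patch, center=40, width=400)     # soft tissue
    wide = hu_window(ct_patch, center=0, width=2000)     # broad context
    return np.stack([bone, soft, wide], axis=0)          # (3, H, W)
```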
Quantification of Local Metabolic Tumor Volume Changes by Registering Blended PET-CT Images for Prediction of Pathologic Tumor Response
Abstract
Quantification of local metabolic tumor volume (MTV) changes after chemo-radiotherapy would allow accurate tumor response evaluation. Currently, local MTV changes in esophageal (soft-tissue) cancer are measured by registering follow-up PET to baseline PET using the same transformation obtained by deformable registration of follow-up CT to baseline CT. Such an approach is suboptimal because PET and CT capture fundamentally different properties of a tumor (metabolism vs. anatomy). In this work, we combined PET and CT images into a single blended PET-CT image and registered the follow-up blended PET-CT image to the baseline blended PET-CT image. B-spline regularized diffeomorphic registration was used to characterize the large MTV shrinkage. The Jacobian of the resulting transformation was computed to measure the local MTV changes. Radiomic features (intensity and texture) were then extracted from the Jacobian map to predict pathologic tumor response. Local MTV changes calculated using blended PET-CT registration achieved the highest correlation with ground truth segmentation (R = 0.88) compared to PET-PET (R = 0.80) and CT-CT (R = 0.67) registrations. Moreover, using blended PET-CT registration, the multivariate prediction model achieved the highest accuracy with only one Jacobian co-occurrence texture feature (accuracy = 82.3%). This novel framework can replace the conventional approach that applies the CT-CT transformation to the PET data for longitudinal evaluation of tumor response.
Sadegh Riyahi, Wookjin Choi, Chia-Ju Liu, Saad Nadeem, Shan Tan, Hualiang Zhong, Wengen Chen, Abraham J. Wu, James G. Mechalakos, Joseph O. Deasy, Wei Lu
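The Jacobian computation at the heart of this pipeline is a standard operation on the displacement field produced by the registration; a minimal sketch with SimpleITK (file names are placeholders):

```python
import SimpleITK as sitk

# Displacement field from registering the follow-up blended PET-CT
# to the baseline. Jacobian determinant values below 1 indicate
# local shrinkage, values above 1 local expansion.
disp = sitk.ReadImage("followup_to_baseline_disp.nii.gz",
                      sitk.sitkVectorFloat64)
jacobian = sitk.DisplacementFieldJacobianDeterminant(disp)
sitk.WriteImage(jacobian, "jacobian_map.nii.gz")
```

Radiomic intensity and texture features would then be extracted from this Jacobian map within the tumor region.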
Optimizing External Surface Sensor Locations for Respiratory Tumor Motion Prediction
Abstract
Real-time tracking of tumor motion due to the patient's respiratory cycle is a crucial task in radiotherapy treatments. In this work, a proof-of-concept setup is presented in which real-time tracked, skin-attached external sensors are used to predict the internal tumor locations. The spatiotemporal relationships between external sensors and targets during the respiratory cycle are modeled using Gaussian Process regression and trained on a preoperative 4D-CT image sequence of the respiratory cycle. A large set (\(N \approx 25\)) of computed tomography (CT) markers is attached to the patient's skin before CT acquisition to serve as candidate sensor locations, from which a smaller subset (\(N \approx 6\)) is selected based on their combined predictive power using a genetic-algorithm-based optimization technique. A custom 3D-printed sensor holder allows accurate preoperative positioning of optical or electromagnetic sensors at the best predictive CT marker locations, which are then used for real-time prediction of the internal tumor locations. The method is validated on an artificial respiratory phantom model, which represents the candidate external locations (fiducials) and internal targets (tumors) with CT markers. A 4D-CT image sequence with 11 time steps at different phases of the respiratory cycle was acquired. Within this test setup, the CT markers for both internal and external structures are automatically determined in the CT images by a morphology-based algorithm. The method's in-sample cross-validation accuracy in the training set, as given by the average root mean-squared error (RMSE), is between 0.00024 and 0.072 mm.
Yusuf Özbek, Zoltan Bardosi, Srdjan Milosavljevic, Wolfgang Freysinger
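A minimal sketch of the modelling step, a Gaussian Process mapping external sensor coordinates to the internal target, together with the subset fitness a genetic search could optimize (the kernel and the leave-one-phase-out fitness are assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_breathing_model(external, target):
    """external: (n_phases, 3 * n_sensors) stacked sensor coordinates;
    target: (n_phases, 3) internal tumor position per phase."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(external, target)
    return gp

def subset_rmse(external, target, sensor_idx):
    """Leave-one-phase-out RMSE using only the chosen sensors; usable
    as the fitness of a genetic search over sensor subsets."""
    cols = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in sensor_idx])
    errs = []
    for k in range(len(external)):
        mask = np.arange(len(external)) != k
        gp = fit_breathing_model(external[mask][:, cols], target[mask])
        pred = gp.predict(external[k:k + 1, cols])
        errs.append(np.linalg.norm(pred - target[k]))
    return float(np.sqrt(np.mean(np.square(errs))))
```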

PIPPI

Frontmatter
Segmentation of Fetal Adipose Tissue Using Efficient CNNs for Portable Ultrasound
Abstract
Adipose tissue mass has been shown to have a strong correlation with fetal nourishment, which has consequences for health in infancy and later life. In rural areas of developing nations, ultrasound has the potential to be the key imaging modality due to its portability and cost. However, many ultrasound image analysis algorithms are not suited to such portable deployment, with many taking several minutes to compute on modern CPUs.
The contributions of this work are threefold. Firstly, by adapting the popular U-Net, we show that CNNs can achieve excellent results in fetal adipose segmentation from ultrasound images. We then propose a reduced model, U-Ception, facilitating deployment of the algorithm on mobile devices. The U-Ception network provides a 98.4% reduction in model size for a 0.6% reduction in segmentation accuracy (mean Dice coefficient). We also demonstrate the clinical applicability of the work, showing that CNNs can be used to predict a trend between gestational age and adipose area.
Sagar Vaze, Ana I. L. Namburete
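Although the exact layers are not listed here, the name U-Ception suggests Xception-style depthwise separable convolutions inside a U-Net, which is the standard route to this kind of parameter reduction; a sketch of such a block in PyTorch:

```python
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel k x k pass plus a
    1x1 pointwise pass costs k*k*Cin + Cin*Cout parameters instead of
    k*k*Cin*Cout for a standard convolution (illustrative block)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

Swapping the standard convolutions of a U-Net for such blocks is one way a ~98% smaller model can retain most of its segmentation accuracy.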
Automatic Shadow Detection in 2D Ultrasound Images
Abstract
Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis, ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise, shadow-focused confidence map from weakly labelled, anatomically-focused images. Our method (1) initializes potential shadow areas based on a classification task, (2) extends the potential shadow areas using a GAN model, and (3) adds intensity information to generate the final confidence map using a distance matrix. The proposed method accurately highlights the shadow areas in 2D ultrasound datasets comprising standard view planes as acquired during fetal screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.
Qingjie Meng, Christian Baumgartner, Matthew Sinclair, James Housden, Martin Rajchl, Alberto Gomez, Benjamin Hou, Nicolas Toussaint, Veronika Zimmer, Jeremy Tan, Jacqueline Matthew, Daniel Rueckert, Julia Schnabel, Bernhard Kainz
Multi-channel Groupwise Registration to Construct an Ultrasound-Specific Fetal Brain Atlas
Abstract
In this paper, we describe a method to construct a 3D atlas from fetal brain ultrasound (US) volumes. A multi-channel groupwise Demons registration is proposed to simultaneously register a set of images from a population to a common reference space, thereby representing the population average. Similar to the standard Demons formulation, our approach takes as input an intensity image, but with an additional channel which contains phase-based features extracted from the intensity channel. The proposed multi-channel atlas construction method is evaluated using a groupwise Dice overlap, and is shown to outperform standard (single-channel) groupwise diffeomorphic Demons registration. This method is then used to construct an atlas from US brain volumes collected from a population of 39 healthy fetal subjects at 23 gestational weeks. The resulting atlas manifests high structural overlap, and correspondence between the US-based and an age-matched fetal MRI-based atlas is observed.
Ana I. L. Namburete, Raquel van Kampen, Aris T. Papageorghiou, Bartłomiej W. Papież
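The groupwise principle, iteratively registering every subject to a running average so that the reference drifts toward the population centre, can be sketched with single-channel diffeomorphic Demons in SimpleITK; the paper's method additionally drives the update with a phase-feature channel alongside intensity, which this sketch omits:

```python
import SimpleITK as sitk

def average(images):
    """Voxelwise mean of a list of float images on a common grid."""
    acc = images[0] * 0.0
    for im in images:
        acc = acc + im
    return acc / float(len(images))

def groupwise_demons(images, n_outer=5):
    """Register each subject to the running mean, warp, and rebuild
    the mean. Single-channel sketch of the groupwise idea; repeated
    warping also accumulates interpolation error in practice."""
    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)
    demons.SetStandardDeviations(1.0)
    warped = [sitk.Cast(im, sitk.sitkFloat32) for im in images]
    for _ in range(n_outer):
        ref = average(warped)
        fields = [demons.Execute(ref, im) for im in warped]
        warped = [sitk.Warp(im, f) for im, f in zip(warped, fields)]
    return average(warped)
```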
Investigating Brain Age Deviation in Preterm Infants: A Deep Learning Approach
Abstract
This study examined postmenstrual age (PMA) estimation (in weeks) from brain diffusion MRI of very preterm infants (born <31 weeks gestational age), with the objective of investigating how differences between estimated brain age and PMA were associated with the risk of cerebral palsy (CP). Infants were scanned up to 2 times, between 29 and 46 weeks PMA. We applied a deep learning 2D convolutional neural network (CNN) regression model to estimate PMA from local image patches extracted from the diffusion MRI dataset; these local estimates were combined to form a global prediction for each MRI scan. We found that the CNN can reliably estimate PMA (Pearson's r = 0.6, p < 0.05) from MRIs acquired before 36 weeks of age ('Early' scans). These results reveal that the local fractional anisotropy (FA) measures of these very early scans preserve age-specific information. Most interestingly, infants who were later diagnosed with CP were more likely to have a younger estimated brain age from 'Early' scans; their estimated age deviations differed significantly (regression coefficient: −2.16, p < 0.05, corrected for actual age) from those of infants who were not diagnosed with CP.
Susmita Saha, Alex Pagnozzi, Joanne George, Paul B. Colditz, Roslyn Boyd, Stephen Rose, Jurgen Fripp, Kerstin Pannek
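The pooling of local patch estimates into a scan-level prediction is left unspecified above; a robust statistic such as the median is one simple way to do it. In the sketch below, the `model.predict` interface is an assumed Keras-style regressor:

```python
import numpy as np

def global_pma_estimate(model, patches):
    """Pool per-patch PMA regressions into one scan-level estimate;
    the median is robust to outlier patches."""
    local = np.array([float(model.predict(p[None])[0]) for p in patches])
    return float(np.median(local))

def brain_age_deviation(model, patches, actual_pma_weeks):
    """Negative values mean the brain appears younger than its PMA,
    the direction the study associates with later CP diagnosis."""
    return global_pma_estimate(model, patches) - actual_pma_weeks
```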
Segmentation of Pelvic Vessels in Pediatric MRI Using a Patch-Based Deep Learning Approach
Abstract
In this paper, we propose a patch-based deep learning approach to segment pelvic vessels in 3D MR images of pediatric patients. For a given T2-weighted MRI volume, a set of 2D axial patches is extracted using a limited number of user-selected landmarks. To take the volumetric information into account, successive 2D axial patches are combined into a set of pseudo-RGB color images. These RGB images are then used as input to a convolutional neural network (CNN), pre-trained on the ImageNet dataset, which produces both a segmentation and a labeling of vessels as veins or arteries. The proposed method is evaluated on 35 MRI volumes of pediatric patients, obtaining an average segmentation accuracy in terms of Average Symmetric Surface Distance of \(ASSD = 0.89 \pm 0.07\) mm and a Dice index of \(DC = 0.79 \pm 0.02\).
A. Virzì, P. Gori, C. O. Muller, E. Mille, Q. Peyrot, L. Berteloot, N. Boddaert, S. Sarnacki, I. Bloch
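The pseudo-RGB construction, stacking successive axial slices so a 2D ImageNet-pretrained CNN still sees some through-plane context, is simple to sketch:

```python
import numpy as np

def pseudo_rgb(volume, z):
    """Stack slices z-1, z, z+1 of a (Z, H, W) volume into an RGB-like
    (H, W, 3) image, clamping at the volume boundaries."""
    z0, z1 = max(z - 1, 0), min(z + 1, volume.shape[0] - 1)
    rgb = np.stack([volume[z0], volume[z], volume[z1]], axis=-1)
    return (rgb - rgb.min()) / (np.ptp(rgb) + 1e-8)  # scale to [0, 1]
```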
Multi-view Image Reconstruction: Application to Fetal Ultrasound Compounding
Abstract
Ultrasound (US), a standard diagnostic tool to detect fetal abnormalities, is a direction-dependent imaging modality, i.e., the position of the probe strongly influences the appearance of the image. View-dependent artifacts such as shadows can obstruct parts of the anatomy of interest and degrade the quality and usefulness of the image. If multiple images of the same structure are acquired from different views, view-dependent artifacts can be minimized.
In this work, we propose a new US image reconstruction technique using multiple B-spline grids to enable multi-view US image compounding. The B-spline coefficients of different control point grids adapted to the geometry of the data are simultaneously optimized at every resolution level. Data points are weighted depending on their view, position and intensity. We demonstrate our method on the compounding of co-planar 2D fetal US images acquired from multiple views. Using quantitative and qualitative evaluation scores, we show that the proposed method outperforms other multi-view compounding methods.
Veronika A. Zimmer, Alberto Gomez, Yohan Noh, Nicolas Toussaint, Bishesh Khanal, Robert Wright, Laura Peralta, Milou van Poppel, Emily Skelton, Jacqueline Matthew, Julia A. Schnabel
EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging Without External Trackers
Abstract
Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding overlapping 3D US acquisitions into a high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. However, such volume reconstruction requires information about the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans, the fetus can move independently of the mother, so external trackers such as electromagnetic or optical tracking systems cannot capture the motion between the probe and the moving fetus. We provide a novel methodology for image-based tracking and volume reconstruction by combining recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established through the use of a Residual 3D U-Net, and its output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes taken from a whole-body fetal phantom and from the heads of real fetuses. For fetal head segmentation, we also introduce a novel weak annotation approach to minimise the manual effort required for ground truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness.
Bishesh Khanal, Alberto Gomez, Nicolas Toussaint, Steven McDonagh, Veronika Zimmer, Emily Skelton, Jacqueline Matthew, Daniel Grzech, Robert Wright, Chandni Gupta, Benjamin Hou, Daniel Rueckert, Julia A. Schnabel, Bernhard Kainz
Better Feature Matching for Placental Panorama Construction
Abstract
Twin-to-twin transfusion syndrome is a potentially fatal placental vascular disease of twin pregnancies. The only definitive treatment is surgical cauterization of problematic vascular formations with a fetal endoscope. This surgery is made difficult by the poor visibility conditions of the intrauterine environment and the limited field of view of the endoscope. There have been efforts to address the limited field of view of fetal endoscopes with algorithms that use visual correspondences between successive fetoscopic video frames to stitch those frames together into a composite map of the placental surface. The existing work, however, has been evaluated primarily on ex vivo images of placentas, which tend to have more visual features and fewer visual distractors than the in vivo images that would be encountered in actual surgical procedures. This work shows that guiding feature matching with deep learned segmentations of placental vessels and grid-based motion statistics can make feature-based registration tractable even in in vivo images that have few distinctive visual features.
Praneeth Sadda, John A. Onofrey, Mert O. Bahtiyar, Xenophon Papademetris
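Grid-based motion statistics (GMS) filtering is available in OpenCV's contrib modules; a sketch of feature matching restricted to vessel masks and filtered with GMS (producing the vessel masks is the deep-learned part and is omitted here):

```python
import cv2

def vessel_guided_matches(img1, img2, vessels1=None, vessels2=None):
    """ORB matches confined to vessel-segmentation masks, then filtered
    by GMS. Requires opencv-contrib-python for cv2.xfeatures2d.matchGMS."""
    orb = cv2.ORB_create(nfeatures=5000, fastThreshold=0)
    kp1, des1 = orb.detectAndCompute(img1, vessels1)  # masks confine
    kp2, des2 = orb.detectAndCompute(img2, vessels2)  # keypoints to vessels
    raw = cv2.BFMatcher(cv2.NORM_HAMMING).match(des1, des2)
    good = cv2.xfeatures2d.matchGMS(
        (img1.shape[1], img1.shape[0]),   # (width, height)
        (img2.shape[1], img2.shape[0]),
        kp1, kp2, raw, withRotation=True)
    return kp1, kp2, good
```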
Combining Deep Learning and Multi-atlas Label Fusion for Automated Placenta Segmentation from 3DUS
Abstract
In recent years there has been growing interest in studying the placenta in vivo. However, 3D ultrasound (3DUS) images are typically very noisy, and the placenta's shape and position are highly variable. As such, placental segmentation efforts to date have focused on interactive methods that require considerable user input, or on automated methods with relatively low performance and various limitations. We propose a novel algorithm combining deep learning and multi-atlas joint label fusion (JLF) for automated segmentation of the placenta in 3DUS images. We extract 2D cross-sections of the ultrasound cone beam with a variety of orientations from the 3DUS images and train a convolutional neural network (CNN) on these slices. We use the CNN prediction to initialize the multi-atlas JLF algorithm. The posteriors obtained by the CNN and JLF models are combined to enhance the overall segmentation performance. The method is evaluated on a dataset of 47 patients in the first trimester. We perform 4-fold cross-validation and achieve a mean Dice coefficient of \(86.3 \pm 5.3\) for the test folds. This is a substantial increase in accuracy compared to existing automated methods and is comparable to the performance of semi-automated methods currently considered the bronze standard in placenta segmentation.
Baris U. Oguz, Jiancong Wang, Natalie Yushkevich, Alison Pouch, James Gee, Paul A. Yushkevich, Nadav Schwartz, Ipek Oguz
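The abstract does not state how the two posteriors are combined; a convex combination followed by thresholding is one minimal possibility:

```python
import numpy as np

def fuse_posteriors(p_cnn, p_jlf, alpha=0.5):
    """Voxelwise fusion of CNN and joint-label-fusion placenta
    posteriors; the fixed weight alpha is an illustrative assumption."""
    fused = alpha * p_cnn + (1.0 - alpha) * p_jlf
    return (fused >= 0.5).astype(np.uint8)   # final binary mask
```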
LSTM Spatial Co-transformer Networks for Registration of 3D Fetal US and MR Brain Images
Abstract
In this work, we propose a deep learning-based method for iterative registration of fetal brain images acquired by ultrasound and magnetic resonance, inspired by “Spatial Transformer Networks”. Images are co-aligned to a dual modality spatio-temporal atlas, where computational image analysis may be performed in the future. Our results show better alignment accuracy compared to “Self-Similarity Context descriptors”, a state-of-the-art method developed for multi-modal image registration. Furthermore, our method is robust and able to register highly misaligned images, with any initial orientation, where similarity-based methods typically fail.
Robert Wright, Bishesh Khanal, Alberto Gomez, Emily Skelton, Jacqueline Matthew, Jo V. Hajnal, Daniel Rueckert, Julia A. Schnabel
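The core spatial-transformer operation such a method builds on, differentiable resampling of a volume under a predicted transform, looks like this in PyTorch (affine case shown for illustration):

```python
import torch.nn.functional as F

def resample_affine(volume, theta):
    """volume: (N, C, D, H, W); theta: (N, 3, 4) affine matrices in
    normalised coordinates. Because grid_sample is differentiable,
    a network predicting theta can be trained end to end."""
    grid = F.affine_grid(theta, list(volume.shape), align_corners=False)
    return F.grid_sample(volume, grid, align_corners=False)
```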
Automatic and Efficient Standard Plane Recognition in Fetal Ultrasound Images via Multi-scale Dense Networks
Abstract
The determination and interpretation of fetal standard planes (FSPs) in ultrasound examinations is a precondition and essential step for prenatal ultrasonography diagnosis. However, identifying multiple standard planes in ultrasound videos is a time-consuming and tedious task, since there are only small differences between standard and non-standard planes in adjacent scan frames. To address this challenge, we propose a general and efficient framework to detect several standard planes automatically from ultrasound scan images or videos. Specifically, a multi-scale dense network (MSDNet) exploiting a multi-scale architecture and dense connections is employed, combining fine-level features from the shallow layers with coarse-level features from the deep layers. Moreover, MSDNet is resource-efficient: its cascade structure can adaptively select lightweight sub-networks when test images are uncomplicated or computational resources are limited. Experimental results on our self-collected dataset demonstrate that the proposed method achieves a mean average precision (mAP) of 98.15% with half the resources and double the speed in the FSP recognition task.
Peiyao Kong, Dong Ni, Siping Chen, Shengli Li, Tianfu Wang, Baiying Lei
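The resource-adaptive behaviour comes from MSDNet's early-exit classifiers: evaluation stops as soon as a cheap exit is confident. A sketch of that inference loop (the threshold and classifier interface are illustrative):

```python
import torch

def cascaded_predict(exit_classifiers, frame, threshold=0.9):
    """exit_classifiers: callables ordered cheap -> deep, each mapping a
    (1, C, H, W) frame to class logits. Easy frames exit early and so
    use only a fraction of the computation."""
    for clf in exit_classifiers:
        probs = torch.softmax(clf(frame), dim=1)
        conf, plane = probs.max(dim=1)
        if conf.item() >= threshold:
            return plane.item(), conf.item()
    return plane.item(), conf.item()   # deepest exit as fallback
```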
Paediatric Liver Segmentation for Low-Contrast CT Images
Abstract
CT images from combined PET-CT scanners are of low contrast, and automatic organ segmentation on these images is challenging. This paper proposes an adaptive kernel-based Statistical Region Merging (SRM) algorithm for paediatric liver segmentation in low-contrast PET-CT images. The results are compared to those from the original SRM. The average Dice index is 0.79 for SRM and 0.85 for the adaptive kernel-based SRM. In addition, the proposed method successfully segmented all 37 CT images, while SRM failed in 5 images.
Mariusz Bajger, Gobert Lee, Martin Caon
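The SRM merge test compares the difference of region means against a statistical bound that shrinks as regions grow; the adaptive-kernel variant proposed here effectively tunes this bound locally. A sketch of the classical predicate with a fixed granularity Q (the constants are illustrative):

```python
import numpy as np

def srm_merge(mean1, n1, mean2, n2, Q=32.0, g=256.0, delta=1e-6):
    """Merge two regions (given their mean intensities and pixel
    counts) when the means differ by less than the SRM bound."""
    def b2(n):
        return (g ** 2) * np.log(2.0 / delta) / (2.0 * Q * n)
    return abs(mean1 - mean2) <= np.sqrt(b2(n1) + b2(n2))
```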
Backmatter
Metadata
Title
Data Driven Treatment Response Assessment and Preterm, Perinatal, and Paediatric Image Analysis
Edited by
Andrew Melbourne
Roxane Licandro
Matthew DiFranco
Dr. Paolo Rota
Melanie Gau
Dr. Martin Kampel
Rosalind Aughwane
Pim Moeskops
Ernst Schwartz
Emma Robinson
Antonios Makropoulos
Copyright year
2018
Electronic ISBN
978-3-030-00807-9
Print ISBN
978-3-030-00806-2
DOI
https://doi.org/10.1007/978-3-030-00807-9
