
2018 | Book

Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation

International Workshops, POCUS 2018, BIVPCS 2018, CuRIOUS 2018, and CPM 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16–20, 2018, Proceedings

Edited by: Danail Stoyanov, Zeike Taylor, Stephen Aylward, João Manuel R.S. Tavares, Yiming Xiao, Amber Simpson, Anne Martel, Lena Maier-Hein, Shuo Li, Hassan Rivaz, Ingerid Reinertsen, Matthieu Chabanas, Keyvan Farahani

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed joint proceedings of the International Workshop on Point-of-Care Ultrasound, POCUS 2018, the International Workshop on Bio-Imaging and Visualization for Patient-Customized Simulations, BIVPCS 2018, the International Workshop on Correction of Brainshift with Intra-Operative Ultrasound, CuRIOUS 2018, and the International Workshop on Computational Precision Medicine, CPM 2018, held in conjunction with the 21st International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2018, in Granada, Spain, in September 2018.

The 10 full papers presented at POCUS 2018, the 4 full papers presented at BIVPCS 2018, the 8 full papers presented at CuRIOUS 2018, and the 2 full papers presented at CPM 2018 were carefully reviewed and selected. The papers feature research from complementary fields such as ultrasound imaging systems and applications, signal and image processing, mechanics, computational vision, mathematics, physics, informatics, computer graphics, biomedical practice, psychology, and industry. Topics discussed include intra-operative ultrasound-guided brain tumor resection and pancreatic cancer survival prediction.

Table of Contents

Frontmatter

International Workshop on Point-of-Care Ultrasound, POCUS 2018

Frontmatter
Robust Photoacoustic Beamforming Using Dense Convolutional Neural Networks
Abstract
Photoacoustic (PA) imaging is a promising technology for imaging endogenous tissue chromophores and exogenous contrast agents in a wide range of clinical applications. The imaging technique is based on excitation of a tissue sample using a short light pulse, followed by acquisition of the resultant acoustic signal using an ultrasound (US) transducer. To reconstruct an image of the tissue from the received US signals, the most common approach is to use the delay-and-sum (DAS) beamforming technique, which assumes wave propagation with a constant speed of sound. Unfortunately, such an assumption often leads to artifacts such as sidelobes and tissue aberration; in addition, the image resolution is degraded. With the aim of improving PA image reconstruction, in this work we propose a deep convolutional neural network-based beamforming approach that uses a set of densely connected convolutional layers with dilated convolution at the higher layers. To train the network, we use simulated images with various sizes and contrasts of target objects, and subsequently simulate the PA effect to obtain the raw US signals at a US transducer. We test the network on an independent set of 1,500 simulated images and achieve a mean peak signal-to-noise ratio of 38.7 dB between the estimated and reference images. In addition, a comparison of our approach with the DAS beamforming technique indicates a statistically significant improvement with the proposed technique.
Emran Mohammad Abu Anas, Haichong K. Zhang, Chloé Audigier, Emad M. Boctor
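For context on the baseline the authors compare against, delay-and-sum beamforming under a constant speed of sound can be sketched in a few lines; the array geometry, sampling rate, and variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def das_beamform(rf, elem_x, grid_x, grid_z, c=1540.0, fs=40e6):
    """Delay-and-sum reconstruction of photoacoustic channel data under a
    constant speed-of-sound assumption (the assumption the abstract names
    as a source of sidelobe and aberration artifacts).
    rf: (n_samples, n_elements) received signals; elem_x: element
    x-positions; grid_x, grid_z: image grid coordinates, all in meters."""
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # one-way delay: PA sources emit at t = 0, so delay = distance / c
            dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < rf.shape[0]
            img[iz, ix] = rf[idx[valid], np.flatnonzero(valid)].sum()
    return img
```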
A Training Tool for Ultrasound-Guided Central Line Insertion with Webcam-Based Position Tracking
Abstract
PURPOSE: This paper describes an open-source ultrasound-guided central line insertion training system. Modern clinical guidelines increasingly recommend ultrasound guidance for this procedure due to the decrease in morbidity it provides. However, there are no adequate low-cost systems for helping new clinicians train their inter-hand coordination for this demanding procedure. METHODS: This paper details a training platform that can be recreated with any standard ultrasound machine using inexpensive components. We describe the hardware, software, and calibration procedures with the intention that readers can recreate this system themselves. RESULTS: The reproducibility and accuracy of the ultrasound calibration for this system were examined. We found that across the ultrasound image the calibration error was less than 2 mm. In a small feasibility study, two participants each performed 5 needle insertions, with an average error slightly above 2 mm. CONCLUSION: We conclude that the accuracy of the system is sufficient for clinician training.
Mark Asselin, Tamas Ungi, Andras Lasso, Gabor Fichtinger
GLUENet: Ultrasound Elastography Using Convolutional Neural Network
Abstract
Displacement estimation is a critical step in ultrasound elastography, and failing to estimate displacement correctly can result in large errors in strain images. As conventional ultrasound elastography techniques suffer from decorrelation noise, they are prone to fail in estimating displacement between echo signals obtained during tissue deformations. This study proposes a novel elastography technique that addresses decorrelation in estimating the displacement field. We call our method GLUENet (GLobal Ultrasound Elastography Network); it uses a deep Convolutional Neural Network (CNN) to obtain a coarse but robust time-delay estimation between two ultrasound images. This displacement is then used to formulate a nonlinear cost function that incorporates the similarity of RF data intensity and prior information about the estimated displacement [3]. By optimizing this cost function, we calculate the finer displacement, exploiting the information of all samples of the RF data simultaneously. The coarse displacement estimate generated by the CNN is substantially more robust than the Dynamic Programming (DP) technique used in GLUE for finding the coarse displacement estimates. Our results validate that GLUENet outperforms GLUE in simulation, phantom, and in vivo experiments.
Md. Golam Kibria, Hassan Rivaz
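The nonlinear cost function referenced in the abstract can be illustrated in a simplified 1D form (the actual GLUE formulation in [3] is 2D and more elaborate): given the coarse CNN estimates a_i, the refinements Δd_i minimize a data term plus a displacement-continuity prior,

```latex
C(\Delta d) \;=\; \sum_{i=1}^{m}
\Big[\, I_1(i) - I_2\big(i + a_i + \Delta d_i\big) \Big]^2
\;+\; \alpha \big( a_i + \Delta d_i - a_{i-1} - \Delta d_{i-1} \big)^2
```

where I_1 and I_2 are the RF frames and α weights the prior on the estimated displacement.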
CUST: CNN for Ultrasound Thermal Image Reconstruction Using Sparse Time-of-Flight Information
Abstract
Thermotherapy is a clinical procedure that induces a desired biological tissue response through temperature changes. To perform the procedure precisely, temperature monitoring during the treatment is essential. Ultrasound propagation velocity in biological tissue changes as temperature increases. An external ultrasound element was integrated with a bipolar radiofrequency (RF) ablation probe to collect time-of-flight information carried by ultrasound waves traveling through the ablated tissues. Recovering temperature at the pixel level from the limited information acquired with this minimal setup is an ill-posed problem. Therefore, we propose a learning approach using a purpose-designed convolutional neural network. Training and testing were performed with temperature images generated by a computational bioheat model simulating an RF ablation. The reconstructed thermal images were compared with results from another sound-velocity reconstruction method. The proposed method showed better stability and accuracy for different ultrasound element locations. Ex vivo experiments were also performed on porcine liver to evaluate the proposed temperature reconstruction method.
Younsu Kim, Chloé Audigier, Emran M. A. Anas, Jens Ziegle, Michael Friebe, Emad M. Boctor
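The physical principle the method exploits, speed of sound varying with temperature, maps a temperature field to a measurable time of flight. A minimal sketch, assuming a linear speed/temperature model with illustrative coefficients rather than the paper's calibrated values:

```python
import numpy as np

def time_of_flight(temps_c, dx=0.5e-3):
    """Travel time of an ultrasound pulse along a straight ray through a
    sampled temperature profile (deg C), integrated in steps of dx meters.
    The baseline and slope are assumed for illustration; in soft tissue,
    sound speed rises roughly 1-2 m/s per degree C near body temperature."""
    c = 1540.0 + 1.3 * (np.asarray(temps_c, dtype=float) - 37.0)  # m/s
    return float(np.sum(dx / c))
```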
Quality Assessment of Fetal Head Ultrasound Images Based on Faster R-CNN
Abstract
Clinically, the transthalamic plane of the fetal head is manually examined by sonographers to identify whether it is a standard plane. This examination routine is subjective, time-consuming, and requires a comprehensive understanding of fetal anatomy. An automatic and effective computer-aided diagnosis method to determine the standard plane in ultrasound images is therefore highly desirable. This study presents a novel method for the quality assessment of the fetal head in ultrasound images based on Faster Region-based Convolutional Neural Networks (Faster R-CNN). Faster R-CNN learns and extracts features from the training data. During training, Fast R-CNN and the Region Proposal Network (RPN) share the same feature layer through joint training and alternate optimization. The RPN generates accurate region proposals, which are used as inputs for the Fast R-CNN module to perform target detection. The network then outputs the detected categories and scores. Finally, the quality of the transthalamic plane is determined via the scores and number of detected anatomical structures; these scores also indicate whether the plane is a standard plane. Experimental results demonstrate that our method can accurately locate five specific anatomical structures of the transthalamic plane with an average accuracy of 80.18%, with a running time of approximately 0.27 s per image.
Zehui Lin, Minh Hung Le, Dong Ni, Siping Chen, Shengli Li, Tianfu Wang, Baiying Lei
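One plausible reading of the final scoring step, sketched with hypothetical structure names and an all-structures-present rule; the paper's exact scoring scheme may differ:

```python
def transthalamic_quality(detections, min_score=0.5,
                          required=("thalami", "cavum_septi_pellucidi",
                                    "midline", "lateral_sulci", "skull_halo")):
    """Aggregate Faster R-CNN outputs into a plane-quality decision.
    `detections` is a list of {"label": str, "score": float} dicts.
    Structure names and the all-present rule are hypothetical; the paper
    scores the plane from its detected anatomical structures."""
    found = {d["label"] for d in detections if d["score"] >= min_score}
    quality = sum(d["score"] for d in detections if d["label"] in required)
    is_standard = all(r in found for r in required)
    return quality, is_standard
```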
Recent Advances in Point-of-Care Ultrasound Using the ImFusion Suite for Real-Time Image Analysis
Abstract
Medical ultrasound is rapidly advancing both through more powerful hardware and software; in combination these allow the modality to become an ever more indispensable point-of-care tool. In this paper, we summarize some recent developments on the image analysis side that are enabled through the proprietary ImFusion Suite software and corresponding software development kit (SDK). These include 3D reconstruction of arbitrary untracked 2D US clips, image filtering and classification, speed-of-sound calibration and live acquisition parameter tuning in a visual servoing fashion.
Oliver Zettinig, Mehrdad Salehi, Raphael Prevost, Wolfgang Wein
Markerless Inside-Out Tracking for 3D Ultrasound Compounding
Abstract
Tracking of rotation and translation of medical instruments plays a substantial role in many modern interventions and is essential for 3D ultrasound compounding. Traditional external optical tracking systems are often subject to line-of-sight issues, in particular when the region of interest is difficult to access. The introduction of inside-out tracking systems aims to overcome these issues. We propose a marker-less tracking system based on visual SLAM to enable tracking of ultrasound probes in an interventional scenario. To achieve this goal, we mount a miniature multi-modal (mono, stereo, active depth) vision system on the object of interest and relocalize its pose within an adaptive map of the operating room. We compare state-of-the-art algorithmic pipelines and apply the idea to transrectal 3D ultrasound (TRUS). Obtained volumes are compared to reconstruction using a commercial optical tracking system as well as a robotic manipulator. Feature-based binocular SLAM is identified as the most promising method and is tested extensively in challenging clinical environments and for the use case of prostate US biopsies.
Benjamin Busam, Patrick Ruhkamp, Salvatore Virga, Beatrice Lentes, Julia Rackerseder, Nassir Navab, Christoph Hennersperger
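Conceptually, 3D compounding hinges on chaining rigid transforms from image pixels to a world frame via the tracked pose. A minimal sketch; the calibration chain and names below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def pixel_to_world(T_world_cam, T_cam_probe, T_probe_img, u, v, sx, sy):
    """Map a 2D US pixel (u, v) into the world frame for compounding:
    pixel -> image plane (scales sx, sy in m/px) -> probe -> camera
    (the relocalized SLAM pose) -> world. All T_* are 4x4 homogeneous
    matrices; hand-eye and image calibrations are assumed known."""
    p_img = np.array([u * sx, v * sy, 0.0, 1.0])
    return (T_world_cam @ T_cam_probe @ T_probe_img @ p_img)[:3]
```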
Ultrasound-Based Detection of Lung Abnormalities Using Single Shot Detection Convolutional Neural Networks
Abstract
Ultrasound imaging can be used to identify a variety of lung pathologies, including pneumonia, pneumothorax, pleural effusion, and acute respiratory distress syndrome (ARDS). Ultrasound lung images of sufficient quality are relatively easy to acquire, but can be difficult to interpret as the relevant features are mostly non-structural and require expert interpretation. In this work, we developed a convolutional neural network (CNN) algorithm to identify five key lung features linked to pathological lung conditions: B-lines, merged B-lines, lack of lung sliding, consolidation and pleural effusion. The algorithm was trained using short ultrasound videos of in vivo swine models with carefully controlled lung conditions. Key lung features were annotated by expert radiologists and sonographers. Pneumothorax (absence of lung sliding) was detected with an Inception V3 CNN using simulated M-mode images. A single shot detection (SSD) framework was used to detect the remaining features. Our results indicate that deep learning algorithms can successfully detect lung abnormalities in ultrasound imagery. Computer-assisted ultrasound interpretation can place expert-level diagnostic accuracy in the hands of low-resource health care providers.
Sourabh Kulhare, Xinliang Zheng, Courosh Mehanian, Cynthia Gregory, Meihua Zhu, Kenton Gregory, Hua Xie, James McAndrew Jones, Benjamin Wilson
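The simulated M-mode images used for pneumothorax detection can be understood as stacking one scanline over time, so lung sliding appears as temporal texture variation and its absence as static horizontal lines. A minimal sketch; the column choice and any preprocessing are assumptions:

```python
import numpy as np

def simulated_mmode(frames, column):
    """Build a simulated M-mode image from a B-mode clip by extracting
    the same image column from every frame and stacking the columns
    side by side. `frames` is a sequence of 2D grayscale arrays;
    the result has shape (depth, time)."""
    return np.stack([f[:, column] for f in frames], axis=1)
```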
Quantitative Echocardiography: Real-Time Quality Estimation and View Classification Implemented on a Mobile Android Device
Abstract
Accurate diagnosis in cardiac ultrasound requires high quality images, containing different specific features and structures depending on which of the 14 standard cardiac views the operator is attempting to acquire. Inexperienced operators can have a great deal of difficulty recognizing these features and thus can fail to capture diagnostically relevant heart cines. This project aims to mitigate this challenge by providing operators with real-time feedback in the form of view classification and quality estimation. Our system uses a frame grabber to capture the raw video output of the ultrasound machine, which is then fed into an Android mobile device, running a customized mobile implementation of the TensorFlow inference engine. By multi-threading four TensorFlow instances together, we are able to run the system at 30 Hz with a latency of under 0.4 s.
Nathan Van Woudenberg, Zhibin Liao, Amir H. Abdi, Hani Girgis, Christina Luong, Hooman Vaseli, Delaram Behnami, Haotian Zhang, Kenneth Gin, Robert Rohling, Teresa Tsang, Purang Abolmaesumi
Single-Element Needle-Based Ultrasound Imaging of the Spine: An In Vivo Feasibility Study
Abstract
Spinal interventional procedures, such as lumbar puncture, require insertion of an epidural needle through the spine without touching the surrounding bone structures. To minimize the number of insertion attempts and navigate to a desired target, an image-guidance technique is necessary. We developed a single-element needle-based ultrasound system composed of a needle-shaped ultrasound transducer that reconstructs B-mode images from lateral movement with synthetic aperture focusing. The objective of this study is to test the feasibility of needle-based single-element ultrasound imaging of the spine in vivo. Experimental validation was performed on a metal wire phantom, ex vivo porcine bone in both a water tank and porcine tissue, and the spine of a living swine model. The needle-based ultrasound system could visualize the structures, although reverberation and multiple reflections associated with the needle shaft were observed. These results show the potential of the system for use in an in vivo environment.
Haichong K. Zhang, Younsu Kim, Abhay Moghekar, Nicholas J. Durr, Emad M. Boctor

International Workshop on Bio-Imaging and Visualization for Patient-Customized Simulations, BIVPCS 2018

Frontmatter
A Novel Interventional Guidance Framework for Transseptal Puncture in Left Atrial Interventions
Abstract
Access to the left atrium is required for several percutaneous cardiac interventions. In these procedures, the inter-atrial septal wall is punctured using a catheter inserted in the right atrium under image guidance. Although this approach (transseptal puncture, TSP) is performed daily, complications are common. In this work, we present a novel concept for the development of an interventional guidance framework for TSP. The pre-procedural planning stage is fused with 3D intra-procedural images (echocardiography) using manually defined landmarks, transferring the relevant anatomical landmarks to the interventional space and enhancing the echocardiographic images. In addition, electromagnetic sensors are attached to the surgical instruments, tracking them and including them in the enhanced intra-procedural world. Two atrial phantom models were used to evaluate this framework. To assess its accuracy, a metallic landmark was positioned at the punctured location and compared with the ideal one. The intervention was possible in both models, but in one case positioning of the landmark failed. An error of approximately 6 mm was registered for the successful case. The technical characteristics of the framework showed acceptable performance (frame rate of ~5 frames/s). This study presented a proof-of-concept for an interventional guidance framework for TSP. However, a more automated solution and further studies are required.
Pedro Morais, João L. Vilaça, Sandro Queirós, Pedro L. Rodrigues, João Manuel R. S. Tavares, Jan D’hooge
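Fusing planning data into the interventional space from manually defined landmarks typically reduces to a least-squares rigid fit. A standard SVD-based (Arun/Umeyama-style) sketch, not necessarily the authors' exact solver:

```python
import numpy as np

def rigid_landmark_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping landmark set P onto Q,
    both of shape (N, 3), via the SVD of the cross-covariance matrix.
    The determinant check guards against a reflection solution."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```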
Holographic Visualisation and Interaction of Fused CT, PET and MRI Volumetric Medical Imaging Data Using Dedicated Remote GPGPU Ray Casting
Abstract
Medical experts commonly use imaging, including Computed Tomography (CT), Positron-Emission Tomography (PET), and Magnetic Resonance Imaging (MRI), for diagnosis or to plan surgery. These scans give a highly detailed representation of the patient's anatomy, but the usual separate three-dimensional (3D) visualisations on screens do not provide a convenient and performant understanding of the real anatomical complexity. This paper presents a computer architecture allowing medical staff to visualise and interact in real time with holographic fused CT, PET, and MRI data of patients. A dedicated workstation with a wireless connection enables real-time General-Purpose Processing on Graphics Processing Units (GPGPU) ray-casting computation through the mixed reality (MR) headset. The hologram can be manipulated with hand gestures and voice commands, with instantaneous visualisation and manipulation of 3D scans at a frame rate of 30 fps and a delay below 120 ms. This performance gives a seamless interactive experience for the user [10].
Magali Fröhlich, Christophe Bolinhas, Adrien Depeursinge, Antoine Widmer, Nicolas Chevrey, Patric Hagmann, Christian Simon, Vivianne B. C. Kokje, Stéphane Gobron
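The core per-pixel loop of a GPGPU volume ray caster is front-to-back compositing along a ray. A CPU sketch with an assumed linear transfer function; the paper's renderer, transfer functions, and sampling are more elaborate:

```python
import numpy as np

def ray_cast(vol, origin, direction, step=0.5, n_steps=512):
    """Front-to-back emission/absorption compositing along one ray.
    `vol` is a 3D scalar volume; origin/direction are in voxel units.
    Opacity mapping (0.05 * sample) is an illustrative transfer function."""
    vmax = float(vol.max())
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    color = alpha = 0.0
    for _ in range(n_steps):
        i, j, k = np.floor(p).astype(int)
        if (0 <= i < vol.shape[0] and 0 <= j < vol.shape[1]
                and 0 <= k < vol.shape[2]):
            s = vol[i, j, k] / vmax            # normalized sample
            a = 0.05 * s                        # assumed opacity transfer
            color += (1.0 - alpha) * a * s      # accumulate emission
            alpha += (1.0 - alpha) * a          # accumulate opacity
            if alpha > 0.99:                    # early ray termination
                break
        p += step * d
    return color
```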
Mr. Silva and Patient Zero: A Medical Social Network and Data Visualization Information System
Abstract
Detection of Patient Zero is a growing concern in a world where fast international transport makes pandemics a public health issue and a social fear, in cases such as Ebola or H5N1. The development of a medical social network and data visualization information system, which would work as an interface between patient medical data and geographical and/or social connections, could be an interesting solution, as it would allow quick evaluation not only of individuals at risk but also of the prospective geographical areas for imminent contagion. In this work we propose an ideal model and contrast it with the status quo of present medical social networks, within the context of medical data visualization. From recent publications, it is clear that our model converges with the identified aspects of prospective medical networks, though data protection is a key concern and any implementation would have to seriously consider it.
Patrícia C. T. Gonçalves, Ana S. Moura, M. Natália D. S. Cordeiro, Pedro Campos
Fully Convolutional Network-Based Eyeball Segmentation from Sparse Annotation for Eye Surgery Simulation Model
Abstract
This paper presents a fully convolutional network-based segmentation method to create eyeball model data for patient-specific ophthalmologic surgery simulation. In order to create an elaborate eyeball model for each patient, we need to accurately segment eye structures with different sizes and complex shapes from high-resolution images. We therefore aim to construct a fully convolutional network that enables accurate segmentation of anatomical structures in an eyeball from training on sparsely annotated images, providing a user with all annotated slices after he or she annotates only a few slices in each image volume. In this study, we utilize a fully convolutional network with full-resolution residual units that effectively learns multi-scale image features for segmentation of eye macro- and microstructures by acting as a bridge between the two processing streams (residual and pooling streams). In addition, a weighted loss function and data augmentation are utilized in network training to accurately perform semantic segmentation from only sparsely annotated axial images. From the results of segmentation experiments using micro-CT images of pig eyeballs, we found that the proposed network provided better segmentation performance than conventional networks and achieved a mean Dice similarity coefficient score of 91.5% for segmentation of eye structures, even from a small amount of training data.
Takaaki Sugino, Holger R. Roth, Masahiro Oda, Kensaku Mori
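The reported Dice similarity coefficient is twice the overlap divided by the total volume of the two masks. A minimal sketch for binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks; 1.0 means
    perfect overlap. eps guards against division by zero for empty masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```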

International Workshop on Correction of Brainshift with Intra-Operative Ultrasound, CuRIOUS 2018

Frontmatter
Resolve Intraoperative Brain Shift as Imitation Game
Abstract
Soft tissue deformation induced by craniotomy and tissue manipulation (brain shift) limits the use of preoperative image overlay in image-guided neurosurgery and consequently reduces the accuracy of the surgery. An inexpensive modality to compensate for brain shift in real time is ultrasound (US). The core subject of research in this context is the non-rigid registration of preoperative MR and intraoperative US images. In this work, we propose a learning-based approach to address this challenge. Resolving intraoperative brain shift is treated as an imitation game, where the optimal action (displacement) for each landmark on MR is trained with a multi-task network. The results show a mean target error of 1.21 ± 0.55 mm.
Xia Zhong, Siming Bayer, Nishant Ravikumar, Norbert Strobel, Annette Birkhold, Markus Kowarschik, Rebecca Fahrig, Andreas Maier
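The reported mean target error is the average Euclidean distance between the displaced MR landmarks and their intraoperative US counterparts. A minimal sketch of that evaluation:

```python
import numpy as np

def mean_target_error(pred_points, true_points):
    """Mean and standard deviation of per-landmark Euclidean distances
    between predicted landmark positions and their ground-truth targets.
    Inputs are (N, 3) arrays of corresponding points in millimeters."""
    d = np.linalg.norm(np.asarray(pred_points) - np.asarray(true_points),
                       axis=1)
    return d.mean(), d.std()
```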
Non-linear Approach for MRI to Intra-operative US Registration Using Structural Skeleton
Abstract
Gliomas are primary brain tumors of the central nervous system. Appropriate resection of gliomas at an early tumor stage is known to increase the survival rate. However, accurate resection of the tumor is challenging because soft tissue shift may occur during the operation. To provide proper guidance for neurosurgery, it is necessary to align magnetic resonance imaging (MRI) and intra-operative ultrasound (iUS). In previous studies, many algorithms tried to find fiducial points that could lead to appropriate registration, but these methods required manual specification by experts to ensure the reliability of the fiducials. In this study, we propose a data-driven approach for MRI-iUS non-linear registration using structural skeletons. The visualization of our results indicates that our approach may provide better registration performance.
Jisu Hong, Hyunjin Park
Brain-Shift Correction with Image-Based Registration and Landmark Accuracy Evaluation
Abstract
We describe an algorithm and its implementation details for automatic image-based registration of intra-operative ultrasound to MRI for brain-shift correction during neurosurgery. It is evaluated retrospectively on a public database of 22 surgeries, with a particular focus on choosing the appropriate transformation model and designing the most meaningful evaluation strategy. The method succeeds in a fully automatic fashion in all cases, with an average landmark registration error of 1.75 mm for the rigid model.
Wolfgang Wein
Deformable MRI-Ultrasound Registration Using 3D Convolutional Neural Network
Abstract
Precise tracking of intra-operative tissue shift is important for accurate resection of brain tumors. Alignment of pre-interventional magnetic resonance imaging (MRI) to intra-operative ultrasound (iUS) is required to assess tissue shift and enable guided surgery. However, the accurate and robust image registration needed to relate pre-interventional MRI to iUS images is difficult due to the very different nature of image intensity between the modalities. Here we present a framework that performs non-rigid MRI-ultrasound registration using a 3D convolutional neural network (CNN). The framework is composed of three components: a feature extractor, a deformation field generator, and a spatial sampler. Our automatic registration framework adopts an unsupervised learning approach, allowing accurate end-to-end deformable MRI-ultrasound registration. The proposed method avoids the pitfalls of intensity-based methods by considering both image intensity and gradient. It achieves competitive registration accuracy on the RESECT dataset. In addition, our method takes only about one second to register each image pair, enabling applications such as real-time registration.
Li Sun, Songtao Zhang
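In such unsupervised frameworks, the spatial sampler warps the moving volume with the predicted displacement field so a similarity loss can compare the result to the fixed image. A minimal NumPy/SciPy sketch with trilinear sampling; the field layout (3, D, H, W) is an assumption:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(moving, field):
    """Resample the moving volume at identity-plus-displacement coordinates.
    `moving` has shape (D, H, W); `field` has shape (3, D, H, W) giving the
    per-voxel displacement along each axis. order=1 gives trilinear sampling."""
    grid = np.indices(moving.shape).astype(float)
    return map_coordinates(moving, grid + field, order=1)
```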
Intra-operative Ultrasound to MRI Fusion with a Public Multimodal Discrete Registration Tool
Abstract
We present accurate results for multi-modal fusion of intra-operative 3D ultrasound and magnetic resonance imaging (MRI) using the publicly available and robust discrete registration approach deeds. After pre-processing the scans to have isotropic voxel sizes of 0.5 mm and a common coordinate system, we run both linear and deformable registration using the self-similarity context metric. We use default parameters that have previously been applied for multi-atlas fusion demonstrating the generalisation of the approach. Transformed landmark locations are obtained by either directly applying the nonlinear warp or fitting a rigid transform with six parameters. The two approaches yield average target registration errors of 1.88 mm and 1.67 mm respectively on the 22 training scans of the CuRIOUS challenge. Optimising the regularisation weight can further improve this to 1.62 mm (within 0.5 mm of the theoretical lower bound). Our findings demonstrate that in contrast to classification and segmentation tasks, multimodal registration can be appropriately handled without designing domain-specific algorithms and without any expert supervision.
Mattias P. Heinrich
Deformable MRI-Ultrasound Registration via Attribute Matching and Mutual-Saliency Weighting for Image-Guided Neurosurgery
Abstract
Intraoperative brain deformation reduces the effectiveness of using preoperative images for intraoperative surgical guidance. We propose an algorithm for deformable registration of intraoperative ultrasound (US) and preoperative magnetic resonance (MR) images in the context of brain tumor resection. From each image voxel, a set of multi-scale and multi-orientation Gabor attributes is extracted, from which optimal components are selected to establish a distinctive morphological signature of the anatomical and geometric context of its surroundings. To match the attributes across image pairs, we assign higher weights (higher mutual-saliency values) to those voxels more likely to establish reliable correspondences across images. The correlation coefficient is used as the similarity measure to evaluate the effectiveness of the algorithm for multi-modal registration. Free-form deformation and discrete optimization are chosen as the deformation model and optimization strategy, respectively. We demonstrate our methodology by registering preoperative T2-FLAIR MR to intraoperative US in 22 clinical cases. Using manually labelled corresponding landmarks between preoperative MR and intraoperative US images, we show that the mean target registration error decreases from an initial value of 5.37 ± 4.27 mm to 3.35 ± 1.19 mm after registration.
Inês Machado, Matthew Toews, Jie Luo, Prashin Unadkat, Walid Essayed, Elizabeth George, Pedro Teodoro, Herculano Carvalho, Jorge Martins, Polina Golland, Steve Pieper, Sarah Frisken, Alexandra Golby, William Wells III, Yangming Ou
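One standard way to fold mutual-saliency weights into a correlation-based similarity is a weighted Pearson coefficient. A sketch of that form; the paper's exact weighting may differ:

```python
import numpy as np

def weighted_correlation(a, b, w):
    """Correlation coefficient between attribute vectors a and b with
    per-voxel weights w, so voxels with high mutual-saliency values
    dominate the similarity measure."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    w = np.asarray(w, float)
    w = w / w.sum()                       # normalize weights
    ma, mb = (w * a).sum(), (w * b).sum() # weighted means
    cov = (w * (a - ma) * (b - mb)).sum()
    return cov / np.sqrt((w * (a - ma) ** 2).sum()
                         * (w * (b - mb) ** 2).sum())
```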
Registration of MRI and iUS Data to Compensate Brain Shift Using a Symmetric Block-Matching Based Approach
Abstract
This paper describes the application of an established block-matching based registration approach to the CuRIOUS 2018 MICCAI registration challenge. Different variations of this method are compared to demonstrate possible results of a fully automatic and general approach. The results can be used as a reference, for example when evaluating the performance of methods that are specifically developed for ultrasound to MRI registration.
David Drobny, Tom Vercauteren, Sébastien Ourselin, Marc Modat
Intra-operative Brain Shift Correction with Weighted Locally Linear Correlations of 3DUS and MRI
Abstract
During brain tumor resection procedures, 3D ultrasound (US) can be used to assess brain shift, as intra-operative MRI is challenging due to immobilization issues and may require sedation. Brain shift can cause uncertainty in the localization of resected tumor margins and deviations from the registered pre-operative MRI surgical plan. Hence, 3D US can be used to compensate for the deformation. The objective of this study is to propose an approach to automatically register the patient's MRI to intra-operative 3D US using a deformable registration approach based on a weighted adaptation of the locally linear correlation metric for US-MRI fusion, adapted to both hyper-echoic and hypo-echoic regions within the cortex. Evaluation was performed on a cohort of 23 patients for whom 3D US and MRI were acquired on the same day. The proposed approach demonstrates a statistically significant improvement in the localization of internal landmarks identified by expert radiologists, with a mean target registration error (mTRE) of 4.6 ± 3.4 mm, compared to an initial mTRE of 5.3 ± 4.2 mm, demonstrating the clinical benefit of this tool for correcting brain shift using 3D ultrasound.
Roozbeh Shams, Marc-Antoine Boucher, Samuel Kadoury

International Workshop on Computational Precision Medicine, CPM 2018

Frontmatter
Survival Modeling of Pancreatic Cancer with Radiology Using Convolutional Neural Networks
Abstract
No reliable biomarkers for the early detection of pancreatic cancer are known to date, but morphological signatures from non-invasive imaging might be able to close this gap. In this paper, we present a convolutional neural network-based survival model trained directly on computed tomography (CT) images. 159 CT images with associated survival data and 3D segmentations of organ and tumor were provided by the Pancreatic Cancer Survival Prediction MICCAI grand challenge. A simple yet novel approach was used to convert CT slices into RGB-channel images in order to exploit pre-training of the model's convolutional layers. The proposed model achieves a concordance index of 0.85, indicating a relationship between high-level features in CT imaging and disease progression. The ultimate hope is that these promising results translate into more personalized treatment decisions and better cancer care for patients.
Hassan Muhammad, Ida Häggström, David S. Klimstra, Thomas J. Fuchs
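The slice-to-RGB conversion can be realized by mapping three Hounsfield-unit windows to the three channels, so ImageNet-pretrained convolutional layers can be reused on CT data. A sketch with illustrative window choices, not the paper's:

```python
import numpy as np

def ct_to_rgb(slice_hu, windows=((-135, 215), (-1000, 400), (40, 400))):
    """Map one CT slice (Hounsfield units) to a 3-channel float image by
    clipping and rescaling each of three intensity windows into [0, 1],
    one window per channel. The window bounds are assumptions."""
    chans = []
    for lo, hi in windows:
        c = np.clip(slice_hu, lo, hi).astype(float)
        chans.append((c - lo) / (hi - lo))
    return np.stack(chans, axis=-1)
```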
Pancreatic Cancer Survival Prediction Using CT Scans and Clinical Variables
Abstract
Pancreatic cancer resulted in 411,600 deaths globally in 2015. Pancreatic ductal adenocarcinoma (PDAC) is the most common type of pancreatic cancer and is highly lethal. Survival for patients with PDAC is dismal due to its aggressive nature; thus, the development of a novel and reliable prognostic model for early detection and therapy is much desired. Here we propose a prognostic framework for predicting the overall survival of PDAC patients based on predictors derived from pancreas CT scans and patient clinical variables. Our framework comprises three parts: feature extraction, feature selection, and survival prediction. First, 2436 radiomics features were extracted from the CT scans and combined with the clinical variables, and a Cox model was fitted to each covariate individually to select the most predictive features. The optimal cut-off was determined by cross-validation. Finally, gradient boosting with a component-wise Cox proportional hazards model was used to predict the overall survival of patients. Our framework achieves excellent performance on the MICCAI 2018 Pancreatic Cancer Survival Prediction Challenge dataset, achieving a mean concordance index of 0.7016 using five-fold cross-validation.
Li Sun, Songtao Zhang
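The per-covariate Cox screening step can be sketched with the lifelines library; the column names, the ranking by concordance, and the cut at k features are assumptions for illustration, not the authors' exact pipeline:

```python
import pandas as pd
from lifelines import CoxPHFitter

def univariate_cox_select(df, features, k=50):
    """Fit a Cox proportional hazards model to each covariate alone and
    keep the k features with the highest concordance index. `df` must
    contain assumed columns "time" (duration) and "event" (indicator)."""
    scores = {}
    for f in features:
        cph = CoxPHFitter()
        cph.fit(df[[f, "time", "event"]],
                duration_col="time", event_col="event")
        scores[f] = cph.concordance_index_
    return sorted(scores, key=scores.get, reverse=True)[:k]
```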
Backmatter
Metadata
Title
Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation
Edited by
Danail Stoyanov
Zeike Taylor
Stephen Aylward
João Manuel R.S. Tavares
Yiming Xiao
Amber Simpson
Anne Martel
Lena Maier-Hein
Shuo Li
Hassan Rivaz
Ingerid Reinertsen
Matthieu Chabanas
Keyvan Farahani
Copyright Year
2018
Electronic ISBN
978-3-030-01045-4
Print ISBN
978-3-030-01044-7
DOI
https://doi.org/10.1007/978-3-030-01045-4