
2020 | Book

Statistical Atlases and Computational Models of the Heart. Multi-Sequence CMR Segmentation, CRT-EPiggy and LV Full Quantification Challenges

10th International Workshop, STACOM 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 13, 2019, Revised Selected Papers

Edited by: Mihaela Pop, Maxime Sermesant, Dr. Oscar Camara, Prof. Dr. Xiahai Zhuang, Shuo Li, Alistair Young, Tommaso Mansi, Avan Suinesiaputra

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-workshop proceedings of the 10th International Workshop on Statistical Atlases and Computational Models of the Heart: Multi-Sequence CMR Segmentation, CRT-EPiggy and LV Full Quantification Challenges, STACOM 2019, held in conjunction with MICCAI 2019 in Shenzhen, China, in October 2019.

The 42 revised full workshop papers were carefully reviewed and selected from 76 submissions. The topics of the workshop included: cardiac imaging and image processing, machine learning applied to cardiac imaging and image analysis, atlas construction, statistical modelling of cardiac function across different patient populations, cardiac computational physiology, model customization, atlas based functional analysis, ontological schemata for data and results, integrated functional and structural analyses, as well as the pre-clinical and clinical applicability of these methods.

Table of Contents

Frontmatter

Regular Papers

Frontmatter
Co-registered Cardiac ex vivo DT Images and Histological Images for Fibrosis Quantification

Cardiac magnetic resonance (MR) imaging can detect infarct scar, a major cause of lethal arrhythmia and heart failure. Here, we describe a robust image processing pipeline developed to quantitatively analyze collagen density and features in a pig model of chronic fibrosis. Specifically, we use ex vivo diffusion tensor imaging (DTI) (0.6 × 0.6 × 1.2 mm resolution) to calculate fractional anisotropy (FA) maps in healthy tissue, infarct core (IC) and gray zone (GZ) (i.e., a mixture of viable myocytes and collagen fibrils bordering the IC and healthy zones). The three zones were validated using collagen-sensitive histological slides co-registered with the MR images. Our results showed a significant (p < 0.05) reduction in the mean FA values of the GZ (by 17%) and IC (by 44%) compared to healthy areas; however, these differences did not depend on the location of the occluded coronary artery (LAD vs. LCX). This work demonstrates the utility of DTI for fibrosis quantification, supported by histological validation.

Peter Lin, Anne Martel, Susan Camilleri, Mihaela Pop
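The fractional anisotropy maps mentioned in the abstract above are computed per voxel from the eigenvalues of the diffusion tensor. This is a minimal sketch of the standard FA formula only, not the authors' pipeline:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Standard FA from the three diffusion-tensor eigenvalues.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, in [0, 1]:
    0 for isotropic diffusion, approaching 1 for highly anisotropic fibres.
    """
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    if den == 0.0:
        return 0.0  # degenerate tensor: treat as isotropic
    return math.sqrt(1.5) * num / den
```

Lower FA in the gray zone and infarct core, as reported above, reflects collagen replacing organised myofibres.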
Manufacturing of Ultrasound- and MRI-Compatible Aortic Valves Using 3D Printing for Analysis and Simulation

Valve-related heart disease affects 27 million patients worldwide and is associated with inflammation, fibrosis and calcification, which progressively lead to changes in organ structure. Aortic stenosis is the most common valve pathology, with controversies regarding its optimal management, such as the timing of valve replacement. Therefore, there is emerging demand for the analysis and simulation of valves to help researchers and companies test novel approaches. This paper describes how to build ultrasound- and MRI-compatible compliant aortic valve phantoms with a two-part mold technique using 3D printing. The choice of the molding material, PVA, was based on its material properties and experimentally tested dissolving time. Different diseased valves were then manufactured with Ecoflex silicone, a commonly used tissue-mimicking material. The valves were mounted with an external support and tested under physiological flow conditions. Flow images were obtained with both ultrasound and MRI, showing physiologically plausible anatomy and function of the valves. The simplicity of the manufacturing process and the low cost of materials should enable easy adoption of the proposed methodology. Future research will focus on extending the method to cover a larger anatomical area (e.g. the aortic arch) and on using this phantom to validate the non-invasive assessment of blood pressure differences.

Shu Wang, Harminder Gill, Weifeng Wan, Helen Tricker, Joao Filipe Fernandes, Yohan Noh, Sergio Uribe, Jesus Urbina, Julio Sotelo, Ronak Rajani, Pablo Lamata, Kawal Rhode
Assessing the Impact of Blood Pressure on Cardiac Function Using Interpretable Biomarkers and Variational Autoencoders

Maintaining good cardiac function for as long as possible is a major concern for healthcare systems worldwide and there is much interest in learning more about the impact of different risk factors on cardiac health. The aim of this study is to analyze the impact of systolic blood pressure (SBP) on cardiac function while preserving the interpretability of the model using known clinical biomarkers in a large cohort of the UK Biobank population. We propose a novel framework that combines deep learning based estimation of interpretable clinical biomarkers from cardiac cine MR data with a variational autoencoder (VAE). The VAE architecture integrates a regression loss in the latent space, which enables the progression of cardiac health with SBP to be learnt. Results on 3,600 subjects from the UK Biobank show that the proposed model allows us to gain important insight into the deterioration of cardiac function with increasing SBP, identify key interpretable factors involved in this process, and lastly exploit the model to understand patterns of positive and adverse adaptation of cardiac function.

Esther Puyol-Antón, Bram Ruijsink, James R. Clough, Ilkay Oksuz, Daniel Rueckert, Reza Razavi, Andrew P. King
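The key idea above, a regression loss in the VAE latent space, can be sketched as a combined training objective. The linear latent regressor, the weighting `lam`, and squared-error reconstruction are illustrative assumptions, not the paper's architecture:

```python
import math

def vae_regression_loss(x, x_recon, mu, log_var, z, w, b, sbp, lam=1.0):
    """Toy VAE objective with a latent-space regression term.

    recon : mean squared error between input and reconstruction
    kl    : KL divergence of N(mu, sigma^2) from the N(0, I) prior
    reg   : squared error of a linear predictor w.z + b against SBP
    """
    recon = sum((a - r) ** 2 for a, r in zip(x, x_recon)) / len(x)
    kl = 0.5 * sum(m ** 2 + math.exp(lv) - lv - 1.0 for m, lv in zip(mu, log_var))
    pred = sum(wi * zi for wi, zi in zip(w, z)) + b
    reg = (pred - sbp) ** 2
    return recon + kl + lam * reg
```

Minimising `reg` alongside the usual VAE terms is what forces the latent space to organise along the progression of cardiac function with SBP.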
Ultra-DenseNet for Low-Dose X-Ray Image Denoising in Cardiac Catheter-Based Procedures

The continuous development and prolonged use of X-ray fluoroscopic imaging in cardiac catheter-based procedures are associated with increasing radiation doses to both patients and clinicians. Reducing the radiation dose leads to increased image noise and artifacts, which may reduce discernible image information. Therefore, advanced denoising methods for low-dose X-ray images are needed to improve safety and reliability. Previous X-ray image denoising methods mainly rely on domain filtration and iterative reconstruction algorithms, and some residual artifacts still appear in the denoised X-ray images. Inspired by recent achievements of convolutional neural networks (CNNs) in feature representation for medical image analysis, this paper introduces an ultra-dense denoising network (UDDN) within the CNN framework for X-ray image denoising in cardiac catheter-based procedures. After patch-based iterative training, the proposed UDDN achieves competitive performance in both simulated and clinical cases, yielding higher peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR) than previous CNN architectures.

Yimin Luo, Daniel Toth, Kui Jiang, Kuberan Pushparajah, Kawal Rhode
A Cascade Regression Model for Anatomical Landmark Detection

Automatic anatomical landmark detection is beneficial to many other medical image analysis tasks. In this paper, we propose a two-stage cascade regression model for coarse-to-fine landmark detection. Specifically, in the first stage, a Gaussian heatmap regression model customized from U-Net makes a primary prediction, taking the downsampled entire image as input. In the second stage, we develop a CNN to regress displacements from the primary prediction to the landmarks, using patches at the original resolution centered at the previous localization as input. Owing to the different sizes and resolutions of the inputs in the two stages, our algorithm integrates global context information and local appearance. The spatial relationships among landmarks are also exploited by predicting all landmarks simultaneously. In an evaluation on coronary and aortic CTA images, we show that our proposed method is widely applicable and delivers state-of-the-art performance even with limited training data.

Zimeng Tan, Yongjie Duan, Ziyi Wu, Jianjiang Feng, Jie Zhou
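The first-stage heatmap regression target described above is typically a Gaussian centred at each landmark, with the coarse prediction read off at the heatmap maximum. A minimal sketch (sigma is an assumed hyperparameter, not taken from the paper):

```python
import math

def gaussian_heatmap(height, width, cx, cy, sigma=2.0):
    """2D Gaussian regression target centred at landmark (cx, cy), peak 1."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
             for x in range(width)] for y in range(height)]

def argmax2d(hmap):
    """Coarse landmark estimate: (x, y) location of the heatmap maximum."""
    best = max((v, x, y) for y, row in enumerate(hmap) for x, v in enumerate(row))
    return best[1], best[2]
```

The second stage would then crop a full-resolution patch around `argmax2d` and regress the residual displacement.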
Comparison of 2D Echocardiography and Cardiac Cine MRI in the Assessment of Regional Left Ventricular Wall Thickness

The generation of kinematic models of the heart using 3D echocardiography (echo) can be difficult due to poor image contrast and signal dropout, particularly at the epicardial surface. 2D echo images generally have a better contrast-to-noise ratio compared to 3D echo images, thus wall thickness (WT) estimates from 2D echo may provide a reliable means to constrain model fits to 3D echo images. WT estimates were calculated by solving a pair of differential equations guided by a vector field, which is constructed from the solution of Laplace’s equation on binary segmentations of the left ventricular myocardium. We compared 2D echo derived WT estimates against values calculated using gold-standard cardiac cine magnetic resonance imaging (MRI) to assess reliability. We found that 2D echo WT estimates were higher compared to WT values from MRI at end-diastole with a mean difference of 1.3 mm (95% CI: 0.74–1.8 mm), 1.5 mm (95% CI: 0.91–2.1 mm) and 2.1 mm (95% CI: 1.6–2.6 mm) for basal, mid-ventricular and apical segments respectively. At end-systole, the WT estimates from MRI were higher compared to those derived from 2D echo with a mean difference of 2.6 mm (95% CI: 2.0–3.1 mm), 2.1 mm (95% CI: 1.5–2.7 mm) and 1.1 mm (95% CI: 0.49–1.7 mm) for basal, mid-ventricular and apical segments, respectively. The quantitative WT comparison in this study will contribute to the ongoing efforts to better translate kinematic modelling analyses from gold-standard cardiac MRI to the more widely accessible echocardiography.

Vera H. J. van Hal, Debbie Zhao, Kathleen Gilbert, Thiranja P. Babarenda Gamage, Charlene Mauger, Robert N. Doughty, Malcolm E. Legget, Jichao Zhao, Aaqel Nalar, Oscar Camara, Alistair A. Young, Vicky Y. Wang, Martyn P. Nash
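The Laplace-based thickness estimation referenced above solves Laplace's equation between the endo- and epicardial boundaries and follows the resulting field. This is a toy Jacobi-iteration sketch on a rectangular strip standing in for a segmented myocardium mask, not the authors' differential-equation solver:

```python
def solve_laplace_strip(rows, cols, iters=500):
    """Jacobi iterations for Laplace's equation on a rectangular strip.

    Row 0 is held at potential 0 (endocardial boundary) and the last row
    at 1 (epicardial boundary); left/right edges use reflective
    neighbours. The converged field varies smoothly across the wall, and
    integrating along its gradient from endo to epi yields the
    wall-thickness path length.
    """
    phi = [[0.5] * cols for _ in range(rows)]
    phi[0] = [0.0] * cols
    phi[-1] = [1.0] * cols
    for _ in range(iters):
        new = [row[:] for row in phi]
        for r in range(1, rows - 1):
            for c in range(cols):
                left = phi[r][max(c - 1, 0)]
                right = phi[r][min(c + 1, cols - 1)]
                new[r][c] = 0.25 * (phi[r - 1][c] + phi[r + 1][c] + left + right)
        phi = new
    return phi
```

On this straight strip the converged potential is linear across the wall, so the gradient streamlines are straight and their length equals the wall thickness.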
Fully Automatic 3D Bi-Atria Segmentation from Late Gadolinium-Enhanced MRIs Using Double Convolutional Neural Networks

Segmentation of the 3D human atria from late gadolinium-enhanced (LGE)-MRIs is crucial for understanding and analyzing the underlying atrial structures that sustain atrial fibrillation (AF), the most common cardiac arrhythmia. However, due to the lack of a large labeled dataset, current automated methods have only been developed for left atrium (LA) segmentation. Since AF is sustained across both the LA and right atrium (RA), an automatic bi-atria segmentation method is of high interest. We have therefore created a 3D LGE-MRI database from AF patients with both LA and RA labels to train a double, sequentially used convolutional neural network (CNN) for automatic LA and RA epicardium and endocardium segmentation. To mitigate issues regarding the severe class imbalance and the complex geometry of the atria, the first CNN accurately detects the region of interest (ROI) containing the atria and the second CNN performs targeted regional segmentation of the ROI. The CNN comprises a U-Net backbone enhanced with residual blocks, pre-activation normalization, and a Dice loss to improve accuracy and convergence. The receptive field of the CNN was increased by using 5 × 5 kernels to capture large variations in the atrial geometry. Our algorithm segments and reconstructs the LA and RA within 2 s, achieving a Dice accuracy of 94% and a surface-to-surface distance error of approximately 1 pixel. To our knowledge, the proposed approach is the first of its kind and is currently the most robust automatic bi-atria segmentation method, creating a solid benchmark for future studies.

Zhaohan Xiong, Aaqel Nalar, Kevin Jamart, Martin K. Stiles, Vadim V. Fedorov, Jichao Zhao
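The Dice loss mentioned above is widely used to counter class imbalance, since it normalises overlap by structure size rather than counting voxels. A minimal soft-Dice sketch (the smoothing constant is an assumed detail, not taken from the paper):

```python
def soft_dice_loss(pred, target, smooth=1.0):
    """1 - Dice overlap between predicted probabilities and a binary mask.

    pred and target are flat lists of equal length; smooth avoids
    division by zero on empty structures, which matters for the small
    atrial regions relative to the full volume.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + smooth) / (total + smooth)
```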
4D CNN for Semantic Segmentation of Cardiac Volumetric Sequences

We propose a 4D convolutional neural network (CNN) for the segmentation of retrospective ECG-gated cardiac CT, a series of single-channel volumetric data over time. While only a small subset of volumes in the temporal sequence is annotated, we define a sparse loss function on available labels to allow the network to leverage unlabeled images during training and generate a fully segmented sequence. We investigate the accuracy of the proposed 4D network to predict temporally consistent segmentations and compare with traditional 3D segmentation approaches. We demonstrate the feasibility of the 4D CNN and establish its performance on cardiac 4D CCTA (video: https://drive.google.com/uc?id=1n-GJX5nviVs8R7tque2zy2uHFcN_Ogn1 .).

Andriy Myronenko, Dong Yang, Varun Buch, Daguang Xu, Alvin Ihsani, Sean Doyle, Mark Michalski, Neil Tenenholtz, Holger Roth
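The sparse loss described above, defined only on the annotated volumes of the temporal sequence, amounts to masking unlabeled time points out of an ordinary per-frame loss. This sketch uses squared error per frame for brevity; the paper's per-frame loss may differ:

```python
def sparse_sequence_loss(preds, labels):
    """Average per-frame loss over annotated frames only.

    preds  : list of per-frame predictions (flat lists of floats)
    labels : same length; None marks an unannotated frame, which
             contributes no training signal.
    """
    losses = []
    for p, t in zip(preds, labels):
        if t is None:
            continue  # unlabeled volume: skipped entirely
        losses.append(sum((a - b) ** 2 for a, b in zip(p, t)) / len(p))
    return sum(losses) / len(losses) if losses else 0.0
```

Because the 4D network still sees the unlabeled frames as input, it can learn temporally consistent segmentations while being supervised only where labels exist.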
Two-Stage 2D CNN for Automatic Atrial Segmentation from LGE-MRIs

Atrial fibrillation (AF) is the most common sustained heart rhythm disturbance and a leading cause of hospitalization, heart failure and stroke. In current medical practice, atrial segmentation from medical images for clinical diagnosis and treatment is a labor-intensive and error-prone manual process. The atrial segmentation challenge, held in conjunction with the 2018 Medical Image Computing and Computer Assisted Intervention (MICCAI) conference and the Statistical Atlases and Computational Modelling of the Heart (STACOM) workshop, offered the opportunity to develop reliable approaches to automatically annotate and segment the left atrial (LA) chamber using the largest available 3D late gadolinium-enhanced MRI (LGE-MRI) dataset, with 154 3D LGE-MRIs and labels. In that challenge, 11 out of the 27 contestants achieved more than 90% Dice score accuracy; however, a critical question remains as to which approach is optimal for LA segmentation. In this paper, we propose a two-stage 2D fully convolutional neural network with extensive data augmentation that achieves a superior segmentation accuracy, with a Dice score of 93.7%, using the same dataset and conditions as the atrial segmentation challenge. Thus, our approach outperforms the methods proposed in the atrial segmentation challenge while employing fewer computational resources than the challenge-winning method.

Kevin Jamart, Zhaohan Xiong, Gonzalo Maso Talou, Martin K. Stiles, Jichao Zhao
3D Left Ventricular Segmentation from 2D Cardiac MR Images Using Spatial Context

Accurate left ventricular (LV) segmentation in cardiac MRI facilitates quantification of clinical parameters such as LV volume and ejection fraction (EF). We present a CNN-based method to obtain a 3D representation of the LV by integrating information from 2D short-axis and horizontal and vertical long-axis images. Our CNN is flexible to the number of input slices and uses an additional input of image coordinates as spatial context. This concept is validated on variations of two well-known CNN architectures for medical image segmentation: U-Net and DeepMedic. Five-fold cross validation on a dataset of 20 patients achieved a correlation of 95.0/93.1% for quantification of end-diastolic volume, 91.6/90.8% for end-systolic volume and 80.5/84.5% for EF for the two architectures, respectively. We show that (1) incorporating long-axis data improves segmentation performance and (2) providing spatial context by adding image coordinates as input to the CNN yields similar performance with a smaller receptive field.

Sofie Tilborghs, Tom Dresselaers, Piet Claus, Jan Bogaert, Frederik Maes
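Providing spatial context via image coordinates, as described above, amounts to concatenating normalised coordinate maps to the input channels (a CoordConv-style construction; normalising to [-1, 1] is an assumption, not the paper's stated convention):

```python
def add_coordinate_channels(image):
    """Append normalised x- and y-coordinate maps to a 2D image.

    Returns [image, x_map, y_map], each of shape H x W, with
    coordinates scaled to [-1, 1] so the network sees position
    within the volume independently of image size.
    """
    h, w = len(image), len(image[0])
    x_map = [[2.0 * x / (w - 1) - 1.0 for x in range(w)] for _ in range(h)]
    y_map = [[2.0 * y / (h - 1) - 1.0 for _ in range(w)] for y in range(h)]
    return [image, x_map, y_map]
```

With explicit coordinates as input, a small receptive field can still disambiguate basal from apical slices, consistent with finding (2) above.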
Towards Hyper-Reduction of Cardiac Models Using Poly-affine Transformations

This paper presents a method for frame-based finite element models in order to develop fast personalised cardiac electromechanical models. Its originality comes from the choice of the deformation model: it relies on a reduced number of degrees of freedom represented by affine transformations located at arbitrary control nodes over a tetrahedral mesh. This is motivated by the fact that cardiac motion can be well represented by such poly-affine transformations. The shape functions then use a geodesic distance over arbitrary Voronoï-like regions containing the control nodes. The high-order integration of elastic energy density over the domain is performed at arbitrary integration points. This integration, associated with the affine degrees of freedom, allows a lower computational cost while preserving good accuracy for simple geometries. The method is validated on a cube under simple compression, and preliminary results on simplified cardiac geometries are presented, reducing the number of degrees of freedom by a factor of 100.

Gaëtan Desrues, Hervé Delingette, Maxime Sermesant
Conditional Generative Adversarial Networks for the Prediction of Cardiac Contraction from Individual Frames

Cardiac anatomy and function are interrelated in many ways, and these relations can be affected by multiple pathologies. In particular, this applies to ventricular shape and mechanical deformation. We propose a machine learning approach to capture these interactions by using a conditional Generative Adversarial Network (cGAN) to predict cardiac deformation from individual Cardiac Magnetic Resonance (CMR) frames, learning a deterministic mapping from end-diastolic (ED) to end-systolic (ES) CMR short-axis frames. We validate the predicted images by quantifying the difference with real images using mean squared error (MSE) and the structural similarity index (SSIM), as well as the Dice coefficient between their respective endo- and epicardial segmentations, obtained with an additional U-Net. We evaluate the ability of the network to learn "healthy" deformations by training it on ~33,500 image pairs from ~12,000 subjects, and testing on a separate test set of ~4,500 image pairs from the UK Biobank study. Mean MSE, SSIM and Dice scores were 0.0026 ± 0.0013, 0.89 ± 0.032 and 0.89 ± 0.059, respectively. We subsequently re-trained the network on specific patient group data, showing that the network is capable of extracting physiologically meaningful differences between patient populations, suggesting promising applications to pathological data.

Julius Ossenberg-Engels, Vicente Grau
Learning Interactions Between Cardiac Shape and Deformation: Application to Pulmonary Hypertension

Cardiac shape and deformation are two relevant descriptors for the characterization of cardiovascular diseases. It is also known that strong interactions exist between them depending on the disease. In clinical routine, these high dimensional descriptors are reduced to scalar values (ventricular ejection fraction, volumes, global strains...), leading to a substantial loss of information. Methods exist to better integrate these high-dimensional data by reducing the dimension and mixing heterogeneous descriptors. Nevertheless, they usually do not consider the interactions between the descriptors. In this paper, we propose to apply dimensionality reduction on high dimensional cardiac shape and deformation descriptors and take into account their interactions. We investigated two unsupervised linear approaches, an individual analysis of each feature (Principal Component Analysis), and a joint analysis of both features (Partial Least Squares) and related their output to the main characteristics of the studied pathology. We experimented both methods on right ventricular meshes from a population of 254 cases tracked along the cycle (154 with pulmonary hypertension, 100 controls). Despite similarities in the output space obtained by the two methods, substantial differences are observed in the reconstructed shape and deformation patterns along the principal modes of variation, in particular in regions of interest for the studied disease.

Maxime Di Folco, Patrick Clarysse, Pamela Moceri, Nicolas Duchateau
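For two scalar descriptors per case, the individual PCA analysis mentioned above has a closed form. This toy stdlib sketch finds the principal axis of a 2D descriptor cloud; the paper itself operates on high-dimensional mesh shape and deformation descriptors, so this only illustrates the mechanism:

```python
import math

def principal_axis_2d(points):
    """Angle (radians) of the first PCA mode of a set of 2D points.

    Uses the closed-form eigen-decomposition of the 2x2 covariance
    matrix: theta = 0.5 * atan2(2*cov_xy, cov_xx - cov_yy).
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
```

Partial Least Squares, the joint analysis compared in the paper, would instead seek directions maximising the covariance *between* the shape and deformation blocks rather than the variance within each.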
Multimodal Cardiac Segmentation Using Disentangled Representation Learning

Magnetic Resonance (MR) protocols use several sequences to evaluate pathology and organ status. Yet, despite recent advances, the analysis of each sequence’s images (modality hereafter) is treated in isolation. We propose a method suitable for multimodal and multi-input learning and analysis, that disentangles anatomical and imaging factors, and combines anatomical content across the modalities to extract more accurate segmentation masks. Mis-registrations between the inputs are handled with a Spatial Transformer Network, which non-linearly aligns the (now intensity-invariant) anatomical factors. We demonstrate applications in Late Gadolinium Enhanced (LGE) and cine MRI segmentation. We show that multi-input outperforms single-input models, and that we can train a (semi-supervised) model with few (or no) annotations for one of the modalities. Code is available at https://github.com/agis85/multimodal_segmentation .

Agisilaos Chartsias, Giorgos Papanastasiou, Chengjia Wang, Colin Stirrat, Scott Semple, David Newby, Rohan Dharmakumar, Sotirios A. Tsaftaris
DeepLA: Automated Segmentation of Left Atrium from Interventional 3D Rotational Angiography Using CNN

Accurate segmentation of the shape of the left atrium (LA) is important for the treatment of atrial fibrillation (AF) by catheter ablation. Interventional 3D rotational angiography (3DRA) can be used to obtain 3D images during the intervention. Low-dose 3DRA poses segmentation challenges due to high image noise. There is a significant amount of research focusing on automatic segmentation from 3DRA images, all based on active shape or atlas-based approaches. We present an algorithm based on a 3D deep convolutional neural network (CNN) for automated segmentation of 3DRA images to predict the shape of the LA. The CNN is based on the U-Net architecture and consists of an encoder and a decoder part. It is designed to be trained end-to-end from scratch on interactively obtained semi-automated segmentations of 3DRA images, which include the body of the LA and the proximal pulmonary veins up to the first branching vessel. The CNN is trained and validated using 5-fold cross-validation on 20 3DRA images by computing the Dice score (0.959 ± 0.015), recall (0.962 ± 0.026), precision (0.957 ± 0.021) and mean surface distance (0.716 ± 0.276 mm). We further validated the algorithm on an additional data set of 5 images, on which it achieved a Dice score and mean surface distance of 0.937 ± 0.016 and 1.500 ± 0.368 mm, respectively.

Kobe Bamps, Stijn De Buck, Jeroen Bertels, Rik Willems, Christophe Garweg, Peter Haemers, Joris Ector
Non-invasive Pressure Estimation in Patients with Pulmonary Arterial Hypertension: Data-Driven or Model-Based?

Right heart catheterisation is considered the gold standard for the assessment of patients with suspected pulmonary hypertension. It provides clinicians with meaningful data, such as pulmonary capillary wedge pressure and pulmonary vascular resistance; however, its usage is limited due to its invasive nature. Non-invasive alternatives, like Doppler echocardiography, can provide insightful measurements of the right heart but lack detailed information related to the pulmonary vasculature. In order to explore non-invasive means, we studied a dataset of 95 pulmonary hypertension patients, which includes measurements from echocardiography and from right-heart catheterisation. We used data extracted from echocardiography to personalise a cardiac circulation model and tested its power to predict the catheter data. Standard machine learning methods were also investigated for pulmonary artery pressure prediction. Our preliminary results demonstrate the potential predictive power of both data-driven and model-based approaches.

Yingyu Yang, Stephane Gillon, Jaume Banus, Pamela Moceri, Maxime Sermesant
Deep Learning Surrogate of Computational Fluid Dynamics for Thrombus Formation Risk in the Left Atrial Appendage

Recently, the risk of thrombus formation in the left atrium (LA) has been assessed through patient-specific computational fluid dynamics (CFD) simulations, characterizing the complex 4D nature of blood flow in the left atrial appendage (LAA). Nevertheless, the vast computational resources and long computing times required by traditional CFD methods prevent their embedding in the clinical workflow of time-sensitive applications. In this study, two distinct deep learning (DL) architectures have been developed to receive the patient-specific LAA geometry as an input and predict the endothelial cell activation potential (ECAP), which is linked to the risk of thrombosis. The first network is based on a simple fully-connected network, while the second also performs a dimensionality reduction of the variables. Both models were trained with a synthetic dataset of 210 LAA geometries and are able to accurately predict the ECAP distributions, with an average error of 4.72% for the fully-connected approach and 5.75% for its counterpart. Most importantly, the ECAP predictions were obtained quasi-instantaneously, orders of magnitude faster than with conventional CFD.

Xabier Morales, Jordi Mill, Kristine A. Juhl, Andy Olivares, Guillermo Jimenez-Perez, Rasmus R. Paulsen, Oscar Camara
End-to-end Cardiac Ultrasound Simulation for a Better Understanding of Image Quality

Ultrasound imaging is a very versatile and fast medical imaging modality; however, it can suffer from serious image quality degradation. The origin of such loss of image quality is often difficult to identify in detail, which makes it difficult to design probes and tools that are less affected. The objective of this manuscript is to present an end-to-end simulation pipeline that makes it possible to generate synthetic ultrasound images while controlling every step of the pipeline, from the simulated cardiac function to the torso anatomy, probe parameters, and reconstruction process. Such a pipeline enables every parameter to be varied in order to quantitatively evaluate its impact on the final image quality. We present here first results on classical ultrasound phantoms and a digital heart. The utility of this pipeline is exemplified with the impact of ribs on the resulting cardiac ultrasound image.

Alexandre Legay, Thomas Tiennot, Jean-François Gelly, Maxime Sermesant, Jean Bulté
Probabilistic Motion Modeling from Medical Image Sequences: Application to Cardiac Cine-MRI

We propose to learn a probabilistic motion model from a sequence of images. Besides spatio-temporal registration, our method can predict motion from a limited number of frames, which is useful for temporal super-resolution. The model is based on a probabilistic latent space and a novel temporal dropout training scheme. This enables simulation and interpolation of realistic motion patterns given only one or any subset of frames of a sequence. The encoded motion can also be transported from one subject to another without the need for inter-subject registration. An unsupervised generative deformation model is applied within a temporal convolutional network, which leads to a diffeomorphic motion model encoded as a low-dimensional motion matrix. Applied to cardiac cine-MRI sequences, we show improved registration accuracy and spatio-temporally smoother deformations compared to three state-of-the-art registration algorithms. We also demonstrate the model's applicability to motion transport by simulating a pathology in a healthy case. Furthermore, we show an improved motion reconstruction from incomplete sequences compared to linear and cubic interpolation.

Julian Krebs, Tommaso Mansi, Nicholas Ayache, Hervé Delingette
Deep Learning for Cardiac Motion Estimation: Supervised vs. Unsupervised Training

Deep learning based registration methods have emerged as alternatives to traditional registration methods, with competitive accuracy and significantly less runtime. Two different strategies have been proposed to train such deep learning registration networks: a supervised training strategy, where the model is trained to regress to generated ground truth deformations; and an unsupervised training strategy, where the model directly optimises the similarity between the registered images. In this work, we directly compare the performance of these two training strategies for cardiac motion estimation on cardiac cine MR sequences. Testing on real cardiac MRI data shows that while the supervised training yields more regular deformations, the unsupervised strategy more accurately captures the deformation of anatomical structures in cardiac motion.

Huaqi Qiu, Chen Qin, Loic Le Folgoc, Benjamin Hou, Jo Schlemper, Daniel Rueckert

Multi-Sequence Cardiac MR Segmentation Challenge

Frontmatter
Style Data Augmentation for Robust Segmentation of Multi-modality Cardiac MRI

We propose a data augmentation method to improve the segmentation accuracy of a convolutional neural network on a multi-modality cardiac magnetic resonance (CMR) dataset. The strategy aims to reduce over-fitting of the network to any specific intensity or contrast of the training images by introducing diversity in these two aspects. The style data augmentation (SDA) strategy increases the size of the training dataset by using multiple image processing functions, including adaptive histogram equalisation, Laplacian transformation, Sobel edge detection, intensity inversion and histogram matching. For the segmentation task, we developed the thresholded connection layer network (TCL-Net), a minimalist rendition of the U-Net architecture, which is designed to reduce convergence and computation times. We integrate the dual U-Net strategy to increase the resolution of the 3D segmentation target. Using these approaches on a multi-modality dataset, with SSFP and T2-weighted images for training and LGE for validation, we achieve 90% and 96% validation Dice coefficients for endocardium and epicardium segmentation. This result can be interpreted as a proof of concept for a generalised segmentation network that is robust to the quality or modality of the input images. When testing on our mono-centric LGE image dataset, the SDA method also improves the performance of the epicardium segmentation, with an increase from 87% to 90% for the single-network segmentation.

Buntheng Ly, Hubert Cochet, Maxime Sermesant
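Two of the style-augmentation functions listed above, intensity inversion and histogram equalisation, can be sketched with the standard library alone. These are generic stand-ins for the paper's processing functions, not its exact implementations:

```python
import bisect

def invert_intensity(image):
    """Intensity inversion: bright myocardium becomes dark and vice
    versa, discouraging the network from relying on absolute contrast."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    return [[hi + lo - v for v in row] for row in image]

def equalize(image, levels=256):
    """Simple histogram equalisation via the empirical CDF: each value
    is remapped to its rank fraction, flattening the intensity histogram."""
    flat = sorted(v for row in image for v in row)
    n = len(flat)
    return [[(levels - 1) * bisect.bisect_right(flat, v) / n for v in row]
            for row in image]
```

Each training image would be passed through a random subset of such functions, multiplying the effective dataset size while keeping the segmentation labels unchanged.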
Unsupervised Multi-modal Style Transfer for Cardiac MR Segmentation

In this work, we present a fully automatic method to segment cardiac structures from late-gadolinium enhanced (LGE) images without using labelled LGE data for training, but instead by transferring the anatomical knowledge and features learned on annotated balanced steady-state free precession (bSSFP) images, which are easier to acquire. Our framework mainly consists of two neural networks: a multi-modal image translation network for style transfer and a cascaded segmentation network for image segmentation. The multi-modal image translation network generates realistic and diverse synthetic LGE images conditioned on a single annotated bSSFP image, forming a synthetic LGE training set. This set is then utilized to fine-tune the segmentation network pre-trained on labelled bSSFP images, achieving the goal of unsupervised LGE image segmentation. In particular, the proposed cascaded segmentation network is able to produce accurate segmentation by taking both shape prior and image appearance into account, achieving an average Dice score of 0.92 for the left ventricle, 0.83 for the myocardium, and 0.88 for the right ventricle on the test set.

Chen Chen, Cheng Ouyang, Giacomo Tarroni, Jo Schlemper, Huaqi Qiu, Wenjia Bai, Daniel Rueckert
An Automatic Cardiac Segmentation Framework Based on Multi-sequence MR Image

LGE CMR is an efficient technology for detecting infarcted myocardium, and an efficient and objective ventricle segmentation method for LGE images can benefit the localization of the infarcted myocardium. In this paper, we propose an automatic framework for LGE image segmentation. Only 5 labeled LGE volumes were available, with about 15 slices per volume. We applied histogram matching and a rotation-invariant registration method to the other labeled modalities to achieve effective augmentation of the training data. A CNN segmentation model was trained on the augmented training data with a leave-one-out strategy. The model's predictions were post-processed by a connected component analysis for each class, retaining the largest connected component as the final segmentation result. Our model was evaluated in the 2019 Multi-sequence Cardiac MR Segmentation Challenge. The mean results over 40 testing volumes for Dice score, Jaccard score, surface distance, and Hausdorff distance are 0.8087, 0.6976, 2.8727 mm, and 15.6387 mm, respectively. The experimental results show a satisfying performance of the proposed framework. Code is available at https://github.com/Suiiyu/MS-CMR2019 .

Yashu Liu, Wei Wang, Kuanquan Wang, Chengqin Ye, Gongning Luo
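The post-processing step described above, retaining only the largest connected component per class, can be sketched with SciPy (an illustrative sketch assuming 4-connectivity on 2D slices; not the authors' released code):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask, num_classes):
    """For each foreground class label, keep only the largest
    4-connected component and discard the smaller ones."""
    out = np.zeros_like(mask)
    for c in range(1, num_classes + 1):
        binary = (mask == c)
        labelled, n = ndimage.label(binary)
        if n == 0:
            continue
        sizes = ndimage.sum(binary, labelled, index=range(1, n + 1))
        largest = int(np.argmax(sizes)) + 1
        out[labelled == largest] = c
    return out

seg = np.array([[1, 1, 0, 1],
                [0, 0, 0, 0],
                [2, 0, 2, 2]])
# the isolated pixels of class 1 and class 2 are removed
print(keep_largest_component(seg, 2))
```

In practice this removes small, spatially implausible false-positive islands from the CNN output while leaving the dominant cardiac structures untouched.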
Cardiac Segmentation of LGE MRI with Noisy Labels

In this work, we attempt the segmentation of cardiac structures in late gadolinium-enhanced (LGE) magnetic resonance images (MRI) using only minimal supervision, in a two-step approach. In the first step, we register a small set of five LGE cardiac magnetic resonance (CMR) images with ground-truth labels to a set of 40 target LGE CMR images without annotation. Each manually annotated ground truth provides labels of the myocardium and the left ventricle (LV) and right ventricle (RV) cavities, which are used as atlases. After multi-atlas label fusion by majority voting, we possess noisy labels for each of the target LGE images. A second set of manual labels exists for 30 patients of the target LGE CMR images, but these are annotated on different MRI sequences (bSSFP and T2-weighted). Again, we use multi-atlas label fusion with a consistency constraint to further refine our noisy labels when additional annotations in other modalities are available for a given patient. In the second step, we train a deep convolutional network for semantic segmentation on the target data while using data augmentation techniques to avoid over-fitting to the noisy labels. After inference and simple post-processing, we achieve our final segmentation for the target LGE CMR images, resulting in average Dice scores of 0.890, 0.780, and 0.844 for the LV cavity, LV myocardium, and RV cavity, respectively.

Holger Roth, Wentao Zhu, Dong Yang, Ziyue Xu, Daguang Xu
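The multi-atlas label fusion by majority voting described above reduces to a per-voxel vote once the atlases are co-registered to the target. A small NumPy sketch (registration is assumed to have happened already; array shapes and the helper name are invented for illustration):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse co-registered atlas label maps by a per-voxel majority vote."""
    stacked = np.stack(label_maps)            # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # vote count per label at each voxel, then take the winning label
    votes = np.stack([(stacked == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

atlases = [np.array([[0, 1], [2, 2]]),
           np.array([[0, 1], [1, 2]]),
           np.array([[1, 1], [2, 0]])]
print(majority_vote(atlases))  # winning labels: [[0, 1], [2, 2]]
```

The fused map is "noisy" in exactly the sense used above: where the registered atlases disagree, the vote can pick the wrong tissue, which is why the authors add a consistency constraint and noise-robust training on top.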
Pseudo-3D Network for Multi-sequence Cardiac MR Segmentation

Deep learning approaches have been regarded as powerful models for cardiac magnetic resonance (CMR) image segmentation. However, most current deep learning approaches do not fully utilize the information from multi-sequence (MS) cardiac magnetic resonance. In this work, deep learning is used to fully automatically segment MS CMR data. The balanced Steady-State Free Precession (bSSFP) cine sequence is used to perform left-ventricular positioning as prior knowledge, and the Late Gadolinium Enhancement (LGE) sequence is then used for precise segmentation. This segmentation strategy makes full use of the complementary information in the MS CMR data. Moreover, to handle the anisotropy of volumetric medical images, we employ a Pseudo-3D convolutional neural network to segment the LGE CMR data, which combines the efficiency of 2D networks with preservation of the spatial structure of 3D data, without compromising segmentation accuracy. Experimental results of the Multi-sequence Cardiac MR Segmentation Challenge (MS-CMRSeg 2019) show that our approach achieves gratifying results even with limited GPU computing resources and small amounts of annotated data. The full implementation and configuration files for this article are available at https://github.com/liut969/Multi-sequence-Cardiac-MR-Segmentation .

Tao Liu, Yun Tian, Shifeng Zhao, XiaoYing Huang, Yang Xu, Gaoyuan Jiang, Qingjun Wang
SK-Unet: An Improved U-Net Model with Selective Kernel for the Segmentation of Multi-sequence Cardiac MR

In clinical practice, myocardial infarction (MI), a common cardiovascular disease, is mainly evaluated based on late gadolinium enhancement (LGE) cardiac magnetic resonance images (CMRIs). Automatic segmentation of the left ventricle (LV), right ventricle (RV), and left ventricular myocardium (LVM) in LGE CMRIs is desired to aid clinical diagnosis. To accomplish this segmentation task, this paper proposes a modified U-Net architecture combining multi-sequence CMRIs, including cine, LGE, and T2-weighted CMRIs. The cine and T2-weighted CMRIs are used to assist segmentation of the LGE CMRIs. In this segmentation network, squeeze-and-excitation residual (SE-Res) and selective kernel (SK) modules are inserted in the down-sampling and up-sampling stages, respectively. The SK module makes the obtained feature maps more informative in both the spatial and channel-wise dimensions, attaining a more precise segmentation result. The dataset is from the MICCAI challenge (MS-CMRSeg 2019) and was acquired from 45 patients, each with three CMR sequences. The cine and T2-weighted CMRIs of 35 patients and the LGE CMRIs of 5 patients are labeled. Our method achieves mean Dice scores of 0.922 (LV), 0.827 (LVM), and 0.874 (RV) on the LGE CMRIs.

Xiyue Wang, Sen Yang, Mingxuan Tang, Yunpeng Wei, Xiao Han, Ling He, Jing Zhang
Multi-sequence Cardiac MR Segmentation with Adversarial Domain Adaptation Network

Automatic and accurate segmentation of the ventricles and myocardium from multi-sequence cardiac MRI (CMR) is crucial for the diagnosis and treatment management of patients suffering from myocardial infarction (MI). However, due to the domain shift among different modalities of datasets, the performance of deep neural networks drops significantly when the training and testing datasets are distinct. In this paper, we propose an unsupervised domain alignment method to explicitly alleviate the domain shifts among different CMR sequences, e.g., bSSFP, LGE, and T2-weighted. Our segmentation network is an attention U-Net with a pyramid pooling module, where multi-level feature-space and output-space adversarial learning are proposed to transfer discriminative domain knowledge across different datasets. Moreover, we further introduce a group-wise feature recalibration module to enforce fine-grained semantic-level feature alignment, matching features from different networks that share the same class label. We evaluate our method on the Multi-sequence Cardiac MR Segmentation Challenge 2019 datasets, which contain three different modalities of MRI sequences. Extensive experimental results show that the proposed methods obtain significant segmentation improvements compared with the baseline models.

Jiexiang Wang, Hongyu Huang, Chaoqi Chen, Wenao Ma, Yue Huang, Xinghao Ding
Deep Learning Based Multi-modal Cardiac MR Image Segmentation

Accurate modelling and segmentation of the ventricles and myocardium in cardiac MR (CMR) images is crucial for diagnosis and treatment management of patients suffering from myocardial infarction (MI). As infarcted myocardium appears with distinctive brightness in LGE CMR compared with healthy tissue, LGE CMR can help doctors better study the presence, location, and extent of MI in clinical diagnosis. Hence it is of great significance to delineate the ventricles and myocardium in LGE CMR images. In this study, we propose a multi-modal cardiac MR image segmentation strategy combining the T2-weighted CMR and the balanced Steady-State Free Precession (bSSFP) CMR sequences. Specifically, the T2-weighted and bSSFP CMRs are co-registered and fed to a convolutional neural network for the first-stage segmentation in bSSFP space. After predicting all the labels, we further register the T2-weighted CMR, bSSFP and the corresponding labels into LGE space, and feed them to a convolutional neural network for the second-stage segmentation. Finally, we post-process the output masks to further ensure the accuracy of the segmentation results. On the test set of the Multi-sequence Cardiac MR (MS-CMR) Challenge 2019, the proposed method achieves Dice scores of 0.8541, 0.7131 and 0.7924 for the left ventricle (LV), left ventricular myocardium (LV myo), and right ventricle (RV), respectively.

Rencheng Zheng, Xingzhong Zhao, Xingming Zhao, He Wang
Segmentation of Multimodal Myocardial Images Using Shape-Transfer GAN

Myocardium segmentation of late gadolinium enhancement (LGE) cardiac MR images is important for evaluating infarction regions in clinical practice. The pathological myocardium in LGE images presents distinctive brightness and textures compared with healthy tissue, making it much more challenging to segment. In contrast, balanced Steady-State Free Precession (bSSFP) cine images show clear boundaries and can be easily segmented. Given this fact, we propose a novel shape-transfer GAN for LGE images, which can (1) learn to generate realistic LGE images from bSSFP with the anatomical shape preserved, and (2) learn to segment the myocardium of LGE images from these generated images. It is worth noting that no segmentation labels of the LGE images are used during this procedure. We test our model on the dataset from the Multi-sequence Cardiac MR Segmentation Challenge. The results show that the proposed shape-transfer GAN can achieve accurate myocardium masks for LGE images.

Xumin Tao, Hongrong Wei, Wufeng Xue, Dong Ni
Knowledge-Based Multi-sequence MR Segmentation via Deep Learning with a Hybrid U-Net++ Model

The accurate segmentation, analysis and modelling of the ventricles and myocardium play a significant role in the diagnosis and treatment of patients with myocardial infarction (MI). Magnetic resonance imaging (MRI) is specifically employed to collect anatomical and functional imaging information about the heart. In this paper, we propose a segmentation framework for the MS-CMRSeg Multi-sequence Cardiac MR Segmentation Challenge, which can extract the desired regions and boundaries. In our framework, we designed a binary classifier to improve the accuracy of left ventricle (LV) segmentation. Extensive experiments on both the validation and testing datasets demonstrate the effectiveness of this strategy and give insight towards future work.

Jinchang Ren, He Sun, Yumin Huang, Hao Gao
Combining Multi-Sequence and Synthetic Images for Improved Segmentation of Late Gadolinium Enhancement Cardiac MRI

Accurate segmentation of the cardiac boundaries in late gadolinium enhancement magnetic resonance images (LGE-MRI) is a fundamental step for accurate quantification of scar tissue. However, while there are many solutions for automatic cardiac segmentation of cine images, the presence of scar tissue can make the correct delineation of the myocardium in LGE-MRI challenging even for human experts. As part of the Multi-Sequence Cardiac MR Segmentation Challenge, we propose a solution for LGE-MRI segmentation based on two components. First, a generative adversarial network is trained for modality-to-modality translation between cine and LGE-MRI sequences to obtain extra synthetic images for both modalities. Second, a deep learning model is trained for segmentation with different combinations of original, augmented and synthetic sequences. Our results, based on three magnetic resonance sequences (LGE, bSSFP and T2) from 45 different patients, show that multi-sequence model training integrating synthetic images and data augmentation improves segmentation over conventional training with real datasets. In conclusion, the accuracy of LGE-MRI segmentation can be improved by using the complementary information provided by non-contrast MRI sequences.

Víctor M. Campello, Carlos Martín-Isla, Cristian Izquierdo, Steffen E. Petersen, Miguel A. González Ballester, Karim Lekadir
Automated Multi-sequence Cardiac MRI Segmentation Using Supervised Domain Adaptation

Left ventricle segmentation and morphological assessment are essential for improving diagnosis and our understanding of cardiomyopathy, which in turn is imperative for reducing risk of myocardial infarctions in patients. Convolutional neural network (CNN) based methods for cardiac magnetic resonance (CMR) image segmentation rely on supervision with pixel-level annotations, and may not generalize well to images from a different domain. These methods are typically sensitive to variations in imaging protocols and data acquisition. Since annotating multi-sequence CMR images is tedious and subject to inter- and intra-observer variations, developing methods that can automatically adapt from one domain to the target domain is of great interest. In this paper, we propose an approach for domain adaptation in the multi-sequence CMR segmentation task using transfer learning that combines multi-source image information. We first train an encoder-decoder CNN on T2-weighted and balanced Steady-State Free Precession (bSSFP) MR images with pixel-level annotations and fine-tune the same network with a limited number of Late Gadolinium Enhanced MR (LGE-MR) subjects, to adapt the domain features. The domain-adapted network was trained with just four LGE-MR training samples and obtained an average Dice score of ~85.0% on the test set comprising 40 LGE-MR subjects. The proposed method significantly outperformed a network without adaptation trained from scratch on the same set of LGE-MR training data.

Sulaiman Vesal, Nishant Ravikumar, Andreas Maier
A Two-Stage Fully Automatic Segmentation Scheme Using Both 2D and 3D U-Net for Multi-sequence Cardiac MR

Multi-sequence cardiac magnetic resonance (MR) segmentation is an important medical imaging technology that facilitates intelligent interpretation of clinical MR images. However, fully automatic segmentation of multi-sequence cardiac MR is a challenging task due to the complexity and variability of cardiac anatomy. In this study, we propose a two-stage deep learning scheme for automatic segmentation of volumetric multi-sequence MR images by leveraging both 2D and 3D U-Net. In the first stage, a 2D U-Net model coupled with the iterative randomized Hough transform is employed on the balanced steady-state free precession (bSSFP) MR sequences to find the center coordinates of the left ventricles (LVs). Regions of interest (ROIs) are then localized around the center coordinates on the corresponding late gadolinium enhanced (LGE) MR sequences. In the second stage, a 3D probabilistic U-Net model is applied to the ROIs in the LGE data to segment the LV, right ventricle (RV) and left ventricular myocardium (MYO). Experimental results on the MICCAI 2019 Multi-Sequence Cardiac MR Segmentation (MS-CMRSeg) Challenge show that the proposed scheme performs well, with average Dice similarity coefficients of 0.792, 0.697 and 0.611 for the LV, RV and MYO, respectively.

Haohao Xu, Zhuangwei Xu, Wenting Gu, Qi Zhang
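The ROI localization in the first stage amounts to cropping a fixed-size window around the detected LV center before the 3D network runs. A sketch of such a crop with border clamping (array shapes, window size and the clamping behaviour are assumptions for illustration, not the authors' code):

```python
import numpy as np

def crop_roi(volume, center, size):
    """Crop a size-by-size window around a (row, col) center from every
    slice of a stack, clamping the window to the image borders."""
    h, w = volume.shape[-2:]
    r0 = min(max(center[0] - size // 2, 0), h - size)
    c0 = min(max(center[1] - size // 2, 0), w - size)
    return volume[..., r0:r0 + size, c0:c0 + size]

vol = np.arange(2 * 8 * 8).reshape(2, 8, 8)   # 2 slices of 8x8 pixels
roi = crop_roi(vol, center=(4, 4), size=4)
print(roi.shape)  # (2, 4, 4)
```

Cropping to a tight window around the heart keeps the 3D network's input small and centred, which matters when GPU memory and training data are limited.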
Adversarial Convolutional Networks with Weak Domain-Transfer for Multi-sequence Cardiac MR Images Segmentation

Analysis and modeling of the ventricles and myocardium are important in the diagnosis and treatment of heart diseases. Manual delineation of these tissues in cardiac MR (CMR) scans is laborious and time-consuming, and the ambiguity of the boundaries makes the segmentation task rather challenging. Furthermore, annotations for some modalities, such as Late Gadolinium Enhancement (LGE) MRI, are often not available. We propose an end-to-end segmentation framework based on a convolutional neural network (CNN) and adversarial learning. A dilated residual U-shape network is used as a segmentor to generate the prediction mask, while a CNN is utilized as a discriminator to judge the segmentation quality. To leverage the annotations available across modalities for each patient, a new loss function named weak domain-transfer loss is introduced to the pipeline. The proposed model is evaluated on the public dataset released by the challenge organizers at MICCAI 2019, which consists of 45 sets of multi-sequence CMR images. We demonstrate that the proposed adversarial pipeline outperforms baseline deep-learning methods.

Jingkun Chen, Hongwei Li, Jianguo Zhang, Bjoern Menze

CRT-EPiggy19 Challenge

Frontmatter
Best (and Worst) Practices for Organizing a Challenge on Cardiac Biophysical Models During AI Summer: The CRT-EPiggy19 Challenge

In recent years, dozens of challenges have been organized to benchmark computational techniques on shared data. Historically, most challenges at conferences such as MICCAI have been devoted to medical image processing, especially object recognition or segmentation tasks. Due to the increasing popularity of and easy access to machine (deep) learning methods, as part of the current Artificial Intelligence (AI) summer, the number of AI-related challenges has exploded. In parallel, the biophysical modelling community also has a long history of organizing challenges, including synthetic and experimental data, to assess the accuracy of the resulting simulations. In this paper, the similarities and differences between the computational challenges organized by these communities are reviewed, suggesting best practices and pitfalls to avoid when organizing a challenge on biophysical models. Specifically, details are given about the preparation of the CRT-EPiggy19 challenge.

Oscar Camara
Prediction of CRT Activation Sequence by Personalization of Biventricular Models from Electroanatomical Maps

Optimization of lead placement and interventricular delay settings in patients under cardiac resynchronization therapy is a complex task that might benefit from prior information based on models. Biophysical models can be used to predict the sequence of electrical heart activation in a patient given a set of parameters, which should be personalized to the patient. In this paper, we use electroanatomical maps to personalize the endocardial activation of the right ventricle, and the different tissue conductivities, in a pig model with left bundle branch block, to reproduce personalized biventricular activations. Finally, we tested the personalized heart model by virtually simulating cardiac resynchronization therapy.

Juan Francisco Gomez, Beatriz Trenor, Rafael Sebastian
Prediction of CRT Response on Personalized Computer Models

Congestive heart failure (CHF) is one of the leading causes of death worldwide, despite optimal treatment. Cardiac resynchronization therapy (CRT) is an established method for treating severe CHF with conduction disorders, in particular complete left bundle branch block (LBBB). However, to date, up to 30% of patients do not respond to CRT. This study focuses on developing model-based approaches to predict the consequences of ventricular pacing after installing a CRT device, based on computational cardiac models. In this work, we used experimental data from the STACOM 2019 "CRT-EPiggy" Challenge, containing a training dataset of EAM data recorded in the ventricles of 4 pig hearts. To simulate local activation time (LAT) in the model, we used an Eikonal-equation-based model whose parameters were identified from the experimental data. By solving an optimisation problem over the conductivity parameters of this model, we were able to achieve good-quality LAT simulations before and after bi-ventricular pacing, with a mean error of about 3 ms. We found essential changes in the local conduction velocity (CV) in the ventricles at bi-ventricular pacing after CRT, both in the experimental data and in the simulations. To predict these changes and the post-operational LAT from the pre-operational data, we used a population-based approach to simulate the effects of conductivity modulation due to pacing. This approach allowed us to predict the activation pattern at ventricular pacing, based on the optimised model of LAT before pacing, with an average error of 7 ms. Despite the promising overall results of our pilot study, the presence of rather large local errors in the model predictions requires further algorithm improvement.

Svyatoslav Khamzin, Arsenii Dokuchaev, Olga Solovyova
Eikonal Model Personalisation Using Invasive Data to Predict Cardiac Resynchronisation Therapy Electrophysiological Response

In this manuscript, we personalise an Eikonal model of cardiac wave-front propagation using data acquired during an invasive electrophysiological study. To this end, we use a genetic algorithm to determine the parameters that provide the best fit between simulated and recorded activation maps during sinus rhythm. We propose a way to parameterise the Eikonal simulations that takes into account the influence of the Purkinje network and the septomarginal trabecula while keeping the computational cost low. We then re-use these parameters to predict the electrophysiological response to cardiac resynchronisation therapy by adapting the simulation initialisation to the pacing locations. We experiment with different divisions of the myocardium over which the propagation velocities are optimised. We conclude that separating both ventricles and both endocardia seems to provide a reasonable personalisation framework in terms of accuracy and predictive power.

Nicolas Cedilnik, Maxime Sermesant
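The Eikonal models in the two papers above map a field of local conduction velocities to a local activation time (LAT) for every point in the tissue. A toy stand-in for a proper fast-marching Eikonal solver is Dijkstra's algorithm on a pixel grid, with edge costs set by the local velocities (everything here, including the grid, velocities and function name, is invented for illustration):

```python
import heapq
import numpy as np

def activation_times(speed, source):
    """Crude Eikonal-style local activation times on a 2D grid:
    Dijkstra, with edge costs derived from per-pixel conduction velocity."""
    h, w = speed.shape
    lat = np.full((h, w), np.inf)
    lat[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > lat[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # travel time for one pixel at the mean of the two velocities
                nt = t + 2.0 / (speed[r, c] + speed[nr, nc])
                if nt < lat[nr, nc]:
                    lat[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return lat

speed = np.ones((5, 5))           # uniform unit conduction velocity
lat = activation_times(speed, (0, 0))
print(lat[0, 4], lat[4, 4])       # 4.0 8.0 (Manhattan travel times)
```

Personalisation then means adjusting the `speed` field (the conductivity parameters) until the simulated LAT map matches the recorded electroanatomical map, which is the optimisation problem both papers solve, by direct optimisation and by a genetic algorithm respectively.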

LV-Full Quantification Challenge

Frontmatter
Left Ventricle Quantification Using Direct Regression with Segmentation Regularization and Ensembles of Pretrained 2D and 3D CNNs

Cardiac left ventricle (LV) quantification provides a tool for diagnosing cardiac diseases. Automatic calculation of all relevant LV indices from cardiac MR images is an intricate task due to large variations among patients and deformation during the cardiac cycle. Typical methods are based on segmentation of the myocardium or direct regression from MR images. To consider cardiac motion and deformation, recurrent neural networks and spatio-temporal convolutional neural networks (CNNs) have been proposed. We study an approach combining state-of-the-art models and emphasizing transfer learning to account for the small dataset provided for the LVQuan19 challenge. We compare 2D spatial and 3D spatio-temporal CNNs for LV indices regression and cardiac phase classification. To incorporate segmentation information, we propose an architecture-independent segmentation-based regularization. To improve the robustness further, we employ a search scheme that identifies the optimal ensemble from a set of architecture variants. Evaluating on the LVQuan19 Challenge training dataset with 5-fold cross-validation, we achieve mean absolute errors of 111 ± 76 mm², 1.84 ± 0.90 mm and 1.22 ± 0.60 mm for area, dimension and regional wall thickness regression, respectively. The error rate for cardiac phase classification is 6.7%.

Nils Gessert, Alexander Schlaefer
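The abstract above does not specify its ensemble search; one common scheme it could resemble is greedy forward selection over model outputs (Caruana-style). A sketch with invented toy data, not the authors' implementation:

```python
import numpy as np

def greedy_ensemble(preds, target, n_rounds=3):
    """Greedy forward selection (with replacement) of model predictions:
    at each round, add whichever model's output most reduces the mean
    absolute error (MAE) of the averaged ensemble."""
    chosen, best_err = [], np.inf
    for _ in range(n_rounds):
        best_i, round_err = None, np.inf
        for i, p in enumerate(preds):
            trial = np.mean([preds[j] for j in chosen] + [p], axis=0)
            err = float(np.mean(np.abs(trial - target)))
            if err < round_err:
                best_i, round_err = i, err
        chosen.append(best_i)
        best_err = round_err
    return chosen, best_err

target = np.array([1.0, 2.0, 3.0])
preds = [np.array([1.1, 2.2, 2.9]),
         np.array([0.8, 1.9, 3.2]),
         np.array([2.0, 3.0, 4.0])]
sel, err = greedy_ensemble(preds, target)
print(sel, round(err, 3))  # [0, 1, 0] 0.033
```

Because selection is with replacement, a strong model can appear multiple times, which weights it more heavily in the final average; a biased model (here the third one) is simply never picked.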
Left Ventricle Quantification with Cardiac MRI: Deep Learning Meets Statistical Models of Deformation

Deep learning has been widely applied for left ventricle (LV) analysis, obtaining state-of-the-art results in quantification through image segmentation. When training datasets are limited, data augmentation becomes critical, but standard augmentation methods do not usually incorporate the natural variation of anatomy. In this paper we propose a pipeline for LV quantification that applies our data augmentation methodology based on statistical models of deformation (SMOD) to quantify the LV through segmentation of cardiac MR (CMR) images, and present an in-depth analysis of the effects of deformation parameters on SMOD performance. We trained and evaluated our pipeline on the MICCAI 2019 Left Ventricle Full Quantification Challenge dataset, and achieved average mean absolute errors (MAE) for areas, dimensions, regional wall thickness and phase of 106 mm², 1.52 mm, 1.01 mm and 8.0% respectively in a 3-fold cross-validation experiment.

Jorge Corral Acero, Hao Xu, Ernesto Zacur, Jurgen E. Schneider, Pablo Lamata, Alfonso Bueno-Orovio, Vicente Grau
Left Ventricular Parameter Regression from Deep Feature Maps of a Jointly Trained Segmentation CNN

Quantification of left ventricular (LV) parameters from cardiac MRI is important to assess cardiac condition and help in the diagnosis of certain pathologies. We present a CNN-based approach for automatic quantification of 11 LV indices: LV and myocardial area, 3 LV dimensions and 6 regional wall thicknesses (RWT). We use an encoder-decoder segmentation architecture and hypothesize that deep feature maps contain important shape information suitable to start an additional network branch for LV index regression. The CNN is simultaneously trained on regression and segmentation losses. We validated our approach on the LVQuan19 training dataset and found that our proposed CNN significantly outperforms a standard encoder regression CNN. The mean absolute errors and Pearson correlation coefficients obtained for the different indices are respectively 190 mm² (0.96), 214 mm² (0.90), 2.99 mm (0.95) and 1.82 mm (0.71) for LV area, myocardial area, LV dimensions and RWT on a three-fold cross-validation, and 186 mm² (0.97), 222 mm² (0.88), 3.03 mm (0.95) and 1.67 mm (0.73) on a five-fold cross-validation.

Sofie Tilborghs, Frederik Maes
A Two-Stage Temporal-Like Fully Convolutional Network Framework for Left Ventricle Segmentation and Quantification on MR Images

Automatic segmentation of the left ventricle (LV) of a living human heart in a magnetic resonance (MR) image (2D+t) allows measurement of clinically significant indices such as the regional wall thicknesses (RWT), cavity dimensions, cavity and myocardium areas, and cardiac phase. Here, we propose a novel framework made of a sequence of two fully convolutional networks (FCN). The first is a modified temporal-like VGG16 (the "localization network"), used to roughly localize the (filled-in) LV epicardium position in each MR volume. The second FCN is also a modified temporal-like VGG16, but devoted to segmenting the LV myocardium and cavity (the "segmentation network"). We evaluate the proposed method with 5-fold cross-validation on the MICCAI 2019 LV Full Quantification Challenge dataset. For the localization network, we obtain an average Dice index of 0.8953 on the validation set. For the segmentation network, we obtain an average Dice index of 0.8664 on the validation set (with data augmentation). The mean absolute errors (MAE) of the average cavity and myocardium areas, dimensions, and RWT are 114.77 mm², 0.9220 mm and 0.9185 mm, respectively. The computation time of the pipeline is less than 2 s for an entire 3D volume. The error rate of phase classification is 7.6364%, which indicates that the proposed approach has promising performance for estimating all these parameters.

Zhou Zhao, Nicolas Boutry, Élodie Puybareau, Thierry Géraud
Backmatter
Metadata
Title
Statistical Atlases and Computational Models of the Heart. Multi-Sequence CMR Segmentation, CRT-EPiggy and LV Full Quantification Challenges
edited by
Mihaela Pop
Maxime Sermesant
Dr. Oscar Camara
Prof. Dr. Xiahai Zhuang
Shuo Li
Alistair Young
Tommaso Mansi
Avan Suinesiaputra
Copyright Year
2020
Electronic ISBN
978-3-030-39074-7
Print ISBN
978-3-030-39073-0
DOI
https://doi.org/10.1007/978-3-030-39074-7
