2025 | Book

Perinatal, Preterm and Paediatric Image Analysis

9th International Workshop, PIPPI 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 6, 2024, Proceedings

Editors: Daphna Link-Sourani, Esra Abaci Turk, Christopher Macgowan, Jana Hutter, Andrew Melbourne, Roxane Licandro

Publisher: Springer Nature Switzerland

Book Series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 9th International Workshop on Perinatal, Preterm and Paediatric Image Analysis, PIPPI 2024, held in conjunction with the 27th International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2024, in Marrakesh, Morocco, on October 6, 2024.

The 14 full papers presented in this book were carefully reviewed and selected from 17 submissions.
The methods presented in these proceedings cover the full scope of medical image analysis, including segmentation, registration, classification, reconstruction, population analysis, and advanced structural, functional, and longitudinal modeling, all with an application to younger cohorts.

Table of Contents

Frontmatter

Fetal MRI: Motion Correction and Quality Assessment

Frontmatter
SpaER: Learning Spatio-temporal Equivariant Representations for Fetal Brain Motion Tracking
Abstract
In this paper, we introduce SpaER, a pioneering method for fetal motion tracking that leverages equivariant filters and self-attention mechanisms to effectively learn spatio-temporal representations. Unlike conventional approaches that statically estimate fetal brain motions from pairs of images, our method dynamically tracks the rigid movement patterns of the fetal head across temporal and spatial dimensions. Specifically, we first develop an equivariant neural network that efficiently learns rigid motion sequences through low-dimensional spatial representations of images. Subsequently, we learn spatio-temporal representations by incorporating time encoding and self-attention neural network layers. This approach allows for the capture of long-term dependencies of fetal brain motion and addresses alignment errors due to contrast changes and severe motion artifacts. Our model also provides a geometric deformation estimation that properly addresses image distortions across all time frames. To the best of our knowledge, our approach is the first to learn spatio-temporal representations via deep neural networks for fetal motion tracking without data augmentation. We validated our model using real fetal echo-planar images with simulated and real motions. Our method has significant potential for accurately measuring, tracking, and correcting fetal motion in fetal MRI.
Jian Wang, Razieh Faghihpirayesh, Polina Golland, Ali Gholipour
Joint Multi-contrast Reconstruction of Fetal MRI Based on Implicit Neural Representations
Abstract
Fetal brain magnetic resonance imaging (MRI) is critical for the detection of abnormal brain development before birth. A key image processing step is the reconstruction of a 3D high-resolution volume from the acquired series of 2D slices. Several types of MR sequences are commonly acquired during a scanning session, but current reconstruction methods consider each sequence (or contrast) separately. Multi-contrast techniques have been proposed, but they do not compensate for potential movement during the acquisition, which occurs almost systematically in the context of fetal MRI. In this work, we introduce a new method for the joint reconstruction of multiple 3D volumes from different contrasts. Our method combines the redundant and complementary information across several stacks of 2D slices from different acquisition sequences via an implicit neural representation, and includes a slice motion correction module. Our results on both simulations and real data acquired in clinical routine demonstrate the relevance and efficiency of the proposed method.
Steven Jia, Chloé Mercier, Alexandre Pron, Nadine Girard, Guillaume Auzias, François Rousseau
Automatic Disentanglement of Motion in Fetal Low Field MRI Scans
Abstract
Fetal dynamic MRI acquisitions allow insights into fetal motor behaviour, serving as a surrogate for fetal cognitive development. Systematic assessment of these highly temporally resolved 2D images might hence allow novel insights into normal development and contain crucial information in pathology. Here, we present an automatic deep learning-based tool for masking uterine and fetal structures and for quantifying fetal and maternal motion, applied to 61 low-field 0.55T fetal cine MRI datasets acquired between 24 and 40 weeks gestational age at two sites using two different MR contrasts. The lack of cine labels was overcome by imitating cine acquisitions from high-resolution labelled static 3D anatomical balanced steady-state free precession datasets. Results illustrate high segmentation accuracy for the larger uterine structures (mean Dice coefficient of 0.86 for the fetal body, 0.88 for the fetal head, 0.61 for the placenta) as well as the ability to detect fetal motion.
Michael Kitzberger, Sara Neves Silva, Jordina Aviles Verdera, Diego Fajardo Rojas, Alena Uus, Susanne Schulz-Heise, Sandy Schmidt, Michael Schneider, Lisa Story, Michael Uder, Mary Rutherford, Jana Hutter
Advanced Framework for Fetal Diffusion MRI: Dynamic Distortion and Motion Correction
Abstract
Diffusion magnetic resonance imaging (dMRI) is essential for studying the microstructure of the developing fetal brain. However, fetal motion during scans and its interaction with magnetic field inhomogeneities lead to amplified artifacts and data scattering, compromising the consistency of dMRI analysis. This work introduces HAITCH, a novel open-source framework for correcting and reconstructing high-angular resolution dMRI data from challenging fetal scans. Our multi-stage approach incorporates an optimized multi-shell design for increased information capture and motion tolerance, a blip-reversed dual-echo multi-shell acquisition for dynamic distortion correction, advanced motion correction for robust and model-free reconstruction, and outlier detection for improved reconstruction fidelity. Validation experiments on real fetal dMRI scans demonstrate significant improvements and accurate correction across diverse fetal ages and motion levels. HAITCH effectively removes artifacts and reconstructs high-fidelity dMRI data suitable for advanced diffusion modeling and tractography.
Haykel Snoussi, Davood Karimi, Onur Afacan, Mustafa Utkur, Ali Gholipour
Assessing Data Quality on Fetal Brain MRI Reconstruction: A Multi-site and Multi-rater Study
Abstract
Quality assessment (QA) has long been considered essential to guarantee the reliability of neuroimaging studies. It is particularly important for fetal brain MRI, where unpredictable fetal motion can lead to substantial artifacts in the acquired images. Multiple images are then combined into a single volume through super-resolution reconstruction (SRR) pipelines, a step that can also introduce additional artifacts. While multiple studies have designed automated quality control pipelines, no work has evaluated the reproducibility of the manual quality ratings used to train these pipelines. In this work, our objective is twofold. First, we assess the inter- and intra-rater variability of the quality scoring performed by three experts on over 100 SRR images reconstructed using three different SRR pipelines. The raters were asked to assess the quality of images according to 8 specific criteria, such as blurring or tissue contrast, providing a multi-dimensional view on image quality. We show that, even with a protocol and training sessions, criteria like bias field and blur level still have low agreement (ICC below 0.5), while global quality scores show very high agreement (ICC = 0.9) across raters. We also observe that the SRR methods are influenced differently by factors like gestational age, input data quality, and the number of stacks used for reconstruction. Finally, our quality scores allow us to unveil systematic weaknesses of the different pipelines, indicating how further development could lead to more robust, well-rounded SRR methods.
Thomas Sanchez, Angeline Mihailov, Yvan Gomez, Gerard Martí Juan, Elisenda Eixarch, András Jakab, Vincent Dunet, Mériam Koob, Guillaume Auzias, Meritxell Bach Cuadra

Neurodevelopmental MRI: Segmentation and Morphometry

Frontmatter
Automatic 8-Tissue Segmentation for 6-Month Infant Brains
Abstract
Numerous studies have highlighted that atypical brain development, particularly during infancy and toddlerhood, is linked to an increased likelihood of being diagnosed with a neurodevelopmental condition, such as autism. Accurate brain tissue segmentations for morphological analysis are essential in numerous infant studies. However, because ongoing white matter (WM) myelination changes tissue contrast in T1- and T2-weighted images, automatic tissue segmentation in 6-month infants is particularly difficult. On the other hand, manual labeling by experts is time-consuming and labor-intensive. In this study, we propose the first 8-tissue segmentation pipeline for six-month-old infant brains. This pipeline utilizes domain adaptation (DA) techniques to leverage our longitudinal data, including neonatal images segmented with the neonatal Developing Human Connectome Project structural pipeline. Our pipeline takes raw 6-month images as inputs and generates the 8-tissue segmentation as outputs, forming an end-to-end segmentation pipeline. The segmented tissues include WM, gray matter (GM), cerebrospinal fluid (CSF), ventricles, cerebellum, basal ganglia, brainstem, and hippocampus/amygdala. A Cycle-Consistent Generative Adversarial Network (CycleGAN) and an Attention U-Net were employed to achieve the image contrast transformation between neonatal and 6-month images and to perform tissue segmentation on the synthesized 6-month images (neonatal images with 6-month intensity contrast), respectively. Moreover, we incorporated the segmentation outputs from the Infant Brain Extraction and Analysis Toolbox (iBEAT) and another Attention U-Net to further enhance the performance and construct the end-to-end segmentation pipeline. Our evaluation with real 6-month images achieved a Dice score of 0.92, an HD95 of 1.6, and an ASSD of 0.42.
Yilan Dong, Vanessa Kyriakopoulou, Irina Grigorescu, Grainne McAlonan, Dafnis Batalle, Maria Deprez
Towards Accurate Fetal Brain Parcellation via Hierarchical Network and Loss
Abstract
Automatic fetal brain parcellation on Magnetic Resonance (MR) images is increasingly being used to assess prenatal brain growth and development. Despite their progress, existing methods are limited because they ignore the hierarchical nature of segmentation labels and the rich complementary information among hierarchical labels. To address these limitations, we propose a novel deep-learning model to segment the whole fetal brain into 87 fine-grained regions hierarchically. Specifically, we design a hierarchical network with adjustable levels and define a three-level structure. These levels are dedicated, respectively, to predicting 8 types of brain tissues, 36 more detailed brain regions, and ultimately 87 brain regions according to developing Human Connectome Project (dHCP) labels. The coarse-level network is capable of providing prior features to the fine-level network for fine-grained brain parcellation. This design decomposes a complex problem into simpler ones and addresses the intricate task using the priors obtained from resolving the simple ones. Furthermore, we design a data augmentation module to simulate variations in scanning parameters, enabling precise segmentation of fetal brain images across diverse domains. Finally, we integrate this data augmentation module into a semi-supervised paradigm to alleviate the shortage of high-quality labeled data and enhance the generalizability of our model. Thanks to these designs, our model can obtain fine-grained and multi-scale brain segmentation with high robustness to variations in MR scanners and imaging protocols. Extensive experiments on 558 dHCP and 176 fetal brain MR images demonstrate that our model achieves state-of-the-art segmentation performance across multi-site datasets. Our code is publicly available at https://github.com/sj-huang/HieraParceNet.
Shijie Huang, Kai Zhang, Jiawei Huang, Lingnan Kong, Fangmei Zhu, Zhongxiang Ding, Geng Chen, Dinggang Shen
Automatic Assessment of Fetal Multi-echo Diffusion Weighted Scans
Abstract
Fetal magnetic resonance imaging (MRI) is an intriguing tool to gain insights into early human development. Diffusion MRI is of particular interest to study neuronal development in vivo. However, fetal motion hampers accurate quantification. We propose an automated quality check for a combined multi-echo diffusion-weighted fetal sequence at low field (0.55T), consisting of deep learning-based masking of the brain and quality assessment. Results from 56 fetal datasets between 17 and 41 weeks gestational age illustrate the ability to obtain high-quality masks and transparent, insightful quality scores. Next, the automatic assessment will be performed in real time to guide the scan, initiate possible field-of-view (FOV) shifts, or repeat individual volumes.
Antonia Bortolazzi, Jordina Aviles Verdera, Kelly Payette, Sara Neves Silva, Mary Rutherford, Jo Hajnal, Jana Hutter
Rethinking Fetal Brain Atlas Construction: A Deep Learning Perspective
Abstract
Atlas construction is a crucial task for the analysis of fetal brain magnetic resonance imaging (MRI). Traditional registration-based methods for atlas construction may suffer from issues such as inaccurate registration and difficulty in defining morphology and geometric information. To address these challenges, we propose a novel deep learning-based approach for fetal brain atlas construction, which can replace traditional registration-based methods. Our fundamental assumption is that, in the feature space, the atlas is positioned at the center of a group of images, with the minimum distance to all images. Our approach utilizes the powerful representation ability of deep learning methods to learn the complex anatomical structure of the brain at multiple scales, by introducing a distance loss function to minimize the sum of distances between the atlas and all images in the group. We further utilize tissue maps as a structural guide to constrain our results, making them more physiologically realistic. To the best of our knowledge, we are the first to construct fetal brain atlases with powerful deep learning techniques. Our experiments on a large-scale fetal brain MRI dataset demonstrate that our approach can construct fetal brain atlases with better performance than previous registration-based methods while avoiding their limitations. Our code is publicly available at https://github.com/ZhangKai47/FetalBrainAtlas.
Kai Zhang, Shijie Huang, Fangmei Zhu, Zhongxiang Ding, Geng Chen, Dinggang Shen
Enhancing Prenatal Diagnosis: Automated Fetal Brain MRI Morphometry
Abstract
Fetal brain MRI is a pivotal tool for the prenatal diagnosis and prognosis of neurodegenerative diseases, relying on linear morphologic quantification traditionally performed manually. These manual measurements, derived from the visual assessment of 2D MRI slices, are adversely affected by high inter- and intra-rater variability due to variations in slice selection. The distortion of slices caused by fetal motion and the acquisition of images in oblique views due to uncontrolled fetal movements further exacerbate the variability in morphologic measurements. A promising method to standardize these measurements involves a sophisticated sequence of fetal MRI processing steps, including super-resolution 3D MRI reconstruction (SRR), automated detection of landmarks for performing standard morphologic metrics, and subsequent diagnosis/prognosis models. In this study, we developed and validated a fully automated pipeline that incorporates these computational steps, utilizing publicly available computational and deep learning tools and frameworks. A dataset of 81 fetuses from 22 to 37 weeks of gestation was acquired using fast multiview 2D HASTE MRI, where 35 were healthy fetuses and 46 were diagnosed with ventriculomegaly. Our computational pipeline achieved an overall mean localization accuracy of 3.72 ± 2.55 mm across 22 landmarks, which was in good agreement with and showed lower uncertainty than two manual raters across the 11 standard morphologic fetal brain measurements. These measurements obtained on 35 healthy fetuses were also in good agreement with the established manual normative values. Utilizing these measurements with the ventriculomegaly diagnostic criteria attained a 98.7% accuracy, underscoring the practical value of the proposed pipeline in enhancing prenatal neurodegenerative disease diagnosis and prognosis. Code is publicly available at https://github.com/zigaso/fetal-mri-markers.
Ema Masterl, Anja Parkelj, Tina Vipotnik Vesnaver, Žiga Špiclin

Fetal Body and Organ Segmentation in MRI

Frontmatter
Towards Automated Multi-regional Lung Parcellation for 0.55-3T 3D T2w Fetal MRI
Abstract
Fetal MRI is increasingly being employed in the diagnosis of fetal lung anomalies, and segmentation-derived total fetal lung volumes are used as one of the parameters for prediction of neonatal outcomes. However, in clinical practice, segmentation is performed manually in 2D motion-corrupted stacks with thick slices, which is time-consuming and can lead to variations in estimated volumes. Furthermore, there is a known lack of consensus regarding a universal lung parcellation protocol and formulas for expected normal total lung volume. The lungs are also segmented as one label without parcellation into lobes. In terms of automation, to the best of our knowledge, there have been no reported works on multi-lobe segmentation for fetal lung MRI. This work introduces the first automated deep learning segmentation pipeline for multi-regional lung segmentation of 3D motion-corrected T2w fetal body images for normal anatomy and congenital diaphragmatic hernia cases. The protocol for parcellation into 5 standard lobes was defined in a population-averaged 3D atlas. It was then used to generate a multi-label training dataset including 104 normal anatomy controls and 45 congenital diaphragmatic hernia cases from 0.55T, 1.5T and 3T acquisition protocols. The performance of a 3D Attention U-Net was evaluated on 18 cases and showed good results for normal lung anatomy, with expectedly lower Dice values for the ipsilateral lung. In addition, we produced normal lung volumetry growth charts from 290 0.55T and 3T controls. This is the first step towards automated multi-regional fetal lung analysis for 3D fetal MRI.
Alena U. Uus, Carla Avena Zampieri, Fenella Downes, Alexia Egloff Collado, Megan Hall, Joseph Davidson, Kelly Payette, Jordina Aviles Verdera, Irina Grigorescu, Joseph V. Hajnal, Maria Deprez, Michael Aertsen, Jana Hutter, Mary A. Rutherford, Jan Deprest, Lisa Story
Fetal Body Parts Segmentation Using Volumetric MRI Reconstructions
Abstract
Fetal body parts segmentation can be useful to detect abnormalities and assess fetal growth from magnetic resonance imaging (MRI). In this work, 3D segmentation of the fetal head, trunk, and limbs is achieved for the first time by coupling volumetric reconstructions of the whole fetal anatomy with deep learning techniques for volumetric segmentation. Due to the time-consuming manual segmentation required for training in this volumetric multi-label setting, sparse annotations are used, marking slices with a 6 mm separation and alternating the slicing axis between cases. These manual segmentations are used to train and compare different models using both the original MRI data and the data after volumetric reconstruction in a dataset of 45 cases. The setup consisting of 3D volumetric reconstructions and a 3D U-Net-based learning model yields optimal segmentation metrics, with Dice scores higher than 0.9 for all the considered structures and 0.973 for the fetal body (0.979 when highly motion-corrupted datasets are discarded). In comparison, the setups that use the original MRI data exhibit a pronounced decline in segmentation scores, highlighting the importance of robust reconstruction techniques for automatic fetal growth characterization. Finally, we conduct a Bland-Altman analysis studying the reliability of our proposed 3D reconstruction and segmentation pipeline for automatic estimation of fetal body part weights.
Pedro Pablo Alarcón-Gil, Felicia Alfano, Alena Uus, María Jesús Ledesma-Carbayo, Lucilio Cordero-Grande

Fetal Ultrasound and Segmentation

Frontmatter
Generative Diffusion Model Bootstraps Zero-Shot Classification of Fetal Ultrasound Images in Underrepresented African Populations
Abstract
Developing robust deep learning models for fetal ultrasound image analysis requires comprehensive, high-quality datasets to effectively learn informative data representations within the domain. However, the scarcity of labelled ultrasound images poses substantial challenges, especially in low-resource settings. To tackle this challenge, we leverage synthetic data to enhance the generalizability of deep learning models. This study proposes a diffusion-based method, Fetal Ultrasound LoRA (FU-LoRA), which involves fine-tuning latent diffusion models using the LoRA technique to generate synthetic fetal ultrasound images. These synthetic images are integrated into a hybrid dataset that combines real-world and synthetic images to improve the performance of zero-shot classifiers in low-resource settings. Our experimental results on fetal ultrasound images from African cohorts demonstrate that FU-LoRA outperforms the baseline method, with a 13.73% increase in zero-shot classification accuracy. Furthermore, FU-LoRA achieves the highest accuracy of 82.40%, the highest F-score of 86.54%, and the highest AUC of 89.78%. These results demonstrate that the FU-LoRA method is effective for the zero-shot classification of fetal ultrasound images in low-resource settings. Our code and data are publicly accessible on GitHub.
Fangyijie Wang, Kevin Whelan, Guénolé Silvestre, Kathleen M. Curran
Semi-supervised Three-Dimensional Detection of Congenital Brain Anomalies in First Trimester Ultrasound
Abstract
Congenital brain anomalies occur in about 1% of pregnancies and often lead to termination of pregnancy due to their severity. Detection rates in the first trimester are low, between 53% and 73%. Our goal is to improve the detection rate of anomaly screening through the development of an automated anomaly detection method. For our detection method, we developed a three-dimensional (3D) extension of EfficientAD, a state-of-the-art semi-supervised anomaly detection method originally designed for two-dimensional images. To assess the impact of insufficient image quality and preprocessing errors, we trained the method on both the original dataset and a subset that passed visual quality inspection. The original dataset consisted of 411 3D ultrasound images acquired in the 9th week of pregnancy, with 404 images of 404 normally developing embryos and 7 images of 4 embryos with brain anomalies. The normal dataset was split into 70%/15%/15% for training, validation, and testing; the anomalous dataset was used only for testing. We evaluated the models using the area under the receiver operating characteristic curve (AUC) and balanced accuracy. The method demonstrated good performance on the original dataset (AUC 0.87, accuracy 0.67). Model performance increased for the quality-checked subset (AUC 0.86, accuracy 0.83), indicating that excluding images with insufficient scan quality and preprocessing errors enhances overall model performance. In conclusion, our method lays the foundation for automatic anomaly detection in 3D ultrasound scans, ultimately leading to a higher detection rate of congenital anomalies in the first trimester.
Marcella C. Zijta, Wietske A. P. Bastiaansen, Rene M. H. Wijnen, Régine P. M. Steegers-Theunissen, Bernadette S. de Bakker, Melek Rousian, Stefan Klein
Backmatter
Metadata
Title
Perinatal, Preterm and Paediatric Image Analysis
Editors
Daphna Link-Sourani
Esra Abaci Turk
Christopher Macgowan
Jana Hutter
Andrew Melbourne
Roxane Licandro
Copyright Year
2025
Electronic ISBN
978-3-031-73260-7
Print ISBN
978-3-031-73259-1
DOI
https://doi.org/10.1007/978-3-031-73260-7