
2020 | Book

Biomedical Image Registration

9th International Workshop, WBIR 2020, Portorož, Slovenia, December 1–2, 2020, Proceedings


About this Book

This book constitutes the refereed proceedings of the 9th International Workshop on Biomedical Image Registration, WBIR 2020, which was supposed to be held in Portorož, Slovenia, in June 2020. The conference was postponed until December 2020 due to the COVID-19 pandemic.

The 16 full and poster papers included in this volume were carefully reviewed and selected from 22 submitted papers. The papers are organized in the following topical sections: Registration initialization and acceleration, interventional registration, landmark based registration, multi-channel registration, and sliding motion.

Table of Contents

Frontmatter

Registration Initialization and Acceleration

Frontmatter
Nonlinear Alignment of Whole Tractograms with the Linear Assignment Problem
Abstract
After registration of the imaging data of two brains, homologous anatomical structures are expected to overlap better than before registration. Diffusion magnetic resonance imaging (dMRI) and tractography techniques provide a representation of the anatomical connections in the white matter as hundreds of thousands of streamlines, forming the tractogram. The literature on methods for aligning tractograms is in active development and provides methods that operate either on voxel information, e.g. fractional anisotropy, orientation distribution functions, or T1-weighted MRI, or directly on streamline information. In this work, we align streamlines using the linear assignment problem (LAP) and propose a method to reduce the high computational cost of aligning whole-brain tractograms. As a further contribution, we present a comparison among some of the freely available linear and nonlinear tractogram alignment methods, showing that our LAP-based method outperforms all others. In discussing the results, we show that a main limitation of all streamline-based nonlinear registration methods is their computational cost, and that addressing this problem may lead to further improvement in registration quality.
Emanuele Olivetti, Pietro Gori, Pietro Astolfi, Giulia Bertó, Paolo Avesani
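The core assignment step in LAP-based streamline alignment can be sketched with SciPy's `linear_sum_assignment` solver. The streamline distance below (symmetric mean-closest-point) and the toy data are illustrative choices, not the authors' exact implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def streamline_distance(s1, s2):
    # Symmetric mean-closest-point distance between two streamlines,
    # each given as an (n_points, 3) array. A common choice in the
    # tractography literature; illustrative here.
    d = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def lap_correspondences(tract_a, tract_b):
    # Build the pairwise cost matrix between two (small) sets of
    # streamlines and solve the linear assignment problem for
    # one-to-one correspondences.
    cost = np.array([[streamline_distance(a, b) for b in tract_b]
                     for a in tract_a])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum())

# Toy example: tractogram B is a slightly shifted copy of tractogram A,
# so each streamline should be matched to its own copy.
rng = np.random.default_rng(0)
A = [rng.standard_normal((5, 3)) for _ in range(3)]
B = [a + 0.01 for a in A]
pairs, total = lap_correspondences(A, B)
print(pairs)  # → [(0, 0), (1, 1), (2, 2)]
```

The quadratic cost of building the full cost matrix is exactly the bottleneck the paper targets for whole-brain tractograms.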
Learning-Based Affine Registration of Histological Images
Abstract
The use of different stains for histological sample preparation reveals distinct tissue properties and may result in a more accurate diagnosis. However, the staining process deforms the tissue slides, so registration is required before further processing. The importance of this problem led to an open challenge, the Automatic Non-rigid Histological Image Registration Challenge (ANHIR), organized jointly with the IEEE ISBI 2019 conference. The challenge organizers provided several hundred image pairs and a server-side evaluation platform. One of the most difficult sub-problems for the challenge participants was finding an initial, global transform before attempting to calculate the final, non-rigid deformation field. This article addresses that problem with a deep network trained in an unsupervised way that generalizes well. The proposed method handles images with different resolutions and aspect ratios without requiring image padding, while maintaining a low number of network parameters and a fast forward pass. It is orders of magnitude faster than classical approaches based on iterative similarity metric optimization or computer vision descriptors. The success rate is above 98% for both the training set and the evaluation set. We make both the training and inference code freely available.
Marek Wodzinski, Henning Müller
Enabling Manual Intervention for Otherwise Automated Registration of Large Image Series
Abstract
Aligning thousands of images from serial imaging techniques can be a cumbersome task. Methods [2, 11, 21] and programs for automation exist (e.g. [1, 4, 10]) but often need case-specific tuning of many meta-parameters (e.g. mask, pyramid scales, denoising, transform type, method/metric, optimizer and its parameters). Other programs that apparently depend on only a few parameters often just hide many of the remaining ones (initialized with default values) and often cannot handle challenging cases satisfactorily.
Instead of spending much time searching for suitable meta-parameters that yield a usable result for the complete image series, the described approach allows the user to intervene by manually aligning problematic image pairs. The manually found transform is then used by the automatic alignment as an initial transformation that is optimized as in the purely automatic case. Therefore, the manual alignment does not have to be very precise. This way, the worst-case time consumption is limited and can be estimated (manual alignment of the whole series), in contrast to tuning the meta-parameters of purely automatic alignment of the complete series, whose time consumption can hardly be guessed.
Roman Grothausmann, Dženan Zukić, Matt McCormick, Christian Mühlfeld, Lars Knudsen
Towards Segmentation and Spatial Alignment of the Human Embryonic Brain Using Deep Learning for Atlas-Based Registration
Abstract
We propose an unsupervised deep learning method for atlas-based registration to achieve segmentation and spatial alignment of the embryonic brain in a single framework. Our approach consists of two sequential networks with a specifically designed loss function to address the challenges of 3D first-trimester ultrasound. The first part learns the affine transformation and the second part learns the voxelwise nonrigid deformation between the target image and the atlas. We trained this network end-to-end and validated it against a ground truth on synthetic datasets designed to resemble the challenges present in 3D first-trimester ultrasound. The method was tested on a dataset of human embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed alignment of the brain in some cases and gave insight into open challenges for the proposed method. We conclude that our method is a promising approach towards fully automated spatial alignment and segmentation of embryonic brains in 3D ultrasound.
Wietske A. P. Bastiaansen, Melek Rousian, Régine P. M. Steegers-Theunissen, Wiro J. Niessen, Anton Koning, Stefan Klein
Learning Deformable Image Registration with Structure Guidance Constraints for Adaptive Radiotherapy
Abstract
Accurate registration of CT and CBCT images is key for adaptive radiotherapy. A particular challenge is the alignment of flexible organs, such as the bladder or rectum, that often undergo extreme deformations. In this work we analyze the impact of so-called structure guidance for learning-based registration, where additional segmentation information is provided to a neural network. We present a novel weakly supervised deep learning based method for multi-modal 3D deformable CT-CBCT registration with structure guidance constraints. Our method is not supervised by ground-truth deformations; instead, we use the energy functional of a variational registration approach as the training loss. Incorporating structure guidance constraints in our learning-based approach results in an average Dice score of \(0.91\pm 0.08\), compared to \(0.76\pm 0.15\) for the same method without constraints. An iterative registration approach with structure guidance yields a comparable average Dice score of \(0.91\pm 0.09\). However, learning-based registration requires only a single pass through the network, computing a deformation field in less than 0.1 s, more than 100 times faster than the runtime of iterative registration.
Sven Kuckertz, Nils Papenberg, Jonas Honegger, Tomasz Morgas, Benjamin Haas, Stefan Heldmann
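For reference, the Dice score used for evaluation above measures voxel overlap between a warped and a fixed segmentation; a minimal version:

```python
import numpy as np

def dice_score(seg_a, seg_b):
    # Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).
    # 1.0 means perfect overlap, 0.0 means no overlap.
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks: 4 voxels vs. 6 voxels, 4 of them shared.
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True
print(dice_score(a, b))  # → 2*4 / (4+6) = 0.8
```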

Interventional Registration

Frontmatter
Multilevel 2D-3D Intensity-Based Image Registration
Abstract
2D-3D image registration is an important task for computer-aided minimally invasive vascular therapies. A crucial component of practical image registration is the use of multilevel strategies to avoid local optima and to speed up runtime. However, due to the different dimensionalities of the 2D fixed and 3D moving image, the setup of multilevel strategies is not straightforward.
In this work, we propose an intensity-driven 2D-3D multiresolution registration approach using the normalized gradient fields (NGF) distance measure. We discuss and empirically analyze the impact of the choice of 2D and 3D image resolutions. Furthermore, we show that our approach produces results that are comparable or superior to other state-of-the-art methods.
Annkristin Lange, Stefan Heldmann
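The NGF measure compares image gradient directions rather than raw intensities, which is what makes it suitable for multi-modal matching. A minimal single-level 2D sketch (the edge parameter ε and the finite-difference discretization here are illustrative, not the paper's multilevel 2D-3D setup):

```python
import numpy as np

def ngf_distance(fixed, moving, eps=1e-2):
    # Normalized gradient fields distance for two 2D images:
    #   mean over voxels of 1 - ((∇f·∇m + ε²) / (|∇f|_ε |∇m|_ε))²,
    # where |∇I|_ε = sqrt(|∇I|² + ε²). Zero when gradients are
    # perfectly (anti-)parallel; ε damps the response to noise.
    gf = np.gradient(fixed.astype(float))
    gm = np.gradient(moving.astype(float))
    dot = sum(a * b for a, b in zip(gf, gm)) + eps**2
    nf = np.sqrt(sum(a * a for a in gf) + eps**2)
    nm = np.sqrt(sum(b * b for b in gm) + eps**2)
    return float(np.mean(1.0 - (dot / (nf * nm)) ** 2))

img = np.random.default_rng(1).random((32, 32))
# Identical images have perfectly aligned gradients, so their NGF
# distance is lower than that of a misaligned (transposed) copy.
print(ngf_distance(img, img) < ngf_distance(img, img.T))  # → True
```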
Towards Automated Spine Mobility Quantification: A Locally Rigid CT to X-ray Registration Framework
Abstract
Different pathologies of the vertebral column, such as scoliosis, require quantification of the mobility of individual vertebrae or of curves of the spine for treatment planning. Without the necessary mobility, vertebrae cannot be safely re-positioned and fused. The current clinical workflow consists of radiologists or surgeons estimating angular differences of neighbouring vertebrae from different x-ray images. This procedure is time-consuming and prone to inaccuracy. The proposed method automates this quantification by deforming a CT image in a physiologically reasonable way and matching it to the x-ray images of interest. We present a proof-of-concept evaluation on synthetic data. The automatic and quantitative analysis enables reproducible results independent of the investigator.
David Drobny, Marta Ranzini, Amanda Isaac, Tom Vercauteren, Sébastien Ourselin, David Choi, Marc Modat

Landmark Based Registration

Frontmatter
Reinforced Redetection of Landmark in Pre- and Post-operative Brain Scan Using Anatomical Guidance for Image Alignment
Abstract
Re-identifying locations of interest in pre- and post-operative images is a hard identification problem, as the anatomical landscape changes dramatically due to tumor resection and tissue displacement. Classical image registration techniques oftentimes fail in the vicinity of the tumor, where the enclosing structures are massively altered from one scan to another. Still, locations near the tumor or the resection cavity are the most relevant for evaluating tumor progression patterns and for comparing pre- and post-operative radiomic signatures. We address this issue by exploring a Reinforcement Learning (RL) approach. An artificial agent teaches itself to find the optimal path towards a target, driven by a feedback signal from the environment. Incorporating anatomical guidance, we restrict the agent’s search space to surgery-unaffected structures only. By defining landmarks for each patient individually, we aim to obtain a patient-specific representation of differential radiomic features across different time points for enhancing image alignment. Estimated landmarks reach a remarkable mean distance error of around 3 mm. In addition, they show high agreement with expert annotations on a challenging dataset of MR scans of the brain before and after tumor resection.
Diana Waldmannstetter, Fernando Navarro, Benedikt Wiestler, Jan S. Kirschke, Anjany Sekuboyina, Ester Molero, Bjoern H. Menze
Deep Volumetric Feature Encoding for Biomedical Images
Abstract
Deep learning research has demonstrated the effectiveness of using pre-trained networks as feature encoders. The large majority of these networks are trained on 2D datasets with millions of samples and diverse classes of information. We demonstrate and evaluate approaches to transferring deep 2D feature spaces to 3D in order to take advantage of these and related resources in the biomedical domain. First, we show how VGG-19 activations can be mapped to a 3D variant of the network (VGG-19-3D). Second, using varied Medical Decathlon data, we provide a technique for training 3D networks to predict the encodings induced by 3D VGG-19. Lastly, we compare five different 3D networks (one of which is trained only on 3D MRI and another of which is not trained at all) across layers and patch sizes in terms of their ability to identify hippocampal landmark points in 3D MRI data that was not included in their training. We make observations about the performance, recommend particular networks and layers, and make them publicly available for further evaluation.
Brian Avants, Elliot Greenblatt, Jacob Hesterman, Nicholas Tustison
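One plausible way to map 2D convolutional weights into a 3D variant of a network is kernel "inflation" along the depth axis, the trick popularized by the I3D video models; whether this matches the authors' exact VGG-19-to-VGG-19-3D mapping is an assumption, and the sketch below only shows the weight transformation:

```python
import numpy as np

def inflate_kernel_2d_to_3d(w2d, depth=3):
    # Inflate a 2D conv kernel of shape (out, in, kH, kW) to a 3D kernel
    # of shape (out, in, kD, kH, kW) by replicating it along the depth
    # axis and dividing by the depth, so that a depth-constant input
    # volume yields the same activations as the original 2D filter.
    return np.repeat(w2d[:, :, None, :, :], depth, axis=2) / depth

w2d = np.random.default_rng(2).standard_normal((64, 3, 3, 3))
w3d = inflate_kernel_2d_to_3d(w2d)
print(w3d.shape)                          # → (64, 3, 3, 3, 3)
print(np.allclose(w3d.sum(axis=2), w2d))  # → True: depth-sum recovers the 2D kernel
```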

Multi-channel Registration

Frontmatter
Multi-channel Image Registration of Cardiac MR Using Supervised Feature Learning with Convolutional Encoder-Decoder Network
Abstract
It is difficult to register images involving large deformations and intensity inhomogeneity. In this paper, a new multi-channel registration algorithm using modified multi-feature mutual information (α-MI) based on a minimal spanning tree (MST) is presented. First, instead of relying on handcrafted features, a convolutional encoder-decoder network is employed to learn the latent feature representation from cardiac MR images. Second, forward computation and backward propagation are performed in a supervised fashion to make the learned features more discriminative. Finally, local features containing appearance information are extracted and integrated into α-MI to achieve multi-channel registration. The proposed method has been evaluated on cardiac cine-MRI data from 100 patients. The experimental results show that features learned by the deep network are more effective than handcrafted features in guiding intra-subject registration of cardiac MR images.
Xuesong Lu, Yuchuan Qiao
Multi-channel Registration for Diffusion MRI: Longitudinal Analysis for the Neonatal Brain
Abstract
In multi-channel (MC) registration, fusion of structural and diffusion brain MRI provides information on both the cortex and white matter (WM) structures, thus decreasing the uncertainty of deformation fields. However, existing solutions employ only diffusion tensor imaging (DTI) derived metrics, which are limited by inconsistencies in fiber-crossing regions. In this work, we extend the pipeline for registration of multi-shell high angular resolution diffusion imaging (HARDI) [15] with a novel similarity metric based on angular correlation and an option for multi-channel registration that allows incorporation of structural MRI. The contributions of the channels to the displacement field are weighted with spatially varying certainty maps. The implementation is based on the MRtrix3 toolbox (https://www.mrtrix.org). The approach is quantitatively evaluated on intra-patient longitudinal registration of diffusion MRI datasets of 20 preterm neonates with a 7–11 week gap between the scans. In addition, we present an example of an MC template generated using the proposed method.
Alena Uus, Maximilian Pietsch, Irina Grigorescu, Daan Christiaens, Jacques-Donald Tournier, Lucilio Cordero Grande, Jana Hutter, David Edwards, Joseph Hajnal, Maria Deprez
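Angular correlation between two orientation distribution functions can be computed directly from their spherical-harmonic coefficients as a normalized inner product, conventionally excluding the l=0 term so that only orientation (not overall magnitude) is compared. This is an illustrative sketch of the metric family, not the MRtrix3-based implementation from the paper:

```python
import numpy as np

def angular_correlation(sh_u, sh_v, n_l0=1):
    # Angular correlation of two spherical-harmonic coefficient
    # vectors: a cosine similarity over the coefficients with the
    # first n_l0 entries (the l=0 "DC" term) excluded. Returns 1.0
    # for identical ODFs and -1.0 for sign-flipped ones.
    u, v = sh_u[n_l0:], sh_v[n_l0:]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

u = np.array([1.0, 0.5, -0.2, 0.3])
print(angular_correlation(u, u))   # → 1.0
print(angular_correlation(u, -u))  # → -1.0 (l=0 term is excluded)
```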
An Image Registration-Based Method for EPI Distortion Correction Based on Opposite Phase Encoding (COPE)
Abstract
Surprisingly, voxel displacement maps (VDMs) estimated by image registration seem to correct geometrical distortion in functional MRI (EPI) data just as well as VDMs based on actual measurements of the magnetic field. In this article, we compare our new image registration-based distortion correction method ‘COPE’ to an implementation of the pixelshift method. Our approach builds on existing image registration-based techniques using opposite phase encoding, extending them by local cost aggregation. Comparison of these methods on 3T and 7T spin-echo (SE) and gradient-echo (GE) data shows that the image registration-based method is a good alternative to fieldmap-based EPI distortion correction.
Hester Breman, Joost Mulders, Levin Fritz, Judith Peters, John Pyles, Judith Eck, Matteo Bastiani, Alard Roebroeck, John Ashburner, Rainer Goebel
Diffusion Tensor Driven Image Registration: A Deep Learning Approach
Abstract
Tracking microstructural changes in the developing brain relies on accurate inter-subject image registration. However, most methods rely on either structural or diffusion data to learn the spatial correspondences between two or more images, without taking into account the complementary information provided by using both. Here we propose a deep learning registration framework which combines the structural information provided by \(T_2\)-weighted (\(T_2\)w) images with the rich microstructural information offered by diffusion tensor imaging (DTI) scans. This allows our trained network to register pairs of images in a single pass. We perform a leave-one-out cross-validation study in which we compare the performance of our multi-modality registration model with a baseline model trained on structural data only, in terms of Dice scores and differences in fractional anisotropy (FA) maps. Our results show that, in terms of average Dice scores, our model performs better in subcortical regions than when using structural data only. Moreover, average sum-of-squared differences between warped and fixed FA maps show that our proposed model performs better at aligning the diffusion data.
Irina Grigorescu, Alena Uus, Daan Christiaens, Lucilio Cordero-Grande, Jana Hutter, A. David Edwards, Joseph V. Hajnal, Marc Modat, Maria Deprez
Multimodal MRI Template Creation in the Ring-Tailed Lemur and Rhesus Macaque
Abstract
We present a multimodal registration algorithm for simultaneous alignment of datasets with both scalar and tensor MRI images. We employ a volumetric, cubic B-spline parametrised transformation model. Regularisation is based on the logarithm of the singular values of the local Jacobian and ensures diffeomorphic warps. Tensor registration takes reorientation into account during optimisation, through a finite-strain approximation of the rotation due to the warp. The combination of scalar, tensor and regularisation cost functions allows us to optimise the deformations in terms of tissue matching, orientation matching and distortion minimisation simultaneously. We apply our method to creating multimodal T2 and DTI MRI brain templates of two small primates (the ring-tailed lemur and rhesus macaque) from high-quality, ex vivo, 0.5/0.6 mm isotropic data. The resulting templates are of very high quality across both modalities and species. Tissue contrast in the T2 channel is high, indicating excellent tissue-boundary alignment. The DTI channel displays strong anisotropy in white matter, as well as consistent left/right orientation information even in relatively isotropic grey matter regions. Finally, we demonstrate where the multimodal templating approach overcomes anatomical inconsistencies introduced by unimodal-only methods.
Frederik J. Lange, Stephen M. Smith, Mads F. Bertelsen, Alexandre A. Khrapitchev, Paul R. Manger, Rogier B. Mars, Jesper L. R. Andersson
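The finite-strain approximation extracts the rotational part of the local Jacobian J of the warp as R = (J Jᵀ)^(-1/2) J, and the diffusion tensor is then reoriented as D' = R D Rᵀ. A minimal sketch of the rotation extraction (via SVD, since (J Jᵀ)^(-1/2) J = U Vᵀ for J = U S Vᵀ); the numerical details of the paper's optimiser are not reproduced here:

```python
import numpy as np

def finite_strain_rotation(J):
    # Rotational part of a local Jacobian J under the finite-strain
    # approximation: R = (J J^T)^(-1/2) J. With the SVD J = U S V^T,
    # this simplifies to R = U V^T (the stretch S cancels out).
    u, _, vt = np.linalg.svd(J)
    return u @ vt

# A Jacobian composed of a 30-degree rotation and an anisotropic
# stretch: the finite-strain rotation should recover just the rotation.
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
stretch = np.diag([1.5, 0.8, 1.0])
R = finite_strain_rotation(rot @ stretch)
print(np.allclose(R, rot))  # → True: stretch discarded, rotation kept
```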

Sliding Motion

Frontmatter
An Unsupervised Learning Approach to Discontinuity-Preserving Image Registration
Abstract
Most traditional image registration algorithms aimed at aligning a pair of images impose well-established regularizers to guarantee smoothness of the unknown deformation fields. Since these methods assume global smoothness within the image domain, they pose issues for scenarios where local discontinuities are expected, such as the sliding motion between the lungs and the chest wall during the respiratory cycle. Furthermore, an objective function must be optimized for each given pair of images, so registering multiple sets of images becomes very time-consuming and scales poorly to higher-resolution image volumes.
Using recent advances in deep learning, we propose an unsupervised learning-based image registration model. The model is trained with a loss function containing a custom regularizer that preserves local discontinuities while simultaneously respecting the smoothness assumption in homogeneous regions of the image volumes. Qualitative and quantitative validations on 3D pairs of lung CT datasets are presented.
Eric Ng, Mehran Ebrahimi
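The idea of a discontinuity-preserving regularizer can be illustrated with an edge-weighted smoothness penalty: displacement differences are penalized less where the image itself has strong gradients (e.g. the lung/chest-wall interface), so sliding motion is not punished. The 1D form, weighting function, and `kappa` parameter below are hypothetical, not the paper's exact regularizer:

```python
import numpy as np

def discontinuity_preserving_penalty(disp, image, kappa=0.1):
    # First-order smoothness penalty on a 1D displacement field whose
    # finite differences are down-weighted at strong image edges:
    # w ≈ 0 across an intensity jump, w ≈ 1 in homogeneous regions.
    du = np.diff(disp)
    dI = np.diff(image)
    w = np.exp(-(dI / kappa) ** 2)
    return float(np.sum(w * du ** 2))

# A sliding-style displacement jump located exactly at an image edge
# is penalized less than a globally smooth ramp, because the jump
# falls where the weight is near zero.
image = np.concatenate([np.zeros(5), np.ones(5)])
disp = np.concatenate([np.zeros(5), 0.5 * np.ones(5)])
smooth_disp = np.linspace(0.0, 0.5, 10)
print(discontinuity_preserving_penalty(disp, image) <
      discontinuity_preserving_penalty(smooth_disp, image))  # → True
```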
An Image Registration Framework for Discontinuous Mappings Along Cracks
Abstract
A novel crack-capable image registration framework is proposed. The approach is designed for registration problems suffering from cracks, gaps, or holes. It enables discontinuous transformation fields and also features an automatically computed crack indicator function, and therefore does not require a pre-segmentation. The new approach is a generalization of the commonly used variational image registration approach. New contributions are an additional dissipation term in the overall energy, a proper balancing of the different ingredients, and a joint optimization for both the crack indicator function and the transformation. Results for histological serial sectioning of marmoset brain images demonstrate the potential of the approach and its superiority compared to a standard registration.
Hari Om Aggrawal, Martin S. Andersen, Jan Modersitzki
Backmatter
Metadata
Title
Biomedical Image Registration
Edited by
Žiga Špiclin
Dr. Jamie McClelland
Jan Kybic
Orcun Goksel
Copyright Year
2020
Electronic ISBN
978-3-030-50120-4
Print ISBN
978-3-030-50119-8
DOI
https://doi.org/10.1007/978-3-030-50120-4