
2017 | Book

Medical Image Computing and Computer Assisted Intervention − MICCAI 2017

20th International Conference, Quebec City, QC, Canada, September 11-13, 2017, Proceedings, Part I

Edited by: Maxime Descoteaux, Lena Maier-Hein, Alfred Franz, Pierre Jannin, D. Louis Collins, Simon Duchesne

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this Book

The three-volume set LNCS 10433, 10434, and 10435 constitutes the refereed proceedings of the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017, held in Quebec City, Canada, in September 2017.
The 255 revised full papers presented were carefully reviewed and selected from 800 submissions in a two-phase review process. The papers have been organized in the following topical sections: Part I: atlas and surface-based techniques; shape and patch-based techniques; registration techniques, functional imaging, connectivity, and brain parcellation; diffusion magnetic resonance imaging (dMRI) and tensor/fiber processing; and image segmentation and modelling. Part II: optical imaging; airway and vessel analysis; motion and cardiac analysis; tumor processing; planning and simulation for medical interventions; interventional imaging and navigation; and medical image computing. Part III: feature extraction and classification techniques; and machine learning in medical image computing.

Table of Contents

Frontmatter

Atlas and Surface-Based Techniques

Frontmatter
The Active Atlas: Combining 3D Anatomical Models with Texture Detectors

While modern imaging technologies such as fMRI have opened exciting possibilities for studying the brain in vivo, histological sections remain the best way to study brain anatomy at the level of neurons. The procedure for building histological atlases has changed little since 1909, and identifying brain regions is still a labor-intensive process performed only by experienced neuroanatomists. Existing digital atlases such as the Allen Reference Atlas are constructed using downsampled images and cannot reliably map low-contrast parts such as the brainstem, which is usually annotated based on high-resolution cellular texture. We have developed a digital atlas methodology that combines information about the 3D organization and the detailed texture of different structures. Using this methodology we developed an atlas for the mouse brainstem, a region for which there are currently no good atlases. Our atlas is “active” in that it can be used to automatically align a histological stack to the atlas, thus reducing the work of the neuroanatomist.

Yuncong Chen, Lauren McElvain, Alex Tolpygo, Daniel Ferrante, Harvey Karten, Partha Mitra, David Kleinfeld, Yoav Freund
Exploring Gyral Patterns of Infant Cortical Folding Based on Multi-view Curvature Information

Human cortical folding is intriguingly complex in its variability and regularity across individuals. Exploring the principal patterns of cortical folding is of great importance for neuroimaging research. Term-born neonates, with minimal exposure to the complex postnatal environment, are ideal candidates for mining the postnatal origins of principal cortical folding patterns. In this work, we propose a novel framework to study the gyral patterns of neonatal cortical folding. Specifically, first, we leverage multi-view curvature-derived features to comprehensively characterize the complex and multi-scale nature of cortical folding. Second, for each feature, we build a dissimilarity matrix for measuring the difference of cortical folding between any pair of subjects. Then, we convert these dissimilarity matrices into similarity matrices and nonlinearly fuse them into a single matrix via a similarity network fusion method. Finally, we apply a hierarchical affinity propagation clustering approach to group subjects into several clusters based on the fused similarity matrix. The proposed framework is generic and can be applied to any cortical region, or even the whole cortical surface. Experiments are carried out on a large dataset of 600+ term-born neonates to mine the principal folding patterns of three representative gyral regions.

Dingna Duan, Shunren Xia, Yu Meng, Li Wang, Weili Lin, John H. Gilmore, Dinggang Shen, Gang Li
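As an illustration of the clustering pipeline described in the abstract above, the sketch below assumes one precomputed dissimilarity matrix per curvature-derived feature; plain averaging stands in for similarity network fusion, standard affinity propagation replaces the hierarchical variant, and all function names are illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def dissimilarity_to_similarity(D, sigma=None):
    """Convert a pairwise dissimilarity matrix into a Gaussian-kernel similarity."""
    if sigma is None:
        sigma = np.median(D[D > 0])          # data-driven bandwidth
    return np.exp(-(D ** 2) / (2.0 * sigma ** 2))

def fuse_and_cluster(dissimilarity_matrices):
    # Convert each feature-specific dissimilarity matrix to a similarity matrix.
    similarities = [dissimilarity_to_similarity(D) for D in dissimilarity_matrices]
    # Stand-in for similarity network fusion: element-wise average of the views.
    fused = np.mean(similarities, axis=0)
    # Group subjects with affinity propagation on the fused (precomputed) similarity.
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    return ap.fit_predict(fused)

# Toy example with random dissimilarities for 20 "subjects" and 3 feature views.
rng = np.random.default_rng(0)
views = []
for _ in range(3):
    X = rng.normal(size=(20, 5))
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    views.append(D)
print(fuse_and_cluster(views))
```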
Holistic Mapping of Striatum Surfaces in the Laplace-Beltrami Embedding Space

In brain shape analysis, the striatum is typically divided into three parts for analysis: the caudate, putamen, and accumbens nuclei. Recent connectivity and animal studies, however, indicate that striato-cortical connections do not always follow such subdivisions. For the holistic mapping of striatum surfaces, conventional spherical registration techniques are not suitable due to the large metric distortions in the spherical parameterization of striatal surfaces. To overcome this difficulty, we develop a novel striatal surface mapping method using our recently proposed Riemannian metric optimization techniques in the Laplace-Beltrami (LB) embedding space. For the robust resolution of sign ambiguities in the LB spectrum, we also devise novel anatomical contextual features to guide the surface mapping in the embedding space. In our experimental results, we compare with spherical registration tools from FreeSurfer and FSL to demonstrate that our novel method provides a superior solution to the striatal mapping problem. We also apply our method to map the striatal surfaces from 211 subjects of the Human Connectome Project (HCP), and use the surface maps to construct a cortical connectivity atlas. Our atlas results show that striato-cortical connectivity is not distinctive according to the traditional structural subdivision of the striatum, which further supports the holistic approach to mapping striatal surfaces.

Jin Kyu Gahm, Yonggang Shi
Novel Local Shape-Adaptive Gyrification Index with Application to Brain Development

Conventional approaches to quantification of cortical folding employ a simple circular kernel. Such a kernel commonly covers multiple cortical gyral/sulcal regions that may be functionally unrelated and also often blurs local gyrification measurements. We propose a novel adaptive kernel for quantification of local cortical folding, which incorporates neighboring gyral crowns and sulcal fundi. The proposed kernel is adaptively elongated to cover regions along the cortical folding patterns. The experimental results showed that the proposed kernel-based gyrification measure achieved higher reproducibility on a multi-scan human phantom dataset and captured cortical folding in a more shape-adaptive way than the conventional method. In early human brain development, we found positive correlations with age over most cortical regions, as previously reported, as well as novel, refined regions of both positive and negative correlations undetectable by the conventional method.

Ilwoo Lyu, Sun Hyung Kim, Jessica Bullins, John H. Gilmore, Martin A. Styner
Joint Sparse and Low-Rank Regularized Multi-Task Multi-Linear Regression for Prediction of Infant Brain Development with Incomplete Data

Studies of dynamic infant brain development have received increasing attention in the past few years. For such studies, a complete longitudinal dataset is often required to precisely chart the early brain developmental trajectories. In practice, however, we often face missing data at different time points for different subjects. In this paper, we propose a new method for predicting infant brain development scores at future time points based on longitudinal imaging measures at early time points with possible missing data. We treat this as a multi-dimensional regression problem, predicting multiple brain development scores (multi-task) from multiple previous time points (multi-linear). To solve this problem, we propose an objective function with a joint $$\ell_1$$ and low-rank regularization on the mapping weight tensor, to enforce feature selection while preserving the structural information from multiple dimensions. Also, based on the bag-of-words model, we propose to extract features from longitudinal imaging data. The experimental results reveal that we can effectively predict the brain development scores assessed at the age of four years, using imaging data acquired as early as two years of age.

Ehsan Adeli, Yu Meng, Gang Li, Weili Lin, Dinggang Shen
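The sketch below illustrates the flavor of joint sparse and low-rank regularization with a simple proximal-gradient loop; the weight tensor of the paper is reduced to a matrix here, and both the variable names and the alternating proximal steps are simplifying assumptions rather than the authors' algorithm.

```python
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def singular_value_threshold(A, t):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def fit_sparse_lowrank(X, Y, lam_l1=0.05, lam_nuc=0.05, n_iter=300):
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)            # conservative step size
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y) / n                     # least-squares gradient
        W = W - step * grad
        W = soft_threshold(W, step * lam_l1)             # sparsity-inducing prox
        W = singular_value_threshold(W, step * lam_nuc)  # low-rank-inducing prox
    return W

# Toy usage: 50 subjects, 30 longitudinal imaging features, 4 development scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 30))
W_true = np.zeros((30, 4)); W_true[:5] = rng.normal(size=(5, 4))
Y = X @ W_true + 0.1 * rng.normal(size=(50, 4))
print(np.round(fit_sparse_lowrank(X, Y)[:6], 2))
```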
Graph-Constrained Sparse Construction of Longitudinal Diffusion-Weighted Infant Atlases

Constructing longitudinal diffusion-weighted atlases of infant brains poses additional challenges due to the small brain size and the dynamic changes in the early developing brains. In this paper, we introduce a novel framework for constructing longitudinally-consistent diffusion-weighted infant atlases with improved preservation of structural details and diffusion characteristics. In particular, instead of smoothing diffusion signals by simple averaging, our approach fuses the diffusion-weighted images in a patch-wise manner using sparse representation with a graph constraint that encourages spatiotemporal consistency. Diffusion-weighted atlases across time points are jointly constructed for patches that are correlated in time and space. Compared with existing methods, including the one using sparse representation with $$l_{2,1}$$ regularization, our approach generates longitudinal infant atlases with much richer and more consistent features of the developing infant brain, as shown by the experimental results.

Jaeil Kim, Geng Chen, Weili Lin, Pew-Thian Yap, Dinggang Shen
4D Infant Cortical Surface Atlas Construction Using Spherical Patch-Based Sparse Representation

A 4D infant cortical surface atlas with densely sampled time points is highly needed for neuroimaging analysis of early brain development. In this paper, we build a 4D infant cortical surface atlas that, for the first time, covers 6 postnatal years with 11 time points (i.e., 1, 3, 6, 9, 12, 18, 24, 36, 48, 60, and 72 months), based on 339 longitudinal MRI scans from 50 healthy infants. To build the 4D cortical surface atlas, first, we adopt a two-stage groupwise surface registration strategy to ensure both longitudinal consistency and unbiasedness. Second, instead of simply averaging over the co-registered surfaces, a spherical patch-based sparse representation is developed to overcome possible surface registration errors across different subjects. The central idea is that, for each local spherical patch in the atlas space, we build a dictionary that includes the samples of the current local patch and its spatially-neighboring patches from all co-registered surfaces, and the current local patch in the atlas is then sparsely represented using the built dictionary. Compared to the atlas built with conventional methods, the 4D infant cortical surface atlas constructed by our method preserves more details of cortical folding patterns, thus leading to improved accuracy in registration of new infant cortical surfaces.

Zhengwang Wu, Gang Li, Yu Meng, Li Wang, Weili Lin, Dinggang Shen
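A minimal sketch of the patch-wise sparse representation idea described above, assuming corresponding patches from all co-registered subjects are already extracted; the Lasso solver and the subsequent weight normalization are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_atlas_patch(subject_patches, alpha=0.01):
    """subject_patches: (n_atoms, patch_size) array of corresponding patches."""
    D = subject_patches.T                      # dictionary: patch_size x n_atoms
    target = subject_patches.mean(axis=0)      # initial estimate: simple average
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    lasso.fit(D, target)                       # sparse coefficients over subjects
    codes = lasso.coef_
    if codes.sum() > 0:
        codes = codes / codes.sum()            # normalize to a weighted average
    return D @ codes                           # sparsely reconstructed atlas patch

rng = np.random.default_rng(0)
patches = rng.normal(size=(40, 25))            # 40 subjects, a 5x5 spherical patch
print(sparse_atlas_patch(patches).shape)       # -> (25,)
```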
Developmental Patterns Based Individualized Parcellation of Infant Cortical Surface

The human cerebral cortex develops dynamically during the early postnatal stage, reflecting the underlying rapid changes of cortical microstructures and their connections, which jointly determine the functional principles of cortical regions. Hence, the dynamic cortical developmental patterns are ideal for defining the distinct cortical regions in microstructure and function for neurodevelopmental studies. Moreover, given the remarkable inter-subject variability in terms of cortical structure/function and their developmental patterns, the individualized cortical parcellation based on each infant’s own developmental patterns is critical for precisely localizing personalized distinct cortical regions and also understanding inter-subject variability. To this end, we propose a novel method for individualized parcellation of the infant cortical surface into distinct and meaningful regions based on each individual’s cortical developmental patterns. Specifically, to alleviate the effects of cortical measurement errors and also make the individualized cortical parcellation comparable across subjects, we first create a population-based cortical parcellation to capture the general developmental landscape of the cortex in an infant population. Then, this population-based parcellation is leveraged to guide the individualized parcellation based on each infant’s own cortical developmental patterns in an iterative manner. At each iteration, the individualized parcellation is gradually updated based on (1) the prior information of the population-based parcellation, (2) the individualized parcellation at the previous iteration, and also (3) the developmental patterns of all vertices. Experiments on fifteen healthy infants, each with longitudinal MRI scans acquired at six time points (i.e., 1, 3, 6, 9, 12 and 18 months of age), show that our method generates a reliable and meaningful individualized cortical parcellation based on each infant’s own developmental patterns.

Gang Li, Li Wang, Weili Lin, Dinggang Shen
Longitudinal Modeling of Multi-modal Image Contrast Reveals Patterns of Early Brain Growth

The brain undergoes rapid development during early childhood as a series of biophysical and chemical processes occur, which can be observed in magnetic resonance (MR) images as a change over time of white matter intensity relative to gray matter. Such a contrast change manifests in specific patterns in different imaging modalities, suggesting that brain maturation is encoded by appearance changes in multi-modal MRI. In this paper, we explore the patterns of early brain growth encoded by multi-modal contrast changes in a longitudinal study of children. For a given modality, contrast is measured by comparing histograms of intensity distributions between white and gray matter. Multivariate non-linear mixed effects (NLME) modeling provides subject-specific as well as population growth trajectories that account for contrast from multiple modalities. The multivariate NLME procedure and resulting non-linear contrast functions enable the study of maturation in various regions of interest. Our analysis of several brain regions in a study of 70 healthy children reveals a posterior-to-anterior pattern in the timing of maturation across the major lobes of the cerebral cortex, with posterior regions maturing earlier than anterior regions. Furthermore, we find significant differences in maturation rates between males and females.

Avantika Vardhan, James Fishbaugh, Clement Vachet, Guido Gerig
Prediction of Brain Network Age and Factors of Delayed Maturation in Very Preterm Infants

Babies born very preterm (<32 weeks postmenstrual age) are at a high risk of delayed or altered neurodevelopment. Diffusion MRI (dMRI) is a non-invasive neuroimaging modality that allows for early analysis of an infant’s brain connectivity network (i.e., structural connectome) during the critical period of development shortly after birth. In this paper we present a method to accurately assess delayed brain maturation and then use our method to study how certain anatomical and diagnostic brain injury factors are related to this delay. We first train a model to predict the age of an infant from its structural brain network. We then define the relative brain network maturation index (RBNMI) as the predicted age minus the true age of that infant. To ensure the predicted age is as accurate as possible, we examine a variety of models to predict age and use the one that performs best when trained on a normative subset (77 scans) of our preterm infant cohort dataset of 168 dMRI scans. We found that a random forest regressor could predict preterm infants’ ages to within an average of $${\sim }1.6$$ weeks. We validate our approach by analysing the correlation between RBNMI and a set of demographic, diagnostic and brain connectivity related variables.

Colin J. Brown, Kathleen P. Moriarty, Steven P. Miller, Brian G. Booth, Jill G. Zwicker, Ruth E. Grunau, Anne R. Synnes, Vann Chau, Ghassan Hamarneh
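A minimal sketch of the RBNMI construction described above, assuming connectome matrices and ages are already available; the flattened upper triangle of the connectivity matrix is a stand-in for the paper's feature extraction, and the variable names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def connectome_features(conn_matrices):
    # Flattened upper triangle of each connectivity matrix as a feature vector.
    iu = np.triu_indices(conn_matrices.shape[1], k=1)
    return np.array([C[iu] for C in conn_matrices])

rng = np.random.default_rng(0)
n_scans, n_regions = 120, 30
connectomes = rng.random(size=(n_scans, n_regions, n_regions))
ages_weeks = rng.uniform(27, 45, size=n_scans)              # postmenstrual age

X = connectome_features(connectomes)
normative = np.arange(77)                                   # normative training subset
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[normative], ages_weeks[normative])

held_out = np.arange(77, n_scans)
rbnmi = model.predict(X[held_out]) - ages_weeks[held_out]   # negative => delayed maturation
print(np.round(rbnmi[:5], 2))
```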
Falx Cerebri Segmentation via Multi-atlas Boundary Fusion

The falx cerebri is a meningeal projection of dura in the brain, separating the cerebral hemispheres. It has stiffer mechanical properties than surrounding tissue and must be accurately segmented for building computational models of traumatic brain injury. In this work, we propose a method to segment the falx using T1-weighted magnetic resonance images (MRI) and susceptibility-weighted MRI (SWI). Multi-atlas whole brain segmentation is performed using the T1-weighted MRI and the gray matter cerebrum labels are extended into the longitudinal fissure using fast marching to find an initial estimate of the falx. To correct the falx boundaries, we register and then deform a set of SWI with manually delineated falx boundaries into the subject space. The continuous-STAPLE algorithm fuses sets of corresponding points to produce an estimate of the corrected falx boundary. Correspondence between points on the deformed falx boundaries is obtained using coherent point drift. We compare our method to manual ground truth, a multi-atlas approach without correction, and single-atlas approaches.

Jeffrey Glaister, Aaron Carass, Dzung L. Pham, John A. Butman, Jerry L. Prince
A 3D Femoral Head Coverage Metric for Enhanced Reliability in Diagnosing Hip Dysplasia

Developmental dysplasia of the hip (DDH) in infancy refers to hip joint abnormalities ranging from mild acetabular dysplasia to irreducible femoral head dislocations. While 2D B-mode ultrasound (US) is currently used clinically to estimate the severity of femoral head subluxation in infant hips, such estimates suffer from high inter-exam variability. We propose using a novel 3D US-derived dysplasia metric, the 3D femoral head coverage ($$FHC_{3D}$$), which characterizes the 3D morphology of the femoral head relative to the vertical cortex of the ilium in an infant’s hip joint. We compute our 3D dysplasia metric by segmenting the femoral head using a voxel-wise probability map based on a tomographic reconstruction of 2D cross-sections, each labeled with a probability score of that slice containing the femoral head. Using a dataset of 20 patient hip examinations, we demonstrate that our reconstructed femoral heads agree reasonably well with manually segmented femoral heads (mean Dice coefficient of 0.71), with a significant reduction in variability of the associated metric relative to the existing manual 2D-based FHC ratio ($${\sim } 20\%$$ reduction, $$p<0.05$$). Our findings suggest that the proposed 3D dysplasia metric may be more reliable than the conventional 2D metric, which may lead to a more reproducible test for diagnosing DDH.

Niamul Quader, Antony J. Hodgson, Kishore Mulpuri, Anthony Cooper, Rafeef Abugharbieh
Learning-Based Multi-atlas Segmentation of the Lungs and Lobes in Proton MR Images

Delineation of the lung and lobar anatomy in MR images is challenging due to the limited image contrast and the absence of visible interlobar fissures. Here we propose a novel automated lung and lobe segmentation method for pulmonary MR images. This segmentation method employs prior information of the lungs and lobes extracted from CT in the form of multiple MRI atlases, and adopts a learning-based atlas-encoding scheme, based on random forests, to improve the performance of multi-atlas segmentation. In particular, we encode each CT-derived MRI atlas by training an atlas-specific random forest for each structure of interest. In addition to appearance features, we also extract label context features from the registered atlases to introduce additional information to the non-linear mapping process. We evaluated our proposed framework on 10 clinical MR images acquired from COPD patients. It outperformed state-of-the-art approaches in segmenting the lungs and lobes, yielding a mean Dice score of 95.7%.

Hoileong Lee, Tahreema Matin, Fergus Gleeson, Vicente Grau
Unsupervised Discovery of Spatially-Informed Lung Texture Patterns for Pulmonary Emphysema: The MESA COPD Study

Unsupervised discovery of pulmonary emphysema subtypes offers the potential for new definitions of emphysema on lung computed tomography (CT) that go beyond the standard subtypes identified on autopsy. Emphysema subtypes can be defined on CT as a variety of textures with certain spatial prevalence. However, most existing approaches for learning emphysema subtypes on CT are limited to texture features, which are sub-optimal due to the lack of spatial information. In this work, we exploit a standardized spatial mapping of the lung and propose a novel framework for combining spatial and texture information to discover spatially-informed lung texture patterns (sLTPs). Our spatial mapping is demonstrated to be a powerful tool to study emphysema spatial locations over different populations. The discovered sLTPs are shown to have high reproducibility, ability to encode standard emphysema subtypes, and significant associations with clinical characteristics.

Jie Yang, Elsa D. Angelini, Pallavi P. Balte, Eric A. Hoffman, John H. M. Austin, Benjamin M. Smith, Jingkuan Song, R. Graham Barr, Andrew F. Laine

Shape and Patch-Based Techniques

Frontmatter
Automatic Landmark Estimation for Adolescent Idiopathic Scoliosis Assessment Using BoostNet

Adolescent Idiopathic Scoliosis (AIS) manifests as an abnormal curvature of the spine in teens. Conventional radiographic assessment of scoliosis is unreliable due to the need for manual intervention from clinicians as well as high variability in images. Current methods for automatic scoliosis assessment are not robust due to their reliance on segmentation or feature engineering. We propose a novel framework for automated landmark estimation for AIS assessment by leveraging the strength of our newly designed BoostNet, which creatively integrates the robust feature extraction capabilities of Convolutional Neural Networks (ConvNets) with statistical methodologies to adapt to the variability in X-ray images. In contrast to traditional ConvNets, our BoostNet introduces two novel concepts: (1) a BoostLayer for robust discriminatory feature embedding by removing outlier features, which essentially minimizes the intra-class variance of the feature space, and (2) a spinal structured multi-output regression layer for compact modelling of landmark coordinate correlation. The BoostNet architecture estimates the required spinal landmarks within a mean squared error (MSE) of 0.00068 on 431 cross-validation images and 0.0046 on 50 test images, demonstrating its potential for robust automated scoliosis assessment in the clinical setting.

Hongbo Wu, Chris Bailey, Parham Rasoulinejad, Shuo Li
Nonlinear Statistical Shape Modeling for Ankle Bone Segmentation Using a Novel Kernelized Robust PCA

Statistical shape models (SSMs) are widely employed in medical image segmentation. However, an inferior SSM will degrade the quality of segmentations. It is challenging to derive an efficient model because: (1) the training datasets are often corrupted by noise and/or artifacts; and (2) conventional SSMs are not capable of capturing the nonlinear variability of a population of shapes. Addressing these challenges, this work aims to create SSMs that are not only robust to abnormal training data but also able to represent nonlinear shape distributions. As Robust PCA is an efficient tool for seeking a clean low-rank linear subspace, a novel kernelized Robust PCA (KRPCA) is proposed to cope with nonlinear distributions for statistical shape modeling. In evaluation, the built nonlinear model is used in ankle bone segmentation, where 9 bones are separately distributed. Evaluation results show that the model built with KRPCA has a significantly higher quality than other state-of-the-art methods.

Jingting Ma, Anqi Wang, Feng Lin, Stefan Wesarg, Marius Erdt
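For background, the sketch below shows the standard (linear) Robust PCA decomposition via the inexact augmented Lagrangian method, i.e. the tool the paper kernelizes; the kernelized variant itself is not reproduced here, and the parameter defaults follow common principal component pursuit practice rather than the authors' settings.

```python
import numpy as np

def robust_pca(D, n_iter=200, tol=1e-7):
    """Decompose D into a low-rank part L and a sparse outlier part S (D = L + S)."""
    n1, n2 = D.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(D).sum())
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding.
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: element-wise soft thresholding (absorbs outliers).
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S

rng = np.random.default_rng(0)
clean = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))   # rank-3 shape data
corrupt = clean.copy()
corrupt[rng.random(clean.shape) < 0.05] += 10.0               # sparse artifacts
L, S = robust_pca(corrupt)
print(np.linalg.matrix_rank(np.round(L, 6)))
```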
Adaptable Landmark Localisation: Applying Model Transfer Learning to a Shape Model Matching System

We address the challenge of model transfer learning for a shape model matching (SMM) system. The goal is to adapt an existing SMM system to work effectively with new data without rebuilding the system from scratch. Recently, several SMM systems have been proposed that combine the outcome of a Random Forest (RF) regression step with shape constraints. These methods have been shown to lead to accurate and robust results when applied to the localisation of landmarks annotating skeletal structures in radiographs. However, as these methods contain a supervised learning component, their performance heavily depends on the data that was used to train the system, limiting their applicability to a new dataset with different properties. Here we show how to tune an existing SMM system by both updating the RFs with new samples and re-estimating the shape model. We demonstrate the effectiveness of tuning a cephalometric SMM system to replicate the annotation style of a new observer. Our results demonstrate that tuning an existing system leads to significant improvements in performance on new data, to the extent of performing as well as a system that was fully rebuilt using samples from the new dataset. The proposed approach is fast and does not require access to the original training data.

C. Lindner, D. Waring, B. Thiruvenkatachari, K. O’Brien, T. F. Cootes
Representative Patch-based Active Appearance Models Generated from Small Training Populations

Active Appearance Models (AAMs) and Constrained Local Models (CLMs) are classical approaches for model-based image segmentation in medical image analysis. AAMs consist of global statistical models of shape and appearance, are known to be hard to fit, and often suffer from insufficient generalization capabilities in case of limited training data. CLMs model appearance only for local patches and relax or completely remove the global appearance constraint. They are, therefore, much easier to optimize but in certain cases they lack the robustness of AAMs. In this paper, we present a framework for patch-based active appearance modeling, which elegantly combines strengths of AAMs and CLMs. Our models provide global shape and appearance constraints and we make use of recent methodological advances from computer vision for efficient joint optimization of shape and appearance parameters during model fitting. Furthermore, the insufficient generalization abilities of those global models are tackled by incorporating and extending a recent approach for learning representative statistical shape models from small training populations. We evaluate our approach on publicly available chest radiographs and cardiac MRI data. The results show that the proposed framework leads to competitive results in terms of segmentation accuracy for challenging multi-object segmentation problems even when only few training samples are available.

Matthias Wilms, Heinz Handels, Jan Ehrhardt
Integrating Statistical Prior Knowledge into Convolutional Neural Networks

In this work we show how to integrate prior statistical knowledge, obtained through principal components analysis (PCA), into a convolutional neural network in order to obtain robust predictions even when dealing with corrupted or noisy data. Our network architecture is trained end-to-end and includes a specifically designed layer which incorporates the dataset modes of variation discovered via PCA and produces predictions by linearly combining them. We also propose a mechanism to focus the attention of the CNN on specific regions of interest of the image in order to obtain refined predictions. We show that our method is effective in challenging segmentation and landmark localization tasks.

Fausto Milletari, Alex Rothberg, Jimmy Jia, Michal Sofka
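A minimal PyTorch sketch of the idea of a PCA-constrained output layer described above: the network regresses a small coefficient vector, and the layer returns the dataset mean plus a linear combination of fixed modes of variation. The architecture, layer sizes, and names are hypothetical, not the authors' network.

```python
import torch
import torch.nn as nn

class PCALayer(nn.Module):
    def __init__(self, mean, modes):
        super().__init__()
        # mean: (output_dim,), modes: (output_dim, n_modes); fixed, non-trainable buffers.
        self.register_buffer("mean", mean)
        self.register_buffer("modes", modes)

    def forward(self, coeffs):
        # Predicted shape/landmarks = mean + modes @ coefficients.
        return self.mean + coeffs @ self.modes.t()

class PCAConstrainedNet(nn.Module):
    def __init__(self, mean, modes, n_modes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 16, n_modes),          # regress PCA coefficients
        )
        self.pca = PCALayer(mean, modes)

    def forward(self, x):
        return self.pca(self.features(x))

# Toy usage: 10 landmark coordinates (20 values), 5 modes of variation.
mean = torch.zeros(20)
modes = torch.randn(20, 5)
net = PCAConstrainedNet(mean, modes, n_modes=5)
print(net(torch.randn(2, 1, 32, 32)).shape)      # -> torch.Size([2, 20])
```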
Statistical Shape Model of Nested Structures Based on the Level Set

We propose a method for constructing a multi-shape statistical shape model (SSM) for nested structures such that each is a subset or superset of another. The proposed method has potential application to any pair of shapes with an inclusive relationship. These types of shapes are often found in anatomy, such as the brain and ventricle. Most existing multi-shape SSMs can be used to describe these nested shapes; however, none of them guarantees a correct inclusive relationship. The basic concept of the proposed method is to describe nested shapes by applying different thresholds to a single continuous real-valued function in an image space. We demonstrate that there exists a one-to-one mapping from an arbitrary pair of nested shapes to this type of function. We also demonstrate that this method can be easily extended to represent three or more nested structures. We demonstrate the effectiveness of the proposed SSM using brain and ventricle volumes obtained from particular stages of human embryos. The performance of the SSM was evaluated in terms of generalization and specificity. Additionally, we measured leakage criteria to assess the ability to preserve inclusive relationships. A quantitative comparison of our SSM with conventional multi-shape SSMs demonstrates the superiority of the proposed method.

Atsushi Saito, Masaki Tsujikawa, Tetsuya Takakuwa, Shigehito Yamada, Akinobu Shimizu
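The core construction can be illustrated in a few lines: two thresholds applied to a single continuous function yield nested shapes whose inclusive relationship holds by construction. The synthetic function below is purely illustrative.

```python
import numpy as np

# A smooth scalar function on a 3D grid (larger values toward the center).
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
f = 1.0 - np.sqrt(x**2 + y**2 + z**2)

outer_threshold, inner_threshold = 0.2, 0.6      # inner > outer
outer_shape = f > outer_threshold                # e.g. brain
inner_shape = f > inner_threshold                # e.g. ventricle

# Any voxel in the inner shape is automatically in the outer shape.
assert np.all(outer_shape[inner_shape])
print(outer_shape.sum(), inner_shape.sum())
```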
Locally Adaptive Probabilistic Models for Global Segmentation of Pathological OCT Scans

Segmenting retinal tissue deformed by pathologies can be challenging. Segmentation approaches are often constructed with a certain pathology in mind and may require a large set of labeled pathological scans, and are therefore tailored to that particular pathology. We present an approach that can be easily transferred to new pathologies, as it is designed with no particular pathology in mind and requires no pathological ground truth. The approach is based on a graphical model trained on healthy scans, which is modified locally by adding pathology-specific shape modifications. We use the framework of sum-product networks (SPN) to find the combination of modified and unmodified local models that globally yields the best segmentation. The approach further allows us to localize and quantify the pathology. We demonstrate the flexibility and the robustness of our approach by presenting results for three different pathologies: diabetic macular edema (DME), age-related macular degeneration (AMD) and non-proliferative diabetic retinopathy.

Fabian Rathke, Mattia Desana, Christoph Schnörr
Learning Deep Features for Automated Placement of Correspondence Points on Ensembles of Complex Shapes

Correspondence-based shape models are an enabling technology for various medical imaging applications that rely on statistical analysis of populations of anatomical shapes. One strategy for automatic correspondence placement is to simultaneously learn a compact representation of the underlying anatomical variation in the shape space while capturing the geometric characteristics of individual shapes. The inherent geometric complexity and population-level shape variation in anatomical structures introduce significant challenges in finding optimal shape correspondence models. Existing approaches adopt iterative optimization schemes with objective functions derived from probabilistic modeling of the shape space, e.g. the entropy of a Gaussian-distributed shape space, to find useful sets of dense correspondences on shape ensembles. Nonetheless, anatomical shape distributions can be far more complex than this Gaussian assumption, which entails linear shape variation. Recent works address this limitation by adopting an application-specific notion of correspondence through lifting positional data to a higher-dimensional feature space (e.g. sulcal depth, brain connectivity, and geodesic distance to anatomical landmarks), with the goal of simplifying the optimization problem. However, this typically requires a careful selection of hand-crafted features, and their success relies heavily on expertise in finding such features consistently. This paper proposes an automated feature learning approach using deep convolutional neural networks for optimization of dense point correspondence on shape ensembles. The proposed method endows anatomical shapes with learned features that enhance the shape correspondence objective function to deal with complex objects and populations. Results demonstrate that deep learning based features perform better than methods that rely on position alone and compete favorably with hand-crafted features.

Praful Agrawal, Ross T. Whitaker, Shireen Y. Elhabian
Robust Multi-scale Anatomical Landmark Detection in Incomplete 3D-CT Data

Robust and fast detection of anatomical structures is an essential prerequisite for next-generation automated medical support tools. While machine learning techniques are most often applied to address this problem, the traditional object search scheme is typically driven by suboptimal and exhaustive strategies. Most importantly, these techniques do not effectively address cases of incomplete data, i.e., scans taken with a partial field-of-view. To address these limitations, we present a solution that unifies the anatomy appearance model and the search strategy by formulating a behavior-learning task. This is solved using the capabilities of deep reinforcement learning with multi-scale image analysis and robust statistical shape modeling. Using these mechanisms, artificial agents are taught optimal navigation paths in the image scale-space that can account for missing structures, to ensure robust and spatially-coherent detection of the observed anatomical landmarks. The identified landmarks are then used as robust guidance in estimating the extent of the body-region. Experiments show that our solution outperforms a state-of-the-art deep learning method in detecting different anatomical structures, without any failure, on a dataset of over 2300 3D-CT volumes. In particular, we achieve 0% false-positive and 0% false-negative rates at detecting the landmarks or recognizing their absence from the field-of-view of the scan. In terms of runtime, we reduce the detection time of the reference method by 15−20 times to under 40 ms, an unmatched performance in the literature for high-resolution 3D-CT.

Florin C. Ghesu, Bogdan Georgescu, Sasa Grbic, Andreas K. Maier, Joachim Hornegger, Dorin Comaniciu
Learning and Incorporating Shape Models for Semantic Segmentation

Semantic segmentation has been popularly addressed using fully convolutional networks (FCNs), e.g. U-Net, with impressive results, and such networks have been the forerunners in recent segmentation challenges. However, FCN approaches do not necessarily incorporate local geometry such as smoothness and shape, whereas traditional image analysis techniques have benefited greatly from them in solving segmentation and tracking problems. In this work, we address the problem of incorporating shape priors within the FCN segmentation framework. We demonstrate the utility of such a shape prior in the robust handling of scenarios such as loss of contrast and artifacts. Our experiments show a $$\approx 5\%$$ improvement over U-Net for the challenging problem of ultrasound kidney segmentation.

H. Ravishankar, R. Venkataramani, S. Thiruvenkadam, P. Sudhakar, V. Vaidya
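One generic way to realize such a shape prior is to penalize the distance between a (soft) predicted mask and its projection onto a PCA shape subspace, added to the usual segmentation loss; the PyTorch sketch below illustrates this under simplified assumptions and is not the paper's specific formulation.

```python
import torch
import torch.nn.functional as F

def shape_prior_penalty(pred_mask, shape_mean, shape_modes):
    """pred_mask: (B, H*W) soft mask; shape_mean: (H*W,); shape_modes: (H*W, K) orthonormal."""
    centered = pred_mask - shape_mean
    coeffs = centered @ shape_modes                     # project onto the shape subspace
    reconstruction = shape_mean + coeffs @ shape_modes.t()
    return F.mse_loss(pred_mask, reconstruction)

def total_loss(logits, target, shape_mean, shape_modes, lam=0.1):
    seg_loss = F.binary_cross_entropy_with_logits(logits, target)
    soft_mask = torch.sigmoid(logits).flatten(start_dim=1)
    return seg_loss + lam * shape_prior_penalty(soft_mask, shape_mean, shape_modes)

# Toy usage on 32x32 masks with a 10-mode shape space.
B, H, W, K = 4, 32, 32, 10
logits = torch.randn(B, H, W, requires_grad=True)
target = (torch.rand(B, H, W) > 0.5).float()
modes, _ = torch.linalg.qr(torch.randn(H * W, K))      # orthonormal shape modes
loss = total_loss(logits, target, torch.zeros(H * W), modes)
loss.backward()
print(float(loss))
```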
Surface-Wise Texture Patch Analysis of Combined MRI and PET to Detect MRI-Negative Focal Cortical Dysplasia

Focal cortical dysplasia (FCD) is intrinsically epileptogenic and is a well-recognized cause of refractory epilepsy. Surgical resection can lead to seizure-freedom, and quantitative analysis of MRI has been an excellent presurgical evaluation tool. Yet many FCDs are small or “MRI-negative” on visual evaluation, and positron emission tomography (PET) may help identify such MRI-negative FCDs. We propose a pipeline that optimizes cortical surface sampling of combined MRI and PET features and the detection of subtle or visually negative FCDs. To this end, we extracted individual cortical surfaces and designed an adapted between-modality registration that conveys the surface mesh into each image’s native space. To further improve detection accuracy, we integrated a framework of patch library construction and label fusion, widely used for brain structural segmentation, into a surface-based classification. Our study demonstrated superior sensitivity in FCD detection using combined feature sampling of both MRI and PET compared to MRI alone (93 vs 86%), while identifying no false positives (FP) in controls. Our classifier using the new patch analysis further outperformed a recently developed surface-based approach in terms of sensitivity (93 vs 64%) and FP rate (0 vs 0.1%). Although no FP vertices were found in controls, our classifier did identify extralesional clusters in FCD patients. Patients with a higher prevalence of extralesional vertices had a lower chance of positive surgical outcome compared to those with a lower prevalence (67% vs 93%; p < 0.05). Our study shows that advances in techniques for multimodal feature sampling are key to improving lesion detection and understanding the pattern of brain abnormalities in FCD.

Hosung Kim, Yee-Leng Tan, Seunghyun Lee, Anthony James Barkovich, Duan Xu, Robert Knowlton

Registration Techniques

Frontmatter
Training CNNs for Image Registration from Few Samples with Model-based Data Augmentation

Convolutional neural networks (CNNs) have been successfully used for fast and accurate estimation of dense correspondences between images in computer vision applications. However, much of their success is based on the availability of large training datasets with dense ground truth correspondences, which are only rarely available in medical applications. In this paper, we, therefore, address the problem of CNNs learning from few training data for medical image registration. Our contributions are threefold: (1) We present a novel approach for learning highly expressive appearance models from few training samples, (2) we show that this approach can be used to synthesize huge amounts of realistic ground truth training data for CNN-based medical image registration, and (3) we adapt the FlowNet architecture for CNN-based optical flow estimation to the medical image registration problem. This pipeline is applied to two medical data sets with less than 40 training images. We show that CNNs learned from the proposed generative model outperform those trained on random deformations or displacement fields estimated via classical image registration.

Hristina Uzunova, Matthias Wilms, Heinz Handels, Jan Ehrhardt
Nonrigid Image Registration Using Multi-scale 3D Convolutional Neural Networks

In this paper we propose a method to solve nonrigid image registration through a learning approach, instead of via iterative optimization of a predefined dissimilarity metric. We design a Convolutional Neural Network (CNN) architecture that, in contrast to all other work, directly estimates the displacement vector field (DVF) from a pair of input images. The proposed RegNet is trained using a large set of artificially generated DVFs, does not explicitly define a dissimilarity metric, and integrates image content at multiple scales to equip the network with contextual information. At testing time, nonrigid registration is performed in a single shot, in contrast to current iterative methods. We tested RegNet on 3D chest CT follow-up data. The results show that the accuracy of RegNet is on par with a conventional B-spline registration for anatomy within the capture range. Training RegNet with artificially generated DVFs is therefore a promising approach for obtaining good results on real clinical data, thereby greatly simplifying the training problem. Deformable image registration can therefore be successfully cast as a learning problem.

Hessam Sokooti, Bob de Vos, Floris Berendsen, Boudewijn P. F. Lelieveldt, Ivana Išgum, Marius Staring
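A minimal sketch of how artificial training pairs for such a registration CNN could be generated: sample a smooth random DVF, warp a fixed image with it, and use the triplet (fixed, warped, DVF) as a training sample. The smoothing and magnitude parameters and all names below are arbitrary illustrative choices, not the authors' generation scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_smooth_dvf(shape, smoothing=8.0, magnitude=5.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # One Gaussian-smoothed random field per spatial dimension.
    dvf = np.stack([gaussian_filter(rng.normal(size=shape), smoothing)
                    for _ in range(len(shape))])
    dvf *= magnitude / (np.abs(dvf).max() + 1e-8)        # rescale displacements (voxels)
    return dvf                                            # shape: (ndim, *shape)

def warp(image, dvf):
    # Pull-back warping: sample the image at the displaced grid coordinates.
    grid = np.indices(image.shape).astype(float)
    return map_coordinates(image, grid + dvf, order=1, mode="nearest")

rng = np.random.default_rng(0)
fixed = rng.random((64, 64, 64)).astype(np.float32)       # stand-in for a CT volume
dvf = random_smooth_dvf(fixed.shape, rng=rng)
moving = warp(fixed, dvf)                                 # synthetic training pair
print(moving.shape, dvf.shape)
```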
Multimodal Image Registration with Deep Context Reinforcement Learning

Automatic and robust registration between real-time patient imaging and pre-operative data (e.g. CT and MRI) is crucial for computer-aided interventions and AR-based navigation guidance. In this paper, we present a novel approach to automatically align range image of the patient with pre-operative CT images. Unlike existing approaches based on the surface similarity optimization process, our algorithm leverages the contextual information of medical images to resolve data ambiguities and improve robustness. The proposed algorithm is derived from deep reinforcement learning algorithm that automatically learns to extract optimal feature representation to reduce the appearance discrepancy between these two modalities. Quantitative evaluations on 1788 pairs of CT and depth images from real clinical setting demonstrate that the proposed method achieves the state-of-the-art performance.

Kai Ma, Jiangping Wang, Vivek Singh, Birgi Tamersoy, Yao-Jen Chang, Andreas Wimmer, Terrence Chen
Directional Averages for Motion Segmentation in Discontinuity Preserving Image Registration

The registration of abdominal images is central to the analysis of motion patterns and physiological investigations of abdominal organs. Challenges which arise in this context are discontinuous changes in correspondence across sliding organ boundaries. Standard regularity criteria, like smoothness, are not valid in such regions. In this paper, we introduce a novel regularity criterion which incorporates local motion segmentation in order to preserve discontinuous changes in the spatial mapping. Based on local directional statistics of the transformation parameters, it is decided which part of a local neighborhood influences a parameter during registration. Thus, the mutual influence of neighboring parameters which are located on opposing sides of sliding organ boundaries is relaxed. The motion segmentation is performed within the regularizer as well as in the image similarity measure and is thus implicitly updated throughout the optimization. In experiments on the 4DCT POPI dataset we achieve competitive registration performance compared to state-of-the-art methods.

Christoph Jud, Robin Sandkühler, Nadia Möri, Philippe C. Cattin
Similarity Metrics for Diffusion Multi-Compartment Model Images Registration

Diffusion multi-compartment models (MCM) allow for a fine and comprehensive study of the white matter microstructure. Nonlinear registration of MCM images may provide valuable information on the brain, e.g. through population comparison. State-of-the-art MCM registration, however, relies on pairing-based similarity measures where a one-to-one mapping of MCM compartments is required. This approach leads to non-differentiabilities or discontinuities, which may result in poorer registration. Moreover, these measures are often specific to one MCM compartment model. We propose two new MCM similarity measures based on the space of square integrable functions, applied to MCM characteristic functions. These measures are pairing-free and agnostic to compartment types. We derive their analytic expressions for multi-tensor models and propose a spherical approximation for more complex models. Evaluation is performed on synthetic deformations and inter-subject registration, demonstrating the robustness of the proposed measures.

Olivier Commowick, Renaud Hédouin, Emmanuel Caruyer, Christian Barillot
SVF-Net: Learning Deformable Image Registration Using Shape Matching

In this paper, we propose an innovative approach to registration based on the deterministic prediction of the parameters from both images instead of the optimization of an energy criterion. The method relies on a fully convolutional network whose architecture consists of contracting layers to detect relevant features and a symmetric expanding path that matches them together and outputs the transformation parametrization. Whereas convolutional networks have seen widespread expansion and have already been applied to many medical imaging problems such as segmentation and classification, their application to registration has so far faced the challenge of defining ground truth data on which to train the algorithm. Here, we present a novel training strategy to build reference deformations which relies on the registration of segmented regions of interest. We apply this methodology to the problem of inter-patient heart registration and show an important improvement over a state-of-the-art optimization-based algorithm. Not only is our method more accurate, it is also faster (registering two 3D images takes less than 30 ms on a GPU) and more robust to outliers.

Marc-Michel Rohé, Manasi Datar, Tobias Heimann, Maxime Sermesant, Xavier Pennec
A Large Deformation Diffeomorphic Approach to Registration of CLARITY Images via Mutual Information

CLARITY is a method for converting biological tissues into translucent and porous hydrogel-tissue hybrids. This facilitates interrogation with light sheet microscopy and penetration of molecular probes while avoiding physical slicing. In this work, we develop a pipeline for registering CLARIfied mouse brains to an annotated brain atlas. Due to the novelty of this microscopy technique it is impractical to use absolute intensity values to align these images to existing standard atlases. Thus we adopt a large deformation diffeomorphic approach for registering images via mutual information matching. Furthermore we show how a cascaded multi-resolution approach can improve registration quality while reducing algorithm run time. As acquired image volumes were over a terabyte in size, they were far too large for work on personal computers. Therefore the NeuroData computational infrastructure was deployed for multi-resolution storage and visualization of these images and aligned annotations on the web.

Kwame S. Kutten, Nicolas Charon, Michael I. Miller, J. Tilak Ratnanather, Jordan Matelsky, Alexander D. Baden, Kunal Lillaney, Karl Deisseroth, Li Ye, Joshua T. Vogelstein
Mixed Metric Random Forest for Dense Correspondence of Cone-Beam Computed Tomography Images

Efficient dense correspondence and registration of CBCT images is an essential yet challenging task for inter-treatment evaluations of structural variations. In this paper, we propose an unsupervised mixed metric random forest (MMRF) for dense correspondence of CBCT images. The weak labeling resulting from a clustering forest is utilized to discriminate badly-clustered supervoxels and related classes, which are favored in the following fine-tuning of the MMRF by penalized weighting in both classification and clustering entropy estimation. An iterative scheme is introduced to reinforce the forest and minimize inconsistent supervoxel labeling across CBCT images. In order to screen out inconsistent matching pairs and to regularize the dense correspondence defined by the forest-based metric, we evaluate the consistency of candidate matching pairs by virtue of isometric constraints. The proposed correspondence method has been tested on 150 clinically captured CBCT images, and outperforms the state of the art in terms of matching accuracy while being computationally efficient.

Yuru Pei, Yunai Yi, Gengyu Ma, Yuke Guo, Gui Chen, Tianmin Xu, Hongbin Zha
Optimal Transport for Diffeomorphic Registration

This paper introduces the use of unbalanced optimal transport methods as a similarity measure for diffeomorphic matching of imaging data. The similarity measure is a key object in diffeomorphic registration methods that, together with the regularization on the deformation, defines the optimal deformation. Most often, these similarity measures are local or non-local but simple enough to be computationally fast. We build on recent theoretical and numerical advances in optimal transport to propose fast and global similarity measures that can be used on surfaces or volumetric imaging data. This new similarity measure is computed using a fast generalized Sinkhorn algorithm. We apply this new metric in the LDDMM framework to synthetic and real data, fibre bundles and surfaces, and show that better matching results are obtained.

Jean Feydy, Benjamin Charlier, François-Xavier Vialard, Gabriel Peyré
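For reference, the balanced entropic Sinkhorn iterations that underlie such transport-based similarity measures can be written in a few lines; the paper's fast generalized (unbalanced) Sinkhorn algorithm additionally relaxes the marginal constraints and is not reproduced here. The point clouds and parameters below are illustrative.

```python
import numpy as np

def sinkhorn(a, b, X, Y, epsilon=0.05, n_iter=200):
    """a, b: point weights summing to 1; X, Y: point coordinates (n, d) and (m, d)."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** 2  # cost matrix
    K = np.exp(-C / epsilon)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)            # alternate scalings to match both marginals
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]
    return float(np.sum(plan * C))  # entropic OT cost, usable as a similarity term

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))           # e.g. vertices of one surface
Y = rng.normal(size=(60, 3)) + 0.5     # e.g. vertices of the deformed surface
a = np.full(50, 1 / 50)
b = np.full(60, 1 / 60)
print(sinkhorn(a, b, X, Y))
```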
Deformable Image Registration Based on Similarity-Steered CNN Regression

Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.

Xiaohuan Cao, Jianhua Yang, Jun Zhang, Dong Nie, Minjeong Kim, Qian Wang, Dinggang Shen
Generalised Coherent Point Drift for Group-Wise Registration of Multi-dimensional Point Sets

In this paper we propose a probabilistic approach to group-wise registration of unstructured high-dimensional point sets. We focus on registration of generalised point sets which encapsulate both the positions of points on surface boundaries and corresponding normal vectors describing local surface geometry. Richer descriptions of shape can be especially valuable in applications involving complex and intricate variations in geometry, where spatial position alone is an unreliable descriptor for shape registration. A hybrid mixture model combining Student’s t and Von-Mises-Fisher distributions is proposed to model position and orientation components of the point sets, respectively. A group-wise rigid and non-rigid registration framework is then formulated on this basis. Two clinical data sets, comprising 27 brain ventricle and 15 heart shapes, were used to assess registration accuracy. Significant improvement in accuracy and anatomical validity of the estimated correspondences was achieved using the proposed approach, relative to state-of-the-art point set registration approaches, which consider spatial positions alone.

Nishant Ravikumar, Ali Gooya, Alejandro F. Frangi, Zeike A. Taylor
Fast Geodesic Regression for Population-Based Image Analysis

Geodesic regression on images enables studies of brain development and degeneration, disease progression, and tumor growth. The high-dimensional nature of image data presents significant computational challenges for the current regression approaches and prohibits large scale studies. In this paper, we present a fast geodesic regression method that dramatically decreases the computational cost of the inference procedure while maintaining prediction accuracy. We employ an efficient low dimensional representation of diffeomorphic transformations derived from the image data and characterize the regressed trajectory in the space of diffeomorphisms by its initial conditions, i.e., an initial image template and an initial velocity field computed as a weighted average of pairwise diffeomorphic image registration results. This construction is achieved by using a first-order approximation of pairwise distances between images. We demonstrate the efficiency of our model on a set of 3D brain MRI scans from the OASIS dataset and show that it is dramatically faster than the state-of-the-art regression methods while producing equally good regression results on the large subject cohort.

Yi Hong, Polina Golland, Miaomiao Zhang
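The sketch below gives one plausible reading of the weighted-average construction mentioned above, in the spirit of closed-form linear regression in the tangent space: velocity fields from template-to-subject registrations are averaged with weights proportional to the centered time points. This is an illustrative interpretation, not the authors' code.

```python
import numpy as np

def initial_velocity(velocity_fields, times):
    """velocity_fields: (n_subjects, ...) array of registration results; times: (n_subjects,)."""
    t = np.asarray(times, dtype=float)
    w = t - t.mean()                               # centered time points as weights
    denom = np.sum(w ** 2)
    # Weighted average of pairwise registration results (tangent-space slope).
    return np.tensordot(w, velocity_fields, axes=1) / denom

rng = np.random.default_rng(0)
v = rng.normal(size=(10, 3, 32, 32, 32))           # ten 3D velocity fields
ages = np.linspace(60, 90, 10)                     # e.g. subject ages in years
print(initial_velocity(v, ages).shape)             # -> (3, 32, 32, 32)
```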
Deformable Registration of a Preoperative 3D Liver Volume to a Laparoscopy Image Using Contour and Shading Cues

The deformable registration of a preoperative organ volume to an intraoperative laparoscopy image is required to achieve augmented reality in laparoscopy. This is an extremely challenging objective for the liver. This is because the preoperative volume is textureless, and the liver is deformed and only partially visible in the laparoscopy image. We solve this problem by modeling the preoperative volume as a Neo-Hookean elastic model, which we evolve under shading and contour cues. The contour cues combine the organ’s silhouette and a few curvilinear anatomical landmarks. The problem is difficult because the shading cue is highly nonconvex and the contour cues give curve-level (and not point-level) correspondences. We propose a convergent alternating projections algorithm, which achieves a $$4\%$$ registration error.

Bongjin Koo, Erol Özgür, Bertrand Le Roy, Emmanuel Buc, Adrien Bartoli
Parameter Sensitivity Analysis in Medical Image Registration Algorithms Using Polynomial Chaos Expansions

Medical image registration algorithms typically involve numerous user-defined ‘tuning’ parameters, such as regularization weights, smoothing parameters, etc. Their optimal settings depend on the anatomical regions of interest, image modalities, image acquisition settings, the expected severity of deformations, and the clinical requirements. Within a particular application, the optimal settings could even vary across the image pairs to be registered. It is, therefore, crucial to develop methods that provide insight into the effect of each tuning parameter in interaction with the other tuning parameters and allow a user to efficiently identify optimal parameter settings for a given pair of images. An exhaustive search over all possible parameter settings has obvious disadvantages in terms of computational costs and quickly becomes infeasible in practice when the number of tuning parameters increases, due to the curse of dimensionality. In this study, we propose a method based on Polynomial Chaos Expansions (PCE). PCE is a method for sensitivity analysis that approximates the model of interest (in our case the registration of a given pair of images) by a polynomial expansion which can be evaluated very efficiently. PCE renders this approach feasible for a large number of input parameters, by requiring only a modest number of function evaluations for model construction. Once the PCE has been constructed, the sensitivity of the registration results to changes in the parameters can be quantified, and the user can simulate registration results for any combination of input parameters in real-time. The proposed approach is evaluated on 8 pairs of liver CT scans and the results indicate that PCE is a promising method for parameter sensitivity analysis in medical image registration.

Gokhan Gunay, Sebastian van der Voort, Manh Ha Luu, Adriaan Moelker, Stefan Klein
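A simplified sketch of the surrogate-based sensitivity idea: fit a polynomial surrogate of a registration-quality metric over the tuning parameters, then estimate first-order (Sobol-style) sensitivity indices cheaply on the surrogate. A true PCE would use an orthogonal polynomial basis; sklearn's PolynomialFeatures is used below for brevity, and the quality function is a placeholder for an actual registration run.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def registration_quality(params):
    """Placeholder for an expensive registration run (params -> e.g. mean landmark error)."""
    reg_weight, smoothing = params[..., 0], params[..., 1]
    return (reg_weight - 0.3) ** 2 + 0.1 * np.sin(5 * smoothing)

rng = np.random.default_rng(0)
train = rng.random((40, 2))                        # 40 registration runs over 2 parameters
surrogate = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
surrogate.fit(train, registration_quality(train))

# First-order sensitivity of each parameter, estimated on the cheap surrogate.
samples = rng.random((20000, 2))
total_var = surrogate.predict(samples).var()
for i, name in enumerate(["regularization weight", "smoothing"]):
    grid = np.linspace(0, 1, 50)
    cond_means = []
    for g in grid:
        fixed = samples.copy()
        fixed[:, i] = g                            # condition on parameter i
        cond_means.append(surrogate.predict(fixed).mean())
    s_i = np.var(cond_means) / total_var           # Var(E[quality | param_i]) / Var(quality)
    print(f"{name}: first-order index ~ {s_i:.2f}")
```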
Robust Non-rigid Registration Through Agent-Based Action Learning

Robust image registration in medical imaging is essential for the comparison or fusion of images acquired from various perspectives, modalities or at different times. Typically, an objective function needs to be minimized, assuming specific a priori deformation models and predefined or learned similarity measures. However, these approaches have difficulty coping with large deformations or a large variability in appearance. Using modern deep learning (DL) methods with automated feature design, these limitations could be resolved by learning the intrinsic mapping solely from experience. We investigate in this paper how DL could help organ-specific (ROI-specific) deformable registration, to solve motion compensation or atlas-based segmentation problems, for instance in prostate diagnosis. An artificial agent is trained to solve the task of non-rigid registration by exploring the parametric space of a statistical deformation model built from training data. Since it is difficult to extract trustworthy ground-truth deformation fields, we present a training scheme with a large number of synthetically deformed image pairs requiring only a small number of real inter-subject pairs. Our approach was tested on inter-subject registration of prostate MR data and reached a median DICE score of .88 in 2-D and .76 in 3-D, thereby showing improved results compared to state-of-the-art registration algorithms.

Julian Krebs, Tommaso Mansi, Hervé Delingette, Li Zhang, Florin C. Ghesu, Shun Miao, Andreas K. Maier, Nicholas Ayache, Rui Liao, Ali Kamen
Selecting the Optimal Sequence for Deformable Registration of Microscopy Image Sequences Using Two-Stage MST-based Clustering Algorithm

We developed and implemented a novel two-stage Minimum Spanning Tree (MST)-based clustering method for deformable registration of microscopy image sequences. We first construct an MST for the input image sequence. The MST mitigates registration error propagation in time-sequenced images by re-ordering the images so that poor-quality images appear at the end of the sequence. The MST is then clustered into several groups based on the similarity of the images. After that, an optimal anchor image is selected automatically for each group through an iterative assessment of entropy- and MSE-based coarse registration error, and local deformable registration is performed within each group separately. Subsequently, coarse registration is conducted to find the global anchor image among the whole time-sequenced images, and then deformable registration is conducted on the whole sequence. The two-stage MST-based deformable registration algorithm can accommodate larger drifts and distortions more accurately than a conventional one-shot registration algorithm, by fine-tuning larger deformations incrementally over a couple of stages. Our method outperforms other methods on both 2D and 3D in vivo microscopy image sequences of mouse arteries used in an atherosclerosis study.
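As a rough illustration of the first stage, the sketch below builds a minimum spanning tree over pairwise image dissimilarities with SciPy; the dissimilarity used here (1 minus normalized cross-correlation) and the toy frames are assumptions, and the subsequent clustering, anchor selection and deformable registration steps are not reproduced.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def dissimilarity(a, b):
    """Toy dissimilarity between two images: 1 - normalized cross-correlation.
    A stand-in for whatever similarity the actual pipeline uses."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return 1.0 - float((a * b).mean())

def build_mst(images):
    n = len(images)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = dissimilarity(images[i], images[j])
    # SciPy treats zero entries as absent edges, which is fine here because
    # only the upper triangle is filled with positive dissimilarities.
    mst = minimum_spanning_tree(D).toarray()
    edges = [(i, j, mst[i, j]) for i, j in zip(*np.nonzero(mst))]
    return sorted(edges, key=lambda e: e[2])

# Example with random frames; in practice these would be the microscopy frames.
rng = np.random.default_rng(0)
frames = [rng.standard_normal((64, 64)) for _ in range(6)]
for i, j, w in build_mst(frames):
    print(f"MST edge {i} -- {j}, dissimilarity {w:.3f}")
```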

Baidya Nath Saha, Nilanjan Ray, Sara McArdle, Klaus Ley

Functional Imaging, Connectivity, and Brain Parcellation

Frontmatter
Dynamic Regression for Partial Correlation and Causality Analysis of Functional Brain Networks

We propose a general dynamic regression framework for partial correlation and causality analysis of functional brain networks. Using the optimal prediction theory, we present the solution of the dynamic regression problem by minimizing the entropy of the associated stochastic process. We also provide the relation between the solutions and the linear dependence models of Geweke and Granger and derive novel expressions for computing partial correlation and causality using an optimal prediction filter with minimum error variance. We use the proposed dynamic framework to study the intrinsic partial correlation and causality between seven different brain networks using resting state functional MRI (rsfMRI) data from the Human Connectome Project (HCP) and compare our results with those obtained from standard correlation and causality measures. The results show that our optimal prediction filter explains a significant portion of the variance in the rsfMRI data at low frequencies, unlike standard partial correlation analysis.
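To make the underlying linear-dependence idea concrete, here is a minimal NumPy sketch of the classical Geweke/Granger measure that the abstract builds on: the gain in predicting one series when the past of another is added to an autoregressive model. The paper's optimal prediction filter and partial-correlation machinery are not reproduced, and the model order and toy data are assumptions.

```python
import numpy as np

def granger_causality(x, y, order=3):
    """Geweke-style measure of how much the past of x improves linear
    prediction of y: F = log(var_restricted / var_full)."""
    T = len(y)
    rows = range(order, T)
    # Restricted model: y(t) predicted from its own past only.
    A_r = np.column_stack([y[t - order:t][::-1] for t in rows]).T
    # Full model: past of y and past of x.
    A_f = np.column_stack([np.r_[y[t - order:t][::-1], x[t - order:t][::-1]] for t in rows]).T
    b = y[order:]
    res_r = b - A_r @ np.linalg.lstsq(A_r, b, rcond=None)[0]
    res_f = b - A_f @ np.linalg.lstsq(A_f, b, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

# Toy example: x drives y with a lag, so F(x -> y) should exceed F(y -> x).
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = np.roll(x, 2) * 0.8 + 0.2 * rng.standard_normal(1000)
print("F(x -> y) =", granger_causality(x, y))
print("F(y -> x) =", granger_causality(y, x))
```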

Lipeng Ning, Yogesh Rathi
Kernel-Regularized ICA for Computing Functional Topography from Resting-State fMRI

Topographic regularity is a fundamental property in brain connectivity. In this work, we present a novel method for studying topographic regularity of functional connectivity with resting-state fMRI (rfMRI). Our main idea is to incorporate topographically regular structural connectivity in independent component analysis (ICA), and our method is motivated by the recent development of novel tractography and fiber filtering algorithms that can generate highly organized fiber bundles connecting different brain regions. By leveraging these cutting-edge fiber tracking and filtering algorithms, here we develop a novel kernel-regularized ICA method for extracting functional topography with rfMRI signals. In our experiments, we use rfMRI scans of 35 unrelated, right-handed subjects from the Human Connectome Project (HCP) to study the functional topography of the motor cortex. We first demonstrate that our method can generate functional connectivity maps with more regular topography than conventional group ICA. We also show that the components extracted by our algorithm are able to capture co-activation patterns that represent the organized topography of the motor cortex across the hemispheres. Finally, we show that our method achieves improved reproducibility as compared to conventional group ICA.

Junyan Wang, Yonggang Shi
N-way Decomposition: Towards Linking Concurrent EEG and fMRI Analysis During Natural Stimulus

The human brain is intrinsically a high-dimensional and multi-variant system. Simultaneous EEG and fMRI recording offers a powerful tool to examine the electrical activity of neuron populations at high temporal and frequency resolution, and concurrent blood oxygen level dependent (BOLD) responses at high spatial resolution. Joint analysis of EEG and fMRI data could thus help comprehensively understand brain function at multiple scales. Such analysis, however, is challenging due to the limited knowledge of the coupling principle between neuronal-electrical and hemodynamic responses. A rich body of work has been done to model EEG-fMRI data during event-related designs and resting state, while few studies have explored concurrent data during natural stimulus due to the complexity of both stimulus and response. In this paper, we propose a novel method based on N-way decomposition to jointly analyze simultaneous EEG and fMRI data during natural movie viewing. Briefly, a 4-way tensor from the EEG data, constructed in the four dimensions of time, frequency, channel and subject, is decomposed into group-wise rank-one components by canonical polyadic decomposition (CPD). We then use the decomposed temporal features to constrain a 2-way sparse decomposition of fMRI data for network detection. Our results show that the proposed method can effectively decode meaningful brain activity from both modalities and link EEG multi-way features with fMRI functional networks.
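For readers unfamiliar with CPD, the sketch below is a minimal hand-rolled alternating least squares implementation for an N-way tensor in NumPy (libraries such as TensorLy provide production versions). The toy tensor dimensions and rank are assumptions; the paper's EEG feature construction and the coupled fMRI decomposition are not reproduced.

```python
import numpy as np
from functools import reduce

def unfold(T, mode):
    """Mode-n unfolding with C-order reshape: index i_mode becomes the row,
    the remaining indices are flattened in their original order."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of (I, R) and (J, R) -> (I*J, R)."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-R canonical polyadic decomposition by alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((d, rank)) for d in T.shape]
    for _ in range(n_iter):
        for mode in range(T.ndim):
            others = [factors[m] for m in range(T.ndim) if m != mode]
            kr = reduce(khatri_rao, others)        # matches the unfolding order above
            factors[mode] = np.linalg.lstsq(kr, unfold(T, mode).T, rcond=None)[0].T
    return factors

# Toy 4-way tensor (time x frequency x channel x subject) with known rank-2 structure.
rng = np.random.default_rng(1)
true = [rng.standard_normal((d, 2)) for d in (20, 10, 8, 5)]
T = np.einsum('ir,jr,kr,lr->ijkl', *true)
est = cp_als(T, rank=2)
approx = np.einsum('ir,jr,kr,lr->ijkl', *est)
# Relative reconstruction error (small if ALS has converged).
print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))
```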

Jinglei Lv, Vinh Thai Nguyen, Johan van der Meer, Michael Breakspear, Christine Cong Guo
Connectome-Based Pattern Learning Predicts Histology and Surgical Outcome of Epileptogenic Malformations of Cortical Development

Focal cortical dysplasia (FCD) type II, a surgically amenable epileptogenic malformation, is characterized by intracortical dyslamination and dysmorphic neurons, either in isolation (IIA) or together with balloon cells (IIB). While evidence suggests diverging local function between these two histological grades, patterns of connectivity to the rest of the brain remain unknown. We present a novel MRI framework that subdivides a given FCD lesion into a set of smaller cortical patches using hierarchical clustering of resting-state functional connectivity profiles. We assessed the yield of this connectome-based subtyping to predict histological grade and response to surgery in individual patients. As the human functional connectome consists of multiple large-scale communities (e.g., the default mode and fronto-parietal networks), we dichotomized connectivity profiles of lesional patches into connectivity to the cortices belonging to the same functional community (intra-community) and to other communities (inter-community). Clustering these community-based patch profiles in 27 patients with histologically-proven FCD objectively identified three distinct lesional classes with (1) decreased intra- and inter-community connectivity, (2) decreased intra-community but normal inter-community connectivity, and (3) increased intra- as well as inter-community connectivity, relative to 34 healthy controls. Ensemble classifiers informed by these classes predicted histological grading (i.e., IIA vs. IIB) and post-surgical outcome (i.e., seizure-free vs. non-free) with high accuracy (≥84%, above-chance significance based on permutation tests, p < 0.01), suggesting benefits of MRI-based connectome stratification for individualized presurgical prognostics.

Seok-Jun Hong, Boris Bernhardt, Ravnoor Gill, Neda Bernasconi, Andrea Bernasconi
Joint Representation of Connectome-Scale Structural and Functional Profiles for Identification of Consistent Cortical Landmarks in Human Brains

There has been significant interest in the representation of structural or functional profiles for the establishment of structural/functional correspondences across individuals and populations in the brain mapping field. For example, from the structural perspective, previous studies have identified hundreds of consistent cortical landmarks across human individuals and populations, each of which possesses consistent DTI-derived fiber connection patterns. From the functional perspective, a large collection of well-characterized functional brain networks based on sparse coding of whole-brain fMRI signals have been identified. However, due to the considerable variability of structural and functional architectures in human brains, it is challenging for these earlier studies to jointly represent the connectome-scale profiles to establish a common cortical architecture which can comprehensively encode both brain structure and function. In order to address this challenge, in this paper we propose an effective computational framework to jointly represent the structural and functional profiles for identification of a set of consistent and common cortical landmarks with both structural and functional correspondences across different human brains, based on multimodal DTI and fMRI data. Experiments on the Human Connectome Project (HCP) data demonstrated the promise of our framework.

Shu Zhang, Xi Jiang, Tianming Liu
Subject-Specific Structural Parcellations Based on Randomized AB-divergences

Brain parcellation provides a means to approach the brain in smaller regions. It also affords an appropriate dimensionality reduction in the creation of connectomes. Most approaches to creating connectomes start with registering individual scans to a template, which is then parcellated. Data processing usually ends with the projection of individual scans onto the parcellation for extracting individual biomarkers, such as connectivity signatures. During this process, registration errors can significantly alter the quality of biomarkers. In this paper, we propose to mitigate this issue with a hybrid approach for brain parcellation. We use diffusion MRI (dMRI) based structural connectivity measures to drive the refinement of an anatomical prior parcellation. Our method generates highly coherent structural parcels in native subject space while maintaining interpretability and correspondences across the population. This goal is achieved by registering a population-wide anatomical prior to each individual dMRI scan and generating connectivity signatures for each voxel. The anatomical prior is then deformed by re-parcellating the brain according to the similarity between voxel connectivity signatures while constraining the number of parcels. We investigate a broad family of signature similarities known as AB-divergences and explain how a divergence adapted to our segmentation task can be selected. This divergence is used for parcellating a high-resolution dataset using two graph-based methods. The promising results obtained suggest that our approach produces more coherent parcels and stronger connectomes than the original anatomical priors.

Nicolas Honnorat, Drew Parker, Birkan Tunç, Christos Davatzikos, Ragini Verma
Improving Functional MRI Registration Using Whole-Brain Functional Correlation Tensors

Population studies of brain function with resting-state functional magnetic resonance imaging (rs-fMRI) rely largely on the accurate inter-subject registration of functional areas. This is typically achieved through registration of the corresponding T1-weighted MR images with more structural details. However, accumulating evidence has suggested that such a strategy cannot well align functional regions, which are not necessarily confined by the anatomical boundaries defined by the T1-weighted MR images. To mitigate this problem, various registration algorithms based directly on rs-fMRI data have been developed, most of which utilize functional connectivity (FC) as features for registration. However, most FC-based registration methods extract the functional features only from the thin and highly curved cortical grey matter (GM), posing a great challenge in accurately estimating the whole-brain deformation field. In this paper, we demonstrate that additional useful functional features can be extracted from brain regions beyond the GM, in particular the white matter (WM), based on rs-fMRI, for improving the overall functional registration. Specifically, we quantify the local anisotropic correlation patterns of the blood oxygenation level-dependent (BOLD) signals, modeled by functional correlation tensors (FCTs), in both GM and WM. Functional registration is then performed based on multiple components of the whole-brain FCTs using a multichannel Large Deformation Diffeomorphic Metric Mapping (mLDDMM) algorithm. Experimental results show that our proposed method achieves superior functional registration performance compared with other conventional registration methods.

Yujia Zhou, Pew-Thian Yap, Han Zhang, Lichi Zhang, Qianjin Feng, Dinggang Shen
Multi-way Regression Reveals Backbone of Macaque Structural Brain Connectivity in Longitudinal Datasets

Brain development has been an intriguing window onto the dynamic formation of brain features at a variety of scales, such as cortical convolution and the white matter wiring diagram. However, recent studies have only focused on a few specific fasciculi or regions. Very few studies have focused on the development of macro-scale wiring diagrams, due to the lack of longitudinal datasets and associated methods. In this work, we take advantage of recently released longitudinal macaque MRI and DTI datasets and develop a novel multi-way regression method to model such datasets. With this method, we extract the backbone of the structural connectome of macaque brains and study the trajectory of its development over time. Graph statistics of these backbone connectomes demonstrate their soundness. Our findings are consistent with reports in the literature, suggesting the effectiveness and promise of this framework.

Tuo Zhang, Xiao Li, Lin Zhao, Xintao Hu, Tianming Liu, Lei Guo
Multimodal Hyper-connectivity Networks for MCI Classification

A hyper-connectivity network is a network in which an edge can connect more than two nodes, and it can be naturally represented as a hyper-graph. Hyper-connectivity brain networks, based on either structural or functional interactions among brain regions, have been used for brain disease diagnosis. However, the conventional hyper-connectivity network is constructed solely from single-modality data, ignoring potential complementary information conveyed by other modalities. The integration of complementary information from multiple modalities has been shown to provide a more comprehensive representation of brain disruptions. In this paper, a novel multimodal hyper-network modelling method is proposed for improving the diagnostic accuracy of mild cognitive impairment (MCI). Specifically, we first construct a multimodal hyper-connectivity network by simultaneously considering information from diffusion tensor imaging and resting-state functional magnetic resonance imaging data. We then extract different types of network features from the hyper-connectivity network, and further exploit a manifold-regularized multi-task feature selection method to jointly select the most discriminative features. Our proposed multimodal hyper-connectivity network demonstrates better MCI classification performance than conventional single-modality hyper-connectivity networks.

Yang Li, Xinqiang Gao, Biao Jie, Pew-Thian Yap, Min-jeong Kim, Chong-Yaw Wee, Dinggang Shen
Multi-modal EEG and fMRI Source Estimation Using Sparse Constraints

In this paper, a multi-modal approach is presented and validated on real data to estimate the brain's neuronal sources based on EEG and fMRI. Combining these two modalities can lead to source estimates with high spatio-temporal resolution. The joint method is based on the idea of linear models already presented in the literature, where each data modality is first modeled linearly in terms of the sources. The modalities are then integrated in a joint framework which also accounts for the sparsity of the sources. The sources are estimated with a proximal algorithm. The results are validated on real data and show the efficiency of the joint model compared to the uni-modal ones. We also provide a calibration solution for the system and demonstrate the effect of the parameter values for uni- and multi-modal estimations on 8 subjects.
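As an illustration of the proximal step, the following is a minimal ISTA (proximal gradient) sketch for a single linear model with an ℓ1 penalty; the lead-field-like matrix, regularization weight and toy sources are assumptions, and the joint EEG/fMRI coupling and calibration described in the paper are not reproduced.

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Proximal gradient (ISTA) for min_s 0.5*||A s - y||^2 + lam*||s||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = s - A.T @ (A @ s - y) / L              # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding prox
    return s

# Toy example: recover a sparse source vector through a random lead-field-like matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)    # e.g., sensors x candidate sources
s_true = np.zeros(256)
s_true[[10, 100, 200]] = [2.0, -1.5, 1.0]
y = A @ s_true + 0.01 * rng.standard_normal(64)
s_hat = ista(A, y, lam=0.05)
print("largest recovered sources:", np.sort(np.argsort(-np.abs(s_hat))[:3]))
```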

Saman Noorzadeh, Pierre Maurel, Thomas Oberlin, Rémi Gribonval, Christian Barillot
Statistical Learning of Spatiotemporal Patterns from Longitudinal Manifold-Valued Networks

We introduce a mixed-effects model to learn spatiotemporal patterns on a network by considering longitudinal measures distributed on a fixed graph. The data come from repeated observations of subjects at different time points which take the form of measurement maps distributed on a graph such as an image or a mesh. The model learns a typical group-average trajectory characterizing the propagation of measurement changes across the graph nodes. The subject-specific trajectories are defined via spatial and temporal transformations of the group-average scenario, thus estimating the variability of spatiotemporal patterns within the group. To estimate population and individual model parameters, we adapted a stochastic version of the Expectation-Maximization algorithm, the MCMC-SAEM. The model is used to describe the propagation of cortical atrophy during the course of Alzheimer’s Disease. Model parameters show the variability of this average pattern of atrophy in terms of trajectories across brain regions, age at disease onset and pace of propagation. We show that the personalization of this model yields accurate prediction of maps of cortical thickness in patients.

I. Koval, J.-B. Schiratti, A. Routier, M. Bacci, O. Colliot, S. Allassonnière, S. Durrleman, The Alzheimer’s Disease Neuroimaging Initiative
Population-Shrinkage of Covariance to Estimate Better Brain Functional Connectivity

Brain functional connectivity, obtained from functional Magnetic Resonance Imaging at rest (r-fMRI), reflects inter-subject variations in behavior and characterizes neuropathologies. It is captured by the covariance matrix between time series of remote brain regions. With noisy and short time series, as in r-fMRI, covariance estimation calls for penalization, and shrinkage approaches are popular. Here we introduce a new covariance estimator based on a non-isotropic shrinkage that integrates prior knowledge of the covariance distribution over a large population. The estimator performs shrinkage tailored to the Riemannian geometry of symmetric positive definite matrices, coupled with a probabilistic modeling of the subject and population covariance distributions. Experiments on a large-scale dataset show that such an estimator resolves intra- and inter-subject functional connectivity better than existing covariance estimates. We also demonstrate that the estimator improves the relationship across subjects between their functional-connectivity measures and their behavioral assessments.
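The sketch below shows the basic shrinkage-toward-a-population-prior idea in plain Euclidean form; the paper's estimator additionally works in the Riemannian geometry of SPD matrices with a probabilistic population model, which is not reproduced here, and the toy covariance and shrinkage weight are assumptions.

```python
import numpy as np

def shrunk_covariance(ts, prior, lam):
    """Shrink the sample covariance of short, noisy time series 'ts'
    (time x regions) toward a population prior covariance.
    A simplified Euclidean analogue of population-based shrinkage."""
    emp = np.cov(ts, rowvar=False)
    return (1.0 - lam) * emp + lam * prior

# Toy example: a population prior from many long scans, one short noisy subject scan.
rng = np.random.default_rng(0)
true_cov = np.diag([1.0, 2.0, 0.5]) + 0.3          # shared structure across subjects
population = [np.cov(rng.multivariate_normal(np.zeros(3), true_cov, 500), rowvar=False)
              for _ in range(50)]
prior = np.mean(population, axis=0)

short_scan = rng.multivariate_normal(np.zeros(3), true_cov, 30)   # short r-fMRI-like series
for lam in (0.0, 0.5):
    est = shrunk_covariance(short_scan, prior, lam)
    # Shrinkage toward a good prior typically reduces the error of the short-scan estimate.
    print(f"lambda={lam}: error {np.linalg.norm(est - true_cov):.3f}")
```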

Mehdi Rahim, Bertrand Thirion, Gaël Varoquaux
Distance Metric Learning Using Graph Convolutional Networks: Application to Functional Brain Networks

Evaluating similarity between graphs is of major importance in several computer vision and pattern recognition problems, where graph representations are often used to model objects or interactions between elements. The choice of a distance or similarity metric is, however, not trivial and can be highly dependent on the application at hand. In this work, we propose a novel metric learning method to evaluate distance between graphs that leverages the power of convolutional neural networks, while exploiting concepts from spectral graph theory to allow these operations on irregular graphs. We demonstrate the potential of our method in the field of connectomics, where neuronal pathways or functional connections between brain regions are commonly modelled as graphs. In this problem, the definition of an appropriate graph similarity function is critical to unveil patterns of disruptions associated with certain brain disorders. Experimental results on the ABIDE dataset show that our method can learn a graph similarity metric tailored for a clinical application, improving the performance of a simple k-nn classifier by 11.9% compared to a traditional distance metric.

Sofia Ira Ktena, Sarah Parisot, Enzo Ferrante, Martin Rajchl, Matthew Lee, Ben Glocker, Daniel Rueckert
A Submodular Approach to Create Individualized Parcellations of the Human Brain

Recent studies in functional neuroimaging (e.g. fMRI) attempt to model the brain as a network. A conventional functional connectivity approach for defining nodes in the network is grouping similar voxels together, a method known as functional parcellation. The majority of previous work on human brain parcellation employs a group-level analysis by collapsing data from the entire population. However, these methods ignore the large amount of inter-individual variability and uniqueness in connectivity. This is particularly relevant for patient studies or even developmental studies, where a single functional atlas may not be appropriate for all individuals or conditions. To account for individual differences, we developed an approach to individualized parcellation. The algorithm starts with an initial group-level parcellation and forms the individualized ones using a local exemplar-based submodular clustering method. The utility of individualized parcellations is further demonstrated through improvement in the accuracy of a predictive model that predicts IQ from the functional connectome.

Mehraveh Salehi, Amin Karbasi, Dustin Scheinost, R. Todd Constable
BrainSync: An Orthogonal Transformation for Synchronization of fMRI Data Across Subjects

We describe a method that allows direct comparison of resting fMRI (rfMRI) time series across subjects. For this purpose, we exploit the geometry of the rfMRI signal space to conjecture the existence of an orthogonal transformation that synchronizes fMRI time series across sessions and subjects. The method is based on the observation that rfMRI data exhibit similar connectivity patterns across subjects, as reflected in the pairwise correlations between different brain regions. The orthogonal transformation that performs the synchronization is unique, invertible, efficient to compute, and preserves the connectivity structure of the original data for all subjects. Similarly to image registration, where we spatially align the anatomical brain images, this synchronization of brain signals across a population or within subject across sessions facilitates longitudinal and cross-sectional studies of rfMRI data. The utility of this transformation is illustrated through applications to quantification of fMRI variability across subjects and sessions, joint cortical clustering of a population and comparison of task-related and resting fMRI.
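Since the abstract describes an orthogonal map that aligns time series while preserving each subject's correlation structure, a natural way to illustrate it is the orthogonal Procrustes solution computed from an SVD; this sketch is an illustration of that idea, not necessarily the authors' exact construction, and the toy data are assumptions.

```python
import numpy as np

def sync(X, Y):
    """Orthogonal Procrustes: the orthogonal matrix O (time x time) minimizing
    ||X - O Y||_F. Rotating Y in time toward X leaves all of Y's inter-regional
    correlations (Y^T Y) unchanged, because O^T O = I."""
    U, _, Vt = np.linalg.svd(X @ Y.T)
    return U @ Vt

# Toy data: two "subjects" (time x vertices) sharing similar connectivity
# structure (same spatial mixing) but with independent temporal dynamics.
rng = np.random.default_rng(0)
T, V = 200, 500
mixing = rng.standard_normal((5, V))
X = rng.standard_normal((T, 5)) @ mixing + 0.1 * rng.standard_normal((T, V))
Y = rng.standard_normal((T, 5)) @ mixing + 0.1 * rng.standard_normal((T, V))

O = sync(X, Y)
print("correlation before:", np.corrcoef(X.ravel(), Y.ravel())[0, 1])
print("correlation after: ", np.corrcoef(X.ravel(), (O @ Y).ravel())[0, 1])
print("connectivity preserved:", np.allclose(Y.T @ Y, (O @ Y).T @ (O @ Y)))
```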

Anand A. Joshi, Minqi Chong, Richard M. Leahy
Supervised Discriminative EEG Brain Source Imaging with Graph Regularization

Since electroencephalography (EEG) is a non-invasive brain imaging technique that records the electric field on the scalp rather than directly measuring the activities of brain voxels on the cortex, many approaches have been proposed to estimate the activated sources, given their significance in neuroscience research and clinical applications. However, since most brain activity consists of spontaneous neural activity or non-task-related activations, truly task-relevant activation sources can be very challenging to discover against strong background signals. For decades, the EEG source imaging problem was solved in an unsupervised way, without taking into consideration the label information representing different brain states (e.g. happiness, sadness, and surprise). We propose a novel model for solving the EEG inverse problem, called Graph Regularized Discriminative Source Imaging (GRDSI), which aims to explicitly extract the discriminative sources by implicitly encoding the label information into a graph regularization term. The proposed model is capable of estimating the discriminative brain sources under different brain states while encouraging intra-class consistency. Simulation results show the effectiveness of our proposed framework in retrieving the discriminative sources.

Feng Liu, Rahilsadat Hosseini, Jay Rosenberger, Shouyi Wang, Jianzhong Su
Inference and Visualization of Information Flow in the Visual Pathway Using dMRI and EEG

We propose a method to visualize information flow in the visual pathway following a visual stimulus. Our method estimates structural connections using diffusion magnetic resonance imaging and functional connections using electroencephalography. First, a Bayesian network which represents the cortical regions of the brain and their connections is built from the structural connections. Next, the functional information is added as evidence into the network and the posterior probability of activation is inferred using a maximum entropy on the mean approach. Finally, projecting these posterior probabilities back onto streamlines generates a visual depiction of pathways used in the network. We first show the effect of noise in a simulated phantom dataset. We then present the results obtained from left and right visual stimuli which show expected information flow traveling from eyes to the lateral geniculate nucleus and to the visual cortex. Information flow visualization along white matter pathways has potential to explore the brain dynamics in novel ways.

Samuel Deslauriers-Gauthier, Jean-Marc Lina, Russell Butler, Pierre-Michel Bernier, Kevin Whittingstall, Rachid Deriche, Maxime Descoteaux

Diffusion Magnetic Resonance Imaging (dMRI) and Tensor/Fiber Processing

Frontmatter
Evaluating 35 Methods to Generate Structural Connectomes Using Pairwise Classification

There is no consensus on how to construct structural brain networks from diffusion MRI. How variations in pre-processing steps affect network reliability and its ability to distinguish subjects remains opaque. In this work, we address this issue by comparing 35 structural connectome-building pipelines. We vary diffusion reconstruction models, tractography algorithms and parcellations. Next, we classify structural connectome pairs as either belonging to the same individual or not. Connectome weights and eight topological derivative measures form our feature set. For experiments, we use three test-retest datasets from the Consortium for Reliability and Reproducibility (CoRR) comprised of a total of 105 individuals. We also compare pairwise classification results to a commonly used parametric test-retest measure, Intraclass Correlation Coefficient (ICC) (Code and results are available at https://github.com/lodurality/35_methods_MICCAI_2017).

Dmitry Petrov, Alexander Ivanov, Joshua Faskowitz, Boris Gutman, Daniel Moyer, Julio Villalon, Neda Jahanshad, Paul Thompson
Dynamic Field Mapping and Motion Correction Using Interleaved Double Spin-Echo Diffusion MRI

Diffusion MRI (dMRI) analysis requires combining data from many images and this generally requires corrections for image distortion and for subject motion during what may be a prolonged acquisition. Particularly in non-brain applications, changes in pose such as respiration can cause image distortion to be time varying, impeding static field map-based correction. In addition, motion and distortion correction is challenging at high b-values due to the low signal-to-noise ratio (SNR). In this work we develop a new approach that breaks the traditional “one-volume, one-weighting” paradigm by interleaving low-b and high-b slices, and combine this with a reverse phase-encoded double-spin echo sequence. Interspersing low and high b-value slices ensures that the low-b, high-SNR data is in close spatial and temporal proximity to support dynamic field map estimation from the double spin-echo acquisition and image-based motion correction. This information is propagated to high-b slices with interpolation across space and time. The method is tested in the challenging environment of fetal dMRI and it is demonstrated using data from 8 pregnant volunteers that combining dynamic distortion correction with slice-by-slice motion correction increases data consistency to facilitate advanced analyses where conventional methods fail.

Jana Hutter, Daan Christiaens, Maria Deprez, Lucilio Cordero-Grande, Paddy Slator, Anthony Price, Mary Rutherford, Joseph V. Hajnal
A Novel Anatomically-Constrained Global Tractography Approach to Monitor Sharp Turns in Gyri

Tractography from diffusion-weighted MRI (DWI) has revolutionized the investigation of brain connectivity in vivo, but it still lacks anatomical tools to resolve controversial situations and to investigate fiber reconstruction within gyri. In this work, we propose a non-generative global spin-glass tractography approach using anatomical priors to constrain the tracking process, integrating knowledge of the pial surface to manage sharp turns of fibers approaching the cortical mantle according to its normal direction. The optimum configuration is obtained using a stochastic optimization of the global framework energy, resulting from a trade-off between a first term measuring the discrepancy between the spin-glass directions and the underlying field of orientation distribution functions (ODF) or mean apparent propagators (MAP) estimated from the DWI data, and a second term related to the curvature of the generated fibers. The method was tested on a synthetic 90° crossing phantom and on a real postmortem brain sample.

Achille Teillac, Justine Beaujoin, Fabrice Poupon, Jean-Francois Mangin, Cyril Poupon
Learn to Track: Deep Learning for Tractography

We show that deep learning techniques can be applied successfully to fiber tractography. Specifically, we use feed-forward and recurrent neural networks to learn the generation process of streamlines directly from diffusion-weighted imaging (DWI) data. Furthermore, we empirically study the behavior of the proposed models on a realistic white matter phantom with known ground truth. We show that their performance is competitive to that of commonly used techniques, even when the models are used on DWI data unseen at training time. We also show that our models are able to recover high spatial coverage of the ground truth white matter pathways while better controlling the number of false connections. In fact, our experiments suggest that exploiting past information within a streamline’s trajectory during tracking helps predict the following direction.
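A minimal PyTorch sketch of the recurrent variant of this idea is given below: a GRU consumes a sequence of local diffusion features along a streamline and predicts a unit next-step direction, trained with a cosine loss against reference directions. The feature size, architecture and toy data are assumptions and do not reproduce the paper's models or training setup.

```python
import torch
import torch.nn as nn

class StreamlineRNN(nn.Module):
    """Toy recurrent tracker: at each step it sees the local diffusion feature
    vector (e.g., resampled DWI signal at the current point) and predicts a
    unit vector for the next tracking direction."""
    def __init__(self, n_features=100, hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, x):                       # x: (batch, steps, n_features)
        h, _ = self.gru(x)
        d = self.head(h)                        # (batch, steps, 3)
        return d / (d.norm(dim=-1, keepdim=True) + 1e-8)

# Training on one toy batch: minimize the angle (via negative cosine) between
# predicted directions and reference directions taken from existing streamlines.
model = StreamlineRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 40, 100)                     # 8 streamlines, 40 steps, 100 features
target = torch.nn.functional.normalize(torch.randn(8, 40, 3), dim=-1)
for _ in range(10):
    pred = model(x)
    loss = -(pred * target).sum(-1).mean()      # negative cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```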

Philippe Poulin, Marc-Alexandre Côté, Jean-Christophe Houde, Laurent Petit, Peter F. Neher, Klaus H. Maier-Hein, Hugo Larochelle, Maxime Descoteaux
FiberNET: An Ensemble Deep Learning Framework for Clustering White Matter Fibers

White matter tracts are commonly analyzed in studies of micro-structural integrity and anatomical connectivity in the brain. Over the last decade, it has been an open problem how best to cluster white matter fibers, extracted from whole-brain tractography, into anatomically meaningful groups. Some existing techniques use region of interest (ROI) based clustering, atlas-based labeling, or unsupervised spectral clustering. ROI-based clustering is popular for analyzing anatomical connectivity among a set of ROIs, but it does not always partition the brain into recognizable fiber bundles. Here we propose an approach using convolutional neural networks (CNNs) to learn shape features of the fiber bundles, which are then exploited to cluster white matter fibers. To achieve such clustering, we first re-parameterize the fibers in an intrinsic space; the clustering is then performed in the induced parameterized coordinates. To our knowledge, this is one of the first approaches to fiber clustering using deep learning techniques. The results show strong accuracy, on par with or better than other state-of-the-art methods.

Vikash Gupta, Sophia I. Thomopoulos, Faisal M. Rashid, Paul M. Thompson
Supra-Threshold Fiber Cluster Statistics for Data-Driven Whole Brain Tractography Analysis

This work presents a supra-threshold fiber cluster (STFC) analysis that leverages the whole brain fiber geometry to enhance statistical group difference analysis. The proposed method consists of (1) a study-specific data-driven tractography parcellation to obtain white matter (WM) tract parcels according to the WM anatomy and (2) a nonparametric permutation-based STFC test to identify significant differences between study populations (e.g. disease and healthy). The basic idea of our method is that a WM parcel’s neighborhood (parcels with similar WM anatomy) can support the parcel’s statistical significance when correcting for multiple comparisons. The method is demonstrated by application to a multi-shell diffusion MRI dataset from 59 individuals, including 30 attention deficit hyperactivity disorder (ADHD) patients and 29 healthy controls (HCs). Evaluations are conducted using both synthetic and real data. The results indicate that our STFC method gives greater sensitivity in finding group differences in WM tract parcels compared to several traditional multiple comparison correction methods.

Fan Zhang, Weining Wu, Lipeng Ning, Gloria McAnulty, Deborah Waber, Borjan Gagoski, Kiera Sarill, Hesham M. Hamoda, Yang Song, Weidong Cai, Yogesh Rathi, Lauren J. O’Donnell
White Matter Fiber Representation Using Continuous Dictionary Learning

With increasingly sophisticated Diffusion Weighted MRI acquisition methods and modeling techniques, very large sets of streamlines (fibers) are presently generated per imaged brain. These reconstructions of white matter architecture, which are important for human brain research and pre-surgical planning, require a large amount of storage and are often unwieldy and difficult to manipulate and analyze. This work proposes a novel continuous parsimonious framework in which signals are sparsely represented in a dictionary with continuous atoms. The significant innovation in our new methodology is the ability to train such continuous dictionaries, unlike previous approaches that either used pre-fixed continuous transforms or training with finite atoms. This leads to an innovative fiber representation method, which uses Continuous Dictionary Learning to sparsely code each fiber with high accuracy. This method is tested on numerous tractograms produced from the Human Connectome Project data and achieves state-of-the-art performances in compression ratio and reconstruction error.

Guy Alexandroni, Yana Podolsky, Hayit Greenspan, Tal Remez, Or Litany, Alexander Bronstein, Raja Giryes
Fiber Orientation Estimation Guided by a Deep Network

Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain’s white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs. However, accurate estimation of complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent diffusion signals. To estimate the mixture fractions of the dictionary atoms, a deep network is designed to solve the sparse reconstruction problem. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding a dense basis of FOs is used and a weighted ℓ1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and typical clinical dMRI data. The results demonstrate the benefit of using a deep network for FO estimation.

Chuyang Ye, Jerry L. Prince
FOD Restoration for Enhanced Mapping of White Matter Lesion Connectivity

To achieve improved understanding of white matter (WM) lesions and their effect on brain functions, it is important to obtain a comprehensive map of their connectivity. However, changes of the cellular environment in WM lesions attenuate diffusion MRI (dMRI) signals and make the robust estimation of fiber orientation distributions (FODs) difficult. In this work, we integrate techniques from image inpainting and compartment modeling to develop a novel method for enhancing FOD estimation in WM lesions from multi-shell dMRI, which is becoming increasingly popular with the success of the Human Connectome Project (HCP). By using FODs estimated from normal WM as the boundary condition, our method iteratively cycles through two key steps: diffusion-based inpainting and FOD reconstruction with compartment modeling for the successful restoration of FODs in WM lesions. In our experiments, we carry out extensive simulations to quantitatively demonstrate that our method outperforms a state-of-the-art method in angular accuracy and compartment parameter estimation. We also apply our method to multi-shell imaging data from 23 multiple sclerosis (MS) patients and one LifeSpan subject of HCP with WM lesion. We show that our method achieves superior performance in mapping the connectivity of WM lesions with FOD-based tractography.

Wei Sun, Lilyana Amezcua, Yonggang Shi
Learning-Based Ensemble Average Propagator Estimation

By capturing the anisotropic water diffusion in tissue, diffusion magnetic resonance imaging (dMRI) provides a unique tool for noninvasively probing the tissue microstructure and orientation in the human brain. The diffusion profile can be described by the ensemble average propagator (EAP), which is inferred from observed diffusion signals. However, accurate EAP estimation using the number of diffusion gradients that is clinically practical can be challenging. In this work, we propose a deep learning algorithm for EAP estimation, which is named learning-based ensemble average propagator estimation (LEAPE). The EAP is commonly represented by a basis and its associated coefficients, and here we choose the SHORE basis and design a deep network to estimate the coefficients. The network comprises two cascaded components. The first component is a multilayer perceptron (MLP) that simultaneously predicts the unknown coefficients. However, typical training loss functions, such as mean squared errors, may not properly represent the geometry of the possibly non-Euclidean space of the coefficients, which in particular causes problems for the extraction of directional information from the EAP. Therefore, to regularize the training, in the second component we compute an auxiliary output of approximated fiber orientation (FO) errors with the aid of a second MLP that is trained separately. We performed experiments using dMRI data that resemble clinically achievable q-space sampling, and observed promising results compared with the conventional EAP estimation method.

Chuyang Ye
A Sparse Bayesian Learning Algorithm for White Matter Parameter Estimation from Compressed Multi-shell Diffusion MRI

We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data is represented in a dictionary form using a non-monoexponential decay model of diffusion, based on continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated with a linear un-mixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientations improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.

Pramod Kumar Pisharady, Stamatios N. Sotiropoulos, Guillermo Sapiro, Christophe Lenglet
Bayesian Image Quality Transfer with CNNs: Exploring Uncertainty in dMRI Super-Resolution

In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to state-of-the-art SR performance on diffusion MR brain images in terms of errors with respect to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.
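The sketch below illustrates, in miniature and under stated assumptions, the two uncertainty components mentioned above: a heteroscedastic Gaussian likelihood whose variance is predicted per output (intrinsic uncertainty), and dropout kept active at test time as a crude stand-in for variational dropout (parameter uncertainty). The actual 3D super-resolution network is not reproduced.

```python
import torch
import torch.nn as nn

class HeteroscedasticNet(nn.Module):
    """Tiny regressor that outputs both a mean and a log-variance per target,
    with dropout kept active at test time for Monte Carlo sampling."""
    def __init__(self, d_in=32, d_out=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Dropout(0.2))
        self.mean = nn.Linear(64, d_out)
        self.logvar = nn.Linear(64, d_out)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.logvar(h)

def heteroscedastic_loss(mu, logvar, y):
    # Gaussian negative log-likelihood with input-dependent variance.
    return (0.5 * torch.exp(-logvar) * (y - mu) ** 2 + 0.5 * logvar).mean()

model = HeteroscedasticNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 32), torch.randn(256, 8)
for _ in range(100):
    mu, logvar = model(x)
    loss = heteroscedastic_loss(mu, logvar, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Monte Carlo dropout at test time: keep dropout on and sample repeatedly.
model.train()                                   # leaves dropout active
samples = torch.stack([model(x[:4])[0] for _ in range(20)])
print("parameter (epistemic) uncertainty:", samples.std(0).mean().item())
print("intrinsic (aleatoric) uncertainty:", torch.exp(0.5 * model(x[:4])[1]).mean().item())
```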

Ryutaro Tanno, Daniel E. Worrall, Aurobrata Ghosh, Enrico Kaden, Stamatios N. Sotiropoulos, Antonio Criminisi, Daniel C. Alexander
q-Space Upsampling Using x-q Space Regularization

Acquisition time in diffusion MRI increases with the number of diffusion-weighted images that need to be acquired. Particularly in clinical settings, scan time is limited and only a sparse coverage of the vast q-space is possible. In this paper, we show how non-local self-similar information in the x-q space of diffusion MRI data can be harnessed for q-space upsampling. More specifically, we establish the relationships between signal measurements in x-q space using a patch matching mechanism that caters to unstructured data. We then encode these relationships in a graph and use it to regularize an inverse problem associated with recovering a high q-space resolution dataset from its low-resolution counterpart. Experimental results indicate that the high-resolution datasets reconstructed using the proposed method exhibit greater quality, both quantitatively and qualitatively, than those obtained using conventional methods, such as interpolation using spherical radial basis functions (SRBFs).

Geng Chen, Bin Dong, Yong Zhang, Dinggang Shen, Pew-Thian Yap
Neighborhood Matching for Curved Domains with Application to Denoising in Diffusion MRI

In this paper, we introduce a strategy for performing neighborhood matching on general non-Euclidean and non-flat domains. Essentially, this involves representing the domain as a graph and then extending the concept of convolution from regular grids to graphs. Acknowledging the fact that convolutions are features of local neighborhoods, neighborhood matching is carried out using the outcome of multiple convolutions at multiple scales. All these concepts are encapsulated in a sound mathematical framework, called graph framelet transforms (GFTs), which allows signals residing on non-flat domains to be decomposed according to multiple frequency subbands for rich characterization of signal patterns. We apply GFTs to the problem of denoising of diffusion MRI data, which can reside on domains defined in very different ways, such as on a shell, on multiple shells, or on a Cartesian grid. Our non-local formulation of the problem allows information of diffusion signal profiles of drastically different orientations to be borrowed for effective denoising.

Geng Chen, Bin Dong, Yong Zhang, Dinggang Shen, Pew-Thian Yap
Gray Matter Surface Based Spatial Statistics (GS-BSS) in Diffusion Microstructure

Tract-based spatial statistics (TBSS) has proven to be a popular technique for performing voxel-wise statistical analysis that aims to improve the sensitivity and interpretability of multi-subject diffusion imaging studies in white matter. With the advent of advanced diffusion MRI models, e.g. neurite orientation dispersion and density imaging (NODDI), it is of interest to analyze microstructural changes within gray matter (GM). A recent study proposed using NODDI in gray matter based spatial statistics (N-GBSS) to perform voxel-wise statistical analysis on GM microstructure. N-GBSS adapts TBSS by skeletonizing the GM and projecting diffusion metrics to a cortical ribbon. In this study, we propose an alternative approach, known as gray matter surface based spatial statistics (GS-BSS), that performs statistical analysis on gray matter surfaces by incorporating established registration techniques based on GM surface segmentation of structural images. Diffusion microstructure features from NODDI and the GM surfaces are transferred to standard space. All surfaces are then projected onto a common GM surface non-linearly using diffeomorphic spectral matching on cortical surfaces. Prior post-mortem studies have shown reduced dendritic length in the prefrontal cortex in schizophrenia and bipolar disorder populations. To validate the results, statistical tests are compared between GS-BSS and N-GBSS to study the differences between healthy and psychosis populations. Significant results confirming the microstructural changes are presented. GS-BSS shows higher sensitivity to group differences between healthy and psychosis populations in previously known regions.

Prasanna Parvathaneni, Baxter P. Rogers, Yuankai Huo, Kurt G. Schilling, Allison E. Hainline, Adam W. Anderson, Neil D. Woodward, Bennett A. Landman
A Bag-of-Features Approach to Predicting TMS Language Mapping Results from DSI Tractography

Transcranial Magnetic Stimulation (TMS) can be used to indicate language-related cortex by highly focal temporary inhibition. Diffusion Spectrum Imaging (DSI) reconstructs fiber tracts that connect specific cortex regions. We present a novel machine learning approach that predicts a functional classification (TMS) from local structural connectivity (DSI), and a formal statistical hypothesis test to detect a significant relationship between brain structure and function. Features are chosen so that their weights in the classifier provide insight into anatomical differences that may underlie specificity in language functions. Results are reported for target sites systematically covering Broca’s region, which constitutes a core node in the language network.

Mohammad Khatami, Katrin Sakreida, Georg Neuloh, Thomas Schultz
Patient-Specific Skeletal Muscle Fiber Modeling from Structure Tensor Field of Clinical CT Images

We propose an optimization method for estimating patient-specific muscle fiber arrangement from clinical CT. Our approach first computes the structure tensor field to estimate local orientation; a geometric template representing the fiber arrangement is then fitted using a B-spline deformation that maximizes agreement with the local orientation under a smoothness penalty. The initialization is computed with a previously proposed algorithm that takes into account only the muscle's surface shape. Evaluation was performed using a CT volume (1.0 mm³/voxel) and high-resolution optical images of a serial cryo-section (0.1 mm³/voxel). The mean fiber distance error at initialization of 6.00 mm decreased to 2.78 mm after the proposed optimization for the gluteus maximus muscle, and from 5.28 mm to 3.09 mm for the gluteus medius muscle. Results from 20 patient CT images suggest that the proposed algorithm reconstructs an anatomically more plausible fiber arrangement than the previous method.

Yoshito Otake, Futoshi Yokota, Norio Fukuda, Masaki Takao, Shu Takagi, Naoto Yamamura, Lauren J. O’Donnell, Carl-Fredrik Westin, Nobuhiko Sugano, Yoshinobu Sato
Revealing Hidden Potentials of the q-Space Signal in Breast Cancer

Mammography screening for early detection of breast lesions currently suffers from high amounts of false positive findings, which result in unnecessary invasive biopsies. Diffusion-weighted MR images (DWI) can help to reduce many of these false-positive findings prior to biopsy. Current approaches estimate tissue properties by means of quantitative parameters taken from generative, biophysical models fit to the q-space encoded signal under certain assumptions regarding noise and spatial homogeneity. This process is prone to fitting instability and partial information loss due to model simplicity. We reveal unexplored potentials of the signal by integrating all data processing components into a convolutional neural network (CNN) architecture that is designed to propagate clinical target information down to the raw input images. This approach enables simultaneous and target-specific optimization of image normalization, signal exploitation, global representation learning and classification. Using a multicentric data set of 222 patients, we demonstrate that our approach significantly improves clinical decision making with respect to the current state of the art.

Paul F. Jäger, Sebastian Bickelhaupt, Frederik Bernd Laun, Wolfgang Lederer, Daniel Heidi, Tristan Anselm Kuder, Daniel Paech, David Bonekamp, Alexander Radbruch, Stefan Delorme, Heinz-Peter Schlemmer, Franziska Steudle, Klaus H. Maier-Hein
Denoising Moving Heart Wall Fibers Using Cartan Frames

Current denoising methods for diffusion weighted images can obtain high quality estimates of local fiber orientation in static structures. However, recovering reliable fiber orientation from in vivo data is considerably more difficult. To address this problem we use a geometric approach, with a spatio-temporal Cartan frame field to model spatial (within time-frame) and temporal (between time-frame) rotations within a single consistent mathematical framework. The key idea is to calculate the Cartan structural connection parameters, and then fit probability distributions to these volumetric scalar fields. Voxels with low log-likelihood with respect to these distributions signal geometrical “noise” or outliers. With experiments on both simulated (canine) moving fiber data and on an in vivo human heart sequence, we demonstrate the promise of this approach for outlier detection and denoising via inpainting.

Babak Samari, Tristan Aumentado-Armstrong, Gustav Strijkers, Martijn Froeling, Kaleem Siddiqi
TBS: Tensor-Based Supervoxels for Unfolding the Heart

Investigation of the myofiber structure of the heart is desired for studies of anatomy and diseases. However, it is difficult to understand the left ventricle structure intuitively because it consists of three layers with different myofiber orientations. In this work, we propose an unfolding method for micro-focus X-ray CT (µCT) volumes of the heart. First, we explore a novel supervoxel over-segmentation technique, Tensor-Based Supervoxels (TBS), which allows us to divide the left ventricle into three layers. We utilize TBS and B-spline curves for extraction of the layers. Finally we project µCT intensities in each layer to an unfolded view. Experiments are performed using three µCT images of the left ventricle acquired from canine heart specimens. In all cases, the myofiber structure could be observed clearly in the unfolded views. This is promising for helping cardiac studies.

Hirohisa Oda, Holger R. Roth, Kanwal K. Bhatia, Masahiro Oda, Takayuki Kitasaka, Toshiaki Akita, Julia A. Schnabel, Kensaku Mori

Image Segmentation and Modelling

Frontmatter
A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans

Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy for some small organs (e.g., the pancreas) is sometimes unsatisfactory, arguably because deep networks are easily disrupted by the complex and variable background regions which occupy a large fraction of the input volume. In this paper, we formulate this problem as a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. At the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than 4%, measured by the average Dice-Sørensen Coefficient (DSC). In addition, we report a 62.43% DSC in the worst case, which guarantees the reliability of our approach in clinical applications.
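The fixed-point idea can be sketched as follows, assuming a hypothetical trained `segment` function that maps a (sub)volume to a binary mask: segment the full volume, crop the input around the predicted mask with a margin, re-segment, and stop when successive masks agree. The margin, tolerance and stopping rule used here are assumptions.

```python
import numpy as np

def bounding_box(mask, margin=16):
    """Axis-aligned bounding box of a binary mask, padded by a margin."""
    idx = np.where(mask)
    lo = [max(0, int(i.min()) - margin) for i in idx]
    hi = [min(s, int(i.max()) + 1 + margin) for i, s in zip(idx, mask.shape)]
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def fixed_point_segmentation(volume, segment, max_iter=5, tol=0.95):
    """Iteratively shrink the input region to the predicted mask until the
    segmentation stops changing (Dice between successive masks > tol).
    `segment(subvolume)` is a hypothetical trained network returning a
    binary mask of the same shape as its input."""
    mask = segment(volume)                       # initial pass on the full volume
    if mask.sum() == 0:                          # nothing found; stop early
        return mask
    for _ in range(max_iter):
        box = bounding_box(mask)
        new_mask = np.zeros_like(mask)
        new_mask[box] = segment(volume[box])     # re-segment the cropped region
        inter = np.logical_and(mask, new_mask).sum()
        dice = 2.0 * inter / (mask.sum() + new_mask.sum() + 1e-8)
        mask = new_mask
        if dice > tol:                           # reached the fixed point
            break
    return mask
```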

Yuyin Zhou, Lingxi Xie, Wei Shen, Yan Wang, Elliot K. Fishman, Alan L. Yuille
Semi-supervised Learning for Biomedical Image Segmentation via Forest Oriented Super Pixels (Voxels)

In this paper, we focus on semi-supervised learning for biomedical image segmentation, so as to take advantage of large amounts of unlabelled data. We observe that there usually exist homogeneous connected areas of low confidence in biomedical images, which tend to confuse a classifier trained with limited labelled samples. To cope with this difficulty, we propose to construct forest-oriented super pixels (voxels) to augment the standard random forest classifier, in which the super pixels (voxels) are built upon the forest-based code. Compared to the state-of-the-art, our proposed method shows superior segmentation performance on challenging 2D/3D biomedical images. The full implementation (based on Matlab) is available at https://github.com/lingucv/ssl_superpixels.

Lin Gu, Yinqiang Zheng, Ryoma Bise, Imari Sato, Nobuaki Imanishi, Sadakazu Aiso
Towards Automatic Semantic Segmentation in Volumetric Ultrasound

3D ultrasound is rapidly emerging as a viable imaging modality for routine prenatal examinations. However, the lack of efficient tools to decompose the volumetric data greatly limits its widespread use. In this paper, we address the problem of volumetric segmentation in ultrasound to promote volume-based, precise maternal and fetal health monitoring. Our contribution is threefold. First, we propose the first fully automatic framework for the simultaneous segmentation of multiple objects, including the fetus, gestational sac and placenta, in ultrasound volumes, which remains a rarely studied but significant challenge. Second, based on our customized 3D Fully Convolutional Network, we propose to inject a Recurrent Neural Network (RNN) to flexibly explore 3D semantic knowledge from a novel, sequential perspective, and thereby significantly refine the local segmentation result, which is initially corrupted by the ubiquitous boundary uncertainty in ultrasound volumes. Third, considering the sequence hierarchy, we introduce a hierarchical deep supervision mechanism to effectively boost the information flow within the RNN and further improve the semantic segmentation results. Extensively validated on our in-house large datasets, our approach achieves superior performance and is promising for boosting the interpretation of prenatal ultrasound volumes. Our framework is general and can be easily extended to other volumetric ultrasound segmentation tasks.

Xin Yang, Lequan Yu, Shengli Li, Xu Wang, Na Wang, Jing Qin, Dong Ni, Pheng-Ann Heng
Automatic Quality Control of Cardiac MRI Segmentation in Large-Scale Population Imaging

The trend towards large-scale studies, including population imaging, poses new challenges in terms of quality control (QC). This is a particular issue when automatic processing tools such as image segmentation methods are employed to derive quantitative measures or biomarkers for further analyses. Manual inspection and visual QC of each segmentation result is not feasible at large scale. However, it is important to be able to detect when an automatic method fails, so as to avoid including wrong measurements in subsequent analyses, which could otherwise lead to incorrect conclusions. To overcome this challenge, we explore an approach for predicting segmentation quality based on reverse classification accuracy, which enables us to discriminate between successful and failed cases. We validate this approach on a large cohort of cardiac MRI for which manual QC scores were available. Our results on 7,425 cases demonstrate the potential for fully automatic QC in the context of large-scale population imaging such as the UK Biobank Imaging Study.
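A schematic of reverse classification accuracy, with hypothetical `fit` and `predict` stand-ins for the reverse segmenter, is shown below: a proxy quality score for a prediction without ground truth is obtained by training on the test case's own prediction and taking the best Dice achieved on a reference database that does have ground truth.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def reverse_classification_accuracy(image, predicted_mask, references, fit, predict):
    """Estimate segmentation quality without ground truth for `image`.
    `fit(image, mask)` trains a reverse segmenter on the single test case,
    using its *predicted* mask as pseudo ground truth; `predict(model, img)`
    applies it to a reference image. Both are hypothetical stand-ins for,
    e.g., an atlas-based or classifier-based reverse segmenter.
    The best Dice achieved on the reference database (which has real ground
    truth) serves as a proxy for the quality of `predicted_mask`."""
    model = fit(image, predicted_mask)
    scores = [dice(predict(model, ref_img), ref_gt) for ref_img, ref_gt in references]
    return max(scores)

# Usage idea: flag cases whose proxy score falls below a chosen threshold for manual QC.
# if reverse_classification_accuracy(img, seg, refs, fit, predict) < 0.7:
#     send_to_manual_review(img)
```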

Robert Robinson, Vanya V. Valindria, Wenjia Bai, Hideaki Suzuki, Paul M. Matthews, Chris Page, Daniel Rueckert, Ben Glocker
Towards Image-Guided Pancreas and Biliary Endoscopy: Automatic Multi-organ Segmentation on Abdominal CT with Dense Dilated Networks

Segmentation of anatomy on abdominal CT enables patient-specific image guidance in clinical endoscopic procedures and in endoscopy training. Because existing multi-atlas- and statistical-shape-model-based segmentations require robust inter-patient registration of abdominal images, which remains challenging, there is a need for automated multi-organ segmentation that does not rely on registration. We present a deep-learning-based algorithm for segmenting the liver, pancreas, stomach, and esophagus using dilated convolution units with dense skip connections and a new spatial prior. The algorithm was evaluated with an 8-fold cross-validation and compared to a joint-label-fusion-based segmentation in terms of Dice scores and boundary distances. The proposed algorithm yielded more accurate segmentations than the joint-label-fusion-based algorithm for the pancreas (median Dice scores of 66 vs 37), stomach (83 vs 72) and esophagus (73 vs 54), and a marginally less accurate segmentation for the liver (92 vs 93). We conclude that dilated convolutional networks with dense skip connections can segment the liver, pancreas, stomach and esophagus from abdominal CT without image registration and have the potential to support image-guided navigation in gastrointestinal endoscopy procedures.
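A hypothetical PyTorch sketch of a dilated convolution unit with dense skip connections, in the spirit of the architecture described above; the growth rate, dilation factors and normalisation are illustrative choices, not the authors' exact configuration, and the spatial prior is omitted.

import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    def __init__(self, in_ch, growth=8, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, 3, padding=d, dilation=d),
                nn.BatchNorm3d(growth), nn.ReLU()))
            ch += growth                       # dense connectivity: inputs accumulate
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseDilatedBlock(in_ch=4)
y = block(torch.randn(1, 4, 16, 32, 32))       # (1, block.out_channels, 16, 32, 32)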

Eli Gibson, Francesco Giganti, Yipeng Hu, Ester Bonmati, Steve Bandula, Kurinchi Gurusamy, Brian R. Davidson, Stephen P. Pereira, Matthew J. Clarkson, Dean C. Barratt
Holistic Segmentation of Intermuscular Adipose Tissues on Thigh MRI

Muscular dystrophies (MD) cause muscles to gradually degenerate into fat. In order to effectively study and track disease progression, it is important to quantify both muscle and fat volumes, especially the intermuscular adipose tissue (IMAT). Existing methods are mostly based on unsupervised pixel clustering and morphological models. We propose a method integrating two holistic neural networks (one for edges and one for regions) and a dual active contour model to accurately locate the fascia lata and segment multiple tissue types on thigh MRIs. The proposed method is robust to image artifacts and weak boundaries, and thus performs well for severe MD cases. Our method was tested on 104 data sets and achieved Dice coefficients of 0.940 and 0.943 for muscle and IMAT, respectively, in challenging severe cases.

Jianhua Yao, William Kovacs, Nathan Hsieh, Chia-Ying Liu, Ronald M. Summers
Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images

Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
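Temporal consistency via Kalman filtering, as used in the modeling stage, can be illustrated with a toy numpy example: a constant-velocity Kalman filter smooths a noisy per-frame scalar measurement (here a hypothetical annular diameter). The state model, noise levels and measured quantity are assumptions for illustration, not the paper's filter over medial model parameters.

import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-2):
    # state: [value, velocity]; measurement: value only
    A = np.array([[1.0, 1.0], [0.0, 1.0]])      # constant-velocity transition
    H = np.array([[1.0, 0.0]])
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.array([z[0], 0.0]), np.eye(2)
    out = []
    for zk in z:
        x, P = A @ x, A @ P @ A.T + Q           # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.array([zk]) - H @ x)    # update with the new measurement
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

phase = np.linspace(0, 2 * np.pi, 40)           # one cardiac cycle, 40 frames
noisy = 30 + 2 * np.sin(phase) + np.random.default_rng(0).normal(0, 0.5, 40)
smoothed = kalman_smooth(noisy)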

Alison M. Pouch, Ahmed H. Aly, Eric K. Lai, Natalie Yushkevich, Rutger H. Stoffers, Joseph H. Gorman IV, Albert T. Cheung, Joseph H. Gorman III, Robert C. Gorman, Paul A. Yushkevich
Unbiased Shape Compactness for Segmentation

We propose to constrain segmentation functionals with a dimensionless, unbiased and position-independent shape compactness prior, which we solve efficiently with an alternating direction method of multipliers (ADMM). Involving a squared sum of pairwise potentials, our prior results in a challenging high-order optimization problem over dense (fully connected) graphs. We split the problem into a sequence of easier sub-problems, each solved efficiently at every iteration: (i) a sparse-matrix inversion based on the Woodbury identity, (ii) a closed-form solution of a cubic equation, and (iii) a graph-cut update of a sub-modular pairwise sub-problem with a sparse graph. We deploy our prior in an energy-minimization framework, in conjunction with a supervised classifier term based on CNNs and standard regularization constraints. We demonstrate the usefulness of our energy in several medical applications. In particular, we report comprehensive evaluations of our fully automated algorithm over 40 subjects, showing competitive performance on the challenging task of abdominal aorta segmentation in MRI.
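For intuition about dimensionless, position-independent compactness, the following numpy toy evaluates the isoperimetric quotient P^2 / (4*pi*A) on a disk and an elongated stripe; the paper's actual prior is a squared sum of pairwise potentials over a dense graph optimised with ADMM, which this example does not reproduce.

import numpy as np

def compactness(mask):
    area = mask.sum()
    # 4-neighbour boundary length: label changes along rows/columns plus the image border
    m = mask.astype(int)
    perim = (np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
             + m[0, :].sum() + m[-1, :].sum() + m[:, 0].sum() + m[:, -1].sum())
    return perim ** 2 / (4 * np.pi * area)

yy, xx = np.mgrid[-50:50, -50:50]
disk = (xx ** 2 + yy ** 2) < 30 ** 2            # compact shape -> small value
stripe = np.zeros((100, 100), bool)
stripe[48:52] = True                            # elongated shape -> large value
print(compactness(disk), compactness(stripe))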

Jose Dolz, Ismail Ben Ayed, Christian Desrosiers
Joint Reconstruction and Segmentation of 7T-like MR Images from 3T MRI Based on Cascaded Convolutional Neural Networks

7T MRI scanners provide images with higher resolution and better contrast than 3T scanners, which benefits many medical analysis tasks, including tissue segmentation. However, only a very limited number of 7T MRI scanners are currently available worldwide. This motivates us to propose a novel image post-processing framework that can jointly generate high-resolution 7T-like images and their corresponding high-quality 7T-like tissue segmentation maps, solely from routine 3T MR images. Our proposed framework comprises two parallel components, namely (1) reconstruction and (2) segmentation. The reconstruction component consists of multi-step cascaded convolutional neural networks (CNNs) that map the input 3T MR image to a 7T-like MR image in terms of both resolution and contrast. Similarly, the segmentation component involves another set of cascaded CNNs, with a different architecture, to generate high-quality segmentation maps. Cascaded feedback between the two parallel CNN streams allows both tasks to benefit from each other when learning the respective reconstruction and segmentation mappings. For evaluation, we tested our framework on 15 subjects (with paired 3T and 7T images) using leave-one-out cross-validation. The experimental results show that our estimated 7T-like images have richer anatomical detail and yield better segmentation results than the 3T MRI. Furthermore, our method also achieved better results in both the reconstruction and segmentation tasks, compared to state-of-the-art methods.
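A hypothetical PyTorch sketch of two parallel cascades with mutual feedback: at each cascade step the reconstruction branch also sees the current segmentation estimate and the segmentation branch sees the current reconstruction. Layer sizes, the number of cascade steps and the number of tissue classes are illustrative, not the authors' architecture.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(out_ch, out_ch, 3, padding=1))

class JointCascade(nn.Module):
    def __init__(self, n_tissues=3, steps=2):
        super().__init__()
        self.n_tissues = n_tissues
        self.recon = nn.ModuleList([conv_block(1 + n_tissues, 1) for _ in range(steps)])
        self.seg = nn.ModuleList([conv_block(1 + 1, n_tissues) for _ in range(steps)])

    def forward(self, x3t):                                       # (B, 1, D, H, W) 3T input
        recon = x3t
        seg = torch.zeros(x3t.size(0), self.n_tissues, *x3t.shape[2:], device=x3t.device)
        for r, s in zip(self.recon, self.seg):
            recon = r(torch.cat([recon, seg.softmax(1)], dim=1))  # segmentation feeds reconstruction
            seg = s(torch.cat([x3t, recon], dim=1))               # reconstruction feeds segmentation
        return recon, seg                                         # 7T-like image and tissue logits

recon7t, seg7t = JointCascade()(torch.randn(1, 1, 8, 32, 32))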

Khosro Bahrami, Islem Rekik, Feng Shi, Dinggang Shen
Development of a CT-based Patient-Specific Model of the Electrically Stimulated Cochlea

Cochlear implants (CIs) are neural prosthetics that are used to treat sensory-based hearing loss. There are over 320,000 recipients worldwide. After implantation, each CI recipient goes through a sequence of programming sessions where audiologists determine several CI processor settings to attempt to optimize hearing outcomes. However, this process is difficult because there are no objective measures available to indicate what setting changes will lead to better hearing outcomes. It has been shown that a simplified model of electrically induced neural activation patterns within the cochlea can be created using patient CT images, and that audiologists can use this information to determine settings that lead to better hearing performance. A more comprehensive physics-based patient-specific model of neural activation has the potential to lead to even greater improvement in outcomes. In this paper, we propose a method to create such customized electro-anatomical models of the electrically stimulated cochlea. We compare the accuracy of our patient-specific models to the accuracy of generic models. Our results show that the patient-specific models are on average more accurate than the generic models, which motivates the use of a patient-specific modeling approach for cochlear implant patients.

Ahmet Cakir, Benoit M. Dawant, Jack H. Noble
Compresso: Efficient Compression of Segmentation Data for Connectomics

Recent advances in segmentation methods for connectomics and biomedical imaging produce very large datasets with labels that assign object classes to image pixels. The resulting label volumes are bigger than the raw image data and need compression for efficient storage and transfer. General-purpose compression methods are less effective because the label data consists of large low-frequency regions with structured boundaries unlike natural image data. We present Compresso, a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D. We compare our method to existing compression schemes and provide a detailed evaluation on eleven biomedical and image segmentation datasets. Our method provides a factor of 600–2200x compression for label volumes, with running times suitable for practice.
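The observation that label volumes consist of large homogeneous regions with structured boundaries can be conveyed with a toy numpy/scipy sketch that stores a packed boundary bitmask plus one label per enclosed region; this only illustrates the intuition and is not the Compresso sliding-window encoding (a real codec must also recover the labels on the boundary pixels themselves).

import numpy as np
from scipy import ndimage

labels = np.zeros((128, 128), np.uint64)
labels[20:80, 20:80] = 7
labels[60:120, 90:125] = 42                        # toy label image with two objects

# boundary map: pixels whose right or lower neighbour carries a different label
boundary = np.zeros_like(labels, bool)
boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]

packed_boundary = np.packbits(boundary)            # 1 bit per pixel
components, n = ndimage.label(~boundary)           # homogeneous regions between boundaries
region_labels = ndimage.labeled_comprehension(
    labels, components, np.arange(1, n + 1), lambda v: v[0], np.uint64, 0)

print(labels.nbytes, packed_boundary.nbytes + region_labels.nbytes)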

Brian Matejek, Daniel Haehn, Fritz Lekschas, Michael Mitzenmacher, Hanspeter Pfister
Combining Spatial and Non-spatial Dictionary Learning for Automated Labeling of Intra-ventricular Hemorrhage in Neonatal Brain MRI

A specific challenge to accurate tissue quantification in premature neonatal MRI data is posed by Intra-Ventricular Hemorrhage (IVH), where severe cases can be accompanied by extreme and complex Ventriculomegaly (VM). IVH is apparent on MRI as bright signal pooling within the ventricular space, in locations related to the original bleed and to how the blood pools and clots due to gravity. The high variability in the location and extent of IVH and in the shape and size of the ventricles due to VM, combined with a lack of large training sets covering all possible configurations, means it is not feasible to approach the problem using whole-brain dictionary learning. Here, we propose a novel sparse dictionary approach that utilizes a spatial dictionary for normal tissue structures and a non-spatial component to delineate IVH and VM structure. We examine the behavior of this approach on a dataset of premature neonatal MRI scans with severe IVH and VM, finding improvements in segmentation accuracy compared to a conventional segmentation. This approach provides the first automatic whole-brain segmentation framework for severe IVH and VM in premature neonatal brain MRIs.
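A hypothetical scikit-learn sketch of combining a spatial and a non-spatial dictionary at a single voxel: atoms presumed valid at this voxel's location (the spatial component) are stacked with globally shared atoms standing in for IVH/VM appearance (the non-spatial component), a sparse code is computed over both, and the group receiving most of the code energy decides the label. Dictionary contents, sizes and the decision rule are illustrative only, not the paper's formulation.

import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
patch = rng.normal(size=27)                        # one flattened 3x3x3 intensity patch

D_spatial = rng.normal(size=(5, 27))               # placeholder atoms tied to this location
D_global = rng.normal(size=(3, 27))                # placeholder non-spatial IVH/VM atoms
D = np.vstack([D_spatial, D_global])
D /= np.linalg.norm(D, axis=1, keepdims=True)      # SparseCoder expects normalised atoms

coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=3)
code = coder.transform(patch[None, :])[0]

spatial_energy = np.abs(code[:5]).sum()
global_energy = np.abs(code[5:]).sum()
label = "normal tissue" if spatial_energy >= global_energy else "IVH/VM"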

Mengyuan Liu, Steven P. Miller, Vann Chau, Colin Studholme
Backmatter
Metadata
Title: Medical Image Computing and Computer Assisted Intervention − MICCAI 2017
Edited by: Maxime Descoteaux, Lena Maier-Hein, Alfred Franz, Pierre Jannin, D. Louis Collins, Dr. Simon Duchesne
Copyright year: 2017
Electronic ISBN: 978-3-319-66182-7
Print ISBN: 978-3-319-66181-0
DOI: https://doi.org/10.1007/978-3-319-66182-7
