
2015 | Book

Information Processing in Medical Imaging

24th International Conference, IPMI 2015, Sabhal Mor Ostaig, Isle of Skye, UK, June 28 - July 3, 2015, Proceedings

Edited by: Sebastien Ourselin, Daniel C. Alexander, Carl-Fredrik Westin, M. Jorge Cardoso

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the proceedings of the 24th International Conference on Information Processing in Medical Imaging, IPMI 2015, held at the Sabhal Mor Ostaig College on the Isle of Skye, Scotland, UK, in June/July 2015.

The 22 full papers and 41 poster papers presented in this volume were carefully reviewed and selected from 195 submissions. They were organized in topical sections named: probabilistic graphical models; MRI reconstruction; clustering; statistical methods; longitudinal analysis; microstructure imaging; shape analysis; multi-atlas fusion; fast image registration; deformation models; and the poster session.

Table of Contents

Frontmatter

Probabilistic Graphical Models

Frontmatter
Colocalization Estimation Using Graphical Modeling and Variational Bayesian Expectation Maximization: Towards a Parameter-Free Approach

In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually, which undermines the reproducibility of the results. This paper proposes a novel framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art.

Suyash P. Awate, Thyagarajan Radhakrishnan
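As a point of reference for the parameter-free approach above, the classical colocalization measure that such pipelines report is Pearson's correlation between the two fluorescence channels. A minimal sketch, using toy intensity lists rather than real microscopy data:

```python
import math

def pearson_colocalization(channel_a, channel_b):
    """Pearson correlation between two fluorescence channels.

    A classical pixel-wise colocalization measure: +1 means the two
    labels co-occur perfectly, 0 means no linear dependence.
    """
    n = len(channel_a)
    mean_a = sum(channel_a) / n
    mean_b = sum(channel_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(channel_a, channel_b))
    var_a = sum((a - mean_a) ** 2 for a in channel_a)
    var_b = sum((b - mean_b) ** 2 for b in channel_b)
    return cov / math.sqrt(var_a * var_b)

# Two perfectly co-localized channels (intensities proportional)
red = [1.0, 2.0, 3.0, 4.0]
green = [2.0, 4.0, 6.0, 8.0]
print(pearson_colocalization(red, green))  # → 1.0
```

Note that this coefficient is exactly the kind of point estimate whose noise sensitivity and free thresholds motivate the unified probabilistic model of the paper.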
Template-Based Multimodal Joint Generative Model of Brain Data

The advent of large multi-modal imaging databases opens up the opportunity to learn how local intensity patterns covary across multiple modalities. Such models can then be used to describe expected intensities in an unseen image modality given one or more observations, or to detect deviations (e.g. pathology) from the expected intensity patterns. In this work, we propose a template-based multi-modal generative mixture model of imaging data and apply it to the problems of inlier/outlier pattern classification and image synthesis. Results on synthetic and patient data demonstrate that the proposed method is able to synthesise unseen data and accurately localise pathological regions, even in the presence of large abnormalities. They also demonstrate that the proposed model can provide accurate and uncertainty-aware intensity estimates of expected imaging patterns.

M. Jorge Cardoso, Carole H. Sudre, Marc Modat, Sebastien Ourselin
Generative Method to Discover Genetically Driven Image Biomarkers

We present a generative probabilistic approach to the discovery of disease subtypes determined by genetic variants. In many diseases, multiple types of pathology may present simultaneously in a patient, making quantification of the disease challenging. Our method seeks common co-occurring image and genetic patterns in a population as a way to model these two data types jointly. We assume that each patient is a mixture of multiple disease subtypes and use the joint generative model of image and genetic markers to identify disease subtypes guided by known genetic influences. Our model is based on a variant of the so-called topic models that uncover latent structure in a collection of data. We derive an efficient variational inference algorithm to extract patterns of co-occurrence and to quantify the presence of heterogeneous disease processes in each patient. We evaluate the method on simulated data and illustrate its use in the context of Chronic Obstructive Pulmonary Disease (COPD) to characterize the relationship between image and genetic signatures of COPD subtypes in a large patient cohort.

Nematollah K. Batmanghelich, Ardavan Saeedi, Michael Cho, Raul San Jose Estepar, Polina Golland

MRI Reconstruction

Frontmatter
A Joint Acquisition-Estimation Framework for MR Phase Imaging

Measuring the phase of the MR signal faces fundamental challenges such as phase aliasing, noise, and unknown offsets of the coil array. There is a paucity of acquisition, reconstruction and estimation methods that rigorously address these challenges, which reduces the reliability of information processing in the phase domain. We propose a joint acquisition-processing framework that addresses the challenges of MR phase imaging using a rigorous theoretical treatment. Our proposed solution acquires the multi-coil complex data without any increase in acquisition time. Our corresponding estimation algorithm is applied optimally on a voxel-by-voxel basis. Results show that our framework achieves performance gains of up to an order of magnitude compared to existing methods.

Joseph Dagher
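The phase-aliasing problem mentioned above can be illustrated with the textbook 1-D unwrapping rule (not the paper's joint acquisition-estimation method): whenever neighbouring samples jump by more than π, add the multiple of 2π that removes the wrap. A minimal sketch:

```python
import math

def unwrap_phase(phases):
    """1-D phase unwrapping: whenever consecutive samples jump by more
    than pi, shift all subsequent samples by the 2*pi multiple that
    removes the alias."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2 * math.pi
        elif d < -math.pi:
            offset += 2 * math.pi
        out.append(cur + offset)
    return out

# A linear phase ramp wrapped into (-pi, pi]
true_phase = [0.5 * i for i in range(10)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap_phase(wrapped)
```

In the presence of noise this greedy rule fails exactly where the paper argues a rigorous joint treatment is needed.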
A Compressed-Sensing Approach for Super-Resolution Reconstruction of Diffusion MRI

We present an innovative framework for reconstructing high-spatial-resolution diffusion magnetic resonance imaging (dMRI) from multiple low-resolution (LR) images. Our approach combines the twin concepts of compressed sensing (CS) and classical super-resolution to reduce acquisition time while increasing spatial resolution. We use sub-pixel-shifted LR images with down-sampled and non-overlapping diffusion directions to reduce acquisition time. The diffusion signal in the high-resolution (HR) image is represented in a sparsifying basis of spherical ridgelets to model complex fiber orientations with a reduced number of measurements. The HR image is obtained as the solution of a convex optimization problem, which can be solved using the proposed algorithm based on the alternating direction method of multipliers (ADMM). We qualitatively and quantitatively evaluate the performance of our method on two sets of in-vivo human brain data and show its effectiveness in accurately recovering very high resolution diffusion images.

Lipeng Ning, Kawin Setsompop, Oleg Michailovich, Nikos Makris, Carl-Fredrik Westin, Yogesh Rathi
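The ADMM solver mentioned above alternates a quadratic update, a shrinkage (soft-thresholding) step, and a dual ascent step. A toy scalar instance of this splitting, illustrating the structure rather than the paper's spherical-ridgelet reconstruction:

```python
def soft_threshold(v, t):
    """Proximal operator of t*|x| (the shrinkage step in ADMM)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_lasso_1d(b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*(x - b)^2 + lam*|x| with scalar ADMM.

    Splitting x = z, the updates alternate a quadratic solve, a
    shrinkage, and a dual ascent step, the same structure used (in
    high dimensions) for sparse CS reconstruction.
    """
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # quadratic (data) step
        z = soft_threshold(x + u, lam / rho)    # sparsity (shrinkage) step
        u += x - z                              # dual update
    return z

print(admm_lasso_1d(3.0, 1.0))  # converges to soft_threshold(3.0, 1.0) = 2.0
```

In the paper the quadratic step involves the acquisition operator over all shifted LR images and the shrinkage acts on spherical-ridgelet coefficients, but the alternation is the same.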
Accelerated High Spatial Resolution Diffusion-Weighted Imaging

Acquisition of a series of anisotropically oversampled acquisitions (so-called anisotropic “snapshots”) and reconstruction in the image space has recently been proposed to increase the spatial resolution in diffusion-weighted imaging (DWI), providing a theoretical 8x acceleration at equal signal-to-noise ratio (SNR) compared to conventional dense k-space sampling. However, in most works each DW image is reconstructed separately, ignoring the fact that the DW images constitute different views of the same anatomy. In addition, current approaches are limited by their inability to reconstruct a high resolution (HR) acquisition from snapshots with different subsets of diffusion gradients: an isotropic HR gradient image cannot be reconstructed if one of its anisotropic snapshots is missing, for example due to intra-scan motion, even if other snapshots for this gradient were successfully acquired. In this work, we propose a novel multi-snapshot DWI reconstruction technique that simultaneously achieves HR reconstruction and local tissue model estimation while enabling reconstruction from snapshots containing different subsets of diffusion gradients, providing increased robustness to patient motion and potential for acceleration. Our approach is formalized as a joint probabilistic model with missing observations, from which interactions between missing snapshots, HR reconstruction and a generic tissue model naturally emerge. We evaluate our approach with synthetic simulations, a simulated multi-snapshot scenario, and in vivo multi-snapshot imaging. We show that (1) our combined approach ultimately provides both better HR reconstruction and better tissue model estimation and (2) the error in the case of missing snapshots can be quantified. Our novel multi-snapshot technique will enable improved spatial characterization of brain connectivity and microstructure in vivo.

Benoit Scherrer, Onur Afacan, Maxime Taquet, Sanjay P. Prabhu, Ali Gholipour, Simon K. Warfield

Clustering

Frontmatter
Joint Spectral Decomposition for the Parcellation of the Human Cerebral Cortex Using Resting-State fMRI

Identification of functional connections within the human brain has gained a lot of attention due to its potential to reveal neural mechanisms. In a whole-brain connectivity analysis, a critical stage is the computation of a set of network nodes that can effectively represent cortical regions. To address this problem, we present a robust cerebral cortex parcellation method based on spectral graph theory and resting-state fMRI correlations that generates reliable parcellations at the single-subject level and across multiple subjects. Our method models the cortical surface in each hemisphere as a mesh graph represented in the spectral domain with its eigenvectors. We connect cortices of different subjects with each other based on the similarity of their connectivity profiles and construct a multi-layer graph, which effectively captures the fundamental properties of the whole group while preserving individual subject characteristics. Spectral decomposition of this joint graph is used to cluster each cortical vertex into a subregion in order to obtain whole-brain parcellations. Using rs-fMRI data collected from 40 healthy subjects, we show that our proposed algorithm computes highly reproducible parcellations across different groups of subjects and at varying levels of detail, with an average Dice score of 0.78, achieving up to 9% better reproducibility compared to existing approaches. We also report that our group-wise parcellations are functionally more consistent and thus can be reliably used to represent the population in network analyses.

Salim Arslan, Sarah Parisot, Daniel Rueckert
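The Dice score used above to quantify reproducibility is a simple set-overlap measure; a minimal sketch with hypothetical parcel labels and toy vertex sets:

```python
def dice(region_a, region_b):
    """Dice overlap between two vertex sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(region_a), set(region_b)
    return 2.0 * len(a & b) / (len(a) + len(b))

def mean_parcellation_dice(parc1, parc2):
    """Average Dice over matched parcels (same label ids assumed)."""
    labels = set(parc1) & set(parc2)
    return sum(dice(parc1[l], parc2[l]) for l in labels) / len(labels)

# Two toy parcellations of vertices 0..7 into parcels "A" and "B"
p1 = {"A": [0, 1, 2, 3], "B": [4, 5, 6, 7]}
p2 = {"A": [0, 1, 2, 4], "B": [3, 5, 6, 7]}
print(mean_parcellation_dice(p1, p2))  # → 0.75
```

In practice parcels from two runs must first be matched (e.g., by maximal overlap) before averaging; the matching step is omitted here.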
Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling

Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

Ali Gooya, Karim Lekadir, Xenia Alba, Andrew J. Swift, Jim M. Wild, Alejandro F. Frangi

Statistical Methods

Frontmatter
Bootstrapped Permutation Test for Multiresponse Inference on Brain Behavior Associations

Although diagnosis of neurological disorders commonly involves a collection of behavioral assessments, most neuroimaging studies investigating the associations between brain and behavior largely analyze each behavioral measure in isolation. To jointly model multiple behavioral scores, sparse multiresponse regression (SMR) is often used. However, directly applying SMR without statistically controlling for false positives could result in many spurious findings. For models such as SMR, where the distribution of the model parameters is unknown, permutation testing and stability selection are typically used to control for false positives. In this paper, we present another technique for inferring statistically significant features from models with unknown parameter distributions. We refer to this technique as the bootstrapped permutation test (BPT), which uses Studentized statistics to exploit the intuition that the variability in parameter estimates associated with relevant features would likely be higher with responses permuted. On synthetic data, we show that BPT provides higher sensitivity in identifying relevant features from the SMR model than permutation testing and stability selection, while retaining strong control of the false positive rate. We further apply BPT to study the associations between brain connectivity, estimated from pseudo-rest fMRI data of 1,139 fourteen-year-olds, and behavioral measures related to ADHD. Significant connections are found between brain networks known to be implicated in the behavioral tasks involved. Moreover, we validate the identified connections by fitting a regression model on pseudo-rest data with only those connections and applying this model to resting-state fMRI data of 337 left-out subjects to predict their behavioral scores. The predicted scores significantly correlate with the actual scores, verifying the behavioral relevance of the found connections.

Bernard Ng, Jean Baptiste Poline, Bertrand Thirion, Michael Greicius, IMAGEN Consortium
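The basic permutation-test machinery that BPT builds on can be sketched as follows: a plain permutation test for a correlation statistic, without the bootstrap and Studentization refinements the paper adds.

```python
import random

def perm_test_correlation(x, y, n_perm=999, seed=0):
    """Permutation p-value for the correlation between x and y:
    permute the responses, recompute the statistic, and count how
    often the permuted statistic matches or beats the observed one."""
    rng = random.Random(seed)

    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
        da = sum((u - ma) ** 2 for u in a) ** 0.5
        db = sum((v - mb) ** 2 for v in b) ** 0.5
        return num / (da * db)

    observed = abs(corr(x, y))
    hits = 0
    for _ in range(n_perm):
        yp = y[:]
        rng.shuffle(yp)          # break the x-y association
        if abs(corr(x, yp)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction

x = list(range(20))
y = [2 * v + 1 for v in x]           # perfectly correlated responses
print(perm_test_correlation(x, y))   # small p-value for a real association
```

BPT replaces the raw statistic with a Studentized one computed over bootstrap resamples, which is what gives it higher sensitivity at matched false-positive control.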
Controlling False Discovery Rate in Signal Space for Transformation-Invariant Thresholding of Statistical Maps

Thresholding statistical maps with appropriate correction for multiple testing remains a critical and challenging problem in brain mapping. Since the false discovery rate (FDR) criterion was introduced to the neuroimaging community a decade ago, various improvements have been proposed. However, a highly desirable feature, transformation invariance, has not been adequately addressed, especially for voxel-based FDR: thresholding applied after spatial transformation is not necessarily equivalent to transformation applied after thresholding in the original space. We find this problem closely related to another important issue: spatial correlation of signals. A Gaussian random vector-valued image after normalization is a random map from a Euclidean space to a high-dimensional unit sphere. Instead of defining the FDR measure in the image's Euclidean space, we define it in the signals' hyper-spherical space, whose measure not only reflects the intrinsic “volume” of the signals' randomness but also remains invariant under spatial transformations of the image. Experiments with synthetic and real images demonstrate that our method achieves transformation invariance and significantly minimizes the bias introduced by the choice of template images.

Junning Li, Yonggang Shi, Arthur W. Toga

Longitudinal Analysis

Frontmatter
Group Testing for Longitudinal Data

We consider how to test for group differences of shapes given longitudinal data. In particular, we are interested in differences between the longitudinal models of each group's subjects. We introduce a generalization of principal geodesic analysis to the tangent bundle of a shape space. This allows the estimation of the variance and principal directions of the distribution of trajectories that summarize shape variations within the longitudinal data. Each trajectory is parameterized as a point in the tangent bundle. To study statistical differences between two distributions of trajectories, we generalize the Bhattacharyya distance in Euclidean space to the tangent bundle. This not only allows us to take second-order statistics into account, but also serves as our test statistic during permutation testing. Our method is validated on both synthetic and real data, and the experimental results indicate improved statistical power in identifying group differences. In fact, our study sheds new light on group differences in longitudinal corpus callosum shapes of subjects with dementia versus normal controls.

Yi Hong, Nikhil Singh, Roland Kwitt, Marc Niethammer
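The Euclidean Bhattacharyya distance that the paper generalizes has, for two 1-D Gaussians, the closed form D_B = (μ1 − μ2)² / [4(σ1² + σ2²)] + ½ ln[(σ1² + σ2²) / (2σ1σ2)], which depends on both means and variances (the second-order statistics mentioned above). A minimal sketch:

```python
import math

def bhattacharyya_gauss_1d(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two 1-D Gaussians: the Euclidean
    form that the paper generalizes to the tangent bundle."""
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
    term_var = 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2)))
    return term_mean + term_var

print(bhattacharyya_gauss_1d(0.0, 1.0, 0.0, 1.0))  # → 0.0 for identical Gaussians
print(bhattacharyya_gauss_1d(0.0, 1.0, 2.0, 1.0))  # → 0.5
```

Unlike a plain distance between means, the variance term makes two groups with equal means but different spreads distinguishable, which is the property exploited as a permutation-test statistic.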
Spatio-Temporal Signatures to Predict Retinal Disease Recurrence

We propose a method to predict treatment response patterns based on spatio-temporal disease signatures extracted from longitudinal spectral-domain optical coherence tomography (SD-OCT) images. We extract spatio-temporal disease signatures describing the underlying retinal structure and pathology by transforming total retinal thickness maps into a joint reference coordinate system. We formulate the prediction as a multivariate sparse generalized linear model regression based on the aligned signatures. The algorithm predicts if and when recurrence of the disease will occur in the future. Experiments demonstrate that the model identifies predictive and interpretable features in the spatio-temporal signature. In initial experiments, recurrence vs. non-recurrence is predicted with an ROC AUC of 0.99. Based on observed longitudinal morphology changes and a time-to-event Cox regression model, we predict the time to recurrence with a mean absolute error (MAE) of 1.25 months, comparing favorably to elastic net regression (1.34 months) and demonstrating the benefit of a spatio-temporal survival model.

Wolf-Dieter Vogl, Sebastian M. Waldstein, Bianca S. Gerendas, Christian Simader, Ana-Maria Glodan, Dominika Podkowinski, Ursula Schmidt-Erfurth, Georg Langs

Microstructure Imaging

Frontmatter
A Unifying Framework for Spatial and Temporal Diffusion in Diffusion MRI

We propose a novel framework to simultaneously represent the diffusion-weighted MRI (dMRI) signal over diffusion times, gradient strengths and gradient directions. Current frameworks such as the 3D Simple Harmonic Oscillator Reconstruction and Estimation basis (3D-SHORE) only represent the signal over the spatial domain, leaving the temporal dependency as a fixed parameter. However, microstructure-focused techniques such as AxCaliber and ActiveAx provide evidence of the importance of sampling the dMRI space over diffusion time. Up to now, no generalized framework has existed that simultaneously models the dependence of the dMRI signal in space and time. We use a functional basis to fit the 3D+t spatio-temporal dMRI signal, similarly to the 3D-SHORE basis in three-dimensional q-space. The lowest-order term in this expansion contains an isotropic diffusion tensor that characterizes the Gaussian displacement distribution, multiplied by a negative exponential. We regularize the signal fitting by minimizing the norm of the analytic Laplacian of the basis, and validate our technique on synthetic data generated using the theoretical model proposed by Callaghan et al. We show that our method is robust to noise and can accurately describe the restricted spatio-temporal signal decay originating from tissue models such as cylindrical pores. From the fitting we can then estimate the axon radius distribution parameters along any direction using approaches similar to AxCaliber. We also apply our method to real data from an ActiveAx acquisition. Overall, our approach allows one to represent the complete 3D+t dMRI signal, which should prove helpful in understanding normal and pathologic nervous tissue.

Rutger Fick, Demian Wassermann, Marco Pizzolato, Rachid Deriche
Ground Truth for Diffusion MRI in Cancer: A Model-Based Investigation of a Novel Tissue-Mimetic Material

This work presents preliminary results on the development, characterisation, and use of a novel physical phantom designed as a simple mimic of tumour cellular structure, for diffusion-weighted magnetic resonance imaging (DW-MRI) applications. The phantom consists of a collection of roughly spherical, micron-sized core–shell polymer ‘cells’, providing a system whose ground truth microstructural properties can be determined and compared with those obtained from modelling the DW-MRI signal. A two-compartment analytic model combining restricted diffusion inside a sphere with hindered extracellular diffusion was initially investigated through Monte Carlo diffusion simulations, allowing a comparison between analytic and simulated signals. The model was then fitted to DW-MRI data acquired from the phantom over a range of gradient strengths and diffusion times, yielding estimates of ‘cell’ size, intracellular volume fraction and the free diffusion coefficient. An initial assessment of the accuracy and precision of these estimates is provided, using independent scanning electron microscope measurements and bootstrap-style simulations. Such phantoms may be useful for testing microstructural models relevant to the characterisation of tumour tissue.

Damien J. McHugh, Fenglei Zhou, Penny L. Hubbard Cristinacce, Josephine H. Naish, Geoffrey J. M. Parker
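The Monte Carlo diffusion simulations mentioned above rest on the Einstein relation ⟨x²⟩ = 2Dt for free 1-D diffusion. A hedged sketch of a free (unrestricted) random walk, far simpler than the phantom's restricted two-compartment geometry, showing how D is recovered from the mean squared displacement; all parameter values here are illustrative:

```python
import random

def estimate_free_diffusion(D_true=2.0e-3, dt=1.0e-3, n_steps=100,
                            n_walkers=5000, seed=1):
    """1-D free-diffusion random walk: each step is Gaussian with
    variance 2*D*dt, so the mean squared displacement after time t
    satisfies <x^2> = 2*D*t, giving a simple estimator for D."""
    rng = random.Random(seed)
    sigma = (2.0 * D_true * dt) ** 0.5
    t_total = n_steps * dt
    msd = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
        msd += x * x
    msd /= n_walkers
    return msd / (2.0 * t_total)   # invert <x^2> = 2*D*t

D_est = estimate_free_diffusion()  # close to D_true = 2.0e-3
```

Restricted diffusion inside a sphere, as in the phantom model, additionally requires rejecting or reflecting steps that cross the boundary, which is what makes the signal diffusion-time dependent.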

Shape Analysis

Frontmatter
Anisotropic Distributions on Manifolds: Template Estimation and Most Probable Paths

We use anisotropic diffusion processes to generalize normal distributions to manifolds and to construct a framework for likelihood estimation of template and covariance structure from manifold-valued data. The procedure avoids the linearization that arises when first estimating a mean or template and then performing PCA in the tangent space of the mean. We derive flow equations for the most probable paths reaching sampled data points, and we use these paths, which are generally not geodesics, for estimating the likelihood of the model. In contrast to existing template estimation approaches, accounting for anisotropy thus results in an algorithm that is not based on geodesic distances. To illustrate the effect of anisotropy and to point to further applications, we present experiments with anisotropic distributions on both the sphere and finite-dimensional LDDMM manifolds arising in the landmark matching problem.

Stefan Sommer
A Riemannian Framework for Intrinsic Comparison of Closed Genus-Zero Shapes

We present a framework for intrinsic comparison of surface metric structures and curvatures. This work parallels the work of Kurtek et al. on parameterization-invariant comparison of genus-zero shapes. Here, instead of comparing the embedding of spherically parameterized surfaces in space, we focus on the first fundamental form. To ensure that the distance on spherical metric tensor fields is invariant to parameterization, we apply the conjugation-invariant metric arising from the $$L^2$$ norm on symmetric positive definite matrices. As a reparameterization changes the metric tensor by a congruent Jacobian transform, this metric perfectly suits our purpose. The result is an intrinsic comparison of shape metric structure that does not depend on the specifics of a spherical mapping. Further, when restricted to tensors of fixed volume form, the manifold of metric tensor fields and its quotient by the group of unitary diffeomorphisms becomes a proper metric manifold that is geodesically complete. Exploiting this fact, and augmenting the metric with analogous metrics on curvatures, we derive a complete Riemannian framework for shape comparison and reconstruction. A by-product of our framework is a near-isometric and curvature-preserving mapping between surfaces. The correspondence is optimized using the fast spherical fluid algorithm. We validate our framework using several subcortical boundary surface models from the ADNI dataset.

Boris A. Gutman, P. Thomas Fletcher, M. Jorge Cardoso, Greg M. Fleishman, Marco Lorenzi, Paul M. Thompson, Sebastien Ourselin

Multi-atlas Fusion

Frontmatter
Multi-atlas Segmentation as a Graph Labelling Problem: Application to Partially Annotated Atlas Data

Manually annotating images for multi-atlas segmentation is an expensive and often limiting factor in reliable automated segmentation of large databases. Segmentation methods requiring only a proportion of each atlas image to be labelled could potentially reduce the workload on expert raters tasked with labelling images. However, exploiting such a database of partially labelled atlases is not possible with state-of-the-art multi-atlas segmentation methods. In this paper we revisit the problem of multi-atlas segmentation and formulate its solution in terms of graph-labelling. Our graphical approach uses a Markov Random Field (MRF) formulation of the problem and constructs a graph connecting atlases and the target image. This provides a unifying framework for label propagation. More importantly, the proposed method can be used for segmentation using only partially labelled atlases. We furthermore provide an extension to an existing continuous MRF optimisation method to solve the proposed problem formulation. We show that the proposed method, applied to hippocampal segmentation of 202 subjects from the ADNI database, remains robust and accurate even when the proportion of manually labelled slices in the atlases is reduced to 20%.

Lisa M. Koch, Martin Rajchl, Tong Tong, Jonathan Passerat-Palmbach, Paul Aljabar, Daniel Rueckert
Keypoint Transfer Segmentation

We present an image segmentation method that transfers label maps of entire organs from the training images to the novel image to be segmented. The transfer is based on sparse correspondences between keypoints that represent automatically identified distinctive image locations. Our segmentation algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ label maps. We introduce generative models for the inference of keypoint labels and for image segmentation, where keypoint matches are treated as a latent random variable and are marginalized out as part of the algorithm. We report segmentation results for abdominal organs in whole-body CT and in contrast-enhanced CT images. The accuracy of our method compares favorably to common multi-atlas segmentation while offering a speed-up of about three orders of magnitude. Furthermore, keypoint transfer requires no training phase or registration to an atlas. The algorithm’s robustness enables the segmentation of scans with highly variable field-of-view.

C. Wachinger, M. Toews, G. Langs, W. Wells, P. Golland
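Step (ii) above, voting-based keypoint labeling, can be sketched at its simplest as weighted majority voting; the paper itself treats matches as latent variables and marginalizes over them in a generative model, so this is only the underlying intuition, with hypothetical organ labels:

```python
from collections import Counter

def label_keypoint(matches):
    """Voting-based keypoint labeling: a target keypoint collects the
    organ labels of its matched training keypoints, weighted by match
    confidence, and takes the highest-scoring label."""
    votes = Counter()
    for label, weight in matches:
        votes[label] += weight
    label, _ = votes.most_common(1)[0]
    return label

# Matches to training keypoints: (organ label, match confidence)
matches = [("liver", 0.9), ("liver", 0.7), ("kidney", 0.8)]
print(label_keypoint(matches))  # → "liver"
```

Step (iii) then uses the labeled keypoints as anchors to transfer whole organ label maps probabilistically, which is where the method's speed advantage over full registration comes from.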

Fast Image Registration

Frontmatter
Finite-Dimensional Lie Algebras for Fast Diffeomorphic Image Registration

This paper presents a fast geodesic shooting algorithm for diffeomorphic image registration. We first introduce a novel finite-dimensional Lie algebra structure on the space of bandlimited velocity fields. We then show that this space can effectively represent initial velocities for diffeomorphic image registration at much lower dimensions than typically used, with little to no loss in registration accuracy. Finally, we leverage the fact that the geodesic evolution equations, as well as the adjoint Jacobi field equations needed for gradient descent methods, can be computed entirely in this finite-dimensional Lie algebra. The result is a geodesic shooting method for large deformation metric mapping (LDDMM) that is dramatically faster and less memory intensive than state-of-the-art methods. We demonstrate the effectiveness of our model in registering 3D brain images and compare its registration accuracy, runtime, and memory consumption with leading LDDMM methods. We also show how our algorithm breaks through the prohibitive time and memory requirements of diffeomorphic atlas building.

Miaomiao Zhang, P. Thomas Fletcher
Fast Optimal Transport Averaging of Neuroimaging Data

Knowing how the human brain is anatomically and functionally organized at the level of a group of healthy individuals or patients is the primary goal of neuroimaging research. Yet computing an average of brain imaging data defined over a voxel grid or a triangulation remains a challenge. The data are large, the geometry of the brain is complex, and between-subject variability leads to spatially or temporally non-overlapping effects of interest. To address the problem of variability, data are commonly smoothed before performing a linear group average. In this work we build on ideas originally introduced by Kantorovich [18] to propose a new algorithm that can efficiently average non-normalized data defined over arbitrary discrete domains using transportation metrics. We show how Kantorovich means can be linked to Wasserstein barycenters in order to take advantage of the entropic smoothing approach used by [7]. This leads to a smooth convex optimization problem and an algorithm with strong convergence guarantees. We illustrate the versatility of this tool and its empirical behavior on functional neuroimaging data, functional MRI and magnetoencephalography (MEG) source estimates, defined on voxel grids and triangulations of the folded cortical surface.

A. Gramfort, G. Peyré, M. Cuturi

Deformation Models

Frontmatter
Joint Morphometry of Fiber Tracts and Gray Matter Structures Using Double Diffeomorphisms

This work proposes an atlas construction method to jointly analyse the relative position and shape of fiber tracts and gray matter structures. It is based on a double diffeomorphism, which is a composition of two diffeomorphisms. The first diffeomorphism acts only on the white matter, keeping the gray matter of the atlas fixed. The resulting white matter, together with the gray matter, is then deformed by the second diffeomorphism. The two diffeomorphisms are related and jointly optimised. In this way, the first diffeomorphisms explain the variability in structural connectivity within the population, namely both changes in the connected areas of the gray matter and in the geometry of the pathways of the tracts. The second diffeomorphisms put the homologous anatomical structures across subjects into correspondence. Fiber bundles are approximated with weighted prototypes using the metric of weighted currents. The atlas, the covariance matrix of deformation parameters and the noise variance of each structure are automatically estimated using a Bayesian approach. This method is applied to patients with Tourette syndrome and controls, showing variability in the structural connectivity of the left cortico-putamen circuit.

Pietro Gori, Olivier Colliot, Linda Marrakchi-Kacem, Yulia Worbe, Alexandre Routier, Cyril Poupon, Andreas Hartmann, Nicholas Ayache, Stanley Durrleman
A Robust Probabilistic Model for Motion Layer Separation in X-ray Fluoroscopy

Fluoroscopic images are characterized by a transparent projection of 3-D structures from all depths to 2-D. Differently moving structures, for example due to breathing and heartbeat, can be described approximately using independently moving 2-D layers. Separating the fluoroscopic images into the motion layers is desirable to facilitate interpretation and diagnosis. Given the motion of each layer, it is state of the art to compute the layer separation by minimizing a least-squares objective function. However, due to high noise levels and inaccurate motion estimates, the results are not satisfactory in X-ray images.

In this work, we propose a probabilistic model for motion layer separation. In this model, we analyze various data terms and regularization terms theoretically and experimentally. We show that a robust penalty function is required in the data term to deal with noise and shortcomings of the image formation model. For the regularization term, we propose to enforce smoothness of the layers using bilateral total variation. On synthetic data, the mean squared error between the estimated layers and the ground truth is improved by $$18\,\%$$ compared to the state of the art. In addition, we show qualitative improvements on real X-ray data.
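To illustrate why the paper argues for a robust penalty in the data term, the toy sketch below contrasts a quadratic (least-squares) penalty with a Huber penalty: the latter grows only linearly for large residuals, so corrupted pixels do not dominate the objective. This is a minimal illustration, not the authors' implementation; the threshold `delta` is a hypothetical choice.

```python
def squared_penalty(r):
    """Quadratic penalty: large residuals (outliers) dominate the objective."""
    return r * r

def huber_penalty(r, delta=1.0):
    """Robust penalty: quadratic near zero, linear for large residuals."""
    a = abs(r)
    if a <= delta:
        return 0.5 * a * a
    return delta * (a - 0.5 * delta)

# A gross outlier residual (10.0) versus an inlier residual (1.0):
ratio_sq = squared_penalty(10.0) / squared_penalty(1.0)  # 100x influence
ratio_hu = huber_penalty(10.0) / huber_penalty(1.0)      # 19x influence
```

The same qualitative behavior holds for other robust penalties (e.g., Charbonnier); the key property is the sub-quadratic growth in the tails.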

Peter Fischer, Thomas Pohl, Thomas Köhler, Andreas Maier, Joachim Hornegger

Poster Papers

Frontmatter
Weighted Hashing with Multiple Cues for Cell-Level Analysis of Histopathological Images

Recently, content-based image retrieval has been investigated for histopathological image analysis, focusing on improving the accuracy and scalability. The main motivation is to interpret a new image (i.e., query image) by searching among a potentially large-scale database of training images in real-time. Hashing methods have been employed because of their promising performance. However, most previous works apply hashing algorithms on the whole images, while the important information of histopathological images usually lies in individual cells. In addition, they usually only hash one type of features, even though it is often necessary to inspect multiple cues of cells. Therefore, we propose a probabilistic-based hashing framework to model multiple cues of cells for accurate analysis of histopathological images. Specifically, each cue of a cell is compressed as binary codes by kernelized and supervised hashing, and the importance of each hash entry is determined adaptively according to its discriminativity, which can be represented as probability scores. Given these scores, we also propose several feature fusion and selection schemes to integrate their strengths. The classification of the whole image is conducted by aggregating the results from multiple cues of all cells. We apply our algorithm on differentiating adenocarcinoma and squamous carcinoma, i.e., two types of lung cancers, using a large dataset containing thousands of lung microscopic tissue images. It achieves $$90.3\,\%$$ accuracy by hashing and retrieving multiple cues of half-million cells.
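The retrieval primitive this kind of framework builds on can be sketched as weighted Hamming ranking: each hash bit carries a weight reflecting its discriminativity, and database entries are ranked by the weighted disagreement of their binary codes with the query. The codes and weights below are toy values, not learned as in the paper.

```python
def weighted_hamming(code_a, code_b, weights):
    """Sum the weights of the bit positions where the two codes disagree."""
    return sum(w for a, b, w in zip(code_a, code_b, weights) if a != b)

# Hypothetical 4-bit codes for two database cells and a query cell.
database = {
    "cell_1": [1, 0, 1, 1],
    "cell_2": [0, 0, 1, 0],
}
weights = [0.9, 0.1, 0.5, 0.3]  # discriminative bits receive larger weights
query = [1, 0, 1, 0]

# Rank database cells by weighted Hamming distance to the query.
ranked = sorted(database, key=lambda k: weighted_hamming(query, database[k], weights))
```

With unit weights this reduces to ordinary Hamming ranking; the probability-derived weights let reliable bits dominate the match score.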

Xiaofan Zhang, Hai Su, Lin Yang, Shaoting Zhang
Multiresolution Diffeomorphic Mapping for Cortical Surfaces

Due to the convoluted folding pattern of the cerebral cortex, accurate alignment of cortical surfaces remains challenging. In this paper, we present a multiresolution diffeomorphic surface mapping algorithm under the framework of large deformation diffeomorphic metric mapping (LDDMM). Our algorithm takes advantage of multiresolution analysis (MRA) for surfaces and constructs cortical surfaces at multiple resolutions. This family of multiresolution surfaces is used as natural sparse priors of the cortical anatomy and provides the anchor points where the parametrization of deformation vector fields is supported. This naturally constructs tangent bundles of diffeomorphisms at different resolution levels and hence generates multiresolution diffeomorphic transformations. We show that our construction of multiresolution LDDMM surface mapping can potentially reduce computational cost and improve the mapping accuracy of cortical surfaces.

Mingzhen Tan, Anqi Qiu
A Comprehensive Computer-Aided Polyp Detection System for Colonoscopy Videos

Computer-aided detection (CAD) can help colonoscopists reduce their polyp miss-rate, but existing CAD systems are handicapped by using either shape, texture, or temporal information for detecting polyps, achieving limited sensitivity and specificity. To overcome this limitation, the key contribution of this paper is to fuse all available polyp features by exploiting the strengths of each feature while minimizing its weaknesses. Our new CAD system has two stages, where the first stage builds on the robustness of shape features to reliably generate a set of candidates with a high sensitivity, while the second stage utilizes the high discriminative power of the computationally expensive features to effectively reduce false positives. Specifically, we employ a unique edge classifier and an original voting scheme to capture geometric features of polyps in context, and then harness the power of convolutional neural networks in a novel score fusion approach to extract and combine shape, color, texture, and temporal information of the candidates. Our experimental results, based on FROC curves and a new analysis of polyp detection latency, demonstrate superiority over the state of the art: our system yields a lower polyp detection latency and achieves a significantly higher sensitivity while generating dramatically fewer false positives. This performance improvement is attributed to our reliable candidate generation and effective false positive reduction methods.

Nima Tajbakhsh, Suryakanth R. Gurudu, Jianming Liang
A Feature-Based Approach to Big Data Analysis of Medical Images

This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching, which require O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic obstructive pulmonary disease (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of $$89\,\%$$ if both exact and one-off predictions are considered correct.
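The classical k-nearest-neighbor density estimate that the paper's generative estimator generalizes can be sketched in one dimension: the density at a query point is proportional to k divided by the volume of the smallest ball containing the k nearest stored features. The data below are toy values, not image features.

```python
def knn_density(x, data, k):
    """1D k-NN density estimate: p(x) ~ k / (N * 2 * r_k),
    where r_k is the distance to the k-th nearest sample."""
    dists = sorted(abs(x - d) for d in data)
    r_k = dists[k - 1]
    return k / (len(data) * 2.0 * r_k)

data = [0.0, 0.1, 0.2, 0.9, 1.0]          # two clusters of "features"
p_dense  = knn_density(0.1, data, k=2)    # query inside the first cluster
p_sparse = knn_density(0.6, data, k=2)    # query in the gap between clusters
```

The estimate adapts its bandwidth to the local sample density, which is what makes it attractive for on-the-fly queries without an off-line training phase.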

Matthew Toews, Christian Wachinger, Raul San Jose Estepar, William M. Wells III
Joint Segmentation and Registration Through the Duality of Congealing and Maximum Likelihood Estimate

In this paper we consider the task of joint registration and segmentation. A popular method, which aligns images and simultaneously estimates a simple statistical shape model, was proposed by E. Learned-Miller and is known as congealing. It considers the entropy of a simple, pixel-wise independent distribution as the objective function for searching the unknown transformations. Besides being intuitive and appealing, this idea raises several theoretical and practical questions, which we try to answer in this paper. First, we analyse the approach theoretically and show that the original congealing is in fact the DC-dual task (difference of convex functions) of a properly formulated Maximum Likelihood estimation task. This interpretation immediately leads to a different choice of algorithm, which is substantially simpler than the known congealing algorithm. The second contribution is to show how to generalise the task to models in which the shape prior is formulated in terms of segmentation labellings and is related to the signal domain via a parametric appearance model. We call this generalisation unsupervised congealing. The new approach is applied to the task of aligning and segmenting imaginal discs of Drosophila melanogaster larvae.

Boris Flach, Archibald Pontier
Self-Aligning Manifolds for Matching Disparate Medical Image Datasets

Manifold alignment can be used to reduce the dimensionality of multiple medical image datasets into a single globally consistent low-dimensional space. This may be desirable in a wide variety of problems, from fusion of different imaging modalities for Alzheimer’s disease classification to 4DMR reconstruction from 2D MR slices. Unfortunately, most existing manifold alignment techniques require either a set of prior correspondences or comparability between the datasets in high-dimensional space, which is often not possible. We propose a novel technique for the ‘self-alignment’ of manifolds (SAM) from multiple dissimilar imaging datasets without prior correspondences or inter-dataset image comparisons. We quantitatively evaluate the method on 4DMR reconstruction from realistic, synthetic sagittal 2D MR slices from 6 volunteers and real data from 4 volunteers. Additionally, we demonstrate the technique for the compounding of two free breathing 3D ultrasound views from one volunteer. The proposed method performs significantly better for 4DMR reconstruction than state-of-the-art image-based techniques.

Christian F. Baumgartner, Alberto Gomez, Lisa M. Koch, James R. Housden, Christoph Kolbitsch, Jamie R. McClelland, Daniel Rueckert, Andy P. King
Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in $$({\mathbf {k}},{\mathbf {q}})$$-Space

Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed-ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal $$S({\mathbf {q}})$$ in the $${\mathbf {q}}$$-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint $$({\mathbf {k}},{\mathbf {q}})$$-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization.

In this paper, we present a novel approach that uses partial Fourier sensing in the 6D $$({\mathbf {k}},{\mathbf {q}})$$-space for the reconstruction of $$P({\mathbf {x}},{\mathbf {r}})$$. The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of $$P({\mathbf {x}}, {\mathbf {r}})$$. Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial $$({\mathbf {k}},{\mathbf {q}})$$-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectrum imaging (DSI), which is commonly used as the benchmark for comparisons in the reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.

Jiaqi Sun, Elham Sakhaee, Alireza Entezari, Baba C. Vemuri
Multi-stage Biomarker Models for Progression Estimation in Alzheimer’s Disease

The estimation of disease progression in Alzheimer’s disease (AD) based on a vector of quantitative biomarkers is of high interest to clinicians, patients, and biomedical researchers alike. In this work, quantile regression is employed to learn statistical models describing the evolution of such biomarkers. Two separate models are constructed using (1) subjects that progress from a cognitively normal (CN) stage to mild cognitive impairment (MCI) and (2) subjects that progress from MCI to AD during the observation window of a longitudinal study. These models are then automatically combined to develop a multi-stage disease progression model for the whole disease course. A probabilistic approach is derived to estimate the current disease progress (DP) and the disease progression rate (DPR) of a given individual by fitting any acquired biomarkers to these models. A particular strength of this method is that it is applicable even if individual biomarker measurements are missing for the subject. Employing cognitive scores and image-based biomarkers, the presented method is used to estimate DP and DPR for subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Further, the potential use of these values as features for different classification tasks is demonstrated. For example, accuracy of 64 % is reached for CN vs. MCI vs. AD classification.
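At the heart of the quantile regression used here is the pinball loss: minimizing it at level tau yields the tau-quantile of the data, rather than the mean. The toy sketch below, with invented biomarker values, shows that the empirical minimizer at tau = 0.5 is the median; it is not the authors' model, only an illustration of the loss.

```python
def pinball_loss(y, pred, tau):
    """Pinball (quantile) loss: asymmetric absolute error at level tau."""
    r = y - pred
    return tau * r if r >= 0 else (tau - 1.0) * r

data = list(range(1, 10))  # toy biomarker values 1..9

def total_loss(c, tau):
    """Empirical pinball loss of a constant prediction c."""
    return sum(pinball_loss(y, c, tau) for y in data)

# The constant minimizing the tau=0.5 pinball loss is the median.
best_med = min(data, key=lambda c: total_loss(c, 0.5))  # -> 5
```

Fitting a curve of disease progress against such losses at several tau levels gives the family of quantile trajectories a progression model is built from.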

Alexander Schmidt-Richberg, Ricardo Guerrero, Christian Ledig, Helena Molina-Abril, Alejandro F. Frangi, Daniel Rueckert, on behalf of the Alzheimer's Disease Neuroimaging Initiative
Measuring Asymmetric Interactions in Resting State Brain Networks

Directed graph representations of brain networks are increasingly being used to indicate the direction and level of influence among brain regions. Most of the existing techniques for directed graph representations are based on time series analysis and the concept of causality, and use time lag information in the brain signals. These time lag-based techniques can be inadequate for functional magnetic resonance imaging (fMRI) signal analysis due to the limited time resolution of fMRI as well as the low frequency hemodynamic response. The aim of this paper is to present a novel measure of necessity that uses asymmetry in the joint distribution of brain activations to infer the direction and level of interaction among brain regions. We present a mathematical formula for computing necessity and extend this measure to partial necessity, which can potentially distinguish between direct and indirect interactions. These measures do not depend on time lag for directed modeling of brain interactions and therefore are more suitable for fMRI signal analysis. The necessity measures were used to analyze resting state fMRI data to determine the presence of hierarchy and asymmetry of brain interactions during resting state. We performed ROI-wise analysis using the proposed necessity measures to study the default mode network. The empirical joint distribution of the fMRI signals was determined using kernel density estimation, and was used for computation of the necessity and partial necessity measures. The significance of these measures was determined using a one-sided Wilcoxon rank-sum test. Our results are consistent with the hypothesis that the posterior cingulate cortex plays a central role in the default mode network.
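The empirical joint distributions the necessity measures are computed from are obtained by kernel density estimation. A minimal 1D Gaussian KDE, with toy samples and a hypothetical bandwidth `h` (not the paper's choice), can be sketched as:

```python
import math

def gaussian_kde(x, samples, h=0.5):
    """1D Gaussian kernel density estimate with bandwidth h."""
    n = len(samples)
    norm = n * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm

samples = [0.0, 0.1, -0.1]            # toy "fMRI signal" values
p_near = gaussian_kde(0.0, samples)   # high density near the samples
p_far  = gaussian_kde(3.0, samples)   # low density far from them
```

In the paper this is done jointly over pairs of regions, and the asymmetry of the resulting joint density is what the necessity measure quantifies.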

Anand A. Joshi, Ronald Salloum, Chitresh Bhushan, Richard M. Leahy
Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis

Brain morphometry plays a fundamental role in medical image analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using the Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory.

By the Poincaré uniformization theorem, all shapes can be conformally deformed to one of three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map distorts the surface area elements, and the area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any two probability measures, there is a unique optimal mass transport map between them, and the transportation cost defines the Wasserstein distance between them. The Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification.

To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings; it thus intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrate the efficiency and efficacy of our method.
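As a one-dimensional intuition for the transport cost underlying this distance: for two histograms on a common grid, the Wasserstein-1 distance reduces to the L1 distance between their cumulative distributions times the bin width. This toy sketch is far simpler than the manifold setting of the paper, but it shows the "mass moved times distance" idea.

```python
def wasserstein_1d(p, q, bin_width=1.0):
    """Wasserstein-1 distance between two 1D histograms on a shared grid,
    computed as the integral of |CDF_p - CDF_q|."""
    cdf_p = cdf_q = 0.0
    total = 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi
        cdf_q += qi
        total += abs(cdf_p - cdf_q) * bin_width
    return total

# Moving a unit of mass two bins to the right costs 2:
d = wasserstein_1d([1, 0, 0], [0, 0, 1])
```

Unlike bin-wise comparisons (e.g., L2 between histograms), this distance grows with how far mass must travel, which is why it captures shape dissimilarity.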

Zhengyu Su, Wei Zeng, Yalin Wang, Zhong-Lin Lu, Xianfeng Gu
Temporal Trajectory and Progression Score Estimation from Voxelwise Longitudinal Imaging Measures: Application to Amyloid Imaging

Cortical $$\beta $$-amyloid deposition begins in Alzheimer's disease (AD) years before the onset of any clinical symptoms. It is therefore important to determine the temporal trajectories of amyloid deposition in these earliest stages in order to better understand their associations with progression to AD. A method for estimating the temporal trajectories of voxelwise amyloid as measured using longitudinal positron emission tomography (PET) imaging is presented. The method involves the estimation of a score for each subject visit based on the PET data that reflects their amyloid progression. This amyloid progression score allows subjects with similar progressions to be aligned and analyzed together. The estimation of the progression scores and the amyloid trajectory parameters is performed using an expectation-maximization algorithm. The correlations among the voxel measures of amyloid are modeled to reflect the spatial nature of PET images. Simulation results show that model parameters are captured well at a variety of noise and spatial correlation levels. The method is applied to longitudinal amyloid imaging data considering each cerebral hemisphere separately. The results are consistent across the hemispheres and agree with a global index of brain amyloid known as mean cortical DVR. Unlike mean cortical DVR, which depends on a priori defined regions, the progression score extracted by the method is data-driven and does not make assumptions about regional longitudinal changes. Compared to regressing on age at each voxel, the longitudinal trajectory slopes estimated using the proposed method show better localized longitudinal changes.

Murat Bilgel, Bruno Jedynak, Dean F. Wong, Susan M. Resnick, Jerry L. Prince
Predicting Semantic Descriptions from Medical Images with Convolutional Neural Networks

Learning representative computational models from medical imaging data requires large training data sets. Often, voxel-level annotation is not feasible for sufficient amounts of data. An alternative to manual annotation is to use the enormous amount of knowledge encoded in imaging data and the corresponding reports generated during clinical routine. Weakly supervised learning approaches can link volume-level labels to image content but suffer from the typical label distributions in medical imaging data, where only a small part consists of clinically relevant abnormal structures. In this paper we propose to use a semantic representation of clinical reports as a learning target that is predicted from imaging data by a convolutional neural network. We demonstrate how we can learn accurate voxel-level classifiers based on weak volume-level semantic descriptions on a set of 157 optical coherence tomography (OCT) volumes. We specifically show how semantic information increases classification accuracy for intraretinal cystoid fluid (IRC), subretinal fluid (SRF) and normal retinal tissue, and how the learning algorithm links semantic concepts to image content and geometry.

Thomas Schlegl, Sebastian M. Waldstein, Wolf-Dieter Vogl, Ursula Schmidt-Erfurth, Georg Langs
Bodypart Recognition Using Multi-stage Deep Learning

Automatic medical image analysis systems often start by identifying the human body part contained in the image. Specifically, given a transversal slice, it is important to know which body part it comes from, namely "slice-based bodypart recognition". This problem has a unique characteristic: the body part of a slice is usually identified by local discriminative regions rather than global image context, e.g., a cardiac slice is differentiated from an aortic arch slice by the mediastinum region. To leverage this characteristic, we design a multi-stage deep learning framework that aims to: (1) discover the local regions that are discriminative for bodypart recognition, and (2) learn a bodypart identifier based on these local regions. These two tasks are achieved by the two stages of our learning scheme, respectively. In the pre-train stage, a convolutional neural network (CNN) is learned in a multi-instance learning fashion to extract the most discriminative local patches from the training slices. In the boosting stage, the learned CNN is further boosted by these local patches for bodypart recognition. By exploiting the discriminative local appearances, the learned CNN becomes more accurate than global image context-based approaches. As a key hallmark, our method does not require manual annotations of the discriminative local patches. Instead, it discovers them automatically through multi-instance deep learning. We validate our method on a synthetic dataset and a large-scale CT dataset (7000+ slices from whole-body CT scans). Our method achieves better performance than state-of-the-art approaches, including the standard CNN.

Zhennan Yan, Yiqiang Zhan, Zhigang Peng, Shu Liao, Yoshihisa Shinagawa, Dimitris N. Metaxas, Xiang Sean Zhou
Multi-subject Manifold Alignment of Functional Network Structures via Joint Diagonalization

Functional magnetic resonance imaging group studies rely on the ability to establish correspondence across individuals. This enables location specific comparison of functional brain characteristics. Registration is often based on morphology and does not take variability of functional localization into account. This can lead to a loss of specificity, or confounds when studying diseases. In this paper we propose multi-subject functional registration by manifold alignment via coupled joint diagonalization. The functional network structure of each subject is encoded in a diffusion map, where functional relationships are decoupled from spatial position. Two-step manifold alignment estimates initial correspondences between functionally equivalent regions. Then, coupled joint diagonalization establishes common eigenbases across all individuals, and refines the functional correspondences. We evaluate our approach on fMRI data acquired during a language paradigm. Experiments demonstrate the benefits in matching accuracy achieved by coupled joint diagonalization compared to previously proposed functional alignment approaches, or alignment based on structural correspondences.

Karl-Heinz Nenning, Kathrin Kollndorfer, Veronika Schöpf, Daniela Prayer, Georg Langs
Brain Transfer: Spectral Analysis of Cortical Surfaces and Functional Maps

The study of brain functions using fMRI often requires an accurate alignment of cortical data across a population. Particular challenges are surface inflation for cortical visualizations and measurements, and surface matching or alignment of functional data on surfaces for group-level analyses. Present methods typically treat each step separately and can be computationally expensive; for instance, smoothing and matching of cortices often require several hours. Conventional methods also rely on anatomical features to drive the alignment of functional data between cortices, whereas anatomy and function can vary across individuals. To address these issues, we propose BrainTransfer, a spectral framework that unifies cortical smoothing, point matching with confidence regions, and transfer of functional maps, all within minutes of computation. Spectral methods decompose shapes into intrinsic geometrical harmonics, but suffer from the inherent instability of eigenbases. This limits their accuracy when matching eigenbases, and prevents the spectral transfer of functions. Our contributions are, first, the optimization of a spectral transformation matrix, which combines both point correspondence and change of eigenbasis, and second, focused harmonics, which localize the spectral decomposition of functional data. BrainTransfer enables the transfer of surface functions across interchangeable cortical spaces, accounts for localized confidence, and gives a new way to perform statistics directly on surfaces. Benefits of spectral transfers are illustrated with a variability study on shape and functional data. Matching accuracy on retinotopy is increased over conventional methods.

Herve Lombaert, Michael Arcaro, Nicholas Ayache
Finding a Path for Segmentation Through Sequential Learning

Sequential learning techniques, such as auto-context, which apply the output of an intermediate classifier as contextual features for the subsequent classifier, have shown impressive performance for semantic segmentation. We show that these methods can be interpreted as an approximation technique derived from a Bayesian formulation. To improve the effectiveness of applying this approximation technique, we propose a new sequential learning approach for semantic segmentation that solves a segmentation problem by breaking it into a series of simplified segmentation problems. Sequentially solving each of the simplified problems along the path leads to a more effective way of solving the original segmentation problem. To achieve this goal, we also propose a learning-based method to generate simplified segmentation problems by explicitly controlling the complexities of the modeling classifiers. We report promising results on the 2013 SATA canine leg muscle segmentation dataset.
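The auto-context idea referred to above can be sketched schematically: each stage receives the original features plus the previous stage's probability map as extra context, and the stages are applied in sequence. The "classifiers" below are toy mixing rules, not learned models, so this only illustrates the data flow, not the paper's method.

```python
def stage(features, prev_prob):
    """Toy auto-context stage: combine each raw feature with the local
    neighborhood of the previous stage's probability map."""
    out = []
    for i, f in enumerate(features):
        ctx = prev_prob[max(0, i - 1):i + 2]     # 1D neighborhood context
        out.append(0.5 * f + 0.5 * sum(ctx) / len(ctx))
    return out

features = [0.9, 0.2, 0.8, 0.1]   # toy per-pixel evidence
prob = [0.5] * len(features)      # uninformative initial probability map
for _ in range(3):                # sequential stages refine the map
    prob = stage(features, prob)
```

In the real setting each `stage` is a trained classifier, and the paper's contribution is to control how much each stage's problem is simplified along the sequence.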

Hongzhi Wang, Yu Cao, Tanveer F. Syeda-Mahmood
Pancreatic Tumor Growth Prediction with Multiplicative Growth and Image-Derived Motion

Pancreatic neuroendocrine tumors are abnormal growths of hormone-producing cells in the pancreas. Unlike the brain in the skull, the pancreas in the abdomen can be largely deformed by body posture and the surrounding organs. In consequence, both tumor growth and pancreatic motion contribute to the tumor shape differences observable in images. As images at different time points are used to personalize the tumor growth model, the prediction accuracy may be reduced if such motion is ignored. Therefore, we incorporate the image-derived pancreatic motion into tumor growth personalization. For realistic mechanical interactions, the multiplicative growth decomposition is used with a hyperelastic constitutive law to model tumor mass effect, which allows growth modeling without compromising mechanical accuracy. Together with the FDG-PET and contrast-enhanced CT images, the functional, structural, and motion data are combined for a more patient-specific model. Experiments on synthetic and clinical data show the importance of image-derived motion in estimating physiologically plausible mechanical properties, and the promising performance of our framework. From six patient data sets, the recall, precision, Dice coefficient, relative volume difference, and average surface distance were 89.8 $$\pm $$ 3.5 %, 85.6 $$\pm $$ 7.5 %, 87.4 $$\pm $$ 3.6 %, 9.7 $$\pm $$ 7.2 %, and 0.6 $$\pm $$ 0.2 mm, respectively.

Ken C. L. Wong, Ronald M. Summers, Electron Kebebew, Jianhua Yao
IMaGe: Iterative Multilevel Probabilistic Graphical Model for Detection and Segmentation of Multiple Sclerosis Lesions in Brain MRI

In this paper, we present IMaGe, a new, iterative two-stage probabilistic graphical model for detection and segmentation of Multiple Sclerosis (MS) lesions. Our model includes two levels of Markov Random Fields (MRFs). At the bottom level, a regular grid voxel-based MRF identifies potential lesion voxels, as well as other tissue classes, using local and neighbourhood intensities and class priors. Contiguous voxels of a particular tissue type are grouped into regions. A higher, non-lattice MRF is then constructed, in which each node corresponds to a region, and edges are defined based on neighbourhood relationships between regions. The goal of this MRF is to evaluate the probability of candidate lesions, based on group intensity, texture and neighbouring regions. The inferred information is then propagated to the voxel-level MRF. This process of iterative inference between the two levels repeats as long as desired. The iterations suppress false positives and refine lesion boundaries. The framework is trained on 660 MRI volumes of MS patients enrolled in clinical trials from 174 different centres, and tested on a separate multi-centre clinical trial data set with 535 MRI volumes. All data consist of T1, T2, PD and FLAIR contrasts. In comparison to other MRF methods, such as [5, 9], and a traditional MRF, IMaGe is much more sensitive (with slightly better PPV). It outperforms its nearest competitor by around 20 % when detecting very small lesions (3–10 voxels). This is a significant result, as such lesions constitute around 40 % of the total number of lesions.

Nagesh Subbanna, Doina Precup, Douglas Arnold, Tal Arbel
Moving Frames for Heart Fiber Reconstruction

The method of moving frames provides powerful geometrical tools for the analysis of smoothly varying frame fields. However, in the face of missing measurements, a reconstruction problem arises, one that is largely unexplored for 3D frame fields. Here we consider the particular example of reconstructing impaired cardiac diffusion magnetic resonance imaging (dMRI) data. We combine moving frame analysis with a diffusion inpainting scheme that incorporates rule-based priors. In contrast to previous reconstruction methods, this new approach uses comprehensive differential descriptors for cardiac fibers, and is able to fully recover their orientation. We demonstrate the superior performance of this approach in terms of error of fit when compared to alternate methods. We anticipate that these tools could find application in clinical settings, where damaged heart tissue needs to be replaced or repaired, and for generating dense fiber volumes in electromechanical modelling of the heart.

Emmanuel Piuze, Jon Sporring, Kaleem Siddiqi
Detail-Preserving PET Reconstruction with Sparse Image Representation and Anatomical Priors

Positron emission tomography (PET) reconstruction is an ill-posed inverse problem which typically involves fitting a high-dimensional forward model of the imaging process to noisy, and sometimes undersampled, photon emission data. To improve the image quality, prior information derived from anatomical images of the same subject has previously been used in the penalised maximum likelihood (PML) method to regularise the model complexity and selectively smooth the image on a voxel basis in PET reconstruction. In this work, we propose a novel perspective of incorporating the prior information by exploiting the sparse property of natural images. Instead of a regular voxel grid, a sparse image representation jointly determined by the prior image and the PET data is used in reconstruction to balance image detail and smoothness; this prior is integrated into the PET forward model and has a closed-form expectation maximisation (EM) solution. Simulations show that the proposed approach achieves an improved bias versus variance trade-off and higher contrast recovery than the current state-of-the-art methods, and preserves the image details better. Application to clinical PET data shows promising results.
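The EM iteration at the core of such reconstructions is the classical MLEM update, x_j <- x_j / (sum_i A_ij) * sum_i A_ij * y_i / (A x)_i, which the paper extends with its sparse, anatomy-driven representation. The sketch below runs MLEM on an invented 2-pixel, 2-detector toy system; the identity "scanner" matrix is purely illustrative.

```python
def mlem_step(x, A, y):
    """One MLEM update for image x, system matrix A (detectors x pixels),
    and measured counts y."""
    n_pix, n_det = len(x), len(y)
    # forward projection (A x)_i
    proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_det)]
    new_x = []
    for j in range(n_pix):
        sens = sum(A[i][j] for i in range(n_det))              # sensitivity
        back = sum(A[i][j] * y[i] / proj[i] for i in range(n_det))
        new_x.append(x[j] * back / sens)
    return new_x

A = [[1.0, 0.0], [0.0, 1.0]]   # toy identity geometry: pixel j -> detector j
y = [5.0, 2.0]                 # invented measured counts
x = [1.0, 1.0]                 # uniform initial image
for _ in range(10):
    x = mlem_step(x, A, y)     # converges to [5.0, 2.0] for this toy A
```

A key property visible even in this toy: the multiplicative update keeps the image non-negative, which is why MLEM-style schemes are the standard workhorse that priors are grafted onto.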

Jieqing Jiao, Pawel Markiewicz, Ninon Burgos, David Atkinson, Brian Hutton, Simon Arridge, Sebastien Ourselin
Automatic Detection of the Uterus and Fallopian Tube Junctions in Laparoscopic Images

We present a method for the automatic detection of the uterus and the Fallopian tube/Uterus junctions (FU-junctions) in a monocular laparoscopic image. The main application is to perform automatic registration and fusion between preoperative radiological images of the uterus and laparoscopic images for image-guided surgery. In the broader context of computer assisted intervention, our method is the first that detects an organ and registration landmarks from laparoscopic images without manual input. Our detection problem is challenging because of the large inter-patient anatomical variability and pathologies such as uterine fibroids. We solve the problem using learned contextual geometric constraints that statistically model the positions and orientations of the FU-junctions relative to the uterus’ body. We train the uterus detector using a modern part-based approach and the FU-junction detector using junction-specific context-sensitive features. We have trained and tested on a database of 95 uterus images with cross validation, and successfully detected the uterus with Recall = 0.95 and average Number of False Positives per Image (NFPI) = 0.21, and FU-junctions with Recall = 0.80 and NFPI = 0.50. Our experimental results show that the contextual constraints are fundamental to achieve high quality detection.

Kristina Prokopetc, Toby Collins, Adrien Bartoli
A Mixed-Effects Model with Time Reparametrization for Longitudinal Univariate Manifold-Valued Data

Mixed-effects models provide a rich theoretical framework for the analysis of longitudinal data. However, when used to analyze or predict the progression of a neurodegenerative disease such as Alzheimer’s disease, these models usually do not take into account the fact that subjects may be at different stages of disease progression, and the interpretation of the model may depend on some implicit reference time. In this paper, we propose a generative statistical model for longitudinal data, described in a univariate Riemannian manifold setting, which estimates an average disease progression model, subject-specific time shifts and acceleration factors. The time shifts account for variability in age at disease-onset time. The acceleration factors account for variability in speed of disease progression. For a given individual, the estimated time shift and acceleration factor define an affine reparametrization of the average disease progression model. This statistical model has been used to analyze neuropsychological assessment scores and cortical thickness measurements from the Alzheimer’s Disease Neuroimaging Initiative database. The numerical results showed that we can distinguish between slow versus fast progressing and early versus late-onset individuals.
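The affine time reparametrization described above can be sketched as follows; the logistic average curve and all parameter values here are illustrative assumptions, not the model fitted in the paper:

```python
import numpy as np

def average_progression(t, t0=70.0, rate=0.12):
    """Average sigmoidal progression of a normalized biomarker (illustrative)."""
    return 1.0 / (1.0 + np.exp(-rate * (t - t0)))

def subject_trajectory(t, tau, alpha, t0=70.0, rate=0.12):
    """Affine time reparametrization: subject-specific time shift tau (onset)
    and acceleration factor alpha applied to the average model."""
    return average_progression(alpha * (t - t0 - tau) + t0, t0=t0, rate=rate)
```

By construction, a subject with time shift `tau` reaches the average model's reference stage at age `t0 + tau`, and `alpha > 1` compresses the progression in time.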

J.-B. Schiratti, S. Allassonnière, A. Routier, the Alzheimer’s Disease Neuroimaging Initiative, O. Colliot, S. Durrleman
Prediction of Longitudinal Development of Infant Cortical Surface Shape Using a 4D Current-Based Learning Framework

Understanding the early dynamics of the highly folded human cerebral cortex is still an actively evolving research field teeming with unanswered questions. Longitudinal neuroimaging analysis and modeling have become the new trend to advance research in this field. However, this is challenged by a limited number of acquisition timepoints and the absence of inter-subject matching between timepoints. In this paper, we propose a novel framework that solves the previously unaddressed problem of predicting the dynamic evolution of infant cortical surface shape solely from a single baseline shape, based on a spatiotemporal (4D) current-based learning approach. Specifically, our method learns from longitudinal data both the geometric (vertex positions) and dynamic (temporal evolution trajectories) features of the infant cortical surface, comprising a training stage and a prediction stage. In the training stage, we first use the current-based shape regression model to set up the inter-subject cortical surface correspondences at baseline of all training subjects. We then estimate for each training subject the diffeomorphic temporal evolution trajectories of the cortical surface shape and build an empirical mean spatiotemporal surface atlas. In the prediction stage, given an infant, we first warp all training subjects onto its baseline cortical surface. Second, we select the most appropriate learnt features from training subjects to simultaneously predict the cortical surface shapes at all later timepoints from its baseline cortical surface, based on closeness metrics between this baseline surface and the learnt baseline population average surface atlas. We used the proposed framework to predict the inner cortical surface shape at 3, 6 and 9 months from the cortical shape at birth in 9 healthy infants. Our method predicted the spatiotemporal dynamic change of the highly folded cortex with good accuracy.

Islem Rekik, Gang Li, Weili Lin, Dinggang Shen
Multi-scale Convolutional Neural Networks for Lung Nodule Classification

We investigate the problem of diagnostic lung nodule classification using thoracic Computed Tomography (CT) screening. Unlike traditional studies primarily relying on nodule segmentation for regional analysis, we tackle a more challenging problem on directly modelling raw nodule patches without any prior definition of nodule morphology. We propose a hierarchical learning framework—Multi-scale Convolutional Neural Networks (MCNN)—to capture nodule heterogeneity by extracting discriminative features from alternatingly stacked layers. In particular, to sufficiently quantify nodule characteristics, our framework utilizes multi-scale nodule patches to learn a set of class-specific features simultaneously by concatenating response neuron activations obtained at the last layer from each input scale. We evaluate the proposed method on CT images from Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), where both lung nodule screening and nodule annotations are provided. Experimental results demonstrate the effectiveness of our method on classifying malignant and benign nodules without nodule segmentation.
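The multi-scale idea — per-scale feature vectors concatenated at the last layer — can be illustrated with the toy numpy sketch below, where a random projection stands in for the trained convolutional layers; the patch sizes and feature dimension are illustrative assumptions:

```python
import numpy as np

def extract_patch(image, center, size):
    """Crop a square patch of a given size around a candidate nodule centre."""
    r, c = center
    h = size // 2
    return image[r - h:r + h, c - h:c + h]

def multiscale_features(image, center, scales=(16, 32, 64), feat_dim=8):
    """Toy stand-in for the MCNN: one feature vector per input scale,
    concatenated into a single descriptor."""
    rng = np.random.default_rng(0)
    feats = []
    for s in scales:
        patch = extract_patch(image, center, s)
        # placeholder "network": fixed random projection of the flattened patch
        W = rng.standard_normal((feat_dim, patch.size))
        feats.append(np.tanh(W @ patch.ravel()))
    return np.concatenate(feats)
```

The resulting descriptor (here 3 scales × 8 features = 24 values) would then feed a classifier; in the paper, the per-scale features are learned jointly rather than fixed.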

Wei Shen, Mu Zhou, Feng Yang, Caiyun Yang, Jie Tian
Tractography-Driven Groupwise Multi-scale Parcellation of the Cortex

The analysis of the connectome of the human brain provides key insight into the brain’s organisation and function, and its evolution in disease or ageing. Parcellation of the cortical surface into distinct regions in terms of structural connectivity is an essential step that can enable such analysis. The estimation of a stable connectome across a population of healthy subjects requires the estimation of a groupwise parcellation that can capture the variability of the connectome across the population. So far, this problem has been addressed in the literature only via averaging of connectivity profiles or by finding correspondences between individual parcellations a posteriori. In this paper, we propose a groupwise parcellation method for the cortex based on diffusion MR images (dMRI). We borrow ideas from the area of cosegmentation in computer vision and directly estimate a consistent parcellation across different subjects and scales through a spectral clustering approach. The parcellation is driven by the tractography connectivity profiles and by information shared between subjects and across scales. Promising qualitative and quantitative results on a sizeable dataset demonstrate the strong potential of the method.
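Plain spectral clustering of an affinity matrix built from connectivity profiles is the core machinery here; the sketch below is a minimal single-subject, single-scale stand-in (the paper's groupwise, multi-scale coupling is not shown), with an illustrative tiny k-means step:

```python
import numpy as np

def spectral_parcellation(W, k):
    """Cluster nodes of an affinity matrix W into k parcels via the
    normalized graph Laplacian and k-means on the spectral embedding."""
    d = W.sum(1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))  # normalized Laplacian
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                   # k smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize
    # tiny k-means on the embedding, initialized with spread-out rows
    centers = U[np.linspace(0, len(W) - 1, k).astype(int)]
    labels = np.zeros(len(W), dtype=int)
    for _ in range(50):
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([U[labels == j].mean(0) for j in range(k)])
    return labels
```

On a near-block-diagonal affinity matrix this recovers the blocks as parcels.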

Sarah Parisot, Salim Arslan, Jonathan Passerat-Palmbach, William M. Wells III, Daniel Rueckert
Illumination Compensation and Normalization Using Low-Rank Decomposition of Multispectral Images in Dermatology

When attempting to recover the surface color from an image, modelling the illumination contribution per-pixel is essential. In this work we present a novel approach for illumination compensation using multispectral image data. This is done by means of a low-rank decomposition of representative spectral bands with prior knowledge of the reflectance spectra of the imaged surface. Experimental results on synthetic data, as well as on images of real lesions acquired at the university clinic, show that the proposed method significantly improves the contrast between the lesion and the background.

Alexandru Duliu, Richard Brosig, Saahil Ognawala, Tobias Lasser, Mahzad Ziai, Nassir Navab
Efficient Gaussian Process-Based Modelling and Prediction of Image Time Series

In this work we propose a novel Gaussian process-based spatio-temporal model of time series of images. By assuming separability of the spatial and temporal processes we provide a very efficient and robust formulation for the marginal likelihood computation and the posterior prediction. The model adaptively accounts for local spatial correlations of the data, and the covariance structure is effectively parameterised by the Kronecker product of covariance matrices of very small size, each encoding only a single direction in space. We provide a simple and flexible framework for within- and between-subject modelling and prediction. In particular, we introduce the Hoffman-Ribak method for efficient inference on posterior processes and their uncertainty. The proposed framework is applied in the context of longitudinal modelling in Alzheimer’s disease. We first demonstrate the advantage of our non-parametric method for modelling within-subject structural changes. The results show that non-parametric methods demonstrably outperform conventional parametric methods. The framework is then extended to optimize complex parametrized covariate kernels. Using Bayesian model comparison via the marginal likelihood, the framework enables comparison of different hypotheses about individual change processes of images.
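The Kronecker/separable covariance trick that makes this tractable can be sketched as below: the log-determinant and linear solve for the full covariance are obtained from eigendecompositions of the small per-dimension factors only. The RBF kernels and noise model are illustrative assumptions:

```python
import numpy as np

def rbf(x, ell):
    """Squared-exponential kernel on a 1-D coordinate grid (illustrative)."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def kron_mvn_logdet_solve(Ks, y, noise=1e-2):
    """Compute log|K + s*I| and (K + s*I)^{-1} y for K = kron(Ks[0], Ks[1], ...)
    using only the eigendecompositions of the small factor matrices."""
    eigs, vecs = zip(*(np.linalg.eigh(K) for K in Ks))
    lam = eigs[0]
    for e in eigs[1:]:
        lam = np.kron(lam, e)
    lam = lam + noise
    # rotate y into the joint eigenbasis without forming the big matrix
    Y = y.reshape([len(e) for e in eigs])
    for i, V in enumerate(vecs):
        Y = np.moveaxis(np.tensordot(V.T, np.moveaxis(Y, i, 0), axes=1), 0, i)
    alpha = Y.ravel() / lam
    # rotate the scaled coefficients back to the original basis
    A = alpha.reshape(Y.shape)
    for i, V in enumerate(vecs):
        A = np.moveaxis(np.tensordot(V, np.moveaxis(A, i, 0), axes=1), 0, i)
    return np.sum(np.log(lam)), A.ravel()
```

This avoids ever building the full covariance, which is what makes GP inference on image grids feasible.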

Marco Lorenzi, Gabriel Ziegler, Daniel C. Alexander, Sebastien Ourselin
A Simulation Framework for Quantitative Validation of Artefact Correction in Diffusion MRI

In this paper we demonstrate a simulation framework that enables the direct and quantitative comparison of post-processing methods for diffusion weighted magnetic resonance (DW-MR) images. DW-MR datasets are employed in a range of techniques that enable estimates of local microstructure and global connectivity in the brain. These techniques require full alignment of images across the dataset, but this is rarely the case. Artefacts such as eddy-current (EC) distortion and motion lead to misalignment between images, which compromises the quality of the microstructural measures obtained from them. Numerous methods and software packages exist to correct these artefacts, some of which have become de facto standards, but none have been subject to rigorous validation. The ultimate aim of these techniques is improved image alignment, yet in the literature this is assessed using either qualitative visual measures or quantitative surrogate metrics. Here we introduce a simulation framework that allows for the direct, quantitative assessment of techniques, enabling objective comparisons of existing and future methods. DW-MR datasets are generated using a process that is based on the physics of MRI acquisition, which allows for the salient features of the images and their artefacts to be reproduced. We demonstrate the application of this framework by testing one of the most commonly used methods for EC correction, registration of DWIs to b = 0, and reveal the systematic bias this introduces into corrected datasets.

Mark S. Graham, Ivana Drobnjak, Hui Zhang
Towards a Quantified Network Portrait of a Population

Computational network analysis has enabled researchers to investigate patterns of interactions between anatomical regions of the brain. Identification of subnetworks of the human connectome can reveal how the network manages an interplay of the seemingly competing principles of functional segregation and integration. Despite the study of subnetworks of the human structural connectome by various groups, the level of expression of these subnetworks in each subject remains largely unexplored. Thus, there is a need for methods that can extract common subnetworks that together render a network portrait of a sample and facilitate analyses such as group comparisons based on the expression of the subnetworks in each subject. In this paper, we propose a framework for quantifying the subject-specific expression of subnetworks. Our framework consists of two parts, namely subnetwork detection and reconstructive projection onto subnetworks. The first part identifies subnetworks of the connectome using multi-view spectral clustering. The second part quantifies subject-specific manifestations of these subnetworks by nonnegative matrix decomposition. A positivity constraint is imposed to treat each subnetwork as a structure depicting the connectivity between specific anatomical regions. We have assessed the applicability of the framework by delineating a network portrait of a clinical sample consisting of children affected by autism spectrum disorder (ASD) and a matched group of typically developing controls (TDCs). Subsequent statistical analysis of the intra- and inter-subnetwork connections revealed decreased connectivity in the ASD group between regions of social cognition, executive functions, and emotion processing.
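The second part — expressing a subject's connectome as a nonnegative combination of group subnetworks — can be sketched with a projected-gradient fit; the basis matrices and the particular solver are illustrative stand-ins for the paper's decomposition:

```python
import numpy as np

def subnetwork_expression(C, basis, n_iter=2000):
    """Nonnegative coefficients c minimizing ||C - sum_k c_k * basis[k]||_F^2,
    found by projected gradient descent (projection onto c >= 0)."""
    A = np.stack([B.ravel() for B in basis])      # K x n^2 design matrix
    y = C.ravel()
    G = A @ A.T                                   # small K x K Gram matrix
    step = 1.0 / np.linalg.eigvalsh(G)[-1]        # 1 / Lipschitz constant
    c = np.zeros(len(basis))
    for _ in range(n_iter):
        c = np.maximum(0.0, c - step * (G @ c - A @ y))
    return c
```

The returned vector is the subject-specific expression level of each subnetwork, which can then enter a group comparison.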

Birkan Tunç, Varsha Shankar, Drew Parker, Robert T. Schultz, Ragini Verma
Segmenting the Brain Surface from CT Images with Artifacts Using Dictionary Learning for Non-rigid MR-CT Registration

This paper presents a dictionary learning-based method to segment the brain surface in post-surgical CT images of epilepsy patients following surgical implantation of electrodes. Using the electrodes identified in the post-implantation CT, surgeons require accurate registration with pre-implantation functional and structural MR imaging to guide surgical resection of epileptic tissue. In this work, we use a surface-based registration method to align the MR and CT brain surfaces. The key challenge here is not the registration, but rather the extraction of the cortical surface from the CT image, which includes missing parts of the skull and artifacts introduced by the electrodes. To segment the brain from these images, we propose learning a model of appearance that captures both the normal tissue and the artifacts found along this brain surface boundary. Using clinical data, we demonstrate that our method both accurately extracts the brain surface and better localizes electrodes than intensity-based rigid and non-rigid registration methods.

John A. Onofrey, Lawrence H. Staib, Xenophon Papademetris
AxTract: Microstructure-Driven Tractography Based on the Ensemble Average Propagator

We propose a novel method to simultaneously trace brain white matter (WM) fascicles and estimate WM microstructure characteristics. Recent advancements in diffusion-weighted imaging (DWI) allow multi-shell acquisitions with b-values of up to 10,000 $$\mathrm{s/mm^2}$$ in human subjects, enabling the measurement of the ensemble average propagator (EAP) at distances as short as 10 $$\mathrm{\mu m}$$. Coupled with continuous models of the full 3D DWI signal and the EAP such as Mean Apparent Propagator (MAP) MRI, these acquisition schemes provide unparalleled means to probe the WM tissue in vivo. Presently, there are two complementary limitations in tractography and microstructure measurement techniques. Tractography techniques are based on models of the DWI signal geometry without making specific hypotheses about the WM structure. This hinders the tracing of fascicles through certain WM areas with complex organization such as branching, crossing, merging, and bottlenecks that are indistinguishable using the orientation-only part of the DWI signal. Microstructure measuring techniques, such as AxCaliber, require the direction of the axons within the probed tissue to be known before the acquisition, as well as the tissue to be highly organized. Our contributions are twofold. First, we extend the theoretical DWI models proposed by Callaghan et al. to characterize the distribution of axonal calibers within the probed tissue, taking advantage of the MAP-MRI model. Second, we develop a simultaneous tractography and axonal caliber distribution algorithm based on the hypothesis that the axonal caliber distribution varies smoothly along a WM fascicle. To validate our model we test it on in-silico phantoms and on the HCP dataset.

Gabriel Girard, Rutger Fick, Maxime Descoteaux, Rachid Deriche, Demian Wassermann
Sampling from Determinantal Point Processes for Scalable Manifold Learning

High computational costs of manifold learning prohibit its application for large datasets. A common strategy to overcome this problem is to perform dimensionality reduction on selected landmarks and to successively embed the entire dataset with the Nyström method. The two main challenges that arise are: (i) the landmarks selected in non-Euclidean geometries must result in a low reconstruction error, (ii) the graph constructed from sparsely sampled landmarks must approximate the manifold well. We propose to sample the landmarks from determinantal distributions on non-Euclidean spaces. Since current determinantal sampling algorithms have the same complexity as those for manifold learning, we present an efficient approximation with linear complexity. Further, we recover the local geometry after the sparsification by assigning each landmark a local covariance matrix, estimated from the original point set. The resulting neighborhood selection based on the Bhattacharyya distance improves the embedding of sparsely sampled manifolds. Our experiments show a significant performance improvement compared to state-of-the-art landmark selection techniques on synthetic and medical data.
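A common cheap surrogate for exact determinantal sampling is the greedy conditional-variance heuristic sketched below: repeatedly pick the item with the largest remaining variance under the kernel, which favours diverse landmarks. This is an illustrative stand-in, not the linear-complexity approximation proposed in the paper:

```python
import numpy as np

def greedy_kdpp(K, k):
    """Greedy MAP approximation to a k-DPP with kernel K: add the item with
    the largest conditional variance (diagonal of the Schur complement)."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()   # current conditional variances
    chosen, Vs = [], np.zeros((n, 0))
    for _ in range(k):
        j = int(np.argmax(d))
        chosen.append(j)
        # rank-1 Cholesky-style update of the conditional covariance diagonal
        v = (K[:, j] - Vs @ Vs[j]) / np.sqrt(d[j])
        d = d - v ** 2
        Vs = np.hstack([Vs, v[:, None]])
        d[chosen] = -np.inf               # never re-select a chosen item
    return chosen
```

On clustered data the selected landmarks spread across the clusters rather than piling up in the densest one.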

Christian Wachinger, Polina Golland
Model-Based Estimation of Microscopic Anisotropy in Macroscopically Isotropic Substrates Using Diffusion MRI

Non-invasive estimation of cell size and shape is a key challenge in diffusion MRI. Changes in cell size and shape discriminate functional areas in the brain and can highlight different degrees of malignancy in cancer tumours. Consequently, various methods have emerged recently that measure the microscopic anisotropy of porous media such as biological tissue, aiming to reflect pore eccentricity, the simplest shape feature. However, current methods assume a substrate of identical pores and are strongly influenced by non-trivial size distributions. This paper presents a model-based approach that provides estimates of pore size and shape from diffusion MRI data. The technique uses a geometric model of randomly oriented finite cylinders with gamma-distributed radii. We use Monte Carlo simulation to generate synthetic data in substrates consisting of randomly oriented cuboids with various size distributions and eccentricities. We compare the sensitivity of single and double pulsed field gradient (sPFG and dPFG) sequences to the size distribution and eccentricity, and further compare different protocols of dPFG sequences with parallel and/or perpendicular pairs of gradients. The key result demonstrates that this model-based approach can provide features of pore shape (specifically eccentricity) that are independent of the size distribution, unlike previous attempts to characterise microscopic anisotropy. We show further that explicitly accounting for the size distribution is necessary for accurate estimates of average size and eccentricity, and that a model assuming a single size fails to recover the ground-truth values. We find the most accurate parameter estimates for dPFG sequences with mixed parallel and perpendicular gradients; nevertheless, all other sequences, including sPFG, show sensitivity as well.

Andrada Ianuş, Ivana Drobnjak, Daniel C. Alexander
Multiple Orderings of Events in Disease Progression

The event-based model constructs a discrete picture of disease progression from cross-sectional data sets, with each event corresponding to a new biomarker becoming abnormal. However, it relies on the assumption that all subjects follow a single event sequence. This is a major simplification for sporadic disease data sets, which are highly heterogeneous, include distinct subgroups, and contain significant proportions of outliers. In this work we relax this assumption by considering two extensions to the event-based model: a generalised Mallows model, which allows subjects to deviate from the main event sequence, and a Dirichlet process mixture of generalised Mallows models, which models clusters of subjects that follow different event sequences, each of which has a corresponding variance. We develop a Gibbs sampling technique to infer the parameters of the two models from multi-modal biomarker data sets. We apply our technique to data from the Alzheimer’s Disease Neuroimaging Initiative to determine the sequence in which brain regions become abnormal in sporadic Alzheimer’s disease, as well as the heterogeneity of that sequence in the cohort. We find that the generalised Mallows model estimates a larger variation in the event sequence across subjects than the original event-based model. Fitting a Dirichlet process model detects three subgroups of the population with different event sequences. The Gibbs sampler additionally provides an estimate of the uncertainty in each of the model parameters, for example an individual’s latent disease stage and cluster assignment. The distributions and mixtures of sequences that this new family of models introduces offer better characterisation of disease progression of heterogeneous populations, new insight into disease mechanisms, and have the potential for enhanced disease stratification and differential diagnosis.
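A Mallows model over event orderings — the building block the paper generalises — can be written exhaustively for tiny numbers of events; the paper's generalised Mallows model and Gibbs sampler are far more scalable, so this sketch only illustrates the distribution itself:

```python
from itertools import permutations
from math import exp
import numpy as np

def kendall_tau(a, b):
    """Number of pairwise discordances between two event orderings."""
    pos = {e: i for i, e in enumerate(b)}
    r = [pos[e] for e in a]
    return sum(r[i] > r[j] for i in range(len(r)) for j in range(i + 1, len(r)))

def mallows_pmf(center, theta, n):
    """Exhaustive Mallows distribution P(s) ∝ exp(-theta * d(s, center))
    over all orderings of n events (feasible only for small n)."""
    perms = list(permutations(range(n)))
    w = np.array([exp(-theta * kendall_tau(p, center)) for p in perms])
    return perms, w / w.sum()
```

Larger `theta` concentrates the distribution on the central event sequence; `theta -> 0` makes all orderings equally likely, which is the axis along which heterogeneity is modelled.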

Alexandra L. Young, Neil P. Oxtoby, Jonathan Huang, Razvan V. Marinescu, Pankaj Daga, David M. Cash, Nick C. Fox, Sebastien Ourselin, Jonathan M. Schott, Daniel C. Alexander, the Alzheimer’s Disease Neuroimaging Initiative
Construction of An Unbiased Spatio-Temporal Atlas of the Tongue During Speech

Quantitative characterization and comparison of tongue motion during speech and swallowing present fundamental challenges because of striking variations in tongue structure and motion across subjects. A reliable and objective description of the dynamics of tongue motion requires the consistent integration of inter-subject variability to detect the subtle changes in populations. To this end, in this work, we present an approach to constructing an unbiased spatio-temporal atlas of the tongue during speech for the first time, based on cine-MRI from twenty-two normal subjects. First, we create a common spatial space using images from the reference time frame, a neutral position, in which the unbiased spatio-temporal atlas can be created. Second, we transport images from all time frames of all subjects into this common space via a single transformation. Third, we construct atlases for each time frame via groupwise diffeomorphic registration, which serves as the initial spatio-temporal atlas. Fourth, we update the spatio-temporal atlas by realigning each time sequence based on the Lipschitz norm on diffeomorphisms between each subject and the initial atlas. We evaluate and compare different configurations, such as similarity measures, to build the atlas. Our proposed method permits accurate and objective explanation of the main pattern of tongue surface motion.

Jonghye Woo, Fangxu Xing, Junghoon Lee, Maureen Stone, Jerry L. Prince
Tree-Encoded Conditional Random Fields for Image Synthesis

Magnetic resonance imaging (MRI) is the dominant modality for neuroimaging in clinical and research domains. The tremendous versatility of MRI as a modality can lead to large variability in terms of image contrast, resolution, noise, and artifacts. Variability can also manifest itself as missing or corrupt imaging data. Image synthesis has been recently proposed to homogenize and/or enhance the quality of existing imaging data in order to make them more suitable as consistent inputs for processing. We frame the image synthesis problem as an inference problem on a 3-D continuous-valued conditional random field (CRF). We model the conditional distribution as a Gaussian by defining quadratic association and interaction potentials encoded in leaves of a regression tree. The parameters of these quadratic potentials are learned by maximizing the pseudo-likelihood of the training data. Final synthesis is done by inference on this model. We applied this method to synthesize $$T_2$$-weighted images from $$T_1$$-weighted images, showing improved synthesis quality as compared to current image synthesis approaches. We also synthesized Fluid Attenuated Inversion Recovery (FLAIR) images, showing similar segmentations to those obtained from real FLAIRs. Additionally, we generated super-resolution FLAIRs showing improved segmentation.

Amod Jog, Aaron Carass, Dzung L. Pham, Jerry L. Prince
Simultaneous Longitudinal Registration with Group-Wise Similarity Prior

Here we present an algorithm for the simultaneous registration of N longitudinal image pairs such that information acquired by each pair is used to constrain the registration of each other pair. More specifically, in the geodesic shooting setting for Large Deformation Diffeomorphic Metric Mappings (LDDMM) an average of the initial momenta characterizing the N transformations is maintained throughout and updates to individual momenta are constrained to be similar to this average. In this way, the N registrations are coupled and explore the space of diffeomorphisms as a group, the variance of which is constrained to be small. Our approach is motivated by the observation that transformations learned from images in the same diagnostic category share characteristics. The group-wise consistency prior serves to strengthen the contribution of the common signal among the N image pairs to the transformation for a specific pair, relative to features particular to that pair. We tested the algorithm on 57 longitudinal image pairs of Alzheimer’s Disease patients from the Alzheimer’s Disease Neuroimaging Initiative and evaluated the ability of the algorithm to produce momenta that better represent the long term biological processes occurring in the underlying anatomy. We found that for many image pairs, momenta learned with the group-wise prior better predict a third time point image unobserved in the registration.
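The group-wise coupling can be sketched as a penalty that pulls each momentum toward the group mean during optimization; the function names and the quadratic form of the penalty are illustrative assumptions, not the paper's exact update:

```python
import numpy as np

def coupled_updates(grads, momenta, step=0.1, coupling=0.5):
    """One gradient step for N registrations with a group-similarity prior:
    each of the N momentum vectors is pulled toward the group mean."""
    mean = momenta.mean(axis=0)
    return momenta - step * (grads + coupling * (momenta - mean))
```

With zero data gradients this update shrinks the spread of the momenta while leaving their mean unchanged, which is exactly the "explore the space of diffeomorphisms as a group" behaviour described above.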

Greg M. Fleishman, Boris A. Gutman, P. Thomas Fletcher, Paul M. Thompson
Spatially Weighted Principal Component Regression for High-Dimensional Prediction

We consider the problem of using high-dimensional data residing on graphs to predict a low-dimensional outcome variable, such as disease status. Examples of data include time series and genetic data measured on linear graphs and imaging data measured on triangulated graphs (or lattices), among many others. Many of these data have two key features including spatial smoothness and an intrinsically low-dimensional structure. We propose a simple solution based on a general statistical framework, called spatially weighted principal component regression (SWPCR). In SWPCR, we introduce two sets of weights including importance score weights for the selection of individual features at each node and spatial weights for the incorporation of the neighboring pattern on the graph. We integrate the importance score weights with the spatial weights in order to recover the low-dimensional structure of high-dimensional data. We demonstrate the utility of our methods through extensive simulations and a real data analysis based on Alzheimer’s Disease Neuroimaging Initiative data.
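The two weightings can be combined in a short sketch: scale each feature by its importance score, smooth with the spatial weight matrix, then run principal-component regression. All names and the exact order of operations here are illustrative assumptions about the SWPCR pipeline:

```python
import numpy as np

def swpcr_fit(X, y, w_importance, W_spatial, n_comp=2):
    """Sketch of SWPCR: importance-weight the columns of X, apply spatial
    smoothing, then regress y on the leading principal-component scores."""
    Xw = (X * w_importance) @ W_spatial           # weight, then smooth on graph
    Xc = Xw - Xw.mean(0)                          # center for PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_comp].T                        # PC scores
    beta, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    return Vt[:n_comp], beta                      # components and coefficients
```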

Dan Shen, Hongtu Zhu
Coupled Stable Overlapping Replicator Dynamics for Multimodal Brain Subnetwork Identification

Combining imaging modalities to synthesize their inherent strengths provides a promising means for improving brain subnetwork identification. We propose a multimodal integration technique based on a sex-differentiated formulation of replicator dynamics for identifying subnetworks of brain regions that exhibit high inter-connectivity both functionally and structurally. Our method has a number of desired properties, namely, it can operate on weighted graphs derived from functional magnetic resonance imaging (fMRI) and diffusion MRI (dMRI) data, allows for subnetwork overlaps, has an intrinsic criterion for setting the number of subnetworks, and provides statistical control on false node inclusion in the identified subnetworks via the incorporation of stability selection. We thus refer to our technique as coupled stable overlapping replicator dynamics (CSORD). On synthetic data, we demonstrate that CSORD achieves significantly higher subnetwork identification accuracy than state-of-the-art techniques. On real data from the Human Connectome Project (HCP), we show that CSORD attains improved test-retest reliability on multiple network measures and superior task classification accuracy.
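The core replicator-dynamics machinery can be sketched as below; the coupling across modalities, overlap handling, and stability selection that define CSORD are not shown, and the affinity matrix is an illustrative toy:

```python
import numpy as np

def replicator_dynamics(A, n_iter=200, tol=1e-12):
    """Discrete replicator dynamics x <- x * (A x) / (x' A x) on a nonnegative
    affinity matrix A; the support of the converged state picks out a densely
    inter-connected subnetwork."""
    x = np.full(len(A), 1.0 / len(A))   # start from the uniform distribution
    for _ in range(n_iter):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x
```

On a graph with one strongly connected clique, the dynamics concentrate almost all mass on the clique's nodes.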

Burak Yoldemir, Bernard Ng, Rafeef Abugharbieh
Joint 6D k-q Space Compressed Sensing for Accelerated High Angular Resolution Diffusion MRI

High Angular Resolution Diffusion Imaging (HARDI) avoids the Gaussian diffusion assumption that is inherent in Diffusion Tensor Imaging (DTI), and is capable of characterizing complex white matter micro-structure with greater precision. However, HARDI methods such as Diffusion Spectrum Imaging (DSI) typically require significantly more signal measurements than DTI, resulting in prohibitively long scanning times. One of the goals in HARDI research is therefore to improve estimation of quantities such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF) with a limited number of diffusion-weighted measurements. A popular approach to this problem, Compressed Sensing (CS), affords highly accurate signal reconstruction using significantly fewer (sub-Nyquist) data points than required traditionally. Existing approaches to CS diffusion MRI (CS-dMRI) mainly focus on applying CS in the $$\mathbf {q}$$-space of diffusion signal measurements and fail to take into consideration information redundancy in the $$\mathbf {k}$$-space. In this paper, we propose a framework, called 6-Dimensional Compressed Sensing diffusion MRI (6D-CS-dMRI), for reconstruction of the diffusion signal and the EAP from data sub-sampled in both 3D $$\mathbf {k}$$-space and 3D $$\mathbf {q}$$-space. To our knowledge, 6D-CS-dMRI is the first work that applies compressed sensing in the full 6D $$\mathbf {k}$$-$$\mathbf {q}$$ space and reconstructs the diffusion signal in the full continuous $$\mathbf {q}$$-space and the EAP in continuous displacement space. Experimental results on synthetic and real data demonstrate that, compared with full DSI sampling in $$\mathbf {k}$$-$$\mathbf {q}$$ space, 6D-CS-dMRI yields excellent diffusion signal and EAP reconstruction with low root-mean-square error (RMSE) using 11 times fewer samples (3-fold reduction in $$\mathbf {k}$$-space and 3.7-fold reduction in $$\mathbf {q}$$-space).
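The generic CS recovery machinery behind such reconstructions — iterative soft thresholding for an l1-regularised least-squares problem — can be sketched as follows. The operator here is a random Gaussian sensing matrix standing in for the paper's 6D k-q sampling operator, so this is only an illustration of the principle:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (sparse CS recovery)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```

With many fewer measurements than unknowns, the sparse signal's support is still recovered, which is the sub-Nyquist behaviour the abstract exploits.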

Jian Cheng, Dinggang Shen, Peter J. Basser, Pew-Thian Yap
Functional Nonlinear Mixed Effects Models for Longitudinal Image Data

Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders.

Xinchao Luo, Lixing Zhu, Linglong Kong, Hongtu Zhu
Backmatter
Metadaten
Titel
Information Processing in Medical Imaging
herausgegeben von
Sebastien Ourselin
Daniel C. Alexander
Carl-Fredrik Westin
M. Jorge Cardoso
Copyright-Jahr
2015
Electronic ISBN
978-3-319-19992-4
Print ISBN
978-3-319-19991-7
DOI
https://doi.org/10.1007/978-3-319-19992-4