
2015 | Book

Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015

18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part I

About this book

The three-volume set LNCS 9349, 9350, and 9351 constitutes the refereed proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015, held in Munich, Germany, in October 2015. Based on rigorous peer reviews, the program committee carefully selected 263 revised papers from 810 submissions for presentation in three volumes. The papers have been organized in the following topical sections: quantitative image analysis I: segmentation and measurement; computer-aided diagnosis: machine learning; computer-aided diagnosis: automation; quantitative image analysis II: classification, detection, features, and morphology; advanced MRI: diffusion, fMRI, DCE; quantitative image analysis III: motion, deformation, development and degeneration; quantitative image analysis IV: microscopy, fluorescence and histological imagery; registration: method and advanced applications; reconstruction, image formation, advanced acquisition - computational imaging; modelling and simulation for diagnosis and interventional planning; computer-assisted and image-guided interventions.

Table of Contents

Frontmatter

Advanced MRI: Diffusion, fMRI, DCE

Frontmatter
Automatic Segmentation of Renal Compartments in DCE-MRI Images

In this paper, we introduce a method for automatic renal compartment segmentation from Dynamic Contrast-Enhanced MRI (DCE-MRI) images, an important problem for which existing solutions cannot robustly achieve high accuracy across a wide range of data. The proposed method consists of three main steps. First, the whole kidney is segmented based on the concept of Maximally Stable Temporal Volume (MSTV). The proposed MSTV detects anatomical structures that are stable in both the spatial domain and the temporal dynamics. MSTV-based kidney segmentation is robust to noise, does not require a training phase, and adapts well to kidney shape variations caused by renal dysfunction. Second, voxels in the segmented kidney are described by principal components (PCs) to remove temporal redundancy and noise, and k-means clustering of the PCs is applied to separate voxels into cortex, medulla and pelvis. Third, a refinement method is introduced to further remove noise in each segmented compartment. Experimental results on 16 clinical kidney datasets demonstrate that our method reaches a very high level of agreement with manual results and achieves superior performance to three existing baseline methods. The code of the proposed method will be made publicly available with the publication of this paper.
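As a rough illustration of the second step described in this abstract (PCA of the kidney-voxel time courses followed by k-means clustering into cortex, medulla and pelvis), the sketch below uses scikit-learn on a hypothetical array of voxel time series; the array name, the number of principal components and the cluster count are editorial assumptions, not details taken from the paper.

```python
# Sketch: cluster DCE-MRI voxel time courses into three renal compartments.
# Assumes `timecourses` is an (n_voxels, n_timepoints) array restricted to the
# segmented kidney; the number of PCs (3) and clusters (3) are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_renal_compartments(timecourses: np.ndarray, n_pcs: int = 3):
    pcs = PCA(n_components=n_pcs).fit_transform(timecourses)  # compress / denoise
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
    return labels  # cluster ids; cortex/medulla/pelvis identities assigned post hoc
```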

Xin Yang, Hung Le Minh, Tim Cheng, Kyung Hyun Sung, Wenyu Liu
Harmonizing Diffusion MRI Data Across Multiple Sites and Scanners

Harmonizing diffusion MRI (dMRI) images across multiple sites is imperative for joint analysis of the data to significantly increase the sample size and statistical power of neuroimaging studies. In this work, we develop a method to harmonize diffusion MRI data across multiple sites and scanners that incorporates two main novelties: i) we take into account the spatial variability of the signal (for different sites) in different parts of the brain, as opposed to existing methods, which consider one linear statistical covariate for the entire brain; ii) our method is model-free, in that no a-priori model of diffusion (e.g., tensor, compartmental models, etc.) is assumed and the signal itself is corrected for scanner-related differences. We use spherical harmonic basis functions to represent the signal and compute several rotation-invariant features, which are used to estimate a regionally specific linear mapping between signals from different sites (and scanners). We validate our method on diffusion data acquired from four different sites (including two GE and two Siemens scanners) on a group of healthy subjects. Diffusion measures such as fractional anisotropy, mean diffusivity and generalized fractional anisotropy are compared across multiple sites before and after the mapping. Our experimental results demonstrate that, for identical acquisition protocols across sites, scanner-specific differences can be accurately removed using the proposed method.
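The abstract does not spell out the rotation-invariant features; a commonly used choice in this line of work is the per-order energy of the spherical-harmonic coefficients (RISH features). The sketch below computes such features and a simple per-order scaling between sites under that assumption, with all names and the scaling rule being illustrative rather than the paper's exact formulation.

```python
# Sketch: rotation-invariant SH "energy" features and a per-order linear
# scaling between a target site and a reference site (assumed formulation).
import numpy as np

def rish_features(sh_coeffs, orders):
    """sh_coeffs: (n_coeffs,) real SH coefficients of one voxel's dMRI signal;
    orders: (n_coeffs,) SH order l of each coefficient (0, 2, 4, ...)."""
    return np.array([np.sum(sh_coeffs[orders == l] ** 2)
                     for l in np.unique(orders)])

def harmonize(sh_coeffs, orders, feat_target, feat_reference, eps=1e-12):
    """Scale each SH order so the target voxel's features match the reference."""
    scale_per_order = np.sqrt((feat_reference + eps) / (feat_target + eps))
    scale = scale_per_order[np.searchsorted(np.unique(orders), orders)]
    return sh_coeffs * scale
```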

Hengameh Mirzaalian, Amicie de Pierrefeu, Peter Savadjiev, Ofer Pasternak, Sylvain Bouix, Marek Kubicki, Carl-Fredrik Westin, Martha E. Shenton, Yogesh Rathi
Track Filtering via Iterative Correction of TDI Topology

We propose a new technique to clean outlier tracks from fiber bundles reconstructed by tractography. Previous techniques were mainly based on computing pair-wise distances and clustering methods to identify unwanted tracks, which relied heavily on user input for parameter tuning. In this work, we propose the use of topological information in track density images (TDI) to achieve a more robust filtering of tracks. Our iterative algorithm has two main steps. Given a fiber bundle, we first convert it to a TDI, then extract and score its critical points. After that, tracks that contribute to high-scoring loops are identified and removed using the Reeb graph of the level set surface of the TDI. Our approach is geometrically intuitive and relies only on a single parameter that enables the user to decide on the length of insignificant loops. In our experiments, we use our method to reconstruct the optic radiation in the human brain using the multi-shell HARDI data from the Human Connectome Project (HCP). We compare our results against spectral filtering and show that our approach can achieve cleaner reconstructions. We also apply our method to 215 HCP subjects to test for asymmetry of the optic radiation and obtain statistically significant results that are consistent with post-mortem studies.

Dogu Baran Aydogan, Yonggang Shi
Novel Single and Multiple Shell Uniform Sampling Schemes for Diffusion MRI Using Spherical Codes

A good data sampling scheme is important for diffusion MRI acquisition and reconstruction. Diffusion Weighted Imaging (DWI) data is normally acquired on single or multiple shells in q-space. The samples in different shells are typically distributed uniformly, because they should be invariant to the orientation of structures within tissue, or the laboratory coordinate frame. The Electrostatic Energy Minimization (EEM) method, originally proposed for single-shell sampling schemes in dMRI by Jones et al., was recently generalized to the multi-shell case, called generalized EEM (GEEM). GEEM has been successfully used in the Human Connectome Project (HCP). Recently, the Spherical Code (SC) concept was proposed to maximize the minimal angle between different samples in single or multiple shells, producing a larger angular separation and better rotational invariance than the GEEM method. In this paper, we propose two novel algorithms based on the SC concept: 1) an efficient incremental constructive method, called Iterative Maximum Overlap Construction (IMOC), to generate a sampling scheme on a discretized sphere; 2) a constrained non-linear optimization (CNLO) method to update a given initial scheme on the continuous sphere. Compared to existing incremental estimation methods, IMOC obtains schemes with much larger separation angles between samples, which are very close to the best known solutions in the single-shell case. Compared to the existing Riemannian gradient descent method, CNLO is more robust and stable. Experiments demonstrate that the two proposed methods provide larger separation angles and better rotational invariance than the state-of-the-art GEEM and existing methods based on the SC concept.
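The spherical-code criterion referred to above amounts to maximizing the minimal angle between any two sampling directions, with antipodal directions treated as equivalent. The following sketch evaluates that criterion for a given single-shell scheme (it does not implement IMOC or CNLO themselves); the function name and antipodal convention are assumptions.

```python
# Sketch: minimal angular separation of a single-shell sampling scheme,
# treating u and -u as the same direction (antipodal symmetry).
import numpy as np

def minimal_separation_angle(directions):
    """directions: (N, 3) array of gradient directions, N >= 2."""
    u = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    cos = np.abs(u @ u.T)            # |cos| folds in antipodal symmetry
    np.fill_diagonal(cos, -1.0)      # ignore self-pairs
    return np.degrees(np.arccos(np.clip(cos.max(), -1.0, 1.0)))
```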

Jian Cheng, Dinggang Shen, Pew-Thian Yap, Peter J. Basser
q-Space Deep Learning for Twelve-Fold Shorter and Model-Free Diffusion MRI Scans

Diffusion MRI uses a multi-step data processing pipeline. With certain steps being prone to instabilities, the pipeline relies on considerable amounts of partly redundant input data, which requires long acquisition time. This leads to high scan costs and makes advanced diffusion models such as diffusion kurtosis imaging (DKI) and neurite orientation dispersion and density imaging (NODDI) inapplicable for children and adults who are uncooperative, uncomfortable or unwell. We demonstrate how deep learning, a group of algorithms in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This method allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models.
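The "single optimized step" described in this abstract is, in essence, a learned regression from raw, sub-sampled q-space measurements to scalar maps. A minimal sketch of that idea is given below using scikit-learn's multilayer perceptron rather than the authors' network; the array names, network size and training setup are editorial assumptions.

```python
# Sketch: learn a per-voxel mapping from sub-sampled q-space signals to scalar
# measures (e.g., DKI/NODDI-style parameters). Array names are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

# dwi_train:    (n_voxels, n_gradients_reduced)  sub-sampled diffusion signals
# params_train: (n_voxels, n_measures)           reference scalar measures
def fit_qspace_regressor(dwi_train, params_train):
    model = MLPRegressor(hidden_layer_sizes=(150, 150), max_iter=500)
    return model.fit(dwi_train, params_train)

# At test time, scalar maps come directly from the reduced acquisition:
# params_pred = model.predict(dwi_test)
```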

Vladimir Golkov, Alexey Dosovitskiy, Philipp Sämann, Jonathan I. Sperl, Tim Sprenger, Michael Czisch, Marion I. Menzel, Pedro A. Gómez, Axel Haase, Thomas Brox, Daniel Cremers
A Machine Learning Based Approach to Fiber Tractography Using Classifier Voting

Current tractography pipelines incorporate several modelling assumptions about the nature of the diffusion-weighted signal. We present an approach that tracks fiber pathways based on a random forest classification and voting process, guiding each step of the streamline progression by directly processing raw signal intensities. We evaluated our approach quantitatively and qualitatively using phantom and in vivo data. The presented machine learning based approach to fiber tractography is the first of its kind and our experiments showed auspicious performance compared to 12 established state-of-the-art tractography pipelines. Due to its distinctly increased sensitivity and specificity regarding tract connectivity and morphology, the presented approach is a valuable addition to the repertoire of currently available tractography methods and promises to be beneficial for all applications that build upon tractography results.
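To make the classification-and-voting idea concrete, the sketch below shows one streamline step guided by a trained classifier that votes over a discrete set of candidate directions. The forest, feature extraction and candidate set are placeholders; the assumption that classifier classes correspond one-to-one to candidate directions is an editorial simplification, not the paper's exact design.

```python
# Sketch: one streamline step driven by classifier voting over candidate
# directions. `forest` is a trained sklearn-style classifier whose classes
# are assumed to index the rows of `candidate_dirs`.
import numpy as np

def streamline_step(pos, prev_dir, forest, raw_signal_at, candidate_dirs,
                    step_size=0.5):
    feats = raw_signal_at(pos).reshape(1, -1)    # raw DWI intensities at pos
    votes = forest.predict_proba(feats)[0]       # one probability per direction
    ok = candidate_dirs @ prev_dir > 0           # keep directions < 90 deg away
    votes = np.where(ok, votes, 0.0)
    if votes.sum() == 0:
        return None                              # terminate the streamline
    new_dir = candidate_dirs[np.argmax(votes)]
    return pos + step_size * new_dir, new_dir
```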

Peter F. Neher, Michael Götz, Tobias Norajitra, Christian Weber, Klaus H. Maier-Hein
Symmetric Wiener Processes for Probabilistic Tractography and Connectivity Estimation

Probabilistic tractography based on diffusion weighted MRI has become a powerful approach for quantifying structural brain connectivities. Several works have already pointed out the similarity between probabilistic tractography and path integrals. This work investigates this connection more deeply. For the so-called Wiener process, a Gaussian random walker, the equivalence is worked out. We identify the source of the asymmetry of usual random walker approaches and show that there is a proper symmetrization, which leads to a new symmetric connectivity measure. To compute this measure we use the Fokker-Planck equation, which is an equivalent representation of a Wiener process in terms of a partial differential equation. In experiments we show that the proposed approach leads to a symmetric and robust connectivity measure.
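For reference, the Fokker-Planck equation invoked above is the partial-differential-equation counterpart of a drift-diffusion (Wiener-type) process. In a standard multivariate form (the notation here is generic, not taken from the paper) it reads:

```latex
% Fokker-Planck equation for the density p(x,t) of a drift-diffusion process
% dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t   (standard textbook form)
\frac{\partial p(x,t)}{\partial t}
  = -\sum_i \frac{\partial}{\partial x_i}\bigl[\mu_i(x)\,p(x,t)\bigr]
  + \sum_{i,j} \frac{\partial^2}{\partial x_i \partial x_j}\bigl[D_{ij}(x)\,p(x,t)\bigr],
\qquad D(x) = \tfrac{1}{2}\,\sigma(x)\sigma(x)^{\top}.
```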

M. Reisert, B. Dihtal, E. Kellner, H. Skibbe, C. Kaller
An Iterated Complex Matrix Approach for Simulation and Analysis of Diffusion MRI Processes

We present a novel approach to investigate the properties of diffusion weighted magnetic resonance imaging (dMRI). The process of restricted diffusion of spin particles in the presence of a magnetic field is simulated by an iterated complex matrix multiplication approach. The approach is based on first principles and provides a flexible, transparent and fast simulation tool. The experiments carried out reveal fundamental features of the dMRI process. A particularly interesting observation is that the induced rate of change of the local spatial spin angle is highly shift variant. Hence, the encoding basis functions are not the complex exponentials associated with the Fourier transform, as is commonly assumed. Thus, reconstructing the signal using the inverse Fourier transform leads to large compartment estimation errors, which is demonstrated in a number of 1D and 2D examples. In accordance with previous investigations, the compartment size is under-estimated. More interestingly, however, we show that the estimated shape is likely to be far from the true shape using state-of-the-art clinical MRI scanners.

Hans Knutsson, Magnus Herberthson, Carl-Fredrik Westin
Prediction of Motor Function in Very Preterm Infants Using Connectome Features and Local Synthetic Instances

We propose a method to identify preterm infants at highest risk of adverse motor function (identified at 18 months of age) using connectome features from a diffusion tensor image (DTI) acquired shortly after birth. For each full-brain DTI, a connectome is constructed and network features are extracted. After further reducing the dimensionality of the feature vector via PCA, SVM is used to discriminate between normal and abnormal motor scores. We further introduce a novel method to produce realistic synthetic training data in order to reduce the effects of class imbalance. Our method is tested on a dataset of 168 DTIs of 115 very preterm infants, scanned between 27 and 45 weeks post-menstrual age. We show that using our synthesized training data can consistently improve classification accuracy while setting a baseline for this challenging prediction problem. This work presents the first image analysis approach to predicting impairment in motor function in preterm-born infants.
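The classification pipeline described in this abstract (connectome features, PCA dimensionality reduction, SVM, plus synthetic training instances for the minority class) can be sketched as follows. The interpolation-based oversampling shown here is a generic SMOTE-like stand-in for the paper's "local synthetic instances", and all names are illustrative.

```python
# Sketch: oversample the minority class by interpolating towards local
# neighbours, then classify PCA-reduced connectome features with an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def synthesize_local_instances(x_minority, n_new, k=3, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(x_minority)
    _, idx = nn.kneighbors(x_minority)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(x_minority))
        j = idx[i][rng.integers(1, k + 1)]    # skip self at position 0
        t = rng.random()
        new.append(x_minority[i] + t * (x_minority[j] - x_minority[i]))
    return np.vstack(new)

clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
# clf.fit(np.vstack([x_train, x_synth]), np.hstack([y_train, y_synth]))
```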

Colin J. Brown, Steven P. Miller, Brian G. Booth, Kenneth J. Poskitt, Vann Chau, Anne R. Synnes, Jill G. Zwicker, Ruth E. Grunau, Ghassan Hamarneh
Segmenting Kidney DCE-MRI Using 1st-Order Shape and 5th-Order Appearance Priors

Kidney segmentation from dynamic contrast enhanced magnetic resonance images (DCE-MRI) is vital for computer-aided early assessment of kidney functions. To accurately extract kidneys in the presence of inherently inhomogeneous contrast deviations, we control an evolving geometric deformable boundary using specific prior models of kidney shape and visual appearance. Due to analytical estimates from the training data, these priors make the kidney segmentation fast and accurate, offering the prospect of clinical applications. Experiments with 50 DCE-MRI

in-vivo

data sets confirmed that the proposed approach outperforms three more conventional counterparts.

Ni Liu, Ahmed Soliman, Georgy Gimel’farb, Ayman El-Baz
Comparison of Stochastic and Variational Solutions to ASL fMRI Data Analysis

Functional Arterial Spin Labeling (fASL) MRI can provide a quantitative measurement of changes of cerebral blood flow induced by stimulation or task performance. fASL data is commonly analysed using a general linear model (GLM) with regressors based on the canonical hemodynamic response function. In this work, we consider instead a joint detection-estimation (JDE) framework, which has the advantage of allowing the extraction of both task-related perfusion and hemodynamic responses not restricted to canonical shapes. Previous JDE attempts for ASL have been based on computationally intensive sampling (MCMC) methods. Our contribution is to provide a comparison with an alternative variational expectation-maximization (VEM) algorithm on synthetic and real data.

Aina Frau-Pascual, Florence Forbes, Philippe Ciuciu
Prediction of CT Substitutes from MR Images Based on Local Sparse Correspondence Combination

Prediction of CT substitutes from MR images is clinically desired for dose planning in MR-based radiation therapy and attenuation correction in PET/MR. Considering that there is no global relation between intensities in MR and CT images, we propose local sparse correspondence combination (LSCC) for the prediction of CT substitutes from MR images. In LSCC, we assume that MR and CT patches are located on two nonlinear manifolds and the mapping from the MR manifold to the CT manifold approximates a diffeomorphism under a local constraint. Several techniques are used to constrain locality: 1) for each patch in the testing MR image, a local search window is used to extract patches from the training MR/CT pairs to construct MR and CT dictionaries; 2) k-Nearest Neighbors is used to constrain locality in the MR dictionary; 3) outlier detection is performed to constrain locality in the CT dictionary; 4) Local Anchor Embedding is used to solve the MR dictionary coefficients when representing the MR testing sample. Under these local constraints, the coefficient weights are linearly transferred from MR to CT and used to combine the samples in the CT dictionary to generate CT predictions. The proposed method has been evaluated on brain images from a dataset of 13 subjects. Each subject has T1- and T2-weighted MR images as well as a CT image, for a total of 39 images. Results show the effectiveness of the proposed method, which provides CT predictions with a mean absolute error of 113.8 HU compared with real CTs.
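The core idea of combining CT counterparts of locally matched MR patches can be illustrated with a much-simplified sketch. Here, distance-based kernel weights stand in for Local Anchor Embedding, and the dictionary is assumed to have already been restricted to the local search window; all names are hypothetical.

```python
# Sketch: predict a CT centre value from an MR patch by combining the CT
# counterparts of its k nearest MR training patches. Gaussian weights are a
# simple stand-in for Local Anchor Embedding coefficients.
import numpy as np

def predict_ct_value(mr_patch, mr_dict, ct_dict, k=5, sigma=50.0):
    """mr_patch: (p,) test patch; mr_dict: (n, p) MR patches from the local
    search window; ct_dict: (n,) CT centre intensities of the same patches."""
    d = np.linalg.norm(mr_dict - mr_patch, axis=1)
    nn = np.argsort(d)[:k]                      # locality in MR patch space
    w = np.exp(-d[nn] ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    return float(w @ ct_dict[nn])               # combined CT prediction
```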

Yao Wu, Wei Yang, Lijun Lu, Zhentai Lu, Liming Zhong, Ru Yang, Meiyan Huang, Yanqiu Feng, Wufan Chen, Qianjin Feng
Robust Automated White Matter Pathway Reconstruction for Large Studies

Automated probabilistic reconstruction of white matter pathways facilitates tractography in large studies. TRACULA (TRActs Constrained by UnderLying Anatomy) follows a Markov-chain Monte Carlo (MCMC) approach that is compute-intensive. TRACULA is available on our Neuroscience Gateway (NSG), a user-friendly environment for fully automated data processing on grid computing resources. Despite the robustness of TRACULA, our users and others have reported incidents of partially reconstructed tracts. Investigation revealed that in these situations the MCMC algorithm is caught in local minima. We developed a method that detects unsuccessful tract reconstructions and iteratively repeats the sampling procedure while maintaining the anatomical priors to reduce computation time. The anatomical priors are recomputed only after several unsuccessful iterations. Our method detects affected tract reconstructions by analyzing the dependency between samples produced by the MCMC algorithm. We extensively validated the original and the modified methods by performing five repeated reconstructions on a dataset of 74 HIV-positive patients and 47 healthy controls. Our method increased the rate of successful reconstruction in the two most prominently affected tracts (forceps major and minor) on average from 74% to 99%. In these tracts, no group difference in FA and MD was found, while a significant association with age could be confirmed.

Jalmar Teeuw, Matthan W. A. Caan, Silvia D. Olabarriaga
Exploiting the Phase in Diffusion MRI for Microstructure Recovery: Towards Axonal Tortuosity via Asymmetric Diffusion Processes

Microstructure recovery procedures via Diffusion-Weighted Magnetic Resonance Imaging (DW-MRI) usually discard the signal’s phase, assuming symmetry in the underlying diffusion process. We propose to recover the Ensemble Average Propagator (EAP) directly from the complex DW signal in order to also describe any diffusional asymmetry, thus obtaining an asymmetric EAP. The asymmetry of the EAP is then related to the tortuosity of undulated white matter axons, which are found in pathological scenarios associated with axonal elongation or compression. We derive a model of the EAP for this geometry and quantify its asymmetry. Results show that the EAP obtained when accounting for the DW signal’s phase provides useful microstructural information in such pathological scenarios. Furthermore, we validate these results in-silico through 3D Monte-Carlo simulations of white matter tissue that has experienced different degrees of elongation/compression.

Marco Pizzolato, Demian Wassermann, Timothé Boutelier, Rachid Deriche
Sparse Bayesian Inference of White Matter Fiber Orientations from Compressed Multi-resolution Diffusion MRI

The RubiX [1] algorithm combines the high SNR characteristics of low resolution data with the high spatial specificity of high resolution data, to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, thereby facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations.

Pramod Kumar Pisharady, Julio M. Duarte-Carvajalino, Stamatios N. Sotiropoulos, Guillermo Sapiro, Christophe Lenglet
Classification of MRI under the Presence of Disease Heterogeneity using Multi-Task Learning: Application to Bipolar Disorder

Heterogeneity in psychiatric and neurological disorders has undermined our ability to understand the pathophysiology underlying their clinical manifestations. In an effort to better distinguish clinical subtypes, many disorders, such as Bipolar Disorder, have been further sub-categorized into subgroups, albeit with criteria that are not very clear, reproducible and objective. Imaging, along with pattern analysis and classification methods, offers promise for developing objective and quantitative ways of categorizing disease subtypes. Herein, we develop such a method using multi-task learning, assuming that each task corresponds to a disease subtype but that subtypes share some common imaging characteristics, along with having distinct features. In particular, we extend the original SVM method by incorporating sparsity and group sparsity techniques to allow simultaneous joint learning for all diagnostic tasks. Experiments on multi-task Bipolar Disorder classification demonstrate the advantages of our proposed methods compared to other state-of-the-art pattern analysis approaches.

Xiangyang Wang, Tianhao Zhang, Tiffany M. Chaim, Marcus V. Zanetti, Christos Davatzikos
Fiber Connection Pattern-Guided Structured Sparse Representation of Whole-Brain fMRI Signals for Functional Network Inference

A variety of studies in the brain mapping field have reported that the dictionary learning and sparse representation framework is efficient and effective in reconstructing concurrent functional brain networks based on functional magnetic resonance imaging (fMRI) signals. However, previous approaches are purely data-driven and do not integrate brain science domain knowledge when reconstructing functional networks. The group-wise correspondence of the reconstructed functional networks across individual subjects is thus not well guaranteed. Moreover, the fiber connection pattern consistency of those functional networks across subjects is largely unknown. To tackle these challenges, in this paper, we propose a novel fiber connection pattern-guided structured sparse representation of whole-brain resting state fMRI (rsfMRI) signals to infer functional networks. In particular, the fiber connection patterns derived from diffusion tensor imaging (DTI) data are adopted as the connectional features to perform consistent cortical parcellation across subjects. Those consistently parcellated regions with similar fiber connection patterns are then employed as the group structured constraint to guide group-wise multi-task sparse representation of whole-brain rsfMRI signals to reconstruct functional networks. Using the recently publicly released high quality Human Connectome Project (HCP) rsfMRI and DTI data, our experimental results demonstrate that the functional networks identified via the proposed approach have both reasonable spatial pattern correspondence and fiber connection pattern consistency across individual subjects.

Xi Jiang, Tuo Zhang, Qinghua Zhao, Jianfeng Lu, Lei Guo, Tianming Liu
Joint Estimation of Hemodynamic Response Function and Voxel Activation in Functional MRI Data

This paper proposes a method of voxel-wise hemodynamic response function (HRF) estimation using sparsity and smoothing constraints on the HRF. The slowly varying baseline drift in each voxel time-series is initially estimated via empirical mode decomposition (EMD). This estimation is refined by a two-stage optimization that estimates the HRF and the slowly varying noise iteratively. In addition, this paper proposes a novel method of finding voxel activation via projection of the voxel time-series onto a signal subspace constructed using the prior estimates of the HRF. The performance of the proposed method is demonstrated on both synthetic and real fMRI data.

Priya Aggarwal, Anubha Gupta, Ajay Garg
Quantifying Microstructure in Fiber Crossings with Diffusional Kurtosis

Diffusional Kurtosis Imaging (DKI) is able to capture non-Gaussian diffusion and has become a popular complement to the more traditional Diffusion Tensor Imaging (DTI). In this paper, we demonstrate how strongly the presence of fiber crossings and the exact crossing angle affect measures from diffusional kurtosis, limiting their interpretability as indicators of tissue microstructure. We alleviate this limitation by modeling fiber crossings with a mixture of cylindrically symmetric kurtosis models. Based on results on simulated and on real-world data, we conclude that explicitly including crossing geometry in kurtosis models leads to parameters that are more specific to other aspects of tissue microstructure, such as scale and homogeneity.

Michael Ankele, Thomas Schultz
Which Manifold Should be Used for Group Comparison in Diffusion Tensor Imaging?

Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is a modality which allows investigation of white matter structure by probing water molecule diffusion. A common way to model the diffusion process is to consider a second-order tensor, represented by a symmetric positive-definite matrix. Currently, there is still no consensus on the most appropriate manifold for handling diffusion tensors. We propose to evaluate the influence of considering a Euclidean, a Log-Euclidean or a Riemannian manifold for conducting group comparison in DT-MRI. To this end, we consider a multi-linear regression problem that is solved on each of these manifolds. Statistical analysis is then achieved by computing an F-statistic between two nested (restricted and full) models. Our evaluation on simulated data suggests that the performance of these manifolds varies with the kind of modification that has to be detected, while the experiments on real data do not exhibit significant differences between the methods.
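The practical difference between the first two manifolds mentioned above is easy to see on the example of averaging tensors: the Euclidean framework averages the matrices directly, while the Log-Euclidean framework averages their matrix logarithms. The sketch below illustrates both under generic assumptions (the affine-invariant Riemannian mean requires an iterative Karcher-mean computation and is omitted).

```python
# Sketch: averaging symmetric positive-definite diffusion tensors in the
# Euclidean vs. Log-Euclidean framework.
import numpy as np
from scipy.linalg import expm, logm

def euclidean_mean(tensors):
    """tensors: (N, 3, 3) stack of SPD matrices."""
    return np.mean(tensors, axis=0)

def log_euclidean_mean(tensors):
    logs = [logm(t) for t in tensors]     # map each tensor to the log domain
    return expm(np.mean(logs, axis=0))    # average there, then map back
```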

A. Bouchon, V. Noblet, F. Heitz, J. Lamy, F. Blanc, J. -P. Armspach
Convex Non-negative Spherical Factorization of Multi-Shell Diffusion-Weighted Images

Diffusion-weighted imaging (DWI) allows non-invasive probing of tissue microstructure and the study of healthy and diseased white matter (WM) in vivo. Yet, less research has focussed on modelling grey matter (GM), cerebrospinal fluid (CSF) and other tissues. Here, we introduce a fully data-driven approach to spherical deconvolution, based on convex non-negative matrix factorization. Our approach decomposes multi-shell DWI data, represented in the basis of spherical harmonics, into tissue-specific orientation distribution functions and corresponding response functions. We evaluate the proposed method on phantom simulations and in vivo brain images, and demonstrate its ability to reconstruct WM, GM and CSF, unsupervised and relying solely on DWI.

Daan Christiaens, Frederik Maes, Stefan Sunaert, Paul Suetens
Tensorial Spherical Polar Fourier Diffusion MRI with Optimal Dictionary Learning

High Angular Resolution Diffusion Imaging (HARDI) can characterize complex white matter micro-structure, avoiding the Gaussian diffusion assumption inherent in Diffusion Tensor Imaging (DTI). However, HARDI methods normally require significantly more signal measurements and a longer scan time than DTI, which limits their clinical utility. By considering sparsity of the diffusion signal, Compressed Sensing (CS) allows robust signal reconstruction from relatively fewer samples, reducing the scanning time. A good dictionary that sparsifies the signal is crucial for CS reconstruction. In this paper, we propose a novel method called Tensorial Spherical Polar Fourier Imaging (TSPFI) to recover the continuous diffusion signal and diffusion propagator by representing the diffusion signal using an orthonormal TSPF basis. TSPFI is a generalization of the existing model-based method DTI and the model-free method SPFI. We also propose dictionary learning TSPFI (DL-TSPFI) to learn an even sparser dictionary, represented as a linear combination of the TSPF basis, from a continuous mixture of Gaussian signals. The learning process is efficiently performed in a small subspace of SPF coefficients, and the learned dictionary is proven to be sparse for all mixtures of Gaussian signals by adaptively setting the tensor in the TSPF basis. The learned DL-TSPF dictionary is then optimally and adaptively applied to different voxels using DTI and a weighted LASSO for CS reconstruction. DL-TSPFI is a generalization of DL-SPFI, considering a general adaptive tensor setting instead of a single scale value. The experiments demonstrate that the learned DL-TSPF dictionary yields a sparser representation and lower reconstruction Root-Mean-Squared-Error (RMSE) than both the original SPF basis and the DL-SPF dictionary.

Jian Cheng, Dinggang Shen, Pew-Thian Yap, Peter J. Basser
Diffusion Compartmentalization Using Response Function Groups with Cardinality Penalization

Spherical deconvolution (SD) of the white matter (WM) diffusion-attenuated signal with a fiber signal response function has been shown to yield high-quality estimates of fiber orientation distribution functions (FODFs). However, an inherent limitation of this approach is that the response function (RF) is often fixed and assumed to be spatially invariant. This has been reported to result in spurious FODF peaks as the discrepancy of the RF with the data increases. In this paper, we propose to utilize response function groups (RFGs) for robust compartmentalization of the diffusion signal, hence improving FODF estimation. Unlike the aforementioned single fixed RF, each RFG consists of a set of RFs that are intentionally varied to capture potential signal variations associated with a fiber bundle. Additional isotropic RFGs are included to account for signal contributions from gray matter (GM) and cerebrospinal fluid (CSF). To estimate the WM FODF and the volume fractions of the GM and CSF compartments, the RFGs are fitted to the data in the least-squares sense, penalized by the cardinality of the support of the solution to encourage group sparsity. The volume fractions associated with each compartment are then computed by summing up the volume fractions of the RFs within each RFG. Experimental results confirm that our method yields estimates of FODFs and volume fractions of diffusion compartments with improved robustness and accuracy.

Pew-Thian Yap, Yong Zhang, Dinggang Shen
V–Bundles: Clustering Fiber Trajectories from Diffusion MRI in Linear Time

Fiber clustering algorithms are employed to find patterns in the structural connections of the human brain as traced by tractography algorithms. Current clustering algorithms often require the calculation of large similarity matrices and thus do not scale well for datasets beyond 100,000 streamlines. We extended and adapted the 2D vector field k-means algorithm of Ferreira et al. to find bundles in 3D tractography data from diffusion MRI (dMRI). The resulting algorithm is linear in the number of line segments in the fiber data and can cluster large datasets without the use of random sampling or complex multipass procedures. It copes with interrupted streamlines and allows multisubject comparisons.

Andre Reichenbach, Mathias Goldau, Christian Heine, Mario Hlawitschka
Assessment of Mean Apparent Propagator-Based Indices as Biomarkers of Axonal Remodeling after Stroke

Recently, a robust mathematical formulation has been introduced for the closed-form analytical reconstruction of the signal and the Mean Apparent Propagator (MAP) in diffusion MRI. This is referred to as MAP-MRI or 3D-SHORE, depending on the chosen reference frame. From the MAP, microstructural properties can be inferred by deriving indices that, under certain circumstances, allow the estimation of pore geometry and local diffusivity, holding the potential of becoming the next generation of microstructural numerical biomarkers. In this work, we propose the assessment and validation of a subset of such indices, namely RTAP, D, and PA, for the quantitative analysis of axonal remodeling in the uninjured motor network after stroke. Diffusion Spectrum Imaging (DSI) was performed on ten patients and ten controls at different time points, and the indices were derived and exploited for tract-based quantitative analysis. Our results provide quantitative evidence of the eligibility of the derived indices as microstructural biomarkers.

Lorenza Brusini, Silvia Obertino, Mauro Zucchelli, Ilaria Boscolo Galazzo, Gunnar Krueger, Cristina Granziera, Gloria Menegaz
Integrating Multimodal Priors in Predictive Models for the Functional Characterization of Alzheimer’s Disease

Functional brain imaging provides key information to characterize neurodegenerative diseases, such as Alzheimer’s disease (AD). Specifically, the metabolic activity measured through fluorodeoxyglucose positron emission tomography (FDG-PET) and the connectivity extracted from resting-state functional magnetic resonance imaging (fMRI), are promising biomarkers that can be used for early assessment and prognosis of the disease and to understand its mechanisms. FDG-PET is the best suited functional marker so far, as it gives a reliable quantitative measure, but is invasive. On the other hand, non-invasive fMRI acquisitions do not provide a straightforward quantification of brain functional activity. To analyze populations solely based on resting-state fMRI, we propose an approach that leverages a metabolic prior learned from FDG-PET. More formally, our classification framework embeds population priors learned from another modality at the voxel-level, which can be seen as a regularization term in the analysis. Experimental results show that our PET-informed approach increases classification accuracy compared to pure fMRI approaches and highlights regions known to be impacted by the disease.

Mehdi Rahim, Bertrand Thirion, Alexandre Abraham, Michael Eickenberg, Elvis Dohmatob, Claude Comtat, Gael Varoquaux
Simultaneous Denoising and Registration for Accurate Cardiac Diffusion Tensor Reconstruction from MRI

Cardiac diffusion tensor MR imaging (DT-MRI) allows analysis of the 3D fiber organization of the myocardium, which may enhance the understanding of, for example, cardiac remodeling in conditions such as ventricular hypertrophy. Diffusion-weighted MRI (DW-MRI) denoising methods rely on accurate spatial alignment of all acquired DW images. However, due to cardiac and respiratory motion, cardiac DT-MRI suffers from low signal-to-noise ratio (SNR) and large spatial transformations, which result in unusable DT reconstructions. The method proposed in this paper is based on a novel registration-guided denoising algorithm, that explicitly avoids intensity averaging in misaligned regions of the images by imposing a sparsity-inducing norm between corresponding image edges. We compared our method with consecutive registration and denoising of DW images on a high quality ex vivo canine dataset. The results show that the proposed method improves DT field reconstruction quality, which yields more accurate measures of fiber helix angle distribution and fractional anisotropy coefficients.

Valeriy Vishnevskiy, Christian Stoeck, Gábor Székely, Christine Tanner, Sebastian Kozerke
Iterative Subspace Screening for Rapid Sparse Estimation of Brain Tissue Microstructural Properties

Diffusion magnetic resonance imaging (DMRI) is a powerful imaging modality due to its unique ability to extract microstructural information by utilizing restricted diffusion to probe compartments that are much smaller than the voxel size. Quite commonly, a mixture of models is fitted to the data to infer microstructural properties based on the estimated parameters. The fitting process is often non-linear and computationally very intensive. Recent work by Daducci et al. has shown that speed improvement of several orders of magnitude can be achieved by linearizing and recasting the fitting problem as a linear system, involving the estimation of the volume fractions associated with a set of diffusion basis functions that span the signal space. However, to ensure coverage of the signal space, sufficiently dense sampling of the parameter space is needed. This can be problematic because the number of basis functions increases exponentially with the number of parameters, causing computational intractability. We propose in this paper a method called iterative subspace screening (ISS) for tackling this ultrahigh dimensional problem. ISS requires only solving the problem in a medium-size subspace with a dimension that is much smaller than the original space spanned by all diffusion basis functions but is larger than the expected cardinality of the support of the solution. The solution obtained for this subspace is used to screen the basis functions to identify a new subspace that is pertinent to the target problem. These steps are performed iteratively to seek both the solution subspace and the solution itself. We apply ISS to the estimation of the fiber orientation distribution function (ODF) and demonstrate that it improves estimation robustness and accuracy.
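The screening idea described above can be sketched as a loop that alternates between a sparse fit on a working subset of basis functions and a re-screening of all atoms against the residual. The solver choice (a non-negative lasso) and the screening rule below are generic stand-ins under stated assumptions, not the paper's exact formulation.

```python
# Sketch: iterative subspace screening for a very large diffusion dictionary.
import numpy as np
from sklearn.linear_model import Lasso

def iterative_subspace_screening(signal, dictionary, subspace_size=200,
                                 n_iter=5, alpha=0.01):
    """dictionary: (n_meas, n_atoms) diffusion basis functions; signal: (n_meas,)."""
    scores = np.abs(dictionary.T @ signal)                # initial screening
    active = np.argsort(-scores)[:subspace_size]
    solver = Lasso(alpha=alpha, positive=True, max_iter=5000)
    for _ in range(n_iter):
        solver.fit(dictionary[:, active], signal)
        residual = signal - dictionary[:, active] @ solver.coef_
        scores = np.abs(dictionary.T @ residual)          # re-screen all atoms
        keep = active[solver.coef_ > 0]                   # retain current support
        fresh = np.argsort(-scores)[:subspace_size]
        active = np.unique(np.concatenate(
            [keep, fresh[:subspace_size - len(keep)]]))
    solver.fit(dictionary[:, active], signal)             # final refit
    weights = np.zeros(dictionary.shape[1])
    weights[active] = solver.coef_
    return weights
```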

Pew-Thian Yap, Yong Zhang, Dinggang Shen
Elucidating Intravoxel Geometry in Diffusion-MRI: Asymmetric Orientation Distribution Functions (AODFs) Revealed by a Cone Model

A diffusion-MRI processing method is presented for representing the inherent asymmetry of the underlying intravoxel geometry, which emerges in regions with bending, crossing, or sprouting fibers. The orientation distribution functions (ODFs) obtained through conventional approaches such as q-ball imaging and spherical deconvolution result in symmetric ODF profiles at each voxel even when the underlying geometry is asymmetric. To extract such inherent asymmetry, an inter-voxel filtering approach through a cone model is employed. The cone model facilitates a sharpening of the ODFs in some directions while suppressing peaks in other directions, thus yielding an asymmetric ODF (AODF) field. Compared to symmetric ODFs, AODFs reveal more information regarding the cytoarchitectural organization within the voxel. The level of asymmetry is quantified via a new scalar index that could complement standard measures of diffusion anisotropy. Experiments on synthetic geometries of circular, crossing, and kissing fibers show that the estimated AODFs successfully recover the asymmetry of the underlying geometry. The feasibility of the technique is demonstrated on

in vivo

data obtained from the Human Connectome Project.

Suheyla Cetin, Evren Ozarslan, Gozde Unal
Modeling Task FMRI Data via Supervised Stochastic Coordinate Coding

Task functional MRI (fMRI) has been widely employed to assess brain activation and networks. Modeling the rich information in fMRI time series is challenging because of the lack of ground truth and the intrinsic complexity. Model-driven methods such as the general linear model (GLM) regress external task designs against voxel-wise brain functional activity, an approach that is limited because it ignores the complexity and diversity of concurrent brain networks. Recently, dictionary learning and sparse coding methods have attracted increasing attention in the fMRI analysis field. The major advantage of this methodology is its effectiveness in reconstructing concurrent brain networks automatically and systematically. However, the data-driven strategy is, to some extent, arbitrary because it ignores prior knowledge of the task design and neuroscience knowledge. In this paper, we propose a novel supervised stochastic coordinate coding (SCC) algorithm for fMRI data analysis, in which certain brain networks are learned with supervised information such as temporal patterns of task designs and spatial patterns of network templates, while other networks are learned automatically from the data. Its application on two independent fMRI datasets demonstrates the effectiveness of our method.

Jinglei Lv, Binbin Lin, Wei Zhang, Xi Jiang, Xintao Hu, Junwei Han, Lei Guo, Jieping Ye, Tianming Liu

Computer Assisted and Image-guided Interventions

Frontmatter
Autonomous Ultrasound-Guided Tissue Dissection

Intraoperative ultrasound imaging can act as a valuable guide during minimally invasive tumour resection. However, contemporaneous bimanual manipulation of the transducer and cutting instrument presents significant challenges for the surgeon. Both cannot occupy the same physical location, and so a carefully coordinated relative motion is required. Using robotic partial nephrectomy as an index procedure, and employing PVA cryogel tissue phantoms in a reduced dimensionality setting, this study sets out to achieve autonomous tissue dissection with a high velocity waterjet under ultrasound guidance. The open-source da Vinci Research Kit (DVRK) provides the foundation for a novel multimodal visual servoing approach, based on the simultaneous processing and analysis of endoscopic and ultrasound images. Following an accurate and robust Jacobian estimation procedure, dissections are performed with specified theoretical tumour margin distances. The resulting margins, with a mean difference of 0.77 mm, indicate that the overall system performs accurately, and that future generalisation to 3D tumour and organ surface morphologies is warranted.
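The Jacobian-based visual servoing mentioned above follows a standard pattern: once an image Jacobian relating robot motion to image-feature motion has been estimated, the control law drives the feature error to zero via the Jacobian pseudo-inverse. A generic sketch of that update (gain, feature vector and Jacobian are placeholders, not the paper's specifics) is:

```python
# Sketch: a generic image-based visual-servoing velocity update.
import numpy as np

def servo_velocity(jacobian, feature_error, gain=0.5):
    """jacobian: (n_features, n_dof) estimated image Jacobian;
    feature_error: (n_features,) current minus desired image features."""
    return -gain * np.linalg.pinv(jacobian) @ feature_error

# Each control cycle, the returned velocity command would be sent to the
# robot and the Jacobian refined as the observed features move.
```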

Philip Pratt, Archie Hughes-Hallett, Lin Zhang, Nisha Patel, Erik Mayer, Ara Darzi, Guang-Zhong Yang
A Compact Retinal-Surgery Telemanipulator that Uses Disposable Instruments

We present a retinal-surgery telemanipulation system with submicron precision that is compact enough to be head-mounted and that uses a full range of existing disposable instruments. Two actuation mechanisms are described that enable the use of actuated instruments, and an instrument adapter enables quick-change of instruments. Experiments on a phantom eye show that telemanipulated surgery results in reduction of maximum downward force on the retina as compared to manual surgery for experienced users.

Manikantan Nambi, Paul S. Bernstein, Jake J. Abbott
Surgical Tool Tracking and Pose Estimation in Retinal Microsurgery

Retinal Microsurgery (RM) is performed with small surgical tools which are observed through a microscope. Real-time estimation of the tool’s pose enables the application of various computer-assisted techniques such as augmented reality, with the potential of improving the clinical outcome. However, most existing methods are prone to fail in in-vivo sequences due to partial occlusions, illumination and appearance changes of the tool. To overcome these problems, we propose an algorithm for simultaneous tool tracking and pose estimation that is inspired by state-of-the-art computer vision techniques. Specifically, we introduce a method based on regression forests to track the tool tip and to recover the tool’s articulated pose. To demonstrate the performance of our algorithm, we evaluate on a dataset which comprises four real surgery sequences, and compare with the state-of-the-art methods on a publicly available dataset.

Nicola Rieke, David Joseph Tan, Mohamed Alsheakhali, Federico Tombari, Chiara Amat di San Filippo, Vasileios Belagiannis, Abouzar Eslami, Nassir Navab
Direct Calibration of a Laser Ablation System in the Projective Voltage Space

Laser ablation is a widely adopted technique in many contemporary medical applications. However, using a laser to cut bone and to perform general osteotomy tasks is new. In this paper, we propose to apply the direct linear transformation algorithm to calibrate and integrate a laser-deflecting tilting mirror into the affine transformation chain of a sophisticated surgical navigation system, involving next generation robots and optical tracking. Experiments were performed on synthetic input and real data. The evaluation showed a target registration error of 0.3 mm ± 0.2 mm at a working distance of 150 mm.
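For context, the direct linear transformation (DLT) named above is the classical algorithm that estimates a projective mapping from point correspondences via a homogeneous least-squares problem solved with the SVD. The sketch below shows a generic textbook DLT for a 3×4 mapping from 3D positions to 2D coordinates (here thought of as mirror "voltage" coordinates); whether the paper parameterizes the calibration exactly this way is not stated in the abstract.

```python
# Sketch: textbook DLT estimating a 3x4 projective matrix P from N >= 6
# correspondences between 3D points and 2D projections, via SVD.
import numpy as np

def dlt(points_3d, points_2d):
    """points_3d: (N, 3) target positions; points_2d: (N, 2) projections."""
    rows = []
    for (x, y, z), (u, v) in zip(points_3d, points_2d):
        X = [x, y, z, 1.0]
        rows.append([0, 0, 0, 0] + [-c for c in X] + [v * c for c in X])
        rows.append(X + [0, 0, 0, 0] + [-u * c for c in X])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)   # defined up to scale; normalize as needed
```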

Adrian Schneider, Simon Pezold, Kyung-won Baek, Dilyan Marinov, Philippe C. Cattin
Surrogate-Driven Estimation of Respiratory Motion and Layers in X-Ray Fluoroscopy

Dense motion estimation in X-ray fluoroscopy is challenging due to low soft-tissue contrast and the transparent projection of 3-D information to 2-D. Motion layers have been introduced as an intermediate representation, but so far failed to generate plausible motions because their estimation is ill-posed. To attain plausible motions, we include prior information for each motion layer in the form of a surrogate signal. In particular, we extract a respiratory signal from the images using manifold learning and use it to define a surrogate-driven motion model. The model is incorporated into an energy minimization framework with smoothness priors to enable motion estimation.

Experimentally, our method estimates 48% of the 2-D motion field on XCAT phantom data. On real X-ray sequences, the target registration error of manually annotated landmarks is reduced by 52%. In addition, we qualitatively show that a meaningful separation into motion layers is achieved.

Peter Fischer, Thomas Pohl, Andreas Maier, Joachim Hornegger
Robust 5DOF Transesophageal Echo Probe Tracking at Fluoroscopic Frame Rates

Registration between transesophageal echocardiography (TEE) and x-ray fluoroscopy (XRF) has recently been introduced as a potentially useful tool for advanced image guidance of structural heart interventions. Algorithms for registration at fluoroscopic imaging frame rates (15-30 fps) have yet to be reported, despite the fact that probe movement resulting from cardiorespiratory motion and physician manipulation can introduce non-trivial registration errors during untracked image frames. In this work, we present a novel algorithm for GPU-accelerated 2D/3D registration and apply it to the problem of TEE probe tracking in XRF sequences. Implementation in CUDA C resulted in an extremely fast similarity computation of < 80 μs, which in turn enabled registration frame rates ranging from 23.6-92.3 fps. The method was validated on simulated and clinical datasets and achieved target registration errors comparable to previously reported methods but at much faster registration speeds. Our results show, for the first time, the ability to accurately register TEE and XRF coordinate systems at fluoroscopic frame rates without the need for external hardware. The algorithm is generic and can potentially be applied to other 2D/3D registration problems where real-time performance is required.

Charles R. Hatt, Michael A. Speidel, Amish N. Raval
Rigid Motion Compensation in Interventional C-arm CT Using Consistency Measure on Projection Data

Interventional C-arm CT has the potential to visualize brain hemorrhages in the operating suite and save valuable time for stroke patients. Due to the critical constitution of the patients, C-arm CT images are frequently affected by patient motion artifacts, which often makes the reliable diagnosis of hemorrhages impossible. In this work, we propose a geometric optimization algorithm to compensate for these artifacts and present first results. The algorithm is based on a projection data consistency measure, which avoids computationally expensive forward- and backprojection steps in the optimization process. The ability to estimate movements with this measure is investigated for different rigid degrees of freedom. It was shown that out-of-plane parameters, i.e. geometrical deviations perpendicular to the plane of rotation, can be estimated with high precision. Movement artifacts in reconstructions are consistently reduced throughout all analyzed clinical datasets. With its low computational cost and high robustness, the proposed algorithm is well-suited for integration into clinical software prototypes for further evaluation.

Robert Frysch, Georg Rose
Hough Forests for Real-Time, Automatic Device Localization in Fluoroscopic Images: Application to TAVR

A method for real-time localization of devices in fluoroscopic images is presented. Device pose is estimated using a Hough forest based detection framework. The method was applied to two types of devices used for transcatheter aortic valve replacement: a transesophageal echo (TEE) probe and prosthetic valve (PV). Validation was performed on clinical datasets, where both the TEE probe and PV were successfully detected in 95.8% and 90.1% of images, respectively. TEE probe position and orientation errors were 1.42 ± 0.79 mm and 2.59° ± 1.87°, while PV position and orientation errors were 1.04 ± 0.77 mm and 2.90° ± 2.37°. The Hough forest was implemented in CUDA C, and was able to generate device location hypotheses in less than 50 ms for all experiments.

Charles R. Hatt, Michael A. Speidel, Amish N. Raval
Hybrid Ultrasound and MRI Acquisitions for High-Speed Imaging of Respiratory Organ Motion

Magnetic Resonance (MR) imaging provides excellent image quality at a high cost and low frame rate. Ultrasound (US) provides poor image quality at a low cost and high frame rate. We propose an instance-based learning system to obtain the best of both worlds: high quality MR images at high frame rates from a low cost single-element US sensor. Concurrent US and MRI pairs are acquired during a relatively brief offline learning phase involving the US transducer and MR scanner. High frame rate, high quality MR imaging of respiratory organ motion is then predicted from US measurements, even after stopping MRI acquisition, using a probabilistic kernel regression framework. Experimental results show predicted MR images to be highly representative of actual MR images.
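The probabilistic kernel regression described in this abstract can be illustrated, in simplified form, as a Nadaraya-Watson-style weighted average: the predicted MR frame is a kernel-weighted combination of the training MR frames, with weights determined by how close the current ultrasound measurement is to the training ultrasound measurements. Bandwidth and array names below are editorial assumptions.

```python
# Sketch: predict an MR frame from a single-element US measurement as a
# kernel-weighted average over concurrently acquired training pairs.
import numpy as np

def predict_mr_frame(us_now, us_train, mr_train, bandwidth=1.0):
    """us_now: (d,) current US feature; us_train: (n, d); mr_train: (n, H, W)."""
    d2 = np.sum((us_train - us_now) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w /= w.sum()
    return np.tensordot(w, mr_train, axes=1)   # (H, W) predicted MR image
```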

Frank Preiswerk, Matthew Toews, W. Scott Hoge, Jr-yuan George Chiou, Lawrence P. Panych, William M. Wells III, Bruno Madore
A Portable Intra-Operative Framework Applied to Distal Radius Fracture Surgery

Fractures of the distal radius account for about 15% of all extremity fractures. To date, open reduction and internal plate fixation is the standard operative treatment. During the procedure, only fluoroscopic images are available for the planning of the screw placement and the monitoring of the instrument trajectory. Complications arising from malpositioned screws can lead to revision surgery. With the aim of improving screw placement accuracy, we present a prototype framework for fully intra-operative guidance that simplifies the planning transfer. Planning is performed directly intra-operatively and expressed in terms of the screw configuration w.r.t. the implant plate used. Subsequently, guidance is provided solely by a combination of locally positioned markers and a small camera placed on the surgical instrument that allows real-time position feedback. We evaluated our framework on 34 plastic bones and 3 healthy forearm cadaver specimens. In total, 146 screws were placed. On bone phantoms, we achieved an accuracy of 1.02 ± 0.57 mm, 3.68 ± 4.38° and 1.77 ± 1.38° in the screw tip position and orientation (azimuth and elevation), respectively. On forearm specimens, we achieved a corresponding accuracy of 1.63 ± 0.91 mm, 5.85 ± 4.93° and 3.48 ± 3.07°. Our analysis shows that our framework has the potential to improve the accuracy of screw placement compared to the state of the art.

Jessica Magaraggia, Wei Wei, Markus Weiten, Gerhard Kleinszig, Sven Vetter, Jochen Franke, Karl Barth, Elli Angelopoulou, Joachim Hornegger
Image Based Surgical Instrument Pose Estimation with Multi-class Labelling and Optical Flow

Image based detection, tracking and pose estimation of surgical instruments in minimally invasive surgery has a number of potential applications for computer assisted interventions. Recent developments in the field have resulted in advanced techniques for 2D instrument detection in laparoscopic images, however, full 3D pose estimation remains a challenging and unsolved problem. In this paper, we present a novel method for estimating the 3D pose of robotic instruments, including axial rotation, by fusing information from large homogeneous regions and local optical flow features. We demonstrate the accuracy and robustness of this approach on ex vivo data with calibrated ground truth given by surgical robot kinematics which we will also make available to the community. Qualitative validation on in vivo data from robotic assisted prostatectomy further demonstrates that the technique can function in clinical scenarios.

Max Allan, Ping-Lin Chang, Sébastien Ourselin, David J. Hawkes, Ashwin Sridhar, John Kelly, Danail Stoyanov
Adaption of 3D Models to 2D X-Ray Images during Endovascular Abdominal Aneurysm Repair

Endovascular aneurysm repair (EVAR) has been gaining popularity over open repair of abdominal aortic aneurysms (AAAs) in recent years. This paper describes a distortion correction approach to be applied during EVAR cases. In a novel workflow, models (meshes) of the aorta and its branching arteries generated from preoperatively acquired computed tomography (CT) scans are overlaid with interventionally acquired fluoroscopic images. The overlay provides an arterial roadmap for the operator, with landmarks (LMs) marking the ostia, which are critical for stent placement. As several endovascular devices, such as angiographic catheters, are inserted, the anatomy may be distorted. The distortion reduces the accuracy of the overlay. To overcome the mismatch, the aortic and the iliac meshes are adapted to a device seen in uncontrasted intraoperative fluoroscopic images using the skeleton-based as-rigid-as-possible (ARAP) method. The deformation was evaluated by comparing the distance between an ostium and the corresponding LM prior to and after the deformation. The central positions of the ostia were marked in digital subtraction angiography (DSA) images as ground truth. The mean Euclidean distance in the image plane was reduced from 19.81 ± 17.14 mm to 4.56 ± 2.81 mm.

Daniel Toth, Marcus Pfister, Andreas Maier, Markus Kowarschik, Joachim Hornegger
Projection-Based Phase Features for Localization of a Needle Tip in 2D Curvilinear Ultrasound

Localization of a needle’s tip in ultrasound images is often a challenge during percutaneous procedures due to the inherent limitations of ultrasound imaging. A new method is proposed for tip localization with curvilinear arrays using local image statistics over a region extended from the partially visible needle shaft. First, local phase-based image projections are extracted using orientation-tuned Log-Gabor filters to coarsely estimate the needle trajectory. The trajectory estimation is then improved using a best fit iterative method. To account for the typically discontinuous needle shaft appearance, a geometric optimization is then performed that connects the extracted inliers of the point cloud. In the final stage, the enhanced needle trajectory points are passed to a feature extraction method that uses a combination of spatially distributed image statistics to enhance the needle tip. The needle tip is localized using the enhanced images and the calculated trajectory. Validation results obtained from 150 ex vivo ultrasound scans show an accuracy of 0.43 ± 0.31 mm for needle tip localization.

Ilker Hacihaliloglu, Parmida Beigi, Gary Ng, Robert N. Rohling, Septimiu Salcudean, Purang Abolmaesumi
Inertial Measurement Unit for Radiation-Free Navigated Screw Placement in Slipped Capital Femoral Epiphysis Surgery

Slipped Capital Femoral Epiphysis (SCFE) is a common pathologic hip condition in adolescents. In the standard treatment, a surgeon relies on multiple intra-operative fluoroscopic X-ray images to plan the screw placement and to guide a drill along the intended trajectory. More complex cases could require more images, and thereby a higher radiation dose to both patient and surgeon. We introduce a novel technique using an Inertial Measurement Unit (IMU) for recovering and visualizing the orthopedic tool trajectory in two orthogonal X-ray images in real time. The proposed technique improves screw placement accuracy and reduces the number of required fluoroscopic X-ray images without changing the current workflow. We present results from a phantom study using 20 bones to perform drilling and screw placement tasks. While dramatically reducing the number of required fluoroscopic images from 20 to 4, the results also show improvement in accuracy compared to the manual SCFE approach.

Bamshad Azizi Koutenaei, Ozgur Guler, Emmanuel Wilson, Matthew Oetgen, Patrick Grimm, Nassir Navab, Kevin Cleary
Pictorial Structures on RGB-D Images for Human Pose Estimation in the Operating Room

Human pose estimation in the operating room (OR) can benefit many applications, such as surgical activity recognition, radiation exposure monitoring and performance assessment. However, the OR is a very challenging environment for computer vision systems due to limited camera positioning possibilities, severe illumination changes and similar colors of clothes and equipments. This paper tackles the problem of human pose estimation in the OR using RGB-D images, hypothesizing that the combination of depth and color information will improve the pose estimation results in such a difficult environment. We propose an approach based on pictorial structures that makes use of both channels of the RGB-D camera and also introduce a new feature descriptor for depth images, called histogram of depth differences (HDD), that captures local depth level changes. To quantitatively evaluate the proposed approach, we generate a novel dataset by manually annotating images recorded from different camera views during several days of live surgeries. Our experiments show that the pictorial structures (PS) approach applied on depth images using HDD outperforms the state-of-the art PS approach applied on the corresponding color images by over 11%. Furthermore, the proposed HDD descriptor has superior performance when compared to two other classical descriptors applied on depth images. Finally, the appearance models generated from the depth images perform better than those generated from the color images, and the combination of both improves the overall results. We therefore conclude that it is highly beneficial to use depth information in the pictorial structure model and also for human pose estimation in operating rooms.
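To make the descriptor idea concrete, here is a minimal sketch (an assumption for illustration, not the authors' implementation) of a histogram-of-depth-differences style feature: for every pixel in a depth patch, depth differences to a fixed set of neighbour offsets are quantised into bins and accumulated. The offsets and bin edges below are illustrative choices.

import numpy as np

def hdd_descriptor(depth_patch, offsets=((0, 4), (4, 0), (4, 4), (-4, 4)),
                   bin_edges=(-np.inf, -100, -50, -20, -5, 5, 20, 50, 100, np.inf)):
    """depth_patch: 2D array of depth values (e.g. in millimetres)."""
    h, w = depth_patch.shape
    n_bins = len(bin_edges) - 1
    hist = np.zeros(len(offsets) * n_bins)
    for k, (dy, dx) in enumerate(offsets):
        # region where both a pixel and its offset neighbour lie inside the patch
        y0, y1 = max(0, -dy), min(h, h - dy)
        x0, x1 = max(0, -dx), min(w, w - dx)
        diffs = depth_patch[y0 + dy:y1 + dy, x0 + dx:x1 + dx] - depth_patch[y0:y1, x0:x1]
        counts, _ = np.histogram(diffs, bins=bin_edges)
        hist[k * n_bins:(k + 1) * n_bins] = counts
    return hist / max(hist.sum(), 1.0)      # normalise so the descriptor is patch-size invariant

if __name__ == "__main__":
    patch = 1500 + 30 * np.random.randn(32, 32)   # synthetic depth patch
    print(hdd_descriptor(patch).shape)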

Abdolrahim Kadkhodamohammadi, Afshin Gangi, Michel de Mathelin, Nicolas Padoy
Interventional Photoacoustic Imaging of the Human Placenta with Ultrasonic Tracking for Minimally Invasive Fetal Surgeries

Image guidance plays a central role in minimally invasive fetal surgery such as photocoagulation of inter-twin placental anastomosing vessels to treat twin-to-twin transfusion syndrome (TTTS). Fetoscopic guidance provides insufficient sensitivity for imaging the vasculature that lies beneath the fetal placental surface due to strong light scattering in biological tissues. Incomplete photocoagulation of anastomoses is associated with postoperative complications and higher perinatal mortality. In this study, we investigated the use of multi-spectral photoacoustic (PA) imaging for better visualization of the placental vasculature. Excitation light was delivered with an optical fiber with dimensions that are compatible with the working channel of a fetoscope. Imaging was performed on an ex vivo normal term human placenta collected at Caesarean section birth. The photoacoustically-generated ultrasound signals were received by an external clinical linear array ultrasound imaging probe. A vein under illumination on the fetal placenta surface was visualized with PA imaging, and good correspondence was obtained between the measured PA spectrum and the optical absorption spectrum of deoxygenated blood. The delivery fiber had an attached fiber optic ultrasound sensor positioned directly adjacent to it, so that its spatial position could be tracked by receiving transmissions from the ultrasound imaging probe. This study provides strong indications that PA imaging in combination with ultrasonic tracking could be useful for detecting the human placental vasculature during minimally invasive fetal surgery.

Wenfeng Xia, Efthymios Maneas, Daniil I. Nikitichev, Charles A. Mosse, Gustavo Sato dos Santos, Tom Vercauteren, Anna L. David, Jan Deprest, Sebastien Ourselin, Paul C. Beard, Adrien E. Desjardins
Automated Segmentation of Surgical Motion for Performance Analysis and Feedback

Advances in technology have motivated the increasing use of virtual reality simulation-based training systems in surgical education, as well as the use of motion capture systems to record surgical performance. These systems have the ability to collect large volumes of trajectory data. The capability to analyse motion data in a meaningful manner is valuable in characterising and evaluating the quality of surgical technique, and in facilitating the development of intelligent self-guided training systems with automated performance feedback. To this end, we propose an automatic trajectory segmentation technique, which divides surgical tool trajectories into their component movements according to spatio-temporal features. We evaluate this technique on two different temporal bone surgery tasks requiring the use of distinct surgical techniques and show that the proposed approach achieves higher accuracy compared to an existing method.

Yun Zhou, Ioanna Ioannou, Sudanthi Wijewickrema, James Bailey, Gregor Kennedy, Stephen O’Leary
Vision-Based Intraoperative Cone-Beam CT Stitching for Non-overlapping Volumes

Cone-Beam Computed Tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While providing crucial intraoperative imaging, CBCT is bounded by its limited imaging volume, motivating the use of image stitching techniques. Current methods rely on overlapping volumes, leading to an excessive amount of radiation exposure, or on external tracking hardware, which may increase the setup complexity. We attach an optical camera to a CBCT enabled C-arm, and co-register the video and X-ray views. Our novel algorithm recovers the spatial alignment of non-overlapping CBCT volumes based on the observed optical views, as well as the laser projection provided by the X-ray system. First, we estimate the transformation between two volumes by automatic detection and matching of natural surface features during the patient motion. Then, we recover 3D information by reconstructing the projection of the positioning-laser onto an unknown curved surface, which enables the estimation of the unknown scale. We present a full evaluation of the methodology, by comparing vision- and registration-based stitching.
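As a minimal sketch of the surface-feature step mentioned in this abstract (an assumption, not the authors' pipeline), the snippet below detects and matches natural features between two optical camera frames and robustly fits a 2D similarity transform from the matches; recovering the metric 3D alignment of the CBCT volumes would additionally require the laser-based scale estimation described above. Function and variable names are illustrative.

import cv2
import numpy as np

def estimate_relative_motion(frame_a, frame_b, max_features=1000):
    """frame_a, frame_b: grayscale uint8 camera frames observed during patient motion."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_a, desc_a = orb.detectAndCompute(frame_a, None)
    kp_b, desc_b = orb.detectAndCompute(frame_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # Robustly fit a similarity transform (rotation, translation, scale) with RANSAC.
    transform, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b, method=cv2.RANSAC)
    n_inliers = int(inliers.sum()) if inliers is not None else 0
    return transform, n_inliers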

Bernhard Fuerst, Javad Fotouhi, Nassir Navab
Visibility Map: A New Method in Evaluation Quality of Optical Colonoscopy

Optical colonoscopy is performed by insertion of a long flexible endoscope into the colon. Inspecting the whole colonic surface for abnormalities has been a main concern in estimating the quality of a colonoscopy procedure. In this paper we aim to estimate areas that have not been inspected thoroughly, as a quality metric, by generating a visibility map of the colon surface. The colon was modeled as a cylinder. By estimating the camera motion parameters between consecutive frames, circumferential bands from the cylinder of the colon surface were extracted. Registering these extracted band images from adjacent video frames provides a visibility map, which can reveal areas left uninspected by clinicians in colonoscopy videos. The method was validated using a set of realistic videos generated with a colonoscopy simulator, for which the ground truth was known, and by having a clinical expert analyze results from processing actual colonoscopy videos. Our method was able to identify 100% of uncovered areas on simulated data and achieved a sensitivity of 96% and a precision of 74% on real videos. The results suggest that the visibility map can increase clinicians’ awareness of uncovered areas, and would reduce the chance of missed polyps.

Mohammad Ali Armin, Hans De Visser, Girija Chetty, Cedric Dumas, David Conlan, Florian Grimpen, Olivier Salvado
Tissue Surface Reconstruction Aided by Local Normal Information Using a Self-calibrated Endoscopic Structured Light System

The tissue surface shape provides important information for both tissue pathology detection and augmented reality. Previously a miniaturised structured light (SL) illumination probe (1.9 mm diameter) has been developed to generate sparsely reconstructed tissue surfaces in minimally invasive surgery (MIS). The probe is inserted through the biopsy channel of a standard endoscope and projects a pattern of spots with unique spectra onto the target tissue. The tissue surface can be recovered by light pattern decoding and using parallax. This paper introduces further algorithmic developments and analytical work to allow free-hand manipulation of the SL probe, to improve the light pattern decoding result and to increase the reconstruction accuracy. Firstly the “normalized cut” algorithm was applied to segment the light pattern. Then an iterative procedure was investigated to update both the pattern decoding and the relative position between the camera and the probe simultaneously. Based on planar homography computation, the orientations of local areas where the spots are located in 3D space were estimated. The acquired surface normal information was incorporated with the sparse spot correspondences to constrain the fitting of a thin-plate spline during surface reconstruction. This SL system was tested in phantom, ex vivo, and in vivo experiments, and the potential of applying this system in surgical environments was demonstrated.

Jianyu Lin, Neil T. Clancy, Danail Stoyanov, Daniel S. Elson
Surgical Augmented Reality with Topological Changes

The visualization of internal structures of organs in minimally invasive surgery is an important avenue for improving the perception of the surgeon, or for supporting planning and decision systems. However, current methods dealing with non-rigid augmented reality only provide augmentation when the topology of the organ is not modified. In this paper we solve this shortcoming by introducing a method for physics-based non-rigid augmented reality. Singularities caused by topological changes are detected and propagated to the pre-operative model. This significantly improves the coherence between the actual laparoscopic view and the model, and provides added value in terms of navigation and decision making. Our real time augmentation algorithm is assessed on a video showing the cut of a porcine liver’s lobe in minimally invasive surgery.

Christoph J. Paulus, Nazim Haouchine, David Cazier, Stéphane Cotin
Towards an Efficient Computational Framework for Guiding Surgical Resection through Intra-operative Endo-microscopic Pathology

Precise detection and surgical resection of tumors during an operation greatly increases the efficacy of the overall procedure. Emerging experimental in-vivo imaging technologies, such as Confocal Laser Endomicroscopy (CLE), could potentially assist surgeons in examining brain tissue at histological scale in real-time during the operation. However, interpreting these images in real-time is a challenging task for neurosurgeons, primarily due to the low signal-to-noise ratio and the variability in the patterns expressed within these images by various examined tissue types. In this paper, we present a comprehensive computational framework capable of automatic brain tumor classification in real-time. Specifically, our contributions include: (a) an end-to-end computational pipeline where a variety of feature extraction methods, encoding schemes, and classification algorithms can be readily deployed, (b) a thorough evaluation of state-of-the-art low-level image features and popular encoding techniques in the context of CLE imagery, and finally, (c) a highly optimized feature pooling method based on codeword proximity. The proposed system can effectively classify two types of commonly diagnosed brain tumors in CLE sequences captured in real-time with close to 90% accuracy. Extensive experiments on a dataset of 117 videos demonstrate the efficacy of our system.

Shaohua Wan, Shanhui Sun, Subhabrata Bhattacharya, Stefan Kluckner, Alexander Gigler, Elfriede Simon, Maximilian Fleischer, Patra Charalampaki, Terrence Chen, Ali Kamen
Automated Assessment of Surgical Skills Using Frequency Analysis

We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. Video analysis techniques for extracting motion quality via frequency coefficients are introduced. The framework is tested on videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between different skill levels of the surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.
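As a minimal sketch of the frequency-feature idea described above (an assumption about the setup, not the authors' exact features), the snippet below converts a tool-trajectory time series into low-order DFT and DCT coefficient magnitudes per motion channel, which can then be fed to any classifier of skill level.

import numpy as np
from scipy.fft import fft, dct

def frequency_features(trajectory, n_coeffs=10):
    """trajectory: (T, D) array, e.g. tool position/orientation sampled over time."""
    feats = []
    for d in range(trajectory.shape[1]):
        signal = trajectory[:, d] - trajectory[:, d].mean()           # remove the DC offset
        feats.append(np.abs(fft(signal))[:n_coeffs])                  # low-frequency DFT magnitudes
        feats.append(np.abs(dct(signal, norm='ortho'))[:n_coeffs])    # low-order DCT coefficients
    return np.concatenate(feats)

if __name__ == "__main__":
    traj = np.cumsum(np.random.randn(500, 3), axis=0)   # synthetic 3D tool path
    print(frequency_features(traj).shape)               # 3 channels * 2 transforms * 10 coefficients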

Aneeq Zia, Yachna Sharma, Vinay Bettadapura, Eric L. Sarin, Mark A. Clements, Irfan Essa
Robust Live Tracking of Mitral Valve Annulus for Minimally-Invasive Intervention Guidance

Mitral valve (MV) regurgitation is an important cardiac disorder that affects 2-3% of the Western population. While valve repair is commonly performed under open-heart surgery, an increasing number of transcatheter MV repair (TMVR) strategies are being developed. To be successful, TMVR requires extensive image guidance due to the complexity of MV physiology and of the therapies, in particular during device deployment. New trans-esophageal echocardiography (TEE) transducers enable real-time, full-volume imaging of the valve including 3D anatomy and 3D color-Doppler flow. Such new transducers open a large range of applications for TMVR guidance, like the 3D assessment of the impact of a therapy on the MV function. In this manuscript we propose an algorithm towards the goal of live quantification of the MV anatomy. Leveraging the recent advances in ultrasound hardware, and combining machine learning approaches, predictive search strategies and efficient image-based tracking algorithms, we propose a novel method to automatically detect and track the MV annulus over very long image sequences. The method was tested on 12 annotated 4D TEE sequences acquired in patients suffering from a large variety of diseases. These sequences have been rigidly transformed to simulate probe motion. The obtained results showed a tracking accuracy of 4.04 mm mean error, while demonstrating robustness when compared to purely image-based methods. Our approach therefore paves the way towards quantitative guidance of TMVR through live 3D valve modeling.

Ingmar Voigt, Mihai Scutaru, Tommaso Mansi, Bogdan Georgescu, Noha El-Zehiry, Helene Houle, Dorin Comaniciu
Motion-Aware Mosaicing for Confocal Laser Endomicroscopy

Probe-based Confocal Laser Endomicroscopy (pCLE) provides physicians with real-time access to histological information during standard endoscopy procedures, through high-resolution cellular imaging of internal tissues. Earlier work on mosaicing has enhanced the potential of this imaging modality by meeting the need to get a complete representation of the imaged region. However, with these approaches, the dynamic information, which may be of clinical interest, is lost. In this study, we propose a new mosaic construction algorithm for pCLE sequences based on a min-cut optimization and gradient-domain composition. Its main advantage is that the motion of some structures within the tissue, such as blood cells in capillaries, is taken into account. This allows physicians to get both a sharper static representation and a dynamic representation of the imaged tissue. Results on 16 sequences acquired in vivo on six different organs demonstrate the clinical relevance of our approach.

Jessie Mahé, Nicolas Linard, Marzieh Kohandani Tafreshi, Tom Vercauteren, Nicholas Ayache, Francois Lacombe, Remi Cuingnet
A Registration Approach to Endoscopic Laser Speckle Contrast Imaging for Intrauterine Visualisation of Placental Vessels

Intrauterine interventions such as twin-to-twin transfusion syndrome procedure require accurate mapping of the fetal placental vasculature to ensure complete photocoagulation of vascular anastomoses. However, surgeons are currently limited to fetoscopy and external ultrasound imaging, which are unable to accurately identify all vessels especially those that are narrow and at the periphery. Laser speckle contrast imaging (LSCI) is an optical method for imaging blood flow that is emerging as an intraoperative tool for neurosurgery. Here we explore the application of LSCI to minimally invasive fetal surgery, with an endoscopic LSCI system based on a 2.7-mm-diameter fetoscope. We establish using an optical phantom that it can image flow in 1-mm-diameter vessels as far as 4 mm below the surface. We demonstrate that a spatiotemporal algorithm produces the clearest images of vessels within 200 ms, and that speckle contrast images can be accurately registered using groupwise registration to correct for significant motion of target or probe. When tested on a perfused term ex vivo human placenta, our endoscopic LSCI system revealed small capillaries not evident in the fetoscopic images.

Gustavo Sato dos Santos, Efthymios Maneas, Daniil Nikitichev, Anamaria Barburas, Anna L. David, Jan Deprest, Adrien Desjardins, Tom Vercauteren, Sebastien Ourselin
Marker-Less AR in the Hybrid Room Using Equipment Detection for Camera Relocalization

Augmented reality (AR) permits clinicians to visualize directly in their field of view key information related to the performance of a surgery. To track the user’s viewpoint, current systems often use markers or register a reconstructed mesh with an a priori model of the scene. This only allows for a limited set of viewpoints and positions near the patient. Indeed, markers can be intrusive and interfere with the procedure. Furthermore, changes in the positions of equipment or clinicians can invalidate a priori models. Instead, we propose a marker-free mobile AR system based on a KinectFusion-like approach for camera tracking and equipment detection for camera relocalization. Our approach relies on the use of multiple RGBD cameras: one camera is rigidly attached to a hand-held screen where the AR visualization is displayed, while two others are rigidly fixed to the ceiling. The inclusion of two static cameras enables us to dynamically recompute the 3D model of the room, as required for relocalization when changes occur in the scene. Fast relocalization can be performed by looking at a piece of equipment that is not required to remain static. This is particularly advantageous during hybrid surgeries, where an obvious choice for such equipment is the intraoperative imaging device, which is large, can be seen in all views, but can also move. We propose to detect the equipment using a template based approach and further make use of the static cameras to speed-up the detection in the moving view by dynamically adapting the subset of tested templates according to the actual room layout. The approach is illustrated in a hybrid room through a radiation monitoring application where a virtual representation of the radiation cone beam, main X-ray scattering direction and dose distribution deposited on the surface of the patient are displayed on the hand-held screen.

Nicolas Loy Rodas, Fernando Barrera, Nicolas Padoy
Hybrid Retargeting for High-Speed Targeted Optical Biopsies

With the increasing maturity of optical biopsy techniques, routine clinical use has become more widespread. This wider adoption of the technique demands effective tracking and retargeting of the biopsy sites, as no visible markers are left following examination. This study presents a high-speed framework for intra-procedural retargeting of probe-based optical biopsies in gastrointestinal endoscopy. A probe tip localisation method using active shape models and geometric heuristics, which eliminates the traditional dependency on shaft visibility, is proposed for automated initialisation. Partial occlusion and tissue deformation are addressed by exploiting the benefits of indirect and direct tracking through a novel combination of geometric association and online learning. Robustness to rapid endoscope motion and improvements in computational efficiency are achieved by restricting processing to the automatically detected video content area and through a feature-based rejection of non-informative frames. Performance evaluation in phantom and in vivo environments demonstrates accurate biopsy site initialisation, robust retargeting and significant improvements over the state-of-the-art in processing time and memory usage.

André Mouton, Menglong Ye, François Lacombe, Guang-Zhong Yang
Visual Force Feedback for Hand-Held Microsurgical Instruments

Microsurgery is technically challenging, demanding both rigorous precision under the operating microscope and great care when handling tissue. Applying excessive force can result in irreversible tissue injury, but sufficient force must be exerted to carry out manoeuvres in an efficient manner. Technological advances in hand-held instruments have allowed the integration of force sensing capabilities into surgical tools, resulting in the possibility of force feedback during an operation. This paper presents a novel method of graduated online visual force-feedback for hand-held microsurgical instruments. Unlike existing visual force-feedback techniques, the force information is integrated into the surgical scene by highlighting the area around the point of contact while preserving salient anatomical features. We demonstrate that the proposed technique can be integrated seamlessly with image guidance techniques. Critical anatomy beyond the exposed tissue surface is revealed using an augmented reality overlay when the user is exerting large forces within their proximity. The force information is further used to improve the quality of the augmented reality by displacing the overlay based on the forces exerted. Detailed user studies were performed to assess the efficacy of the proposed method.

Gauthier Gras, Hani J. Marcus, Christopher J. Payne, Philip Pratt, Guang-Zhong Yang
Automated Three-Piece Digital Dental Articulation

In craniomaxillofacial (CMF) surgery, a critical step is to reestablish dental occlusion. Digitally establishing new dental occlusion is extremely difficult. This is especially true when the maxilla is segmentalized into 3 pieces, a common procedure in CMF surgery. In this paper, we present a novel midline-guided occlusal optimization (MGO) approach to automatically and efficiently reestablish dental occlusion for 3-piece maxillary surgery. Our MGO approach consists of 2 main steps. The anterior segment of the maxilla is first aligned to the intact mandible using our ergodic midline-match algorithm. The right and left posterior segments are then aligned to the mandible in sequence using an improved iterative surface-based minimum distance mapping algorithm. Our method has been validated using 15 sets of digital dental models. The results showed that our algorithm-generated 3-piece articulation is more efficient and effective than the current standard-of-care method. The results demonstrated our approach’s significant clinical impact and technical contributions.

Jianfu Li, Flavio Ferraz, Shunyao Shen, Yi-Fang Lo, Xiaoyan Zhang, Peng Yuan, Zhen Tang, Ken-Chung Chen, Jaime Gateno, Xiaobo Zhou, James J. Xia
A System for MR-Ultrasound Guidance during Robot-Assisted Laparoscopic Radical Prostatectomy

We describe a new ultrasound and magnetic resonance image guidance system for robot assisted radical prostatectomy and its first use in patients. This system integrates previously developed and new components and presents to the surgeon preoperative magnetic resonance images (MRI) registered to real-time 2D ultrasound to inform the surgeon of anatomy and cancer location. At the start of surgery, a trans-rectal ultrasound (TRUS) is manually positioned for prostate imaging using a standard brachytherapy stepper. When the anterior prostate surface is exposed, the TRUS, which can be rotated under computer control, is registered to one of the da Vinci patient-side manipulators by recognizing the tip of the da Vinci instrument at multiple locations on the tissue surface. A 3D TRUS volume is then taken, which is segmented semi-automatically. A segmentation-based, biomechanically regularized deformable registration algorithm is used to register the 3D TRUS image to preoperatively acquired and annotated T2-weighted images, which are deformed to the patient. MRI and TRUS images can then be pointed at and examined by the surgeon at the da Vinci console. We outline the approaches used and present our experience with the system in the first two patients. While this work is preliminary, the feasibility of fused MRI and TRUS during radical prostatectomy has not been demonstrated before. Given the significant rates of positive surgical margins still reported in the literature, such a system has potentially significant clinical benefits.

Omid Mohareri, Guy Nir, Julio Lobo, Richard Savdie, Peter Black, Septimiu Salcudean

Computer Aided Diagnosis: Machine Learning

Frontmatter
Automatic Fetal Ultrasound Standard Plane Detection Using Knowledge Transferred Recurrent Neural Networks

Accurate acquisition of fetal ultrasound (US) standard planes is one of the most crucial steps in obstetric diagnosis. The conventional way of standard plane acquisition requires a thorough knowledge of fetal anatomy and intensive manual labors. Hence, automatic approaches are highly demanded in clinical practice. However, automatic detection of standard planes containing key anatomical structures from US videos remains a challenging problem due to the high intra-class variations of standard planes. Unlike previous studies that developed specific methods for different anatomical standard planes respectively, we present a general framework to detect standard planes from US videos automatically. Instead of utilizing hand-crafted visual features, our framework explores spatio-temporal feature learning with a novel knowledge transferred recurrent neural network (T-RNN), which incorporates a deep hierarchical visual feature extractor and a temporal sequence learning model. In order to extract visual features effectively, we propose a joint learning framework with knowledge transfer across multi-tasks to address the insufficiency issue of limited training data. Extensive experiments on different US standard planes with hundreds of videos corroborate that our method can achieve promising results, which outperform state-of-the-art methods.

Hao Chen, Qi Dou, Dong Ni, Jie-Zhi Cheng, Jing Qin, Shengli Li, Pheng-Ann Heng
Automatic Localization and Identification of Vertebrae in Spine CT via a Joint Learning Model with Deep Neural Networks

Accurate localization and identification of vertebrae in 3D spinal images is essential for many clinical tasks. However, automatic localization and identification of vertebrae remains challenging due to similar appearance of vertebrae, abnormal pathological curvatures and image artifacts induced by surgical implants. Traditional methods relying on hand-crafted low level features and/or a priori knowledge usually fail to overcome these challenges on arbitrary CT scans. We present a robust and efficient approach to automatically locating and identifying vertebrae in 3D CT volumes by exploiting high level feature representations with deep convolutional neural network (CNN). A novel joint learning model with CNN (J-CNN) is proposed by considering both the appearance of vertebrae and the pairwise conditional dependency of neighboring vertebrae. The J-CNN can effectively identify the type of vertebra and eliminate false detections based on a set of coarse vertebral centroids generated from a random forest classifier. Furthermore, the predicted centroids are refined by a shape regression model. Our approach was quantitatively evaluated on the dataset of MICCAI 2014 Computational Challenge on Vertebrae Localization and Identification. Compared with the state-of-the-art method [1], our approach achieved a large margin with 10.12% improvement of the identification rate and smaller localization errors.

Hao Chen, Chiyao Shen, Jing Qin, Dong Ni, Lin Shi, Jack C. Y. Cheng, Pheng-Ann Heng
Identification of Cerebral Small Vessel Disease Using Multiple Instance Learning

Cerebral small vessel disease (SVD) is a common cause of ageing-associated physical and cognitive impairment. Identifying SVD is important for both clinical and research purposes but is usually dependent on radiologists’ evaluation of brain scans. Computed tomography (CT) is the most widely used brain imaging technique but for SVD it shows a low signal-to-noise ratio, and consequently poor inter-rater reliability. We therefore propose a novel framework based on multiple instance learning (MIL) to distinguish between absent/mild SVD and moderate/severe SVD. Intensity patches are extracted from regions with high probability of containing lesions. These are then used as instances in MIL for the identification of SVD. A large baseline CT dataset, consisting of 590 CT scans, was used for evaluation. We achieved approximately 75% accuracy in classifying two different types of SVD, which is high for this challenging problem. Our results outperform those obtained by either standard machine learning methods or current clinical practice.

Liang Chen, Tong Tong, Chin Pang Ho, Rajiv Patel, David Cohen, Angela C. Dawson, Omid Halse, Olivia Geraghty, Paul E. M. Rinne, Christopher J. White, Tagore Nakornchai, Paul Bentley, Daniel Rueckert
Why Does Synthesized Data Improve Multi-sequence Classification?

The classification and registration of incomplete multi-modal medical images, such as multi-sequence MRI with missing sequences, can sometimes be improved by replacing the missing modalities with synthetic data. This may seem counter-intuitive: synthetic data is derived from data that is already available, so it does not add new information. Why can it still improve performance? In this paper we discuss possible explanations. If the synthesis model is more flexible than the classifier, the synthesis model can provide features that the classifier could not have extracted from the original data. In addition, using synthetic information to complete incomplete samples increases the size of the training set.

We present experiments with two classifiers, linear support vector machines (SVMs) and random forests, together with two synthesis methods that can replace missing data in an image classification problem: neural networks and restricted Boltzmann machines (RBMs). We used data from the BRATS 2013 brain tumor segmentation challenge, which includes multi-modal MRI scans with T1, T1 post-contrast, T2 and FLAIR sequences. The linear SVMs appear to benefit from the complex transformations offered by the synthesis models, whereas the random forests mostly benefit from having more training data. Training on the hidden representation from the RBM brought the accuracy of the linear SVMs close to that of random forests.
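A minimal sketch of the kind of comparison discussed above (an assumption about the setup on synthetic stand-in data, not the paper's experiment): learn a hidden representation with a restricted Boltzmann machine and compare a linear SVM on raw features against a linear SVM on the RBM's hidden representation, with a random forest as a further baseline.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=64, random_state=0)  # stand-in data

linear_svm = make_pipeline(MinMaxScaler(), LinearSVC(dual=False))
rbm_svm = make_pipeline(MinMaxScaler(),                                   # RBM expects inputs in [0, 1]
                        BernoulliRBM(n_components=128, learning_rate=0.05,
                                     n_iter=20, random_state=0),
                        LinearSVC(dual=False))
forest = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("linear SVM", linear_svm),
                    ("RBM + linear SVM", rbm_svm),
                    ("random forest", forest)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())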

Gijs van Tulder, Marleen de Bruijne
Label Stability in Multiple Instance Learning

We address the problem of instance label stability in multiple instance learning (MIL) classifiers. These classifiers are trained only on globally annotated images (bags), but often can provide fine-grained annotations for image pixels or patches (instances). This is interesting for computer aided diagnosis (CAD) and other medical image analysis tasks for which only a coarse labeling is provided. Unfortunately, the instance labels may be unstable. This means that a slight change in training data could potentially lead to abnormalities being detected in different parts of the image, which is undesirable from a CAD point of view. Despite MIL gaining popularity in the CAD literature, this issue has not yet been addressed. We investigate the stability of instance labels provided by several MIL classifiers on 5 different datasets, of which 3 are medical image datasets (breast histopathology, diabetic retinopathy and computed tomography lung images). We propose an unsupervised measure to evaluate instance stability, and demonstrate that a performance-stability trade-off can be made when comparing MIL classifiers.
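The following is a minimal sketch of one plausible formalisation of instance-label stability (an assumption for illustration; it is not necessarily the measure proposed in the paper): train an instance-level MIL classifier on several bag-level bootstrap resamples and measure how often each test instance keeps its majority predicted label.

import numpy as np

def instance_label_stability(train_and_predict, bags, bag_labels, test_instances,
                             n_repeats=10, seed=0):
    """train_and_predict(bags, bag_labels, X) -> instance labels in {0, 1} for rows of X."""
    rng = np.random.default_rng(seed)
    n_bags = len(bags)
    all_preds = []
    for _ in range(n_repeats):
        idx = rng.choice(n_bags, size=n_bags, replace=True)      # bootstrap at bag level
        preds = train_and_predict([bags[i] for i in idx],
                                  [bag_labels[i] for i in idx],
                                  test_instances)
        all_preds.append(np.asarray(preds))
    all_preds = np.stack(all_preds)                               # (n_repeats, n_instances)
    majority = all_preds.mean(axis=0) >= 0.5                      # majority label per instance
    agreement = np.mean(all_preds == majority, axis=0)            # agreement with the majority
    return agreement.mean()                                       # 1.0 means perfectly stable labels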

Veronika Cheplygina, Lauge Sørensen, David M. J. Tax, Marleen de Bruijne, Marco Loog
Spectral Forests: Learning of Surface Data, Application to Cortical Parcellation

This paper presents a new method for classifying surface data via spectral representations of shapes. Our approach benefits classification problems that involve data living on surfaces, such as in cortical parcellation. For instance, current methods for labeling cortical points into surface parcels often involve a slow mesh deformation toward pre-labeled atlases, requiring as much as 4 hours with the established FreeSurfer. This may burden neuroscience studies involving region-specific measurements. Learning techniques offer an attractive computational advantage, however, their representation of spatial information, typically defined in a Euclidean domain, may be inadequate for cortical parcellation. Indeed, cortical data resides on surfaces that are highly variable in space and shape. Consequently, Euclidean representations of surface data may be inconsistent across individuals. We propose to fundamentally change the spatial representation of surface data, by exploiting spectral coordinates derived from the Laplacian eigenfunctions of shapes. They have the advantage over Euclidean coordinates of being geometry aware and of parameterizing surfaces explicitly. This change of paradigm, from Euclidean to spectral representations, enables a classifier to be applied directly on surface data via spectral coordinates. In this paper, we build upon the successful Random Decision Forests algorithm and improve its spatial representation with spectral features. Our method, Spectral Forests, is shown to significantly improve the accuracy of cortical parcellations over standard Random Decision Forests (74% versus 28% Dice overlaps), and to produce accuracy equivalent to FreeSurfer in a fraction of its time (23 seconds versus 3 to 4 hours).
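As a minimal sketch of the core idea (an assumption for illustration, not the authors' implementation), the snippet below computes spectral coordinates as low-order eigenvectors of a mesh graph Laplacian and appends them to per-vertex features before training a standard random forest. It assumes a modest mesh size so that the small-magnitude eigensolve is tractable.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from sklearn.ensemble import RandomForestClassifier

def spectral_coordinates(edges, n_vertices, n_coords=5):
    """edges: (E, 2) integer array of mesh edges; returns (n_vertices, n_coords) embedding."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    adj = coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                     shape=(n_vertices, n_vertices))
    lap = laplacian(adj.tocsr(), normed=True)
    # Smallest eigenvectors of the normalised Laplacian; drop the trivial constant one.
    _, vecs = eigsh(lap, k=n_coords + 1, which='SM')
    return vecs[:, 1:]

def train_spectral_forest(per_vertex_features, edges, labels):
    spec = spectral_coordinates(edges, per_vertex_features.shape[0])
    X = np.hstack([per_vertex_features, spec])     # geometry-aware feature vector per vertex
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)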

Herve Lombaert, Antonio Criminisi, Nicholas Ayache
DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation

Automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits previous segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart or kidneys. In this paper, we present a probabilistic bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans, using multi-level deep convolutional networks (ConvNets). We propose and evaluate several variations of deep ConvNets in the context of hierarchical, coarse-to-fine classification on image patches and regions, i.e. superpixels. We first present a dense labeling of local image patches via P-ConvNet and nearest neighbor fusion. Then we describe a regional ConvNet (R1-ConvNet) that samples a set of bounding boxes around each image superpixel at different scales of contexts in a “zoom-out” fashion. Our ConvNets learn to assign class probabilities for each superpixel region of being pancreas. Last, we study a stacked R2-ConvNet leveraging the joint space of CT intensities and the P-ConvNet dense probability maps. Both 3D Gaussian smoothing and 2D conditional random fields are exploited as structured predictions for post-processing. We evaluate on CT images of 82 patients in 4-fold cross-validation. We achieve a Dice Similarity Coefficient of 83.6±6.3% in training and 71.8±10.7% in testing.
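To illustrate the dense patch-labeling stage, here is a minimal toy sketch (an assumption, far smaller and simpler than the paper's networks): a small 2D CNN that maps an axial CT patch to a probability that its central pixel belongs to the pancreas. Patch size and layer widths are illustrative.

import torch
import torch.nn as nn

class PatchConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):                          # x: (batch, 1, patch, patch) CT patches
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))   # probability of "pancreas" at the patch centre

if __name__ == "__main__":
    net = PatchConvNet()
    patches = torch.randn(8, 1, 64, 64)            # stand-in mini-batch of CT patches
    print(net(patches).shape)                      # torch.Size([8, 1])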

Holger R. Roth, Le Lu, Amal Farag, Hoo-Chang Shin, Jiamin Liu, Evrim B. Turkbey, Ronald M. Summers
3D Deep Learning for Efficient and Robust Landmark Detection in Volumetric Data

Recently, deep learning has demonstrated great success in computer vision with the capability to learn powerful image features from a large training set. However, most of the published work has been confined to solving 2D problems, with a few limited exceptions that treated the 3D space as a composition of 2D orthogonal planes. The challenge of 3D deep learning is due to a much larger input vector, compared to 2D, which dramatically increases the computation time and the chance of over-fitting, especially when combined with limited training samples (hundreds to thousands), typical for medical imaging applications. To address this challenge, we propose an efficient and robust deep learning algorithm capable of full 3D detection in volumetric data. A two-step approach is exploited for efficient detection. A shallow network (with one hidden layer) is used for the initial testing of all voxels to obtain a small number of promising candidates, followed by more accurate classification with a deep network. In addition, we propose two approaches, i.e., separable filter decomposition and network sparsification, to speed up the evaluation of a network. To mitigate the over-fitting issue, thereby increasing detection robustness, we extract small 3D patches from a multi-resolution image pyramid. The deeply learned image features are further combined with Haar wavelet features to increase the detection accuracy. The proposed method has been quantitatively evaluated for carotid artery bifurcation detection on a head-neck CT dataset from 455 patients. Compared to the state-of-the-art, the mean error is reduced by more than half, from 5.97 mm to 2.64 mm, with a detection speed of less than 1 s/volume.

Yefeng Zheng, David Liu, Bogdan Georgescu, Hien Nguyen, Dorin Comaniciu
A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI

In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining Deep Auto-Encoder (DAE) and Hidden Markov Model (HMM). The roles of DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower dimension space, and to model dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved the maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature.
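A minimal sketch of the generative classification step (an assumption about the pipeline on synthetic stand-in data, not the authors' code; requires the hmmlearn package): fit one Gaussian HMM per class on low-dimensional feature sequences standing in for DAE-encoded rs-fMRI features, and label a test subject by the larger data likelihood.

import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def fake_sequences(n_subjects, length=120, dim=10, shift=0.0):
    """Stand-in for per-subject sequences of DAE-encoded rs-fMRI features."""
    return [rng.normal(shift, 1.0, size=(length, dim)) for _ in range(n_subjects)]

train = {"MCI": fake_sequences(20, shift=0.3), "HC": fake_sequences(20, shift=0.0)}
models = {}
for label, seqs in train.items():
    X = np.vstack(seqs)                                 # concatenate sequences for hmmlearn
    lengths = [len(s) for s in seqs]                    # per-sequence lengths
    models[label] = GaussianHMM(n_components=5, covariance_type="diag",
                                n_iter=50, random_state=0).fit(X, lengths)

test_seq = fake_sequences(1, shift=0.3)[0]
pred = max(models, key=lambda label: models[label].score(test_seq))   # higher log-likelihood wins
print("predicted class:", pred)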

Heung-Il Suk, Seong-Whan Lee, Dinggang Shen
Combining Unsupervised Feature Learning and Riesz Wavelets for Histopathology Image Representation: Application to Identifying Anaplastic Medulloblastoma

Medulloblastoma (MB) is a type of brain cancer that represents roughly 25% of all brain tumors in children. In the anaplastic medulloblastoma subtype, it is important to identify the degree of irregularity and lack of organization of cells, as this correlates with disease aggressiveness and is of clinical value when evaluating patient prognosis. This paper presents an image representation to distinguish these subtypes in histopathology slides. The approach combines learned features from (i) an unsupervised feature learning method using topographic independent component analysis that captures scale, color and translation invariances, and (ii) learned linear combinations of Riesz wavelets calculated at several orders and scales capturing the granularity of multiscale rotation-covariant information. The contribution of this work is to show that the combination of two complementary approaches for feature learning (unsupervised and supervised) improves the classification performance. Our approach outperforms the best methods in the literature with statistical significance, achieving 99% accuracy over region-based data comprising 7,500 square regions from 10 patient studies diagnosed with medulloblastoma (5 anaplastic and 5 non-anaplastic).

Sebastian Otálora, Angel Cruz-Roa, John Arevalo, Manfredo Atzori, Anant Madabhushi, Alexander R. Judkins, Fabio González, Henning Müller, Adrien Depeursinge
Automatic Coronary Calcium Scoring in Cardiac CT Angiography Using Convolutional Neural Networks

The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular events. Non-contrast enhanced cardiac CT is considered a reference for quantification of CAC. Recently, it has been shown that CAC may be quantified in cardiac CT angiography (CCTA). We present a pattern recognition method that automatically identifies and quantifies CAC in CCTA. The study included CCTA scans of 50 patients equally distributed over five cardiovascular risk categories. CAC in CCTA was identified in two stages. In the first stage, potential CAC voxels were identified using a convolutional neural network (CNN). In the second stage, candidate CAC lesions were extracted based on the CNN output for analyzed voxels and thereafter described with a set of features and classified using a Random Forest. Ten-fold stratified cross-validation experiments were performed. CAC volume was quantified per patient and compared with manual reference annotations in the CCTA scan. Bland-Altman bias and limits of agreement between reference and automatic annotations were -15 (-198 to 168) after the first stage and -3 (-86 to 79) after the second stage. The results show that CAC can be automatically identified and quantified in CCTA using the proposed method. This might obviate the need for a dedicated non-contrast-enhanced CT scan for CAC scoring, which is regularly acquired prior to a CCTA scan, and thus reduce the CT radiation dose received by patients.

Jelmer M. Wolterink, Tim Leiner, Max A. Viergever, Ivana Išgum
A Random Riemannian Metric for Probabilistic Shortest-Path Tractography

Shortest-path tractography (SPT) algorithms solve global optimization problems defined from local distance functions. As diffusion MRI data is inherently noisy, so are the voxelwise tensors from which local distances are derived. We extend Riemannian SPT by modeling the stochasticity of the diffusion tensor as a “random Riemannian metric”, where a geodesic is a distribution over tracts. We approximate this distribution with a Gaussian process and present a probabilistic numerics algorithm for computing the geodesic distribution. We demonstrate SPT improvements on data from the Human Connectome Project.

Søren Hauberg, Michael Schober, Matthew Liptrot, Philipp Hennig, Aasa Feragen
Deep Learning and Structured Prediction for the Segmentation of Mass in Mammograms

In this paper, we explore the use of deep convolution and deep belief networks as potential functions in structured prediction models for the segmentation of breast masses from mammograms. In particular, the structured prediction models are estimated with loss minimization parameter learning algorithms, representing: a) conditional random field (CRF), and b) structured support vector machine (SSVM). For the CRF model, we use the inference algorithm based on tree re-weighted belief propagation with truncated fitting training, and for the SSVM model the inference is based on graph cuts with maximum margin training. We show empirically the importance of deep learning methods in producing state-of-the-art results for both structured prediction models. In addition, we show that our methods produce results that can be considered the best results to date on DDSM-BCRP and INbreast databases. Finally, we show that the CRF model is significantly faster than SSVM, both in terms of inference and training time, which suggests an advantage of CRF models when combined with deep learning potential functions.

Neeraj Dhungel, Gustavo Carneiro, Andrew P. Bradley
Learning Tensor-Based Features for Whole-Brain fMRI Classification

This paper presents a novel tensor-based feature learning approach for whole-brain fMRI classification. Whole-brain fMRI data have high exploratory power, but they are challenging to deal with due to large numbers of voxels. A critical step for fMRI classification is dimensionality reduction, via feature selection or feature extraction. Most current approaches perform voxel selection based on feature selection methods. In contrast, feature extraction methods, such as principal component analysis (PCA), have limited usage on whole brain due to the small sample size problem and limited interpretability. To address these issues, we propose to directly extract features from natural tensor (rather than vector) representations of whole-brain fMRI using multilinear PCA (MPCA), and map MPCA bases to voxels for interpretability. Specifically, we extract low-dimensional tensors by MPCA, and then select a number of MPCA features according to the captured variance or mutual information as the input to SVM. To provide interpretability, we construct a mapping from the selected MPCA bases to raw voxels for localizing discriminating regions. Quantitative evaluations on challenging multiclass tasks demonstrate the superior performance of our proposed methods against the state-of-the-art, while qualitative analysis on localized discriminating regions shows the spatial coherence and interpretability of our mapping.

Xiaonan Song, Lingnan Meng, Qiquan Shi, Haiping Lu
Prediction of Trabecular Bone Anisotropy from Quantitative Computed Tomography Using Supervised Learning and a Novel Morphometric Feature Descriptor

Patient-specific biomechanical models including local bone mineral density and anisotropy have gained importance for assessing musculoskeletal disorders. However, the trabecular bone anisotropy captured by high-resolution imaging is only available at the peripheral skeleton in clinical practice. In this work, we propose a supervised learning approach to predict trabecular bone anisotropy that builds on a novel set of pose-invariant feature descriptors. The statistical relationship between trabecular bone anisotropy and feature descriptors was learned from a database of pairs of high-resolution QCT and clinical QCT reconstructions. On a set of leave-one-out experiments, we compared the accuracy of the proposed approach to previous ones, and report a mean prediction error of 6% for the tensor norm, 6% for the degree of anisotropy and 19° for the principal tensor direction. These findings show the potential of the proposed approach to predict trabecular bone anisotropy from clinically available QCT images.

Vimal Chandran, Philippe Zysset, Mauricio Reyes
Automatic Diagnosis of Ovarian Carcinomas via Sparse Multiresolution Tissue Representation

It has now been convincingly demonstrated that ovarian carcinoma is not a single disease but comprises a heterogeneous group of neoplasms. Whole slide images of tissue sections are used clinically for diagnosing biologically distinct subtypes, as opposed to different grades of the same disease. This new grading scheme for ovarian carcinomas results in a low to moderate interobserver agreement among pathologists. In practice, the majority of cases are diagnosed at advanced stages and the overall prognosis is typically poor. In this work, we propose an automatic system for the diagnosis of ovarian carcinoma subtypes from large-scale histopathology images. Our novel approach uses an unsupervised feature learning framework composed of a sparse tissue representation and a discriminative feature encoding scheme. We validate our model on a challenging clinical dataset of 80 patients and demonstrate its ability to diagnose whole slide images with an average accuracy of 91% using a linear support vector machine classifier.

Aïcha BenTaieb, Hector Li-Chang, David Huntsman, Ghassan Hamarneh
Scale-Adaptive Forest Training via an Efficient Feature Sampling Scheme

In the context of forest-based segmentation of medical data, modeling the visual appearance around a voxel requires the choice of the scale at which contextual information is extracted, which is of crucial importance for the final segmentation performance. Building on Haar-like visual features, we introduce a simple yet effective modification of the forest training which automatically infers the most informative scale at each stage of the procedure. Instead of the standard uniform sampling during node split optimization, our approach draws candidate features sequentially in a fine-to-coarse fashion. While being very easy to implement, this alternative is free of additional parameters, has the same computational cost as a standard training and shows consistent improvements on three medical segmentation datasets with very different properties.

Loïc Peter, Olivier Pauly, Pierre Chatelain, Diana Mateus, Nassir Navab
Multiple Instance Cancer Detection by Boosting Regularised Trees

We propose a novel multiple instance learning algorithm for cancer detection in histopathology images. With images labelled at image-level, we first search a set of region-level prototypes by solving a submodular set cover problem. Regularised regression trees are then constructed and combined on the set of prototypes using a multiple instance boosting framework. The method compared favourably with competing methods in experiments on breast cancer tissue microarray images and optical tomographic images of colorectal polyps.

Wenqi Li, Jianguo Zhang, Stephen J. McKenna
Uncertainty-Driven Forest Predictors for Vertebra Localization and Segmentation

Accurate localization, identification and segmentation of vertebrae is an important task in medical and biological image analysis. The prevailing approach to solve such a task is to first generate pixel-independent features for each vertebra, e.g. via a random forest predictor, which are then fed into an MRF-based objective to infer the optimal MAP solution of a constellation model. We abandon this static, two-stage approach and mix feature generation with model-based inference in a new, more flexible, way. We evaluate our method on two data sets with different objectives. The first is semantic segmentation of a 21-part body plan of zebrafish embryos in microscopy images, and the second is localization and identification of vertebrae in benchmark human CT.

David Richmond, Dagmar Kainmueller, Ben Glocker, Carsten Rother, Gene Myers
Who Is Talking to Whom: Synaptic Partner Detection in Anisotropic Volumes of Insect Brain

Automated reconstruction of neural connectivity graphs from electron microscopy image stacks is an essential step towards large-scale neural circuit mapping. While significant progress has recently been made in automated segmentation of neurons and detection of synapses, the problem of synaptic partner assignment for polyadic (one-to-many) synapses, prevalent in the Drosophila brain, remains unsolved. In this contribution, we propose a method which automatically assigns pre- and postsynaptic roles to neurites adjacent to a synaptic site. The method constructs a probabilistic graphical model over potential synaptic partner pairs which includes factors to account for a high rate of one-to-many connections, as well as the possibility of the same neuron to be pre-synaptic in one synapse and post-synaptic in another. The algorithm has been validated on a publicly available stack of ssTEM images of Drosophila neural tissue and has been shown to reconstruct most of the synaptic relations correctly.

Anna Kreshuk, Jan Funke, Albert Cardona, Fred A. Hamprecht
Direct and Simultaneous Four-Chamber Volume Estimation by Multi-Output Regression

Cardiac four-chamber volumes provide crucial information for quantitative analysis of whole heart functions. Conventional cardiac volume estimation relies on a segmentation step; recently emerging direct estimation without segmentation has shown better performance than segmentation-based methods. However, due to the high complexity, four-chamber volume estimation poses great challenges to these existing methods: four-chamber segmentation is not feasible due to intensity homogeneity of ventricle and atrium without implicit boundaries between them; existing direct methods which can only handle single or bi-ventricles are not directly applicable due to the great combinatorial variability of four chambers. In this paper, by leveraging the full strength of direct estimation, we propose a new method for direct and simultaneous four-chamber volume estimation using multi-output regression that can disentangle the complex relationship of image appearance and four-chamber volumes via statistical learning. To accomplish accurate and efficient estimation, we propose using a supervised descriptor learning (SDL) algorithm to generate a compact and discriminative feature representation. By casting into generalized low-rank approximations of matrices with a supervised manifold regularization, the SDL jointly removes irrelevant and redundant information by feature reduction and extracts discriminative features directly related to four chambers via supervised learning, which overcomes the high complexity of four chambers. We evaluate the proposed method on a cardiac four-chamber MR dataset from 125 subjects including both healthy and diseased cases. The experimental results show that our method achieves a high correlation coefficient of up to 91.5% with manual segmentation obtained by human experts. Our method for the first time achieves simultaneous and direct four-chamber volume estimation, which enables more efficient and accurate functional assessment of the whole heart.
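As a minimal sketch of the direct, segmentation-free formulation (an assumption on synthetic stand-in data; it does not use the paper's SDL descriptors), the snippet below regresses all four chamber volumes jointly from an image-level feature vector with a multi-target regressor and reports per-chamber correlation with the reference volumes.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

n_subjects, n_features = 125, 300
rng = np.random.default_rng(0)
X = rng.normal(size=(n_subjects, n_features))          # stand-in image descriptors
Y = rng.normal(size=(n_subjects, 4)) * 20 + 100        # stand-in LV/RV/LA/RA volumes (ml)

model = Ridge(alpha=1.0)                               # natively supports multi-target outputs
Y_pred = cross_val_predict(model, X, Y, cv=5)          # cross-validated joint predictions
corr = [np.corrcoef(Y[:, c], Y_pred[:, c])[0, 1] for c in range(4)]
print("per-chamber correlation with reference volumes:", np.round(corr, 3))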

Xiantong Zhen, Ali Islam, Mousumi Bhaduri, Ian Chan, Shuo Li
Cross-Domain Synthesis of Medical Images Using Efficient Location-Sensitive Deep Network

Cross-modality image synthesis has recently gained significant interest in the medical imaging community. In this paper, we propose a novel architecture called location-sensitive deep network (LSDN) for synthesizing images across domains. Our network integrates intensity features from image voxels and spatial information in a principled manner. Specifically, LSDN models hidden nodes as products of features and spatial responses. We then propose a novel method, called ShrinkConnect, for reducing the computations of LSDN without sacrificing synthesis accuracy. ShrinkConnect enforces simultaneous sparsity to find a compact set of functions that accurately approximates the responses of all hidden nodes. Experimental results demonstrate that LSDN+ShrinkConnect outperforms the state of the art in cross-domain synthesis of MRI brain scans by a significant margin. Our approach is also computationally efficient, e.g. 26× faster than other sparse-representation-based methods.

Hien Van Nguyen, Kevin Zhou, Raviteja Vemulapalli
Grouping Total Variation and Sparsity: Statistical Learning with Segmenting Penalties

Prediction from medical images is a valuable aid to diagnosis. For instance, anatomical MR images can reveal certain disease conditions, while their functional counterparts can predict neuropsychiatric phenotypes. However, a physician will not rely on predictions by black-box models: understanding the anatomical or functional features that underpin a decision is critical. Generally, the weight vectors of classifiers are not easily amenable to such an examination: often there is no apparent structure. Indeed, this is not only a prediction task, but also an inverse problem that calls for adequate regularization. We address this challenge by introducing a convex region-selecting penalty. Our penalty combines total-variation regularization, enforcing spatial contiguity, and ℓ1 regularization, enforcing sparsity, into one group: voxels are either active with non-zero spatial derivative or zero with inactive spatial derivative. This leads to segmenting contiguous spatial regions (inside which the signal can vary freely) against a background of zeros. Such segmentation of medical images in a target-informed manner is an important analysis tool. On several prediction problems from brain MRI, the penalty shows good segmentation. Given the size of medical images, computational efficiency is key. Keeping this in mind, we contribute an efficient optimization scheme that brings significant computational gains.
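To make the grouping idea concrete, one illustrative way to write such a grouped TV-ℓ1 penalty for a weight image $w$ is (an assumed form for illustration; the paper's exact formulation may differ):

\[
\Omega(w) \;=\; \lambda \sum_{v \in \mathcal{V}} \sqrt{\rho^{2}\, w_v^{2} \;+\; (1-\rho)^{2}\, \lVert (\nabla w)_v \rVert_2^{2}}, \qquad 0 \le \rho \le 1,
\]

in contrast to the classical additive combination $\lambda\big(\rho \lVert w \rVert_1 + (1-\rho)\,\mathrm{TV}(w)\big)$. Placing the voxel value $w_v$ and its spatial gradient $(\nabla w)_v$ in a single per-voxel group drives both to zero jointly, which is what yields the background-of-zeros segmentation behaviour described above.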

Michael Eickenberg, Elvis Dohmatob, Bertrand Thirion, Gaël Varoquaux
Scandent Tree: A Random Forest Learning Method for Incomplete Multimodal Datasets

We propose a solution for training random forests on incomplete multimodal datasets where many of the samples are non-randomly missing a large portion of the most discriminative features. For this goal, we present the novel concept of scandent trees. These are trees trained on the features common to all samples that mimic the feature space division structure of a support decision tree trained on all features. We use the forest resulting from ensembling these trees as a classification model. We evaluate the performance of our method for different multimodal sample sizes and single modal feature set sizes using a publicly available clinical dataset of heart disease patients and a prostate cancer dataset with MRI and gene expression modalities. The results show that the area under ROC curve of the proposed method is less sensitive to the multimodal dataset sample size, and that it outperforms the imputation methods especially when the ratio of multimodal data to all available data is small.

Soheil Hor, Mehdi Moradi
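
The sketch below illustrates, with scikit-learn, one way to read the mimicking idea: a support tree is trained on the samples that have all modalities, and a second tree restricted to the shared features is trained to reproduce the support tree's leaf assignments, so that samples missing the extra modality can still be routed through a similar partition of the feature space. The toy data, tree depths, and leaf-to-class lookup are assumptions; this is not the authors' scandent-tree construction or forest ensembling.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy data: 200 "multimodal" samples with 10 shared + 10 extra features.
# In the real setting the extra features are missing for most samples.
X_shared = rng.normal(size=(200, 10))
X_extra = rng.normal(size=(200, 10))
y = (X_shared[:, 0] + X_extra[:, 0] > 0).astype(int)
X_full = np.hstack([X_shared, X_extra])

# 1) Support tree: trained on the complete (all-modality) feature set.
support = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_full, y)

# 2) Mimicking tree: trained on the shared features only, supervised to
#    reproduce the support tree's partition (its leaf indices).
leaves = support.apply(X_full)
mimic = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_shared, leaves)

# 3) At test time only shared features are available: route the sample to a
#    mimicked leaf and use that leaf's class distribution from training.
leaf_to_class = {lf: y[leaves == lf].mean() > 0.5 for lf in np.unique(leaves)}
x_test = rng.normal(size=(1, 10))
pred = leaf_to_class[mimic.predict(x_test)[0]]
print(int(pred))
```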
Disentangling Disease Heterogeneity with Max-Margin Multiple Hyperplane Classifier

There is ample evidence for the heterogeneous nature of diseases. For example, Alzheimer’s Disease, Schizophrenia and Autism Spectrum Disorder are characterized by high clinical heterogeneity, and likely by heterogeneity in the underlying brain phenotypes. Parsing this heterogeneity as captured by neuroimaging studies is important both for a better understanding of disease mechanisms and for building subtype-specific classifiers. However, few existing methodologies tackle this problem in a principled machine learning framework. In this work, we develop a novel non-linear learning algorithm for integrated binary classification and subpopulation clustering. Non-linearity is introduced through the use of multiple linear hyperplanes that form a convex polytope separating healthy controls from pathologic samples. Disease heterogeneity is disentangled by implicitly clustering pathologic samples through their association with individual linear sub-classifiers. We show results of the proposed approach from an imaging study of Alzheimer’s Disease, which highlight its potential to map disease heterogeneity in neuroimaging studies. A simplified alternating-assignment sketch of this idea follows the author listing.

Erdem Varol, Aristeidis Sotiras, Christos Davatzikos
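
Below is a simplified alternating scheme, using scikit-learn's LinearSVC, that conveys the flavour of the approach: several linear sub-classifiers share the control class, each pathologic sample is assigned to the sub-classifier that scores it highest, and fitting and assignment are alternated so that subtypes emerge as the clusters of patients attached to the same hyperplane. The k-means initialization, number of faces, and toy data are assumptions; the authors' max-margin polytope formulation and solver are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy data: controls around the origin, two patient "subtypes" on opposite
# sides of it, mimicking heterogeneous pathology.
controls = rng.normal(0.0, 0.5, size=(100, 2))
patients = np.vstack([rng.normal(+3.0, 0.5, size=(50, 2)),
                      rng.normal(-3.0, 0.5, size=(50, 2))])

K = 2                                                  # number of polytope faces
assign = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(patients)
svms = [LinearSVC(C=1.0, max_iter=5000) for _ in range(K)]

for _ in range(10):                                    # alternate fit / re-assign
    for k in range(K):
        members = patients[assign == k]
        if len(members) == 0:                          # keep previous fit if a face empties
            continue
        Xk = np.vstack([controls, members])
        yk = np.hstack([np.zeros(len(controls)), np.ones(len(members))])
        svms[k].fit(Xk, yk)
    # Each patient follows the face (sub-classifier) that scores it highest.
    scores = np.column_stack([s.decision_function(patients) for s in svms])
    assign = scores.argmax(axis=1)

print(np.bincount(assign, minlength=K))                # implicit subtype clustering
```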
Marginal Space Deep Learning: Efficient Architecture for Detection in Volumetric Image Data

Current state-of-the-art techniques for fast and robust parsing of volumetric medical image data exploit large annotated image databases and are typically based on machine learning methods. Two main challenges to be solved are the low efficiency in scanning large volumetric input images and the need for manual engineering of image features. This work proposes Marginal Space Deep Learning (MSDL) as an effective solution that combines the strengths of efficient object parametrization in hierarchical marginal spaces with the automated feature design of Deep Learning (DL) network architectures. Representation learning through DL automatically identifies, disentangles and learns explanatory factors directly from low-level image data. However, the direct application of DL to volumetric data results in very high complexity, due to the increased number of transformation parameters. For example, the number of parameters defining a similarity transformation increases to 9 in 3D (3 for location, 3 for orientation and 3 for scale). The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in high-probability regions of spaces of gradually increasing dimensionality, for example from location only (3D), to location and orientation (6D), to the full parameter space (9D). In addition, for parametrized feature computation, we propose to simplify the network by replacing the standard, pre-determined feature sampling pattern with a sparse, adaptive, self-learned pattern. The MSDL framework is evaluated on detecting the aortic heart valve in 3D ultrasound data. The dataset contains 3795 volumes from 150 patients. Our method outperforms the state-of-the-art with an improvement of 36%, while scanning a given volume in less than one second. To our knowledge, this is the first successful demonstration of the potential of DL for detection in full 3D data with parametrized representations. A toy sketch of the coarse-to-fine marginal-space search follows the author listing.

Florin C. Ghesu, Bogdan Georgescu, Yefeng Zheng, Joachim Hornegger, Dorin Comaniciu
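
The toy sketch below illustrates the coarse-to-fine marginal-space search described above: candidates are scored in a low-dimensional space (position), only high-scoring survivors are extended with orientation and then scale hypotheses, so the full 9D pose space is never enumerated exhaustively. The `score` function is a stand-in for the learned deep classifiers (one per marginal space), and the grids, survivor counts, and target pose are invented for illustration; this is not the MSDL network or its sparse adaptive sampling.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def score(candidate):
    """Stand-in for a learned classifier over a (partial) pose hypothesis."""
    target = np.array([10.0, 20.0, 30.0, 0.1, 0.2, 0.3, 1.0, 1.2, 0.9])
    c = np.asarray(candidate, dtype=float)
    return -np.sum((c - target[: len(c)]) ** 2) + rng.normal(scale=0.1)

# Stage 1: position only (3D marginal space).
positions = list(product(range(0, 32, 2), repeat=3))
positions.sort(key=score, reverse=True)
survivors = positions[:50]                       # keep the high-probability region

# Stage 2: extend survivors with orientation hypotheses (6D).
orientations = list(product(np.linspace(0.0, 0.4, 3), repeat=3))
poses6 = [p + o for p in survivors for o in orientations]
poses6.sort(key=score, reverse=True)
survivors6 = poses6[:50]

# Stage 3: extend with anisotropic scale hypotheses (full 9D space).
scales = list(product(np.linspace(0.8, 1.2, 3), repeat=3))
poses9 = [p + s for p in survivors6 for s in scales]
best = max(poses9, key=score)
print(best)
```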
Nonlinear Regression on Riemannian Manifolds and Its Applications to Neuro-Image Analysis

Regression in its most common form, where the independent and dependent variables are in ℝⁿ, is a ubiquitous tool in science and engineering. Recent advances in medical imaging have led to the widespread availability of manifold-valued data, leading to problems where the independent variables are manifold-valued and the dependent variables are real-valued, or vice versa. The most common method of regression on a manifold is geodesic regression, the counterpart of linear regression in Euclidean space. Often, however, the relation between the variables is highly complex, and geodesic regression can prove inaccurate; it is then necessary to resort to a non-linear regression model. In this work we present a novel kernel-based non-linear regression method for the case where the mapping to be estimated is either M → ℝⁿ or ℝⁿ → M, where M is a Riemannian manifold. A key advantage of this approach is that the manifold-valued data are not required to inherit an ordering from the data in ℝⁿ. We present several synthetic and real data experiments, along with comparisons to the state-of-the-art geodesic regression method in the literature, validating the effectiveness of the proposed algorithm. A toy sketch of kernel regression onto a manifold follows the author listing.

Monami Banerjee, Rudrasis Chakraborty, Edward Ofori, David Vaillancourt, Baba C. Vemuri
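
As a concrete toy instance of regressing from ℝⁿ onto a Riemannian manifold, the sketch below performs a Nadaraya-Watson-style kernel regression onto the unit sphere: the prediction is a kernel-weighted Fréchet mean of the manifold-valued targets, computed by iterating the sphere's log and exp maps. The Gaussian kernel, bandwidth, and fixed-point iteration are assumptions chosen for simplicity; the authors' kernel construction and the M → ℝⁿ direction are not covered.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: tangent vector at p pointing towards q."""
    d = q - np.dot(p, q) * p
    nd = np.linalg.norm(d)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nd < 1e-12 else theta * d / nd

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow tangent vector v from p."""
    nv = np.linalg.norm(v)
    return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

def kernel_regress_to_sphere(x_query, X, Y, h=0.5, iters=20):
    """Estimate f(x_query) on the sphere as a kernel-weighted Frechet mean
    of the manifold-valued targets Y (Nadaraya-Watson style)."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * h ** 2))
    w /= w.sum()
    mu = Y[np.argmax(w)]                          # start at the best-weighted point
    for _ in range(iters):                        # gradient steps for the weighted mean
        v = sum(wi * sphere_log(mu, yi) for wi, yi in zip(w, Y))
        mu = sphere_exp(mu, v)
    return mu

# Toy data: scalar inputs mapped to points along a great circle.
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(100, 1))
Y = np.column_stack([np.cos(X[:, 0]), np.sin(X[:, 0]), np.zeros(100)])
print(kernel_regress_to_sphere(np.array([1.0]), X, Y))   # ~ (cos 1, sin 1, 0)
```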
Backmatter
Metadata
Title
Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015
Editors
Nassir Navab
Joachim Hornegger
William M. Wells
Alejandro Frangi
Copyright Year
2015
Electronic ISBN
978-3-319-24553-9
Print ISBN
978-3-319-24552-2
DOI
https://doi.org/10.1007/978-3-319-24553-9
