
2018 | Book

PRedictive Intelligence in MEdicine

First International Workshop, PRIME 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings


About this book

This book constitutes the refereed proceedings of the First International Workshop on PRedictive Intelligence in MEdicine, PRIME 2018, held in conjunction with MICCAI 2018, in Granada, Spain, in September 2018.

The 20 full papers presented were carefully reviewed and selected from 23 submissions. The main aim of the workshop is to propel the advent of predictive models in a broad sense, with application to medical data. In particular, the workshop admitted papers describing new cutting-edge predictive models and methods that solve challenging problems in the medical field.

Table of Contents

Frontmatter
Computer Aided Identification of Motion Disturbances Related to Parkinson’s Disease
Abstract
We present a framework for assessing which types of simple movement tasks are most discriminative between healthy controls and Parkinson’s patients. We collected movement data in a game-like environment, where we used the Microsoft Kinect sensor for tracking the user’s joints. We recruited 63 individuals for the study, of whom 30 had been diagnosed with Parkinson’s disease. A physician evaluated all participants on movement-related rating scales, e.g., elbow rigidity. The participants also completed the game task, moving their arms through a specific pattern. We present an innovative approach for data acquisition in a game-like environment, and we propose a novel method, sparse ordinal regression, for predicting the severity of motion disorders from the data.
Gudmundur Einarsson, Line K. H. Clemmensen, Ditte Rudå, Anders Fink-Jensen, Jannik B. Nielsen, Anne Katrine Pagsberg, Kristian Winge, Rasmus R. Paulsen
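The sparse ordinal regression model itself is not detailed in the abstract above; as a rough, hedged illustration of how ordinal severity ratings can be predicted with a sparse linear model, the following sketch decomposes the ordinal scale into cumulative binary problems solved with L1-penalized logistic regression. The data, feature count, and thresholding scheme are all invented for the example and are not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: joint-trajectory features per participant and an
# ordinal severity rating (0 = none ... 3 = severe) from the clinician.
rng = np.random.default_rng(0)
X = rng.normal(size=(63, 40))          # 63 participants, 40 movement features
y = rng.integers(0, 4, size=63)        # ordinal severity labels

levels = np.unique(y)

# Cumulative decomposition: one sparse binary classifier per threshold
# "severity > k" (Frank & Hall style ordinal reduction).
models = []
for k in levels[:-1]:
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(X, (y > k).astype(int))
    models.append(clf)

def predict_severity(x):
    """Predict an ordinal severity level by summing threshold probabilities."""
    p_greater = np.array([m.predict_proba(x.reshape(1, -1))[0, 1] for m in models])
    return int(np.round(p_greater.sum()))

print(predict_severity(X[0]))
```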
Prediction of Severity and Treatment Outcome for ASD from fMRI
Abstract
Autism spectrum disorder (ASD) is a complex neurodevelopmental syndrome. Early diagnosis and precise treatment are essential for ASD patients. Although researchers have built many analytical models, there has been limited progress in accurate predictive models for early diagnosis. In this project, we aim to build an accurate model to predict treatment outcome and ASD severity from early-stage functional magnetic resonance imaging (fMRI) scans. The difficulty of building large databases of patients who have received specific treatments and the high dimensionality of medical image analysis problems are challenges in this work. We propose a generic and accurate two-level approach for high-dimensional regression problems in medical image analysis. First, we perform region-level feature selection using a predefined brain parcellation. Based on the assumption that voxels within one region of the brain have similar values, for each region we use the bootstrapped mean of its voxels as a feature. In this way, the dimension of the data is reduced from the number of voxels to the number of regions. Then we detect predictive regions using various feature selection methods. Second, we extract voxels within the selected regions and perform voxel-level feature selection. To use this model in both linear and non-linear cases with limited training examples, we apply two-level elastic net regression and random forest (RF) models, respectively. To validate the accuracy and robustness of this approach, we perform experiments on both task fMRI and resting-state fMRI datasets. Furthermore, we visualize the influence of each region, and show that the results match well with other findings.
Juntang Zhuang, Nicha C. Dvornek, Xiaoxiao Li, Pamela Ventola, James S. Duncan
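As a hedged illustration of the two-level idea summarized above (region-level features from a parcellation via bootstrapped means, then voxel-level modelling restricted to the selected regions), here is a minimal scikit-learn sketch; the parcellation, data shapes, bootstrap settings, and fallback rule are invented for the example.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_subjects, n_voxels, n_regions = 50, 10000, 100
fmri = rng.normal(size=(n_subjects, n_voxels))          # voxel-wise features
labels = rng.integers(0, n_regions, size=n_voxels)      # parcellation labels
y = rng.normal(size=n_subjects)                         # severity / outcome score

# Level 1: bootstrapped mean of voxels within each region -> region features.
def region_features(data, labels, n_boot=20):
    feats = np.zeros((data.shape[0], n_regions))
    for r in range(n_regions):
        vox = np.where(labels == r)[0]
        boots = [data[:, rng.choice(vox, size=vox.size, replace=True)].mean(axis=1)
                 for _ in range(n_boot)]
        feats[:, r] = np.mean(boots, axis=0)
    return feats

X_region = region_features(fmri, labels)

# Select predictive regions with a sparse linear model (elastic net).
enet = ElasticNetCV(cv=5).fit(X_region, y)
selected_regions = np.where(enet.coef_ != 0)[0]
if selected_regions.size == 0:          # fall back to all regions if nothing survives
    selected_regions = np.arange(n_regions)

# Level 2: voxel-level model restricted to voxels in the selected regions.
voxel_mask = np.isin(labels, selected_regions)
X_voxel = fmri[:, voxel_mask]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_voxel, y)
print(model.score(X_voxel, y))
```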
Enhancement of Perivascular Spaces Using a Very Deep 3D Dense Network
Abstract
Perivascular spaces (PVS) in the human brain are related to various brain diseases or functions, but it is difficult to quantify them in a magnetic resonance (MR) image due to their thin and blurry appearance. In this paper, we introduce a deep learning based method which can enhance an MR image to better visualize the PVS. To accurately predict the enhanced image, we propose a very deep 3D convolutional neural network which contains densely connected networks with skip connections. The densely connected networks can utilize rich contextual information derived from low level to high level features and effectively alleviate the gradient vanishing problem caused by the deep layers. The proposed method is evaluated on seventeen 7T MR images by a two-fold cross validation. The experiments show that our proposed network is more effective at enhancing the PVS than previous deep learning based methods, while using fewer layers.
Euijin Jung, Xiaopeng Zong, Weili Lin, Dinggang Shen, Sang Hyun Park
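A minimal sketch of a densely connected 3D convolutional block with a global skip connection, in the spirit of the network described above; the layer counts, growth rate, and patch size below are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """A small 3D densely connected block: every layer sees all earlier feature maps."""
    def __init__(self, in_ch, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, kernel_size=3, padding=1),
                nn.BatchNorm3d(growth),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class PVSEnhancer(nn.Module):
    """Maps an MR patch to its enhanced version via a residual (global skip) connection."""
    def __init__(self):
        super().__init__()
        self.dense = DenseBlock3D(in_ch=1)
        self.head = nn.Conv3d(self.dense.out_ch, 1, kernel_size=1)

    def forward(self, x):
        return x + self.head(self.dense(x))    # predict the enhancement residual

net = PVSEnhancer()
patch = torch.randn(2, 1, 32, 32, 32)          # toy batch of MR patches
print(net(patch).shape)                        # torch.Size([2, 1, 32, 32, 32])
```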
Generation of Amyloid PET Images via Conditional Adversarial Training for Predicting Progression to Alzheimer’s Disease
Abstract
New positron emission tomography (PET) tracers could have a substantial impact on early diagnosis of Alzheimer’s disease (AD) and mild cognitive impairment (MCI) progression, particularly if they are accompanied by optimised deep learning methods. To realize the full potential of deep learning for PET imaging, large datasets are required for training. However, dataset sizes are restricted due to limited availability. Meanwhile, most of the AD classification studies have been based on structural MRI rather than PET. In this paper, we propose a novel application of conditional Generative Adversarial Networks (cGANs) to the generation of \( ^{18} F \)-florbetapir PET images from corresponding MRI images. Furthermore, we show that generated PET images can be used for synthetic data augmentation, and improve the performance of 3D Convolutional Neural Networks (CNNs) for predicting progression to AD. Our method is applied to a dataset of 79 PET images, obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. We generate high quality PET images from corresponding MRIs using cGANs, and we evaluate the quality of the generated PET images by comparison to real images. We then use the trained cGANs to generate synthetic PET images from an additional MRI dataset. Finally, we build a 152-layer ResNet to compare the MCI classification performance using both the traditional data augmentation method and our proposed synthetic data augmentation method. The mean Structural Similarity (SSIM) index between generated and real PET images was 0.95 ± 0.05. For MCI progression classification, the traditional data augmentation method achieved 75% accuracy, while the synthetic data augmentation improved this to 82%.
Yu Yan, Hoileong Lee, Edward Somer, Vicente Grau
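The following is a hedged, toy sketch of one training step of a paired conditional GAN (pix2pix-style) that maps MRI to synthetic PET, combining an adversarial term with an L1 reconstruction term; the tiny architectures, 2D slices, and loss weights are illustrative stand-ins, not the networks used in the paper.

```python
import torch
import torch.nn as nn

# Toy conditional GAN for paired MRI -> PET slice synthesis.
G = nn.Sequential(                       # generator: MRI slice -> synthetic PET slice
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                       # discriminator sees (MRI, PET) pairs
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),   # patch-wise real/fake logits
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

mri = torch.randn(4, 1, 64, 64)          # paired training batch (toy data)
pet = torch.randn(4, 1, 64, 64)

# Discriminator update: real pairs vs. generated pairs.
fake_pet = G(mri).detach()
d_real = D(torch.cat([mri, pet], dim=1))
d_fake = D(torch.cat([mri, fake_pet], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator update: fool the discriminator and stay close to the real PET (L1).
fake_pet = G(mri)
d_fake = D(torch.cat([mri, fake_pet], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_pet, pet)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
print(float(loss_d), float(loss_g))
```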
Prediction of Hearing Loss Based on Auditory Perception: A Preliminary Study
Abstract
Hearing loss is a major cause of deafness and hearing impairment around the globe. It has become very common among young adults and children due to genetic disorders, temporary or permanent hearing impairments, aging, and exposure to noise. Meanwhile, the options for treating hearing loss are very limited; its impact can be reduced by taking proper precautionary measures if it is diagnosed in time. In this paper, we study the possibility of predicting hearing loss based on auditory system responses. Auditory perception is highly correlated with human age; consequently, a large gap between the real age and the perceived age can be an indicator of hearing loss. Our predictive model of human age is robust, with an RMSE of 4.1 years and an EER of 4%, indicating the applicability of the proposed method for predicting hearing loss.
Muhammad Ilyas, Alice Othmani, Amine Nait-Ali
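A minimal sketch of the age-gap idea described above: regress age from auditory-response features and flag subjects whose predicted ("perceived") age deviates strongly from their real age; the features, model choice, and 10-year threshold are assumptions made only for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))                 # hypothetical auditory-response features
age = rng.uniform(10, 80, size=200)            # chronological age of each subject

X_tr, X_te, age_tr, age_te = train_test_split(X, age, test_size=0.25, random_state=0)
reg = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, age_tr)

perceived_age = reg.predict(X_te)
rmse = float(np.sqrt(np.mean((perceived_age - age_te) ** 2)))

# A large gap between perceived ("auditory") age and real age serves as a
# hearing-loss indicator; the 10-year threshold is an arbitrary example value.
flagged = np.abs(perceived_age - age_te) > 10.0
print(rmse, int(flagged.sum()))
```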
Predictive Patient Care: Survival Model to Prevent Medication Non-adherence
Abstract
Adherence in medicine is a measure of how well a patient follows their treatment. Non-adherence to the medication plan is a major issue, as underlined in the World Health Organization’s reports (http://www.who.int/chp/knowledge/publications/adherence_full_report.pdf), which indicate that, in developed countries, only about 50% of patients with chronic diseases correctly follow their treatments. This severely compromises the efficiency of long-term therapy and increases the cost of health services.
In this paper, we report our work on modeling patient drug consumption in breast cancer treatments. We test a statistical approach to predict medication non-adherence, with a special focus on the features relevant to each approach. These characteristics are discussed in light of previous results from the literature, as well as the hypotheses made in using this model.
T. Janssoone, P. Rinder, P. Hornus, D. Kanoun
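The abstract does not name the statistical model used; a common choice for time-to-discontinuation data is a Cox proportional hazards model, sketched below with the lifelines package (installed separately). The covariates and data are entirely invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "age": rng.uniform(30, 80, n),                # hypothetical patient covariates
    "n_comedications": rng.integers(0, 8, n),
    "prior_refill_gaps": rng.integers(0, 5, n),
    "months_to_stop": rng.exponential(18, n),     # time until treatment discontinuation
    "stopped": rng.integers(0, 2, n),             # 1 = non-adherence event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_stop", event_col="stopped")
cph.print_summary()                               # hazard ratios per covariate

# Rank patients by predicted risk of discontinuation.
risk = cph.predict_partial_hazard(df)
print(risk.head())
```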
Joint Robust Imputation and Classification for Early Dementia Detection Using Incomplete Multi-modality Data
Abstract
It is vital to identify Mild Cognitive Impairment (MCI) subjects who will progress to Alzheimer’s Disease (AD), so that early treatment can be administered. Recent studies show that using complementary information from multi-modality data may improve the model performance of the above prediction problem. However, multi-modality data is often incomplete, rendering prediction models that rely on complete data unusable. One way to deal with this issue is to first impute the missing values, and then build a classifier based on the completed data. This two-step approach, however, may generate non-optimal classifier output, as the errors of the imputation may propagate to the classifier during training. To address this issue, we propose a unified framework that jointly performs feature selection, data denoising, missing-value imputation, and classifier learning. To this end, we use a low-rank constraint to impute the missing values and denoise the data simultaneously, while using a regression model for feature selection and classification. The feature weights learned by the regression model are integrated into the low-rank formulation to focus on discriminative features when denoising and imputing data, while the resulting low-rank matrix is used for classifier learning. These two components interact and correct each other iteratively using the Alternating Direction Method of Multipliers (ADMM). Experimental results using the incomplete multi-modality ADNI dataset show that our proposed method outperforms other comparison methods.
Kim-Han Thung, Pew-Thian Yap, Dinggang Shen
Shared Latent Structures Between Imaging Features and Biomarkers in Early Stages of Alzheimer’s Disease
Abstract
In this work, we identify meaningful latent patterns in MR images for patients across the Alzheimer’s disease (AD) continuum. For this purpose, we apply the Projection to Latent Structures (PLS) method, using cerebrospinal fluid (CSF) biomarkers (t-tau, p-tau, amyloid-beta) and age as response variables and imaging features as explanatory variables. The FreeSurfer pipeline is used to compute MRI surface and volumetric features, resulting in 68 cortical ROIs and 84 cortical and subcortical ROIs, respectively. The main assumption of this work is that there are two main underlying processes governing brain morphology along the AD continuum: brain aging and dementia. We use two different and orthogonal PLS models to describe each process: PLS-aging and PLS-dementia. To define the PLS-aging model we use imaging features of normally aging subjects as predictors and age as the response variable, while for PLS-dementia we use only demented subjects, with the CSF biomarkers as response variables.
Adrià Casamitjana, Verónica Vilaplana, Paula Petrone, José Luis Molinuevo, Juan Domingo Gispert
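A small sketch of fitting the two PLS models described above with scikit-learn's PLSRegression, using imaging features as explanatory variables and age (PLS-aging) or CSF biomarkers (PLS-dementia) as responses; the data shapes and number of components are assumptions for the example.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_controls, n_demented, n_rois = 120, 80, 152   # 68 cortical + 84 (sub)cortical ROIs

# Imaging features (explanatory variables), synthetic for the example.
X_controls = rng.normal(size=(n_controls, n_rois))
X_demented = rng.normal(size=(n_demented, n_rois))

# Response variables: age for the aging model, CSF biomarkers for the dementia model.
age = rng.uniform(55, 90, size=(n_controls, 1))
csf = rng.normal(size=(n_demented, 3))           # t-tau, p-tau, amyloid-beta

pls_aging = PLSRegression(n_components=2).fit(X_controls, age)
pls_dementia = PLSRegression(n_components=2).fit(X_demented, csf)

# Latent scores: projections of the imaging features onto each latent structure.
scores_aging = pls_aging.transform(X_controls)
scores_dementia = pls_dementia.transform(X_demented)
print(scores_aging.shape, scores_dementia.shape)
```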
Predicting Nucleus Basalis of Meynert Volume from Compartmental Brain Segmentations
Abstract
In clinical practice one often encounters a situation in which a quantity of interest cannot be measured routinely, for reasons such as invasiveness, high costs, the need for special equipment, etc. For instance, research has shown that early cognitive decline can be predicted from the volume (atrophy) of the nucleus basalis of Meynert (NBM); however, its small size makes it difficult to measure from brain magnetic resonance (MR) scans. We treat NBM volume as an unobservable quantity in a statistical model, exploiting the structural integrity of the brain, and aim to estimate it indirectly based on one or more interdependent, but possibly more accurate and reliable, compartmental brain volume measurements that are easily accessible. We propose a Bayesian approach based on the previously published reference-free error estimation framework to achieve this aim. The main contribution is a novel prior distribution parametrization encoding the scale of the distribution of the unobservable quantity. The proposed prior is more general and more interpretable than the original. In addition to estimates of the unobservable quantity, for each observable we calculate a figure of merit as an individual predictor of the unobservable quantity. The framework was successfully validated on synthetic data and on a clinical dataset, predicting the NBM volume from volumes of the whole brain and hippocampal subfields, based on compartmental segmentations of structural brain MR images.
Hennadii Madan, Rok Berlot, Nicola J. Ray, Franjo Pernuš, Žiga Špiclin
Multi-modal Neuroimaging Data Fusion via Latent Space Learning for Alzheimer’s Disease Diagnosis
Abstract
Recent studies have shown that fusing multi-modal neuroimaging data can improve the performance of Alzheimer’s Disease (AD) diagnosis. However, most existing methods simply concatenate features from each modality without appropriate consideration of the correlations among modalities. In addition, existing methods often perform feature selection (or fusion) and classifier training in two independent steps, without considering that these two pipelined steps are highly related to each other. Furthermore, existing methods that make predictions based on a single classifier may not be able to address the heterogeneity of AD progression. To address these issues, we propose a novel AD diagnosis framework based on latent space learning with ensemble classifiers, integrating latent representation learning and the learning of multiple diversified classifiers into a unified framework. To this end, we first project the neuroimaging data from different modalities into a common latent space, and impose a joint sparsity constraint on the concatenated projection matrices. Then, we map the learned latent representations into the label space to learn multiple diversified classifiers and aggregate their predictions to obtain the final classification result. Experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset show that our method outperforms other state-of-the-art methods.
Tao Zhou, Kim-Han Thung, Mingxia Liu, Feng Shi, Changqing Zhang, Dinggang Shen
Transfer Learning for Task Adaptation of Brain Lesion Assessment and Prediction of Brain Abnormalities Progression/Regression Using Irregularity Age Map in Brain MRI
Abstract
The Irregularity Age Map (IAM) for the unsupervised assessment of brain white matter hyperintensities (WMH) opens several opportunities in machine learning-based MRI analysis, including task adaptation transfer learning for the segmentation and prediction of brain lesion progression and regression. The lack of need for manual labels is useful for transfer learning, whereas the nature of IAM itself can be exploited for predicting lesion progression/regression. In this study, we propose the use of task adaptation transfer learning for WMH segmentation with CNNs, by weakly training UNet and UResNet on the output from IAM, and the use of IAM for predicting patterns of WMH progression and regression.
Muhammad Febrian Rachmadi, Maria del C. Valdés-Hernández, Taku Komura
Multi-view Brain Network Prediction from a Source View Using Sample Selection via CCA-Based Multi-kernel Connectomic Manifold Learning
Abstract
Several challenges have emerged from the dataclysm of neuroimaging datasets spanning both the healthy and disordered brain spectrum. In particular, samples with missing data views (e.g., a functional imaging modality) constitute a hurdle to conventional big data learning techniques, which ideally would be trained using a maximum number of samples across all views. Existing works on predicting target data views from a source data view have mainly used brain images, such as predicting a PET image from an MRI image. However, to the best of our knowledge, predicting a set of target brain networks from a source network remains unexplored. To fill this gap, a multi-kernel manifold learning (MKML) framework is proposed to learn how to predict multi-view brain networks from a source network, in order to impute missing views in a connectomic dataset. Prior to performing multiple kernel learning of multi-view data, it is typically assumed that the source and target data come from the same distribution. However, multi-view connectomic data can be drawn from different distributions. In order to build robust predictors for predicting target multi-view networks from a source network view, it is necessary to take into account the shift between the source and target domains. Hence, we first estimate a mapping function that transforms the source and the target domains into a shared space where their correlation is maximized using canonical correlation analysis (CCA). Next, we nest the projected training and testing source samples into a connectomic manifold using multiple kernel learning, where we identify the training samples most similar to the testing source network. Given a testing subject, we introduce a cross-domain trust score to assess the reliability of each selected training sample for the target prediction task. Our model outperformed both the conventional MKML technique and the proposed CCA-based MKML technique without trust-score enhancement.
Minghui Zhu, Islem Rekik
Predicting Emotional Intelligence Scores from Multi-session Functional Brain Connectomes
Abstract
In this study, we aim to predict emotional intelligence scores from functional connectivity data acquired at different timepoints. To enhance the generalizability of the proposed predictive model to new data and to accurately identify the neural correlates most relevant to different facets of human intelligence, we propose a joint support vector machine and support vector regression (SVM+SVR) model. Specifically, we first identify the most discriminative connections between subjects with high vs. low emotional intelligence scores in the SVM step, and then perform a multivariate linear regression using these connections to predict the target emotional intelligence score in the SVR step. Our method outperformed existing methods, including the Connectome-based Predictive Model (CPM), using functional connectivity data simultaneously acquired with the intelligence scores. The most predictive connections of intelligence included brain regions involved in the processing of emotions and social behaviour.
Anna Lisowska, Islem Rekik
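A hedged sketch of the two-step SVM+SVR idea summarized above: a linear SVM separates high from low scorers to rank connections, and an SVR then regresses the continuous score from the retained connections; the connectome size, median split, and number of kept connections are illustrative choices, not the authors' settings.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVR

rng = np.random.default_rng(5)
n_subjects, n_connections = 100, 500
X = rng.normal(size=(n_subjects, n_connections))   # vectorized functional connectomes
score = rng.normal(100, 15, size=n_subjects)       # emotional intelligence scores

# Step 1 (SVM): find connections that discriminate high vs. low scorers.
groups = (score > np.median(score)).astype(int)
svm = LinearSVC(C=1.0, max_iter=5000).fit(X, groups)
top = np.argsort(np.abs(svm.coef_[0]))[-50:]       # keep the 50 most discriminative

# Step 2 (SVR): regress the continuous score from the selected connections.
svr = SVR(kernel="linear").fit(X[:, top], score)
pred = svr.predict(X[:, top])
print(np.corrcoef(pred, score)[0, 1])
```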
Predictive Modeling of Longitudinal Data for Alzheimer’s Disease Diagnosis Using RNNs
Abstract
In this paper, we study the application of Recurrent Neural Networks (RNNs) to discriminate Alzheimer’s disease patients from healthy control individuals using longitudinal neuroimaging data. Distinguishing between Alzheimer’s Disease (AD), Mild Cognitive Impairment (MCI), and healthy subjects in a multi-modal, heterogeneous longitudinal dataset is a challenging problem due to the high similarity between brain patterns, the high proportion of missing data from different modalities and time points, and the inconsistent number of test intervals between different subjects. Due to these challenges, researchers conventionally use cross-sectional data when applying deep learning methods to distinguish AD patients from healthy subjects in neuroimaging applications, whereas we propose a method based on RNNs to analyze the longitudinal data. After carefully preprocessing the data to alleviate the inconsistency due to different data sources and various protocols for capturing modalities, we arrange the data and feed it into variations of RNNs, i.e., vanilla Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. The accuracy, F-score, sensitivity, and specificity of our models are reported and compared with the most immediate baseline method, a multi-layer perceptron (MLP).
Maryamossadat Aghili, Solale Tabarestani, Malek Adjouadi, Ehsan Adeli
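A minimal PyTorch sketch of classifying a subject from a variable-length sequence of visit features with an LSTM, as one plausible instantiation of the longitudinal RNN setup described above; the feature size, padding scheme, and classifier head are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class VisitLSTM(nn.Module):
    """Classify a subject (e.g., AD vs. healthy) from a sequence of visit features."""
    def __init__(self, n_feats=64, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x, lengths):
        packed = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False)
        _, (h, _) = self.lstm(packed)
        return self.fc(h[-1])          # last hidden state summarizes the visit history

# Toy batch: 3 subjects with different numbers of visits, zero-padded to length 5.
x = torch.randn(3, 5, 64)
lengths = torch.tensor([5, 3, 4])
model = VisitLSTM()
logits = model(x, lengths)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0]))
loss.backward()
print(logits.shape)                    # torch.Size([3, 2])
```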
Towards Continuous Health Diagnosis from Faces with Deep Learning
Abstract
Recent studies show that human health perception from faces is a good predictor of good health and healthy behaviors. We aimed to automate human health perception by training a Convolutional Neural Network on a related task (age estimation), combined with a Ridge Regression to rate faces. Indeed, contrary to health ratings, large datasets with labels of biological age exist. The results show that our system outperforms average human judgments of health. The system could be used on a daily basis to detect early signs of sickness or a declining state. We are convinced that such a system will contribute to more extensively exploring the use of holistic, fast, and non-invasive measures to improve the speed of diagnosis.
Victor Martin, Renaud Séguier, Aurélie Porcheron, Frédérique Morizot
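A rough sketch of the pipeline described above: a CNN provides a face representation and a Ridge regression maps it to perceived-health ratings. Here an untrained ResNet-18 trunk stands in for the age-estimation CNN used in the paper, and the face batch and ratings are synthetic.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import Ridge

# Feature extractor: in the paper a CNN trained for age estimation is used;
# an untrained ResNet-18 trunk is substituted here purely for illustration.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()                 # drop the classification head -> 512-d features
backbone.eval()

faces = torch.randn(40, 3, 224, 224)        # hypothetical batch of face crops
health_ratings = np.random.default_rng(6).uniform(1, 7, size=40)   # toy human ratings

with torch.no_grad():
    feats = backbone(faces).numpy()         # (40, 512) feature vectors

# Ridge regression maps CNN features to perceived-health scores.
ridge = Ridge(alpha=10.0).fit(feats, health_ratings)
print(ridge.predict(feats[:5]))
```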
XmoNet: A Fully Convolutional Network for Cross-Modality MR Image Inference
Abstract
Magnetic resonance imaging (MRI) can generate multimodal scans with complementary contrast information, capturing various anatomical or functional properties of organs of interest. But whilst the acquisition of multiple modalities is favourable in clinical and research settings, it is hindered by a range of practical factors that include cost and imaging artefacts. We propose XmoNet, a deep-learning architecture based on fully convolutional networks (FCNs) that enables cross-modality MR image inference. This multiple branch architecture operates on various levels of image spatial resolutions, encoding rich feature hierarchies suited for this image generation task. We illustrate the utility of XmoNet in learning the mapping between heterogeneous T1- and T2-weighted MRI scans for accurate and realistic image synthesis in a preliminary analysis. Our findings support scaling the work to include larger samples and additional modalities.
Sophia Bano, Muhammad Asad, Ahmed E. Fetit, Islem Rekik
3D Convolutional Neural Network and Stacked Bidirectional Recurrent Neural Network for Alzheimer’s Disease Diagnosis
Abstract
Alzheimer’s disease (AD) is the leading cause of dementia in the elderly, and the number of sufferers increases year by year. Early detection of AD is highly beneficial to provide timely treatment and possible medication, which is still an open challenge. To meet the challenges of the early diagnosis of AD and its early stages (e.g., progressive MCI (pMCI) and stable MCI (sMCI)) in clinical practice, we present a novel deep learning framework in this paper. The proposed framework exploits the merits of a 3D convolutional neural network (CNN) and a stacked bidirectional recurrent neural network (SBi-RNN). Specifically, we devise a simple 3D-CNN architecture to obtain the deep feature representation from magnetic resonance imaging (MRI) and positron emission tomography (PET) images, respectively. We further apply SBi-RNN on the local deep cascaded and flattened descriptors for performance boosting. Extensive experiments are performed on the ADNI dataset to investigate the effectiveness of the proposed method. Our method achieves an average accuracy of 94.29% for AD vs. normal control (NC) classification, 84.66% for pMCI vs. NC, and 64.47% for sMCI vs. NC, which outperforms the related algorithms. Also, our method is simpler and more compact compared with existing methods that involve complex preprocessing and feature engineering processes.
Chiyu Feng, Ahmed Elazab, Peng Yang, Tianfu Wang, Baiying Lei, Xiaohua Xiao
Generative Adversarial Training for MRA Image Synthesis Using Multi-contrast MRI
Abstract
Magnetic Resonance Angiography (MRA) has become an essential MR contrast for imaging and evaluation of vascular anatomy and related diseases. MRA acquisitions are typically ordered for vascular interventions, yet in many scenarios MRA sequences are absent from patient scans. This motivates the need for a technique that generates the missing MRA from existing multi-contrast MRI, which could be a valuable tool in retrospective subject evaluations and imaging studies. We present a generative adversarial network (GAN) based technique to generate MRA from T1- and T2-weighted MRI images, for the first time to our knowledge. To better model the representation of vessels, which the MRA inherently highlights, we design a loss term dedicated to a faithful reproduction of vascularities. To that end, we incorporate steerable filter responses of the generated and reference images as a loss term. Extending the well-established generator-discriminator architecture based on the recent PatchGAN model with the addition of the steerable filter loss, the proposed steerable GAN (sGAN) method is evaluated on the large public IXI database. Experimental results show that the sGAN outperforms the baseline GAN method in terms of an overlap score with similar PSNR values, while leading to improved visual perceptual quality.
Sahin Olut, Yusuf H. Sahin, Ugur Demir, Gozde Unal
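As a hedged illustration of adding a vessel-oriented filter-response term to a standard image reconstruction loss, the sketch below uses fixed oriented derivative-of-Gaussian kernels as a simplified stand-in for true steerable filters; the kernel design, orientations, and loss weight are assumptions, not the sGAN loss as published.

```python
import math
import torch
import torch.nn.functional as F

def oriented_kernels(size=7, n_orient=4, sigma=1.5):
    """Fixed derivative-of-Gaussian kernels at several orientations (simplified stand-in for steerable filters)."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    yy = coords.view(-1, 1).expand(size, size)
    xx = coords.view(1, -1).expand(size, size)
    kernels = []
    for k in range(n_orient):
        theta = math.pi * k / n_orient
        u = xx * math.cos(theta) + yy * math.sin(theta)        # coordinate along orientation theta
        g = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        kernels.append((-u / sigma ** 2) * g)                  # first derivative of Gaussian
    return torch.stack(kernels).unsqueeze(1)                   # (n_orient, 1, size, size)

KERNELS = oriented_kernels()

def steerable_loss(generated, reference):
    """L1 distance between oriented filter responses of generated and reference slices."""
    pad = KERNELS.shape[-1] // 2
    resp_g = F.conv2d(generated, KERNELS, padding=pad)
    resp_r = F.conv2d(reference, KERNELS, padding=pad)
    return F.l1_loss(resp_g, resp_r)

fake_mra = torch.randn(2, 1, 64, 64, requires_grad=True)
real_mra = torch.randn(2, 1, 64, 64)
total = F.l1_loss(fake_mra, real_mra) + 0.5 * steerable_loss(fake_mra, real_mra)
total.backward()
print(float(total))
```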
Diffusion MRI Spatial Super-Resolution Using Generative Adversarial Networks
Abstract
Spatial resolution is one of the main constraints in diffusion Magnetic Resonance Imaging (dMRI). Increasing the resolution leads to a decrease in the SNR of the diffusion images. Acquiring high resolution images without reducing the SNR requires stronger magnetic fields and long scan times, which are typically not feasible in clinical settings. The currently feasible voxel size for a diffusion image is around 1 mm\( ^{3} \). In this paper, we present a deep neural network based post-processing method to increase the spatial resolution in diffusion MRI. We utilize Generative Adversarial Networks (GANs) to obtain a higher resolution diffusion MR image in the spatial dimension from lower resolution diffusion images. The results obtained on real data provide a first proof of concept that GANs can be useful for the super-resolution problem in diffusion MRI, upscaling in the spatial dimension.
Enes Albay, Ugur Demir, Gozde Unal
Prediction to Atrial Fibrillation Using Deep Convolutional Neural Networks
Abstract
Recently, many researchers have attempted to apply deep neural networks to detect Atrial Fibrillation (AF). In this paper, we propose an approach for predicting AF, rather than detecting it, using Deep Convolutional Neural Networks (DCNNs). This is done by classifying electrocardiogram (ECG) segments recorded before AF into normal and abnormal states, which are hard for cardiologists to distinguish from normal sinus rhythm. ECG signals are transformed into spectrograms and used to train a VGG16 network to classify normal and abnormal signals. By varying the time length of the abnormal signals and constructing a separate dataset for each setting, we investigate the changes in F1-score across datasets to explore the right time to raise an alert before the occurrence of AF.
Jungrae Cho, Yoonnyun Kim, Minho Lee
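A minimal sketch of the preprocessing and model setup described above: a 1-D ECG segment is turned into a spectrogram with SciPy and fed to a VGG16 classifier with a two-class head; the sampling rate, window parameters, and untrained weights are placeholders for illustration.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
from torchvision import models

fs = 250                                        # hypothetical ECG sampling rate (Hz)
ecg = np.random.default_rng(7).normal(size=30 * fs)   # one 30-second ECG segment (toy)

# Time-frequency representation of the segment.
f, t, sxx = spectrogram(ecg, fs=fs, nperseg=128, noverlap=64)
sxx = np.log1p(sxx)                             # compress the dynamic range

# Replicate to 3 channels; VGG's adaptive pooling copes with the spectrogram size.
x = torch.tensor(sxx, dtype=torch.float32)[None, None].repeat(1, 3, 1, 1)

vgg = models.vgg16(weights=None)                # pretrained weights would be used in practice
vgg.classifier[-1] = nn.Linear(4096, 2)         # 2 classes: normal vs. pre-AF ("abnormal")

logits = vgg(x)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1]))
loss.backward()
print(logits.shape)                             # torch.Size([1, 2])
```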
Backmatter
Metadata
Title
PRedictive Intelligence in MEdicine
Editors
Islem Rekik
Gozde Unal
Ehsan Adeli
Sang Hyun Park
Copyright Year
2018
Electronic ISBN
978-3-030-00320-3
Print ISBN
978-3-030-00319-7
DOI
https://doi.org/10.1007/978-3-030-00320-3
