2020 | Original Paper | Book Chapter
Comparison of CNN Visualization Methods to Aid Model Interpretability for Detecting Alzheimer’s Disease
Authors: Martin Dyrba, Arjun H. Pallath, Eman N. Marzban
Published in: Bildverarbeitung für die Medizin 2020
Publisher: Springer Fachmedien Wiesbaden
Advances in medical imaging and convolutional neural networks (CNNs) have made it possible for CNNs to achieve diagnostic accuracy comparable to human raters. However, CNNs are still not implemented in medical trials because they appear as black-box systems whose inner workings cannot be properly explained. It is therefore essential to assess CNN relevance maps, which highlight the regions that contribute most to a prediction. This study compares algorithms for generating heatmaps that visually explain the patterns learned for Alzheimer's disease (AD) classification. T1-weighted volumetric MRI data were entered into a 3D CNN, and heatmaps were generated for several visualization methods using the iNNvestigate and keras-vis libraries. The model reached an area under the curve of 0.93 for separating AD dementia patients from controls, and 0.75 for separating patients with amnestic mild cognitive impairment from controls. Deep Taylor decomposition and layer-wise relevance propagation (LRP) produced the most plausible visualizations for individual patients, matching the expected brain regions, whereas methods such as Grad-CAM and guided backpropagation showed more scattered activations or seemingly random areas. For clinical research, deep Taylor decomposition and LRP therefore provided the most valuable network activation patterns.
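The LRP method highlighted in the abstract redistributes the network's output score backwards through the layers, so that the relevance assigned to the inputs sums to the prediction score (conservation property). A minimal sketch of the epsilon-stabilized LRP rule for dense layers is shown below; this is a toy two-layer ReLU network with random weights, not the authors' 3D CNN, and in practice the paper generated such maps with the iNNvestigate library.

```python
import numpy as np

def lrp_dense(a, W, R_out, eps=1e-9):
    """Propagate relevance R_out backwards through one dense layer
    using the epsilon-stabilized LRP rule: R_i = a_i * sum_j W_ij * R_j / z_j."""
    z = a @ W                           # pre-activations of the layer, shape (out,)
    s = R_out / (z + eps * np.sign(z))  # relevance per unit of pre-activation
    return a * (W @ s)                  # redistribute to the inputs

# Toy two-layer ReLU network with random weights (illustrative only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))
x = rng.random(4)

h = np.maximum(x @ W1, 0.0)  # hidden activations (ReLU)
y = h @ W2                   # output scores

# Relevance starts as the score of the winning class and is propagated back.
R2 = np.zeros(2)
R2[np.argmax(y)] = y.max()
R1 = lrp_dense(h, W2, R2)    # relevance at the hidden layer
R0 = lrp_dense(x, W1, R1)    # relevance at the input ("heatmap")
```

Because each layer-wise step conserves relevance, `R0.sum()` approximately equals the propagated output score `y.max()`; in the paper, the analogous input-level relevances over 3D MRI voxels form the heatmaps that were compared across methods.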