2025 | Book

Artificial Intelligence in Pancreatic Disease Detection and Diagnosis, and Personalized Incremental Learning in Medicine

First International Workshop, AIPAD 2024, and First International Workshop, PILM 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, Proceedings

Editors: Federica Proietto Salanitri, Serestina Viriri, Ulaş Bağcı, Pallavi Tiwari, Boqing Gong, Concetto Spampinato, Simone Palazzo, Giovanni Bellitto, Nancy Zlatintsi, Panagiotis Filntisis, Cecilia S. Lee, Aaron Y. Lee

Publisher: Springer Nature Switzerland

Book Series: Lecture Notes in Computer Science


About this book

This volume constitutes the refereed proceedings of the First International Workshop on Artificial Intelligence in Pancreatic Disease Detection and Diagnosis, AIPAD 2024, and the First International Workshop on Personalized Incremental Learning in Medicine, PILM 2024, held in conjunction with MICCAI 2024 in Marrakesh, Morocco, in October 2024.

The 8 full papers included in these proceedings were carefully reviewed and selected from 9 submissions. They were organized in topical sections as follows: artificial intelligence in pancreatic disease detection and diagnosis; and personalized incremental learning in medicine.

Table of Contents

Frontmatter

Artificial Intelligence in Pancreatic Disease Detection and Diagnosis

Frontmatter
Assessing the Efficacy of Foundation Models in Pancreas Segmentation
Abstract
Accurate pancreas segmentation is crucial for diagnosing and managing pancreatic diseases, facilitating preoperative planning, and aiding transplantation procedures. Effective segmentation enables the identification and monitoring of conditions such as chronic pancreatitis and diabetes mellitus, which are characterized by changes in pancreatic size and volume. Recent advancements in segmentation technology have leveraged foundation models like SAM and MedSAM, achieving state-of-the-art performance in various domains. In this work, we explore the effectiveness of using these models in the particularly challenging domain of pancreas segmentation. We also propose a simple yet effective method for including 3D information into SAM. Our findings suggest that, although foundation models have good general knowledge, they are not well suited for pancreas segmentation without significant architectural modifications and the inclusion of a good prompt. Moreover, we found that simply including volume information significantly enhances segmentation performance, even without the use of an expert prompt.
Emanuele Rapisarda, Alessandro Giuseppe Gravagno, Salvatore Calcagno, Daniela Giordano
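
The abstract does not specify how the 3D information is injected into SAM; one simple, commonly used option is to place adjacent axial slices in the three input channels of the 2D model instead of repeating a single slice. The following Python sketch illustrates only that assumed mechanism, not the authors' method:

# Hypothetical sketch: build a 3-channel pseudo-RGB input for a 2D
# foundation model from a slice and its axial neighbors.
import numpy as np

def slice_with_neighbors(volume: np.ndarray, z: int) -> np.ndarray:
    """volume: (D, H, W) normalized intensities -> (H, W, 3) uint8 image."""
    lo = max(z - 1, 0)
    hi = min(z + 1, volume.shape[0] - 1)
    stacked = np.stack([volume[lo], volume[z], volume[hi]], axis=-1)
    # Rescale to [0, 255], as expected by most image-pretrained encoders.
    stacked = (stacked - stacked.min()) / (stacked.max() - stacked.min() + 1e-8)
    return (stacked * 255).astype(np.uint8)
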
Hybrid Deep Learning Model for Pancreatic Cancer Image Segmentation
Abstract
Pancreatic cancer remains one of the most challenging malignancies to diagnose and treat, necessitating advances in medical imaging techniques for early and accurate detection. This study presents a novel hybrid approach to pancreatic cancer histopathology image segmentation by integrating deep neural networks with traditional machine learning models. Our method leverages the strengths of both paradigms to enhance segmentation performance. Specifically, we employ supervised learning to train deep convolutional neural networks (CNNs), namely ResNet50 and VGG16, to extract high-level feature vectors from medical histopathology images obtained from The Cancer Imaging Archive pancreatic-ct dataset by the National Institutes of Health Clinical Center. These feature vectors serve as inputs to various machine learning classifiers, including Random Forest, K-Nearest Neighbors (KNN), XGBoost, Linear Support Vector Machine (LinearSVM), Linear Discriminant Analysis (LDA), and Gaussian Naive Bayes (GNB). By combining the feature extraction capabilities of deep learning models with the decision-making prowess of traditional classifiers, our hybrid framework based on XGBoost produced the best segmentation results among the evaluated classifiers, achieving a precision of 0.96, an F1-score of 0.97, and a recall of 0.98. Extensive experiments and cross-validation on benchmark datasets demonstrate that our approach outperforms standalone models, showcasing its potential in clinical applications for improved diagnostic accuracy (0.936) and patient outcomes.
Wilson Bakasa, Clopas Kwenda, Serestina Viriri
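
A minimal Python sketch of the hybrid pattern described above, under assumed data shapes: a frozen ImageNet ResNet50 extracts feature vectors from image patches, and an XGBoost classifier is trained on those vectors. Variable names and the patch-level framing are illustrative, not the authors' exact pipeline:

# Deep features from a frozen CNN feed a classical gradient-boosting classifier.
import torch
import torchvision.models as models
from torchvision.models import ResNet50_Weights
from xgboost import XGBClassifier

weights = ResNet50_Weights.IMAGENET1K_V2
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()        # keep the 2048-d pooled features
backbone.eval()
preprocess = weights.transforms()

@torch.no_grad()
def extract_features(images):
    """images: list of PIL images -> (N, 2048) tensor of feature vectors."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch)

# Hypothetical training data: patch images with tumor/background labels.
# X = extract_features(train_patches).numpy(); y = train_labels
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
# clf.fit(X, y)
# patch_preds = clf.predict(extract_features(test_patches).numpy())
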
Leveraging SAM and Learnable Prompts for Pancreatic MRI Segmentation
Abstract
Accurate segmentation of the pancreas in magnetic resonance imaging (MRI) is essential for enhancing diagnostic and therapeutic strategies in pancreatic diseases. In this study, we explore the application of the Segment Anything Model (SAM), a state-of-the-art foundation model, for pancreas segmentation in MRI scans. We present a preliminary approach that utilizes AutoSAM, a recent method designed to optimize input prompts for the SAM decoder, aiming to improve segmentation capabilities. To evaluate the performance of our method, we employ a publicly available MRI dataset, allowing for comparison with existing segmentation techniques. Preliminary results suggest that learned prompts may improve pancreas segmentation, indicating the promise of foundation models in medical imaging tasks.
Cristian Delle Castelle, Fabio Spampinato, Federica Proietto Salanitri, Giovanni Bellitto, Concetto Spampinato
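
A conceptual Python sketch of the learnable-prompt idea: a small trainable network maps image features to prompt embeddings that are fed to a frozen promptable decoder, so only the prompt generator is optimized. The module names, embedding sizes, and the frozen-decoder interface are placeholders, not the real segment-anything or AutoSAM API:

# Trainable prompt generator in front of a frozen promptable decoder.
import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    """Predicts a fixed number of sparse prompt embeddings from image features."""
    def __init__(self, feat_dim=256, num_prompts=4, prompt_dim=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, num_prompts * prompt_dim),
        )
        self.num_prompts, self.prompt_dim = num_prompts, prompt_dim

    def forward(self, image_embedding):            # (B, feat_dim, H, W)
        p = self.head(image_embedding)
        return p.view(-1, self.num_prompts, self.prompt_dim)

# Training outline (encoder and decoder frozen, prompt generator trainable):
# prompts = prompt_gen(image_encoder(img))         # gradients flow here only
# masks   = frozen_decoder(image_embedding, prompts)
# loss    = dice_loss(masks, gt); loss.backward(); optimizer.step()
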
Optimizing Synthetic Data for Enhanced Pancreatic Tumor Segmentation
Abstract
Pancreatic cancer remains one of the leading causes of cancer-related mortality worldwide. Precise segmentation of pancreatic tumors from medical images is a bottleneck for effective clinical decision-making. However, achieving high accuracy is often limited by the small size and limited availability of real patient data for training deep learning models. Recent approaches have employed synthetic data generation to augment training datasets. While promising, these methods may not yet meet the performance benchmarks required for real-world clinical use. This study critically evaluates the limitations of existing generative-AI-based frameworks for pancreatic tumor segmentation. We conduct a series of experiments to investigate the impact of synthetic tumor size and boundary definition precision on model performance. Our findings demonstrate that: (1) strategically selecting a combination of synthetic tumor sizes is crucial for optimal segmentation outcomes, and (2) generating synthetic tumors with precise boundaries significantly improves model accuracy. These insights highlight the importance of utilizing refined synthetic data augmentation for enhancing the clinical utility of segmentation models in pancreatic cancer decision making, including diagnosis, prognosis, and treatment planning. Our code will be available at https://github.com/lkpengcs/SynTumorAnalyzer.
Linkai Peng, Zheyuan Zhang, Gorkem Durak, Frank H. Miller, Alpay Medetalibeyoglu, Michael B. Wallace, Ulas Bagci
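
An illustrative Python sketch of the two factors studied above: sampling synthetic tumor sizes from a chosen mix and controlling how sharply the synthetic boundary is defined. The spherical shape, intensity model, and blending rule are assumptions for illustration, not the paper's generative pipeline:

# Blend a roughly spherical synthetic lesion of a chosen size into a 3D patch.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def synthetic_tumor(volume, radius_mm, spacing=(1.0, 1.0, 1.0), boundary_sigma=0.5):
    grid = np.indices(volume.shape).astype(np.float32)
    center = np.array(volume.shape, dtype=np.float32)[:, None, None, None] / 2
    dist = np.sqrt((((grid - center) * np.array(spacing)[:, None, None, None]) ** 2).sum(0))
    mask = (dist <= radius_mm).astype(np.float32)
    # boundary_sigma controls boundary precision: small sigma -> sharp edge.
    soft = gaussian_filter(mask, sigma=boundary_sigma)
    texture = rng.normal(loc=40.0, scale=10.0, size=volume.shape)  # toy lesion intensities
    return volume * (1 - soft) + texture * soft, mask

# Size mix: sample radii so that small, medium, and large tumors all appear, e.g.
# radius = rng.choice([5.0, 10.0, 20.0], p=[0.4, 0.4, 0.2])
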
Pancreatic Vessel Landmark Detection in CT Angiography Using Prior Anatomical Knowledge
Abstract
Localizing vessels or their defining bifurcations is a frequent clinical problem for advanced visualizations in pancreatic cancer invasion analysis, driving the demand for design guidelines for easy-to-implement landmark detection solutions. When such landmarks are transformed appropriately into surrogate targets, the competitive nnDetection and nnU-Net frameworks provide such solutions, especially in small-data settings. Here, the underlying networks can further benefit from incorporating additional anatomical information. We present results on two CTA datasets consisting of arterial and venous phase images with 6 and 4 bifurcation landmarks surrounding the pancreas, respectively. Landmark points were modeled as spheres to allow the application of object detection/segmentation models. We evaluate both nn-frameworks for these tasks, focusing on the incorporation of anatomical knowledge. Postprocessing nnDetection predictions with organ masks and landmark relation constraints boosts detection accuracy from 66.7% to 79.4% in the more challenging venous case and decreases the mean radial error from 9.06 to 4.92 mm. The nnU-Net benefits more from organ masks in the input when targeting problematic vessels, lowering the mean radial error from 12.97 to 8.45 mm when using the splenic mask for the venous task. Both networks have good initial detection rates for the arterial phase, which are slightly boosted by our method to 93.7% (nnU-Net) and 95.5% (nnDetection). All remaining mispredictions lie within the vessel of interest and are thus sufficient for many downstream tasks.
Leonhard Rist, Christopher Homm, Felix Lades, Abraham Ayala Hernandez, Michael Sühling, Erik Gudman Steuble Brandt, Andreas Maier, Oliver Taubmann
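
A minimal Python sketch of the surrogate-target construction mentioned above: turning point landmarks into small spheres so that standard segmentation or detection frameworks such as nnU-Net and nnDetection can be trained on them. The radius and spacing handling are assumptions for illustration:

# Convert landmark coordinates into a sphere label map.
import numpy as np

def landmarks_to_sphere_mask(shape, landmarks_vox, spacing_mm, radius_mm=5.0):
    """shape: (D, H, W); landmarks_vox: list of (z, y, x) voxel coordinates.
    Returns a label map where landmark i becomes a sphere with label i + 1."""
    mask = np.zeros(shape, dtype=np.uint8)
    zz, yy, xx = np.indices(shape).astype(np.float32)
    for i, (z, y, x) in enumerate(landmarks_vox):
        dist = np.sqrt(((zz - z) * spacing_mm[0]) ** 2 +
                       ((yy - y) * spacing_mm[1]) ** 2 +
                       ((xx - x) * spacing_mm[2]) ** 2)
        mask[dist <= radius_mm] = i + 1
    return mask

# At inference, the center of mass of each predicted sphere recovers the landmark:
# center = np.argwhere(pred == label).mean(axis=0)
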

Personalized Incremental Learning in Medicine

Frontmatter
Addressing Catastrophic Forgetting by Modulating Global Batch Normalization Statistics for Medical Domain Expansion
Abstract
Model brittleness across datasets is a key concern when deploying deep learning models in real-world medical settings. One approach is to fine-tune the model on subsequent datasets after training on the original dataset. However, this degrades model performance on the original dataset, a phenomenon known as catastrophic forgetting. We develop an approach to address catastrophic forgetting by combining elastic weight consolidation with a simple yet novel modulation of global batch normalization statistics under two scenarios: expanding the domain across 1) imaging systems and 2) hospital institutions. Focusing on the clinical use case of mammographic breast density detection, we show that our approach empirically outperforms several other state-of-the-art approaches, and we provide theoretical justification for the efficacy of batch normalization modulation, demonstrating the potential of our approach for deploying clinical deep learning models that require domain expansion.
Sharut Gupta, Ken Chang, Liangqiong Qu, Aakanksha Rana, Syed Rakin Ahmed, Mehak Aggarwal, Nishanth Arun, Ashwin Vaswani, Shruti Raghavan, Vibha Agarwal, Mishka Gidwani, Katharina Hoebel, Jay Patel, Charles Lu, Christopher P. Bridge, Daniel L. Rubin, Jayashree Kalpathy-Cramer, Praveer Singh
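
A hedged Python sketch of the batch-normalization modulation idea: keep the BN running statistics of the original domain, fine-tune on the new domain, and blend the two sets of statistics with a weight lambda. The exact modulation rule and the elastic weight consolidation penalty used in the paper are not reproduced here:

# Blend BatchNorm running statistics between the source and target domains.
import copy
import torch
import torch.nn as nn

def modulate_bn_statistics(model: nn.Module, old_state: dict, lam: float = 0.5):
    """Set each BN layer's running stats to lam * old + (1 - lam) * new."""
    with torch.no_grad():
        for name, module in model.named_modules():
            if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                old_mean = old_state[f"{name}.running_mean"]
                old_var = old_state[f"{name}.running_var"]
                module.running_mean.mul_(1 - lam).add_(lam * old_mean)
                module.running_var.mul_(1 - lam).add_(lam * old_var)

# Usage outline:
# old_state = copy.deepcopy(model.state_dict())   # stats after source-domain training
# ... fine-tune on the new domain (optionally with an EWC penalty) ...
# modulate_bn_statistics(model, old_state, lam=0.5)
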
Distribution-Aware Replay for Continual MRI Segmentation
Abstract
Medical image distributions shift constantly due to changes in patient population and discrepancies in image acquisition. These distribution changes result in performance deterioration that continual learning aims to alleviate. However, only adaptation with data rehearsal strategies yields practically desirable performance for medical image segmentation. Such rehearsal violates patient privacy and, like most continual learning approaches, overlooks unexpected changes from out-of-distribution instances. To transcend both of these challenges, we introduce a distribution-aware replay strategy that mitigates forgetting through auto-encoding of features, while simultaneously leveraging the learned distribution of features to detect model failure. We provide empirical corroboration on hippocampus and prostate MRI segmentation. To ensure reproducibility, we make our code available at https://github.com/MECLabTUDA/Lifelong-nnUNet/tree/cl_vae.
Nick Lemke, Camila González, Anirban Mukhopadhyay, Martin Mundt
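
A conceptual Python sketch of the two roles an auto-encoder over features can play in such a strategy: rehearsal without storing patient images (decode stored latents as pseudo-features) and failure detection via reconstruction error. Dimensions and the threshold rule are illustrative assumptions, not the released implementation:

# Feature auto-encoder used for both replay and out-of-distribution detection.
import torch
import torch.nn as nn

class FeatureAE(nn.Module):
    def __init__(self, feat_dim=256, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))

    def forward(self, f):
        z = self.enc(f)
        return self.dec(z), z

def reconstruction_error(ae: FeatureAE, feats: torch.Tensor) -> torch.Tensor:
    recon, _ = ae(feats)
    return ((recon - feats) ** 2).mean(dim=1)

# Replay: store latents z for past stages and decode them into pseudo-features
# mixed into later training batches. Detection: flag inputs whose reconstruction
# error exceeds a threshold calibrated on in-distribution validation data.
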
Exploring Wearable Emotion Recognition with Transformer-Based Continual Learning
Abstract
The rapid advancement of wearable technology has enabled continuous, real-time health monitoring through devices such as smartwatches and fitness trackers. These devices generate vast amounts of biometric data, including heart rate, galvanic skin response (GSR), and activity levels, which can be used for personalized healthcare applications such as emotion recognition and stress monitoring. However, the use of medical data introduces privacy challenges due to regulations like HIPAA and GDPR, necessitating innovative learning techniques that do not rely on large, centralized datasets. Continual learning (CL) offers a solution by enabling models to incrementally acquire knowledge over time, adapting to new data without forgetting previously learned information. This paper evaluates the effectiveness of CL techniques in the context of emotion recognition using GSR data from the DEAP dataset. Each subject is treated as a separate task, and a custom transformer-based PatchTST model is trained sequentially on each patient’s data. Results show that the CL approach achieves performance levels comparable to traditional methods that train on all patients’ data simultaneously. This demonstrates the potential of CL to maintain high accuracy while preserving patient data privacy, thereby supporting the development of adaptive, real-time personalized healthcare solutions.
Federica Rizza, Giovanni Bellitto, Salvatore Calcagno, Simone Palazzo
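
A Python sketch of the subject-as-task protocol described above: a single model is fine-tuned on one subject at a time and then evaluated on all subjects seen so far to measure forgetting. The model, data loaders, and loss are placeholders, not the paper's PatchTST configuration:

# Sequential (continual) training with one task per subject.
import torch

def continual_training(model, subject_loaders, epochs_per_subject=5, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    seen = []
    for subject_id, loader in subject_loaders.items():   # one task per subject
        model.train()
        for _ in range(epochs_per_subject):
            for gsr_window, label in loader:
                optimizer.zero_grad()
                loss = criterion(model(gsr_window), label)
                loss.backward()
                optimizer.step()
        seen.append(subject_id)
        # After each task, evaluate on all previously seen subjects, e.g.
        # accuracy = {s: evaluate(model, subject_loaders[s]) for s in seen}
    return model
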
Backmatter
Metadata
Title
Artificial Intelligence in Pancreatic Disease Detection and Diagnosis, and Personalized Incremental Learning in Medicine
Editors
Federica Proietto Salanitri
Serestina Viriri
Ulaş Bağcı
Pallavi Tiwari
Boqing Gong
Concetto Spampinato
Simone Palazzo
Giovanni Bellitto
Nancy Zlatintsi
Panagiotis Filntisis
Cecilia S. Lee
Aaron Y. Lee
Copyright Year
2025
Electronic ISBN
978-3-031-73483-0
Print ISBN
978-3-031-73482-3
DOI
https://doi.org/10.1007/978-3-031-73483-0
