Brain-ID: Learning Contrast-Agnostic Anatomical Representations for Brain Imaging
Authors: Peirong Liu, Oula Puonti, Xiaoling Hu, Daniel C. Alexander, Juan E. Iglesias
Published in: Computer Vision – ECCV 2024
Publisher: Springer Nature Switzerland
Abstract
We introduce Brain-ID, an anatomical representation learning model for brain imaging. With the proposed "mild-to-severe" intra-subject generation, Brain-ID is robust to the subject-specific brain anatomy regardless of the appearance of the acquired images. Trained entirely on synthetic inputs, Brain-ID readily adapts to various downstream tasks through one layer. We present new metrics to validate the intra- and inter-subject robustness of Brain-ID features, and evaluate their performance on four downstream applications, covering contrast-independent (anatomy reconstruction, brain segmentation) and contrast-dependent (super-resolution, bias field estimation) tasks (Fig. 1). Extensive experiments on six public datasets demonstrate that Brain-ID achieves state-of-the-art performance in all tasks on different MR contrasts and CT and, more importantly, preserves its performance on low-resolution and small datasets. Code is available at https://github.com/peirong26/Brain-ID.
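To make the "adapts to downstream tasks through one layer" claim concrete, here is a minimal sketch of a single-layer task head on top of a frozen feature extractor. The module names, feature dimensionality, and the placeholder backbone are assumptions for illustration, not the authors' implementation:

import torch
import torch.nn as nn

class OneLayerTaskHead(nn.Module):
    """Hypothetical adapter: one 3D convolution mapping frozen,
    Brain-ID-style features to a task output (e.g., segmentation logits)."""

    def __init__(self, feat_channels: int = 64, out_channels: int = 1):
        super().__init__()
        self.head = nn.Conv3d(feat_channels, out_channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)

# Usage sketch: `backbone` is a stand-in for a pretrained, frozen extractor.
backbone = nn.Conv3d(1, 64, kernel_size=3, padding=1)  # placeholder, not the real model
for p in backbone.parameters():
    p.requires_grad = False  # only the one-layer head is trained

head = OneLayerTaskHead(feat_channels=64, out_channels=4)  # e.g., 4 segmentation labels
x = torch.randn(1, 1, 32, 32, 32)  # toy 3D volume
logits = head(backbone(x))
print(logits.shape)  # torch.Size([1, 4, 32, 32, 32])

Only the single head layer carries task-specific parameters here, which is what makes the shared anatomical representation reusable across tasks.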
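The abstract also mentions new metrics for intra- and inter-subject robustness. The paper defines its own metrics; as a rough illustration of the idea only, the following hypothetical check compares voxel-wise cosine similarity of feature maps from two acquisitions of the same subject against a different subject (function name, shapes, and the alignment assumption are all ours):

import torch
import torch.nn.functional as F

def robustness_scores(feats_a, feats_b, feats_other):
    """Hypothetical intra-/inter-subject robustness check (not the paper's
    exact metric). All inputs have shape (C, D, H, W) and are assumed
    spatially aligned."""
    def mean_cos(f1, f2):
        # Cosine similarity along the channel axis, averaged over voxels.
        return F.cosine_similarity(f1.flatten(1), f2.flatten(1), dim=0).mean()

    intra = mean_cos(feats_a, feats_b)       # same subject, different contrast
    inter = mean_cos(feats_a, feats_other)   # different subject
    return intra.item(), inter.item()

# Toy usage: contrast-agnostic features should yield intra >> inter.
C, D, H, W = 64, 16, 16, 16
a, b, other = (torch.randn(C, D, H, W) for _ in range(3))
print(robustness_scores(a, b, other))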