2024 | OriginalPaper | Chapter
Comparison of Deep Learning Image-to-image Models for Medical Image Translation
Authors : Zeyu Yang, Frank G. Zöllner
Published in: Bildverarbeitung für die Medizin 2024
Publisher: Springer Fachmedien Wiesbaden
We conducted a comparative analysis of six deep learning image-to-image models for MRI-to-CT image translation: resUNet, attUnet, DCGAN, pix2pixGAN with resUNet, pix2pixGAN with attUnet, and the denoising diffusion probabilistic model (DDPM). The models were trained and evaluated on the SynthRAD2023 Grand Challenge dataset, with 170 MRI-CT image pairs (patients) used for training and 10 patients reserved for testing. Overall, the pix2pixGAN with resUNet achieved the highest scores (SSIM = 0.81±0.21, MAE = 55.52±3.50, PSNR = 27.19±6.29). The DDPM showed considerable potential, generating CT images that closely resemble real ones in detail and fidelity; however, the quality of its generated images fluctuated notably, so further refinement is needed to stabilize its output.
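The reported metrics (SSIM, MAE, PSNR) are standard for evaluating synthetic CT quality against a reference CT. The sketch below is an illustrative implementation, not the authors' evaluation code: MAE and PSNR follow their usual definitions, while the SSIM shown is a simplified single-window (global) variant rather than the windowed version typically used in practice; the intensity range of 3000 HU is an assumed example value.

```python
import numpy as np

def mae(ct_true, ct_pred):
    """Mean absolute error (e.g., in HU) between reference and synthetic CT."""
    return float(np.mean(np.abs(ct_true - ct_pred)))

def psnr(ct_true, ct_pred, data_range):
    """Peak signal-to-noise ratio in dB over the given intensity range."""
    mse = np.mean((ct_true - ct_pred) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(x, y, data_range):
    """Simplified global SSIM (single window over the whole image).

    Standard SSIM averages this statistic over local Gaussian windows;
    this global form is only a rough illustration of the formula.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Toy example with synthetic data (assumed 3000 HU intensity range)
rng = np.random.default_rng(0)
ct = rng.uniform(-1000.0, 2000.0, size=(64, 64))
pred = ct + rng.normal(0.0, 50.0, size=(64, 64))  # noisy "synthetic CT"
print(mae(ct, pred), psnr(ct, pred, 3000.0), ssim_global(ct, pred, 3000.0))
```

In a real evaluation one would typically use a library implementation such as `skimage.metrics.structural_similarity`, which applies the sliding-window formulation.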