# UNet-for-cross-sequence-MR-image-translations

A deep learning-based model for accurate image-to-image translation across MRI sequences of the brain.

In this study, we assessed the standard U-Net for end-to-end image-to-image translation across three MR image contrasts (T1, T2, and FLAIR) for the brain.
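As a rough illustration of the approach, the sketch below builds a standard 2D U-Net in Keras for translating one MR contrast into another. The input size (256×256 single-channel slices), filter counts, and L1 loss are assumptions chosen for the example, not necessarily the exact configuration used in this repository.

```python
# Minimal 2D U-Net sketch for MR-to-MR translation (e.g., T1 -> T2).
# Input size, filter counts, and loss are illustrative assumptions,
# not the exact settings used in this repository.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, as in the standard U-Net
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Encoder: double-conv blocks followed by 2x2 max pooling
    for filters in (64, 128, 256, 512):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 1024)  # bottleneck
    # Decoder: upsample, concatenate the skip connection, double conv
    for filters, skip in zip((512, 256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)
    # Linear 1x1 output head regresses target-contrast intensities
    outputs = layers.Conv2D(1, 1, activation="linear")(x)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="mae")  # L1 loss is a common choice here
```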


Fig. 2. Comparison of synthetic MR images generated with the U-Net model for one subject from the test set. From left to right: input MR image (source contrast); synthetic MR image (target contrast); ground-truth/real MR image (target contrast); difference map (predicted − real); and SSIM map. Rows show image-to-image translations across the T1, T2, and FLAIR contrasts.
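The last two panels of the figure can be reproduced as sketched below with scikit-image; `pred` and `real` are hypothetical stand-ins for a synthetic and a ground-truth slice, assumed normalized to [0, 1].

```python
# Illustrative computation of the difference map and SSIM map from the
# figure; `pred` and `real` are hypothetical stand-in arrays.
import numpy as np
from skimage.metrics import structural_similarity

pred = np.random.rand(256, 256).astype(np.float32)  # stand-in synthetic slice
real = np.random.rand(256, 256).astype(np.float32)  # stand-in real slice

diff_map = pred - real  # difference map (predicted - real)
# full=True also returns the local SSIM map alongside the mean SSIM
mean_ssim, ssim_map = structural_similarity(real, pred, data_range=1.0, full=True)
print(f"mean SSIM = {mean_ssim:.3f}")
```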

## Availability of data and materials

The datasets analyzed during the current study are available in the BRATS 2018 challenge repository at https://www.med.upenn.edu/sbia/brats2018/data.html.
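For context, the sketch below shows one way to pair two contrasts from a BraTS 2018 case into training slices with nibabel. The case directory is a placeholder; the `*_t1.nii.gz` / `*_t2.nii.gz` file naming follows the BraTS convention.

```python
# Hedged sketch of pairing BraTS 2018 contrasts for training; the case
# path below is a placeholder, not a path used by this repository.
import numpy as np
import nibabel as nib

case = "Brats18_XXXX_1"             # placeholder case ID
case_dir = f"BraTS2018/HGG/{case}"  # placeholder directory layout

t1 = nib.load(f"{case_dir}/{case}_t1.nii.gz").get_fdata()
t2 = nib.load(f"{case_dir}/{case}_t2.nii.gz").get_fdata()

def normalize(vol):
    # Min-max scale each volume to [0, 1] before slicing
    vol = vol.astype(np.float32)
    return (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)

t1, t2 = normalize(t1), normalize(t2)
# Axial slices form the (source, target) training pairs, e.g., T1 -> T2
pairs = [(t1[:, :, k], t2[:, :, k]) for k in range(t1.shape[2])]
```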

## Paper

Please cite this paper: Osman AFI, Tamam NM. Deep learning-based convolutional neural network for intramodality brain MRI synthesis. J Appl Clin Med Phys. 2022;e13530. https://pubmed.ncbi.nlm.nih.gov/35044073/.