ModalityConverter is officially available in the 3D Slicer Extensions Index, under the Image Synthesis category (from version 5.9.0)!
The extension makes medical image-to-image translation AI models freely and easily accessible in 3D Slicer. It already includes 3 ready-to-use models for brain T1w MRI-to-CT translation, recently introduced in the FedSynthCT-Brain article.
We are looking forward to community contributions — if you would like to integrate new models for other modalities (MRI-MRI, CBCT-CT, PET-CT), please propose them via the GitHub repository!
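For anyone who prefers to script the setup, here is a minimal sketch of installing the extension from the Slicer Python console. It follows the general ExtensionsManagerModel pattern from the Slicer script repository; the exact method names can vary between Slicer versions, so please verify them against the documentation for your release.

```python
# Minimal sketch (assumption: Slicer 5.x ExtensionsManagerModel API;
# check the Slicer script repository for your version).
extensionName = "ModalityConverter"
em = slicer.app.extensionsManagerModel()
em.interactive = False          # suppress confirmation popups
if not em.isExtensionInstalled(extensionName):
    em.downloadAndInstallExtensionByName(extensionName)
    slicer.util.restart()       # modules from new extensions load after a restart
```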
Yes, this has been possible for a few years—not only from MRI, but also from CBCT and PET [1], [2], [3].
Is it applicable to all MRIs? Like TMJ MRI?
Currently, the extension includes 3 models from the FedSynthCT-Brain study [4], which can generate synthetic CTs from T1w brain MRIs.
New models would need to be trained specifically for other regions, such as the TMJ. We hope to include new models with community contributions in the future.
How accurate is it?
Accuracy depends on several factors, especially the quality and amount of training data. Deep learning models have generally demonstrated satisfactory performance for image translation tasks.
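For context, the reviews cited below typically report accuracy as a voxel-wise intensity error between the synthetic CT and a co-registered real CT, most commonly the mean absolute error (MAE) in Hounsfield units within the patient body or a region of interest. A minimal sketch of that computation follows; the array names, the random stand-in data, and the thresholding step for the body mask are illustrative assumptions, not part of the extension.

```python
import numpy as np

def synthetic_ct_mae(synthetic_ct, reference_ct, mask=None):
    """Mean absolute error (in HU) between a synthetic CT and a co-registered
    reference CT, optionally restricted to a body/ROI mask."""
    diff = np.abs(synthetic_ct.astype(np.float32) - reference_ct.astype(np.float32))
    if mask is not None:
        diff = diff[mask]          # evaluate only the voxels inside the mask
    return float(diff.mean())

# Illustrative use with random data standing in for real volumes
sct = np.random.randint(-1000, 2000, size=(64, 64, 64))
ct = np.random.randint(-1000, 2000, size=(64, 64, 64))
body = ct > -400                   # crude body mask: threshold out air (assumption)
print(f"MAE inside body mask: {synthetic_ct_mae(sct, ct, body):.1f} HU")
```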
[1] Spadea, M. F. et al., Deep learning based synthetic-CT generation in radiotherapy and PET: A review, Medical Physics, Volume 48, 2021, https://doi.org/10.1002/mp.15150
[2] Dayarathna, S. et al., Deep learning based synthesis of MRI, CT and PET: Review and analysis, Medical Image Analysis, Volume 92, 2024
[3] Bahloul, M. A. et al., Advancements in synthetic CT generation from MRI: A review of techniques, and trends in radiation therapy planning, Journal of Applied Clinical Medical Physics, 2024, https://doi.org/10.1002/acm2.14499
[4] Raggio, C. B. et al., FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis, Computers in Biology and Medicine, Volume 192, Part A, 2025
I hope there is serious consideration for TMJ MRI. I routinely request both images for patients, and the ability to generate CT from MRI would mean a lot for my patients.
Please be aware that image “translation” really means image synthesis. The newly created image is the model’s best guess as to what the corresponding image in another modality would look like, based on what the given image looks like. For clinical use purposes, such synthetic images should be used only with extreme caution. Clinically, imaging is typically acquired to look for the presence of ABNORMAL features. Usually, image synthesis models will have seen predominantly, or sometimes only, NORMAL features. So, when guessing what the translated image should look like, a well-trained model is likely to guess something that looks roughly normal, even if a clinical image in the target modality would clearly show abnormality.
If you typically acquire both a CT and an MRI to evaluate a condition, it is likely because there are distinct abnormal features which are best seen on each modality. It is generally not safe to assume that an image translation would be able to generate the abnormality you are looking for based only on the image from the other modality.
I don’t think this is meant to replace diagnostic imaging. One potential use I can think of is the study of a normative population of growing children, where you cannot justify exposing them to ionizing radiation (and repeatedly, if it is a longitudinal study) just for the sake of research data. MRI to CT would be costly but possible.
I am not familiar with this, but I think it depends on how the model is trained. In MR, the bone is dark, but it is still there as a void. So if you have an abnormality, depending on how it is shaped, it is likely going to affect the shape of the void as well. So if the model is learning how to convert the void to CT from the existing CT (as opposed to learning what a normal CT looks like), then it should reconstruct the abnormality to some extent. Of course it cannot accurately reconstruct what is going on inside the bone.
I still don’t think you can replace the diagnostic data with synthetic data, particularly for treatment planning, but I see quite a lot of potential for growing kids when the care team orders evaluation exams for surgery follow-ups. Kids with craniofacial conditions would get 3-4 CTs, and even the potential to replace a couple of them is a welcome prospect.
Don’t get me wrong, these models are both very cool and very useful for many purposes, and it is fabulous that this new extension is making them easily usable in Slicer! I just wanted to add a note of caution for clinicians who may be less familiar with the technology and who might assume that a translated image is more or less equivalent to an acquired image, making the extra diagnostic acquisition unnecessary. There may be use cases where that will likely be OK (like bone shape from the void shape on MR, as you suggest), but there are also use cases where it will likely not be, and clinicians (and researchers) need to consider and test carefully for whatever application they have in mind.
The thing is that MRI captures all the information needed. So I would think that a very good model could be 80-ish percent accurate. If that is correct, it would be good enough.
There are two great things here. The first is that I can use it as a way to visualize bone morphology, not quality. Quality is already seen on MRI. But 3D shape is elusive; it requires a bit of imagination and a lot of experience to guess from MRI.
The second is that the bone will be perfectly registered to the MRI. And that has big potential for me. Not sure what exactly, but I’m thinking it might be diagnostically significant.
I would also hope that someone can work on MRI-CT registration.
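For what it’s worth, rigid MRI-to-CT registration is already available in Slicer, for example through the General Registration (BRAINS) module. Below is a minimal scripted sketch; the volume node names are placeholders, and the parameter names follow the documented BRAINSFit CLI, so treat this as a starting point to verify in your own Slicer version rather than a ready-made recipe.

```python
# Hedged sketch: rigid MRI-to-CT registration with the BRAINSFit CLI module,
# run from the Slicer Python console. "CT" and "T1w_MRI" are placeholder node names.
fixedVolume = slicer.util.getNode("CT")        # target volume (real or synthetic CT)
movingVolume = slicer.util.getNode("T1w_MRI")  # MRI to align onto the CT
outputTransform = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLLinearTransformNode", "MRI_to_CT")

parameters = {
    "fixedVolume": fixedVolume.GetID(),
    "movingVolume": movingVolume.GetID(),
    "linearTransform": outputTransform.GetID(),
    "useRigid": True,
    "initializeTransformMode": "useGeometryAlign",
    "samplingPercentage": 0.02,
}
# runSync blocks until the registration finishes
slicer.cli.runSync(slicer.modules.brainsfit, None, parameters)

# Apply the resulting transform so the MRI overlays the CT in the views
movingVolume.SetAndObserveTransformNodeID(outputTransform.GetID())
```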