TensorFlow x Slicer

Dear Slicer Community,

I’m new here and I wanted to share an extension I made to use my TensorFlow models for image segmentation directly in Slicer. I’ve already read here that most deep learning models in the medical field are developed with PyTorch, but I thought it would be fun to try creating an extension focused on TensorFlow.

The model’s input size is detected automatically, so the selected image/volume can be resized to a compatible size and range of values.
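To give an idea, here is a minimal sketch (not the extension’s exact code) of how this detection and preprocessing can be done, assuming a Keras model saved in h5 format; filenames and helper names are illustrative:

```python
# Minimal sketch, assuming a Keras model in h5 format and a 2D slice as a NumPy array.
import numpy as np
import tensorflow as tf
from skimage.transform import resize  # any resampling function would do

model = tf.keras.models.load_model("model.h5")            # illustrative filename
_, height, width, channels = model.input_shape            # e.g. (None, 256, 256, 1)

def prepare_slice(slice_2d):
    """Resize a 2D slice to the model's input size and normalize to [0, 1]."""
    resized = resize(slice_2d.astype(np.float32), (height, width), preserve_range=True)
    normalized = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)
    return normalized.reshape(1, height, width, channels)
```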

For now, segmentation can only be performed slice by slice (1 image as input, 1 segmentation as output), but I’ll try to implement a 2.5D approach (N images as input, 1 segmentation as output) and a 3D approach (full volume as input, full volume as output).
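The current 2D workflow looks roughly like this (illustrative sketch, building on the preprocessing snippet above):

```python
# Each slice is predicted independently and the results are stacked back
# into a label volume. prepare_slice is the helper from the sketch above.
import numpy as np

def segment_volume_slice_by_slice(volume_3d, model):
    predictions = []
    for k in range(volume_3d.shape[0]):            # iterate over slices
        batch = prepare_slice(volume_3d[k])        # 1 image as input
        pred = model.predict(batch, verbose=0)[0]  # 1 segmentation as output
        predictions.append(pred[..., 0] > 0.5)     # threshold the probability map
    return np.stack(predictions, axis=0)
```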

Repo of the extension: https://github.com/VincentMillotMaysounabe/Slicer-TensorFlow

If any of you have comments on how I could improve the project, it would be really valuable to me.

Thanks,
Vincent


Hi - welcome, this sounds really interesting. Yes, it’s common to use PyTorch, but TensorFlow should work fine too if it gets installed correctly. For me, on a Linux test with a recent build, pip_install("tensorflow[and-cuda]") in the Python console completed but crashed when I tried to import it. I suspect there may be similar issues with CUDA drivers, since there have been many with PyTorch.
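For reference, this is roughly how to check the install from the console (the extras name may vary by TensorFlow version, and a hard crash of the process won’t be caught by the try/except, of course):

```python
# Roughly what I ran in the Slicer Python console.
pip_install("tensorflow[and-cuda]")

try:
    import tensorflow as tf
    print(tf.__version__, tf.config.list_physical_devices("GPU"))
except Exception as exc:
    print("TensorFlow import failed:", exc)
```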

BTW, your repository would benefit from some examples and screenshots so it’s clear what the module currently does.

Also, since it’s pure Python, you should be able to provide instructions for people to install it locally. It doesn’t need to be in the Extension Manager until it’s ready for use by the general user population.

Hi
Thanks so much for your answer!
I was not aware of those installation issues since everything worked fine for me. I’m on Windows using Slicer 5.6.1 and TensorFlow 2.15.0. I’d like to investigate this further; do you have any idea where to start?

I recently updated my repository with a quick overview of the extension and a full tutorial with example files and a (not so accurate) U-Net model for prostate segmentation.

Regarding GPU driver issues, you can search this Discourse for questions about TotalSegmentator and you’ll find a lot of reports of installation problems. Maybe more on Linux than Windows, but both come up. They can usually be resolved if someone has the patience and some technical experience, but it’s a barrier to widespread use.

Hi again,
I’ve been working for some weeks on a way to deal with the TensorFlow installation issues. The solution I came up with is to offer remote prediction by sending the model and inputs to a Raspberry Pi through an API, so TensorFlow is no longer needed on the Slicer machine.
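To illustrate the idea, here is a rough sketch of what the server side could look like (illustrative only, not the extension’s actual API; it assumes Flask and the tflite runtime are installed on the Pi):

```python
# Minimal illustrative server: receives a tflite model and a stack of input
# slices, runs inference slice by slice, and returns the predictions.
import io
import numpy as np
from flask import Flask, request, send_file
import tflite_runtime.interpreter as tflite  # assumed installed on the Pi

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    model_bytes = request.files["model"].read()
    inputs = np.load(io.BytesIO(request.files["inputs"].read()))  # (N, H, W, C)

    interpreter = tflite.Interpreter(model_content=model_bytes)
    interpreter.allocate_tensors()
    in_det = interpreter.get_input_details()[0]
    out_det = interpreter.get_output_details()[0]

    outputs = []
    for i in range(inputs.shape[0]):
        interpreter.set_tensor(in_det["index"], inputs[i:i + 1].astype(np.float32))
        interpreter.invoke()
        outputs.append(interpreter.get_tensor(out_det["index"])[0])

    buffer = io.BytesIO()
    np.save(buffer, np.stack(outputs))
    buffer.seek(0)
    return send_file(buffer, mimetype="application/octet-stream")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```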

Since models can be heavy in h5 format, I’ve mainly worked with models saved in tflite format, but I’m working on making both available (tflite will always compute faster since models can be 2-3 times lighter). This solution is working well, but you have to wait about 50 seconds to get the prediction with a ~25 MB model and 20 images to compute.
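For reference, the h5-to-tflite conversion can be done with the standard TensorFlow converter, roughly like this (the optimization flag, which enables quantization, is optional):

```python
# Hedged sketch of converting an h5 model to tflite.
import tensorflow as tf

model = tf.keras.models.load_model("unet_prostate.h5")  # illustrative filename
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]     # optional quantization
tflite_model = converter.convert()

with open("unet_prostate.tflite", "wb") as f:
    f.write(tflite_model)
```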

Everything should be ready within a week; I’ll update my repo at that point.

I’d be happy to know what you think about this idea!

Most computers used for running Slicer are orders of magnitude more powerful than a Raspberry Pi, so most people would want to run inference on the computer that runs Slicer.

That said, the same server that you implement for receiving images from Slicer, running the inference, and sending back segmentations to Slicer could be used on the same computer, to run inference in another Python environment that is compatible with TensorFlow.
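For example, the client side in Slicer would be the same in both cases; only the URL changes (this sketch reuses the illustrative endpoint and field names from the server sketch posted above):

```python
# Illustrative client: post the model and inputs to the server, whether it
# runs on a Raspberry Pi or on localhost in a TensorFlow-compatible environment.
import io
import numpy as np
import requests

def remote_predict(server_url, model_path, input_array):
    buffer = io.BytesIO()
    np.save(buffer, input_array.astype(np.float32))
    buffer.seek(0)
    with open(model_path, "rb") as model_file:
        response = requests.post(
            f"{server_url}/predict",
            files={"model": model_file, "inputs": buffer},
            timeout=300,
        )
    response.raise_for_status()
    return np.load(io.BytesIO(response.content))

# e.g. remote_predict("http://localhost:5000", "model.tflite", slices)
```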

I would just note that nowadays I barely see any projects using TensorFlow for medical image computing anymore. If you want to maximize the impact of your efforts in this field then you might consider working with PyTorch.

Thank you so much for the advice.

The extension was primarily using local inference, but to do so you still have to install the TensorFlow dependencies. The idea of having a Raspberry Pi run inference is to provide a ready-to-go solution, despite lower performance. In any case, the user will still have both options available in the extension.

I expected these concerns about using TensorFlow instead of PyTorch. After some investigation into model saving and loading with PyTorch, I couldn’t find a way to load a model without having the corresponding Python class. This makes loading an unknown model much harder than with TensorFlow, and I’ll have to create a specific workflow for it. I guess the user will have to provide the Python file with the corresponding class.
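For example (illustrative file and class names), a plain state_dict only contains weights, so the architecture class has to be importable to rebuild the model:

```python
# Small illustration of the issue: a saved state_dict is just a dict of tensors.
import torch

state_dict = torch.load("weights.pth", map_location="cpu")
print(list(state_dict.keys())[:5])  # layer names and tensors, no architecture

# Rebuilding the model only works if the class is available, e.g.:
# model = MyUNet()                  # user-provided Python class
# model.load_state_dict(state_dict)
```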

You usually don’t need to implement models from scratch; you can pip install a package that provides the model (e.g., MONAI) and just load the model weights. It is all automated, and you can run a model on an image with a single click. See for example the MONAIAuto3DSeg extension.
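For example, something along these lines (hedged sketch with illustrative parameters and weight file, not a specific released model):

```python
# pip install monai, instantiate a standard architecture, load published weights.
import torch
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
model.load_state_dict(torch.load("model_weights.pt", map_location="cpu"))
model.eval()
```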