Building a custom module inside 3D Slicer that contains pre-trained models (built in TensorFlow)

Hi All,

I have a few pre-trained medical image segmentation models built in TensorFlow. I would like to build modules that I can use to test their segmentation predictions on scans that I import into 3D Slicer. I know there are extensions available like MONAI, but I want to build a module that can use my trained model (stored as an .h5 file) and generate predictions within 3D Slicer itself, using the GPU.

I tried searching within the community here, but I still don't have a clear idea of how to go about building this.
Could anyone please point me to any resources or tutorials for this purpose?

You can start with the general programming tutorials linked from the web site. You can pip install tensorflow and move data back and forth using numpy arrays. There are lots of example scripts available to learn from. It sounds like you searched but didn’t find them - maybe we need to make this more obvious? Or did you find these and still thought there should be more?
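The numpy round trip can be sketched without Slicer running. Inside Slicer you would obtain the voxel array with `slicer.util.arrayFromVolume(inputVolumeNode)` and push results back with `slicer.util.updateVolumeFromArray(labelmapNode, labels)`; in the sketch below a random array stands in for the scan and a simple threshold stands in for the trained model, so the whole thing is illustrative, not the actual module code.

```python
import numpy as np


def predict_segmentation(volume: np.ndarray) -> np.ndarray:
    """Placeholder for model.predict(); a threshold stands in for the network."""
    return (volume > volume.mean()).astype(np.uint8)


# Stand-in for: scan = slicer.util.arrayFromVolume(inputVolumeNode)
scan = np.random.default_rng(0).normal(size=(16, 64, 64)).astype(np.float32)

labels = predict_segmentation(scan)

# Stand-in for: slicer.util.updateVolumeFromArray(outputLabelmapNode, labels)
print(labels.shape, labels.dtype)
```

The key point is that the model only ever sees plain numpy arrays, so the TensorFlow side and the Slicer side stay decoupled.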

Thanks for the reply @pieper. I did go through the tutorials and found the right resources.

However, I still need some help. Thanks to @lassoan's CLI blur-image example code, I was able to build a custom module myself.
Here is my workflow:

  1. Import a scan into 3D Slicer.
  2. Pass the scan via argument as an input to my segmentation prediction Python file.
    1. The segmentation Python file contains my pre-trained TensorFlow model, with which I generate the segmentations.
  3. Take the segmentations and write them back to Slicer as a label map.
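The hand-off in step 2 can be sketched with plain files and a subprocess; the inline script below is a stand-in for the real TensorFlow prediction file (which would load the .h5 model), and the file names and thresholding are illustrative only.

```python
import os
import subprocess
import sys
import tempfile

import numpy as np

# Stand-in for the external segmentation prediction file; the real one would
# load the .h5 model and call model.predict() instead of thresholding.
script = """
import sys
import numpy as np
scan = np.load(sys.argv[1])
labels = (scan > scan.mean()).astype(np.uint8)
np.save(sys.argv[2], labels)
"""

with tempfile.TemporaryDirectory() as tmp:
    in_path = os.path.join(tmp, "scan.npy")
    out_path = os.path.join(tmp, "labels.npy")
    np.save(in_path, np.random.rand(8, 32, 32).astype(np.float32))
    # Inside Slicer, slicer.util.launchConsoleProcess could run this instead.
    subprocess.run([sys.executable, "-c", script, in_path, out_path], check=True)
    labels = np.load(out_path)

print(labels.shape, labels.dtype)
```

Passing file paths rather than raw data keeps the module logic and the model environment independent, which also makes it easy to point the subprocess at a different Python environment later.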

This workflow works and I am able to generate segmentations. However, the time taken to generate predictions is far too long. I think it is because the Python script is executed on the CPU and not the GPU. I tried to initialize and set up the GPU within the Python script using the code below:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = '0' 

I also tried to check whether the GPU is detected using the code below, and I get the output “Not running on GPU” from it:

import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs,
        # and must be set before the GPUs have been initialized
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        print(e)
else:
    print("Not running on GPU")

Could you please help me with this? Thanks a lot in advance.

It looks like you’ve got a CUDA installation problem, or maybe you didn’t get the right version of tensorflow for the CUDA driver you have. This isn’t really a Slicer issue, so you should find lots of information about this with some searching. For testing, you can use the PythonSlicer interpreter, which is just a normal Python build without the Slicer app.
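One quick way to check the version-mismatch theory is to ask TensorFlow which CUDA/cuDNN versions it was built against and compare them with what `nvidia-smi` reports. This is a hedged sketch assuming TensorFlow 2.x, where `tf.sysconfig.get_build_info()` is available; the helper name is just for illustration.

```python
def tf_cuda_build_info():
    """Return the CUDA/cuDNN versions this TensorFlow build expects,
    or an error message if TensorFlow is not importable here."""
    try:
        import tensorflow as tf
    except ImportError:
        return {"error": "tensorflow is not installed in this environment"}
    # On GPU builds this dict includes 'cuda_version' and 'cudnn_version'
    info = tf.sysconfig.get_build_info()
    return {k: info.get(k) for k in ("is_cuda_build", "cuda_version", "cudnn_version")}


print(tf_cuda_build_info())
```

Running this in both PythonSlicer and any other working environment makes it easy to spot a build that was compiled without CUDA support or against a different CUDA version than the installed driver.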

Thanks @pieper. But in terms of the process, all I want to do is load a scan in Slicer, use my TensorFlow model to generate segmentations, get those back into Slicer, and overlay them on the scan. Apart from the workflow I followed, is there a more efficient way to do this?

There are lots of options. If you have another Python environment where tensorflow detects the GPU and runs faster, then you can execute it with slicer.util.launchConsoleProcess(args) and read the results. Or you could use a server, like MONAI Label does. Or you could use slicerio from an external process to send segmentations to Slicer.