AI-assisted segmentation extension

We are excited to announce that the Nvidia AI-assisted segmentation extension is ready to use in the latest Slicer Preview Release (rev28686 or later). The extension has been developed by Nvidia, with contributions from Slicer core developers. While there have been other AI-assisted segmentation modules in Slicer (such as DeepInfer, TOMAAT, SlicerCIP), this newest addition uses Nvidia Clara, a toolkit with significant industrial support and sufficient openness for researchers.

A few-minute overview video, showing guided MRI brain tumor and liver segmentation, and fully automatic liver, tumor, and spleen segmentation:

Tutorial and detailed description: see Nvidia AIAA extension documentation.

How does it work? The input image (and, in the case of guided segmentation, the input points) is sent to a computer equipped with an Nvidia GPU, running the Linux operating system and Nvidia Clara software. The server computes the segmentation using the selected AI model and sends the results back to Slicer for display and further processing.
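The round trip is essentially a REST exchange. As a minimal sketch of the kind of request the client sends to list available models (the `/v1/models` endpoint path and `label` query parameter here are assumptions for illustration, not the documented Clara AIAA API):

```python
from urllib.parse import urlencode


def build_model_list_url(server_url, label=""):
    """Build the URL a client might use to ask an AIAA-style server
    which segmentation models are available.
    The endpoint path and parameter name are hypothetical."""
    url = server_url.rstrip("/") + "/v1/models"
    if label:
        url += "?" + urlencode({"label": label})
    return url


print(build_model_list_url("http://example.org:5000", "liver"))
# http://example.org:5000/v1/models?label=liver
```

The actual extension wraps calls like this (plus the image upload and result download) behind its `AIAAClient` class, so users never construct URLs by hand.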

We have set up a demonstration server at the PerkLab (Queen’s University, Canada) to make it easier for Slicer users to get started without setting up their own processing computer. We uploaded a couple of AI models that Nvidia developed. We provide these models and the processing service as is; we don’t guarantee the quality of the service (validity of segmentation results, speed, server uptime, etc.). No patient information is sent to the processing server, and images and results are deleted from the server after processing, but users need to make sure they comply with their data management guidelines when using our server. If there are any confidentiality concerns, then publicly available images may be used for testing: see Slicer’s Sample Data module or the TCIA Browser extension, or download from other websites, such as the Medical Decathlon.

If anyone would like to share their AI models for segmentation, let us know. As long as the model is compatible with Nvidia Clara, we should be able to install it on our server and make it available to the Slicer community.

Any questions and suggestions are welcome!


This looks great. I assume the existing trained models are for human clinical scans?
Is it possible to add/train additional reference segmentations (e.g., mouse skulls, embryos, etc.)?

All NVidia models are based on human clinical data sets (you can find more information here). However, the methods are completely generic and you should be able to use Clara Train to build models from your data.

This is indeed great news!!! :partying_face:

However, only spleen segmentation and liver-and-tumor segmentation show up in mine.

I’ve only installed liver+tumor and spleen models for fully automatic segmentation (others were listed to have rather low scores or just did not seem that interesting).

There are about 10 more models for boundary-point-based segmentation.

If you want us to install any other models listed here then let me know.


Dear Prof. Lassoan,
Well, these AI-trained models at NVIDIA are not relevant to me; I looked at them out of interest, and I thought it was due to something wrong with the Linux version, as I had downloaded the new build moments ago. I would have loved to experiment with training on oral cancer, but unfortunately the required specs are out of my reach.

Hardware Requirements

Recommended

  • 1 GPU or more
  • 16 GB GPU memory
  • 8 core CPU
  • 32 GB system RAM
  • 80 GB free disk space

This sounds really cool. @lassoan I wondered if you recall our CT cardio project… Any ideas if this might work for automatic segmentation of heart datasets like we were dealing with?

John

The automatic liver/spleen segmentation was impressive. NVidia has come a long way fast since the manual point and click approach. I plan to compare their approach to a Caffe 2D UNet paradigm. My only concern is the low input resolution (128 x 128 x 128) required.
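To make the resolution concern concrete: a fixed 128 × 128 × 128 model input means a typical clinical CT (often 512 × 512 in-plane) must be downsampled before inference. A toy nearest-neighbor resampling sketch in numpy (the model's real preprocessing presumably uses proper interpolation and voxel-spacing handling; this only illustrates the information loss):

```python
import numpy as np


def resample_nearest(volume, target_shape=(128, 128, 128)):
    """Nearest-neighbor resample of a 3D array to target_shape.
    A toy stand-in for the downsampling a fixed 128^3 model
    input forces on higher-resolution CT data."""
    idx = [np.round(np.linspace(0, s - 1, t)).astype(int)
           for s, t in zip(volume.shape, target_shape)]
    return volume[np.ix_(*idx)]


# A (downsized) stand-in for a CT volume.
ct = np.random.rand(256, 256, 100)
small = resample_nearest(ct)
print(small.shape)  # (128, 128, 128)
```

With 256 voxels collapsed to 128 along each in-plane axis, every other sample is simply dropped, which is why small lesions can disappear at this input size.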

Andras,
this is amazing news. Thinking that we might have a true one-click segmentation very soon is so exciting.

I tested this extension, specifically liver segmentation models, on my local datasets and wanted to share some thoughts:

  1. Liver segmentation from boundary points works flawlessly. It is by far more accurate and faster than any other approach available in Slicer (grow from seeds, marching…). In most cases it doesn’t require any postprocessing. For people like me, who work with liver parenchyma segmentation regularly, this is another major upgrade.
  2. Automatic liver and tumor segmentation has some accuracy issues, though. It looks like the model is a little overfitted to the Medical Decathlon dataset. I tested a few examples from that set and it produced pretty accurate segmentations. On my local dataset it had a lot of trouble and is not usable at the moment (though it has some strong moments and can detect small lesions). Also, it’s too time-consuming for the results you get. A few examples:

    I will look at Clara more closely ASAP and try to figure out how to contribute models.

@lassoan

Great news! I just downloaded Slicer 4.11.0 for Windows, but this extension does not appear in the Extension Manager. Another question: do you know if there is an automatic spine segmentation model available?

I think it is possible to fine-tune the models with new datasets. It would be nice if this were available in the Slicer module.

I have a question… Can this thing be used to segment whatever organ we want, or just the ones they prepared segmentation models for?

Thanks for the feedback. It’s great to hear that you find the extension useful.

I think so, too. If the volume has a different size or resolution, then the model gets confused. The models are pretty good overall, considering that Nvidia provides them as technology demonstrations, but if you spend some time on training, especially with your own data sets, then you can expect much better results.

Awesome. If you can create models and are willing to share them, then we would be happy to upload them to our segmentation server.

There is a few-hour timeslot each day when the new nightly Slicer installer is already available but not all extensions are built yet. You either need to wait a few hours or download the preview release from the day before using this link: https://download.slicer.org/?offset=-1

Yes, it is. It is not as easy as running a pre-trained model, so I’m not sure we can create a simple GUI for this in Slicer, but you can follow these instructions.

These specs are just recommendations and only relevant for training (for inference, any hardware with a CUDA-capable GPU will do). For training, you should be fine with 8 GB GPU memory and 16 GB RAM, but probably even half of that is enough for many applications.

Each model can segment what it was trained for. You can fine-tune pre-trained models or train new models from scratch if you have enough data sets (for simple problems you might not need that much data or training-parameter tuning) by following these instructions, or by creating any model and bringing it into the MMAR format.

The training framework is quite generic and heart segmentation on contrasted CT is fundamentally not very hard, so training a model should be feasible if you have enough data sets. There could be some special considerations to handle large 4D periodic data sets.

Very excited about these modules! However, I must be doing something wrong. I keep getting an error when I try to fetch models. It says "Failed to fetch models from server. Make sure address is correct and retry." I'm not sure if my work PC is blocking me from accessing that server? I have 0.0.0.0 for the server address.
Details:
Traceback (most recent call last):
  File "C:/Users/en12283/AppData/Roaming/NA-MIC/Extensions-28690/NvidiaAIAssistedAnnotation/lib/Slicer-4.11/qt-scripted-modules/SegmentEditorNvidiaAIAALib/SegmentEditorEffect.py", line 164, in onClickFetchModels
    models = self.logic.list_models(self.ui.modelFilterLabelLineEdit.text)
  File "C:/Users/en12283/AppData/Roaming/NA-MIC/Extensions-28690/NvidiaAIAssistedAnnotation/lib/Slicer-4.11/qt-scripted-modules/SegmentEditorNvidiaAIAALib/SegmentEditorEffect.py", line 692, in list_models
    result = self.aiaaClient.model_list(label)
  File "C:\Users\en12283\AppData\Roaming\NA-MIC\Extensions-28690\NvidiaAIAssistedAnnotation\lib\Slicer-4.11\qt-scripted-modules\NvidiaAIAAClientAPI\client_api.py", line 107, in model_list
    response = AIAAUtils.http_get_method(self.server_url, selector)
  File "C:\Users\en12283\AppData\Roaming\NA-MIC\Extensions-28690\NvidiaAIAssistedAnnotation\lib\Slicer-4.11\qt-scripted-modules\NvidiaAIAAClientAPI\client_api.py", line 402, in http_get_method
    conn = httplib.HTTPConnection(parsed.hostname, parsed.port)
  File "C:\Users\en12283\AppData\Local\NA-MIC\Slicer 4.11.0-2019-12-17\lib\Python\Lib\http\client.py", line 849, in __init__
    (self.host, self.port) = self._get_hostport(host, port)
  File "C:\Users\en12283\AppData\Local\NA-MIC\Slicer 4.11.0-2019-12-17\lib\Python\Lib\http\client.py", line 881, in _get_hostport
    i = host.rfind(':')
AttributeError: 'NoneType' object has no attribute 'rfind'

I just did a Pancreatic tumor case and would love to see if the auto segment gets it close. Thanks!

Are you running a local server? If you’re trying to access publicly available models hosted by PerkLab, leave the “NVidia AIAA server” field empty.
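As a side note on the traceback above: Python's `urlparse` only recognizes a hostname when the address includes a scheme, so a bare `0.0.0.0` parses as a path with `hostname = None`, which is exactly the `None` that `HTTPConnection` then crashes on. A quick demonstration:

```python
from urllib.parse import urlparse

# Without a scheme, "0.0.0.0" is treated as a path, so hostname is None;
# passing None to http.client.HTTPConnection triggers the AttributeError
# seen in the traceback.
print(urlparse("0.0.0.0").hostname)          # None
print(urlparse("http://0.0.0.0").hostname)   # 0.0.0.0
```

So if you do run your own server, entering the address with an explicit `http://` prefix avoids this particular failure mode.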


I thought it was blank yesterday. I have no idea what I did differently today, but now it’s working!! Sorry for wasting your time! Thanks, Greg

Is it possible to train this to segment an area of bone defect (e.g., alveolar cleft) by looking at a predefined set of standard landmarks?

I am a complete beginner in AI.

Could someone put together a small tutorial on how to train a model to segment out a structure?

The extension allows users to run trained AI models. Creating models is a non-trivial task, but usually it is not too hard if you already have a large training data set (hundreds of manually segmented cases). You can search specifically for “nvidia clara train tutorial” (or look for any deep-learning image segmentation tutorial on the web and then set up the Clara interface for it).

I do have access to large sets of CT and CBCT data of the head and neck region. But how much manual segmentation would be needed, e.g.:

  1. to train on oral cancer detection in the maxilla or mandible? (we get about 150–200 oral cancers per year)
    or
  2. for simpler segmentation like the inferior alveolar nerve? (large amounts of data)

Also, is there a way to compensate for conditions with a lack of data? For example, in cleft lip and palate we don’t get patients in the hundreds; we get about 20 new patients per year. In those cases, is it possible to train?

For training segmentation models, you need manual segmentations as input (it is not enough to have just the CT volumes). The more data the better, but a few hundred cases should be enough. If you only have a few tens of cases, it may be feasible too, but it makes training more difficult (you may need to tune the network structure and learning parameters more carefully, do more sophisticated data augmentation, etc.).
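On the data-augmentation point: when cases are scarce, applying the same random spatial transforms to each image and its manual segmentation effectively multiplies the training set. A minimal numpy sketch (real pipelines, such as Clara Train's, use much richer transforms: elastic deformations, intensity shifts, etc.):

```python
import numpy as np


def augment(volume, labels, rng):
    """Apply the same random flips and axial 90-degree rotation
    to an image and its segmentation, keeping them aligned.
    A toy example of spatial data augmentation."""
    for axis in range(3):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
            labels = np.flip(labels, axis=axis)
    k = int(rng.integers(4))  # 0-3 quarter-turns in the axial plane
    volume = np.rot90(volume, k, axes=(0, 1))
    labels = np.rot90(labels, k, axes=(0, 1))
    return volume, labels


rng = np.random.default_rng(0)
vol = np.random.rand(64, 64, 32)
seg = (vol > 0.5).astype(np.uint8)  # toy "manual segmentation"
aug_vol, aug_seg = augment(vol, seg, rng)
print(aug_vol.shape, aug_seg.shape)  # (64, 64, 32) (64, 64, 32)
```

Because identical transforms are applied to both arrays, each augmented pair remains a valid training example.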

Thank you. I will try simple inferior alveolar nerve training first; I think I should be able to segment about 200 alveolar nerves and try this as an experiment.

Is the segmenting done through the usual process with the Segment Editor?
What format do I need to save the segments/data in?

Once I segment out 200 alveolar nerves, I will try to set up the Clara interface for it.