AI-assisted segmentation extension

Before you start segmenting, please take a look at the tutorials and user guides. If you decide to use NVIDIA Clara, you can find more information here: https://docs.nvidia.com/clara/tlt-mi/clara-train-sdk-v1.1/index.html. In short: it depends on your approach.

Also, think about the structure that you want to segment. You might want to start with a larger structure than the inferior alveolar nerve – small structures tend to be more difficult for AI models to learn.

Is segmenting done with the usual process in the Segment Editor?

Yes, the process of creating the “ground truth” segmentation can be done completely with Slicer tools. Usually, you’ll want to export your segmentation masks into binary label maps (through the Segmentations module).
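
For example, a minimal sketch like this exports a segmentation to a binary labelmap and saves it to disk (the node names and the output path are placeholders you would adapt to your scene):

# Export a segmentation node to a binary labelmap and save it to disk
# (node names and output path below are placeholders)
import slicer

segmentationNode = slicer.util.getNode("Segmentation")
referenceVolumeNode = slicer.util.getNode("MyVolume")
labelmapVolumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapVolumeNode, referenceVolumeNode)
slicer.util.saveNode(labelmapVolumeNode, "ground_truth_label.nii.gz")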

Thanks, much appreciated. Will experiment with this.

Hi! I want to segment ovarian tumors but I cannot find a suitable model. Could you set up a model for ovarian tumors, please? Thank you very much!

You can try other models (such as those developed for brain tumors). If they don’t work then you probably need to train your own model (see reply above).

@lassoan: Thanks to your team and NVIDIA for providing such an interesting project.

Currently, we are working on object detection (e.g. tumor detection) using Faster R-CNN and plan to incorporate it into 3D Slicer and NVIDIA Clara. This is a new task and is not included in the current version of NVIDIA Clara; I hope we can contribute it. However, I am new to Slicer and I would like to ask you some questions:

My pipeline is: load a JSON file --> draw a rectangular bounding box (4 points) --> modify the locations of these points --> overwrite the JSON file (Pascal VOC format)

Could you please suggest a tutorial for implementing this pipeline in Slicer with a Python extension? For writing the JSON file, I found a tutorial at https://www.slicer.org/wiki/Documentation/Nightly/ScriptRepository#Write_annotation_ROI_to_JSON_file

Since the NvidiaAIAA Segment Editor effect has already implemented the Clara API, I would recommend starting by modifying that effect in your own fork.

Probably the easiest approach is to use 4 markup fiducial points for this. Placing points is already implemented in the AIAA effect; you can connect them with lines as it is done in the SurfaceCut effect.

Saving/loading a JSON file is very easy in Python (see the sketch below). Let us know if you have any specific questions.
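
For example, a minimal sketch like this reads 4 fiducial points from a markups node and writes their bounding box to a JSON file (the node name, output path, and field names are illustrative, not a fixed Pascal VOC schema; adapt them to your annotation layout):

# Read 4 corner points from a markups fiducial node and write their
# bounding box to JSON (node name, path, and field names are placeholders)
import json
import slicer

pointsNode = slicer.util.getNode("F")  # fiducial node holding the 4 corner points
corners = []
for i in range(pointsNode.GetNumberOfFiducials()):
    pos = [0.0, 0.0, 0.0]
    pointsNode.GetNthFiducialPosition(i, pos)
    corners.append(pos)

bbox = {
    "xmin": min(p[0] for p in corners),
    "xmax": max(p[0] for p in corners),
    "ymin": min(p[1] for p in corners),
    "ymax": max(p[1] for p in corners),
}
with open("bounding_box.json", "w") as f:
    json.dump(bbox, f, indent=2)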

Thank you Prof Lasso and team for this extension. I’ve just trained a model with good results and I was able to deploy it right away in Slicer.
I would like to complete a script that performs the autosegmentation on some datasets using this model with the NVIDIA AIAA extension (I started from the Extract Skin example).
I looked at setMRMLDefaults to find out the parameters, but I’m missing something. I also could not find where to set the AIAA server URL.

I appreciate any help. Thanks again

Caio

# NVIDIA auto segmentation
segmentEditorWidget.setActiveEffectByName("NvidiaAIAA")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("SegmentationModel", "my_model")
effect.self().onApply()

Traceback (most recent call last):
  File "<console>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'setParameter'

Great!

Would you like to run extract skin example on an NVidia AIAA server?

Which effect's parameters are you trying to find?

To use the default Slicer AIAA server, leave the URL field empty.

I would like to run the autosegmentation from a script (loading volumes, creating segmentations, and using the NVIDIA AIAA Slicer extension from code). I used the Extract Skin example to get the code for creating the segmentation etc. I’m running a local AIAA server here with my model.
Everything is working fine with my server and model, but I would like to skip the clicking parts by executing a script.

Parameters of the NVIDIA AIAA extension.

I’ve added an example of boundary-point-based AI segmentation in batch mode (without GUI):

Thanks so much. I’m learning coding as I go, and I was able to complete the script for the segmentation based on yours.

# Auto segmentation (assumes segmentEditorWidget has been set up beforehand,
# as in the Segment Editor scripting examples, e.g. the Extract Skin sample)
segmentEditorWidget.setActiveEffectByName("Nvidia AIAA")
effect = segmentEditorWidget.activeEffect()
serverUrl = "http://AIAA_SERVER_ADDRESS:PORT/v1/models"
effect.self().ui.serverComboBox.currentText = serverUrl
effect.self().onClickFetchModels()  # populate the model list from the server
aiaaModelName = "MY_MODEL"
effect.self().ui.segmentationModelSelector.currentText = aiaaModelName
effect.self().onClickSegmentation()  # run the fully automatic segmentation

Hi, I’m also trying to automatically segment ovarian masses.
I have a collection of over 200 manually segmented tumor lesions of both ovaries; several of them are bilateral and partly merged. How can I train my own model using this approach? Is there a tutorial available? Many thanks for your help!

There are lots of deep learning tutorials online.

You can follow any Keras or PyTorch tutorial and feed your segmentations into it. Most likely it will not work immediately, but if the segmentation problem is not too hard, then by tuning the network layers, learning parameters, and data preprocessing you will eventually get to something usable. A rough sketch of such a training loop is shown below.
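
As a starting point, here is a minimal sketch of a PyTorch segmentation training loop. The tiny network, the Dice loss, and the random placeholder tensors are all illustrative; substitute a real architecture (e.g. a U-Net) and your own data loading:

# Minimal PyTorch segmentation training sketch (toy network and random
# placeholder data; replace both with a real architecture and your dataset)
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Toy 3D conv net standing in for a real architecture such as U-Net."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),  # one output channel: tumor probability logits
        )
    def forward(self, x):
        return self.layers(x)

def dice_loss(logits, target, eps=1e-6):
    # Soft Dice loss: 1 - Dice overlap between predicted and true masks
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    return 1 - (2 * intersection + eps) / (probs.sum() + target.sum() + eps)

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(10):
    # Replace with batches of (image, label) volumes loaded from your dataset
    image = torch.randn(1, 1, 32, 32, 32)
    label = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()
    optimizer.zero_grad()
    loss = dice_loss(model(image), label)
    loss.backward()
    optimizer.step()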

If you want a model that runs directly on the NVidia AIAA server, then you can try to train a model using NVidia Clara. It is a higher-level library that implements some commonly needed features and hides some of the low-level details, which may save you time, but it also makes learning a bit harder (it may be less clear what exactly happens internally).

Hi @Dunstan

Here’s a tutorial you can follow. It uses PyTorch and TorchIO to train a 3D U-Net for segmentation. There will be a TorchIO extension available in Slicer soon. I hope this helps.
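
To give a flavor of what the tutorial uses, a minimal TorchIO data setup looks roughly like this (the file paths are placeholders):

# Minimal TorchIO data setup (file paths are placeholders)
import torchio as tio

subject = tio.Subject(
    image=tio.ScalarImage("image.nii.gz"),
    label=tio.LabelMap("label.nii.gz"),
)
transforms = tio.Compose([
    tio.RescaleIntensity(out_min_max=(0, 1)),  # normalize intensities
    tio.RandomFlip(),                          # simple augmentation
])
dataset = tio.SubjectsDataset([subject], transform=transforms)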

@lassoan - We tested the AI organ segmentation extension on a bunch of public data. The extension worked very well. Awesome. However, as our research group would like to also analyze hospital/patient data, we do not feel comfortable sending our input images (even deidentified ones) to an “external” server. Therefore, my question is: is there any way to run the AI organ segmentation using our own hardware (e.g. GPU) without sending our data to a different server? Thank you!

You can set up your own server by following the instructions on the NVidia Clara website.
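
Once your local server is running, you can check that Slicer can reach it from the Python console. This is a sketch assuming the client API bundled with the extension; the constructor arguments and the local URL may differ between versions:

# Query a local AIAA server for its available models
# (assumed constructor signature; check your client_api.py version)
from NvidiaAIAAClientAPI.client_api import AIAAClient

client = AIAAClient(server_url="http://localhost:5000")
print(client.model_list())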

Dear Prof Lasso:
I have tried the NVIDIA AIAA module in 3D Slicer; here are my results:

  1. DExtr3D works fine.
  2. Auto-segmentation does not work:
    Traceback (most recent call last):
      File "C:/Users/Expert Pro R2/AppData/Roaming/NA-MIC/Extensions-29276/NvidiaAIAssistedAnnotation/lib/Slicer-4.11/qt-scripted-modules/SegmentEditorNvidiaAIAALib/SegmentEditorEffect.py", line 361, in createAiaaSessionIfNotExists
        in_file, session_id = self.logic.createSession(inputVolume)
      File "C:/Users/Expert Pro R2/AppData/Roaming/NA-MIC/Extensions-29276/NvidiaAIAssistedAnnotation/lib/Slicer-4.11/qt-scripted-modules/SegmentEditorNvidiaAIAALib/SegmentEditorEffect.py", line 999, in createSession
        response = aiaaClient.create_session(in_file)
      File "C:\Users\Expert Pro R2\AppData\Roaming\NA-MIC\Extensions-29276\NvidiaAIAssistedAnnotation\lib\Slicer-4.11\qt-scripted-modules\NvidiaAIAAClientAPI\client_api.py", line 107, in create_session
        raise AIAAException(AIAAError.SERVER_ERROR, 'Status: {}; Response: {}'.format(status, response))
    NvidiaAIAAClientAPI.client_api.AIAAException: (3, "Status: 404; Response: b'...404 Not Found... The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again....'")

What would be the solution for this?
Kind regards

Please report this to NVidia engineers at https://github.com/NVIDIA/ai-assisted-annotation-client/issues. Maybe we would just need to upgrade the default Slicer server (https://github.com/NVIDIA/ai-assisted-annotation-client/issues/62).

Is there a way to incorporate ultrasound (US) into the AI-assisted segmentation module?

Ultrasound is very different from CT and MRI. There is code for training and deploying ultrasound segmentation in this repository: https://github.com/SlicerIGT/aigt
This repository is not very well organized, but if you tell me more about what you need to do, I can point you to more specific examples.