Also, think about the structure you want to segment. You might want to start with a larger structure than the inferior alveolar nerve – small structures tend to be more difficult for AI models to learn.
Is segmenting done with the usual process in the Segment Editor?
Yes, the process of creating the “ground truth” segmentation can be done entirely with Slicer tools. Usually, you’ll want to export your segmentation masks as binary label maps (through the Segmentations module).
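Conceptually, the export step just turns each labeled structure into its own binary mask. Here is a plain-Python sketch of the idea on a toy 1-D "image" (this is only an illustration, not the Slicer API; in Slicer the Segmentations module handles the actual export):

```python
# Toy illustration: a labeled segmentation becomes one binary mask per structure.

def binary_masks(labelmap, labels):
    """Return {label: binary mask} for each structure label in the labelmap."""
    return {label: [1 if v == label else 0 for v in labelmap] for label in labels}

labelmap = [0, 1, 1, 2, 0, 2]        # 0 = background, 1 and 2 = structures
masks = binary_masks(labelmap, labels=[1, 2])
print(masks[1])  # [0, 1, 1, 0, 0, 0]
print(masks[2])  # [0, 0, 0, 1, 0, 1]
```

The same thresholding idea applies voxel-wise to real 3-D label volumes.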
Hi! I want to segment ovarian tumors, but I cannot find a suitable model. Could you set up a model for ovarian tumors, please? Thank you very much!
You can try other models (such as those developed for brain tumors). If they don’t work then you probably need to train your own model (see reply above).
@lassoan: Thank you and your team, and NVIDIA, for providing this interesting project.
Currently, we are working on object detection (e.g., tumor detection) using Faster R-CNN and plan to incorporate it into 3D Slicer and NVIDIA Clara. This task is new and is not included in the current version of NVIDIA Clara; I hope we can contribute it. However, I am a newbie to Slicer and I would like to ask you some questions:
My pipeline is: load a JSON file --> draw a rectangular bounding box (4 points) --> modify the locations of these points --> overwrite the JSON file (Pascal VOC format).
Since the NvidiaAIAA Segment Editor effect has already implemented the Clara API, I would recommend starting by modifying that effect in your own fork.
Probably the easiest approach is to use 4 markup fiducial points for this. Marking points is already implemented in the AIAA effect; you can connect them with lines the same way it is done in the SurfaceCut effect.
Saving/loading a JSON file is very easy in Python. Let us know if you have any specific questions.
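For the save/load step, Python's built-in json module is enough. A minimal sketch of the pipeline above (the xmin/ymin/xmax/ymax field names follow Pascal VOC conventions, but the exact JSON schema here is an assumption; adapt it to your annotation format):

```python
import json

def bbox_from_points(points):
    """Axis-aligned bounding box from the 4 fiducial points ((x, y) pairs)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return {"xmin": min(xs), "ymin": min(ys), "xmax": max(xs), "ymax": max(ys)}

def save_bbox(path, bbox):
    with open(path, "w") as f:
        json.dump(bbox, f)

def load_bbox(path):
    with open(path) as f:
        return json.load(f)

points = [(10, 20), (50, 20), (50, 60), (10, 60)]
bbox = bbox_from_points(points)
save_bbox("annotation.json", bbox)
print(load_bbox("annotation.json"))  # {'xmin': 10, 'ymin': 20, 'xmax': 50, 'ymax': 60}
```

When the user moves a fiducial point, you would recompute the box and overwrite the file the same way.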
Thank you Prof Lasso and team for this extension. I’ve just trained a model with good results and I was able to deploy it right away in Slicer.
I would like to write a script that performs the auto-segmentation on some datasets using this model (Extract skin) with the NVIDIA AIAA extension.
I looked at setMRMLDefault to find out the parameters, but I’m missing something. I also cannot find the input for the AIAA server URL.
I would like to run the auto-segmentation from a script (loading volumes, creating segmentations, and using the NVIDIA AIAA Slicer extension from code). I used the Extract Skin example to get the code for creating segmentations, etc. I’m running a local AIAA server here with my model.
Everything is working fine with my server and model, but I would like to skip the manual clicking by running it all from a script.
Hi, I’m also trying to automatically segment ovarian masses.
I have a collection of over 200 manually segmented tumor lesions of both ovaries; several of them are bilateral and partly merged. How can I train my own model using this approach? Is there a tutorial available? Many thanks for your help!
You can follow any Keras or PyTorch tutorial and feed your segmentations into it. Most likely it will not work immediately, but if the segmentation problem is not too hard, then by tuning the network layers, learning parameters, and data preprocessing, you will eventually get something usable.
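While tuning, a simple quantitative check is the Dice similarity coefficient between a predicted mask and your ground truth. A minimal plain-Python sketch (real pipelines would compute this on NumPy or tensor arrays, but the formula is the same):

```python
# Dice similarity coefficient for two equal-length flattened binary masks:
# 2 * |A intersect B| / (|A| + |B|). Ranges from 0 (no overlap) to 1 (perfect).

def dice(pred, truth):
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * intersection / total

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(dice(pred, truth))  # 0.666...
```

Tracking this score on a held-out set tells you whether your tuning changes are actually helping.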
If you want a model that runs directly on the NVIDIA AIAA server, then you can try training one using NVIDIA Clara. It is a higher-level library that implements some commonly needed features and hides some of the low-level details, which may save you time, but it can also make learning a bit harder (it may be less clear what exactly happens internally).
Here’s a tutorial you can follow. It uses PyTorch and TorchIO to train a 3D U-Net for segmentation. There will be a TorchIO extension available in Slicer soon. I hope this helps.
@lassoan - We tested the AI organ segmentation extension on a bunch of public data. The extension worked very well. Awesome. However, as our research group would also like to analyze hospital/patient data, we do not feel comfortable sending our input images (even de-identified ones) to an external server. Therefore, my question is: is there any way to run the AI organ segmentation using our own hardware (e.g., GPU) without sending our data to a different server? Thank you!
Dear Prof Lasso:
I have tried nvidia AIAA module in 3d slicer, there are my results:
DExtr3D works fine.
Auto-segmentation does not work:
Traceback (most recent call last):
  File "C:/Users/Expert Pro R2/AppData/Roaming/NA-MIC/Extensions-29276/NvidiaAIAssistedAnnotation/lib/Slicer-4.11/qt-scripted-modules/SegmentEditorNvidiaAIAALib/SegmentEditorEffect.py", line 361, in createAiaaSessionIfNotExists
    in_file, session_id = self.logic.createSession(inputVolume)
  File "C:/Users/Expert Pro R2/AppData/Roaming/NA-MIC/Extensions-29276/NvidiaAIAssistedAnnotation/lib/Slicer-4.11/qt-scripted-modules/SegmentEditorNvidiaAIAALib/SegmentEditorEffect.py", line 999, in createSession
    response = aiaaClient.create_session(in_file)
  File "C:\Users\Expert Pro R2\AppData\Roaming\NA-MIC\Extensions-29276\NvidiaAIAssistedAnnotation\lib\Slicer-4.11\qt-scripted-modules\NvidiaAIAAClientAPI\client_api.py", line 107, in create_session
    raise AIAAException(AIAAError.SERVER_ERROR, 'Status: {}; Response: {}'.format(status, response))
NvidiaAIAAClientAPI.client_api.AIAAException: (3, "Status: 404; Response: 404 Not Found - The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.")
Ultrasound is very different from CT and MRI. There is code for training and deploying ultrasound segmentation in this repository: https://github.com/SlicerIGT/aigt
This repository is not very well organized, but if you tell me more about what you need to do, I can point you to more specific examples.