Thanks again
NVidia recommends 16GB of GPU RAM. I have set up the Slicer server with an RTX 2080 (8GB RAM) and most models that were shipped with Clara 2 work, but the models that come with Clara 3 keep crashing. People who create models for Clara probably don't care too much about reducing memory needs and just assume 16GB is available.
I see, this is a discussion I would need to bring up with my advisor; unfortunately I have no control over it right now. It would be really helpful to use the server endpoint available at the PERK Lab for prototyping.
The NVidia AIAA client from February 2020 should be OK. You should not need to build anything; it is just a Python API.
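The client side can be as simple as this (a minimal sketch, assuming the py_client folder of the ai-assisted-annotation-client repository is on the Python path; the AIAAClient constructor and model_list signatures are from memory and may differ slightly between client versions, and the server URL is a placeholder):

```python
# Minimal connectivity check with the AIAA Python client.
# Assumes py_client/client_api.py from https://github.com/NVIDIA/ai-assisted-annotation-client
# is importable; exact signatures may differ between client versions.
from client_api import AIAAClient

client = AIAAClient(server_url="http://aiaa-server:5000")  # placeholder endpoint

# Ask the server which models it serves - a quick sanity check
# before wiring anything into a Slicer module.
models = client.model_list()
for model in models:
    print(model.get("name"), "-", model.get("type"))
```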
As I mentioned before, I was running into a lot of issues while trying to integrate the Python API, so I instead integrated the C++ API available from the C++ client. Since the task only requires DExtr3D, I provided the appropriate Markups widget and logic to get that working.
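For reference, the core of the DExtr3D flow on the client side is just collecting the user-placed extreme points and converting them into the voxel coordinates the server expects. The sketch below shows that step in Slicer Python purely for illustration (my actual widget and logic are C++); it assumes a single volume node and a single markups fiducial list in the scene:

```python
import vtk
import slicer

# Illustration only: gather extreme points placed as markups fiducials
# and convert them from RAS to IJK (voxel) coordinates for the DExtr3D request.
volumeNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLScalarVolumeNode")
markupsNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLMarkupsFiducialNode")

rasToIjk = vtk.vtkMatrix4x4()
volumeNode.GetRASToIJKMatrix(rasToIjk)

pointSet = []
for i in range(markupsNode.GetNumberOfControlPoints()):
    ras = [0.0, 0.0, 0.0]
    markupsNode.GetNthControlPointPosition(i, ras)
    ijk = rasToIjk.MultiplyPoint(ras + [1.0])
    pointSet.append([int(round(c)) for c in ijk[:3]])

print(pointSet)  # e.g. [[x1, y1, z1], ..., [x6, y6, z6]] to send with the DExtr3D call
```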
I’m still getting the same error, which points to a broken I/O pipe, with the Feb 2020 build. I even tried directly running the compiled NvidiaAIAASession binary to test whether a session is even created with a compressed NIfTI image. I’ll try using older commits, I guess.
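This is roughly the standalone check I was trying against the server, outside Slicer and outside my C++ code; the /session endpoint path, HTTP verb, and multipart field name are assumptions pieced together from the AIAA docs, and the server URL and file name are placeholders:

```python
# Rough standalone session-creation check with a compressed NIfTI image.
# Endpoint path, HTTP verb, and field name are assumptions - adjust to the server version.
import requests

server = "http://aiaa-server:5000"  # placeholder endpoint
with open("image.nii.gz", "rb") as f:
    resp = requests.put(
        f"{server}/session/?expiry=600",
        files={"image": ("image.nii.gz", f, "application/gzip")},
    )

print(resp.status_code)
print(resp.text)  # expect a JSON body with a session id if the upload worked
```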
Another option for me would be to not include AIAA segmentation in my extension and instead have the user navigate to the Segmentations module, segment the region there, and feed the segmentation node back into my module. While that is a possibility, I wasn’t even able to get AIAA working through the Extension Wizard against the same PERK Lab server endpoint with my developer build of Slicer. Since my extension is a loadable C++ module, I’m not sure whether I can build it against the binary release of Slicer to let the user use the NVidia segmentation tool from the Segment Editor.
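If I do end up going that fallback route, the handoff from the Segment Editor back into my module would look roughly like this (a Slicer Python sketch; the module name, logic accessor, and SetInputSegmentation method are placeholders for whatever my loadable module actually exposes):

```python
import slicer

# Fallback flow: the user segments the region with the stock Segment Editor
# (e.g. with the NVIDIA AIAA effect there), then my module picks up the result.
segmentationNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLSegmentationNode")
if segmentationNode is None:
    raise RuntimeError("No segmentation found - run the Segment Editor first")

inputVolumeNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLScalarVolumeNode")

# Hand the externally created segmentation to my module's logic.
# The accessor and method below are placeholders, not a real API.
logic = slicer.modules.mymodule.logic()
logic.SetInputSegmentation(inputVolumeNode, segmentationNode)
```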