Including modules from an external extension in my own custom extension

I’m working on creating a Slicer extension that combines various modules available in Slicer and the NVIDIA segmentation tool, all in one extension. I’m not sure how I can include the widgets and logic created by NVIDIA AIAA in my own extension, as it is an external extension of its own. I’m currently considering using ExternalProject to pull the ai-assisted-annotation repository and link my logic to the segmentation tools in that extension, but I would rather just reuse the already-made Slicer extension widgets in my extension.

Module type - loadable module
Language - C++

You can use the Slicer custom application template. You just need to specify the extensions you want to include in your application in the top-level CMakeLists.txt file, and the build system takes care of everything (download, build, and packaging into your custom application).

Thank you for pointing me to the right location. Is the FetchContent macro also supposed to build the extension before continuing with the next steps? This is the modified code I’m trying out but I don’t see any generated libraries or build configurations.

include(FetchContent)
set(extension_name "SegmentEditorNvidiaAIAA")
set(${extension_name}_SOURCE_DIR "${CMAKE_BINARY_DIR}/${extension_name}/slicer-plugin")
FetchContent_Populate(${extension_name}
  SOURCE_DIR     ${${extension_name}_SOURCE_DIR}
  GIT_REPOSITORY ${EP_GIT_PROTOCOL}://github.com/NVIDIA/ai-assisted-annotation-client.git
  GIT_TAG        origin/master
  GIT_PROGRESS   1
  PREFIX         ${CMAKE_BINARY_DIR}/${extension_name}
  BINARY_DIR     ${CMAKE_BINARY_DIR}/${extension_name}-build
  INSTALL_DIR    ${CMAKE_BINARY_DIR}/${extension_name}
  BUILD_ALWAYS   True
  CMAKE_ARGS     -DNvidiaAIAssistedAnnotation_BUILD_SLICER_EXTENSION:BOOL=ON
  )
list(APPEND Slicer_EXTENSION_SOURCE_DIRS ${${extension_name}_SOURCE_DIR})

Also, do scripted modules generate include headers and linker targets for C++ code, or will I need to create a scripted module of my own to include the widgets and logic from a scripted module? I have created a C++ loadable module for my purpose.

Yes, the extension is downloaded and built automatically.

There is nothing to generate for scripted modules, they are just copied and packaged.

If you think that something is not right then you can print all CMake variables as shown here to see what exactly is going on.
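For reference, the variable dump mentioned above is a generic CMake idiom rather than anything Slicer-specific; a minimal sketch:

```cmake
# Print every defined CMake variable and its value.
# Drop this near the FetchContent call to inspect what is actually set.
get_cmake_property(_variableNames VARIABLES)
list(SORT _variableNames)
foreach(_variableName ${_variableNames})
  message(STATUS "${_variableName}=${${_variableName}}")
endforeach()
```

Placing the snippet at different points in the configure step shows how variable values change as subprojects are pulled in.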

Hi Andras, thanks for helping out. While FetchContent didn’t work with the NvidiaAIAA extension, I managed to get it working by adding it as a submodule of my own extension and building only the NvidiaAIAA module via an add_subdirectory command. This was primarily because my own extension’s CMake variables were overriding the NVIDIA ones (since CMake was populating my extension’s variables first).
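In outline, the submodule approach could look like the sketch below (the submodule path and build directory name are illustrative assumptions, not taken from the actual project):

```cmake
# The NVIDIA client repository is added as a git submodule, e.g.:
#   git submodule add https://github.com/NVIDIA/ai-assisted-annotation-client.git
# Building only the slicer-plugin subdirectory avoids the NVIDIA project's
# top-level configuration clashing with this extension's own CMake variables.
add_subdirectory(
  ${CMAKE_CURRENT_SOURCE_DIR}/ai-assisted-annotation-client/slicer-plugin  # hypothetical submodule path
  ${CMAKE_BINARY_DIR}/NvidiaAIAA-build                                     # separate binary dir
  )
```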

I do have a follow-up question now: I don’t understand how I can use the NvidiaAIAA module’s widget and logic in my C++ loadable extension. With any other module, I can simply include the module widget using its Qt Designer .ui file and connect it to the widget’s logic using the application logic. I then link my module to the respective logic class (vtkSlicer{ModuleName}Logic) associated with it. I don’t understand how I can do this with a scripted module. Is the process the same? Do I add the .ui resource and use SlicerApplicationLogic to find the NvidiaAIAA module logic?

You can either put a qMRMLSegmentEditorWidget in your module GUI (you can specify in your widget which effects you want to show) or call Python from C++ (search for “pythonmanager” in Slicer source code for examples).

Hi Andras,

Thanks for helping out before. I ended up using the low-level C++ NVIDIA API itself instead of the Python extension. I got the code building and I was able to create a client. However, when I try to create a session, I receive a BAD REQUEST error from the underlying curl command (curlutils.cpp). I even tried using the binary built by the NVIDIA AIAA tool itself to create a session (nvidiaAIAASession is the binary file) and that returned the same error.

The server I’m using is the perk lab one: http://skull.cs.queensu.ca:8123

With it, I’m using a generic image saved as a compressed NIfTI image (.nii.gz). The image is valid and loads/saves in Slicer.

23:50:38 [ERROR] [curlutils.cpp:145 - doMethod()] I/O error: Broken pipe
nvidia::aiaa::exception => nvidia.aiaa.error.101; description: Failed to communicate to AIAA Server

In the recent NVIDIA API, compatibility with v2 servers was accidentally broken (see NVIDIA/ai-assisted-annotation-client issue #62 on GitHub, “Slicer default server needs an upgrade to latest version of AIAA/Clara”). You either need to downgrade to an earlier version or set up your own Clara v3 server.

Thank you! This definitely helps. I would be alright with setting up our own server; however, the requirements for setting one up seem daunting. I was wondering whether the inference for the server can be performed on an NVIDIA Jetson, or is it necessary for me to set up a workstation with a dGPU with 8 GB of RAM just for inference?

I also didn’t fully understand what you meant by downgrading to an earlier version. Should I use an earlier version of the ai-assisted-annotation-client itself? I tried building against multiple previous commits, dating all the way back to February 27th, 2020 (f0cfc5e). Should I revert to an even older commit?

Setting up the server is probably trivial for anyone who is familiar with Linux and Docker, but for me it took about a day. I don’t know if it really has to be this complicated, but it is.

NVIDIA recommends 16 GB of GPU RAM. I have set up the Slicer server with an RTX 2080 (8 GB RAM) and most models that were shipped with Clara 2 work, but the models that come with Clara 3 keep crashing. People who create models for Clara probably don’t care much about reducing memory needs and assume 16 GB is available.

The NVIDIA AIAA client of February 2020 should be OK. You should not need to build anything; it is just a Python API.

Thanks again

NVIDIA recommends 16 GB of GPU RAM. I have set up the Slicer server with an RTX 2080 (8 GB RAM) and most models that were shipped with Clara 2 work, but the models that come with Clara 3 keep crashing. People who create models for Clara probably don’t care much about reducing memory needs and assume 16 GB is available.

I see; this is a discussion I would need to bring up with my advisor, and I have no control over it right now, unfortunately. It would be really helpful to use the server endpoint available at the PERK Lab for prototyping.

The NVIDIA AIAA client of February 2020 should be OK. You should not need to build anything; it is just a Python API.

As I mentioned before, I was coming across a lot of issues while trying to integrate the Python API, so I instead went ahead with integrating the C++ API itself, which was available from the C++ client. Since the task only requires me to use DExtr3D, I went ahead with providing the appropriate Markups widget and logic to get that working.

I’m still getting the same error, which points to a broken I/O pipe, with the February 2020 build. I even tried directly using the compiled nvidiaAIAASession binary to test whether a session is even created with a compressed NIfTI image. I’ll try using older commits, I guess.

Another option for me would be to not include AIAA segmentation in my extension and instead have the user navigate to the Segmentations module, segment the region there, and feed the segmentation node back into my module. While that is a possibility, I wasn’t even able to get AIAA working through the extension wizard with the same PERK Lab server endpoint on a developer build of Slicer. Since my module and extension are a loadable C++ module, I’m not sure whether I can build my extension against the binary release of Slicer to let the user use the NVIDIA segmentation tool from the Segment Editor.

Hi Andras,
Is there a way to get the widgets and logic of already available Python scripted modules into a new extension? I want to combine all my scripted modules under one extension, on one page with different tabs like below, each tab holding a different Python scripted module already in 3D Slicer.

[image: mockup of the tabbed module layout]