I am trying to develop a deep learning based module in which a few clicks would segment a CT volume. I am using nnU-Net for my implementation, which is essentially a U-Net based framework that configures its own hyperparameters, and it has been quite successful in the Medical Segmentation Decathlon and other challenges. The code works for my purpose in plain Python outside Slicer. I have read about the different options in the Slicer forum, such as using PyTorch directly (presented at PW35) or going with the MONAI framework. But I am struggling a bit with the workflow, as some of the available options confuse me and I am unsure where each one comes into the picture.
From what I have understood during my reading, the workflow pipeline would be something like the following:
Convert the CT volume images into a format compatible with nnU-Net.
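For the conversion step, here is a minimal sketch of the raw-data layout nnU-Net (v1) expects, built with only the standard library. The task name `Task101_Liver`, the case identifiers, and the single `organ` label are placeholder assumptions; substitute your own:

```python
# Create the nnU-Net v1 folder skeleton and dataset.json.
# Naming convention: <case>_0000.nii.gz for the first (CT) channel in
# imagesTr, and a plain <case>.nii.gz label map in labelsTr.
import json
import os
import tempfile

def make_nnunet_task(base_dir, task_name, case_ids, label_names):
    """Create the folder layout and dataset.json that nnU-Net v1 expects."""
    task_dir = os.path.join(base_dir, "nnUNet_raw_data", task_name)
    for sub in ("imagesTr", "labelsTr", "imagesTs"):
        os.makedirs(os.path.join(task_dir, sub), exist_ok=True)

    training = [
        {"image": f"./imagesTr/{c}.nii.gz", "label": f"./labelsTr/{c}.nii.gz"}
        for c in case_ids
    ]
    dataset = {
        "name": task_name,
        "description": "CT segmentation task",
        "tensorImageSize": "3D",
        "modality": {"0": "CT"},
        "labels": {str(i): n for i, n in enumerate(label_names)},
        "numTraining": len(case_ids),
        "numTest": 0,
        "training": training,
        "test": [],
    }
    with open(os.path.join(task_dir, "dataset.json"), "w") as f:
        json.dump(dataset, f, indent=2)
    return task_dir

base = tempfile.mkdtemp()
task = make_nnunet_task(base, "Task101_Liver",
                        ["case_0000", "case_0001"],
                        ["background", "organ"])
print(task)
```

From Slicer, each volume/segmentation pair can be exported into this layout as `.nii.gz` (for example via `slicer.util.saveNode`).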
Manually segment some data into the required segments using Slicer. Divide these data into training, validation, and test sets.
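The split could be as simple as shuffling the case identifiers reproducibly. The proportions below are an assumption, and note that nnU-Net performs its own cross-validation on the training cases internally, so an explicit validation split is optional:

```python
# Reproducible train/validation/test split of case identifiers.
import random

def split_cases(case_ids, val_frac=0.1, test_frac=0.1, seed=42):
    ids = sorted(case_ids)
    random.Random(seed).shuffle(ids)
    n_test = int(len(ids) * test_frac)
    n_val = int(len(ids) * val_frac)
    test = ids[:n_test]
    val = ids[n_test:n_test + n_val]
    train = ids[n_test + n_val:]
    return train, val, test

cases = [f"case_{i:04d}" for i in range(20)]
train, val, test = split_cases(cases)
print(len(train), len(val), len(test))  # 16 2 2
```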
Now I can either train the network within Slicer itself, using the PyTorch extension presented at PW35 through the command line interface, or, more practically, export the dataset and do all the AI work outside Slicer.
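Training outside Slicer would go through nnU-Net v1's command-line entry points (`nnUNet_plan_and_preprocess` and `nnUNet_train`). A sketch that composes those commands and only runs them when nnU-Net is actually on the PATH; the task id `101` and fold `0` are placeholders:

```python
# Compose the nnU-Net v1 preprocessing and training command lines,
# running them only if the nnU-Net executables are installed.
import shutil
import subprocess

def nnunet_train_commands(task_id, fold=0, config="3d_fullres"):
    """Return the nnU-Net v1 preprocessing and training commands."""
    return [
        ["nnUNet_plan_and_preprocess", "-t", str(task_id),
         "--verify_dataset_integrity"],
        ["nnUNet_train", config, "nnUNetTrainerV2",
         str(task_id), str(fold)],
    ]

commands = nnunet_train_commands(101)
for cmd in commands:
    if shutil.which(cmd[0]):  # run only when nnU-Net is on the PATH
        subprocess.run(cmd, check=True)
    else:
        print("would run:", " ".join(cmd))
```

(nnU-Net also expects the `nnUNet_raw_data_base`, `nnUNet_preprocessed`, and `RESULTS_FOLDER` environment variables to be set before these commands are run.)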
After all the AI work is done, I would create an extension by following the tutorials for building a simple extension. This extension would present a few buttons that run model inference and segment new data, after the new data have been converted into the format the model was trained on.
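The inference step the extension would wrap might look like this hedged sketch: inside the module, the current volume is exported to an input folder (e.g. with `slicer.util.saveNode`), nnU-Net v1's `nnUNet_predict` is invoked, and the resulting label map is loaded back (e.g. with `slicer.util.loadSegmentation`). Paths and the task id here are placeholders:

```python
# Build and (conditionally) run the nnU-Net v1 inference command.
import shutil
import subprocess

def nnunet_predict_command(input_dir, output_dir, task_id,
                           config="3d_fullres"):
    """Build the nnU-Net v1 inference command line."""
    return ["nnUNet_predict",
            "-i", input_dir,
            "-o", output_dir,
            "-t", str(task_id),
            "-m", config]

cmd = nnunet_predict_command("/tmp/nnunet_in", "/tmp/nnunet_out", 101)
if shutil.which(cmd[0]):  # run only when nnU-Net is installed
    subprocess.run(cmd, check=True)
else:
    print("would run:", " ".join(cmd))
```

The button callback in the scripted module would simply chain export, this command, and re-import.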
Now my questions are:
- Is my workflow correct, or am I forgetting something?
- Can I work without MONAI? From what I understand, MONAI is a framework used in the medical imaging community to standardize workflows and provide pre-built components.
- Are there any open-source deep learning projects related to Slicer, or any documentation or example workflows for creating such extensions?