PW35 Projects List

Hi all

This will be our master topic for PW35 projects. Please post a reply to this topic with the project(s) you are considering. Or, feel free to create a standalone topic for your project in the ProjectWeek section, and link to it here.




I am planning to attend PW35 and will be working on a project entitled “Registration of Time Series Data for Deep Learning”. I have a longitudinal dataset of patients who have been treated for lung cancer. They returned periodically for several follow-up scans to check whether any cancer has returned. I propose to (1) use Plastimatch or other techniques to register and measure the time sequence of each patient’s scans, and then (2) prepare the 4D (3D + time) dataset for annotation and deep learning.

Hi all,

Apart from supporting the MONAILabel team, I would like to work on more generic and lower-level compatibility issues between PyTorch and Slicer.

Basically, I imagine the following scenario: a user has trained a deep learning segmentation model using PyTorch (and possibly TorchIO, MONAI, or both). They want end users (e.g., clinicians) to be able to run the model on their own data without needing to code. The best solution is probably to contribute an extension. (I am in this situation myself, with resseg and its corresponding extension SlicerEPISURG.)

Three issues I would like to address:

  1. How to install PyTorch inside Slicer. The main question is whether to install a version with GPU support and, if so, which version of the CUDA toolkit to install. I did a bit of work on this during the development of the SlicerTorchIO extension.
  2. How to handle the necessary conversion of Slicer nodes (e.g. vtkMRMLScalarVolumeNode) to PyTorch objects (e.g. torch.Tensor). A few additions to slicer.util might help here.
  3. Possibly, contributing a full tutorial with a toy example using a publicly available dataset such as TorchIO’s IXITiny or a dataset from the Medical Segmentation Decathlon.*
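The conversion in point 2 can be sketched without Slicer at hand: `slicer.util.arrayFromVolume` returns a NumPy array in (slice, row, column) order, and `torch.from_numpy` would finish the bridge. A hypothetical helper for the preparation step in between (shown NumPy-only so it runs anywhere) might look like:

```python
import numpy as np

def prepare_volume_for_torch(array):
    """Reshape a (K, J, I) volume array, as returned by
    slicer.util.arrayFromVolume, into the (N, C, D, H, W) float32
    layout most PyTorch segmentation models expect.
    Inside Slicer, the last step would be torch.from_numpy(prepared)."""
    prepared = array.astype(np.float32)
    # Add batch and channel axes: (K, J, I) -> (1, 1, K, J, I)
    return prepared[np.newaxis, np.newaxis, ...]

# Simulated volume array (slices, rows, columns)
volume = np.zeros((16, 64, 64))
batch = prepare_volume_for_torch(volume)
print(batch.shape)  # (1, 1, 16, 64, 64)
print(batch.dtype)  # float32
```

The helper name is illustrative; something like it could become one of the `slicer.util` additions mentioned above.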

If someone is interested in this stuff, please let me know and let’s work together!

Some related projects that are probably worth looking at are DeepInfer and TOMAAT.

*Many images from the Medical Segmentation Decathlon cannot be easily read by Slicer because of their 4D shape. This could perhaps be addressed within the MONAILabel projects – @diazandr3s, @SachidanandAlle


These are great topics @Fernando. We can use MONAILabel to train those deep learning segmentation models you mentioned. Happy to help with this.
I’d also like to echo the issue we had when loading multimodality images in Slicer (Modality, Height, Width, Depth). MONAILabel will benefit from this as it allows the development of Apps that manage multimodality images. So far we have Apps that work on single modality images only.
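As a sketch of that multimodality issue, assuming the channel-first (Modality, Height, Width, Depth) layout described above, a hypothetical helper could split the 4D array into per-modality 3D volumes that Slicer can handle individually (e.g. each pushed into its own node with `slicer.util.updateVolumeFromArray`):

```python
import numpy as np

def split_modalities(multimodal):
    """Split a (Modality, Height, Width, Depth) array into a list of
    3D volumes, one per modality. In Slicer, each could then be loaded
    into its own volume node."""
    return [multimodal[m] for m in range(multimodal.shape[0])]

# Simulated 4-modality image (e.g. a BraTS-style multimodal MRI)
image = np.random.rand(4, 240, 240, 155)
volumes = split_modalities(image)
print(len(volumes), volumes[0].shape)  # 4 (240, 240, 155)
```

The function name and shapes are assumptions for illustration; the actual axis order depends on how the dataset was written.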
Another project I’m really interested in is the development of the OHIF plugin for MONAILabel. If anyone would like to contribute to any of these, please don’t hesitate to reach out 🙂


I will be interested in attending a breakout session or other project meetings to discuss integrating Slicer with deep learning systems. I am personally leaning toward having the deep learning server outside of Slicer instead of performing the training/prediction inside Slicer using Slicer’s Python interpreter. I believe others in the community have some examples we can learn from. From my first impressions, the MONAILabel architecture looks promising for hosting deep learning models.


I would like to propose a project (or at least its initial steps) entitled “Development of a deep learning segmentation approach for spines with metastatic disease”, as part of an NIH grant project on predicting fracture risk in cancer patients.
However, I am not a programmer! I have 50 labeled CT data sets of lumbar and/or thoracic as well as full spine columns for patients at baseline. For a good number of patients, we have 3- and 6-month follow-up CTs. These are not yet labeled. I can try to label these data sets if that would help model development. It would also be great to get some help/advice on how to speed up the segmentation for labeling and on extracting volume information from the masks. The segmented volumes are needed for the analytical and computational modeling pipeline as part of a collaboration with MIT.
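On extracting volume information from the masks: Slicer’s Segment Statistics module reports this directly, but the underlying computation is simply the voxel count times the volume of one voxel. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def segment_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres:
    number of labeled voxels times the volume of one voxel.
    spacing_mm is the per-axis voxel spacing in millimetres."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return np.count_nonzero(mask) * voxel_volume_mm3 / 1000.0  # mm^3 -> ml

# Toy example: a 10x10x10 block of labeled voxels at 1 mm isotropic spacing
mask = np.zeros((50, 50, 50), dtype=np.uint8)
mask[:10, :10, :10] = 1
print(segment_volume_ml(mask, (1.0, 1.0, 1.0)))  # 1.0 (ml)
```

For real data, the spacing would come from the image header rather than being hard-coded.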
Ultimately the study cohort will contain 450 patients for whom CT imaging is tightly standardized, as much as we can in a clinical environment across departments, with baseline and longitudinal follow-up at 3, 6, 9, and 12 months.
I am looking forward to seeing what can be done, as I have several additional novel projects involving imaging (CT, MRI) of this cohort that would greatly benefit from a deep learning approach. I am happy to be fully committed in whatever capacity is useful to the project.


I’m interested in following up on work from the last two project weeks to launch Slicer instances on demand in cloud environments (optionally GPU-accelerated for ML). The idea is that one could browse studies and view images in OHIF, then launch Slicer on the same dataset to access any of the tools and extensions it provides, ultimately storing segmentations or other results back to the original or another server.


I’m interested in developing a module that combines different resources (imaging, electrophysiology, atlases) to provide live feedback during Deep Brain Stimulation surgery. The idea is to communicate with device SDKs to get the current location of microelectrodes and their recordings. From there, different visualisations can be implemented in Slicer.

I’m also interested in image registration: I’m currently working on adding ANTs registration to Slicer and on a module for manually fixing small misalignments in the nonlinear registration warp.


My plan is to move forward with VTK9 compatibility for the SlicerVirtualReality extension and, if that is achieved, continue by integrating in-VR UI widgets into the extension. For reference, see the project page from last project week.

Update: the project page is ready.


I’m planning to continue our liver surgery planning platform from last project week. I’ll be joined by @dalbenzioG (OUS), Ole V. Solberg (SINTEF), and Geir A. Tangen (SINTEF).