PW35 Projects List

Hi all

This will be our master topic for PW35 projects. Please post a reply to this topic with the project(s) you are considering. Or, feel free to create a standalone topic for your project in the ProjectWeek section, and link to it here.




I am planning to attend PW35 and will work on a project entitled “Registration of Time Series Data for Deep Learning”. I have a longitudinal dataset of patients treated for lung cancer who returned periodically for follow-up scans to check whether the cancer has recurred. I propose to (1) use Plastimatch or other techniques to register the time sequence of each patient’s scans and measure changes, and then (2) prepare the resulting 4D (3D + time) dataset for annotation and deep learning.
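In case it is useful to others, for step (1) here is a minimal sketch of a Plastimatch registration command file; the file names and parameter values are illustrative placeholders, not from this project:

```ini
# parms.txt -- run with: plastimatch register parms.txt
[GLOBAL]
fixed=scan_baseline.nii.gz
moving=scan_followup.nii.gz
img_out=scan_followup_registered.nii.gz
xform_out=deform.nii.gz

# A single deformable stage; real pipelines usually chain a rigid
# stage before the B-spline stage.
[STAGE]
xform=bspline
impl=plastimatch
max_its=100
grid_spac=30 30 30
```

The output transform (`xform_out`) could then be reused to propagate annotations across time points.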


Hi all,

Apart from supporting the MONAILabel team, I would like to work on more generic and lower-level compatibility issues between PyTorch and Slicer.

Basically, I imagine the following scenario: a researcher has trained a deep learning segmentation model using PyTorch (and possibly TorchIO, MONAI, or both). They want end users (e.g., clinicians) to be able to run the model on their own data without writing any code. The best solution is probably to contribute a Slicer extension. (I am in this situation myself, with resseg and its corresponding extension, SlicerEPISURG.)

Three issues I would like to address:

  1. How to install PyTorch inside Slicer. The main question is whether to install a version with GPU support and, if so, which version of the CUDA toolkit to target. I did some work on this during the development of the SlicerTorchIO extension.
  2. How to handle the necessary conversion between Slicer nodes (e.g., vtkMRMLScalarVolumeNode) and PyTorch objects (e.g., torch.Tensor). A few additions to slicer.util might help here.
  3. Possibly, contributing a full tutorial with a toy example using a publicly available dataset such as TorchIO’s IXITiny or a dataset from the Medical Segmentation Decathlon.*
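For issue 2, a minimal sketch of the conversion. Inside Slicer, `slicer.util.arrayFromVolume(volumeNode)` returns the voxels as a NumPy array; here a random array stands in for it so the snippet runs outside Slicer:

```python
import numpy as np
import torch  # assumed to be installable in Slicer's Python environment (issue 1)

# Stand-in for slicer.util.arrayFromVolume(volumeNode), which returns the
# voxels as a NumPy array in (slice, row, column) order.
voxels = np.random.rand(16, 32, 32).astype(np.float32)

# Most segmentation models expect a (batch, channel, D, H, W) tensor.
tensor = torch.as_tensor(voxels)[None, None]
print(tuple(tensor.shape))  # (1, 1, 16, 32, 32)
```

Note that `torch.as_tensor` shares memory with the NumPy array when possible, so a hypothetical `slicer.util.tensorFromVolume` helper could avoid copies.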

If someone is interested in this stuff, please let me know and let’s work together!

Some related projects that are probably worth looking at are DeepInfer and TOMAAT.

*Many images from the Medical Segmentation Decathlon cannot be easily read by Slicer due to their 4D shape. This could perhaps be addressed within the MONAILabel projects – @diazandr3s, @SachidanandAlle
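A simple workaround sketch, assuming the extra axis of such a 4D file is the modality/channel axis: split it into per-modality 3D volumes that Slicer can load individually.

```python
import numpy as np

def split_modalities(img4d):
    """Split a (modality, depth, height, width) array into 3D volumes."""
    return [np.ascontiguousarray(img4d[m]) for m in range(img4d.shape[0])]

# Toy 2-modality volume standing in for a Decathlon image.
volumes = split_modalities(np.zeros((2, 16, 32, 32), dtype=np.float32))
print(len(volumes), volumes[0].shape)  # 2 (16, 32, 32)
```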


These are great topics, @Fernando. We can use MONAILabel to train the deep learning segmentation models you mentioned. Happy to help with this.
I’d also like to echo the issue we had when loading multimodality images in Slicer (i.e., 4D volumes shaped Modality × Height × Width × Depth). MONAILabel would benefit from a fix, as it would allow the development of Apps that manage multimodality images; so far we have Apps that work on single-modality images only.
Another project I’m really interested in is the development of the OHIF plugin for MONAILabel. If anyone would like to contribute to any of these, please don’t hesitate to reach out :)


I will be interested in attending a breakout session or other project meetings to discuss integrating Slicer with deep learning systems. I am personally leaning toward running the deep learning server outside of Slicer, rather than performing training/prediction inside Slicer using Slicer’s Python interpreter. I believe others in the community have some examples we can learn from. From my first impressions, the MONAILabel architecture looks promising for hosting deep learning models.
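To make the client/server split concrete, here is a hedged sketch of the client side; the endpoint URL and wire format are assumptions for illustration only (MONAILabel defines its own REST API):

```python
import io
import json
import numpy as np
# import requests  # an HTTP client would perform the actual POST

def pack_volume(array, spacing_mm):
    """Serialize a volume plus minimal metadata for an HTTP inference request."""
    buf = io.BytesIO()
    np.save(buf, array)  # portable binary payload
    meta = json.dumps({"spacing": list(spacing_mm), "dtype": str(array.dtype)})
    return buf.getvalue(), meta

payload, meta = pack_volume(np.zeros((8, 8, 8), dtype=np.float32), (1.0, 1.0, 1.0))
# requests.post("http://localhost:8000/infer", data=payload, headers={"X-Meta": meta})

# Round-trip check: the server can recover the exact array.
restored = np.load(io.BytesIO(payload))
print(restored.shape)  # (8, 8, 8)
```

The point of the split is that Slicer only needs a lightweight HTTP client, while the heavy GPU environment lives on the server.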


I would like to propose a project (or at least its initial steps) entitled “Development of a Deep Learning Segmentation Approach for Spines with Metastatic Disease”, as part of an NIH grant project on predicting fracture risk in cancer patients.
However, I am not a programmer! I have 50 labeled CT datasets covering lumbar and/or thoracic segments as well as full spine columns for patients at baseline. For a good number of patients we also have 3- and 6-month follow-up CTs, which are not yet labeled. I can try to label these datasets if that would help model development. It would also be great to get some help and advice on how to speed up the segmentation for labeling, and on extracting volume information from the masks. The segmented volumes are needed for the analytical and computational modeling pipeline, part of a collaboration with MIT.
Ultimately the study cohort will contain 450 patients for whom CT imaging is tightly standardized (as much as we can manage in a clinical environment across departments), with baseline and longitudinal follow-up at 3, 6, 9, and 12 months.
I am looking forward to seeing what can be done, as I have several additional novel imaging projects (CT, MRI) for this cohort that would greatly benefit from a deep learning approach. I am happy to be fully committed in whatever capacity is useful.
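For the volume-extraction step, once a labelmap exists the computation itself is simple; a minimal NumPy sketch (the spacing values and toy mask are illustrative):

```python
import numpy as np

# Voxel size in millimetres (x, y, z); in practice this comes from the CT header.
spacing_mm = (0.8, 0.8, 1.5)

# Toy binary labelmap standing in for a vertebra segmentation: 4x4x4 = 64 voxels.
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1

voxel_volume_mm3 = float(np.prod(spacing_mm))
volume_ml = mask.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3
print(round(volume_ml, 5))  # 0.06144
```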


I’m interested in following up on work from the last project week and the one before that to launch Slicer instances on demand in cloud environments (optionally GPU-accelerated for ML). The idea is that one could browse studies and view images in OHIF, then launch Slicer on the same dataset to access any of the tools and extensions it provides, ultimately storing segmentations or other results back to the original or another server.


I’m interested in developing a module that combines different resources (imaging, electrophysiology, atlases) to provide live feedback during Deep Brain Stimulation surgery. The idea is to communicate with the devices’ SDKs to get the current location of the microelectrodes and their recordings. From there, different visualisations can be implemented in Slicer.

I’m also interested in image registration: I’m currently working on adding ANTs registration to Slicer, plus a module for manually fixing small misalignments in the non-linear registration warp.


My plan is to move forward the VTK9 compatibility of the SlicerVirtualReality extension and, if that is achieved, continue by integrating in-VR UI widgets into the extension. For reference, see the project page from last project week.

Update: the project page is ready.


I’m planning to continue our liver surgery planning platform from last project week. I’ll be joined by @dalbenzioG (OUS), Ole V. Solberg (SINTEF), and Geir A. Tangen (SINTEF).


Hi @RonSpine, let’s create a MONAILabel App for this use case. Happy to help! Are you registered for the Slicer Week Workshop?

Hi Curtis,

I have worked on image registration of EM images using deep learning models. I am interested in your project and would like to work with you.

Dear Andres and Curtis,

I am very grateful that you are willing to help me with this project. Andres, Curtis, and I have been talking about the project over the last two weeks, and Curtis has been trying to help me install Linux Mint on a rather recalcitrant Lenovo P620. We hope to have it ready for the project week. As for patient datasets, I am hoping to upload at least 4. I am having issues with the BI regarding putting more of the data online.

As I have stated, I am not a programmer or an imaging expert, so I cannot offer any help in that regard. However, I am more than happy to collaborate. I have attached a reference that relates directly to the spine. I have a more comprehensive PDF from that workshop, but it’s 7 MB and I got an error message from Discourse. I am happy to have a meeting (Zoom?) with all of us to plan the week. For my part, I aim to become a better Slicer user: currently, segmenting a spine takes me two days (partly because I am still having problems encompassing the full volume!).

As to registration for the workshop, I think I am registered; I am getting emails about the week. How can I confirm?

I have never worked with GitHub. Curtis kindly offered to put the project description on the site.

I very much look forward to working with you both. R

(Attachment Coarse to Fine Vertebrae Localization and Segmentation with Spatial Configuration-Net and U-Net.pdf is missing)



Dear Neha, thank you for your interest in this registration project. I will ask my collaborators about sharing some sample data with you, and I look forward to discussing this problem further. I have access to deep learning hardware, so I can run tests during the project week. Thanks again for your interest.



Hello Curtis,

Thank you. Please let me know when you and the team are available to discuss more.
Also, could you share the link for the weekly Tuesday project meeting with me?


I’m interested in helping new developers add support for more kinds of planar osteotomy surgeries for virtual surgical planning and patient-specific guides. I’m also interested in improving the current mandibular reconstruction module to support dental implant planning and, if there is time, finishing the long-bone deformity correction module.
More info here.

I am a clinician trying to bring a miniaturized version of a simple image guidance system for cranial procedures to the bedside, using facial features for registration for simplicity.

I am currently trying to use a Microsoft Azure camera linked to a tablet or laptop. The challenge has been obtaining registration using the camera.

Once this step is working, we would like to try a different fiducial (perhaps a QR code) to track the instrument that needs to be navigated.

There are several groups at the project week who have been working on this topic. We have been using ArUco markers, flexible marker patterns, surface meshes acquired by an Intel RealSense camera, etc.

We have a very closely related project during this project week. It would be great if you could join.

Hi @Fernando, I have some similar experience working on this problem, except using TensorFlow/Keras instead of PyTorch. I’m happy to discuss the approach we’ve taken and see if we can make these projects work together seamlessly! If you want to take a look at what I’ve done so far, it can be found here: aigt/DeepLearnLive at master · SlicerIGT/aigt · GitHub