Multi-modality/label Brain Tumor segmentation

I’m trying to manually re-segment tumors from the BRATS dataset and attempting to follow the tutorial from here: https://www2.imm.dtu.dk/projects/BRATS2012/Jakab_TumorSegmentation_Manual.pdf

The goal of the tutorial is to create a labelmap derived from two different sequences. It is outdated, and I’m currently trying to adapt it for Slicer 4.x. The main idea is as follows.

Use one sequence (FLAIR/T2) to segment the whole tumor region. Then use another sequence (T1gd) to segment a structure within this area, and finally subtract the T1gd segment from the T2 (whole area) segment to create a third label. I’m one week in, and I’m at my wits’ end.
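The subtraction step can be sketched outside Slicer with plain NumPy boolean arrays (the array names and toy values here are made up for illustration; inside Slicer you would do this with the Segment Editor's logical operations rather than by hand):

```python
import numpy as np

# Toy "volumes": True = inside segment, False = outside.
# whole_tumor would come from FLAIR/T2, enhancing_core from T1gd.
whole_tumor = np.array([[0, 1, 1, 1, 0],
                        [0, 1, 1, 1, 0]], dtype=bool)
enhancing_core = np.array([[0, 0, 1, 0, 0],
                           [0, 0, 1, 0, 0]], dtype=bool)

# Third label = whole tumor minus enhancing core.
third_label = whole_tumor & ~enhancing_core

print(third_label.astype(int))
```

The key property is that the third label never overlaps the T1gd core, and together the core and the third label exactly tile the whole-tumor region.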

The tutorial recommends paint + slice interpolation, but region growing makes more intuitive sense to me. Any idea how I can adapt that for this tutorial? Or if there are better tutorials available?

Thank you!

The tutorial is way outdated. We have incomparably better tools in Slicer now.

Use the Segment Editor module for segmentation. You can switch the master volume (the volume that is used by effects that need intensity values as input) at any time. You can use the masking section (near the bottom of the Segment Editor module panel) to restrict editing to a certain region (an intensity range, a segment, a combination of segments, etc.).

If there is a good contrast difference between the regions you need to segment, then the “Grow from seeds” effect will work better (less manual work needed) than “Fill between slices”.

You may find the neurosurgical planning tutorial (and other segmentation tutorials on that page) useful.

It would be great if you could create an updated version of Andras Jakab’s tutorial and share it. We could help you with tips and fine-tuning the workflow.

Sure, that sounds good. I’ve got it working as well.

I’ll send you an email about it!

It’ll be great to see this updated - thanks for working on it!

I also suggest you consider oversampling the segmentation compared to the source labelmap. Those BRATS segmentations have been used a lot to train deep learning models, and it would help to generate the most faithful anatomical models possible (oversample the MRs too if you need the labels to match the background image grid).
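As a rough illustration of what oversampling a labelmap means, here is a NumPy sketch using nearest-neighbor replication on a toy 2D labelmap (in Slicer you would instead set the segmentation geometry's oversampling factor; the factor of 2 here is just an example):

```python
import numpy as np

# Toy 2D labelmap (0 = background, 1 = tumor label).
labels = np.array([[0, 1],
                   [1, 0]], dtype=np.uint8)

# Oversample by a factor of 2 per axis with nearest-neighbor replication,
# so segment boundaries can later be refined at the finer resolution.
factor = 2
oversampled = labels.repeat(factor, axis=0).repeat(factor, axis=1)

print(oversampled.shape)  # (4, 4)
```

Each original voxel becomes a 2x2 block, so the label volume fraction is preserved while the grid becomes fine enough for smoother surface models.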