Creating Truth Mask for Segmentation Task

Hello everyone!

I’m a Computer Science student currently working on a research project focused on segmenting root canals using DICOM images. My approach involves using a U-Net combined with DenseNet-121. I have access to 81 scans, and I’m aiming to achieve something similar to this paper: Link to Paper.

At the moment, I’m extracting each slice from the DICOM files into PNG format and manually segmenting them with the assistance of a dentistry student. However, this process is extremely labor-intensive and seems less than ideal, especially since it results in a loss of the 3D context.
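In case it helps others with the same workflow: the per-slice export step can be sketched roughly like this. This is only a minimal sketch — the pydicom call mentioned in the docstring and the min-max windowing choice are assumptions about the pipeline, not something stated in the thread.

```python
import numpy as np

def normalize_slice(slice_2d: np.ndarray) -> np.ndarray:
    """Rescale a single CT slice to 0-255 uint8 for PNG export.

    In practice `slice_2d` would come from the DICOM file, e.g.
    `pydicom.dcmread(path).pixel_array` (an assumption about the
    loading step, which is not shown in the thread).
    """
    arr = slice_2d.astype(np.float32)
    # Min-max window the raw intensities into the 0-255 range;
    # guard against a constant slice to avoid division by zero.
    rng = max(float(arr.max() - arr.min()), 1e-6)
    arr = (arr - float(arr.min())) / rng * 255.0
    return arr.astype(np.uint8)
```

The resulting uint8 array can then be saved with any image library (e.g. `PIL.Image.fromarray(...).save(...)`). Note that naive per-slice min-max scaling changes the intensity mapping from slice to slice, which is one more reason a 3D approach may be preferable.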

I’m considering using Slicer to perform the segmentation directly on the DICOM images and then exporting the segmentation masks for use with the model. However, I’m not entirely sure how to approach this and would greatly appreciate any advice on exporting these masks in a format suitable for training the U-Net + DenseNet-121 model.

Any insights, tips, or resources would be incredibly helpful!

Thank you so much!

Segmenting 2D slices one by one sounds very inefficient and could also be quite inaccurate. Instead, you could start with a fully automatic 3D segmentation of all the teeth using an already trained model, such as DentalSegmentator. You could then find the root canal within each tooth with classic manual or semi-automatic 3D segmentation tools.


Thanks for the reply!

I realize I wasn’t entirely clear in my initial post. While I mentioned wanting to segment “root canals,” I’m actually focusing on a specific one, known as the “Mesio-Buccal 2” canal. I’ve highlighted it here:

The segmentation was exported as an NRRD file, which I believe contains the 3D information related to the segmentation. Is that correct?

This might be a bit more technical, but with this NRRD file, would it be possible to use it as the ground truth mask for the model I’ve developed?
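To make the question concrete, here is roughly what I imagine doing with the exported file — turning the NRRD labelmap into per-slice binary masks for the 2D model. This is just a sketch under assumptions: the pynrrd loading call in the docstring, the label value, and the slicing axis are all my guesses, not something confirmed anywhere in this thread.

```python
import numpy as np

def masks_from_labelmap(volume: np.ndarray, label: int = 1):
    """Yield (slice index, binary mask) for each axial slice of a
    3D labelmap that contains the given label value.

    `volume` would be the array loaded from the exported
    segmentation, e.g. `volume, header = nrrd.read("seg.nrrd")`
    using the pynrrd package (an assumption — any NRRD reader
    works). The axis order depends on the NRRD header, so check
    header["space directions"] before assuming axis 0 is axial.
    """
    for i in range(volume.shape[0]):
        # Binarize this slice: 1 where the chosen label is present.
        mask = (volume[i] == label).astype(np.uint8)
        if mask.any():
            yield i, mask
```

Skipping empty slices (the `mask.any()` check) matters for a small structure like the MB2 canal, since most slices would otherwise be all-background and dominate the training loss.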