Different segmentation export functions produce different results

Hi,

I have a segmentation node that I want to save to disk. To save it with saveNode, I need to convert it to a labelmap, but different conversion methods produce different results:

Here is what the segmentation looks like:

Here are two different ways I convert the segmentation to labelmap and result of each:

  • Using “ExportAllSegmentsToLabelmapNode”
    This way the labelmap completely overlaps the segmentation, which is what I want.

I run this in the Python Interactor:

slicer.modules.segmentations.logic().ExportAllSegmentsToLabelmapNode(seg, lblmap, slicer.vtkSegmentation.EXTENT_REFERENCE_GEOMETRY)

Here is how it looks when overlaid on the segmentation shown above:

And the other way is:

  • Using “ExportVisibleSegmentsToLabelmapNode”
    This way the labelmap does not fully overlap the segmentation, which I don’t want, simply because it is not a true segmentation mask of all voxels.

Here is the Interactor code to reproduce it (here ref is the reference image node):

slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(seg, lblmap, ref)

And here is how it looks compared to the original segmentation:

It’s not clear to me why converting the segmentation to a labelmap should produce different results, especially since the result is inaccurate when I provide the reference image (I expected the opposite).

Still, that would be fine if I could use it in my code, but when I save the generated (visually fine) labelmap and then load it back into Slicer, it seems to have an incorrect position: I see nothing overlaying the image. How should I handle this?

(Slicer version: 5.0.2 r30822 / a4420c3)

Hi, does anyone have ideas on this? Any thoughts appreciated.

A segmentation object can store the segmentation in different representations and resolutions. Usually the same geometry is used in all segments, and it is the same as the segmentation’s reference geometry, but if you switched source volumes during segmentation then they may be different. It is up to you to choose which representation and resolution you would like to export.
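For example, you can check (and update) the reference geometry that will be used for binary labelmap export with something like this (the node names are just placeholders for your own nodes):

segNode = slicer.util.getNode("Segmentation")  # placeholder node name
refVolume = slicer.util.getNode("ReferenceVolume")  # placeholder node name

# Print the current reference image geometry string (origin, spacing, axis directions, extent)
geometryParameterName = slicer.vtkSegmentationConverter.GetReferenceImageGeometryParameterName()
print(segNode.GetSegmentation().GetConversionParameter(geometryParameterName))

# Use the chosen volume's geometry as the segmentation's reference geometry
segNode.SetReferenceImageGeometryParameterFromVolumeNode(refVolume)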

If you export a labelmap representation, then I would recommend using any of the export methods that take a referenceVolumeNode as input and setting the reference volume to the geometry you need.

In your examples, you got different results because of the reference geometry specified in the ref node (you probably just left some default geometry in it, with 1.0 mm uniform spacing).
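For example, a rough sketch (with placeholder node names and output path): export with the reference volume’s geometry and then save; the saved file should then load back into the correct position.

seg = slicer.util.getNode("Segmentation")  # placeholder segmentation node
ref = slicer.util.getNode("ReferenceVolume")  # placeholder reference image node

lblmap = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(seg, lblmap, ref)

# saveNode writes the labelmap with its full geometry (origin, spacing, axis directions),
# so it should overlay the image correctly when loaded again
slicer.util.saveNode(lblmap, "/path/to/labelmap.nrrd")  # example output path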

Dear @lassoan,

Thank you for your informative reply.
I would like to elaborate a bit more on the context of the issue I’m facing; I hope this helps me get more specific information.

I had an image (img_original) that I previously transformed with a rigid transformation to create img_transformed. I then segmented a pathology area on this transformed image (pathology).

Now I want to inverse-transform the pathology to overlay it on the original image and save it that way, so I do it like this:

# Invert the rigid transform and apply it to the segmentation node
transformation.Inverse()
pathology.SetAndObserveTransformNodeID(transformation.GetID())
# Export the visible (transformed) segments into a new labelmap,
# using the original image as the reference geometry
lblmap = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(pathology, lblmap, img_original)

Doing so, the segmentation (red in the image below), which looked acceptable after applying the inverse transformation, produced a labelmap (green) that is not a precise representation of the segmentation:

As I am creating this to build my training set for the next steps, I am concerned about the discrepancy between the two. For instance, in the image above, the labelmap extends slightly into the tendon, which by definition cannot be the pathology, so I believe this could feed wrong information to the neural network during training.

I appreciate any thoughts on the steps I tried to describe here.
Thanks in advance.

If you harden a warping (not just rigid translation+rotation) transform on an image or segmentation, then there is some loss of detail due to resampling. To compensate for this information loss, you can resample the original image to have 2x-3x smaller spacing and perform all operations (transformation, segmentation, etc.) on this higher-resolution image.

To keep the data size under control, it is also recommended to crop the image to the minimum necessary extent and use isotropic spacing. All these features - reducing spacing, resampling to isotropic spacing, cropping - are available in one place, in the Crop volume module.
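For example, a rough sketch of driving Crop volume from Python (the node names, ROI setup, and scaling factor are just placeholders to adapt to your data):

inputVolume = slicer.util.getNode("img_original")  # placeholder node name

# ROI that defines the region to crop
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode")

cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(inputVolume.GetID())
cropParams.SetROINodeID(roiNode.GetID())
cropParams.SetIsotropicResampling(True)
cropParams.SetSpacingScalingConst(0.5)  # 0.5 => 2x finer spacing

cropLogic = slicer.modules.cropvolume.logic()
cropLogic.FitROIToInputVolume(cropParams)  # start from the full extent, then shrink the ROI as needed
cropLogic.Apply(cropParams)

resampledVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())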