Extract labelmap contour

Hi.
I would like to extract the contour of a labelmap saved in an NRRD file using ITK filters. It seems that the toggle button for the label map in the reslice viewers has this functionality. Is there sample code for this functionality that I could look at? I’ve been trying to use itk::BinaryContourImageFilter, but cannot get it to work.

Thanks.

Would you like an image, a closed surface, or a set of planar contours? In what file format?

I would like to extract the point set (cloud) of the label contours. Here is my current code.
Thanks!

// Required includes
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegionConstIterator.h"
#include "itkPointSet.h"
#include &lt;iostream&gt;

// Declare the image class
typedef itk::Image<float, 3>                      OriginalImageType;

// Read in the label map
typedef itk::ImageFileReader<OriginalImageType> itkReaderType;
itkReaderType::Pointer itkreader = itkReaderType::New();
itkreader->SetFileName("label.nrrd");
itkreader->Update();

OriginalImageType::Pointer outImage = itkreader->GetOutput();

// Extract point data from the label map
typedef itk::ImageRegionConstIterator< OriginalImageType > IteratorType;
typedef itk::PointSet< float, 3 > KdPointSetType;
typedef KdPointSetType::PointType     KdPointType;

IteratorType it(outImage, outImage->GetBufferedRegion());
it.GoToBegin();

KdPointSetType::Pointer  PointSet = KdPointSetType::New();
KdPointType point;

unsigned long pointId = 0;

while (!it.IsAtEnd())
{
	// Keep only labeled (non-background) voxels; without this check,
	// every voxel of the volume ends up in the point set
	if (it.Get() != 0)
	{
		// Convert the voxel index into a physical-space point
		outImage->TransformIndexToPhysicalPoint(it.GetIndex(), point);
		PointSet->SetPoint(pointId, point);
		// Store the label value with the point
		PointSet->SetPointData(pointId, it.Get());
		++pointId;
	}
	++it;
}
std::cout << "Number Of Points = " << PointSet->GetNumberOfPoints() << std::endl;

If you only have a point cloud then you cannot do much with your data (because you don’t know what is inside/outside, you don’t have surface normals, and you don’t have surfaces that you can render) until you reconstruct a surface.

That’s why you typically run a marching cubes filter to create a surface mesh from the image.

I am trying to write a deformable mesh surface filter that would deform against the point cloud of the surface. The slices are so far apart that marching cubes won’t give a satisfying result. Thanks for the help!

Reconstruction from a point cloud is a much more difficult problem than smooth interpolation between slices, so that’s not a good direction to explore.

Instead, create a high-resolution labelmap with isotropic spacing using the Crop volume module (preferably create a high-res labelmap in the first place by segmenting an isotropic, supersampled input volume). For deformation, use the Transforms module directly on the labelmap; or import the labelmap into a Segmentation node, create a closed surface representation (a point cloud with connectivity; you can apply smoothing on the labelmap in Segment Editor, or during conversion to closed surface), make it the master representation, and apply the transform to that.

No need for any custom module development; these are all very common tasks.

VTK has recently added a number of point cloud filters. See
https://lorensen.github.io/VTKExamples/site/Cxx/#point-cloud-operations

In particular there is a new surface reconstruction filter:
https://lorensen.github.io/VTKExamples/site/Cxx/Points/ExtractSurface/

Thanks Bill for the links. These methods are useful when you don’t have connectivity information in your point cloud and you want to reconstruct only a single structure.

If you have connectivity information between the points (you have a surface mesh), then it would be a huge mistake to discard that information. Even when connectivity information within an image slice is available and you only miss connectivity info between slices (this is how segmentation is stored in DICOM-RT structure sets), there is a lot of ambiguity in reconstructing the surface. There is an algorithm for this in SlicerRT that can reconstruct nice smooth surfaces most of the time while preserving most details; it’s quite tricky, and in some cases the results are still not optimal.

I think any of these methods would give better results (no reconstruction errors, controllable smoothing, works well with multiple labels):

  • Option A. Convert labelmap to mesh, smooth the mesh, then convert back to labelmap. This method is implemented in the Segment Editor: Smoothing effect, Joint smoothing method. This is somewhat similar to what @DanC may be trying to achieve, but with the important difference of not discarding connectivity information when we convert labelmap to mesh.
  • Option B. Reduce the slice spacing to approximately match in-slice pixel size and insert empty slices between the original slices. Then run morphological contour interpolation. The morphological interpolator is available in Slicer in the Segment editor module, as Interpolate between slices effect.
  • Option C. Supersample the labelmap and apply smoothing (median filter, morphological operators, Gaussian, etc). These are available in Segment editor’s smoothing effect.

Thanks for the suggestions Andras.

I am trying to reconstruct histological cross sections of a prostate, including other structures such as the urethra, etc.
The spacing of the image set is 0.07 x 0.07 x 2 mm.
The cross sections vary so much between images that any smoothing or interpolation I’ve tried in Slicer would not give a continuous surface. Hence I was trying to explore a totally different approach. The urethra would look like a spine instead of a continuous surface after reconstruction.

Option A. Are the suggested algorithms different from the ones used in the Model Maker module? The smoothing/joint smoothing in the Model Maker module wasn’t good enough in my case.
Option B. I will try the morphological contour interpolation. I wasn’t aware of this new function.
Option C. Does supersampling mean to resample with smaller spacing between slices, with interpolation?

Thanks!

Hi Bill.
Thanks for the suggestion. I will definitely explore these if I go in this direction.

Any suggestions for extracting the contour points from the Slicer label maps? I am currently able to save the point set of all filled voxels using ITK.

Thanks!

The one in Segment Editor is slightly improved, but it still won’t be able to do any good if voxel spacing is highly anisotropic (you’ll see staircase artifacts or you’ll lose details in the reconstructed surface). If the spacing of your volume differs by more than about 10% between axes, then you first have to crop and resample your volume using the Crop Volume module.

So, before trying any of the recommended options, you must resample your input volume.

Yes.

When you have a 20x difference between intra- and inter-slice spacing, then most likely you cannot utilize all the information in the slices, so you have to use a spacing that is in between the smallest and largest spacing values: for 0.07 x 0.07 x 2 mm spacing, something like 0.5 mm should be good enough.

Do you register histology to US or MRI? Do you use the SlicerProstate or Segment registration extensions? They are specifically developed for prostate registration between high-resolution and low-resolution images.