I work with a lot of different DICOM RTSTRUCT datasets and often end up with strange artefacts after converting planar contours to a binary labelmap: holes, aliasing, and interpolation errors that create planar artefacts where the cross-section changes significantly between contours (sometimes, but not always).
This is what these artefacts tend to look like:
I can usually get a decent result by changing a couple of conversion parameters in the “Advanced Create” dialog, such as the spacing (voxel size) and the oversampling factor; there seems to be an optimal spacing that removes the artefacts and minimises aliasing while preserving detail. However, I can currently only find it through trial and error.
Does anyone know of a way to select the best conversion parameters based on features of the dataset (e.g. contour spacing, contour resolution, overall segment size, or something else)?
Or does anyone have links to technical resources that might help me gain a deeper understanding of the geometric/algorithmic procedure at play here?
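For instance, here is the kind of rule of thumb I am imagining (a purely hypothetical sketch I made up to illustrate the question, not something I have validated — the function name, inputs, and constants are all my own invention):

```python
# Hypothetical heuristic for choosing a labelmap voxel size from dataset
# features (illustration only, not a validated rule): fine enough to
# resolve the in-plane contour detail, but not dramatically finer than
# the out-of-slice sampling allows.

def suggest_voxel_size(min_point_distance_mm, slice_spacing_mm):
    """min_point_distance_mm: smallest distance between consecutive
    contour points (a proxy for contour resolution).
    slice_spacing_mm: distance between contour planes.
    Returns a suggested isotropic voxel size in mm."""
    # aim for ~2 voxels per contour segment to resolve in-plane detail
    in_plane = min_point_distance_mm / 2.0
    # going far below the slice spacing adds cost without adding
    # out-of-slice information
    return max(in_plane, slice_spacing_mm / 4.0)
```

Something like this is what I would hope to derive in a principled way, rather than guessing the constants.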
First of all, I think it is important to understand how segmentations are handled in Slicer. Please see this page: https://slicer.readthedocs.io/en/latest/user_guide/image_segmentation.html
The conversion figure there shows exactly the RT use case, where each structure is a set of planar contours. By default, Slicer converts these planar contours to a closed surface model (see: Sunderland K, Woo B, Pinter C, Fichtinger G. Reconstruction of surfaces from planar contours through contour interpolation. Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling, Vol. 9415, pp. 435-442, SPIE), which is then voxelized. Once you start editing a segment, the binary labelmap becomes the source representation, and the closed surface is generated directly from it.
In the Advanced Create dialog that you already found, you can define an alternate route from planar contours, which goes via ribbons. It is more robust but less accurate (there are no out-of-slice changes within each slice, so the slice thickness seriously limits how much detail can be reconstructed).
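To illustrate why purely in-slice filling is limited: a toy per-slice voxelization (my own minimal sketch, not Slicer's actual surface-based pipeline or its ribbon algorithm) fills each contour independently with a point-in-polygon test, so nothing between the contour planes is ever reconstructed:

```python
# Toy per-slice voxelization of planar contours (illustration only).
# Each contour is a closed 2D polygon on one slice; we fill it with an
# even-odd ray-casting point-in-polygon test.

def point_in_polygon(x, y, poly):
    """Even-odd rule point-in-polygon test for a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def voxelize_contours(contours, shape, spacing=1.0):
    """contours: dict mapping slice index -> polygon (list of (x, y) in mm).
    shape: (nz, ny, nx) of the output labelmap.
    Returns a nested list labelmap[z][row][col] of 0/1.
    Note: slices without a contour stay empty -- in-slice filling alone
    cannot recover shape changes between contour planes."""
    nz, ny, nx = shape
    labelmap = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    for z, poly in contours.items():
        for row in range(ny):
            for col in range(nx):
                # sample at voxel centers
                x = (col + 0.5) * spacing
                y = (row + 0.5) * spacing
                if point_in_polygon(x, y, poly):
                    labelmap[z][row][col] = 1
    return labelmap
```

The surface-based route avoids this limitation by interpolating between contour planes before voxelizing, which is why it can reconstruct out-of-slice detail at all.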
To me, the artifacts you show look like places where the binary labelmap contains a connecting point between parts of the structure, which the algorithm then tries to triangulate (with the included smoothing), and this is the result. Unfortunately, a single low-resolution screenshot does not show much.
Overall, I think your approach of increasing resolution via oversampling is a good way to go if your data allows the increased memory and performance demand. Every application is different, so I think you'll need to find an overall reasonable oversampling factor and use that.
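As a rough illustration of that memory demand (assuming, as I understand it, that the oversampling factor is applied along each axis, so the voxel count grows with the cube of the factor — the helper below is just back-of-the-envelope arithmetic, not a Slicer API):

```python
# Back-of-the-envelope memory estimate for an oversampled binary labelmap.
# Assumption: the oversampling factor applies per axis, so the total
# voxel count scales with the cube of the factor.

def labelmap_bytes(dims, oversampling=1.0, bytes_per_voxel=1):
    """dims: base (nx, ny, nz) of the reference geometry."""
    nx, ny, nz = dims
    voxels = (nx * oversampling) * (ny * oversampling) * (nz * oversampling)
    return int(voxels * bytes_per_voxel)

base = (512, 512, 200)  # a typical CT-like grid
for factor in (1, 2, 4):
    mb = labelmap_bytes(base, factor) / 1e6
    print(f"oversampling {factor}x -> {mb:.0f} MB")
```

So a 2x oversampling already costs 8x the memory (and processing time scales similarly), which is why a single large factor applied everywhere is rarely practical.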
Maybe if you give us more information, we can give you better answers.