Forgive me if I missed something here, but you're basically saying that Volume Rendering works by generating a point cloud based on the voxel intensities of the input image stack (CT, MRI, US, etc.). Is there a way to export this point cloud to another program (like Meshlab) to generate a mesh and subsequently an STL?
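For what it's worth, here's a rough sketch of the kind of export I have in mind: dumping every voxel above an intensity cutoff as an ASCII PLY point cloud that Meshlab can open. The `volume`, `spacing`, and `threshold` names are just placeholders for however the image stack gets loaded (e.g., as a NumPy array); this isn't any actual Slicer API.

```python
import numpy as np

def volume_to_ply(volume, spacing, threshold, path):
    """Write voxels at or above `threshold` as an ASCII PLY point cloud.

    volume    -- 3D NumPy array of intensities (e.g., a loaded CT stack)
    spacing   -- (dz, dy, dx) voxel size in mm, to get real-world coordinates
    threshold -- intensity cutoff; voxels at or above it become points
    """
    # Indices of all voxels that pass the intensity cutoff
    idx = np.argwhere(volume >= threshold)
    # Scale index coordinates by voxel spacing to get positions in mm
    pts = idx * np.asarray(spacing)

    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(pts)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for z, y, x in pts:
            f.write(f"{x} {y} {z}\n")

# Hypothetical usage: a synthetic volume standing in for a real image stack
volume = np.random.rand(64, 64, 64)
volume_to_ply(volume, spacing=(1.0, 1.0, 1.0), threshold=0.95, path="cloud.ply")
```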
Also, the Segment Editor is a fantastic tool in that it lets the user define thresholds to filter out undesired noise. But in my experience, there are instances where neither thresholding nor the edge detection/fill-between-slices methods capture all the necessary data, leaving a segmented model that needs further processing to become manifold. Personally, I think the Volume Rendering module converts the input image stack into a rendered cloud that matches the data perfectly, and if there were a way to generate an STL from it, in theory it would have almost perfect contours matching the sample's anatomy.
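To make concrete what I mean by "almost perfect contours": instead of going through a point cloud at all, one could extract an isosurface at the same intensity level the rendering highlights and write it straight to STL. Below is just a sketch using marching cubes from scikit-image and trimesh for the STL export, with `volume`, `spacing`, and `level` as placeholders; I'm not claiming this is what Slicer does internally.

```python
import numpy as np
from skimage import measure
import trimesh

def volume_to_stl(volume, spacing, level, path):
    """Extract an isosurface at intensity `level` and save it as an STL mesh.

    Marching cubes traces the surface where the intensity crosses `level`,
    which plays the same role as an opacity cutoff in a transfer function.
    """
    verts, faces, normals, values = measure.marching_cubes(
        volume, level=level, spacing=spacing
    )
    surface = trimesh.Trimesh(vertices=verts, faces=faces)
    surface.export(path)  # trimesh infers STL format from the .stl extension

# Hypothetical usage with a synthetic sphere standing in for real anatomy
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)
volume_to_stl(volume, spacing=(1.0, 1.0, 1.0), level=0.5, path="surface.stl")
```

The appeal, at least in theory, is that the cutoff that makes the rendering look right and the `level` passed to marching cubes are the same number, so the exported surface should follow the same contours seen on screen.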