Hi everyone! I’m new to Slicer and I apologise if this question is trivial or has been discussed elsewhere.
I have a segmented volume and I’d like to export the surface of this volume as an (ideally triangular) mesh (e.g. WEM). The constraint here is that I’d like each border voxel of the segmented volume to be represented as a vertex in the resulting mesh. No smoothing, no optimisation. Just the border voxel coordinates and a mesh between them that tells me which ones are adjacent (and by the by turns it into a surface model).
Is this reasonably doable with Slicer and if so, how?
Thanks a lot!
You can disable surface smoothing (decimation is disabled by default) in Segment Editor, in the Show 3D button’s submenu.
If the surface mesh must follow the binary labelmap that closely, then there may be cases where only a non-manifold mesh can be created, causing serious issues in further processing steps. What would you like to do with the mesh?
Thanks a lot lassoan! Appreciate the fast reply.
I want to use the points to simulate the acquisition of points on a physical organ surface. So I’m mainly interested in the border voxel coordinates as a point cloud (simulating a cloud of physically measured points).
But I’d also like to run a patch growing method on this surface, so knowing the adjacent points for each point would be super helpful. That’s where the mesh idea came from.
Will the serious issues you mention affect this? Or what sort of issues are you talking about? Thanks again for any advice!
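For concreteness, here is roughly what I mean by border voxels and their adjacency, sketched in plain NumPy (assuming the segmentation is already available as a binary 3D array, e.g. exported from Slicer; the helper names are my own, not Slicer API):

```python
import numpy as np

def border_voxels(mask):
    """Return (N, 3) voxel indices of the 6-connected border of a binary mask.

    A foreground voxel is a border voxel if at least one of its six
    face neighbours is background (or lies outside the array).
    """
    mask = mask.astype(bool)
    padded = np.pad(mask, 1, constant_values=False)
    interior = np.ones_like(mask)
    for axis in range(3):
        for shift in (-1, 1):
            # Shifted view gives, at each voxel, the value of one face neighbour.
            interior &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return np.argwhere(mask & ~interior)

def adjacency(points):
    """Map each border voxel (as an index tuple) to its face-adjacent border voxels."""
    point_set = {tuple(p) for p in points}
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {p: [tuple(np.add(p, o)) for o in offsets
                if tuple(np.add(p, o)) in point_set]
            for p in point_set}
```

This only gives face adjacency between border voxels, not a triangulated surface, but it may already be enough for the patch growing step.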
Be aware that medical images suffer from the partial volume problem, e.g. a voxel at the surface of the brain may contain a mixture of gray matter and CSF. The surface boundary voxels are necessarily far more likely to be composed of mixtures of tissues than voxels deep inside the object (e.g. a voxel in deep white matter might be 100% white matter, but voxels at the gray-white boundary will be mixtures). Methods like marching cubes leverage this by interpolating vertex positions. In general, this will give a more accurate shape than meshes where vertex positions are constrained to the voxel positions.
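To illustrate: marching-cubes-style methods place each vertex along a voxel edge according to the fractional intensities at the two corners, rather than snapping it to a voxel centre. A minimal sketch of that edge interpolation (a generic illustration, not Slicer’s actual code):

```python
import numpy as np

def edge_vertex(p1, p2, f1, f2, iso=0.5):
    """Linearly interpolate where the isosurface crosses the edge p1-p2.

    f1 and f2 are the scalar values at the two corners; the surface is
    placed where the interpolated value equals `iso`.
    """
    t = (iso - f1) / (f2 - f1)
    return np.asarray(p1, dtype=float) + t * (np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float))
```

For a corner that is 100% gray matter (value 1.0) next to one that is 100% CSF (value 0.0), the vertex lands halfway along the edge; for a 75/25 mixture it shifts accordingly, which is how partial-volume information improves the surface position.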
Thanks Chris! That’s a very good point that I hadn’t thought of - I’ll take it into consideration! What’s important to me is that the resolution is not changed and that no optimisation via vertex elimination or adjustment occurs. Thanks a lot for the feedback!
Surfaces reconstructed by marching cubes (or flying edges) are very similar to those acquired by surface scanners. We used this successfully in several projects on skin, bone, and cartilage surfaces (with submillimeter residual error after registration).
Thanks lassoan! I’ll give that a try! Cheers!