Issue during voxelization of an STL model

Hi all!

I need to convert a surface STL model into a voxelized mask, with 1s inside and 0s outside. It is important that the mask has the same dimensions and voxel size as another volume I want to register it with. The available MATLAB functions don’t allow specifying the voxel size, which is essential for me. Inspired by this post: Voxelization of mesh, I follow these steps:

  1. Import the STL and convert it into a segmentation node by right-clicking on the model entry in the Data module.
  2. Import the other volume, and in the Volumes module specify its voxel size and center it.
  3. Go to the Transforms module and create a new transform that moves the segmentation so that the crucial parts of it overlap the other volume. This step may not be needed.
  4. Go to the Segmentations module and export the segmentation as a labelmap, indicating the other volume as the reference volume, so that the output voxelized mask has the same dimensions and voxel size.
  5. Then go to the Volumes module, select the newly created labelmap, and convert it to a scalar volume.

I’d like to know if these steps are correct or if there is a better way to do it. The STL model sometimes appears to be bigger than the object in the other volume, when it shouldn’t be. One more thing: should the STL model have some basic properties (e.g. closed surface, filled inside, etc.)?
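
For reference, steps 4–5 roughly correspond to the following snippet for the Slicer Python console (a minimal sketch only; the node names and the output path are placeholders, and it assumes the segmentation and the other volume are already loaded):

    # Rough scripted equivalent of steps 4-5, run from the Slicer Python console.
    # "Segmentation" and "OtherVolume" are placeholder node names.
    import slicer

    segmentationNode = slicer.util.getNode("Segmentation")    # segmentation created from the STL
    referenceVolumeNode = slicer.util.getNode("OtherVolume")  # volume whose geometry should be matched

    # Export the visible segments into a labelmap that uses the reference
    # volume's dimensions, spacing, and orientation.
    labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode", "VoxelizedMask")
    slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
        segmentationNode, labelmapNode, referenceVolumeNode)

    # Save the mask (1 inside the surface, 0 outside).
    slicer.util.saveNode(labelmapNode, "/path/to/voxelized_mask.nrrd")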

You can do it a bit simpler (a scripted sketch of the same steps follows the list):

  • Load the STL as a segmentation: in the “Add data” dialog, choose “Segmentation” in the Description column
  • Create a master volume at the desired resolution: go to the Segment Editor module, click the Specify geometry button (next to the Master volume selector), select your input mesh as source geometry, specify the voxel size (spacing values), and click OK
  • Create the binary labelmap representation: go to the Data module, right-click on the segmentation, and choose Export visible segments to binary labelmap
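
If you prefer to script it, here is a minimal sketch of the same workflow for the Python console. The STL path, output path, and 0.5 mm voxel size are assumptions, and a blank reference volume built over the segmentation bounds stands in for the geometry you would otherwise define with the Specify geometry button:

    # Minimal sketch: load an STL as a segmentation, rasterize it at a chosen
    # voxel size, and export it as a binary labelmap.
    import numpy as np
    import slicer

    stlPath = "/path/to/model.stl"   # assumed input path
    spacing = [0.5, 0.5, 0.5]        # desired voxel size in mm (assumption)

    segmentationNode = slicer.util.loadSegmentation(stlPath)

    # Build a blank reference volume covering the segmentation bounds at the
    # desired spacing (this plays the role of the "Specify geometry" step).
    bounds = [0.0] * 6
    segmentationNode.GetRASBounds(bounds)
    dims = [int(np.ceil((bounds[2*i+1] - bounds[2*i]) / spacing[i])) + 1 for i in range(3)]
    ijkToRAS = np.diag([spacing[0], spacing[1], spacing[2], 1.0])
    ijkToRAS[0:3, 3] = [bounds[0], bounds[2], bounds[4]]
    referenceVolumeNode = slicer.util.addVolumeFromArray(
        np.zeros((dims[2], dims[1], dims[0]), dtype=np.uint8), ijkToRAS, name="ReferenceGeometry")

    # Rasterize the closed surface and export it with that geometry.
    segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(referenceVolumeNode)
    segmentationNode.CreateBinaryLabelmapRepresentation()
    labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode", "VoxelizedMask")
    slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
        segmentationNode, labelmapNode, referenceVolumeNode)
    slicer.util.saveNode(labelmapNode, "/path/to/voxelized_mask.nrrd")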

Hello,
I am new to this forum, but I use 3D Slicer most of the time. I like the idea of creating the binary labelmap from the 3D model. However, I don’t know what I did wrong: when I loaded the created binary file.nrrd to check whether I could reconstruct the 3D model from it, the two didn’t fit together. Do you know how to fix that?

Could you attach a screenshot to illustrate what you mean by “didn’t fit together”?

How did you create the file.nrrd file? (Using “Export to files”? Or by exporting to a labelmap volume node and then saving the scene?)

A friend of mine used ilastik to annotate regions of interest, and she generated 3D models as OBJ files. When she tried to recover her work, it was gone, so I suggested creating the binary image from her OBJ file. I loaded the model as a segmentation, then went to the Segment Editor and generated the labelmap as you suggested.

So to export, I go to the Import/Export nodes section and then use “Export to files”.

When I checked whether the generated file fit the model, to double-check before confirming to her that everything is okay, I found that they don’t fit at all; see the screenshot below:

It looks like a kind of rotation. I was thinking of using the volume information to adjust the files, but perhaps you have a better way to do it.
Thank you for your help.

OK, this looks good; the difference is just due to using a different coordinate system convention (RAS vs. LPS). You can choose which coordinate system convention to use when you import/export models or segmentations. You can also switch a loaded node between RAS and LPS by mirroring it along the first and second axes.
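
For example, here is a minimal sketch of that mirroring approach from the Python console (“MyModel” is a placeholder node name; the transform flips the sign of the first two axes, which is the LPS/RAS conversion):

    # Flip the first and second (R-L and A-P) axes of a loaded node to convert
    # between the RAS and LPS conventions. "MyModel" is a placeholder node name.
    import slicer
    import vtk

    node = slicer.util.getNode("MyModel")

    # diag(-1, -1, 1, 1): mirror along the first and second axes.
    lpsToRas = vtk.vtkMatrix4x4()
    lpsToRas.SetElement(0, 0, -1)
    lpsToRas.SetElement(1, 1, -1)

    # Apply the matrix through a transform node, then harden it into the node.
    transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "LPS to RAS")
    transformNode.SetMatrixTransformToParent(lpsToRas)
    node.SetAndObserveTransformNodeID(transformNode.GetID())
    slicer.vtkSlicerTransformLogic().hardenTransform(node)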

Excellent. Thank you very much.