I have a .OBJ mesh that I would like to convert into a binary segmentation DICOM or NIFTI image. The easiest way for me to do it now is to drag the model into Slicer and change the dropdown from ‘model’ to ‘segmentation.’ This produces a nearly perfect result that I can then export.
Is there a way to recreate this functionality from the command line to automate the process?
Can you elaborate on how this is done? I can successfully drag my .obj mesh into Slicer, and I see it rendered in one of the 4 windows on the right side of the Slicer window, but I don’t see any dropdown that says “segmentation”… I see “segmentations”, but then what?
The dropdown menu where you can choose to load the .obj file as a segmentation is in the “Description” column of the “Add data” window (the window that appears when you drag-and-drop a file onto the Slicer application window).
OK, I can see the object in the upper-right Slicer window - I believe this is where I was before. How does one now save this selected object as a 3D volume in .nii format?
Note that use of the NIFTI file format is strongly discouraged, because it has many issues. Only use it for storing brain images, where compatibility with the BIDS ecosystem outweighs the disadvantages of such a problematic file format.
Search for nifti on this forum and you’ll find lots of examples of interoperability problems: unclear assumptions about the use of qform/sform by various packages, no definitive standard for the meaning of 4D data, and other issues. NIFTI was a big improvement over the img/hdr conventions that used to be common in neuroimaging, but it’s disappointing that left/right flip questions still come up on neuroimaging forums after all these years.
Thanks! I have enough to do already without searching a forum for NIFTI issues, so thanks for summarizing. I’ve run into the L/R-flip issue with .img/.hdr before, but not with NIFTI. The packages I use must interpret L/R the same way: I have a gold-standard image that I use in my processing pipelines to check for this. So it’s good to know it can still be an issue.
In case it is useful to others in the future, here is a recipe:
Translate mesh into .obj format
To use Slicer for the transformation, you need to have the mesh in .obj format. You can use MeshLab to convert from other common mesh formats (e.g., .ply): load the file into MeshLab, choose “Export Mesh As…” from the File menu, and select “Alias Wavefront Object (*.obj)” as the file type.
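If you want to script this conversion step too, PyMeshLab exposes the same functionality from Python. A minimal sketch (the file paths are placeholders for your own data):

```python
# Sketch: convert a .ply mesh to .obj with PyMeshLab (pip install pymeshlab).
# Paths are placeholders; the output format is inferred from the extension.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("input.ply")       # also reads .stl, .off, and other formats
ms.save_current_mesh("output.obj")  # writes Wavefront .obj
```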
Use Slicer to transform mesh to volume
start the Slicer app
drag the mesh file and drop it into the Slicer main window (a sub-window will open titled: “Add data into the scene”)
in the “Add data into the scene” window, select “segmentation” from the dropdown menu in the Description portion (far right), then select “OK”. The object should show up in the upper right Slicer sub-window.
choose “Segmentations” from the Modules dropdown menu. This will change the options in the left-most part of the Slicer window (under the 3D Slicer icon)
in the “Representations” section (one of the new set of options that appeared as part of “Segmentations”), select “create” in the “Binary labelmap” line
in the “Export to files” section (also part of “Segmentations”, but below the “Representations” subsection), select the following:
NIFTI under the “File format” dropdown menu
choose the “Destination folder” for the exported .nii file
optional: select a reference volume that you want to save this new object into (e.g., one that has the voxel size etc. that you prefer). If you don’t choose anything here, it defaults to the bounding box of the object (which you’ll likely need to pad later - easy to do)
choose “Export”. This should create a .nii file in the “Destination folder”, with the same base name as the .obj mesh you started with. It should be loadable in, e.g., ITK-SNAP, where you can check that it did the right thing.
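To address the earlier question about command-line automation: the steps above can also be scripted with Slicer’s bundled Python interpreter and run headless. A minimal sketch, assuming a recent Slicer version (file paths are placeholders; check the Slicer script repository for your version’s exact API):

```python
# Sketch: mesh -> binary labelmap -> .nii using Slicer's Python API.
# Run headless with:  Slicer --no-main-window --python-script obj_to_nii.py
# File paths are placeholders. This only runs inside Slicer's Python.
import slicer

# Load the .obj directly as a segmentation (same as choosing
# "segmentation" in the Add data dialog)
segNode = slicer.util.loadSegmentation("/path/to/mesh.obj")

# Create the binary labelmap representation (the "create" step in the
# Representations section of the Segmentations module)
segNode.CreateBinaryLabelmapRepresentation()

# Export all segments into a labelmap volume node, then save as NIFTI
labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
slicer.modules.segmentations.logic().ExportAllSegmentsToLabelmapNode(
    segNode, labelmapNode)
slicer.util.saveNode(labelmapNode, "/path/to/output.nii")
```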
The resulting image space will be exactly the size of the maximal dimensions of the mesh object. This can be a problem if you want to register images with certain other software packages. You can pad this image using ImageMath (part of the ANTs tools) or the iMath function in the ANTsR package in R.
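The padding itself is just zero-padding of the voxel array; ImageMath does it on the exported file directly, and in Python it is one numpy call (a sketch below, with nibabel file I/O omitted). If you pad the array yourself, remember to also adjust the NIfTI affine so the object stays in the same physical location.

```python
# Sketch: zero-pad a binary volume by a fixed number of voxels per side.
# In practice you'd read/write the .nii with nibabel; numpy does the padding.
import numpy as np

def pad_volume(vol, pad):
    """Add `pad` zero-valued voxels on every side of a 3D array."""
    return np.pad(vol, pad, mode="constant", constant_values=0)

vol = np.ones((2, 2, 2), dtype=np.uint8)  # toy stand-in for the exported mask
padded = pad_volume(vol, 1)               # shape (4, 4, 4), ones in the center
```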