I have input 3D volume data (a binary 0/1 array). I know the "Model Maker" module works pretty well for me in 3D Slicer. Now I am trying to build an API. The input is a 3D numpy array (0/1) and the output is a surface model VTK file. Could someone advise how to achieve this, or kindly share an example?
Thanks a lot
I would recommend using the Segmentation module for converting binary labelmaps to 3D models (meshes). You can find full working examples in the script repository.
You can access and modify a segmentation as a numpy array as shown here. You can also create a labelmap volume node from a numpy array and then import that into a segmentation.
Thanks for your reply. It seems I have to go to the Python console in Slicer to use the Segmentation module. Is there a way I can achieve this outside 3D Slicer (with no 3D Slicer installed), or call the Slicer library directly?
You can install Slicer, or just unpack a Slicer application package somewhere, or use one of the Slicer docker images and run those Python commands in Slicer’s Python environment as shown here.
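For example, a headless run might look like this (the script name is illustrative; `--no-main-window` and `--python-script` are standard Slicer launcher options, and the script itself can call `sys.exit()` when done):

```shell
# Run a conversion script in Slicer's Python environment without showing the GUI
./Slicer --no-main-window --python-script convert_volume_to_model.py
```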
If you work in Jupyter notebooks then you can use Slicer’s Jupyter kernel.
You can also pip-install all Python packages into Slicer and run all your Python code from there.
Thanks for all your advice. I tried pyvista, and it seems able to achieve this using smooth(). However, the visualization in pyvista looks much worse than in 3D Slicer, in both lighting and surface quality. Could you please advise?
Both Slicer and pyvista use VTK for mesh operations. Pyvista does not add new features to VTK; it is just a thin wrapper over VTK that provides a more Pythonic interface. Slicer adds thousands of higher-level features on top of VTK that are useful for medical image computing.
I don’t know which of those thousands of features were actually missing in this case. In the top image it seems that polydata normals are not computed. Computing them would remove the faceted look.
But this is just one small example. If you are building a medical image computing application using low-level generic tools such as VTK, you will spend most of your time reimplementing from scratch basic features that were all figured out and solved many years ago. Instead, I would recommend spending time learning a higher-level medical image computing framework (such as 3D Slicer or MITK). As you get better, you become familiar with how things work internally and learn how to extend and improve them. You would eventually still learn all the ins and outs of VTK, Slicer, etc., but you would spend your time learning and developing new, relevant things rather than redeveloping existing features.
That’s great. Thanks very much for the detailed reply and advice!