I want to import an STL file, slice it at a layer height of 0.1mm, and generate a folder of TIFF images corresponding to each fully dense layer. Is this possible?
I have seen many people here trying to do the opposite (create an STL from a stack of images) so surely the reverse is possible! I just can’t quite figure out how to do that yet. Any tips greatly appreciated
Yes, you can do these kinds of conversions using Slicer. I’ve added an example to the script repository that does what you need (with customizable margin size and output label value):
You can copy-paste the code above into the Python console (Ctrl-3). You can of course do the same steps using the GUI; let us know if you would prefer that instead of scripting.
Thanks for that script! Looks just like what I need, though I'm having a bit of trouble implementing it.
At the line >>> inputModel.GetBounds(bounds)
I get the error AttributeError: 'bool' object has no attribute 'GetBounds'
Am I missing something here?
The script is to be used with recent Slicer Preview Releases. In the latest stable release,
loadModel returns a bool flag (success/fail) and you need to specify an additional argument to make it return the model node as well. I would recommend using the latest preview release.
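To illustrate the difference, a rough sketch of the two call patterns (this only runs inside Slicer's Python console; the file path is a placeholder, and the returnNode keyword is assumed from the older slicer.util API):

```python
# Old stable releases: loadModel returns a success flag by default;
# passing returnNode=True makes it also return the model node.
success, inputModel = slicer.util.loadModel("path/to/part.stl", returnNode=True)

# Recent preview releases: the model node is returned directly.
# inputModel = slicer.util.loadModel("path/to/part.stl")
```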
Great, thanks, I got to the bottom of the script. I assume that in the line
outputLabelmapVolumeArray = (slicer.util.arrayFromVolume(outputLabelmapVolumeNode) * labelValue).astype('int8')
labelValue is meant to be outputVolumeLabelValue?
For a 55x40x40mm part I get 126 images, which are all just black, 128x131 pixels. I was expecting a black silhouette of each slice on a white background, right? Sorry - learning and asking at the same time.
Oh, I got it to work! Just figured out the SpacingMm and MarginMm variables. Thanks for that!
Is there a way to increase resolution to around 600dpi and assign colours to background and sliced image? Thanks
You can set any resolution by adjusting outputVolumeSpacingMm values (first two values are image spacing within a slice, third value is image spacing between slices).
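For example, 600 dpi works out to 25.4/600 ≈ 0.0423 mm per pixel within a slice (dpi_to_spacing_mm here is just a hypothetical helper; outputVolumeSpacingMm is the variable from the script):

```python
# Convert a target print resolution in dpi to the per-pixel spacing in mm
# that outputVolumeSpacingMm expects (25.4 mm per inch).
def dpi_to_spacing_mm(dpi):
    return 25.4 / dpi

# 600 dpi in-plane, 0.1 mm layer height between slices
in_plane = dpi_to_spacing_mm(600)  # about 0.0423 mm
outputVolumeSpacingMm = [in_plane, in_plane, 0.1]
```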
It is an indexed image, so it has no colors. You can create an RGB image by defining a lookup table and mapping the indexed image through np.take (e.g., something like this:
outputVolumeLabelValue=1; ... lut=[[255,0,0],[0,255,0]]; imageRGB = np.take(lut, imageIndexed, axis=0)). If you are doing this for 3D printing colored bitmaps on PolyJet printers then you may consider using the SlicerFab extension.
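Spelled out as a self-contained sketch (the small 2x2 test image is made up for illustration; note that axis=0 is needed so np.take selects whole LUT rows rather than indexing into the flattened table):

```python
import numpy as np

# Map an indexed (label) image to RGB through a lookup table:
# row i of the LUT is the color for label value i.
lut = np.array([[255, 0, 0],    # label 0 -> red
                [0, 255, 0]])   # label 1 -> green
imageIndexed = np.array([[0, 1],
                         [1, 0]], dtype=np.uint8)

# axis=0 selects whole LUT rows per index; without it, np.take would
# index into the flattened LUT and produce wrong colors.
imageRGB = np.take(lut, imageIndexed, axis=0)
```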
Thanks @lassoan. It is for 3D printing, but I just inverted the matrix since I only needed black on white.
I have considerably increased the "resolution" by changing SpacingMm, and I have a high-quality STL. However, curves and diagonals come out very coarsely (a 45-degree line steps up in 5x5-pixel increments).
Does one of the functions quantize the model heavily?
If you import a model into segmentation without specifying reference geometry then a default resolution is chosen automatically. You can force the segmentation’s internal labelmap representation geometry to be the same as the output geometry by calling
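The call itself is not shown above; assuming the reply means the standard vtkMRMLSegmentationNode method (this only runs inside Slicer, and the node variables are placeholders for the ones created in the script being discussed):

```python
# Force the segmentation's internal binary labelmap geometry to match the
# output volume geometry, instead of the automatically chosen default.
# segmentationNode and outputLabelmapVolumeNode are assumed to exist already.
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(outputLabelmapVolumeNode)
```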
Do you use a Stratasys printer? Can you write a bit about your application? My understanding is that voxel printing’s main advantages are that you can use voxel intensity of original images (and/or information extracted from them), you can generate 3D prints with regions of non-uniform color/opacity/stiffness, save time on segmentation, and create more detailed prints (e.g., using SlicerFab or similar methods), but you cannot really benefit from these if the bitmap just contains uniform regions defined by STL files. Or you just want to use these to validate your workflow vs. directly printing an STL file?
Thanks that worked, though I placed the call earlier, just after creating the seg object.
It is for a DIY binder-jetting printer (only jetting binder, no color). Another person designed the printhead interface, which takes bitmap images layer by layer. I couldn't find any software that generates these images except 3D Slicer.
Sounds like a very exciting project. It's great that you could make it all work.