Is it possible, using Slicer, to create a simulated x-ray image, similar to a digitally reconstructed radiograph, that takes 3D models as its input rather than volumetric data (i.e. CT scans)? I’m envisioning a scenario in which 3D models (e.g. STL files) of one or more bones/implants/tools etc. can be placed in a 3D scene, assigned x-ray attenuation values, and used to simulate an x-ray image. Solutions that require the use of Python are viable options for me.
I know it’s possible to generate a digitally reconstructed radiograph (DRR) from a CT scan, as per the post linked below, but I wasn’t sure if it’s possible to do that using a more simplified model of a finite number of structures, each with constant x-ray attenuation. I know it’s possible to convert surface models to volumetric data and then generate a DRR of those volumes, but that seems like a fairly inefficient solution (computationally) compared to something that calculates ray-triangle intersections directly.
Try this out… seems to work reasonably well, but not sure if it qualifies as a “simulation”.
1. Create an empty image volume with the Image Maker module.
2. Import your .stl files (or whatever) and convert them to a segmentation (Segmentations module → import models).
3. Fill the voxels of the empty image volume with the linear attenuation coefficient value for each tissue (look these up for e.g. bone, soft tissue and lung tissue at the photon energy you’re interested in; here I used 10 kVp and took the values from the NIST database). You could script this, or use a combination of Segmentations → “Mask volume” and the Add Scalar Volumes module in the GUI.
4. Finally, once your linear attenuation coefficient map (image) is complete, use the Simple Filters module → MeanProjectionImageFilter or SumProjectionImageFilter.
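A rough scripted sketch of these steps for the Slicer Python console is below. It assumes (these are not from the recipe above) a segmentation node named “Segmentation” holding the imported models, a scalar volume named “Reference” that defines the voxel grid, and placeholder segment names and attenuation values; it also uses a plain numpy sum projection instead of Simple Filters.

```python
# Sketch only: node names, segment names and attenuation values are placeholders.
import numpy as np
import vtk
import slicer

segmentationNode = slicer.util.getNode("Segmentation")
referenceVolume = slicer.util.getNode("Reference")

# Placeholder linear attenuation coefficients, in 1/mm (look up real values, e.g. NIST)
attenuationBySegment = {"Bone": 0.06, "SoftTissue": 0.02}

# Rasterize each segment on the reference grid and fill in its attenuation value
mu = np.zeros(slicer.util.arrayFromVolume(referenceVolume).shape, dtype=np.float32)
for segmentName, muValue in attenuationBySegment.items():
    segmentId = segmentationNode.GetSegmentation().GetSegmentIdBySegmentName(segmentName)
    mask = slicer.util.arrayFromSegmentBinaryLabelmap(segmentationNode, segmentId, referenceVolume)
    mu[mask > 0] = muValue

# Store the attenuation map as a new volume (also usable with Volume Rendering)
muVolume = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "AttenuationMap")
slicer.util.updateVolumeFromArray(muVolume, mu)
ijkToRas = vtk.vtkMatrix4x4()
referenceVolume.GetIJKToRASMatrix(ijkToRas)
muVolume.SetIJKToRASMatrix(ijkToRas)
muVolume.CreateDefaultDisplayNodes()

# Parallel-beam DRR: line integral along one axis, then the Beer-Lambert law.
# The numpy array is ordered (k, j, i); axis=1 projects along the j direction.
spacing = referenceVolume.GetSpacing()          # (i, j, k) spacing in mm
lineIntegral = mu.sum(axis=1) * spacing[1]
drr = np.exp(-lineIntegral)                     # transmitted intensity fraction (0..1)
```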
There are many options for how to do this. The ideal solution depends on what your requirements are (how many input models you have, what resolution you need, how quickly the objects move, how complex their geometry is, what CPU and GPU are available, etc.).
@Hamburgerfinger’s suggestion is good. Generating binary or fractional volumes and rendering them on the GPU works well, too (once the volumes are generated, the GPU renderer can probably produce the DRR image at a high frame rate).
For me this is still unclear. I cannot find a module named ‘Image Maker’, and I have no idea how I should create an empty volume; there is no clear explanation of how to do this. I imported my .stl file and exported it as a binary labelmap, which now shows up in the Volumes module. I cannot set any linear attenuation coefficient values there, though.
You are almost there. You can already generate a DRR from this using the Volume Rendering module. To make it look more realistic, you need to adjust the transfer functions, which might be a bit hard if your voxel values are 0 and 1 (the defaults for a binary labelmap).
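A minimal sketch of that step, assuming the exported binary labelmap is loaded as a volume node named “LabelMap” (the node name, attenuation values and opacity points are placeholders): it replaces the 0/1 voxels with attenuation values and enables Volume Rendering with a simple opacity ramp as a starting point for adjusting the transfer functions.

```python
# Sketch only: names and numbers are placeholders to adjust for your data.
import numpy as np
import vtk
import slicer

labelmapNode = slicer.util.getNode("LabelMap")
labels = slicer.util.arrayFromVolume(labelmapNode)

muInside = 0.06    # placeholder: linear attenuation of the model material, 1/mm
muOutside = 0.0    # air / empty space
mu = np.where(labels > 0, muInside, muOutside).astype(np.float32)

# Put the attenuation values into a new scalar volume with the same geometry
muVolume = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "AttenuationMap")
slicer.util.updateVolumeFromArray(muVolume, mu)
ijkToRas = vtk.vtkMatrix4x4()
labelmapNode.GetIJKToRASMatrix(ijkToRas)
muVolume.SetIJKToRASMatrix(ijkToRas)
muVolume.CreateDefaultDisplayNodes()

# Enable Volume Rendering and set a simple scalar opacity ramp to tune by eye
vrLogic = slicer.modules.volumerendering.logic()
vrDisplayNode = vrLogic.CreateDefaultVolumeRenderingNodes(muVolume)
vrDisplayNode.SetVisibility(True)
scalarOpacity = vrDisplayNode.GetVolumePropertyNode().GetVolumeProperty().GetScalarOpacity()
scalarOpacity.RemoveAllPoints()
scalarOpacity.AddPoint(0.0, 0.0)
scalarOpacity.AddPoint(muInside, 0.3)
```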
Hi guys! Although this is an older topic, I thought I’d give it a try since I have a similar question:
The uploaded picture is an orthographic projection of an STL design I made, which is meant to mimic female breast parenchyma in mammography. I created the projection using an orthographic projection script in Python.
So much for the introduction; now here is my problem: I want to use 3D Slicer to create a more advanced orthographic projection, like a DRR. I have voxelized the STL file and want every filled voxel to have a certain attenuation A, and all the space in between (i.e. all empty voxels) to have a certain attenuation B. Thanks in advance for your help!
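In plain numpy, the mapping and projection described there would look roughly like the sketch below; the input file, the A and B values, the voxel size and the projection axis are all placeholders.

```python
# Rough sketch: filled voxels get attenuation A, empty voxels attenuation B,
# then a parallel projection and the Beer-Lambert law. All numbers are placeholders.
import numpy as np

voxels = np.load("breast_voxelized.npy")   # placeholder: boolean array from the voxelized STL
A = 0.05          # attenuation of the parenchyma model, in 1/mm
B = 0.001         # attenuation of the surrounding space, in 1/mm
voxel_size = 0.5  # voxel size along the projection axis, in mm

mu = np.where(voxels, A, B)
line_integral = mu.sum(axis=0) * voxel_size   # project along the first axis
image = np.exp(-line_integral)                # simulated transmission image
```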