Digitally Reconstructed Radiograph Using Models Instead of Volumes

Is it possible using Slicer to create a simulated x-ray image, similar to a digitally reconstructed radiograph, that takes 3D models as its input rather than volumetric data (e.g. CT scans)? I’m envisioning a scenario in which 3D models (e.g. STL files) of one or more bones/implants/tools etc. can be placed in a 3D scene, assigned x-ray attenuation values, and an x-ray image simulated from them. Solutions that require the use of Python are viable options for me.

I know it’s possible to generate a digitally reconstructed radiograph from a CT scan as per the post linked below, but I wasn’t sure if it’s possible to do that using a more simplified model: a finite number of structures, each with a constant x-ray attenuation coefficient. I know it’s possible to convert surface models to volumetric data and then generate a DRR from those volumes, but that seems like a fairly inefficient solution (computationally) compared to something that calculates ray-triangle intersections directly.
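For reference, the quantity such a renderer would compute per ray is just the Beer-Lambert line integral. A minimal sketch in plain numpy (the attenuation values and path lengths below are made-up for illustration; in a real renderer the per-structure path lengths would come from the ray-triangle intersection tests):

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-sum_i mu_i * d_i), where mu_i is the linear
# attenuation coefficient of structure i and d_i is the ray's path length
# through that structure. With constant-attenuation models, each ray only
# needs these path lengths (from ray-triangle intersections), not a volume.

def ray_intensity(i0, mus, path_lengths):
    """Transmitted intensity for one ray crossing several homogeneous objects."""
    mus = np.asarray(mus, dtype=float)
    d = np.asarray(path_lengths, dtype=float)
    return i0 * np.exp(-np.sum(mus * d))

# Example: a ray crossing 2 cm of "soft tissue" (mu = 0.2 /cm) and
# 1 cm of "bone" (mu = 0.5 /cm); values are illustrative only.
print(ray_intensity(1.0, [0.2, 0.5], [2.0, 1.0]))  # about 0.4066
```

Repeating this per detector pixel (with path lengths from the mesh intersections) gives the simulated radiograph directly, without ever voxelizing the models.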

Thanks in advance for your help!


Try this out… seems to work reasonably well, but not sure if it qualifies as a “simulation”.

1. Create an empty image volume with the Image Maker module.
2. Import your .stl files (or other mesh formats) and convert them to a segmentation (Segmentations module --> import models).
3. Fill the voxels of the empty image volume with the linear attenuation coefficient of each tissue (look these up for e.g. bone, soft tissue, and lung tissue at the photon energy you’re interested in… here I used 10 kVp and looked up the values in the NIST database). You could script this, or use a combination of Segmentations --> “Mask volume” and the Add Scalar Volumes module in the GUI.
4. Finally, once your linear attenuation coefficient map (image) is complete, use the Simple Filters module --> MeanProjectionImageFilter or SumProjectionImageFilter.
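If you do script the voxel-filling step, the idea boils down to remapping a label volume to attenuation coefficients and then projecting, which SumProjectionImageFilter does by summing along one axis. A rough numpy sketch (the label values and coefficients here are placeholders; in Slicer you would pull the array from your labelmap node, e.g. with `slicer.util.arrayFromVolume`):

```python
import numpy as np

# Toy label volume: each voxel holds a structure ID.
labels = np.zeros((4, 4, 4), dtype=np.uint8)
labels[1:3, 1:3, 1:3] = 1          # "bone" block
labels[0, :, :] = 2                # "soft tissue" slab

# Linear attenuation coefficients (1/cm) per label -- illustrative only;
# look up real values for your photon energy (e.g. the NIST tables).
mu = {0: 0.0, 1: 0.5, 2: 0.2}
mu_map = np.zeros(labels.shape, dtype=float)
for label_value, coeff in mu.items():
    mu_map[labels == label_value] = coeff

# Sum-projection along one axis: the line integral a DRR needs
# (this is what SumProjectionImageFilter computes).
drr = mu_map.sum(axis=0)
print(drr.shape)                   # (4, 4)
```

Note this only handles axis-aligned projections; for arbitrary view angles you would rely on the projection filters or volume rendering as described above.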

The result is at the bottom right.


There are many options for how to do this. The ideal solution depends on your requirements (how many input models you have, what resolution you need, how quickly the objects move, how complex their geometry is, what CPU and GPU are available, etc.).

@Hamburgerfinger’s suggestion is good. Generating binary or fractional volumes and rendering them on the GPU is good, too (after the volumes are generated, the GPU renderer can probably render the DRR image at a high frame rate).

Perfect. Thanks @lassoan and @Hamburgerfinger!


For me this is still unclear. I cannot find a module named ‘Image Maker’. I have no idea how to create an empty volume, and there is no clear explanation of how to do this. I imported my .stl and exported it as a binary labelmap. I can now find it in the Volumes module, but I cannot set any linear attenuation coefficient values.

You are almost there. You can already generate a DRR from this using the Volume Rendering module. To make it look more realistic, you need to adjust the transfer functions, which might be a bit hard if your voxel values are 0 and 1 (the defaults for binary labelmaps).

I would recommend converting the labelmap volume to a scalar volume in the Volumes module, then replacing the label values with Hounsfield unit values (e.g. air: -1000; bones and tools: around 1000-2000) and adding some noise using numpy. See this example for details:

You can change label value 3 to Hounsfield unit 1200 by calling `voxels[voxels == 3] = 1200`.
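Putting the remapping and the noise together, a minimal numpy sketch (the array shape, label values, HU assignments, and noise level below are all made up for illustration; in Slicer the `voxels` array would come from your converted scalar volume):

```python
import numpy as np

# Toy stand-in for the converted scalar volume: label 3 marks a bone/implant.
voxels = np.zeros((32, 32, 32), dtype=np.int16)
voxels[8:24, 8:24, 8:24] = 3

# Remap each label value to an approximate Hounsfield unit.
hu = {0: -1000, 3: 1200}           # air, bone/tool (illustrative)
out = np.empty(voxels.shape, dtype=float)
for label_value, value in hu.items():
    out[voxels == label_value] = value

# Add some Gaussian noise so the DRR looks less artificial
# (standard deviation of ~20 HU, chosen arbitrarily here).
rng = np.random.default_rng(0)
out += rng.normal(0.0, 20.0, out.shape)
```

After this, writing `out` back into the volume node (e.g. with `slicer.util.updateVolumeFromArray`) and rendering with the Volume Rendering module as described above should give a more realistic-looking DRR.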