Can you create a model using a CT scan from only one axis?

Hello, I downloaded a file for a fossa skull from MorphoSource, but the scan is only from one axis and doesn't load in Slicer. Is there a way to create a model from a scan like this? It shows as a model in MorphoSource's built-in rendering software.

Here are the files I am trying to use:
https://drive.google.com/drive/folders/1v1U0WpTqCZBoR_Ohvh2Ghy53FU7gKIsT?usp=sharing

Thank you to anyone who can help! :smiley:

This is a huge image (16GB), so you need a powerful computer to load it, or you can load it at a lower resolution. I would recommend using the ImageStacks module of the SlicerMorph extension, which allows loading large image stacks conveniently, optionally at half or quarter resolution and/or cropped to the relevant region of interest.

For example, I was able to load and volume render the image without issues at half resolution (2GB).

Volume rendering of the full-resolution (16GB) image failed with my GPU, which has only 12GB of RAM.
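For a rough sense of the numbers: halving the resolution along each axis keeps only 1/8 of the voxels, which is where the 2GB figure comes from.

```python
# Rough memory arithmetic: downsampling by 2x along each of the 3 axes
# keeps 1/8 of the voxels, so a 16GB volume shrinks to about 2GB.
full_size_gb = 16.0
half_res_gb = full_size_gb / (2 ** 3)      # 2x downsampling per axis -> ~2GB
quarter_res_gb = full_size_gb / (4 ** 3)   # 4x downsampling per axis -> ~0.25GB
print(f"half resolution:    ~{half_res_gb:.2f} GB")
print(f"quarter resolution: ~{quarter_res_gb:.2f} GB")
```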

For volume rendering, enable SlicerMorph customization options in application settings, use the 16-bit preset, and adjust the offset slider.
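If you prefer to do this from the Python console instead of the GUI, a minimal sketch would look like the following. The volume node name and the preset name are assumptions; use whatever names appear in your scene and in the Volume Rendering module's preset menu.

```python
import slicer

# Assumption: "fossa_skull" is the name of the volume loaded via ImageStacks.
volumeNode = slicer.util.getNode("fossa_skull")

vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)

# Assumption: replace "MicroCT-16bit" with the actual name of the 16-bit preset
# shown in the Volume Rendering module's preset menu.
preset = vrLogic.GetPresetByName("MicroCT-16bit")
if preset:
    displayNode.GetVolumePropertyNode().Copy(preset)

displayNode.SetVisibility(True)
```

Adjusting the offset is easiest to do interactively with the slider in the Volume Rendering module.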


@muratmaga I would recommend updating the SlicerMorph extension so that it always registers its extra volume rendering presets (or allows registering the presets separately from the other customization options).

I would also consider adding the 8-bit and 16-bit presets to Slicer core (with names such as “CT bone low-dynamic-range” and “CT bone high-dynamic-range”?). @muratmaga @pieper what do you think?

@studyskin you may want to familiarize yourself with the SlicerMorph tutorials, which cover all the steps of importing, visualizing, and segmenting microCT data: GitHub - SlicerMorph/Tutorials: SlicerMorph module tutorials

You will want to complete the sections on ImageStacks, Volume Rendering, and Segmentation.

As @lassoan pointed out, this is quite a large dataset. Our rule of thumb in Slicer is 6-8 times more memory than the dataset size, so you will need about 128GB of RAM. If you do want to benefit from GPU-based rendering, which will be fast, you need a GPU with at least 16GB of RAM. So, as advised, please proceed with downsampled datasets…

@lassoan adding the presets to Slicer would be great. Feel free to take them from SlicerMorph and add them. But again, we still need a way for people to easily add their own presets to Slicer as a feature. Also, how would you go about automatically adding volume rendering presets? Simply move them into another resource file that always gets executed?

The functionality to save and load custom volume properties already exists, but it is not well exposed in the UI. You can save .vp files and reload them, and then pick them in the Volume Rendering module (via the Inputs section, not the preset menu, so there is no icon).

If someone has time, it would be pretty easy to add something that saves the current volume property and a screen capture of the 3D view in a form that could be used as a preset along with an application setting to reload them when the app starts.
https://slicer.readthedocs.io/en/latest/developer_guide/script_repository.html#register-custom-volume-rendering-presets
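For reference, a sketch along the lines of that script repository recipe; the file path is a placeholder, and the exact file type string and preset handling may differ slightly between Slicer versions:

```python
import slicer

vrLogic = slicer.modules.volumerendering.logic()

# Save the volume property currently applied to a volume-rendered volume
# (the output path is a placeholder).
displayNode = slicer.util.getNodesByClass("vtkMRMLVolumeRenderingDisplayNode")[0]
vpNode = displayNode.GetVolumePropertyNode()
slicer.util.saveNode(vpNode, "/path/to/MyBonePreset.vp")

# Later (e.g. at application startup), load it back and register it as a preset.
loadedVpNode = slicer.util.loadNodeFromFile("/path/to/MyBonePreset.vp", "TransferFunctionFile")
vrLogic.AddPreset(loadedVpNode)  # shows up in the preset list, without a thumbnail icon
```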

Thank you so much, everyone! I really appreciate it. I'm just doing this out of personal interest, so I'm not very familiar with all the terms, but I understand it a lot better now!

This is now incorporated into SlicerMorph. All three volume rendering presets should now be visible to users regardless of whether they choose to enable customizations.

Thank you! I’ve started to clean up and improve the presets. As part of this work, I’ll propose to add these 3 presets to Slicer core and remove some of the less useful ones.
