I want to get the surface of an MRI volume

Hello,

I’m an intern engineer in a biomedical lab and I’m new to 3D Slicer and to 3D visualization / 3D imaging.

I am currently working on MRI data of brains and I’m interested only in the information located at the brain surface (blood vessels at the brain surface, lobes, sulci). To be more precise, I would like to project this 3D surface onto a 2D plane in order to do some image processing on it.

Do you have any idea how I can extract the surface from my NIfTI data with 3D Slicer?

Thank you very much

You may be able to use FreeSurfer for your project, since it projects each brain hemisphere to a sphere, and I think it can project to a plane too. You would need several steps if your MRI isn’t already in a form FreeSurfer supports, but SynthSR is good for that. Also, FreeSurfer only handles the brain itself, so you would need to segment the surface vessels another way, perhaps by thresholding near the surface as extracted by FreeSurfer. The links below might help.

Thank you very much, I’ll investigate these links. About the segmentation: we’re not approaching it with machine learning/deep learning.

The idea would be:

  • from a NIfTI volume, extract the surface points (I guess it’s possible with Slicer) and discard the useless information (the inside of the brain)
  • rotate the camera to a given plane looking at the region of the surface we want to observe
  • project this surface onto the camera plane

I would like to obtain a 2D image of my surface before running a vessel segmentation on it. Since I don’t need an accurate segmentation, simple morphological tools should be enough (I use T1 MRI, where the blood vessels are hyperintense). My biggest issue is extracting the surface points of my MRI volume.

I’ve already found a Python function that lets me extract a surface mesh from a volume node. From this surface mesh I can then generate a segmentation node. And it would be great if I could erode this segmentation by 1 or 2 voxels.
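Roughly, this is the kind of script I have in mind (a rough sketch; `volumeNode` and `surfaceModel` are placeholders for my own nodes, and I’m assuming the Segment Editor’s Margin effect accepts a negative margin to shrink a segment):

```python
import slicer

# Build a segmentation from the extracted surface mesh,
# using the original volume's geometry as reference
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
slicer.modules.segmentations.logic().ImportModelToSegmentationNode(surfaceModel, segmentationNode)
segmentationNode.CreateBinaryLabelmapRepresentation()

# Erode by applying the Margin effect with a negative margin (in mm)
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)
segmentEditorWidget.setActiveEffectByName("Margin")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MarginSizeMm", "-2.0")  # negative value shrinks the segment
effect.self().onApply()
```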

I’m not sure I’m being clear, but that’s the idea of my project.

You can extract the surface geometry using any of the skull stripping modules in Slicer (try the new HD-BrainExtraction extension) or any external tool.

You can then use the Probe volume with model module to color it by a chosen volume’s voxels. I would recommend segmenting the brain surface vessels in the 3D image and probing that volume, because you can segment the vessels more reliably and accurately in the original 3D volume than on some extracted surface.
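For example (a rough sketch; `volumeNode` and `brainModel` stand in for your own nodes, and the parameter names are what I recall from the module description):

```python
# Sample the volume's voxel values onto the surface model via the CLI module
outputModel = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "ProbedSurface")
parameters = {
    "InputVolume": volumeNode.GetID(),
    "InputModel": brainModel.GetID(),
    "OutputModel": outputModel.GetID(),
}
slicer.cli.runSync(slicer.modules.probevolumewithmodel, None, parameters)
```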

If you need a flattened (2D) model, then cut off the surface regions that you don’t need using the Dynamic Modeler module, and use texture mapping to get 2D coordinates. If you don’t care too much about distortion, you can use vtkTextureMapToPlane or vtkTextureMapToSphere. If you want to flatten with minimal distortion, you can use a conformal mapping algorithm, such as the Conformal texture mapping module in the SlicerHeart extension.
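For the simple planar case, something like this should generate 2D texture coordinates on a model (a sketch; `modelNode` is assumed to be your trimmed surface model):

```python
import vtk

# Project the mesh onto an automatically fitted plane to get (u, v) coordinates
tmap = vtk.vtkTextureMapToPlane()
tmap.SetInputData(modelNode.GetPolyData())
tmap.AutomaticPlaneGenerationOn()  # fit the projection plane to the mesh bounds
tmap.Update()
modelNode.SetAndObservePolyData(tmap.GetOutput())
# The 2D coordinates are now stored as the mesh's texture coordinate array
```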

Thank you very much! I’ve been trying some of the tools you suggested; it’s exactly what I need.

I have another question: is there a tool or a reference Python script to rotate the camera by a given angle, take a screenshot of the 3D view, then rotate and capture again, and so on for a full revolution?

Yes, see the ScreenCapture module.
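The module’s GUI can already capture a full 360° rotation as an image sequence or video. If you prefer scripting it, something along these lines should work (a sketch; the output path is just an example):

```python
import ScreenCapture

view = slicer.app.layoutManager().threeDWidget(0).threeDView()
capture = ScreenCapture.ScreenCaptureLogic()

numberOfFrames = 36
view.setPitchRollYawIncrement(360 / numberOfFrames)
for i in range(numberOfFrames):
    view.yaw()  # rotate the camera around the view's vertical axis
    capture.captureImageFromView(view, f"/tmp/rotation_{i:03d}.png")
```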

Thank you, I was able to do what I wanted. However, I’m facing a new problem: I want to remove the lighting effects from the 3D rendering in order to get uniform lighting without reflections. Is that possible? I saw a Lights module in the Sandbox extension that should do what I want, but I’m struggling to build it from source and my Extension Manager doesn’t work.

I’m on Fedora Linux.

If you go to the Advanced section of the Models module, you can set the surface material properties.

https://slicer.readthedocs.io/en/latest/user_guide/modules/models.html
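These material properties can also be set from Python to get flat, uniform shading (a sketch; `modelNode` stands in for your model node):

```python
displayNode = modelNode.GetDisplayNode()
displayNode.SetAmbient(1.0)   # full ambient: uniform base brightness
displayNode.SetDiffuse(0.0)   # no diffuse shading from the light direction
displayNode.SetSpecular(0.0)  # no specular highlights (reflections)
```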

I’ve already found those options, but I don’t yet understand how to apply model changes to volume rendering! I create a model from a volume thanks to this: Python FAQ — 3D Slicer documentation

You can change the appearance of models in the Models module. You can change the appearance of volume-rendered images in the Volume Rendering module.
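For volume rendering, disabling shading should give the uniform look you describe (a sketch; `volumeNode` is assumed to be your loaded volume):

```python
vrLogic = slicer.modules.volumerendering.logic()
vrDisplayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
volumeProperty = vrDisplayNode.GetVolumePropertyNode().GetVolumeProperty()
volumeProperty.ShadeOff()  # turn off lighting so the rendering is evenly lit
```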

Hey everyone,

Thanks to your advice, I was able to complete my objectives. However, I’ve met an issue I would like to resolve. The voxel spacing of the MRI images is saved in the NIfTI files, and I guess this spacing is then used for the 3D reconstruction of the structure through volume rendering. I would like to know whether we can access the new “pixel spacing” generated by the 3D view. To be clearer: I need the pixel spacing of the screen-captured images, since this information is needed for my registration task.

Screen captures are typically not used for analysis, as they have low bit depth (8 bits per pixel, while most original images use at least 10-12), they may contain burnt-in annotations, etc. However, if you still want to use them, you can get the spacing by dividing the slice view’s field of view (FOV, in physical size) by the slice view’s dimensions (in pixels).
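For example, for the red slice view (a sketch):

```python
sliceNode = slicer.app.layoutManager().sliceWidget("Red").mrmlSliceNode()
fov = sliceNode.GetFieldOfView()   # physical extent of the view in mm (x, y, z)
dims = sliceNode.GetDimensions()   # view size in pixels (x, y, z)
spacingX = fov[0] / dims[0]        # mm per pixel horizontally
spacingY = fov[1] / dims[1]        # mm per pixel vertically
print(f"Pixel spacing: {spacingX:.3f} x {spacingY:.3f} mm")
```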