I want to get the surface of an MRI volume

Hello,

I’m an intern engineer in a biomedical lab and I’m new to 3D Slicer and to 3D visualization / 3D imaging.

I am currently working on MRI data of brains and I’m interested only in the information located at the brain surface (blood vessels at the surface, lobes, sulci). To be more precise, I would like to project this 3D surface onto a 2D plane in order to do some image processing on it.

Do you have any idea how I can extract the surface from my NIfTI data with 3D Slicer?

Thank you very much

You may be able to use FreeSurfer for your project, since it projects each brain hemisphere to a sphere, and I think it can project to a plane too. You would need several steps if your MRI isn’t already in a form FreeSurfer supports, but SynthSR is good for that. Also, FreeSurfer only handles the brain itself, so you would need to segment the surface vessels another way, perhaps by thresholding near the surface extracted by FreeSurfer. The links below might help.

Thank you very much, I’ll investigate these links. About the segmentation: we’re not approaching it with machine learning/deep learning.

The idea would be :

  • from the NIfTI data, extract the surface points (I guess it’s possible with Slicer) and discard the useless information (the inside of the brain)
  • rotate the camera to a given plane looking at the region of the surface we want to observe
  • project this surface onto the camera plane

I would like to obtain a 2D image of my surface before running a vessel segmentation on it. Since I don’t need an accurate segmentation, simple morphological tools should be enough (I use T1 MRI, where the blood vessels are hyperintense). My biggest issue is extracting the surface points of my MRI volume.

I’ve already found a Python function that allows me to extract a surface mesh from a volume node. From this surface mesh I can then generate a segmentation node, and it would be great if I could erode this segmentation node by 1 or 2 voxels.
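A minimal sketch of that pipeline in Slicer’s Python console (the threshold values are placeholders, and `volumeNode` is assumed to be the already-loaded MRI; the Margin effect works in millimeters, so the voxel spacing is used to convert an erosion of ~2 voxels):

```python
import slicer

# Create a segmentation aligned with the MRI volume (volumeNode is assumed loaded).
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
segmentId = segmentationNode.GetSegmentation().AddEmptySegment("brain")

# Drive the Segment Editor programmatically.
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)  # setMasterVolumeNode() in older Slicer
segmentEditorWidget.setCurrentSegmentID(segmentId)

# Initial mask by thresholding (placeholder intensity range).
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "50")
effect.setParameter("MaximumThreshold", "600")
effect.self().onApply()

# Erode by about 2 voxels: Margin effect with a negative margin, given in mm.
voxelSizeMm = min(volumeNode.GetSpacing())
segmentEditorWidget.setActiveEffectByName("Margin")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MarginSizeMm", str(-2.0 * voxelSizeMm))
effect.self().onApply()

# Export the eroded segment as a closed-surface model node.
shNode = slicer.vtkMRMLSubjectHierarchyNode.GetSubjectHierarchyNode(slicer.mrmlScene)
exportFolderItemId = shNode.CreateFolderItem(shNode.GetSceneItemID(), "Surface models")
slicer.modules.segmentations.logic().ExportAllSegmentsToModels(segmentationNode, exportFolderItemId)
```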

I’m not sure I’m being clear, but that’s the idea of my project.

You can extract the surface geometry using any of the skull stripping modules in Slicer (try the new HD-BrainExtraction extension) or any external tool.

You can then use the Probe volume with model module to color it by a chosen volume’s voxels. I would recommend segmenting the brain surface vessels in the 3D image and probing that volume, because you can segment the vessels more reliably and accurately in the original 3D volume than on an extracted surface.
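For reference, the module can also be run from Python, roughly like this (the node variables are assumptions, and the parameter names follow the module’s CLI interface as I recall it, so double-check them against the module documentation):

```python
import slicer

# Assumes volumeNode (the image to probe) and surfaceModelNode already exist.
outputModelNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "ProbedSurface")
parameters = {
    "InputVolume": volumeNode.GetID(),
    "InputModel": surfaceModelNode.GetID(),
    "OutputModel": outputModelNode.GetID(),
}
slicer.cli.runSync(slicer.modules.probevolumewithmodel, None, parameters)
```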

If you need a flattened (2D) model, then cut off the surface regions that you don’t need using the Dynamic Modeler module, and use texture mapping to get 2D coordinates. If you don’t care too much about distortion then you can use vtkTextureMapToPlane or vtkTextureMapToSphere. If you want to flatten with minimal distortion then you can use a conformal mapping algorithm, such as the Conformal texture mapping module in the SlicerHeart extension.
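For the simple planar case, a minimal VTK sketch (assuming `surfacePolyData` holds the cut surface, e.g. obtained with `modelNode.GetPolyData()` in Slicer):

```python
import vtk

# Generate per-point 2D texture coordinates by projecting onto a plane
# automatically fitted to the data bounds.
textureMapper = vtk.vtkTextureMapToPlane()
textureMapper.SetInputData(surfacePolyData)
textureMapper.AutomaticPlaneGenerationOn()
textureMapper.Update()

# The output carries (u, v) coordinates in [0, 1] for every surface point.
polyDataWithUV = textureMapper.GetOutput()
uvCoords = polyDataWithUV.GetPointData().GetTCoords()
```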

Thank you very much! I’ve been trying some of the tools you suggested; it’s exactly what I need.

I have another question: is there a tool or reference Python script to rotate the camera by a given angle, take a screenshot of the 3D view, then rotate and capture again? This for a full turn!

Yes, see the ScreenCapture module.
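For a scripted full turn, something like this should work from the Python console (the step count, increment, and output path are just examples):

```python
import ScreenCapture
import slicer

threeDView = slicer.app.layoutManager().threeDWidget(0).threeDView()
threeDView.setPitchRollYawIncrement(10)  # degrees per rotation step

captureLogic = ScreenCapture.ScreenCaptureLogic()
for step in range(36):  # 36 steps x 10 degrees = full 360-degree turn
    threeDView.yaw()  # rotate the camera around the vertical axis
    threeDView.forceRender()
    captureLogic.captureImageFromView(threeDView, f"/tmp/rotation_{step:03d}.png")
```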

Thank you, I was able to do what I wanted. However, I now face a new problem: I want to remove the lighting effects from the 3D rendering in order to have uniform lighting and no reflections. Is this possible? I saw a module named Lights in the Sandbox extension that should do what I want, but I struggle to build it from source and my Extension Manager doesn’t work.

I’m on Fedora Linux.

If you go to the Advanced section of the Models module, you can set the surface properties.

https://slicer.readthedocs.io/en/latest/user_guide/modules/models.html
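These are the same ambient/diffuse/specular properties exposed in that section, and they can also be set from the Python console; a minimal sketch, assuming `modelNode` is your surface model:

```python
# Make the model's shading uniform: full ambient light, no directional
# diffuse shading, no specular highlights (reflections).
displayNode = modelNode.GetDisplayNode()
displayNode.SetAmbient(1.0)
displayNode.SetDiffuse(0.0)
displayNode.SetSpecular(0.0)
```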

I’ve already found those options, but I don’t yet understand how to apply the same changes to volume rendering! I create a model from a volume thanks to this: Python FAQ — 3D Slicer documentation

You can change the appearance of models in the Models module. You can change the appearance of volume-rendered images in the Volume Rendering module.
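If the goal is to remove lighting from the volume rendering specifically, disabling shading on the volume property should give a uniform look (equivalent to unchecking Shade in the Volume Rendering module); a minimal sketch, assuming `volumeNode` is your MRI volume and its volume rendering nodes are created here:

```python
import slicer

vrLogic = slicer.modules.volumerendering.logic()
vrDisplayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)

# Turning shading off removes directional lighting and specular reflections.
volumeProperty = vrDisplayNode.GetVolumePropertyNode().GetVolumeProperty()
volumeProperty.ShadeOff()
```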

Hey everyone,

I was able, thanks to your advice, to complete my objectives. However, I have met an issue I would like to resolve. The voxel spacing of MRI images is saved in the NIfTI files, and I guess this spacing is then used for the 3D reconstruction of the structure through volume rendering. I would like to know whether I can access the new “voxel spacing” generated by the 3D view. To be clearer, I need the pixel spacing of screen-captured images; this information is needed for my registration task.

Screen captures are typically not used for analysis, as they have low bit depth (8 bits per pixel, while most original images use at least 10-12 bits), they may contain burnt-in annotations, etc. However, if you still want to use them, you can get the spacing by dividing the slice view’s field of view (FOV, in physical units) by the slice view’s dimensions (in pixels).
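From the Python console that computation could look like this (using the red slice view as an example):

```python
import slicer

# Pixel spacing of a slice view capture = physical FOV / view size in pixels.
sliceNode = slicer.app.layoutManager().sliceWidget("Red").mrmlSliceNode()
fov = sliceNode.GetFieldOfView()   # (width, height, thickness) in mm
dims = sliceNode.GetDimensions()   # (width, height, 1) in pixels
spacingX = fov[0] / dims[0]  # mm per pixel, horizontal
spacingY = fov[1] / dims[1]  # mm per pixel, vertical
print(f"Capture pixel spacing: {spacingX:.3f} x {spacingY:.3f} mm")
```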

Thank you for the information. I’m answering a month later because I set this issue aside to work on other relevant parts of the project. Now I’m coming back to it: since I only want to segment the external surface of the brain cortex, how can I access this information without using screen captures? I use screen captures to make a binary segmentation of the surface (blood vessels and gray matter), but indeed the bit depth is quite low and I lose a lot of information about the smaller blood vessels.

Ideally I would like to have a 2D flattened version of the surface, with a minimum of distortion. Using screen captures was somewhat useful in our project because we could imitate the camera’s viewpoint.

Measuring the surface area, creating a flattened surface with minimal distortion, and segmenting surface blood vessels are all quite different tasks. Slicer has solutions for all of them, but it is not clear what exactly you want to do. Could you attach a few screenshots or other illustrations that explain your end goal?

The main idea is to perform image registration between MRI data and optical images in order to compare functional area mapping; here is an illustrative example:
[image: registration example]
The optical image is acquired during the surgical procedure, while the functional MRI is acquired a few hours before the operation.
We have two types of MRI sequences, T1 and FLAIR: on T1 we see the blood vessels better, and on FLAIR the sulci. On the optical images we can see both sulci and blood vessels, so we got the idea of segmenting the blood vessels on both the T1 and the optical images in order to perform the registration. Here is another example below:

However, we do not want to perform a 2D/3D registration (where the optical image is the moving image); we want to perform a 2D/2D registration from the optical image to the surface of the reconstructed brain MRI. That’s why we need a flattened image of the reconstructed brain surface.


Thank you, this additional information was very helpful.

The “proper” solution for this task would be 2D/3D registration.

You can compute your optical camera’s intrinsic and extrinsic parameters from a set of corresponding points in the optical image and the 3D volume rendered image. Once you have your camera calibration, you can do anything: you can rectify the image (remove non-linear distortions) by using the intrinsic camera parameters, and then using the extrinsic camera parameters either project the 3D volume rendered image (or extracted 3D brain surface) to the camera image; or project the optical image to the brain surface. This would be fairly easy to implement if you can find a readily usable 2D/3D calibration algorithm implementation.
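A sketch of that workflow with OpenCV (all input arrays here are hypothetical placeholders: `points3d` are RAS/world coordinates picked on the brain surface, `points2d` their matching pixels in the optical image, and `K`/`dist` come from a prior intrinsic calibration):

```python
import cv2
import numpy as np

# Extrinsics (camera pose) from N >= 4 corresponding 2D/3D points.
# points3d: (N, 3) float32, points2d: (N, 2) float32.
ok, rvec, tvec = cv2.solvePnP(points3d, points2d, K, dist)

# Rectify the optical image: remove lens distortion using the intrinsics.
rectified = cv2.undistort(opticalImage, K, dist)

# Project any 3D surface point into the optical image plane.
projected2d, _ = cv2.projectPoints(points3d, rvec, tvec, K, dist)
```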

Alternatively, you could compute the camera intrinsic parameters using a standard camera calibration in OpenCV (a very well-established method). Then you could compute the extrinsic parameters (camera position and orientation) fully automatically using 2D/3D image registration (aligning the volume-rendered image with the optical image). The ITK registration framework could be usable for this: ITK has similarity metrics and optimizers; the only non-trivial part is that you need to incorporate the volume rendering in the registration process.
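For the intrinsic calibration step, the standard OpenCV checkerboard workflow looks roughly like this (the board size and file names are placeholders):

```python
import cv2
import numpy as np

boardSize = (9, 6)  # inner corners per checkerboard row/column
objp = np.zeros((boardSize[0] * boardSize[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:boardSize[0], 0:boardSize[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for path in ["calib_01.png", "calib_02.png"]:  # example photos of the board
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, boardSize)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# K is the intrinsic matrix, dist the lens distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```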