Segmenting vasculature from two-photon imaging

This is on behalf of a colleague who is interested in using 3D Slicer to visualize and segment vasculature from mouse brains using two-photon imaging.

Specifically, they are interested in volume rendering of the data and in performing interactive segmentation and vessel tracing within the rendered volume, with the aim of producing a 3D reconstruction of the vasculature and extracting metrics from it, such as vessel diameter, length, tortuosity (or straightness), branching order, branching angle, etc.

He sent some examples of what they are trying to visualize and segment with Imaris currently. Any pointers would be much appreciated.

What would the interactive segmentation consist of? Is this data suitable for thresholding, or would a more involved vessel segmentation algorithm (such as vmtk) be needed?

They trace the vessels in the 3D rendering and then an algorithm paints the branches, as I understand it. In Imaris they don't seem to use thresholding, possibly due to the change in intensities as they go deeper into the tissue.
The ultimate goal is to get metrics from those branches, which I guess vmtk can do. But they first need a semi-automated way of extracting them.
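To illustrate the kind of per-branch metrics mentioned here (which vmtk's centerline tools can also compute, among others), here is a minimal numpy sketch of length, straight-line distance, and tortuosity for a single branch. The centerline point array is hypothetical; in practice it would come from a tracing or centerline-extraction step.

```python
import numpy as np

def branch_metrics(points):
    """Compute simple metrics for one vessel branch.

    points: (N, 3) array of centerline coordinates in physical units
    (e.g. micrometers), ordered from one branch endpoint to the other.
    """
    points = np.asarray(points, dtype=float)
    # Path length: sum of distances between consecutive centerline points.
    segment_lengths = np.linalg.norm(np.diff(points, axis=0), axis=1)
    path_length = segment_lengths.sum()
    # Chord: straight-line distance between the two endpoints.
    chord = np.linalg.norm(points[-1] - points[0])
    # Tortuosity: path length over chord length (1.0 for a straight vessel).
    tortuosity = path_length / chord
    return path_length, chord, tortuosity

# Example: a right-angle bend, so path length 2 and chord sqrt(2).
pts = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
length, chord, tort = branch_metrics(pts)
```

Branching order and branching angle would additionally require the tree topology at each bifurcation, which is why a semi-automated extraction step has to come first.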

@muratmaga can you provide a small sample? It’s hard to tell from the image you posted how easy the task would be. For example, is the pixel spacing isotropic?

What is the current status of this topic? Is there any solution?

Not much, unfortunately. My colleague never gave me a sample dataset, so we could not follow up.


I’m also investigating the possibility of using 3D Slicer for segmentation, reconstruction and tracking of structures in 3D. I guess I can provide some data. Since 3D microscopy images are composed of channels (red, green, blue, etc.), I believe that thresholding should be possible.

I will check my data and provide a dataset if I can find a suitable one.


Well, I now have a dataset that can be used.


That’s great - if you can post a sample somewhere I’ll bet people would try some experiments and provide feedback.

Hi There,

Finally, I have managed to image some 3D stacks.

Properties:
Nuclear staining for the kidney samples. There are two folders in the link; the Kidney_nucleus_Series folder contains the tiff image series for nuclear staining and the Rendering folder contains some rendering videos that I have created with depth coding using the confocal microscope software.

I will update here with more complex data types (such as vasculature, and specific staining for specific cell types).

I believe 3D Slicer can create such a rendering view, but I could not figure out how.

Please let me know if you have any questions regarding the data.

Thanks for sharing the data. It’s fairly easy to work with in Slicer. Just drag and drop one file, then turn on the options and uncheck the Single File option. Then use the Vector to Scalar Volume module to eliminate the redundant channels and then turn on Volume Rendering for the newly created volume.
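Conceptually, the Vector to Scalar Volume step collapses the redundant RGB channels into a single grayscale channel. A minimal numpy sketch of the same idea, outside of Slicer (assuming the three channels really carry the same signal, so averaging is a fair conversion):

```python
import numpy as np

def rgb_stack_to_scalar(stack):
    """Collapse an RGB image stack to a single scalar channel.

    stack: (slices, height, width, 3) integer array.
    When the three channels are redundant, averaging them is
    equivalent to a plain grayscale conversion.
    """
    return np.asarray(stack, dtype=float).mean(axis=-1)

# Tiny synthetic stack: 2 slices of 4x4 pixels, identical channels.
stack = np.tile(np.arange(16, dtype=np.uint8).reshape(1, 4, 4, 1),
                (2, 1, 1, 3))
scalar = rgb_stack_to_scalar(stack)
```

The resulting single-channel volume is what the Volume Rendering module's scalar opacity and color transfer functions operate on.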


Here is the default rendering of your data. I used ImageStacks from SlicerMorph, since it makes RGB->grayscale conversion easier as well as allows downsampling etc…

In Slicer, opacity/color maps are determined by the intensity value of each voxel, but as I understand it, you want to color the voxels based on their position in the stack. That is, dots higher in the stack would get a different color than ones lower in the stack. Is that what you mean by depth coding?

It should be doable by scripting, but I don’t think it is exposed via UI.
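As a rough sketch of what such a script could compute (leaving out the actual Slicer transfer-function API calls, which are not shown here): map each voxel's slice index to a color so that structures higher in the stack get a different hue than lower ones. This example uses a simple blue-to-red depth ramp, which is an assumption, not the specific color scheme the confocal software used.

```python
import numpy as np

def depth_coded_rgb(stack):
    """Color each voxel by its depth (slice index) in the stack.

    stack: (slices, height, width) intensity array.
    Returns a (slices, height, width, 3) float array where intensity
    modulates brightness and depth drives a blue->red color ramp.
    """
    stack = np.asarray(stack, dtype=float)
    n_slices = stack.shape[0]
    # Normalized depth per slice: 0.0 at the top, 1.0 at the bottom.
    depth = np.linspace(0.0, 1.0, n_slices)[:, None, None]
    intensity = stack / stack.max()
    rgb = np.empty(stack.shape + (3,))
    rgb[..., 0] = intensity * depth          # red grows with depth
    rgb[..., 1] = 0.0
    rgb[..., 2] = intensity * (1.0 - depth)  # blue fades with depth
    return rgb

vol = np.ones((5, 2, 2))  # uniform intensity, 5 slices
colored = depth_coded_rgb(vol)
```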


Hi Steve,
Thanks for the quick update. It looks nice.
I plan to have bigger datasets (probably 1–5 GB in size). Do you think 3D Slicer is able to render data that large? Is it possible to segment the volume (via thresholding) and quantify it (size and co-localization)?
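For the size-quantification part of this question, a minimal numpy sketch of what global thresholding plus volume measurement boils down to (the threshold value and voxel spacing here are hypothetical; in Slicer this would be done with the Segment Editor and segment statistics):

```python
import numpy as np

def threshold_volume(stack, threshold, voxel_volume_um3=1.0):
    """Segment a stack by global thresholding and quantify total size.

    stack: 3D intensity array.
    threshold: voxels strictly above this value are foreground.
    voxel_volume_um3: physical volume of one voxel (product of spacings).
    Returns the binary mask and the total segmented volume.
    """
    mask = stack > threshold
    total_volume = mask.sum() * voxel_volume_um3
    return mask, total_volume

# Synthetic stack: a bright 2x2x2 cube inside a dark background.
stack = np.zeros((4, 4, 4))
stack[1:3, 1:3, 1:3] = 100
mask, vol_um3 = threshold_volume(stack, threshold=50,
                                 voxel_volume_um3=0.5)
```

Co-localization between two channels could then be quantified the same way, e.g. as the volume of the logical AND of two such masks.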

Hi Murat,

The data is actually not RGB (it is possibly 12–14 bit). The rendered video is RGB. I used color coding (as depth coding) to demonstrate the thickness of the sample. It is good to know that downsampling is possible via SlicerMorph.

PS: I will upload another dataset (containing vasculature together with nuclear staining) that may allow studying tracking better.

That should be no problem if you have enough memory, although some steps, like decompressing, are single threaded and can be slow. Cropping and downsampling can help a lot when exploring the data.
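As a back-of-the-envelope illustration of why downsampling helps so much, here is a numpy sketch using plain striding (the crudest form of downsampling; a tool like ImageStacks can do this more carefully, e.g. with averaging):

```python
import numpy as np

def downsample(stack, factor=2):
    """Downsample a 3D stack by keeping every `factor`-th voxel along
    each axis. Memory use drops by roughly factor**3."""
    return stack[::factor, ::factor, ::factor]

# A 100 x 200 x 200 uint16 volume is about 8 MB in memory.
stack = np.zeros((100, 200, 200), dtype=np.uint16)
small = downsample(stack, factor=2)     # about 1 MB
reduction = stack.nbytes / small.nbytes  # ~8x smaller
```

The same factor-of-8 saving applies at any scale, which is what makes multi-gigabyte stacks practical to explore interactively after a 2x downsample.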

This example data was something like 180GB but can be explored at full res (not volume rendered).
