We already have an Endoscopy module in Slicer. However, that module requires the lumen to be segmented already. I have been working on a module that performs centerline extraction followed by lumen segmentation for the pancreatic duct (it can be used for other lumens too). This may be a useful module for lumen analysis.
I have not made a demonstration webpage yet, just have the repository at:
I’m hoping to add this through the extension management system and would like to hear your suggestions/opinions.
I am very interested in your work. Thank you for sharing
May I ask/suggest: this would be very useful for us in the lung, to map the airways, especially if there is a nodule that we have to reach endoscopically. Often these nodules are peripheral (outer one third of the lung), and since the airways narrow and branch, it’s difficult to reach them with confidence. There is a ton of new equipment being pumped into this technology by many companies (LungPoint, electromagnetic navigation, robotic bronchoscopy, cone-beam CT, etc.), but for a low-cost/open-source setting we have very little. It would be ideal if you could help design something that helps navigate the airways to a lesion of interest, building on your background work on the pancreas.
There is some overlap with the VMTK extension, which offers an opportunity to reduce maintenance workload. For example, VMTK can compute vesselness, segment the lumen automatically using a few simple methods, and create a centerline tree. The tree is compatible with the Curves module, so you can automatically find the shortest path between two points (maybe the tree that your modules extract could be used too):
The resulting tree can then be used in the Slicer core Endoscopy module (if your modules create a curve node as output, that can be used there too).
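To illustrate the shortest-path lookup on a centerline tree, here is a plain Dijkstra search over an adjacency list. The node names and edge lengths below are made up for the example; the actual Curves module operates on markups nodes:

```python
import heapq

def shortest_path(tree, start, goal):
    """Dijkstra over a centerline tree given as {node: [(neighbor, length), ...]}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, length in tree.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the goal.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

# Toy branching tree: root -> A -> (B, C)
tree = {
    "root": [("A", 2.0)],
    "A": [("root", 2.0), ("B", 1.0), ("C", 3.0)],
    "B": [("A", 1.0)],
    "C": [("A", 3.0)],
}
path, length = shortest_path(tree, "B", "C")
print(path, length)  # ['B', 'A', 'C'] 4.0
```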
We should review what is common between existing and new modules and how to consolidate them, so that they combine all the good properties and features.
What is in the gth818n folder?
What does SegmentLumenFromAxis module do?
How does the VirtualEndoscopy module differ from the Endoscopy module in the Slicer core? Could they be merged into a single module?
Could we make some segmentation features available in the Segment Editor as effects?
These could be all great projects for the upcoming project week.
And this paper, for short: Nain D. An interactive virtual endoscopy tool with automatic path generation. Thesis, Massachusetts Institute of Technology, 2002.
It shows the centerline idea.
This does not affect the innovation of the project.
I agree, there might be some overlap with the VMTK package.
I recall from some discussions with Luca back in the old days that VMTK computes the vessel lumen surface using level set evolution, then computes the centerline from the lumen segmentation. I thought that a level set evolution that perfectly grabs the vessel without leaking is particularly hard, except in some high-contrast cases such as a big airway wall.
In contrast, the Virtual Endoscopy module computes the centerline first and then “grows out” from the centerline. This is what SegmentLumenFromAxis does.
gth818n was my student ID back in Georgia Tech and I used it as a namespace storing my tools/functions…
The current Endoscopy module assumes an already extracted surface, but both VMTK and the Virtual Endoscopy proposed here start from the volumetric image. They could be combined.
Also, I agree that making a Segment Editor effect for centerline computation would be a good idea. I’ll look into that.
I’ll check the wiki for the upcoming project week. Project week has always been so much fun!
Agreed. It only works for a single, very well contrasted vessel branch, but you can segment that using simple thresholding or Grow from seeds, so I did not find the level set method useful (other than that it creates a smoother surface). A few more segmentation tools were added to VMTK, but I did not have much luck using them either.
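The “simple thresholding” route for a well-contrasted vessel can be sketched as threshold-then-keep-largest-component. The synthetic data below is only an illustration of the idea, not VMTK’s or Slicer’s implementation:

```python
import numpy as np
from scipy import ndimage

# Synthetic volume: a bright "vessel" line plus an isolated bright noise voxel.
vol = np.zeros((20, 20, 20))
vol[10, 10, 2:18] = 100.0   # well-contrasted vessel
vol[2, 2, 2] = 100.0        # noise speck

# Threshold, then keep only the largest connected component.
mask = vol > 50.0
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, list(range(1, n + 1)))
largest = labels == (np.argmax(sizes) + 1)
print(largest.sum())  # 16 voxels: the vessel, with the noise removed
```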
Growing segmentation from a manually defined curve could be a good approach for challenging cases. I would be interested to see if it can be used for segmentation of intestines.
I agree. We should split the functionalities so that we can pick any of the segmentation and centerline extraction methods and then use the created centerline for visualization (travel along the centerline).
The current Endoscopy module does not deal with centerline extraction and segmentation, so probably it could be left as is (or some features added from your endoscopy module, if anything is missing). For the segmentation from centerline we could add a Segment Editor effect, as you suggested. If you have time next week, then we could make this a project for the project week.
Sounds great! I’ve just registered for the project week. I’ll describe the project I’m planning to work on on the wiki.
Yes, I used the same algorithm as “Grow from seeds” for lumen segmentation in SegmentLumenFromAxis.
I’ve tried that but had little luck… The intestine is very hard: the touching outer surfaces and the collapsed lumen make recognizing the topology very hard, even for a human, so it’s very hard even to draw the seeds. It’s relatively easy for an inflated colon, but hard even for a prepared colon (without inflation).
We have “Draw tube effect” (in SegmentEditorExtraEffects extension), which can draw a uniform-radius tube in the selected segment. Maybe we could add a feature to this effect: snap the tube walls to the image content.
I tried the “Draw tube” effect and it’s really nice. It seems to be determined purely by the fiducial points placed; the image content does not affect the tube trajectory.
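The simplest version of the proposed “snap to image content” behavior might move each placed point to the brightest voxel in a small neighborhood, assuming a vesselness (or intensity) volume where brighter means more vessel-like. This is just a sketch of the idea, not an existing effect:

```python
import numpy as np

def snap_to_local_max(image, point, radius=2):
    """Move a point to the brightest voxel within `radius` of it."""
    z, y, x = point
    zs = slice(max(z - radius, 0), z + radius + 1)
    ys = slice(max(y - radius, 0), y + radius + 1)
    xs = slice(max(x - radius, 0), x + radius + 1)
    patch = image[zs, ys, xs]
    dz, dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
    return (zs.start + dz, ys.start + dy, xs.start + dx)

# Toy image: the bright ridge runs at y=6, but the user clicked at y=5.
img = np.zeros((10, 10, 10))
img[:, 6, :] = 1.0
snapped = snap_to_local_max(img, (4, 5, 4))
print(snapped[1])  # 6: the point lands on the ridge
```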
The proposed extension has four possible “contact points” with the other modules:
For tracing a curve in the image, you place n (>= 2) fiducial points along a tubular structure. The algorithm then traces from the n-th point back to the 1st, passing through the most vessel-like regions in the image. Compared to the “Draw tube” effect, this module takes the image content into consideration. It is useful when you want to trace out a single unbranched vessel/tube whose contrast with its surroundings is poor, such as the pancreatic duct.
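The tracing idea described above (passing through the most vessel-like regions) can be sketched as a minimum-cost path search on a cost image such as inverse vesselness. This is only an illustration of the concept, in 2D for brevity, not the module’s actual code:

```python
import heapq
import numpy as np

def trace(cost, start, end):
    """Minimum-cost 4-connected path on a 2D cost image (Dijkstra)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nd, (ny, nx)))
    # Reconstruct the path from end back to start.
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Cost = inverse "vesselness": a cheap corridor along row 2, expensive elsewhere.
cost = np.full((5, 7), 10.0)
cost[2, :] = 1.0
path = trace(cost, (2, 0), (2, 6))
print(len(path))  # 7: the path stays inside the vessel-like corridor
```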
Once you have a major vessel traced out as a curve, the second module lets you place m fiducial points; it then finds a curve from each of the m points back to the traced curve, along the most vessel-like regions. This is useful when you want to trace a “fish bone” type structure, such as the SMA or the lower-limb vessels.
If you want to trace a tree-type structure, such as the coronary arteries, where there is no long-running main trunk with side branches like a fish bone, you may want to try the 3rd module. Here you give n fiducial points and the algorithm traces back from each point (from the 2nd to the n-th) to the 1st point, independently.
Once you have the centerline/curve traced out, you will want to extract the lumen from the center curve; that is what the 4th module does. Essentially it runs a “grow from seeds” that grows out from the center curve.
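Assuming the “grow out” step is essentially an intensity-based region growing seeded with every centerline voxel, a minimal sketch could look like the following. The intensity-tolerance rule is my assumption for illustration; the real SegmentLumenFromAxis algorithm may differ:

```python
from collections import deque
import numpy as np

def grow_from_curve(image, curve_voxels, tol=20.0):
    """Flood-fill outward from centerline voxels, accepting intensities within `tol`."""
    mask = np.zeros(image.shape, dtype=bool)
    ref = np.mean([image[v] for v in curve_voxels])  # reference lumen intensity
    queue = deque(curve_voxels)
    for v in curve_voxels:
        mask[v] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < image.shape[i] for i in range(3)) and not mask[n]:
                if abs(image[n] - ref) <= tol:
                    mask[n] = True
                    queue.append(n)
    return mask

# Toy duct: a 3x3 bright tube around the axis y=5, x=5.
img = np.zeros((12, 12, 12))
img[:, 4:7, 4:7] = 100.0
curve = [(z, 5, 5) for z in range(12)]
lumen = grow_from_curve(img, curve, tol=20.0)
print(lumen.sum())  # 108 voxels: the full 12*3*3 tube, background excluded
```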
In VMTK, the lumen is first segmented using level sets. This requires the image to have good contrast and the vessel a relatively large diameter so the contour/surface has room to grow. The center curve is then computed from the lumen.
Here the order is reversed; in my experience it is sometimes easier to find the center curve first and then the lumen.
Vessel segmentation is still a challenge. You are proposing an interesting approach.
In your third proposed bullet item: How do you handle something like the pulmonary veins and arteries?
Thank you very much! Yes, pulmonary veins and arteries are tough, but not in the tracing step; they are challenging in the vesselness characterization step. My understanding is that Hessian-based vesselness computation (such as the Frangi filter) relies on the implicit assumption that the diameter of the vessel is relatively constant, but that is not the case for the pulmonary vessels, whose diameters change dramatically over a very short distance.
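A minimal 2D sketch of Frangi-style vesselness illustrates this scale sensitivity: a single Hessian scale sigma responds strongly to a vessel of matching width, and barely at all inside a much wider structure. The parameters and toy image below are made up for the example:

```python
import numpy as np
from scipy import ndimage

def vesselness2d(image, sigma, beta=0.5, c=15.0):
    """Frangi-style response for bright 2D ridges at a single Hessian scale."""
    # Scale-normalized second derivatives (Hessian entries).
    Hxx = sigma**2 * ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Hyy = sigma**2 * ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Hxy = sigma**2 * ndimage.gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian, sorted so |l1| <= |l2|.
    tmp = np.sqrt((Hxx - Hyy)**2 + 4 * Hxy**2)
    l1 = (Hxx + Hyy + tmp) / 2
    l2 = (Hxx + Hyy - tmp) / 2
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb2 = (l1 / (l2 + 1e-10))**2   # blobness ratio
    s2 = l1**2 + l2**2             # structureness
    v = np.exp(-rb2 / (2 * beta**2)) * (1 - np.exp(-s2 / (2 * c**2)))
    v[l2 > 0] = 0.0                # bright ridges require l2 < 0
    return v

img = np.zeros((64, 64))
img[30:33, :] = 100.0   # narrow vessel (width 3)
img[10:25, :] = 100.0   # much wider structure (width 15)
v = vesselness2d(img, sigma=1.5)
# sigma=1.5 scores the narrow vessel highly, but the wide structure's
# interior looks flat at that scale and gets almost no response.
print(v[31, 32] > 5 * v[17, 32])
```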
We are facing this difficulty because we are currently working on a pulmonary embolism detection and measurement project, and the doctors want to extract the vessels first. But I took the liberty of postponing that and detecting the embolism regions first.
The current form of this extension wouldn’t handle pulmonary vessels well. We are trying to combine deep learning segmentation with a shape prior to see if that helps… Now that I know you are interested in this, I’ll definitely keep you updated and ask for your advice on this project.
Since this extension contains several components that may potentially interact with several different parts of Slicer (tube segmentation in the Segment Editor, VMTK, Endoscopy), the path and integration approach for each may not be very clear right now. How about we first integrate these as a SlicerVesselAnalysisAndVirtualEndoscopy extension? Once people are using them (hopefully), we can gather more information and hands-on experience on how to integrate them with the other modules.
Sounds good. Vessels have been one of my interests for a long time, but they are still a challenge.
Kikinis R, Jolesz FA, Gerig G, Sandor T, Cline HE, Lorensen WE, Halle M, Benton SA. 3D morphometric and morphologic information derived from clinical brain MR images. In: Proceedings of the NATO Advanced Workshop; June 1990; Travemuende, Germany. NATO ASI Series. 1990. p. 441-54.