Best strategy to integrate airway segmentation into LungCTSegmenter?

I am planning to integrate an airway segmentation into the LungCTSegmenter extension.
Would it make sense to invest time in connecting the current CIP airway segmentation module, or would you recommend rewriting it from scratch using the recent Segment Editor features?

I could call the current CIP airway segmentation from the LungCTSegmenter, transform the generated labelmap into a segmentation, and display it in one go. The airway segmentation results are quite good; the procedure itself is, however, slow. The process is not very interactive, but we could make use of the one trachea markup that I already require from the users and go from there. As a con, one could not easily correct for leaks etc.

Any advice or ideas are welcome. Thanks.

Is it specifying the inputs that takes time, or the automatic computation? How long does each take?

How does the segmentation result quality compare with what you can get with Grow from seeds, Local threshold, Fast marching, Flood filling, and Watershed?

Specifying the inputs is the easy part, it just needs one seed in the trachea, which I usually already have.

A CIP airway segmentation takes around 140 s to complete.
The results are often acceptable, but sometimes important bronchi are just missed (such as the right B2 in this case from the public COV dataset).

With kernel type B70f I get the best results concerning leaks.
It would be important to have an option to place additional seeds in bronchi that are missed.

> How does the segmentation result quality compare with what you can get with Grow from seeds, Local threshold, Fast marching, Flood filling, and Watershed?

The results are only slightly better and require more manual input.

A closer look at the missed bronchi:

The HU values of parenchyma and bronchi in the above CT are almost identical, and this kind of segmentation is certainly a difficult task. However, detailed subsegmental bronchi, as shown in this older video,

can hardly be obtained with the current CIP and the three kernel options - as long as I am not missing something here. That level of detail would at least be sufficient for clinical use. The video also shows that an earlier version had a direct 3D display of the created labelmap, which is a nice feature.

I’ve checked the source code of Segment Lung Airways module and it is shockingly complicated. This is the kind of algorithm that is developed with a lot of manual effort by adding more and more rules to handle all kinds of situations that are encountered during testing on various data sets. It should provide better results than simple generic methods (such as Grow from seeds, Fast marching, etc.), but as more and more rules are added, the more complicated and slower the algorithm becomes.

In theory, you could go through the code, understand it, and improve it, for example by adding more rules or injecting more user inputs (e.g., seed points). However, you are probably much better off training a neural network for this, because it is a relatively simple problem and you have tons of images to train on. To generate ground truth segmentations, you can use the current airway segmentation module, then review and fix all the errors manually. Once you have segmented a few dozen data sets, you may be able to use MONAILabel to do the rest.

If you are not ready to jump into deep learning, and/or the performance of the current airway segmenter module is sufficient, then you can simply add an option in your lung segmenter module to call this module. If ChestImagingPlatform is not installed, you can show a popup asking the user to confirm installation of the extension (see how to install an extension from Python scripting here).
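For reference, a minimal sketch of that confirm-and-install flow, assuming a running Slicer session and the Slicer 5.x extensions manager API (the dialog wording and the `ensureCIPInstalled` function name are my own; adjust method names for older Slicer versions):

```python
# Sketch (only runs inside a Slicer session): prompt for and install the
# ChestImagingPlatform extension if it is missing.
import slicer

def ensureCIPInstalled():
    emm = slicer.app.extensionsManagerModel()
    if emm.isExtensionInstalled("ChestImagingPlatform"):
        return True
    # Ask the user before downloading anything
    if not slicer.util.confirmOkCancelDisplay(
            "Airway segmentation requires the ChestImagingPlatform extension. Install it now?"):
        return False
    if not emm.downloadAndInstallExtensionByName("ChestImagingPlatform"):
        slicer.util.errorDisplay("Failed to install ChestImagingPlatform.")
        return False
    # Newly installed extensions are loaded after an application restart
    if slicer.util.confirmOkCancelDisplay("Restart the application to load the extension?"):
        slicer.util.restart()
    return True
```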

Thanks for sharing the link to the sources, just did not find it.
I will probably implement an option to use the CIP airway segmenter first, try to understand workflow and parameters, but will have MONAILabel on the screen, too.


If I use MONAILabel here, will users have to start its server or install anything complicated?
I want to make this as user-friendly as possible.

The MONAILabel approach would probably have the benefit that it could be combined with the right and left lung detection.

MONAILabel is currently optimized for people who want to train their own models. It requires a few setup steps (although these could be automated in a Python scripted module) and, depending on the model, it may require the user to have a good GPU.

If you want to make things as simple for users as possible, without requiring a GPU or any setup, then you can upload the model to an NVidia Clara server (such as the public Slicer segmentation server) and let users do the segmentation using the NVidia AIAA segment editor effect.

As a first step, I implemented the CIP airway segmenter CLI call into the LungCTSegmenter, which works great.
This is nearly ready. Before I can commit, I need to know how to switch off the visibility of a labelmap in a Python script, and how to switch the visibility of a CT volume on.
I know how to do this for segmentations using the display node and Visibility2DOn(), but the day is becoming very unproductive searching the net for how to do the same with a labelmap or a CT volume. Probably @lassoan you know it by heart.


scalarVolumeDisplayNode = self.labelmapVolumeNode.GetScalarVolumeDisplayNode()

does not work. The problem is not related to airway segmentation, where I delete the created labelmap after conversion into a segmentation, but to the Parenchyma Analysis labelmap which the segmenter builds.

It is the last detail I need to implement, then I would commit and push the segmenter.

I will evaluate AI airway segmentation as well. In all the MONAILabel examples I have seen so far, the segmentations looked somewhat bulky and the vessels very tubular, but I would need to test all this before I can really judge.

Test result, lung mask generation and airway segmentation combined (1 start click, 13 fiducial clicks, 1 checkbox click, 1 apply click; 203 s processing on a gaming laptop; CT from the open-source COV dataset):

Labelmaps can only be displayed in 3D using volume rendering, which has many limitations (this was one of the main motivations for developing the segmentation infrastructure). You can only render a single labelmap layer in a view, and you cannot edit or touch up labelmaps directly. I could list many more limitations of labelmaps compared to segmentations, but even these should be enough to justify using segmentations instead.

I would recommend importing the computed labelmap into a segmentation node and deleting the labelmap node.
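A minimal sketch of that import-then-delete step, assuming a Slicer session where `labelmapVolumeNode` holds the CLI output (the variable name is illustrative):

```python
# Sketch (Slicer session assumed): convert a labelmap into a segmentation,
# enable its 3D display, and remove the now-redundant labelmap node.
import slicer

segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
slicer.modules.segmentations.logic().ImportLabelmapToSegmentationNode(
    labelmapVolumeNode, segmentationNode)  # labelmapVolumeNode: CLI output
segmentationNode.CreateClosedSurfaceRepresentation()  # needed for 3D views
slicer.mrmlScene.RemoveNode(labelmapVolumeNode)  # labelmap no longer needed
```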

Done that, it is what you see in the example above, the problem is not related to that.

It is related to the labelmap I still generate for Parenchyma Analysis: I just want to switch that labelmap display off in 2D and 3D, and the CT volume display back on.
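For the record, a sketch of one way to do exactly that, assuming a Slicer session where `ctVolumeNode` is the loaded CT (variable name is my own):

```python
# Sketch (Slicer session assumed): clear the labelmap layer in all slice
# views and show the CT volume as the background layer again.
import slicer

slicer.util.setSliceViewerLayers(background=ctVolumeNode, label=None)
```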

Haven’t you implemented segmentation support for Parenchyma Analysis, so that it takes a segmentation directly? Then Parenchyma Analysis can delete the temporary labelmap volume after the analysis is completed.

Yes, but that PR is still open. Still, this is a good idea. Shall I just assume it will be merged and remove the labelmap generation in the segmenter? I will also try to remove the temp labelmap in Parenchyma Analysis. I tried that before and an error message came up; I will need to analyze that.


We can always switch to a temporarily forked version of the CIP extension to get your changes into Slicer quickly, and then switch back to the official version once your changes are merged.


I committed the CLI call to the CIP airway segmentation, removed the CIP Parenchyma Analysis labelmap generation as you suggested, incremented the version to 2.45, and pushed this to LungCTAnalyzer. The LCTA wiki will be updated accordingly.
