Creating an automatic US vessel segmentation algorithm using the Python interactor

Hi all!

I would like to develop an automatic vessel segmentation algorithm in Python for the abdominal aorta.

Currently, I am stuck in a trial-and-error procedure, since I really would like to avoid any manual operator intervention such as placing seeds manually.

I tried the VTKVesselEnhancement module but did not succeed. Moreover, I could not find a suitable filter operation.

Is anyone familiar with a gradient filter that could locate the vessel in a few pixels, so that I can apply morphological operations or seed growing with those pixels as a starting point?
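For reference, the kind of pipeline I have in mind would look roughly like this with SimpleITK (which is bundled with Slicer); the file name, seed coordinates, and thresholds below are made-up placeholders:

```python
# Rough sketch: edge map + seeded region growing for a dark vessel lumen.
# File name, seed, and thresholds are placeholders, not working values.
import SimpleITK as sitk

image = sitk.ReadImage("us_slice.mhd", sitk.sitkFloat32)

# Edge map: strong response at the vessel wall, low response inside the lumen
# (e.g. dark pixels surrounded by a high-gradient ring could serve as automatic seeds)
gradient = sitk.GradientMagnitudeRecursiveGaussian(image, sigma=2.0)

# Region growing from a (hypothetical) seed pixel inside the dark lumen
seed = (120, 140)  # (x, y) in pixel coordinates
vessel_mask = sitk.ConnectedThreshold(image, seedList=[seed],
                                      lower=0, upper=40,
                                      replaceValue=1)

# Morphological closing to smooth the resulting mask
vessel_mask = sitk.Cast(vessel_mask, sitk.sitkUInt8)
vessel_mask = sitk.BinaryMorphologicalClosing(vessel_mask, [3, 3])
```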

Your help is appreciated!

Ultrasound image quality can be quite bad, with lots of noise and artifacts. In general, you cannot expect any common image processing filter (even a vessel enhancement filter) to be readily usable for detecting features in ultrasound images. Do you work with a single 2D image, a sequence of 2D images, or a 3D ultrasound volume? Can you post a few example images so that we have an idea of what kind of images you have?

Thanks for your reply! Really appreciated.
The data collected intraoperatively is comparable to the attached image. The major problem in Slicer for automatic segmentation is finding a morphological operation that is capable of detecting a single vessel-shaped structure. I tried some simple conventional operations in the following order: thresholding → Gaussian filtering → median filtering. This results in a sort of binary image.
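For reference, the conventional pipeline I tried looks roughly like this with SimpleITK; the file name and threshold values are placeholders:

```python
# Sketch of the conventional pipeline described above:
# threshold -> Gaussian smoothing -> median filtering.
import SimpleITK as sitk

image = sitk.ReadImage("us_slice.mhd", sitk.sitkFloat32)

# Rough intensity threshold to isolate the dark (anechoic) vessel lumen
binary = sitk.BinaryThreshold(image, lowerThreshold=0, upperThreshold=40,
                              insideValue=1, outsideValue=0)

# Smooth the binary mask to suppress speckle, then re-threshold
smoothed = sitk.SmoothingRecursiveGaussian(sitk.Cast(binary, sitk.sitkFloat32), sigma=2.0)
smoothed_binary = sitk.BinaryThreshold(smoothed, 0.5, 1.0, 1, 0)

# Median filter to remove remaining small speckle islands
cleaned = sitk.Median(smoothed_binary, [3, 3])
```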

The data we collected consists of both 2D images and 3D reconstructed US volumes. Currently, I'm working on the 2D US images alone.
Source of image: https://obgyn.onlinelibrary.wiley.com/doi/full/10.1002/uog.15942
[attached image: Slicer_forum_1]

You cannot extract these vessels using basic image filtering, but it seems like an easy task for deep learning. You “just” need to segment a few thousand slices and train a network.

During the upcoming Slicer Project Week #35 we’ll work with MONAI, NVidia, and PerkLab engineers and researchers (@diazandr3s @SachidanandAlle @ungi @RebeccaHisey) to make it easier to segment ultrasound image sequences (both with and without position tracking) with MONAILabel, which will be very useful for streamlining this initial manual segmentation and testing the performance of the network as you go. You are welcome to join this work during the project week, or catch up after the project week and see what you can use from what we develop.

2 Likes

As @lassoan says, there are a couple of examples using deep learning and U-Net showing that this problem can be solved. I think about 8-10 scans with about a hundred manually annotated images from each scan could be enough to achieve good accuracy, but of course the more the better. It could be done in a few days, since all the software pieces needed are already available and work in 3D Slicer. I’m happy to help if you decide to do it, either at the project week or another time.
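To give an idea, the kind of 2D U-Net I mean could be set up with MONAI roughly like this; all names and parameters below are illustrative, not the exact setup we use:

```python
# Minimal sketch of a 2D U-Net for vessel segmentation on ultrasound frames.
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

model = UNet(
    spatial_dims=2,
    in_channels=1,        # single-channel grayscale US frame
    out_channels=2,       # background / vessel
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_function = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a batch of annotated frames
# (images: [B, 1, H, W] float tensor, labels: [B, 1, H, W] integer mask)
def train_step(images, labels):
    optimizer.zero_grad()
    outputs = model(images)
    loss = loss_function(outputs, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```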

3 Likes

Happy to help with creating the MONAILabel App for this use case.

1 Like

Hi @ungi! Thanks again for your reply.

That sounds interesting! I have US series of six acquisitions that include about 300 images each. Would that be enough for training a deep learning model? We might need to use data augmentation to enlarge the available data set. Do you have a link to a tutorial or related work, so I can check the feasibility?
The project week would be a bit too late, to be honest. Therefore, your help is really welcome.
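For the augmentation, I was thinking of something roughly like this with MONAI transforms; the parameters are placeholders (for paired image/label augmentation the dictionary versions of these transforms would be used):

```python
# Sketch of data augmentation for 2D US frames (parameters are illustrative).
from monai.transforms import Compose, RandRotate, RandFlip, RandGaussianNoise, RandZoom

augmentation = Compose([
    RandRotate(range_x=0.2, prob=0.5),           # small in-plane rotations (radians)
    RandFlip(spatial_axis=1, prob=0.5),          # horizontal flips
    RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5),
    RandGaussianNoise(std=0.02, prob=0.3),       # mimic speckle-like noise
])

# augmented = augmentation(image)  # image: channel-first array, e.g. shape (1, H, W)
```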

Are your ultrasound scans recorded as Slicer sequences? If not, first we need to find a way to import them into Slicer. Could you check what file format they are saved in? And how are the files organized?

All the files are saved in the .mhd format with the corresponding .zraw files.
I already succeeded to import the files into Slicer with:
[success_I, masterVolumeNode_I] = slicer.util.loadVolume('C:/Users … .mhd', returnNode=True)

That way, your images are loaded as a single 3D volume. But you have a time-series of 2D images, so the third dimension should be interpreted as time, right? If you can share a sample scan file pair (e.g. paste a Dropbox link in this conversation), we could see if there is an option to treat the third dimension as time. If not, it should be simple to convert your 3D volume to a sequence of 2D images.
FYI, Slicer has a modified version of the mhd file format, called sequence metafile. Those can be loaded directly as sequences using this command: slicer.util.loadNodeFromFile(fullpath, 'Sequence Metafile')
But your files are probably not sequence metafiles (yet).
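If they are not, converting the loaded 3D volume into a sequence of 2D frames could look roughly like this; node names below are just examples, and the sketch assumes the volume was loaded as shown above:

```python
# Sketch: split a loaded 3D volume (slices = time frames) into a Slicer sequence.
import slicer

volumeArray = slicer.util.arrayFromVolume(masterVolumeNode_I)  # shape: (frames, rows, cols)

sequenceNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceNode", "USFrames")
tempFrameNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "TempFrame")

for frameIndex in range(volumeArray.shape[0]):
    # Keep a singleton third dimension so each frame is still a (1, rows, cols) volume
    frame = volumeArray[frameIndex:frameIndex + 1, :, :]
    slicer.util.updateVolumeFromArray(tempFrameNode, frame)
    # Copy the current frame into the sequence at index value "frameIndex"
    sequenceNode.SetDataNodeAtValue(tempFrameNode, str(frameIndex))

slicer.mrmlScene.RemoveNode(tempFrameNode)

# A sequence browser node lets you replay the frames in the Sequences module
browserNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSequenceBrowserNode", "USFrameBrowser")
browserNode.AddSynchronizedSequenceNode(sequenceNode)
```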

Hi all!

I am working with J.vd.Zee on this project and we have collected some more US data in the past few months. We have now recorded the data directly in Slicer using the Sequences module, so we have 2D US images that are transformed to the correct position over time using the tracking system information. I think I have enough data to train a neural network as a first feasibility test, and I would really appreciate your help with this!

Do you have a tutorial or manual (maybe from the project week) on how to do this using the software pieces that are already available in Slicer? Thanks a lot for your help!
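For example, I assume exporting the recorded frames for training could look roughly like this; the sequence node name is a placeholder for our own recording:

```python
# Sketch: export frames recorded with the Sequences module as a numpy array.
import numpy as np
import slicer

# The recorded image sequence node (replace with your own sequence node name)
usSequenceNode = slicer.util.getNode("Image_Image")

frames = []
for i in range(usSequenceNode.GetNumberOfDataNodes()):
    frameVolumeNode = usSequenceNode.GetNthDataNode(i)
    frames.append(np.squeeze(slicer.util.arrayFromVolume(frameVolumeNode)))

frameStack = np.stack(frames)  # shape: (numFrames, rows, cols)
np.save("us_frames.npy", frameStack)
```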

1 Like

Hi, did you finish the US vessel segmentation in the end? I have the same requirement. Please share some solutions. Thanks.

The US images are extracted from the MP4 video output by the US device.
