Integration of “Segment Anything Model”?

Hi all,

Wondering if there’s a way to get the new open-source Segment Anything Model that Meta AI released recently integrated into the AIAA server?

The model seems incredibly powerful and integration with Slicer seems like it would be a godsend to those of us in the natural history space utilizing Slicer for things beyond human/mouse scans.

It should be pretty easy to integrate, but does anyone know if it would work well on 3D data? I’d be afraid that each slice would segment differently and you’d end up with jagged segmentations in 3D.

1 Like

Maybe?

My thought was that I could pretty easily import an image of a single slice into the model, and using the segment-from-prompt functionality I was able to mask out the outlines of entire organs trivially.

You’re right that, as currently implemented, the automatic mask generator in SAM doesn’t really have a way to ensure consistency of a given segment across multiple slices in the image stack, so I doubt it would function as an unsupervised tool right out of the gate.

But my concept would be to use the prompt-based segmentation as part of a semi-supervised workflow. Once an adequate number of semi-supervised, human-expert-verified segmentations have been generated, that data can be used to build taxon-specific training sets to further refine the model.

The prompt for a segmentation can be a mask. For 3D images it would be reasonable to have the user segment one slice, then use that slice’s segmentation as a prompt to segment the adjacent slices, and so on recursively. Some special logic would be needed to handle multiple user-segmented slices, which would be necessary if the segmentation initialized from a single slice is not satisfactory.
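A rough sketch of that propagation idea, using Meta’s `segment_anything` package (the checkpoint is the published ViT-B weights; the volume array, starting mask, and low-resolution logits are assumed to come from a user-verified prediction on one slice):

```python
# Sketch: propagate a user-approved segmentation slice by slice with SAM,
# feeding each slice's low-resolution mask logits (plus a point at the
# previous mask's centroid) back in as the prompt for the next slice.
# Assumes `volume` is a (num_slices, H, W) uint8 array; `start_mask` and
# `start_logits` come from a user-verified prediction on slice `start_index`.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def propagate(volume, start_index, start_mask, start_logits):
    result = np.zeros(volume.shape, dtype=bool)
    result[start_index] = start_mask
    prev_mask, prev_logits = start_mask, start_logits
    for i in range(start_index + 1, volume.shape[0]):
        predictor.set_image(np.stack([volume[i]] * 3, axis=-1))  # SAM wants RGB
        ys, xs = np.nonzero(prev_mask)
        if len(xs) == 0:
            break  # structure ended; stop propagating
        centroid = np.array([[xs.mean(), ys.mean()]])  # (x, y) pixel coords
        masks, scores, logits = predictor.predict(
            point_coords=centroid,
            point_labels=np.array([1]),   # 1 = foreground
            mask_input=prev_logits,       # low-res logits, shape (1, 256, 256)
            multimask_output=False,
        )
        prev_mask, prev_logits = masks[0], logits
        result[i] = masks[0]
    return result
```

The same loop would be run downward from the user-segmented slice as well, and the special logic mentioned above would decide how to merge propagations from multiple user-segmented slices.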

I tried it, and it works well on dental X-rays and slices from CT and MR images.


Its automatic segmentation:

Hovering with the mouse segments vertebral bodies and disks well.

7 Likes

Could it use logic similar to what the “Fill between slices” tool uses to handle multiple slices?

1 Like

At the minimum, it would be a great seeding tool for Fill between slices.
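If it helps, here is a minimal sketch (for the Slicer Python console) of driving the Fill between slices effect programmatically once SAM-generated labels exist on a few slices. The node names are placeholders, and on Slicer versions before 5.2 the source-volume setter is named `setMasterVolumeNode`:

```python
# Sketch: run "Fill between slices" on a segmentation that already contains
# SAM-generated seed labels on a handful of slices.
# "Segmentation" and "CT_volume" are placeholder node names.
segmentationNode = slicer.util.getNode("Segmentation")
volumeNode = slicer.util.getNode("CT_volume")

segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)  # setMasterVolumeNode on older Slicer

segmentEditorWidget.setActiveEffectByName("Fill between slices")
effect = segmentEditorWidget.activeEffect()
effect.self().onPreview()  # compute the interpolation between labeled slices
effect.self().onApply()    # write the interpolated result into the segmentation
```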

1 Like

Exactly my thoughts.

I can only use it via snapshots from the Screen Capture module, and while playing with window/level settings it seems to pick up different features. These are all the same slice. I couldn’t figure out how to do multiple masks manually; these are all from automatic parsing of the images.
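For reference, a rough sketch of exporting a raw slice as an 8-bit RGB PNG at a chosen window/level before feeding it to SAM, instead of taking screen captures (assumes NumPy and Pillow; the window/level values and file name are just illustrative):

```python
# Sketch: convert one slice of raw intensities to an 8-bit RGB PNG for SAM,
# applying a window/level. Values and file name are illustrative only.
import numpy as np
from PIL import Image

def slice_to_png(slice_values, window=400.0, level=50.0, filename="slice.png"):
    lo, hi = level - window / 2.0, level + window / 2.0
    scaled = np.clip((slice_values - lo) / (hi - lo), 0.0, 1.0) * 255.0
    rgb = np.stack([scaled.astype(np.uint8)] * 3, axis=-1)  # SAM expects RGB
    Image.fromarray(rgb).save(filename)
```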

That’s about where I am with it, running it through Google Colab and just seeing how the model behaves with cephalopod contrast-enhanced CT images.

Unfortunately, I’m not in a position to host my own server to implement the AIAA backend, and rather than developing an extension around SAM, it seems like it would be easier to use the existing AIAA UI scheme for marking up individual segments to use as seeds for “Fill between slices” (although I admit this may be a naive notion).

“Segment anything” works considerably better than the “Remove background” function in PowerPoint, and it is nice that it is made easily accessible in a web application and that the training data is published as well. But I don’t really get the excitement about using it for medical images.

To me, its performance seems comparable to 20-year-old classic watershed segmentation, but watershed can be used in 3D as well. See a simple comparison on an easy segmentation problem:

SegmentAnything is simpler and more convenient in that you don’t need to switch between effects, segments, etc. But even with the inconvenience of switching between effects and segments, the overall task completion time is comparable. Updates take a bit longer in Slicer, as in Slicer we update a 3D segmentation consisting of 9 slices, while SegmentAnything segments only 1 slice.
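For reference, marker-based watershed really does run directly on the 3D volume. A rough SimpleITK sketch of that idea (not the exact effect used in the comparison above; the file names and smoothing sigma are illustrative):

```python
# Sketch: classic marker-based watershed on a 3D volume with SimpleITK.
# "ct.nrrd" and "seeds.nrrd" are placeholder file names; the seeds are a
# label image with one label per structure plus a background label.
import SimpleITK as sitk

image = sitk.ReadImage("ct.nrrd", sitk.sitkFloat32)
markers = sitk.ReadImage("seeds.nrrd", sitk.sitkUInt8)

# Watershed floods the gradient-magnitude image starting from the markers.
gradient = sitk.GradientMagnitudeRecursiveGaussian(image, sigma=1.0)
labels = sitk.MorphologicalWatershedFromMarkers(gradient, markers,
                                                markWatershedLine=False)
sitk.WriteImage(labels, "watershed_labels.nrrd")
```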

MONAILabel also has similar interactive neural-network-based tools (DeepEdit, etc.) that are supposed to work much better, because they are trained on medical images and some of them can also work in 3D.

For me, the main conclusion is that we need to make Slicer’s segmentation tools simpler to use and make more people aware that they exist.

7 Likes

Perhaps I’m misguided in thinking that models trained on human and mouse data are not going to work well for segmenting unconventional organisms.

I work on cephalopods and other mollusks (ex vivo CEμCT) and have found Flood filling and Grow from seeds to be inconsistent and to need a lot of correction, but I’m fully willing to concede that it could just be that I don’t know how to best use those tools.

That would be wonderful. However, the need for manual segmentation in non-medical 3D datasets is high mostly because there is often really no intensity difference between structures (e.g., cranial bones of the skull), where semi-automated tools like watershed or Grow from seeds are of little use, or because scans are poorly calibrated or reconstructed. So any possibility of reducing manual segmentation (which is extremely costly, since non-medical CTs are much bigger in data size than clinical images) is where the appeal is.

When I played with the window/level in that contrast-enhanced scan, I did obtain more uniform-looking boundaries from SAM than I often get from watershed or Grow from seeds. I don’t know how well this would translate to 3D. But if it helps me cut down the number of slices I have to mark by, say, 50% (and I do the rest via interpolation using Fill between slices), that’s still a huge time gain.

I wasn’t able to successfully use DeepEdit in MONAILabel for any new non-clinical dataset. Another issue we tend to have with 3D scans of natural history specimens is that there are only a handful of samples of each class/species (if that many), so leveraging deep-learning approaches in this context (beyond mouse and zebrafish) has been limited (too many classes, very little per-class representation).

1 Like

I think we should be able to do much better than SAM, but if you want to explore its usability in Slicer then you can try this Slicer extension that has just been created:

Hopefully @SachidanandAlle and @diazandr3s will have an answer to this soon.

1 Like

Hi all! I’m the founder of RedBrick AI and wanted to share my thoughts on SAM.

Our experimentation with SAM showed that it was VERY effective in most medical imaging segmentation tasks. SAM cannot automatically segment the whole image without prompting. However, with simple key point/bounding box prompts, it’s the fastest interactive segmentation algorithm we have experienced.
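For anyone curious, this is roughly what that prompting looks like with Meta’s `segment_anything` package (a sketch; the checkpoint, the `slice.png` file, and the coordinates are illustrative, not from an actual experiment):

```python
# Sketch: interactive SAM prompting with one foreground click plus an
# optional bounding box. File name and coordinates are illustrative only.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image_rgb = np.array(Image.open("slice.png").convert("RGB"))
predictor.set_image(image_rgb)

masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 180]]),  # one click inside the structure (x, y)
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    box=np.array([200, 120, 320, 260]),   # optional XYXY bounding box
    multimask_output=True,                # return several candidate masks
)
best_mask = masks[np.argmax(scores)]      # pick the highest-scoring candidate
```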

We’ve experimented with many traditional segmentation algorithms, like Watershed, Otsu, Level tracing, etc., and so far have found SAM to be the most effective and most ergonomic.

See this comparison video of SAM vs. Watershed segmentation on a spine MRI. Apologies in advance if I’m not using Watershed correctly, but I tried this on several modalities/objects and continue to be extremely excited by SAM’s performance!

FYI - You can try out SAM for free on RedBrick AI F.A.S.T. ⚡️ Meta AI’s Segment Anything for Medical Imaging. | RedBrick AI Blog :slight_smile: @Dominick_Dickerson @pieper

@Shivam_Sharma it looks like you haven’t tackled the problem of propagation into 3D. Do you intend to work on that?

1 Like

Yep! It’ll be released in a few days.

I think this is clear from @Shivam_Sharma’s usage of Slicer compared to your video, @lassoan, segmenting the same things.

@Shivam_Sharma 's 10 seed points and initial result:
image
image

@lassoan 's 6 seed points and initial result:
image
image

1 Like

My experience is that SAM is quick and simple, which is good, but it only works for very easy 2D segmentation tasks. These easy tasks are mostly either already solved or, if not solved, it is because they are not clinically relevant problems. Many people feel the same way; see for example this discussion: Samyakh (Sam) Tukra on LinkedIn: Holy smokes SAM (Segment Anything Model) works even on medical images out… | 128 comments

SAM might be a good basis for creating medical image segmentation tools, but you would need improvements to segment in 3D, and you would probably also need to do some training specifically on medical images. So SAM is most likely not the best approach to start from. Instead, you could start from an inherently 3D segmentation tool, or one that has already been trained on medical images. Before investing time into extending SAM for 3D medical image segmentation, I would recommend checking out what MONAI/MONAILabel-based tools and other promising tools described in the literature can do.

2 Likes

Fair points! However, we have experimented with all of these tools (traditional and deep-learning-based) and are still impressed by SAM’s abilities on a variety of use cases :slight_smile:

1 Like

What tool are you using here in 3D Slicer?