Grow from seeds took more than 2 hours

I work with high-resolution micro-CT scans of fossils, obtained with Skyscan 1272 and 1273 micro-CTs. Since the data comes as stacks of .png images, I use the SlicerMorph extension to load them. Because of the condition of the fossils and the large number of elements I need to segment for later reconstruction in other software, I mainly use the “Grow from seeds” tool. It gives very good results and saves me a lot of time.

Performance has become extremely slow, even though I upgraded my setup. Normally, the scans I work with range from about 200 MB up to ~3.5 GB. At first, the initial segments only took about 2 minutes, but now the last one I did took more than 2 hours just for “Grow from seeds” to finish.

During those hours, I noticed that RAM usage went up to a maximum of ~55 GB. I also monitored CPU activity with Process Lasso and saw that cores 0, 9, and 11 had the highest load (around 20% each on average), while the others (1–8, 10, 12–15) stayed mostly at 0–1%. At one point, core 0 reached ~45%, while cores 9 and 11 peaked at 55% and 63%, respectively.

Dataset information
I opened the dataset by loading the .mrml scene in Slicer. The main volume is stored as an .nrrd file, with a size of 2.7 GB. Below are the relevant details:

  • File type: NRRD

  • Dimensions: 1972 × 1084 × 1313

  • Voxel spacing: 0.011 mm (isotropic, 11 μm per voxel)

  • Voxel type: unsigned char (1 component)

  • Volume bounds: X: 0–1971, Y: 0–1083, Z: 0–1312

  • Resolution: ~11 μm per voxel

  • Total size (on disk): 2.7 GB

Hardware setup

  • CPU: Ryzen 9 9950X3D (Zen 5)

  • RAM: 64 GB DDR5 @ 6000 MHz (EXPO enabled in BIOS)

  • Storage: NVMe SSD

  • OS: Windows 11 Pro

  • All cores, threads, and SMT are enabled.

What I’ve tried

  • Cropping: not possible, since there is no margin left in the region of interest.

  • Resampling to lower resolution: not acceptable, since I lose what little definition and contrast there is between bone structures.

  • Import with “half resolution”: sometimes reduces size, but too much detail is lost.

  • Alternative segmentation tools in Slicer: none of them have worked well for my case.

Is there any way to significantly improve the processing time of “Grow from seeds” for datasets of this size, or could I be doing something wrong in my workflow?

I sincerely appreciate your time and your responses.

Jared Amudeo P.

I don’t think you’re doing anything wrong in your workflow. Please note that your image is around 50-80 times larger than a normal CT (in terms of number of voxels). Also, from your description it seems to me that you are using most of the extent of the image for grow from seeds, which usually also does not occur with normal medical images (we segment a smaller part). So putting these two factors together, the two hours unfortunately seems understandable to me.
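
To put rough numbers on that (the “typical” CT size here is my own assumption): 1972 × 1084 × 1313 ≈ 2.8 × 10⁹ voxels, while a typical clinical CT of around 512 × 512 × 200 voxels is about 5 × 10⁷, so the ratio is on the order of 50×, and grow from seeds also has to keep its working buffers for that whole extent in memory.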

We started moving some Segment Editor effects to the GPU several years ago, but that effort died away because the direct motivation ceased to exist (the work with fractional labelmaps, as I remember). If there is funding, I’m sure it is possible to improve Segment Editor effects (even one by one) to use the GPU.

I think others will chime in with their points of view as well.

I think the best approach will likely be to crop your image volume into several smaller pieces, run grow from seeds on each one, then merge them back together. This may cause issues near block boundaries, but I think this is probably still your best bet, and you may be able to alleviate boundary issues by re-running grow from seeds near these regions (I can imagine a couple of possibilities for how this could work, but am not sure which would work best in practice). Grow from seeds is fundamentally a local operation, so cropping your image into blocks and running a quick grow from seeds on each block should basically work. If your seed regions are so sparse that blocks might be missing a seed that they should have, then even the full-image grow from seeds result probably won’t work very well.

You are not doing anything wrong. These are your main options if you want to get results faster. You can do the segmentation on a half-resolution version of your data and get your grow from seeds results fast (with the understanding that it may miss small detail). To fix the issues, you can then resample your segmentation to the original resolution, import the full-resolution image stack, and fine-tune your segmentation.

Alternatively, as suggested, split your full-resolution data into 4 smaller volumes via Crop Volume (make sure each subvolume overlaps a bit, maybe 40–50 voxels), then run grow from seeds on each one independently and merge the segment labels.
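
If you prefer to script the cropping step instead of doing it in the GUI, a minimal sketch could look like the following (the volume name, ROI center/size, and the ~0.5 mm of overlap are illustrative assumptions; adjust them per tooth):

```python
import slicer

# Assumption: the full-resolution scan is loaded as a volume named "FossilCT".
inputVolume = slicer.util.getNode("FossilCT")

# Define a region of interest around one tooth (values in mm, RAS coordinates;
# add ~0.5 mm on each side, i.e. roughly 40-50 voxels at 11 um, for overlap).
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode", "Molar1ROI")
roiNode.SetCenter(10.0, 5.0, 7.0)
roiNode.SetSize(6.0, 6.0, 8.0)

# Crop voxel-based so the subvolume stays on the original voxel grid
# (no interpolation), which keeps everything aligned for merging later.
cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(inputVolume.GetID())
cropParams.SetROINodeID(roiNode.GetID())
cropParams.SetVoxelBased(True)
slicer.modules.cropvolume.logic().Apply(cropParams)

croppedVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
print("Cropped subvolume:", croppedVolume.GetName())
```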

Thank you all for your responses. I have a question about that methodology. For example, the dataset I am currently working on is a fragment of a dentary with 4 molars. Could I then crop those 4 teeth, segment them using GFS in the sub-volumes, and then copy the finished segments to the segmentation of the main volume?

And how can I resample the segmentation to the original resolution, just by importing the image stack?

Yes, as long as you crop them from the original volume, everything should line up after merging.
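
If you end up doing the merge from the Python console, a rough sketch of the copy step could look like this (the segmentation and segment names are made-up placeholders):

```python
import slicer

# Assumption: "MainSegmentation" is defined on the full volume and
# "SubvolumeSegmentation" holds the grow-from-seeds result for one tooth.
mainSegNode = slicer.util.getNode("MainSegmentation")
subSegNode = slicer.util.getNode("SubvolumeSegmentation")

# Copy the finished segment into the main segmentation; since the subvolume was
# cropped voxel-based from the original, the geometry should line up.
segmentId = subSegNode.GetSegmentation().GetSegmentIdBySegmentName("Molar1")
mainSegNode.GetSegmentation().CopySegmentFromSegmentation(
    subSegNode.GetSegmentation(), segmentId)
```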

You should use the oversampling feature of the Specify Segment Geometry option of the Segment Editor (see Segment editor — 3D Slicer documentation).

(i.e., if you have done the segmentation in a half-resolution volume and transfer it to the full-resolution version, you need to oversample the existing segmentation by 2)
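
If it helps, the same oversampling can also be done from the Python console; this is a minimal sketch based on the script-repository style of calling the segmentation geometry logic (the node name is a placeholder, and you may need to adapt it to your Slicer version):

```python
import slicer

# Assumption: "Segmentation" is the segmentation created on the half-resolution volume.
segmentationNode = slicer.util.getNode("Segmentation")

# Oversample the segmentation's labelmap geometry by 2 (half-res -> full-res)
# and resample the existing segments onto the new, finer grid.
geometryLogic = slicer.vtkSlicerSegmentationGeometryLogic()
geometryLogic.SetInputSegmentationNode(segmentationNode)
geometryLogic.SetSourceGeometryNode(segmentationNode)
geometryLogic.SetOversamplingFactor(2.0)
geometryLogic.CalculateOutputGeometry()
geometryLogic.ResampleLabelmapsInSegmentationNode()
```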

@JaredAmudeo if you have a CUDA-enabled GPU (or you want to borrow one using MorphoCloud) you may want to give this experiment a try:

The python file in the PR is just a drop-in replacement for the corresponding file in a Slicer distribution.
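
If it’s useful, you can find the folder where that file lives by running this in the Slicer Python console (assuming the effect scripts are in the SegmentEditorEffects package, which is where they ship in current distributions):

```python
import os
import SegmentEditorEffects

# Prints the directory containing the Segment Editor effect scripts,
# i.e. where the PR's .py file would be dropped in as a replacement.
print(os.path.dirname(SegmentEditorEffects.__file__))
```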


Definitely! I have an RTX 5090, so it should go well.

I’m out of town for a week, but once I’m back I’ll give it a try and see how it goes. Thank you so much!

Hello, good day! I just tried this method and it’s wonderful. I haven’t quantified it on multiple datasets yet, but in the one I’m currently working on, it reduced the initialization time to just a few seconds. I only have one issue, which occurs when updating: whether the auto-update option is enabled or disabled, after correcting the seeds and seeing the loading icon, the updates don’t apply. When I click Apply, the previous results remain. How could I fix this? I’m using Slicer 5.8.1. Thank you very much in advance!

Image 1: before correcting seeds

Image 2: after painting over the seeds

Image 3: after apply

Check your masking settings. Sometimes a change isn’t made because the masking settings prevent the change you are trying to make from taking effect.

It also seems a little odd that it looks like your segments overlap in image 2. If a voxel is marked as a seed for two different segments, only one of those will win (I think it’s the one lower on the segment list, but I’m not totally certain about that).

Lastly, if it’s not either of those, perhaps make sure you update the preview segmentation before applying. If you have modified the seeds, but not updated the preview, it’s possible that clicking the “Apply” button just transfers the existing preview segmentation to the real segmentation without updating it first.

@mikebind thanks for the suggestions. To clarify, though, I believe @JaredAmudeo was referring to the experimental GPU version linked above, which doesn’t support masking operations. And, as @lassoan reported in the PR thread, the update doesn’t work yet either.

So the GPU GrowCut still needs work for sure, but from @JaredAmudeo’s enthusiasm it sounds like it’s still a valid direction to pursue.

Andras and I discussed this at a Slicer dev meeting (Tuesdays at 10 Eastern, open to anyone who wants to join) and were awaiting feedback to think about priorities. We have both been very impressed with nnInteractive, and wondered whether putting more effort into Grow From Seeds vs. simplifying the process of running nnInteractive servers is a better investment. It’s possible we need both.

Jared, did you try nnInteractive on your data?

@muratmaga do you have any feedback from the SlicerMorph community on what tools are working best for them when segmenting high res scans?


Thank you very much for your reply. Sure, the issue I’m having is with the experimental version using the GPU.

I’m asking out of ignorance: to fix that, would I need to work directly with the .py file, or is it something much more complex?

Regarding nnInteractive, I tried using it once with an RTX 4070 Ti Super, but I could never get it to work. I would like to try it again, but my question is: would it be similar to Grow from seeds in its experimental GPU version, faster, or slower?

Thanks for the extra info @JaredAmudeo.

Fixing the update mode, and even adding the masking, should all be doable with the GPU version just by tweaking the Python file, and it probably can be done with a chatbot. Since the basic GPU algorithm is working, I think the missing features would just be a matter of a few lines of Python.

For context, I wrote the GPU GrowCut in OpenCL about 10 years ago and integrated it with the older Editor module, but never bothered to port it over. But your post made me look at it again, since it’s a simple algorithm and I have been playing around with Warp for another project. Using Gemini, the port from OpenCL to Warp worked on the very first try, so I used Gemini to get the current draft PR done in about an hour or two.
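
For anyone curious what the Warp version looks like conceptually, here is a minimal sketch of a single GrowCut update pass (this is not the PR’s code; the kernel body, buffer layout, and the assumption that intensities are normalized to [0, 1] are all illustrative):

```python
import numpy as np
import warp as wp

wp.init()

@wp.kernel
def growcut_step(image: wp.array3d(dtype=wp.float32),
                 labels_in: wp.array3d(dtype=wp.int32),
                 strength_in: wp.array3d(dtype=wp.float32),
                 labels_out: wp.array3d(dtype=wp.int32),
                 strength_out: wp.array3d(dtype=wp.float32),
                 nx: int, ny: int, nz: int):
    i, j, k = wp.tid()
    best_label = labels_in[i, j, k]
    best_strength = strength_in[i, j, k]
    here = image[i, j, k]
    # A 6-connected neighbor "conquers" this voxel if its strength, attenuated
    # by the intensity difference, exceeds the voxel's current strength.
    for n in range(6):
        di = 0
        dj = 0
        dk = 0
        if n == 0:
            di = -1
        elif n == 1:
            di = 1
        elif n == 2:
            dj = -1
        elif n == 3:
            dj = 1
        elif n == 4:
            dk = -1
        else:
            dk = 1
        ii = i + di
        jj = j + dj
        kk = k + dk
        if ii >= 0 and ii < nx and jj >= 0 and jj < ny and kk >= 0 and kk < nz:
            attack = strength_in[ii, jj, kk] * (1.0 - wp.abs(here - image[ii, jj, kk]))
            if attack > best_strength:
                best_strength = attack
                best_label = labels_in[ii, jj, kk]
    labels_out[i, j, k] = best_label
    strength_out[i, j, k] = best_strength

# Example launch of one iteration on a tiny random volume; in practice the pass
# is repeated (ping-ponging the in/out buffers) until the labels stop changing.
nx, ny, nz = 64, 64, 64
img = wp.array(np.random.rand(nx, ny, nz).astype(np.float32), dtype=wp.float32)
lab_a = wp.zeros((nx, ny, nz), dtype=wp.int32)
str_a = wp.zeros((nx, ny, nz), dtype=wp.float32)
lab_b = wp.zeros((nx, ny, nz), dtype=wp.int32)
str_b = wp.zeros((nx, ny, nz), dtype=wp.float32)
wp.launch(growcut_step, dim=(nx, ny, nz),
          inputs=[img, lab_a, str_a, lab_b, str_b, nx, ny, nz])
```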

As for nnInteractive, it’s a totally different approach. It may not actually work on large volumes but I’m sure that depends on many factors like GPU memory etc. I know I saw it fail on a 10GB card but then work on a 20GB card for a clinical CT so I imagine that a very high res scan would need significant GPU memory.

I’ve tried doing it with the help of an AI for the code, but I haven’t been able to get it to work, haha. What I did was download the entire GrowCutCL folder and add it to 3D Slicer as an additional module path. I’m not sure if that has anything to do with it.

Just to understand: could I achieve the port with AI assistance? Sorry for my ignorance; I know very little about this topic.

You wouldn’t need to use the GrowCutCL code, since the core algorithm has already been ported. The part that is very different between the old Editor version and the current Segment Editor version is the way the incremental updates are handled, so that’s the part that really needs to be re-implemented the way the current GrowFromSeeds effect is implemented. Also, the old CL version didn’t have the masking option at all, so that needs to be added.

I’m not sure how easy it will be to get the AI to implement these things. It may also depend on which AI is used. Some people have been saying that the latest Claude Code is really good at many tasks. Perhaps by checking out the branch for the pull request and also pasting in this Discourse thread, it would know what to do. Or maybe you need to break it down step by step.