Hello! I need a little help understanding the ideal hardware for this feature to run quickly, or at least fast enough. On my personal computer I haven’t had any real “issues” beyond files requiring more RAM than I have; when that happens it just takes longer, which is expected since the data exceeds my available memory.
I’m currently segmenting fossils still embedded in sediment. The data is a stack of TIF images from a microtomograph with the following acquisition settings: 195 kV, 128 µA, and 0.15 mm pixel size.
The problem is that for each bone I want to segment with this feature, even after cropping the volume so my computer can handle it, it takes more than 30 minutes, sometimes up to 45. I tried on another Windows PC with 128 GB of RAM and on a Mac with 192 GB, so memory was no longer a limitation. However, I got very similar times to those on my PC; in short, it was still taking a long time.
Is there a way to speed up the process without using more RAM, and without cropping the volume again? I’ve noticed that when I segment with this tool, the segment used as background sometimes stays confined to a small, well-defined region, but other times it spreads across the entire tomography despite having a clear, well-defined outline. Could that have something to do with it? I’ve also disabled the 3D view so it loads faster, but I don’t see any improvement. Is there any possible improvement from involving the GPU in some way?
Thank you very much in advance.
Here are the specifications of the PCs I have access to for working with this dataset:
The current .nrrd file is 2.7 GB.
Own: Windows 11, 32 GB DDR5-5600 RAM, Intel i5-14600KF, and RTX 4070 Ti Super.
Lab 1: Lenovo ThinkStation, Windows 11, 128 GB DDR4-2400 RAM, Intel Xeon Silver 4208, and RTX A5000.
Lab 2: Mac Studio, macOS Sequoia 15.3.2, 192 GB LPDDR5 RAM, Apple M2 Ultra.
How big are your microCT scans? Pixel size alone doesn’t tell us anything about the size of the data.
This is a challenging segmentation, because there is probably an intensity gradation from your fossils to the sediment around them. But it’s hard to tell without looking at your data.
If your volume is big, and the structures you are trying to segment are not continuous across the volume, then the biggest improvement you can make is to create subvolumes that contain only what you want to segment (for example, if your goal is to segment hand bones, do not have the thorax in the field of view). Because everything in Slicer is in physical space, you can then move/copy segments into one segmentation, as in the sketch below.
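If scripting helps, here is a minimal, untested sketch of that crop-then-merge workflow in Slicer’s Python console (node names like “FossilCT”, “AllBones”, and the ROI center/size are hypothetical placeholders for your scene):

```python
import slicer

masterVolume = slicer.util.getNode("FossilCT")  # hypothetical node name

# Define a region of interest around one bone. Slicer works in physical
# space, so the center is in RAS coordinates and the size is in mm.
roi = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode", "BoneROI")
roi.SetCenter(0.0, 0.0, 0.0)   # placeholder: center of the bone
roi.SetSize(50.0, 50.0, 50.0)  # placeholder: extent of the bone in mm

# Crop the master volume down to just that ROI.
cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(masterVolume.GetID())
cropParams.SetROINodeID(roi.GetID())
slicer.modules.cropvolume.logic().Apply(cropParams)
croppedVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())

# After segmenting the cropped volume, copy the segment into one combined
# segmentation; it lands in the right place because geometry is physical.
combined = slicer.util.getNode("AllBones")         # destination segmentation
perBone = slicer.util.getNode("BoneSegmentation")  # source segmentation
segmentId = perBone.GetSegmentation().GetNthSegmentID(0)
combined.GetSegmentation().CopySegmentFromSegmentation(
    perBone.GetSegmentation(), segmentId)
```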
Also give the SlicerNNInteractive extension a try. It works really well as a replacement for Grow from Seeds (just make sure you downsample the data to fit your GPU; see the sketch below).
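For the downsampling, something like this with the bundled Resample Scalar Volume module should work (untested sketch; “FossilCT” is a placeholder node name, and 0.3 mm simply doubles your 0.15 mm voxel size):

```python
import slicer

inputVolume = slicer.util.getNode("FossilCT")  # hypothetical node name
outputVolume = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLScalarVolumeNode", "FossilCT_half")

# Resample 0.15 mm voxels to 0.3 mm: half the resolution per axis,
# about 1/8 of the voxels, so far more likely to fit in 16 GB of VRAM.
params = {
    "InputVolume": inputVolume.GetID(),
    "OutputVolume": outputVolume.GetID(),
    "outputPixelSpacing": "0.3,0.3,0.3",
    "interpolationType": "linear",
}
slicer.cli.runSync(slicer.modules.resamplescalarvolume, None, params)
```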
I’ve thought about doing the subvolumes, although it’s a bit tedious. I don’t know if it will be viable because the bones are scattered throughout the rock matrix. The animal is complete but disarticulated, and it would be a good idea for me to segment and export all the models in position so I can then rework them in Blender.
This is the first time I’m reading about this extension. How does it work? My GPU has 16GB of VRAM. Would I have to lower the resolution, for example, to make it fit as you say?
I’m thinking about using the SlicerMorph module, importing the image stack at half resolution instead of full, and re-cropping it so it fits within the threshold. I don’t know how much information I’d lose by lowering the detail.
Isn’t there a way to involve the GPU in the “normal” way Grow from Seeds works?
I managed to reduce it to a couple of seconds, but by halving the quality the result doesn’t work; I lose a lot of detail, and some of the pieces are only millimetres in size. I tried installing the extension you mentioned, but none of it loads, and I don’t know what’s going on. I’m using version 5.7.0.
After following the server installation guide, I’m getting errors trying to run it. Do you know how to fix this?
“Failed to connect to server ‘http://localhost:1527’. Please make sure the server is running and check the server URL in the ‘Configuration’ tab.”
The first command executes successfully: “powershell -ExecutionPolicy ByPass -c "irm -useb https://pixi.sh/install.ps1 | iex"”
The second line gives me an error in this section:
PS C:\Users\jared> cd /d %localappdata%
Set-Location: A positional parameter cannot be found that accepts argument ‘%localappdata%’.
At line:1 char:1
and therefore when starting the server I get this error:
PS C:\Users\jared> cd /d %localappdata%\nninteractive-server.pixi\envs\default\Scripts
Set-Location: A positional parameter cannot be found that accepts argument
‘%localappdata%\nninteractive-server.pixi\envs\default\Scripts’.
At line:1 char:1
cd /d %localappdata%\nninteractive-server.pixi\envs\default\Scripts
PS C:\Users\jared> pip install nninteractive-slicer-server
pip: The term ‘pip’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
Nowadays I don’t use Windows as much, but if I were to guess, you were running this inside PowerShell instead of the Windows terminal (command console) as instructed. I have not tried using this on Windows.
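For reference, the “/d” switch and “%localappdata%” are cmd.exe syntax, which is why PowerShell’s Set-Location rejects them. If you do stay in PowerShell, the equivalent would be something like “Set-Location "$env:LOCALAPPDATA\nninteractive-server.pixi\envs\default\Scripts"”.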
Aaah! I found the error, you were absolutely right. Thank you so much. I misinterpreted it; there were some issues with the translation, haha. I have just one question: it seems it doesn’t support CUDA. Is there any way to fix this?
It may not be pulling in the right combination of torch and CUDA libraries, or you might have some issue with your NVIDIA driver. Again, I find getting deep-learning tools working correctly on Windows quite difficult; I suggest you try MorphoCloud or use a Linux machine locally if you can.
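A quick way to see what the server environment actually got is to run this with that environment’s own Python (plain PyTorch calls):

```python
import torch

# Which torch build is installed, which CUDA it was compiled against,
# and whether it can actually see your GPU.
print(torch.__version__)
print(torch.version.cuda)  # prints None for a CPU-only build
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```

If torch.version.cuda prints None, you have a CPU-only wheel; a driver that supports CUDA 12.9 can still run wheels built for older CUDA 12.x, for example those from the https://download.pytorch.org/whl/cu126 index.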
Everything’s up to date. I have CUDA 12.9, so maybe the combination of CUDA and Torch isn’t right. I don’t understand how this actually works; it’s new to me, but I’ll have to figure it out somehow to see if I can improve the segmentation speed.
Well, after trial and error, I managed to make it compatible with CUDA and have it detect the GPU. However, NNInteractive Slicer doesn’t have CUDA compatibility on Pixi, and it’s not viable with a CPU. Thanks anyway.
I am pretty sure quite a few people are using nnInteractive with CUDA and a GPU on Windows. Not sure if they are doing it via Docker or Pixi.
Hopefully one of them can chime in.
We use nnInteractive on Windows. It was just a regular pip install, as documented on the GitHub page. It might have been pixi or venv, I don’t recall, but I also don’t think it matters.
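Roughly: create and activate a fresh environment (venv or pixi), run “pip install nninteractive-slicer-server”, start the server as described in the README, and point the extension’s Configuration tab at it; the URL from your earlier error, http://localhost:1527, suggests that is the default port.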
Update
There’s something about microtomography that makes it take so long. I tested how long Grow from Seeds took with another image stack of approximately 9 GB at full resolution, and it took less than two minutes.
There should be nothing “special” about microCT data. However, if you can share your data, and the seeds you are trying to grow from, I can take a look…