Was wondering if anyone could help me! I am trying to segment a frog skeleton. Once I have applied a threshold and try to view the 3D image, Slicer crashes. I have everything closed except Slicer, and I suspect it might be something to do with my laptop's capabilities. My specs are: Intel Core i7-9750H, 2.6 GHz, 16 GB RAM, NVIDIA GeForce GTX 1660 Ti.
Is my hardware sufficient, and is there any way I can change something to make it work?
I don’t see an error message in here. A few suggestions:
You’re using the stable release; try a preview build and see if you still have problems.
We typically suggest having 6-10x more memory than your dataset size for segmentation tasks. Given the size of your dataset, you might be running out of memory. You can try downsampling your volume with the Crop Volume module to see if working at a lower resolution helps.
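If it helps, here is a rough scripted sketch of that downsampling step from the Python console (the node name "FrogCT" and the 2x spacing factor are placeholders, and recent builds use a markups ROI node while 4.10.x uses an annotation ROI; the GUI works just as well):

```python
import slicer

inputVolume = slicer.util.getNode("FrogCT")  # placeholder name, use your own volume

# Set up Crop Volume to resample the whole volume at 2x coarser spacing
# (roughly 1/8 of the original memory footprint).
cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(inputVolume.GetID())
cropParams.SetSpacingScalingConst(2.0)
cropParams.SetIsotropicResampling(True)
cropParams.SetVoxelBased(False)  # interpolated cropping, so spacing scaling is applied

# The module needs an ROI; fit it around the full volume so nothing is cut off.
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode")
cropParams.SetROINodeID(roiNode.GetID())
slicer.modules.cropvolume.logic().FitROIToInputVolume(cropParams)

slicer.modules.cropvolume.logic().Apply(cropParams)
downsampledVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
```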
Make sure to check the NVIDIA Control Panel to see if Slicer is actually set to run on the GeForce 1660 Ti GPU (see the earlier topic “Can I choose which GPU to use?”).
If it’s not too much trouble, it would be interesting to see how the error shows up in the log (it might help us identify the problem more easily next time, just by looking at the logs).
Thank you. There does not seem to be anything suspicious in this log. If you are sure that this was the session where you had problems, then maybe our logs were just not detailed enough to capture more information about the issue.
I might have misspoken; I don’t necessarily know that they are OpenGL issues, but they always happened during rendering. Preview builds handle larger 3D volumes much better than 4.10.2.
Actually, I do have a follow-up question on this. What is the purpose of the ‘Adaptive’ setting in the Volume Rendering module? For small volumes (typical medical sizes) there is no appreciable performance difference between Normal and Adaptive. For large datasets (>1 gigavoxel), Adaptive often performs poorly compared to Normal (it stutters, crashes, or results in lower FPS for the same volume property).
Can it be removed? Or could Normal at least be made the default global setting?
The Adaptive setting is essential for lower-end graphics hardware and CPU-based volume rendering. It measures rendering time and degrades rendering settings to achieve the desired interactive refresh rate. It works very well for CPU-based rendering, but since GPU-based rendering performs a lot of processing in background threads and on the hardware, rendering time measurement is not a good estimate of how overloaded the rendering pipeline is.
We could probably use “Normal” quality for GPU-based rendering by default, but unfortunately the quality mode is defined in the view node and not in the display node, so it would not be easy to change cleanly.
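In the meantime, a minimal sketch of flipping the quality for one view from the Python console (assuming the default layout’s first 3D view; repeat for other views as needed):

```python
import slicer

# Switch the first 3D view from Adaptive (the current default) to Normal
# volume rendering quality.
viewNode = slicer.app.layoutManager().threeDWidget(0).mrmlViewNode()
viewNode.SetVolumeRenderingQuality(slicer.vtkMRMLViewNode.Normal)
```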
I have seen quite a few of the stuttering-type 3D rendering issues with larger volumes fixed simply by switching to Normal, even with integrated GPUs. Yes, I would encourage making Normal the default setting, but of course I can’t comment on the feasibility.