I recently upgraded my desktop GPU from an old GeForce Titan to a 1080 Ti, based on feedback that the new VTK requires newer hardware.
My datasets, which work perfectly with 4.8.1, crash the GPU volume rendering in the nightlies with identical volume property (VP) settings. This is on Windows 7, with a recent NVIDIA driver, 398.11 (from June). It is a fairly standard setup.
I am happy to provide datasets and settings for better troubleshooting; just let me know what’s needed. This is a big obstacle for my lab and our user base: we cannot stay on the stable version forever, yet we need the features introduced in the Segment Editor after the 4.8.1 release.
I could reproduce the error. Volume rendering works and the performance is reasonable, but as I zoom in and rotate, the application hangs after a couple of seconds. The Windows error log indicates that the video card driver crashed (faulting module: C:\WINDOWS\SYSTEM32\nvoglv64.DLL).
In the VTK OpenGL rendering backend (used in Slicer-4.8.x) you could specify the GPU memory size, and the renderer automatically downsampled the input volume to fit in it. As far as I know, this feature has not been implemented yet in the new VTK OpenGL2 rendering backend. Most likely this is the root cause of the TDR errors (graphics card driver timeouts) and this application hang. It might also be some trivial bug triggered by unusual properties of your dataset (float voxel type, very small voxel size, etc.), which might be easy to fix.
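For reference, this is roughly how the limit is exposed on the mapper side (a minimal sketch; in the OpenGL2 backend these calls still exist but reportedly have no effect yet):

```python
import vtk

# The old OpenGL backend honored this limit and downsampled the
# volume to fit; in the OpenGL2 backend the setters are inherited
# from vtkGPUVolumeRayCastMapper but are reportedly ignored.
mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetMaxMemoryInBytes(2 * 1024 * 1024 * 1024)  # limit to 2 GB
mapper.SetMaxMemoryFraction(0.75)  # use at most 75% of that limit
```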
Now that basic rendering issues around clipped volumes are getting resolved, it may be a good time to work on rendering of large volumes.
@jcfr What is the best course of action? Could somebody at Kitware have a look at rendering this dataset with a small VTK application to see if they can reproduce the error? Should we file an issue in the VTK bug tracker?
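Something like the following could serve as that small VTK application (a sketch in Python; the synthetic float volume from vtkRTAnalyticSource stands in for the real dataset, which would be loaded with an appropriate reader instead):

```python
import vtk

# Synthetic float-voxel volume as a stand-in for the real dataset
# (replace with a reader for the actual file to reproduce exactly).
source = vtk.vtkRTAnalyticSource()  # produces float scalars
source.SetWholeExtent(0, 255, 0, 255, 0, 255)
source.Update()

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(source.GetOutputPort())

# Simple color/opacity transfer functions spanning the data range
ctf = vtk.vtkColorTransferFunction()
ctf.AddRGBPoint(0.0, 0.0, 0.0, 0.0)
ctf.AddRGBPoint(300.0, 1.0, 1.0, 1.0)
otf = vtk.vtkPiecewiseFunction()
otf.AddPoint(0.0, 0.0)
otf.AddPoint(300.0, 0.5)

prop = vtk.vtkVolumeProperty()
prop.SetColor(ctf)
prop.SetScalarOpacity(otf)

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()  # zoom/rotate interactively to try to trigger the hang
```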
FYI, converting your volume to have “unsigned char” voxels fixes the GPU volume rendering issues. Rendering may work because of the smaller memory footprint of the image (each voxel can be represented in 1 byte instead of 4), or there may be an error in GPU rendering of volumes with float voxels.
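If you want to script the conversion, a minimal sketch in VTK Python (assuming floatImage is your vtkImageData with float voxels; see the note about rescaling intensities further down):

```python
import vtk

# 'floatImage' is assumed to be the vtkImageData with float voxels.
lo, hi = floatImage.GetScalarRange()

shiftScale = vtk.vtkImageShiftScale()
shiftScale.SetInputData(floatImage)
shiftScale.SetShift(-lo)                  # move the minimum to 0
shiftScale.SetScale(255.0 / (hi - lo))    # stretch the range to 0..255
shiftScale.SetOutputScalarTypeToUnsignedChar()
shiftScale.ClampOverflowOn()              # clamp instead of wrapping around
shiftScale.Update()
ucharImage = shiftScale.GetOutput()
```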
@muratmaga It would be nice if you could test volume rendering of large images that have char or int voxels (not float or double).
@lassoan
I tried converting to uchar and, yes, it did not crash. Thanks for the tip.
I actually got better interactivity when I switched the Quality setting from Adaptive to Normal. Selecting Maximum caused an immediate crash.
Can I also suggest expanding the GPU memory value range a bit? The 1080 Ti has 11 GB of RAM; I can choose either 8 GB, which is less than what the card is capable of, or 12 GB, which is more.
The GPU memory combobox allows entering custom values. However, as @lassoan says and I also discovered earlier (see the VTK thread), it has no effect in the new rendering backend, so support needs to be added in VTK for this to work.
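For reference, a custom value can also be set from the Python console (a sketch; ‘MyVolume’ is a hypothetical node name, and GPUMemorySize is specified in megabytes), although as noted the value is currently ignored by the OpenGL2 backend:

```python
# In the Slicer Python console
volumeNode = slicer.util.getNode('MyVolume')  # hypothetical node name
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)
# 11 GB in MB; currently has no effect in the OpenGL2 backend
displayNode.SetGPUMemorySize(11264)
```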
It seems there may be a bug in the VTK renderer, as I experience some strange behavior on a laptop with an integrated Intel GPU.
Original float image: the volume appears (in small size), can be rotated, and speed is reasonable, but when I try to zoom in, rendering of the application main window (all widgets and viewports) stops within a few seconds. The application still runs, because I can click the ‘X’ button in the top-right corner and the exit confirmation popup appears, but nothing in the main window gets rendered anymore.
If I rescale and cast the image to unsigned char and increase the image spacing by a factor of 10 (from 0.018 mm to 0.18 mm), then rendering is robust and surprisingly fast (8-10 fps), even on this really underpowered integrated Intel HD Graphics 620 GPU.
I’ve rescaled and cast the sample volume above to unsigned char, then resampled it to be 8x larger (1440x2592x1120). This large volume could be rendered without any problems on a very basic integrated GPU (Intel HD Graphics 620), at about 4-5 fps.
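The resampling step can be scripted along these lines (a sketch; ucharImage is assumed to be the unsigned char volume from the conversion above):

```python
import vtk

# Upsample 2x along each axis, i.e. 8x more voxels in total.
dims = ucharImage.GetDimensions()

resize = vtk.vtkImageResize()
resize.SetInputData(ucharImage)
resize.SetResizeMethodToOutputDimensions()
resize.SetOutputDimensions(dims[0] * 2, dims[1] * 2, dims[2] * 2)
resize.Update()
largeImage = resize.GetOutput()
```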
So, it seems that volume rendering of floating-point volumes is broken on NVIDIA and Intel GPUs, while volume rendering of integer types works regardless of volume size. Floating-point volumes also seem to render correctly on AMD GPUs.
It sounds like a rounding error: different cards can offer different floating-point precision and range for the same variable declaration. Some extra code to query or test the actual floating-point type is probably needed.
There is no perceivable “data loss”. What you see is due to the difference in range between the scalar types.
Before you cast the image, you must rescale the intensity levels to match the range of the target scalar type. For example, use the Simple Filters module’s IntensityWindowingImageFilter with Window min = 0, max = 50 and Output min = 0, max = 255 before casting float values to unsigned char.
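Scripted, the same recipe would look roughly like this using SimpleITK, which the Simple Filters module wraps (‘inputVolume.nrrd’ and ‘outputVolume.nrrd’ are placeholder paths):

```python
import SimpleITK as sitk

image = sitk.ReadImage('inputVolume.nrrd')  # placeholder path

# Map intensities 0..50 to 0..255, then cast to unsigned char
windowed = sitk.IntensityWindowing(image,
                                   windowMinimum=0.0, windowMaximum=50.0,
                                   outputMinimum=0.0, outputMaximum=255.0)
ucharImage = sitk.Cast(windowed, sitk.sitkUInt8)
sitk.WriteImage(ucharImage, 'outputVolume.nrrd')
```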
This is rendered from unsigned char voxels after rescaling and casting: