Hi! I have tried the Ircadb files (patient CT and VTKs). On the 6900XT they run very smoothly. I used the normal, adaptive, and maximum methods; all three ran smoothly. I tried the same files on the 3090; there is no difference.
Would it be possible for you to test this file?
This time it's a 1004x1024x1018 volume, 2 GB in size.
I managed to replace the 6900XT today with the 3090 … Volume rendering the eye, the 3090 appears to be stable at normal quality. Using adaptive quality, the 3090 works at about 80-90% when rotating non-stop in the 3D view, a little laggy on sudden rotations. At maximum quality, Slicer just closes. On Windows.
Windows stops applications that occupy the GPU for too long (exceeding the TDR delay). You can increase the TDR delay to avoid this, but it would be great if you could test whether setting volume partitions solves this issue, too.
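For reference, the TDR timeout can be raised via the documented `TdrDelay` registry value (the default is 2 seconds; 60 here is just an example value). This is only a sketch of the change, to be done at your own risk from an elevated command prompt, and it requires a reboot:

```shell
:: Raise the GPU timeout (TDR delay) from the default 2 seconds to 60 seconds.
:: Run as administrator and reboot for the change to take effect.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 60 /f
```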
At “adaptive” quality setting the renderer measures rendering time and adjusts rendering quality (number of rays, sampling distances, etc.) for the next rendering request to achieve the requested frame rate. This mechanism works very well for CPUs because the computation is very predictable. However, rendering time on GPU can be highly nonlinear and the GPU may perform complex internal optimizations, too, therefore the prediction is often very inaccurate, especially for large volumes. As a result, “adaptive” setting may not work as intended for GPU volume rendering.
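As an illustration only (this is not Slicer's actual implementation), a controller of this kind can be sketched as a proportional update of the ray sample distance based on the last measured frame time. The function name and clamp limits are made up for the example; the key point is the assumption that cost scales inversely with sample distance, which is exactly what breaks down on GPUs:

```python
# Illustrative sketch of an adaptive-quality controller (not Slicer's code).
# It scales the sample distance so the next frame should hit the target time,
# assuming rendering cost is roughly inversely proportional to sample
# distance -- the very prediction that is unreliable on GPUs.

def adapt_sample_distance(sample_distance, last_frame_time, target_frame_time,
                          min_distance=0.1, max_distance=10.0):
    """Return the sample distance to use for the next frame."""
    if last_frame_time <= 0:
        return sample_distance
    # Last frame too slow -> coarsen sampling; too fast -> refine it.
    scale = last_frame_time / target_frame_time
    new_distance = sample_distance * scale
    return max(min_distance, min(max_distance, new_distance))

# Example: last frame took 100 ms but we target 33 ms -> coarsen about 3x.
print(adapt_sample_distance(1.0, 0.100, 0.033))
```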
Volume rendering with the eyeglobe dataset works equally well with a 5500XT on Linux, with or without intensity resampling.
It’s worth mentioning that it also works well on my laptop with an AMD Ryzen 5 2500U and its integrated Radeon Vega Mobile graphics chip with only 1 GB of shared VRAM, using Linux of course.
It seems that if results are nearly random with a 6900XT on Windows, the video driver’s performance should be questioned. This is quite surprising, because AMD usually ships optimized drivers for Windows first.
All this to say that the Radeon cards are good, with good drivers.
@Hao_Li, would it be possible for you to check on the 6900 XT?
I tried the volume partitions setting on the 3090 with the eyeglobe data, using 2,2,2 and 4,4,4. Both negatively affected volume rendering speed. The effect was most obvious at normal quality, which became very laggy. At maximum quality there was no difference between 2,2,2 and 4,4,4: every rotation movement took about 5-10 seconds, and if the movement was too fast, Windows would close Slicer. I did not dare to change the TDR delay; I’m not very handy with computers and it felt dangerous…
I’m happy to test the data for you. Do you have more files you want tested on the 6900XT? I have switched over to working with the 3090 now, and I’m not very handy with computers, so it’s a big project for me to change graphics cards. If you have more files, I can try them all at the same time.
Just freshly tested volume rendering of the eye on 6900XT. Ran smoothly under normal quality. Laggy at adaptive quality. At maximum quality, it is very slow, around 5 seconds for each rotation movement. All similar to 3090. I’ll keep the card today and tomorrow, let me know if you want more data tested.
- testing with resolution 2047 and 2049 (so just below and just above 2048)
- testing the same resolution on Linux rather than Windows
I didn’t find scans just above the 2048 limit; would it be possible to upscale this eye, or downscale a higher-res one?
The reason for this test: looking at the OpenGL reports database, I noticed that the same AMD GPUs (almost all the ones I checked) report GL_MAX_3D_TEXTURE_SIZE = 2048 under Windows and 8192 under Linux.
Also, all Nvidia GPUs report 16384 for the same parameter, regardless of Windows or Linux.
I found a few cases where this is not true, but if so, it seems the AMD Windows drivers are reporting the wrong value. I don’t know whether the GPU hardware architecture defines this value, so there may be another “true” maximum possible value, but it seems to me worth a test (if possible).
@davide445 I’m happy to test on Windows. Just send a file
I’ll be happy to test with my 6700XT on Linux. I don’t have any datasets with such high resolution.
This post suggests that not all software reports GL_MAX_3D_TEXTURE_SIZE reliably.
I realized I have a lot of scans above 2048, and I think I can confirm your concerns.
I have tested the following scans on Windows with the 6900XT (dimensions = file size → rendered?):
- 2367x1784x942 = 7 GB → No
- 1950x1830x1600 = 11 GB → Wrong stitching
- 885x1300x1230 = 5 GB → Yes
- 2125x2510x938 = 10 GB → No
- 1937x1829x951 = 6.5 GB → Wrong stitching
- 2836x1948x2664 = 14 GB → No
- 1928x1927x937 = 6.8 GB → Wrong stitching
- 2098x1919x936 = 7 GB → No
- 1268x1454x1064 = 3 GB → Yes
- 2572x1837x951 = 8 GB → No
After testing, I believe scans with any dimension above 2048 (2098 was the smallest such dimension among my scans) cannot be volume rendered, regardless of total scan file size.
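A quick way to pre-check a volume against this limit (the function name is made up for illustration, and 2048 is the GL_MAX_3D_TEXTURE_SIZE value the OpenGL reports database shows for AMD GPUs on Windows):

```python
# Check whether a volume's dimensions fit within a GPU's 3D texture limit.
# 2048 is the limit reported for AMD GPUs on Windows in the OpenGL reports
# database; adjust for your own hardware.

def fits_3d_texture(dims, max_texture_size=2048):
    """Return True if every dimension is within the texture size limit."""
    return all(d <= max_texture_size for d in dims)

# One of the failing scans above vs. one of the working ones:
print(fits_3d_texture((2098, 1919, 936)))   # -> False (2098 > 2048)
print(fits_3d_texture((1268, 1454, 1064)))  # -> True
```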
Out of curiosity, I tested the following on the 3090 as well:
2097x2134x1850, 16 GB
They could all be volume rendered properly on the 3090 in Windows; I didn’t have larger scans to test the limit.
@Hao_Li thanks a lot. Might I ask if you can share your 2098 file, so @chir.set can test it on Linux?
Also, if you are able to downsample a non-rendering image to a lower resolution (e.g. using the Resample Scalar Volume module), we can cross-check.
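For the cross-check, the output spacing for the Resample Scalar Volume module can be computed so that the largest dimension drops just under the 2048 limit: enlarge the spacing by the same factor you shrink each dimension by. This is only an illustrative sketch; the helper name, the 0.5 mm spacing, and the 2047 target are assumptions:

```python
# Compute the output spacing that brings a volume's largest dimension
# under a target size (e.g. 2047, just below the 2048 texture limit),
# for use with the Resample Scalar Volume module. Illustrative helper.

def spacing_for_max_dim(dims, spacing, target_max_dim=2047):
    """Return (new_spacing, new_dims) so that max(new_dims) <= target_max_dim."""
    factor = max(dims) / target_max_dim
    if factor <= 1.0:
        return spacing, dims  # already small enough, keep as-is
    # Coarser spacing by 'factor' gives proportionally fewer voxels per axis.
    new_spacing = tuple(s * factor for s in spacing)
    new_dims = tuple(int(round(d / factor)) for d in dims)
    return new_spacing, new_dims

# Example with one of the failing scans, assuming 0.5 mm isotropic spacing:
new_spacing, new_dims = spacing_for_max_dim((2572, 1837, 951), (0.5, 0.5, 0.5))
print(new_dims)  # largest dimension is now 2047
```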
Yep, I was about to request the 3784x3784x2085, 58G series too. All duly anonymized of course.
Hi, I’m sorry I cannot share the scans, they don’t belong to me …
I used Resample Scalar Volume with spacing 0,0,0 and linear interpolation.
The volume dimensions stayed the same: 2572x1837x951 before and after.
And the volume still could not be rendered; just an empty square.