That’s all I can get in Volume Rendering with your eyeglobe data using a 6700XT on Linux.
Unusable.
I managed to replace the 6900XT today with a 3090 … Volume rendering the eye, the 3090 appears to be stable at normal quality. Using adaptive quality, the 3090 works at about 80-90% when rotating non-stop in the 3D view, and is a little laggy when making sudden rotations. At maximum quality, Slicer just closes. On Windows.
Windows stops applications that use the GPU for too long, exceeding the TDR (Timeout Detection and Recovery) delay. You can increase the TDR delay to avoid this, but it would be great if you could test whether setting volume partitions solves this issue, too.
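For reference, the partitions can be set on VTK's GPU ray cast mapper directly. A standalone, minimal VTK sketch (not the Slicer-specific wiring; "eyeglobe.mha" is a placeholder file name, and it assumes a VTK build where vtkOpenGLGPUVolumeRayCastMapper exposes SetPartitions()):

```python
# Standalone VTK sketch; the file name is a placeholder.
import vtk

reader = vtk.vtkMetaImageReader()
reader.SetFileName("eyeglobe.mha")

mapper = vtk.vtkOpenGLGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())
# Split the volume into 2x2x2 bricks so each 3D texture and each draw call is smaller.
mapper.SetPartitions(2, 2, 2)

# Simple grayscale transfer functions just to get a visible rendering.
colorFunc = vtk.vtkColorTransferFunction()
colorFunc.AddRGBPoint(0, 0.0, 0.0, 0.0)
colorFunc.AddRGBPoint(500, 1.0, 1.0, 1.0)
opacityFunc = vtk.vtkPiecewiseFunction()
opacityFunc.AddPoint(0, 0.0)
opacityFunc.AddPoint(500, 0.8)

volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.SetColor(colorFunc)
volumeProperty.SetScalarOpacity(opacityFunc)
volumeProperty.SetInterpolationTypeToLinear()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(volumeProperty)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(renderWindow)

renderWindow.Render()
interactor.Start()
```

Splitting into bricks keeps each texture upload and each draw call smaller, which is why it may also help stay under the TDR timeout.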
At the “adaptive” quality setting, the renderer measures rendering time and adjusts rendering quality (number of rays, sampling distances, etc.) for the next rendering request to achieve the requested frame rate. This mechanism works very well for CPUs because the computation is very predictable. However, rendering time on GPU can be highly nonlinear and the GPU may perform complex internal optimizations, too, therefore the prediction is often very inaccurate, especially for large volumes. As a result, the “adaptive” setting may not work as intended for GPU volume rendering.
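A schematic illustration of that feedback loop (my own sketch, not the actual VTK/Slicer code):

```python
# Schematic illustration only; not the actual VTK/Slicer implementation.
def adjust_sample_distance(sample_distance, last_render_seconds, target_seconds=1.0 / 30.0):
    """Scale the ray sampling distance so the next frame aims at the target time.

    Cost is assumed to be inversely proportional to the sampling distance,
    which is a good prediction on CPUs but often inaccurate on GPUs.
    """
    return sample_distance * (last_render_seconds / target_seconds)

# Last frame took 50 ms but ~33 ms was requested, so sampling gets coarser:
print(adjust_sample_distance(0.5, 0.050))  # -> 0.75
```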
Volume rendering with the eyeglobe dataset works equally well with a 5500XT on Linux, with or without intensity resampling.
It’s worth mentioning that it also works well on my laptop, which has an AMD Ryzen 5 2500U with an integrated Radeon Vega Mobile graphics chip and only 1 GB of shared VRAM, using Linux of course.
It seems that if results are nearly random with a 6900XT on Windows, the video driver’s performance should be questioned. This is quite surprising because AMD usually provides optimized drivers for Windows first.
All this to say that the Radeon cards are good, with good drivers.
@Hao_Li would it be possible for you to check on the 6900 XT?
Hi!
I tried the volume partitions method on the 3090 with the eyeglobe data, using 2,2,2 and 4,4,4. Both negatively affected volume rendering speed; the effect was most obvious at normal quality, which became very laggy. No difference was observed between 2,2,2 and 4,4,4 at maximum quality: every rotation movement took about 5-10 seconds, and if the movement was too fast, Windows would close Slicer. I did not dare to change the TDR delay, as I’m not very handy with computers and it felt risky…
Hi Davide!
I’m happy to test the data for you. Do you have some more files you want tested on the 6900XT? I have switched over to working with the 3090 now. I’m not very handy with computers, so changing graphics cards is a big project for me. If you have more files, I can try them all at the same time.
@davide445
I just tested volume rendering of the eye on the 6900XT. It ran smoothly at normal quality, was laggy at adaptive quality, and at maximum quality it is very slow, around 5 seconds for each rotation movement. All similar to the 3090. I’ll keep the card today and tomorrow; let me know if you want more data tested.
@Hao_Li @chir.set there are two final tests that seem worth doing to me.
I didn’t find scans just above the 2048 limit; would it be possible to upscale this eye, or downscale a higher-resolution one?
The reason for this test: looking at the OpenGL Reports DB, I noticed that the same AMD GPUs (almost all the ones I checked) report GL_MAX_3D_TEXTURE_SIZE = 2048 under Windows and 8192 under Linux.
Also, all Nvidia GPUs report 16384 for the same parameter, regardless of whether it's Windows or Linux.
I found a few cases where this is not true, but if so, it seems the AMD Windows drivers are reporting the wrong value. I don’t understand whether the GPU hardware architecture defines this value, so there may be another “true” maximum possible value, but it seems to me worth a test (if possible).
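For reference, here is one way to check what the driver actually reports on a given machine; a minimal sketch assuming the glfw and PyOpenGL Python packages are available (any tool that creates an OpenGL context and calls glGetIntegerv should give the same answer):

```python
# Minimal probe of the driver-reported limit; assumes glfw and PyOpenGL are installed.
import glfw
from OpenGL.GL import glGetIntegerv, GL_MAX_3D_TEXTURE_SIZE

glfw.init()
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # hidden window, just for the GL context
window = glfw.create_window(64, 64, 'probe', None, None)
glfw.make_context_current(window)

print('GL_MAX_3D_TEXTURE_SIZE =', int(glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE)))

glfw.terminate()
```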
@davide445 I’m happy to test on Windows. Just send a file
I’ll be happy to test with my 6700XT on Linux. I don’t have any datasets with such high resolution.
This post suggests that not all software reports GL_MAX_3D_TEXTURE_SIZE reliably.
@davide445
I realized I have a lot of scans above 2048, and I think I can confirm your concerns.
I have tested the following scans on Windows with the 6900XT (dimensions = file size: does volume rendering work?):
2367x1784x942 = 7 GB: No
1950x1830x1600 = 11 GB: Wrong stitching
885x1300x1230 = 5 GB: Yes
2125x2510x938 = 10 GB: No
1937x1829x951 = 6.5 GB: Wrong stitching
2836x1948x2664 = 14 GB: No
1928x1927x937 = 6.8 GB: Wrong stitching
2098x1919x936 = 7 GB: No
1268x1454x1064 = 3 GB: Yes
2572x1837x951 = 8 GB: No
After testing, I believe scans with any dimension above 2048 (2098 was the smallest such dimension among my scans) cannot be volume rendered, regardless of total scan file size.
Out of curiosity, I tested the following on the 3090 as well.
2097x2134x1850, 16 GB
2292x2207x2238, 46 GB
3784x3784x2085, 58 GB
They all could be volume rendered properly on the 3090 in Windows; I didn’t have larger scans to test the limit.
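If it helps to automate the check, here is a small convenience sketch for the Slicer Python console that flags loaded volumes with any dimension above 2048:

```python
# Prints each loaded scalar volume's dimensions and whether any exceed 2048.
import slicer

for volumeNode in slicer.util.getNodesByClass('vtkMRMLScalarVolumeNode'):
    dims = volumeNode.GetImageData().GetDimensions()
    print(volumeNode.GetName(), dims, 'any dimension > 2048:', any(d > 2048 for d in dims))
```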
@Hao_Li thanks a lot. Might I ask if you can share your 2098 file so @chir.set can test it on Linux?
Also, if you can downsample one of the non-rendering images to a lower resolution (e.g. using the Resample Scalar Volume module), we can cross-check.
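For anyone doing this from the Python console, a minimal sketch of running the Resample Scalar Volume CLI (the node name and spacing values are placeholders; if I recall correctly, a spacing of 0 along an axis keeps the input spacing for that axis, so 0,0,0 leaves the dimensions unchanged):

```python
# Minimal sketch for the Slicer Python console; node name and spacing are placeholders.
import slicer

inputVolume = slicer.util.getNode('scan_2572x1837x951')  # placeholder node name
outputVolume = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLScalarVolumeNode', 'resampled')

parameters = {
    'InputVolume': inputVolume.GetID(),
    'OutputVolume': outputVolume.GetID(),
    'outputPixelSpacing': '1.5,1.5,1.5',  # coarser spacing -> fewer voxels per axis
    'interpolationType': 'linear',
}
slicer.cli.runSync(slicer.modules.resamplescalarvolume, None, parameters)
print(outputVolume.GetImageData().GetDimensions())
```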
Yep, I was about to request the 3784x3784x2085, 58G series too. All duly anonymized of course.
Hi, I’m sorry I cannot share the scans, they don’t belong to me …
I used Resample Scalar Volume with Spacing 0,0,0 and linear interpolation.
The resulting volume dimensions were unchanged: 2572x1837x951 before and after.
And the volume still could not be rendered, just an empty square.
@Hao_Li @chir.set can you try the Crop Volume feature on ircadb1.12 with these parameters and check whether the result can be rendered?
The resulting volume needs to have more than 2048 slices.
Edit: sorry, use a 0.24 spacing scale, not 0.26 as in the image.
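A minimal scripted equivalent of the same Crop Volume test (node names are placeholders, assuming the standard vtkMRMLCropVolumeParametersNode API):

```python
# Minimal sketch for the Slicer Python console; 'ircadb1.12' and 'CropROI' are placeholders.
import slicer

volumeNode = slicer.util.getNode('ircadb1.12')
roiNode = slicer.util.getNode('CropROI')  # an ROI covering the whole volume

cropParameters = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLCropVolumeParametersNode')
cropParameters.SetInputVolumeNodeID(volumeNode.GetID())
cropParameters.SetROINodeID(roiNode.GetID())
cropParameters.SetVoxelBased(False)          # interpolated cropping so spacing scale applies
cropParameters.SetIsotropicResampling(True)
cropParameters.SetSpacingScalingConst(0.24)  # oversample so one dimension exceeds 2048

slicer.modules.cropvolume.logic().Apply(cropParameters)
croppedVolume = slicer.mrmlScene.GetNodeByID(cropParameters.GetOutputVolumeNodeID())
print(croppedVolume.GetImageData().GetDimensions())
```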
I don’t fully understand your requirements:
I tested with the eyeglobe and another 512x512x2147 volume with ‘Spacing scale’ < 0.24; volume rendering always works on Linux. In both cases, though the ‘Crop volume’ module shows high dimension values, the ‘Volumes’ module always shows the source dimensions for the cropped volumes.