Will a better GPU benefit VR volume rendering?

Hi!
I tried the volume partitions setting on the 3090 with the eyeglobe data, using both 2, 2, 2 and 4, 4, 4. Both settings negatively affected volume rendering speed. The effect was most obvious under normal quality, where rendering became very laggy. Under maximum quality there was no difference between 2, 2, 2 and 4, 4, 4: every rotation movement took about 5-10 seconds, and if I moved too fast, Windows closed Slicer. I did not dare to change the TDR delay, as I'm not very handy with computers and it felt risky…

Hi Davide!

I'm happy to test the data for you. Do you have some more files you want tested on the 6900XT? I have switched over to working with the 3090 now. I'm not very handy with computers, so it's a big project for me to change graphics cards :sweat_smile: If you have more files, I can try them at the same time.

@davide445
Just freshly tested volume rendering of the eye on the 6900XT. It ran smoothly under normal quality, was laggy at adaptive quality, and at maximum quality it was very slow, around 5 seconds for each rotation movement. All similar to the 3090. I'll keep the card today and tomorrow; let me know if you want more data tested.

@Hao_Li @chir.set there are two final tests that seem worth doing to me:

  • testing with resolutions 2047 and 2049 (so just below and just above 2048)
  • testing the same resolution on Linux rather than Windows

I didn't find scans just above the 2048 limit; would it be possible to upscale this eye, or downscale a higher-res one?
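In case it helps, here is a rough sketch for the Slicer Python console of how the upscale could be done with the Resample Scalar Volume module. It is untested: the node name 'eyeglobe' is hypothetical, and the parameter names (outputPixelSpacing, interpolationType) are my reading of that CLI module's interface, so please double-check them.

```python
# Sketch: upsample a loaded volume so its Z dimension lands just above 2048.
# 'eyeglobe' is a hypothetical node name; adjust to whatever is loaded.
inputVolume = slicer.util.getNode('eyeglobe')
outputVolume = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLScalarVolumeNode', 'eyeglobe_2049')

sx, sy, sz = inputVolume.GetSpacing()
dims = inputVolume.GetImageData().GetDimensions()
newSz = sz * dims[2] / 2049.0  # same physical extent spread over ~2049 slices

params = {
    'InputVolume': inputVolume.GetID(),
    'OutputVolume': outputVolume.GetID(),
    'outputPixelSpacing': f'{sx},{sy},{newSz}',
    'interpolationType': 'linear',
}
slicer.cli.runSync(slicer.modules.resamplescalarvolume, None, params)
print(outputVolume.GetImageData().GetDimensions())  # expect Z to be ~2049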

The reason for this test: looking at the OpenGL Reports DB, I noticed that the same AMD GPUs (almost all the ones I checked) report GL_MAX_3D_TEXTURE_SIZE = 2048 under Windows and 8192 under Linux.
Also, all Nvidia GPUs report the same parameter as 16384, regardless of Windows or Linux.

I found a few cases where this is not true, but even so it seems the AMD Windows GPU drivers are reporting the wrong value. I couldn't work out whether the GPU hardware architecture defines this value, so there may be another "true" maximum possible value, but it seems to me worth a test (if possible).
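For anyone who wants to check what their own driver reports, here is a minimal sketch that queries the limit from a hidden OpenGL context (run outside Slicer; assumes `pip install PyOpenGL glfw`):

```python
# Minimal sketch: query GL_MAX_3D_TEXTURE_SIZE from a hidden OpenGL context.
import glfw
from OpenGL.GL import glGetIntegerv, GL_MAX_3D_TEXTURE_SIZE

if not glfw.init():
    raise RuntimeError('glfw init failed')
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)  # no window appears on screen
window = glfw.create_window(64, 64, 'probe', None, None)
glfw.make_context_current(window)
print('GL_MAX_3D_TEXTURE_SIZE =', glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))
glfw.terminate()
```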

@davide445 I’m happy to test on Windows. Just send a file :cowboy_hat_face:

I'll be happy to test with my 6700XT on Linux. I don't have any datasets with such high resolution.

This post suggests that not all software reports GL_MAX_3D_TEXTURE_SIZE reliably.

@davide445
I realized I have a lot of scans above 2048, and I think I can confirm your concerns.
I have tested the following scans on Windows with the 6900XT:

| Dimensions | File size | Rendered? |
| --- | --- | --- |
| 2367x1784x942 | 7 GB | No |
| 1950x1830x1600 | 11 GB | Wrong stitching |
| 885x1300x1230 | 5 GB | Yes |
| 2125x2510x938 | 10 GB | No |
| 1937x1829x951 | 6.5 GB | Wrong stitching |
| 2836x1948x2664 | 14 GB | No |
| 1928x1927x937 | 6.8 GB | Wrong stitching |
| 2098x1919x936 | 7 GB | No |
| 1268x1454x1064 | 3 GB | Yes |
| 2572x1837x951 | 8 GB | No |

After testing, I believe scans with any dimension above 2048 (2098 being the smallest such dimension among my scans) cannot be volume rendered, regardless of total scan file size.

Out of curiosity I tested the following on the 3090 as well:
2097x2134x1850, 16 GB
2292x2207x2238, 46 GB
3784x3784x2085, 58 GB
They could all be volume rendered properly on the 3090 in Windows; I didn't have larger scans to test the limit.


@Hao_Li thanks a lot. Might I ask if you can share your 2098 file so @chir.set can test it on Linux?
Also, if you are able to downsample one of the non-rendering images to a lower resolution (e.g. using the Resample Scalar Volume module), we can cross-check.

Yep, I was about to request the 3784x3784x2085, 58 GB series too. All duly anonymized of course.

Hi, I’m sorry I cannot share the scans, they don’t belong to me …

I used Resample Scalar Volume with spacing 0,0,0 and linear interpolation (in that module, a spacing of 0 means keep the original spacing, which is why the size did not change).
The volume size stayed the same, 2572x1837x951 before and after.
And the volume still could not be rendered, just an empty square.

@Hao_Li @chir.set can you try, on ircadb1.12, to use the Crop Volume feature with these parameters and see whether the result can be rendered?
[image: Crop Volume parameter settings]
The resulting volume needs to have more than 2048 slices.
Edit: sorry, use 0.24 spacing scale, not 0.26 as in the image.
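If it's easier to reproduce from the Python console, something along these lines should apply the same operation. This is only a sketch: the vtkMRMLCropVolumeParametersNode method names are my best reading of the CropVolume API, and the node name 'PATIENT_DICOM' is hypothetical, so double-check against your Slicer version.

```python
# Sketch (Slicer Python console): Crop Volume with spacing scale 0.24.
inputVolume = slicer.util.getNode('PATIENT_DICOM')  # hypothetical node name
roiNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLMarkupsROINode')

params = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLCropVolumeParametersNode')
params.SetInputVolumeNodeID(inputVolume.GetID())
params.SetROINodeID(roiNode.GetID())
params.SetSpacingScalingConst(0.24)  # ~4x more voxels along each axis
params.SetVoxelBased(False)          # interpolated cropping, not voxel-exact

cropLogic = slicer.modules.cropvolume.logic()
cropLogic.FitROIToInputVolume(params)  # make the ROI cover the whole volume
cropLogic.Apply(params)

outputVolume = slicer.mrmlScene.GetNodeByID(params.GetOutputVolumeNodeID())
print(outputVolume.GetImageData().GetDimensions())  # expect > 2048 slices
```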

Hi! This is what I got on the 6900XT. It cannot volume render.


I don't fully understand your requirements:

  • what is the original volume?

I tested with the eyeglobe and another 512x512x2147 volume with 'Spacing scale' < 0.24; volume rendering always works on Linux. In both cases, though the 'Crop volume' module shows high dimension values, the 'Volumes' module always shows the source dimensions for the cropped volumes.

What is this strange file?

I probably misunderstood your question. The zip contains DICOM volumes from a public anonymized dataset used for research.
I just chose something anyone can use for testing.

Might I ask if it would be possible to use Crop Volume, changing the spacing scale of your 1268x1454x1064 volume to 0.9, and test this way on the 6900 XT?
This is because it seems that, apart from the GL_MAX_3D_TEXTURE_SIZE = 2048 issue, there might also be some faulty shared memory management.
Looking at your numbers, and watching the GPU memory usage on my side, I got the idea to check why you have problems in some cases even when the texture resolution is below 2048, and created this table:

| Resolution | Hao_Li disk size | Hao_Li result | VRAM needed (GB) |
| --- | --- | --- | --- |
| 2367x1784x942 | 7 GB | No | 29.6 |
| 1950x1830x1600 | 11 GB | Wrong stitching | 42.5 |
| 885x1300x1230 | 5 GB | Yes | 10.5 |
| 2125x2510x938 | 10 GB | No | 37.3 |
| 1937x1829x951 | 6.5 GB | Wrong stitching | 25.0 |
| 2836x1948x2664 | 14 GB | No | 109.7 |
| 1928x1927x937 | 6.8 GB | Wrong stitching | 25.9 |
| 2098x1919x936 | 7 GB | No | 28.1 |
| 1268x1454x1064 | 3 GB | Yes | 14.6 |
| 2572x1837x951 | 8 GB | No | 33.5 |

The VRAM needed is calculated assuming 16-bit (2-byte) values at every index, for a total of 8 bytes per voxel.
It appears to me that you get problems not only when the slice number is more than 2048, but also when the needed VRAM is greater than your GPU's VRAM, 16 GB in your case.
When both conditions are met (no more than 2048 slices and no more than 16 GB VRAM required), you have no problems (two cases).
So upscaling that volume by 10% creates a new one requiring more than 16 GB, and if my hypothesis is true, you should start having problems even though the slice count stays below 2048.
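For reference, here is the little calculation behind the table's last column (8 bytes per voxel is my assumption; values come out in GiB):

```python
# Estimate GPU texture memory per scan, assuming 8 bytes per voxel.
scans = [
    ((2367, 1784, 942), 'No'),
    ((1950, 1830, 1600), 'Wrong stitching'),
    ((885, 1300, 1230), 'Yes'),
    ((2125, 2510, 938), 'No'),
    ((1937, 1829, 951), 'Wrong stitching'),
    ((2836, 1948, 2664), 'No'),
    ((1928, 1927, 937), 'Wrong stitching'),
    ((2098, 1919, 936), 'No'),
    ((1268, 1454, 1064), 'Yes'),
    ((2572, 1837, 951), 'No'),
]
BYTES_PER_VOXEL = 8
for (x, y, z), result in scans:
    vram_gib = x * y * z * BYTES_PER_VOXEL / 2**30
    over_dim = max(x, y, z) > 2048     # hits the GL_MAX_3D_TEXTURE_SIZE limit
    over_vram = vram_gib > 16          # exceeds the 6900XT's 16 GB VRAM
    print(f'{x}x{y}x{z}: {vram_gib:5.1f} GiB, '
          f'>2048 dim: {over_dim}, >16 GiB: {over_vram}, result: {result}')
```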

Sorry, I never answered this question. "Original volume" is just the name that appears on my side when the ircadb1.12 data (the PATIENT_DICOM folder) is loaded, nothing else.
About your test: it's interesting that it works with already more than 2048 slices.
But I wanted to ask whether you were actually visualizing the 3D of the upscaled volume (and not perhaps the original one): 6400x6400x26837 would need about 8 TB of VRAM!
Might I ask how much RAM your PC has?

OK, I’ll test with that on my work machine tomorrow.

I’m running with 64 GB of RAM and 12 GB of VRAM on the 6700XT.

I visualized both volumes, original and cropped, with the Volume Rendering cache emptied. I don't think the cropped volumes have that many real slices. The Volumes module does not report that magnitude of Z dimension for the cropped volumes, but that of the original one. Moreover, if I export a cropped volume as a DICOM series, it has the same number of files as the Z dimension of the original volume. I don't know how we should understand the 'Spacing scale' parameter of Crop Volume.

If you find that the actual volume size is not the same as what the Volumes module displays, then it is a bug. Check again that you selected the correct volume node etc., and if you confirm that the reported size is wrong then please provide a list of steps to reproduce.
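To rule out a node mix-up, the actual voxel array size can be checked directly from the Python console, for example:

```python
# Print what the scene actually holds for each scalar volume:
# image data dimensions vs. the NumPy array shape (shape order is z, y, x).
import slicer

for volumeNode in slicer.util.getNodesByClass('vtkMRMLScalarVolumeNode'):
    dims = volumeNode.GetImageData().GetDimensions()
    shape = slicer.util.arrayFromVolume(volumeNode).shape
    print(volumeNode.GetName(), 'dims:', dims, 'array shape:', shape)
```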

Output volume spacing = input volume spacing * spacing scale
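As a worked example of that formula (illustrative numbers, not from a specific dataset): with 512 slices at 1.0 mm Z spacing and a spacing scale of 0.24, the output spacing becomes 0.24 mm, and since the physical extent is unchanged the output has about 512 / 0.24 ≈ 2133 slices, just above the 2048 limit.

```python
# Worked example of: output spacing = input spacing * spacing scale.
input_spacing_z = 1.0   # mm, illustrative
input_slices = 512      # illustrative
spacing_scale = 0.24

output_spacing_z = input_spacing_z * spacing_scale   # 0.24 mm
extent_z = input_spacing_z * input_slices            # physical extent, unchanged
output_slices = round(extent_z / output_spacing_z)   # ~2133 slices > 2048
print(output_spacing_z, output_slices)
```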