Will a better GPU benefit VR volume rendering?

What is this strange file?

I probably misunderstood your question. The zip contains DICOM volumes from a public anonymized dataset used for research.
I just chose something anyone can use for testing.

Might I ask if it would be possible to use Crop Volume, changing the spacing scale of your 1268x1454x1064 volume to 0.9, and test that way on the 6900 XT?
I ask because it seems that, apart from the GL_MAX_3D_TEXTURE_SIZE = 2048 topic, there might be some incorrect shared memory management.
Looking at your numbers, and seeing the GPU memory usage on my side, I wanted to verify why in some cases you have problems even when the texture resolution is below 2048, so I created this table:

| Resolution | Size on disk | Hao_Li's result | VRAM needed (GiB) |
|---|---|---|---|
| 2367x1784x942 | 7 GB | No | 29.6 |
| 1950x1830x1600 | 11 GB | Wrong stitching | 42.5 |
| 885x1300x1230 | 5 GB | Yes | 10.5 |
| 2125x2510x938 | 10 GB | No | 37.3 |
| 1937x1829x951 | 6.5 GB | Wrong stitching | 25.0 |
| 2836x1948x2664 | 14 GB | No | 109.7 |
| 1928x1927x937 | 6.8 GB | Wrong stitching | 25.9 |
| 2098x1919x936 | 7 GB | No | 28.1 |
| 1268x1454x1064 | 3 GB | Yes | 14.6 |
| 2572x1837x951 | 8 GB | No | 33.5 |

Here the VRAM needed is calculated considering 16-bit (2 bytes) at every index, for a total of 8 bytes per voxel.
It appears to me that you get problems not only when the slice count is more than 2048, but also when the required VRAM is greater than your GPU's VRAM, 16 GB in your case.
When both conditions are satisfied (no more than 2048 slices and no more than 16 GB of VRAM required) you have no problems (two cases).
So upscaling that volume by 10% creates a new one requiring more than 16 GB, and if my hypothesis is true you should start having problems even though the slice count stays below 2048.
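A minimal sketch of the arithmetic behind the table, in Python; the 8 bytes per voxel and the 2048 / 16 GB limits are the assumptions above, not anything Slicer reports:

```python
# Sketch of the table's arithmetic: VRAM estimate per volume, plus the two
# conditions of the hypothesis (max dimension <= 2048 and VRAM <= 16 GiB).
GL_MAX_3D_TEXTURE_SIZE = 2048  # per-axis 3D texture limit discussed above
VRAM_LIMIT_GIB = 16            # Hao_Li's GPU VRAM
BYTES_PER_VOXEL = 8            # assumption used for the table

volumes = [
    (2367, 1784, 942),
    (1950, 1830, 1600),
    (885, 1300, 1230),
    (1268, 1454, 1064),
]

for dims in volumes:
    x, y, z = dims
    vram_gib = x * y * z * BYTES_PER_VOXEL / 1024**3
    texture_ok = max(dims) <= GL_MAX_3D_TEXTURE_SIZE
    vram_ok = vram_gib <= VRAM_LIMIT_GIB
    print(f"{x}x{y}x{z}: {vram_gib:5.1f} GiB  texture ok: {texture_ok}  VRAM ok: {vram_ok}")
```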

Sorry, I never answered this question. “original volume” is just the name that appears on my side when the ircabd1.12 data (the PATIENT_DICOM folder) is loaded, nothing else.
Regarding your test: it's interesting that it already works with more than 2048 slices.
But I wanted to ask whether you were actually visualizing the 3D of the first upscaled volume (and not perhaps the original one): 6400x6400x26837 would need something like 8 TB of VRAM!
Might I ask how much RAM your PC has?

OK, I’ll test with that on my work machine tomorrow.

I’m running with 64 GB of RAM and 12 GB of VRAM on the 6700XT.

I visualized both volumes, original and cropped, with the Volume Rendering cache emptied. I don't think that the cropped volumes have that many real slices. The Volumes module does not report that magnitude of Z dimension for the cropped volumes, only for the original one. Moreover, if I export a cropped volume as a DICOM series, it has the same number of files as the Z dimension of the original volume. I don't know how we should understand the 'Spacing scale' parameter of Crop Volume.

If you find that the actual volume size is not the same as what Volumes module displays then it is a bug. Check again that you selected the correct volume node etc. and if you confirm that the reported size is wrong then please provide a list of steps to reproduce.

Output volume spacing = input volume spacing * spacing scale
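Since the physical extent stays the same, the output dimensions grow as input dimensions / spacing scale, so the voxel count grows as 1/scale³. A quick illustration (the input dimensions are made up):

```python
# Voxel count grows with the inverse cube of the spacing scale.
input_dims = (512, 512, 320)  # illustrative input volume dimensions

for scale in (0.9, 0.5, 0.24, 0.1):
    out_dims = tuple(round(d / scale) for d in input_dims)
    print(f"scale {scale}: {out_dims}, ~{(1 / scale) ** 3:.0f}x more voxels")
```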

It seems there’s a bug.

Try 1

  • Start Slicer
  • Load CTA-cardio
  • Go to Crop Volume
  • Generate the ROI with the ‘Fix’ button
  • Hide the ROI
  • Set Spacing Scale to 0.1
  • Apply

It completes quickly with the result below.

In Volumes module, the cropped volume is similar to the original.
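The same steps can be run from the Slicer Python console for a reproducible report. This is only a sketch: the sample name and Crop Volume method names are taken from my reading of the script repository and may differ between Slicer versions:

```python
import SampleData
import slicer

# Load the CTA-cardio sample (sample name is an assumption)
volumeNode = SampleData.downloadSample("CTACardio")

# Create a ROI and a Crop Volume parameters node
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode")
roiNode.SetDisplayVisibility(False)  # 'Hide the ROI'

cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(volumeNode.GetID())
cropParams.SetROINodeID(roiNode.GetID())
cropParams.SetSpacingScalingConst(0.1)  # the problematic value

cropLogic = slicer.modules.cropvolume.logic()
cropLogic.FitROIToInputVolume(cropParams)  # equivalent of the 'Fix' button
cropLogic.Apply(cropParams)

cropped = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
print("Output dimensions:", cropped.GetImageData().GetDimensions())
```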

Try 2

  • Do as above
  • But set Spacing Scale to 0.9

It takes some time to complete, and the Volumes module shows a cropped volume different from the original.

Afterwards, in the same session, if we set Spacing Scale to 0.1, it blows up my laptop.

There’s a problem if the first Spacing Scale is too low.

Can confirm that :slight_smile:

Spacing scale of 0.1 is expected to “blow up” most computers, as it means making the volume 1000x larger (10x more voxels along each of the three axes, so 10³ = 1000x overall)! Windows would probably just terminate the offending application, but on Linux your computer may need a hard reset.

We could maybe display a warning when somebody attempts such an extreme operation, because a scaling value of 0.1 may look innocent if you don't think about what it does and don't look at the output volume size.

Yes, that was my case.

However, if we start at 0.1, it completes without rescaling anything. This may need a fix too. The warning is welcome, but if the operation does not rescale despite the warning, it's misleading.

With Spacing Scale at 0.24 using your reference dataset, the resulting volume is 2133 x 2133 x 1083, so > 2048 in 2 dimensions.

Volume rendering with this resampled volume works, and is very fluid at normal quality.
Using adaptive, it's a little laggy, but still good at the default 8 fps.
At maximum quality, interaction is of course at minimum speed, 2-3 seconds between rotations.


Also, at that resolution it will exceed the VRAM of your GPU, so we also get a check that using shared memory works on Linux.

I checked that; it used between 1.7 and 2.2 GB of VRAM. Or it's reported incorrectly by CoreCtrl.
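For what it's worth, CoreCtrl's number can be cross-checked against the amdgpu sysfs counters; the card path below is an assumption and may differ on your system:

```python
# Read VRAM usage directly from the amdgpu driver (values are in bytes).
def vram_gib(card="card0"):
    base = f"/sys/class/drm/{card}/device"
    with open(f"{base}/mem_info_vram_used") as f:
        used = int(f.read())
    with open(f"{base}/mem_info_vram_total") as f:
        total = int(f.read())
    return used / 1024**3, total / 1024**3

used, total = vram_gib()
print(f"VRAM: {used:.1f} / {total:.1f} GiB")
```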

@Hao_Li, we need your crosscheck on Windows if possible, as per this post.

One last note.

With Spacing Scale at 0.18 using your reference dataset, the resulting volume is 2844 x 2844 x 1444, and Volume Rendering no longer works at any interactive speed.


How much VRAM was used, according to your monitoring system?

1.2 GB, i.e., nothing from Volume Rendering.

Hi! I’m on vacation, will test in a week.


@davide445
Hi! I’ve just checked.
1268x1454x1064 Original
I cropped using 0.9 spacing scale, and the new volume became 1408x1615x1182. The result was viewable but ended up with wrong stitching. I wonder if that confirms your thoughts. Let me know if you need anything else tested; I'll switch back to the 3090 when we finish.

Please try decreasing the spacing scale until one of the dimensions exceeds 2048; if the hypothesis is true, this should result in no rendering.
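For that 1268x1454x1064 volume the threshold is easy to estimate, since the output dimensions go as input dimension / scale:

```python
# Largest spacing scale at which an output dimension exceeds 2048.
dims = (1268, 1454, 1064)
limit = 2048
threshold = max(dims) / limit  # the largest dimension crosses the limit first
print(f"Any scale below ~{threshold:.2f} should push one dimension past {limit}")
```

That gives roughly 0.71, so a spacing scale of 0.7 or lower should be enough to test it.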