Hi everyone, reopening the thread in 2025
Did we get any update on this issue, or is there any way to reconstruct multiple resolutions so the texture fits in the renderer?
On Windows with a Quadro M6000 (24 GB VRAM) I can easily load a 13 GB dataset with the Imaris software, but in 3D Slicer the GPU VTK renderer fails with the error “[VTK] Invalid texture dimensions [1776, 1804, 2221]”.
I am going to try using Gemini, Claude, and ChatGPT to see whether we can solve the problem by creating an extension that renders at multiple resolutions depending on the zoom factor.
I think Imaris uses an image pyramid, because it first converts the data to an Imaris file format, and if I open the converted image in FIJI, I can select between different sizes of the image stack.
On second look, that patches things only for macOS. @lassoan, is there a specific reason why the partitioning of the textures is done only for macOS? Cards on other platforms can suffer from the same problem. Is it not possible to detect the maximum texture size of the driver being used and determine whether texture partitioning is necessary?
Use Crop Volume on your original data with a spacing scaling factor of 1.08; the resulting volume should render fine. Slicer does not support multiresolution rendering, so for now you will need to manually reduce the data to a size that works with your GPU. A scripted version of this step is sketched below.
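A minimal sketch of doing this from the Python console, assuming a loaded volume named "MyVolume" (a placeholder) and the Crop Volume parameter node API of a recent Slicer version:

```python
# Downsample a volume with Crop Volume so it fits in a 3D texture.
inputVolume = slicer.util.getNode("MyVolume")  # placeholder node name

cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(inputVolume.GetID())
cropParams.SetSpacingScalingConst(1.08)  # >1 coarsens spacing, shrinking voxel dimensions
cropParams.SetVoxelBased(False)          # interpolated mode, required for spacing scaling

# Crop Volume needs an ROI; fit it to the whole volume so only resampling happens.
roiNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsROINode")
cropParams.SetROINodeID(roiNode.GetID())
slicer.modules.cropvolume.logic().FitROIToInputVolume(cropParams)

slicer.modules.cropvolume.logic().Apply(cropParams)
outputVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
```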
It is not trivial to get the maximum 3D texture size via the VTK API, and on Windows you would get 16384 on most GPUs anyway.
If you want to check the maximum size, you can copy-paste this into the Python console in Slicer:
```python
# Install PyOpenGL into Slicer's Python environment (only needed once)
pip_install('PyOpenGL')

from OpenGL.GL import *

# Rendering once makes the render window's OpenGL context current,
# so the glGet query below has a context to run against.
renWin = vtk.vtkRenderWindow()
renWin.Render()
print(glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))
```
You can also use the Crop Volume module to crop or resample your volume to find the largest size your GPU can cope with.
Setting the maximum texture size with slicer.vtkMRMLVolumeRenderingDisplayableManager.SetMaximum3DTextureSize(2048) should work, but it can slow down the rendering tremendously. Because of this slowdown, the GPU may not respond within the allowed “TDR delay” time period, so the operating system shuts down the application. You can confirm this by checking the Windows Event Viewer: if you see TDR errors in the log and you want to prevent the “crash”, you can increase the TDR delay value in the Windows registry.
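For reference, roughly how this looks in the Python console (the 2048 value is just the example from above; pick whatever works for your GPU):

```python
# Run in the Slicer Python console before enabling GPU volume rendering;
# the limit is used by the volume rendering displayable manager.
slicer.vtkMRMLVolumeRenderingDisplayableManager.SetMaximum3DTextureSize(2048)

# If rendering then triggers TDR crashes, the timeout can be raised via the
# TdrDelay DWORD (in seconds) under
# HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers (reboot required).
```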
@muratmaga Thanks, that seems to solve the issue, but now I am hitting the TDR delay.
Anyway, even with a 2x downscale, the rendering is really slow.
I guess the Quadro M6000 is not that powerful (which seems to be the case when looking at comparisons: NVIDIA Quadro M6000 Specs | TechPowerUp GPU Database).
@lassoan Thanks a lot.
I’m indeed probably getting TDR delay issues. Even if I increase the delay value, it will still not be workable; it is really too slow.
I really think it would be great to develop, for future versions, a new rendering module that can use or generate an image pyramid. If the volume rendering module were Python I could have given it a try, but it is a loadable module, and C++ is a million years away from what I can do.
You can implement the pyramid rendering in Python. You don’t need to use any C++ at all!
A very simple implementation is the following (with just two pyramid levels; a code sketch follows after these steps):
1. Create a new Slicer Python scripted module where the user can select a volume and enable/disable volume rendering.
2. When the user enables volume rendering, create a volume that is downscaled by 2x along each axis (1/8 of the voxels in total), show that volume using volume rendering, and add an observer to the camera node of each 3D view where this volume is displayed.
3. When you detect that the camera in a certain view is highly zoomed in on the volume, crop the original (full-resolution) volume to the visible region and show that cropped volume using volume rendering (and hide the low-resolution volume in that view).
4. Keep observing the camera and adjust the cropped volume as needed. While the user moves the camera, you can temporarily (while the new cropped volume is being computed) show the half-resolution volume.
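A rough, untested sketch of this two-level scheme; the node name, the Resample Scalar Volume CLI parameter names, and the zoom heuristic are all assumptions to adapt:

```python
import vtk

# --- Level 1: make a 2x-downscaled copy (1/8 of the voxels) -----------------
inputVolume = slicer.util.getNode("MyHighResVolume")  # placeholder node name

lowResVolume = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLScalarVolumeNode", inputVolume.GetName() + "_lowres")
spacing = [2.0 * s for s in inputVolume.GetSpacing()]
params = {
    "InputVolume": inputVolume.GetID(),
    "OutputVolume": lowResVolume.GetID(),
    "outputPixelSpacing": ",".join(str(s) for s in spacing),
    "interpolationType": "linear",
}
slicer.cli.runSync(slicer.modules.resamplescalarvolume, None, params)

# Show the low-resolution copy with volume rendering.
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(lowResVolume)
displayNode.SetVisibility(True)

# --- Level 2: watch the camera and swap in a full-resolution crop -----------
viewNode = slicer.app.layoutManager().threeDWidget(0).mrmlViewNode()
cameraNode = slicer.modules.cameras.logic().GetViewActiveCameraNode(viewNode)

def onCameraModified(caller, event):
    bounds = [0.0] * 6
    inputVolume.GetRASBounds(bounds)
    diagonal = sum((bounds[2 * i + 1] - bounds[2 * i]) ** 2 for i in range(3)) ** 0.5
    # Crude zoom heuristic: camera closer than half the volume diagonal.
    if caller.GetCamera().GetDistance() < 0.5 * diagonal:
        # TODO: crop the full-resolution volume to the visible region
        # (e.g. with Crop Volume) and show it; hide lowResVolume meanwhile.
        pass

observerTag = cameraNode.AddObserver(vtk.vtkCommand.ModifiedEvent, onCameraModified)
```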
If you work with very high resolution volumes (not just a few thousand but a few tens of thousands of voxels along each axis), you can implement multi-level pyramids. At that point it starts to make sense to use xarray (maybe with dask), which makes it really easy to get an arbitrary region of a volume at arbitrary resolution. You can find a basic example of using xarray in Slicer in this OME-NGFF image importer.
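To give a flavor of the xarray approach, a hedged sketch; the Zarr store path, region bounds, and stride are made up, and the linked importer shows a real reader:

```python
import numpy as np
import xarray as xr
import dask.array as da

# "volume.zarr" is a made-up path to a chunked, on-disk copy of the dataset.
data = da.from_zarr("volume.zarr")            # lazily opened, shape (z, y, x)
vol = xr.DataArray(data, dims=("z", "y", "x"))

# Arbitrary region at arbitrary resolution: slice the region of interest,
# then stride to downsample; only the chunks actually touched are read.
region = vol.isel(z=slice(500, 900), y=slice(0, 1024), x=slice(0, 1024))
coarse = region[::4, ::4, ::4].compute()

# Hand the result to Slicer for rendering.
volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "RegionLowRes")
slicer.util.updateVolumeFromArray(volumeNode, np.ascontiguousarray(coarse.values))
```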
This is all pure Python, with some basic low-level infrastructure implemented in C++, as always.
That card is almost a decade old. If you want to see how Slicer performs on modern hardware, give MorphoCloud a try: https://morphocloud.org/
You should have no problem rendering that data on the standard g3.l instance, but to be safe you can go one size up, to g3.xl.