I’m trying to create a method that returns the graphics card information. To do this, I’m importing “from vtk.qt.QVTKRenderWindowInteractor import QVTKRenderWindowInteractor”.
import vtk
from vtk.qt.QVTKRenderWindowInteractor import QVTKRenderWindowInteractor

def get_graphics_card_info():
    # Create a hidden QVTKRenderWindowInteractor
    qvtk_interactor = QVTKRenderWindowInteractor()
    qvtk_interactor.Initialize()
    # Create an off-screen vtkRenderWindow and attach the interactor
    render_window = vtk.vtkRenderWindow()
    render_window.SetOffScreenRendering(1)
    render_window.SetInteractor(qvtk_interactor)
    # Query the OpenGL capabilities string, which includes the graphics card info
    info = render_window.ReportCapabilities()
    # Clean up the window and the interactor
    render_window.Finalize()
    qvtk_interactor.TerminateApp()
    return info
I get this error:
Traceback (most recent call last):
File "C:---\AppData\Local\slicer.org\Slicer 5.6.1\bin\Lib\site-packages\vtkmodules\qt\QVTKRenderWindowInteractor.py", line 80, in <module>
import PySide6.QtCore
ModuleNotFoundError: No module named 'PySide6'
It appears that Slicer cannot find that library, even though, as far as I know, it should be included in Slicer.
I tried the import "from vtkmodules.qt.QVTKRenderWindowInteractor import QVTKRenderWindowInteractor" in Slicer's Python interactor, and it raises the same error. This indicates that the Python environment used by Slicer cannot find any of the Python bindings for Qt (PyQt4, PyQt5, PyQt6, PySide, PySide2, or PySide6).
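Slicer does not ship any of those Qt bindings (it exposes Qt through PythonQt instead), so QVTKRenderWindowInteractor cannot be imported there. The Qt interactor is not actually needed for this, though: a plain off-screen vtkRenderWindow can report the same information, as long as an OpenGL context exists when ReportCapabilities() is called. A minimal sketch using only standard VTK calls (the function name is mine):

import vtk

def get_graphics_card_info():
    # Off-screen render window; no Qt interactor is needed
    render_window = vtk.vtkRenderWindow()
    render_window.SetOffScreenRendering(1)
    # Render once so an OpenGL context is actually created; without this,
    # ReportCapabilities() can fail with errors such as 'no device context'
    render_window.Render()
    info = render_window.ReportCapabilities()
    render_window.Finalize()
    return info

Inside Slicer, an existing 3D view can also be queried directly, e.g. slicer.app.layoutManager().threeDWidget(0).threeDView().renderWindow().ReportCapabilities().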
Thanks, that seems like the correct solution. However, for some reason, when I try it, it reports 'no device context'. I'll keep researching until I make it work.
This doesn't seem to report the GPU memory size. Is there a way to get that as well? It could be a useful value to report to the user when they try to render something larger than the available GPU memory (currently only an empty 3D render window is displayed).
This call only gives you OpenGL info, but platform-specific and device-specific calls could be added to get the GPU's nominal memory size. Of course, that is only helpful if other applications aren't already using that memory for something else, so typically the most reliable approach is to try allocating the memory and report if the allocation fails. One possible feature would be to determine the largest texture size available and resize the volume to fit in it, as in the sketch below.
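For the largest-texture idea, the relevant limit (GL_MAX_3D_TEXTURE_SIZE) can be read once a context exists. A sketch, assuming PyOpenGL is available in Slicer's Python environment (it is not bundled by default, but slicer.util.pip_install('PyOpenGL') can install it); max_volume_texture_edge is a hypothetical helper name:

import vtk
from OpenGL.GL import glGetIntegerv, GL_MAX_3D_TEXTURE_SIZE

def max_volume_texture_edge():
    # Off-screen context to query against
    render_window = vtk.vtkRenderWindow()
    render_window.SetOffScreenRendering(1)
    render_window.Render()       # force creation of an OpenGL context
    render_window.MakeCurrent()  # make it current for the GL query below
    # Maximum edge length (in voxels) of a 3D texture the driver accepts;
    # a volume exceeding this in any dimension must be downsampled or split
    max_edge = int(glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE))
    render_window.Finalize()
    return max_edge

Note that this bounds a single texture's dimensions, not the memory it fits in, so an allocation can still fail well below this limit.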
On Linux, glxinfo returns both the maximum and free memory like this, though I am not sure whether that is specific to Nvidia GPUs or not (a programmatic version of the same query is sketched after the output).
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 40960 MB
Total available memory: 40960 MB
Currently available dedicated video memory: 37702 MB
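Those lines come from the GL_NVX_gpu_memory_info extension which, as the NVX prefix suggests, is NVIDIA-specific, so it should not be expected on other vendors' drivers. The same values can be read programmatically; a sketch in the same spirit as the one above, again assuming PyOpenGL, with nvidia_gpu_memory_mb as a hypothetical helper:

import vtk
from OpenGL.GL import glGetIntegerv
from OpenGL.GL.NVX.gpu_memory_info import (
    GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX,
    GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX,
)
from OpenGL.error import GLError

def nvidia_gpu_memory_mb():
    # Off-screen context, made current for the GL queries below
    render_window = vtk.vtkRenderWindow()
    render_window.SetOffScreenRendering(1)
    render_window.Render()
    render_window.MakeCurrent()
    try:
        # The extension reports sizes in kilobytes; convert to MB
        dedicated_kb = int(glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX))
        free_kb = int(glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX))
        return dedicated_kb // 1024, free_kb // 1024
    except GLError:
        # glGetIntegerv raises GL_INVALID_ENUM on drivers without the extension
        return None
    finally:
        render_window.Finalize()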