3D view screen capture

Operating system: Win 10
Slicer version: 4.11

I captured images of the 3D view (using the ScreenCapture module) and transferred them to a library that uses OpenCL to perform image processing.
Now I want to improve the execution time of the module.
Is it possible to get the screen image of the 3D view directly on the GPU?
If so, that would save the time needed to transfer the image from the CPU to the GPU.

Slicer uses the VTK library for rendering. You can access both the 3D data (meshes, textures, etc.) and the rendering on the GPU via VTK.

I have gotten the screen image of the 3D view via vtkRenderWindow, like this:

// Get the render window of the first 3D view
qMRMLThreeDView* threeDView = qSlicerApplication::application()->layoutManager()
  ->threeDWidget(0)->threeDView();
vtkRenderWindow* renderWindow = threeDView->renderWindow();
int* dims = renderWindow->GetSize();
// Read the front buffer back into a CPU-side RGB array (caller must delete[] it)
unsigned char* pixelData = renderWindow->GetPixelData(0, 0, dims[0]-1, dims[1]-1, 1, 0);

There are two questions:

  1. Does GetPixelData() get the image from the GPU?
  2. If yes, how can I get the address of the image on the GPU?

Would you like to implement augmented reality? What hardware do you use, and with what interface?

For example, in Unity, a camera can be assigned a render texture to display what it captures.
https://docs.unity3d.com/Manual/class-RenderTexture.html

I could pass the id/name of the texture to the library that uses OpenCL. The library would then get the displayed image via that id/name and perform the image processing.
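
For reference, here is a minimal sketch of what the OpenCL side of such a scheme could look like. This is not something Slicer provides; it assumes an OpenCL context created with CL-GL sharing enabled (the cl_khr_gl_sharing extension) on the same device that renders the view, and glTextureId is a placeholder for the id received from the rendering side.

// Minimal sketch of the OpenCL side of GL-CL texture sharing (see assumptions above)
#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/gl.h>   // for GL_TEXTURE_2D (on Windows, include <windows.h> first)

cl_mem WrapSharedTexture(cl_context context, GLuint glTextureId)
{
  cl_int err = CL_SUCCESS;
  // Create an OpenCL image that aliases the GL texture; no copy is made.
  cl_mem clImage = clCreateFromGLTexture(context, CL_MEM_READ_ONLY,
    GL_TEXTURE_2D, /*miplevel=*/0, glTextureId, &err);
  // Before a kernel reads clImage, call glFinish() on the GL side and then
  // clEnqueueAcquireGLObjects(); release it with clEnqueueReleaseGLObjects().
  return (err == CL_SUCCESS) ? clImage : nullptr;
}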

Can this method be implemented in 3D Slicer?

It is possible to share buffers between OpenGL and OpenCL but we don’t provide any support for that in Slicer. It would be best to do it at the VTK level and then it could be exposed in Slicer. If you figure out a good way to do this please post it for others to learn from.
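
To illustrate what doing it at the VTK level might look like: one possible (untested) approach is to copy the rendered color buffer into an OpenGL texture right after the render, so the image never leaves the GPU; the texture id could then be shared with OpenCL as sketched above. The function name here is hypothetical.

// Untested sketch: keep the rendered image on the GPU by copying the color
// buffer into a GL texture; error handling and lifetime management omitted.
#include <vtkRenderWindow.h>
#include <vtk_glew.h>   // GL entry points, as used by VTK's OpenGL2 backend

GLuint CopyRenderWindowToTexture(vtkRenderWindow* renderWindow)
{
  int* dims = renderWindow->GetSize();
  renderWindow->MakeCurrent();   // make the view's GL context current

  GLuint tex = 0;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, dims[0], dims[1], 0,
               GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
  // GPU-to-GPU copy from the current read buffer into the texture.
  glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, dims[0], dims[1]);
  glBindTexture(GL_TEXTURE_2D, 0);
  return tex;
}

A glFinish() after the copy, before OpenCL acquires the texture, would be needed to make sure the copy has completed.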

@liku What would you like to achieve? Slicer has been around for so long, and is used by so many people, that most likely somebody has already done something similar that you can build on, and we can help you find it.

For example, if you just want to use the rendering as an augmented reality overlay on a HoloLens, a tablet, or another AR device, then there are several solutions for that, too (old and current ones, and some that will become available soon). If you want to process the rendered image, then it is probably better to use custom shaders than to post-process the 2D rendering; see the sketch below.
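
As a rough sketch of the custom-shader route: VTK's OpenGL2 mappers let you splice GLSL snippets into the shaders they generate via AddShaderReplacement. The hook point and the green tint below are only illustrative placeholders for real processing.

// Sketch: splice a GLSL snippet into the fragment shader that VTK generates
// for a mapper, so the processing happens during rendering itself.
#include <vtkOpenGLPolyDataMapper.h>
#include <vtkShader.h>

void AddExampleShaderTweak(vtkOpenGLPolyDataMapper* mapper)
{
  mapper->AddShaderReplacement(
    vtkShader::Fragment,
    "//VTK::Light::Impl",    // hook point in VTK's fragment shader template
    true,                    // apply before VTK's standard replacements
    "//VTK::Light::Impl\n"   // keep VTK's own lighting code...
    "  fragOutput0.rgb *= vec3(0.8, 1.0, 0.8);\n", // ...then tint (placeholder)
    false);                  // do not apply to all occurrences
}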

Where can I learn more about 3D Slicer and AR?

There have been dozens of Slicer-based AR projects over the years. Most of them are public, so you can find out a lot by searching the web, but there are also current ones, and others still in the infrastructure-development phase, for which it may be hard to find information online. Can you narrow down a bit what you are interested in?

What device are you planning to use: HoloLens, tablet, half-silvered mirror, video overlay on endoscope/microscope, …?
(the potentially usable approaches mostly depend on this)

What kind of procedure do you plan to use it for: burr hole placement, vertebra level localization, etc., or a higher-accuracy application, such as tumor resection, needle guidance, or pedicle screw placement? (So that we get an idea about accuracy requirements and applicable calibration/registration procedures.)

We want to superimpose the video from the endoscope/microscope onto the 3D model on the computer screen.

A HoloLens is very expensive. At present, we want to implement a simple form of AR.

We are more concerned about tumor resection. Using a video capture card, the endoscope/microscope video is displayed on a computer, and 3D Slicer runs on the same computer.

If the captured video could be displayed directly in Slicer's 3D window, that would be great.

We tested and found that Plus can help us display the video in Slicer's 2D view, but the 3D model cannot be superimposed on it.

That sounds like an interesting project and a worthy goal. Of course, it can be very difficult to know the exact geometry of the endoscope camera relative to the patient and to the 3D model made from pre-procedure imaging. But leaving that aside, did you try the “Show in 3D” button in the slice controller widget? (It is an icon that looks like an open or closed eye.) If you have got the endoscope video in a 2D view, this will let you view it in the 3D view as well. Then you can focus on getting them to line up accurately.
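
If you want to automate that step, here is a minimal sketch of toggling the same option from C++ (the view name "Red" is just an example):

// Programmatic equivalent of clicking the "Show in 3D" (eye) button
#include <qSlicerApplication.h>
#include <qSlicerLayoutManager.h>
#include <qMRMLSliceWidget.h>
#include <qMRMLSliceControllerWidget.h>

void ShowSliceInThreeD()
{
  qMRMLSliceWidget* sliceWidget =
    qSlicerApplication::application()->layoutManager()->sliceWidget("Red");
  if (sliceWidget)
    {
    sliceWidget->sliceController()->setSliceVisible(true);
    }
}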

Thank you very much, we will try your method.

At present, we use OBS to display the captured video and a capture of Slicer's 3D window at the same time.

We also use OBS to chroma-key the background of the Slicer 3D view to make it transparent.