How to save the volume pixel data in the display area?

Hi all, I am new to this open-source software. Here is what I want to do.

1. Open a volume. In the Volume Rendering module we see an opaque cube (a 300x300x400-voxel volume).
   [screenshot]

2. Apply various filters to make the image inside the cube clear.
   [screenshot]

3. Now we need to save the image in the display area, i.e., the processed image. However, after clicking the Save button, the .vtk file still contains the raw data.
   [screenshot]

I think the processed image in the display area is held somewhere in RAM. So, is there any way to save the new voxel values instead of the sequence of opacity filters that we applied to change the image (scene)?
Using slicer.util in Python, I can use the getNode and arrayFromVolume methods to get the raw data (a three-dimensional numpy array). Sadly, I haven’t found any existing interface to access the display-area data.
Alternatively, could I access the graphics card memory and save that information in the same .vtk format typical for 3D images? But I would prefer to save the voxel value array directly if possible.
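For reference, here is roughly how I get the raw data (the node name is just a placeholder):

```python
import slicer

# Get the raw voxel data of a loaded volume as a 3D numpy array
volumeNode = slicer.util.getNode("MyVolume")  # placeholder node name
voxels = slicer.util.arrayFromVolume(volumeNode)
print(voxels.shape)  # e.g. (400, 300, 300), in k,j,i (slice, row, column) order
```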

I will be very grateful if anyone can give me any possible advice :grinning:

Do you just need an image of it for display? If so, you can use the ScreenCapture module to get images or a movie of the rendering.

Actually no; what I need is the new voxel values after the mapping.

The original voxels are never converted, only the sampled values along the viewing ray. Samples can be more or less dense than the original spacing, rays are usually terminated before crossing the whole volume, and the transformed samples do not need huge extra storage, so generally you would not want to transform each original voxel value.

If you want to do it anyway, then you can compute lookup tables from the scalar opacity, color, gradient opacity transfer functions (available in the volume property node) and apply the lookup tables using numpy (np.take).
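A minimal sketch of that approach could look like this (the node name and bin count are arbitrary):

```python
import numpy as np
import slicer

volumeNode = slicer.util.getNode("MyVolume")  # placeholder node name
voxels = slicer.util.arrayFromVolume(volumeNode)

# Get the transfer functions from the volume rendering display node
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.GetFirstVolumeRenderingDisplayNode(volumeNode)
volumeProperty = displayNode.GetVolumePropertyNode().GetVolumeProperty()
opacityTF = volumeProperty.GetScalarOpacity()      # vtkPiecewiseFunction
colorTF = volumeProperty.GetRGBTransferFunction()  # vtkColorTransferFunction

# Discretize the transfer functions into lookup tables
numBins = 1024
lo, hi = float(voxels.min()), float(voxels.max())
binCenters = np.linspace(lo, hi, numBins)
opacityLut = np.array([opacityTF.GetValue(v) for v in binCenters])
colorLut = np.array([colorTF.GetColor(v) for v in binCenters])  # numBins x 3

# Map each voxel intensity to a bin index, then apply the LUTs with np.take
indices = np.clip(((voxels - lo) / (hi - lo) * (numBins - 1)).astype(int),
                  0, numBins - 1)
opacityVolume = np.take(opacityLut, indices)    # same shape as voxels
rgbVolume = np.take(colorLut, indices, axis=0)  # voxels.shape + (3,)
```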


Thank you very much for your ideas.
Just to make sure: what I am interested in is the post-processed volume (and not necessarily the projected/displayed 2D image as seen on the screen).
I may have said that what I need is the data in the display area, but I meant the whole 3D volume, not the 2D projection.

No post-processing is performed on the volume, as it would slow down the volume rendering and/or decrease the rendered image quality. All the transfer functions are applied to the resampled voxels on the fly during raycasting.

Thank you very much for your reply again.
So, if I want to save the post-processed image data, I have to compute the lookup tables and apply them to the volume data. Is that right?

I have seen the volume property node in the scene .mrml file. What exactly do the lookup tables computed from the transfer functions mean? In other words, how do I compute the lookup tables?


The transfer functions simply assign an opacity or an RGB triplet to a voxel intensity value; or an opacity value to a voxel gradient value. They can be internally implemented as piecewise linear functions or discretized lookup tables. The VTK textbook describes this in great detail.
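As a toy illustration (the control points below are made up):

```python
import vtk

# A scalar opacity transfer function maps voxel intensity -> opacity
scalarOpacity = vtk.vtkPiecewiseFunction()
scalarOpacity.AddPoint(0.0, 0.0)     # intensity 0    -> fully transparent
scalarOpacity.AddPoint(500.0, 0.2)   # intensity 500  -> slightly opaque
scalarOpacity.AddPoint(1000.0, 1.0)  # intensity 1000 -> fully opaque

# Values between control points are interpolated piecewise-linearly
print(scalarOpacity.GetValue(750.0))  # 0.6
```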

Thanks again, Andras.
I have downloaded and read the textbook, and I found the theory much richer than the practice. Is there any programming-related documentation? I have also looked at some official documentation and found something:
[screenshot]
But there is no detail.

Why would you like to apply these transfer functions? If the goal is rendering, such preprocessing would increase memory requirements, slow down the rendering, and potentially reduce rendering quality (because you would need to interpolate RGBA voxels instead of scalar values). If the goal is further processing, then you probably don’t want to apply color mapping.

Can you describe your end goal?

Thanks Andras.
Please let me describe the task in detail.
The voxel data from the visualization is the first step; let me call that data the new volume.
The new volume is created based on the transfer functions in volume rendering.
Then we create a mask: mask = threshold(new volume).
In the end, we create the final volume: final volume = mask * original volume.
These final 3D volumes can be manipulated for functional and molecular imaging.
So the data in the volume rendering is fundamental to us. According to your reply, there is no stored visualization data, because all the transfer functions are applied during raycasting on the resampled voxels on the fly.
That is why I want to do the volume rendering ourselves, outside the renderer; or maybe you know a more convenient way to do it?
Thank you again for your continued attention to this issue.

You can threshold the volume, apply a mask, rescale, or apply any arithmetic operation simply by getting the voxels as a numpy array and modifying them. See examples in the script repository. You can then display the resulting volume using volume rendering.
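For example, a simple threshold-and-mask operation could look like this (the node names and threshold value are placeholders):

```python
import slicer

volumeNode = slicer.util.getNode("MyVolume")  # placeholder node name
voxels = slicer.util.arrayFromVolume(volumeNode)

# Threshold to create a binary mask, then apply it to the original voxels
mask = voxels > 100  # example threshold value
maskedVoxels = voxels * mask

# Write the result into a new volume node and update its display
outputNode = slicer.modules.volumes.logic().CloneVolume(
    slicer.mrmlScene, volumeNode, "MaskedVolume")
slicer.util.updateVolumeFromArray(outputNode, maskedVoxels)
```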

Thanks Andras.
I have looked through the script repository carefully and searched for examples, but I am stuck in this situation now.
I have successfully gotten the raw voxel data as a 400x300x300 numpy array:
[screenshot]
Then I defined the transfer functions, including color, scalar opacity, and gradient opacity, and set them on the VolumeProperty node:
[screenshot]
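In code, what I did looks roughly like this (the control points here are placeholders, not my exact values):

```python
import vtk
import slicer

# Transfer functions (control-point values are placeholders)
colorTF = vtk.vtkColorTransferFunction()
colorTF.AddRGBPoint(0.0, 0.0, 0.0, 0.0)
colorTF.AddRGBPoint(255.0, 1.0, 1.0, 1.0)

scalarOpacityTF = vtk.vtkPiecewiseFunction()
scalarOpacityTF.AddPoint(0.0, 0.0)
scalarOpacityTF.AddPoint(255.0, 1.0)

gradientOpacityTF = vtk.vtkPiecewiseFunction()
gradientOpacityTF.AddPoint(0.0, 1.0)
gradientOpacityTF.AddPoint(255.0, 1.0)

# Store them in a volume property node
propertyNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLVolumePropertyNode")
propertyNode.SetColor(colorTF)
propertyNode.SetScalarOpacity(scalarOpacityTF)
propertyNode.SetGradientOpacity(gradientOpacityTF)
```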
Now, how can I apply this VolumeProperty node to the VolumeNode in order to change the voxel values?

Setting these transfer functions in the VolumeProperty of a volume property node will not change the voxel values. It will only change the pixels of the rendered 2D image.

Modifying voxels should not be necessary. If you feel that you need to change all the voxel values of the 3D volume then your implementation is most likely not optimal.

However, if you want to do this anyway, then you can use vtkImageMapToColors to map scalar values to RGBA values using the transfer functions. The filter takes only a single transfer function, so you need to fuse the scalar opacity and color transfer functions into one function. If you want to apply gradient opacity, then you need to compute the image gradient and combine it with the existing RGBA volume (you can probably do that by getting the RGBA volume and the gradient-opacity-mapped volume as numpy arrays and applying element-wise multiplication).
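A rough sketch of this approach, leaving gradient opacity out (the node name and table size are arbitrary):

```python
import vtk
import slicer

volumeNode = slicer.util.getNode("MyVolume")  # placeholder node name
vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.GetFirstVolumeRenderingDisplayNode(volumeNode)
volumeProperty = displayNode.GetVolumePropertyNode().GetVolumeProperty()
colorTF = volumeProperty.GetRGBTransferFunction()
opacityTF = volumeProperty.GetScalarOpacity()

# Fuse the color and scalar opacity transfer functions into one RGBA lookup table
numColors = 256
scalarRange = volumeNode.GetImageData().GetScalarRange()
lut = vtk.vtkLookupTable()
lut.SetNumberOfTableValues(numColors)
lut.SetTableRange(scalarRange[0], scalarRange[1])
for i in range(numColors):
    value = scalarRange[0] + (scalarRange[1] - scalarRange[0]) * i / (numColors - 1)
    r, g, b = colorTF.GetColor(value)
    lut.SetTableValue(i, r, g, b, opacityTF.GetValue(value))

# Map the scalar volume to an RGBA volume
mapToColors = vtk.vtkImageMapToColors()
mapToColors.SetInputData(volumeNode.GetImageData())
mapToColors.SetLookupTable(lut)
mapToColors.SetOutputFormatToRGBA()
mapToColors.Update()
rgbaImage = mapToColors.GetOutput()  # vtkImageData with 4 components per voxel
```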

Thanks Andras.
I have come up with an alternative possible method to save the visualization data using the ScreenCapture module:

  1. Choose a dimension to slice (for example, axial/XZ) and start with the first slice.

  2. Record (or screenshot) each slice from the first to the last along that dimension, and stack the images in sequence to re-create the volume.

Do you think this is feasible, or maybe easier than coding the transfer functions?

The Screen Capture module is good for getting images for publications or presentations.

If you need colored slices for volumetric 3D printing then you can use the SlicerFab extension, which uses the current volume rendering transfer functions.

We cannot help you very effectively here without knowing what your end goal is (what you want to ultimately achieve).

Thanks Andras.
What we actually need is the oxygen saturation (SO2) of vessels.
We need the rendered data as a clean mask, to element-wise multiply with the raw volume, in order to get a cleaned volume without the unnecessary parts inside, so that we can observe the oxygen saturation (SO2).

I have checked this extension and used it to slice my rendered volume.
If I load all these slices into Matlab and stack them in order, does this mean I get the volume-rendered voxel array data, since SlicerFab uses the volume rendering transfer functions?

Don’t use SlicerFab if your goal is to quantify the resulting images, because SlicerFab uses the output of volume rendering, and that introduces nonlinearities such as transfer functions and lighting. For quantitative imaging it is better to work with the raw pixel values, for example using numpy.

Thanks Steve.
Actually, we do need the output of volume rendering :joy:. We need the pixel values after the nonlinearities; that is why I use SlicerFab to slice from bottom to top along the Z axis and collect all the slices.
I set the layer thickness parameter according to our volume’s image spacing and clicked the Generate bitmaps button:
[screenshot]
Then I picked one of the slices to show in Matlab:
[screenshot]
But I found the dimensions are not correct. The picture above is about 1000x1000 pixels, which does not match the X and Y printer resolutions of 600 DPI and 300 DPI.
So, what do the printer resolution parameters mean, and how can I match the dimensions to the volume data? In other words, how can I save just the volume slice without any background?