How to export Model Scalar data?

I’m trying to work with FreeSurfer cortical scalar mappings.
Luckily, Slicer is able to open these as a ‘FreeSurfer scalar overlay’,
which can then be applied to a model loaded from FreeSurfer.

This is what it looks like

I’m trying to see if I can export this scalar mapping, for example to a .mtl file, so I can work on it.
However, when I export the model to .obj, only an empty .mtl file comes along.

I also can’t find any place inside Slicer to interact with the scalar overlay. Where can this be found?

You can access all point scalars as numpy arrays by calling ‘arrayFromModelPointData’, and cell scalars by calling ‘arrayFromModelCellData’, and then use the full power of Python to process these arrays with VTK or any Python package: in the Python console, in a script, in a Jupyter notebook, or in your own Python scripted module.

If you must use Matlab, then you can save the data set in the VTK or VTP file format, which preserves all point and cell data (there may be other file formats that can save some of the scalars, too).
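Alternatively, once the scalars and point coordinates are available as numpy arrays, they can be written directly to a Matlab .mat file with SciPy. A minimal sketch, using placeholder arrays standing in for what arrayFromModelPoints and arrayFromModelPointData would return in Slicer:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Placeholder arrays standing in for the model's points and scalars
points = np.random.rand(100, 3)   # N x 3 point coordinates
thickness = np.random.rand(100)   # one scalar value per point

# Write both arrays into a single .mat file readable by Matlab
savemat("model_scalars.mat", {"points": points, "thickness": thickness})

# Round-trip check: note that Matlab stores 1-D arrays as 1 x N matrices
data = loadmat("model_scalars.mat")
print(data["points"].shape)     # (100, 3)
print(data["thickness"].shape)  # (1, 100)
```

The file and variable names here are arbitrary; any dictionary of numpy arrays can be saved this way.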

What kind of processing do you plan to perform? There are several Slicer modules for finding minimal paths on these surfaces (optionally driven by point scalars), and for cutting, combining, and quantifying surface patches.


Hi Andras,
Thanks a lot for the quick response! My aim is to “flatten” the scalar data into a regular image, that is, to generate a 2D projection (e.g. a Mercator projection, or something fancier).

I’m working in Python, since all I found in the literature was (built upon) Matlab toolboxes (like Brainstorm or this MatConvNet-based method).

I’m very much in the learning phase with the Python interactor for Slicer, but I’ll get back to you once I know whether it has worked.

This is called texture mapping and should be straightforward to do with VTK for such a simple shape.

You can generate texture coordinates using vtkTextureMapToSphere (there are filters for cylindrical and planar projections, too). Since your input geometry is spherical, you can simply write the generated texture coordinates (available as point data) into the point coordinates (point_x = texture_u, point_y = texture_v, point_z = 0). This flattens your sphere to a rectangle that you can visualize by rendering it as a model. You can use vtkProbeFilter as it is done in this example. Since this is a purely VTK programming task, you may ask for help on the VTK forum.
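The coordinate rewrite described above can also be sketched without VTK, using only numpy. A minimal illustration on synthetic unit-sphere points; the u/v convention here (longitude and latitude rescaled to [0, 1]) is an assumption for illustration, not vtkTextureMapToSphere’s exact formula:

```python
import numpy as np

# Synthetic points on a unit sphere (placeholder for the model's vertices)
rng = np.random.default_rng(0)
p = rng.normal(size=(1000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)

# Spherical "texture" coordinates: u from longitude, v from latitude,
# both rescaled to the [0, 1] range
u = (np.arctan2(p[:, 1], p[:, 0]) + np.pi) / (2 * np.pi)
v = (np.arcsin(p[:, 2]) + 0.5 * np.pi) / np.pi

# Flatten: write (u, v) into the point coordinates and zero out z
flat = np.column_stack([u, v, np.zeros(len(p))])
print(flat.min(axis=0), flat.max(axis=0))  # all coordinates inside [0, 1]
```

Rendering these flattened points (keeping the original mesh connectivity) gives the rectangle described above.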

Hi Andras,

Thanks for the support. I’m pretty experienced with Python and medical image analysis, even with MRML. But Slicer’s Python interaction with VTK is still quite a conundrum to me. All the functions you mention seem to lack any documentation. In arrayFromModelPointData, for example, what even is an arrayName object? Even the source code doesn’t specify. I must be missing something.

I think it’s just my lack of understanding of VTK.

I have been able to close in on the scalar data (scalars = getNode('vtkMRMLModelNode4').GetPolyData().GetPointData().GetScalars()), but this FloatArray seems not to be subscriptable, nor can I find a function to retrieve the array contents. This is something I’ll try to get at tomorrow. Would this array data be the input to arrayFromModelPointData?

Do you know if Slicer has anything built in so that this scalar data is exported with the .mtl file that comes with the mesh when exporting (for example, to Wavefront OBJ)? That functionality would probably have to perform texture mapping.

Thanks in advance!

All the functions that I mentioned are documented here. Please submit a pull request on GitHub with details and suggestions that would make the documentation clearer.

This is a small helper function for accessing additional attributes defined on the points (vtkPointData) of the VTK mesh (vtkPolyData or vtkUnstructuredGrid). arrayName is the name of the VTK data array that you want to access. You can learn about VTK data objects from the excellent VTK textbook (I would highly recommend that everyone who works in medical image computing read it a few times).

This is a VTK data object. You can read/write it from Python, but its interface is not as convenient as a numpy array’s.

This returns the attribute array of the point data that is chosen as the “active scalar”. This is probably the attribute that is currently used for visualization. It is safer to explicitly specify the name of the array you want to retrieve than to rely on which one happens to be selected as active.

Yes, the point attributes (and cell attributes, field data, etc.) are all saved when you write the mesh to file. However, what is saved, and how, depends on the file format. VTK/VTP/VTU files save everything. STL cannot save anything other than point coordinates. I think texture coordinates are saved into PLY and OBJ.

If you want to flatten the model then you don’t need to do anything like this; you can just take the texture coordinates (they are flat 2D coordinates) and write them into the point coordinates (setting the third coordinate to 0.0). You may need to remove the cells that make the sphere wrap around (you can probably identify them by their large or distorted size using the vtkMeshQuality filter and remove them using the vtkThreshold filter).
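The wrap-around cleanup can also be done directly in numpy once the flattened longitude values are available. A minimal sketch with hypothetical triangle data, flagging triangles whose longitude extent is implausibly large (meaning they straddle the -pi/+pi seam):

```python
import numpy as np

# Hypothetical flattened longitudes (in radians) for 4 triangles' vertices;
# the last triangle straddles the -pi/+pi seam, so it spans almost 2*pi
u = np.array([
    [0.1, 0.2, 0.15],
    [1.0, 1.1, 1.05],
    [-2.0, -2.1, -1.9],
    [3.1, -3.1, 3.0],   # wrap-around triangle
])

# A triangle that wraps around the seam has a very large u-extent
span = u.max(axis=1) - u.min(axis=1)
keep = span < np.pi
print(keep)  # [ True  True  True False]
```

This is the numpy analogue of the vtkMeshQuality + vtkThreshold approach described above: compute a per-cell size measure, then keep only the cells below a threshold.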

Hi Andras!

I’ve been able to solve it (I believe). Once I was able to access the coordinate and scalar values, performing a Mercator projection was easy. All points were interpolated, creating a 2D texture. With this 2D texture we can do all sorts of cool things, like using it as a CNN input or using it in external rendering software.
Thanks a lot for the support Andras!


In the figure above, 10% of the data was used to show the effect of interpolation.

For anyone in the future wanting to project FreeSurfer scalar data to a 2D image, this is how I did it:

import numpy as np
from scipy.interpolate import griddata
from PIL import Image

resolution = 1024

# Get coordinate and scalar values from the model node
model_node = getNode('vtkMRMLModelNode4')
scalars = arrayFromModelPointData(model_node, 'thickness')
coordinates = arrayFromModelPoints(model_node)

# Convert cartesian to spherical (longitude/latitude) coordinates
coordinates = np.subtract(coordinates, np.mean(coordinates, axis=0))
r = np.sqrt(np.sum(np.square(coordinates), axis=1))
xc, yc, zc = [coordinates[:, ax] for ax in range(3)]
xp = np.arctan2(yc, xc)   # longitude in [-pi, pi]
yp = np.arcsin(zc / r)    # latitude in [-pi/2, pi/2]

# Define the interpolation grid
xi = np.linspace(-np.pi, np.pi, resolution)
yi = np.linspace(-0.5 * np.pi, 0.5 * np.pi, resolution)
xi, yi = np.meshgrid(xi, yi)

# Interpolate the scattered scalar values onto the regular grid
zi = griddata((xp, yp), scalars, (xi, yi), method='linear')
zi = np.nan_to_num(zi)

# Rescale to 0-255 and save the array as an 8-bit image
# (saving the raw float array directly would produce a nearly black image)
zi = (255 * (zi - zi.min()) / (zi.max() - zi.min())).astype(np.uint8)
im = Image.fromarray(zi)
im.convert('RGB').save("filename.jpeg")
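One caveat, offered as a suggestion rather than part of the recipe above: saving to JPEG both rescales and lossily compresses the scalar values, so for quantitative downstream use (e.g. as CNN input) it may be better to keep the raw interpolated array alongside the preview image. A minimal sketch, with a random placeholder standing in for the interpolated grid:

```python
import numpy as np

# Placeholder for the interpolated grid produced by griddata above
zi = np.random.rand(1024, 1024).astype(np.float32)

# Save the unmodified float values for quantitative use...
np.save("thickness_grid.npy", zi)

# ...and reload them later without any loss
restored = np.load("thickness_grid.npy")
print(np.array_equal(restored, zi))  # True
```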

Tags: FreeSurfer, FastSurfer, cortical vertex data, cortical thickness, FreeSurfer scalar overlay, texture mapping, 2D projection, Mercator projection, flattening
