"Spatial Cell Data" from a mesh: from Slicer to Pyvista

I exported Cartesian 4D echo images from Philips QLAB and converted them using the [Philips 4D US DICOM patcher] module. Then I exported the US volume as an NRRD file and loaded it into PyVista for 3D analysis. Reading the mesh (UniformGrid), I observed that it reports ‘Spatial Point Data’ but not ‘Spatial Cell Data’, which is basically the intensity of each voxel. I am just wondering how 3D Slicer can read and show the pixel/voxel intensity from the NRRD file. Where is the intensity value of each cell of the mesh stored, so that maybe I can retrieve it using PyVista? Thanks in advance for the help.

UniformGrid (0x26b6e7bc288)
N Cells: 8078175
N Points: 8200192
X Bounds: 0.000e+00, 2.060e+02
Y Bounds: 0.000e+00, 2.007e+02
Z Bounds: 0.000e+00, 1.610e+02
Dimensions: 224, 176, 208
Spacing: 9.237e-01, 1.147e+00, 7.777e-01
N Arrays: 1

mesh.point_data
Out[118]:
pyvista DataSetAttributes
Association : POINT
Active Scalars : ImageFile
Active Vectors : None
Active Texture : None
Active Normals : None
Contains arrays :
ImageFile uint8 (8200192,) SCALARS

mesh.cell_data
Out[119]:
pyvista DataSetAttributes
Association : CELL
Active Scalars : None
Active Vectors : None
Active Texture : None
Active Normals : None
Contains arrays : None

An NRRD file stores a voxel array, so it is not a mesh but an image (vtkImageData). To get a mesh, you need to segment the image.
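To illustrate where the intensities end up when such an image is loaded in PyVista: the voxel values are exposed as point scalars (the ImageFile array listed above), and they can be reshaped into a 3D numpy array using the grid dimensions. A minimal sketch, assuming a hypothetical file name and that the Fortran-order reshape (VTK orders points with the first index varying fastest) is what is wanted:

import numpy as np
import pyvista as pv

vol = pv.read("us_volume.nrrd")  # hypothetical file name

# The per-voxel intensities live in the point data, not in the cell data
intensities = np.asarray(vol.point_data["ImageFile"])      # shape (8200192,)

# Reshape into a 3D array indexed as voxels[i, j, k];
# order="F" because VTK stores points with the x index varying fastest
voxels = intensities.reshape(vol.dimensions, order="F")    # shape (224, 176, 208)
print(voxels.shape, voxels.dtype)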

I don’t know why the image shows up for you as vtkUniformGrid, which is a specialized, more complex kind of image data. If you load the data using PyVista then that is probably the culprit: it may do this to allow some special visualization techniques, but using this special class instead of vtkImageData may lead to problems if you attempt to use the data for any processing.

PyVista is a convenience layer on top of VTK that makes visualization easier. I’m not sure whether it makes processing easier, as the convenience layer hides a lot of VTK features and may introduce additional complexities. What kind of 3D analysis would you like to do?

We’ve been working with cardiac ultrasound image processing and visualization for many years, so if you tell us a bit about what you want to achieve then we can give you advice on potential approaches. What is your overall goal?

Thanks, Andras, for the information. Quick and very kind response. Thanks again.
Given a 3D ultrasound image (a volume), I need to extract one or more slices (2D images as arrays) along a selected vector. Depending on how the volume was acquired, this vector can point in any direction in space. To define the vector, I need to select two markers (points) in space. Then, starting from the first selected point, I need to extract a slice along the defined vector at a certain distance from that point (in mm). For example, if I select 1 mm and 10 slices, a slice would be extracted every 1 mm from the first point, for a total of 10 slices. All these operations should be done automatically for an AI application. Using 3D Slicer and the Valve View tool in the cardiac package (thanks Andras for this package, it’s amazing, it looks like a mini version of TomTec) I can set the two long axes and the short axis to visualize the plane that contains the aortic valve. Of course the two long axes are not perpendicular to each other as normally set in 3D Slicer, but have to be set manually by an operator. Then, along the short axis, the two marker points can be selected.
Using a deep learning model, I would like to extract the coordinates of the two markers, in order to then do what I explained above, i.e. extract the slices perpendicular to the vector direction. The slices will then be segmented, but that part is pretty easy for me. What I need to solve is how to extract the slices as arrays along a vector in 3D space.
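For the slice-extraction step itself, here is a minimal sketch of one possible approach, using scipy.ndimage.map_coordinates to resample the voxel array on oblique planes. The file name, the marker points p0 and p1, and the slice extent/resolution are made-up placeholders, and the sketch assumes an axis-aligned volume (identity direction matrix), which a Slicer-exported NRRD may not have:

import numpy as np
import pyvista as pv
from scipy.ndimage import map_coordinates

vol = pv.read("us_volume.nrrd")                      # hypothetical file name
spacing = np.array(vol.spacing)                      # mm per voxel along x, y, z
origin = np.array(vol.origin)
img = np.asarray(vol.point_data["ImageFile"]).reshape(vol.dimensions, order="F").astype(np.float32)

# Hypothetical marker points in physical (mm) coordinates
p0 = np.array([50.0, 60.0, 40.0])
p1 = np.array([80.0, 65.0, 70.0])
n = (p1 - p0) / np.linalg.norm(p1 - p0)              # slicing direction = plane normal

# Two in-plane axes orthogonal to the normal
u = np.cross(n, [0.0, 0.0, 1.0])
if np.linalg.norm(u) < 1e-6:                         # n parallel to z: use another helper axis
    u = np.cross(n, [0.0, 1.0, 0.0])
u /= np.linalg.norm(u)
v = np.cross(n, u)

size_mm, res = 60.0, 128                             # in-plane extent (mm) and output resolution
step_mm, n_slices = 1.0, 10                          # a slice every 1 mm, 10 slices in total
grid = np.linspace(-size_mm / 2.0, size_mm / 2.0, res)
uu, vv = np.meshgrid(grid, grid, indexing="ij")

slices = []
for k in range(n_slices):
    center = p0 + k * step_mm * n                    # plane center k mm from p0 along n
    pts = center + uu[..., None] * u + vv[..., None] * v   # (res, res, 3) physical coordinates
    ijk = (pts - origin) / spacing                   # physical mm -> continuous voxel indices
    values = map_coordinates(img, ijk.reshape(-1, 3).T, order=1)  # trilinear sampling, 0 outside the volume
    slices.append(values.reshape(res, res))
slices = np.stack(slices)                            # (n_slices, res, res) numpy array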
I followed the slicing example in PyVista (Slicing — PyVista 0.38.1 documentation) and the function to convert a slice into a 2D numpy array (Transform slice() output to 2D numpy array · Issue #89 · pyvista/pyvista-support · GitHub), even though the latter does not seem to work properly; I should probably fix it. Unfortunately, the NRRD file imported into PyVista and converted to a mesh (UniformGrid) does not contain cell information. Even if I export the 3D echo image as a VTK file, it is read as a UniformGrid in PyVista, so PyVista sees the two files in the same way. The only difference I found is that the NRRD file can be exported and then read back by 3D Slicer, whereas the VTK file can be exported but is no longer read by 3D Slicer:

  • Error: Failed to load model from VTK file F:****.vtk as it does not contain polydata nor unstructured grid. The file might be loadable as a volume.
  • Error: Loading F:/****.vtk - Failed to read node US_2 (vtkMRMLModelNode7) from filename='F:*****.vtk'.

I hope I’ve been clear enough. If needed, I can try to be even more detailed.
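On converting a slice to a regular 2D array: rather than post-processing the PolyData returned by slice(), one possible workaround is to build a regularly spaced plane with pv.Plane and probe the volume onto it with sample(), so the sampled values already come on a res-by-res grid. A rough sketch; the file name, plane center, normal, extent and resolution are placeholders, and the reshape ordering of the plane points is an assumption worth double-checking:

import numpy as np
import pyvista as pv

vol = pv.read("us_volume.nrrd")                      # hypothetical file name

center = np.array(vol.center)                        # placeholder: slice through the volume center
normal = np.array([0.3, 0.5, 0.8])                   # placeholder slicing direction
normal /= np.linalg.norm(normal)

res = 128                                            # output resolution (placeholder)
size_mm = 60.0                                       # in-plane extent in mm (placeholder)
plane = pv.Plane(center=center, direction=normal,
                 i_size=size_mm, j_size=size_mm,
                 i_resolution=res - 1, j_resolution=res - 1)

sampled = plane.sample(vol)                          # interpolate the volume's point data onto the plane points
img2d = np.asarray(sampled["ImageFile"]).reshape(res, res)  # assumes row-major ordering of the plane grid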

What would you like to extract using deep learning? The annulus curve, specific landmark points along the annulus, leaflets, papillary muscles, …?

Just to give an idea of what I have to do and then automate: How to assess the Aortic root dimensions by 3D-TEE (Q-lab) - YouTube

Using deep learning, I would first extract specific landmark points along the defined long axes, and then the area of the annulus in a plane perpendicular to the resulting short axis. The problem is that I cannot simply train a model to segment a surface in the US volume, as this might lead to the classic overfitting problem. Of course I can generate an ROI (and this is what I plan to do), but it doesn’t solve the problem.