Hello, I have a really similar question to this. I want to get the coordinates of the segmented heart’s surface.
I used the Sequence Registration module and the Transforms module to transform a model generated from the Segmentations module. I want to export the time-varying coordinates of the mesh points (vertices) on that 3D model's surface, so that I can use the boundary motion to run a CFD/FSI simulation.
I tried to add the mesh points as control points in the Markups module, but there are more than 20,000 points, so adding them all as markups seems impossible.
You can get mesh point positions as a numpy array using slicer.util.arrayFromModelPoints. Note that you need to harden the transform on the model node to get the transformed coordinates in the array.
Once you have the mesh point positions in a numpy array, you can then write the values into any format that your CFD/FSI engine can digest.
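For example, assuming points holds the (N, 3) array returned by arrayFromModelPoints, writing it out as CSV takes one numpy call (the sample values and file name below are placeholders, not from your data):

```python
import numpy as np

# Stand-in for the (N, 3) array returned by slicer.util.arrayFromModelPoints(modelNode)
points = np.array([[10.0, -5.2, 33.1],
                   [11.4, -4.9, 33.0],
                   [12.1, -4.1, 32.8]])

# Write one "x,y,z" row per mesh point
np.savetxt("surface_points.csv", points, delimiter=",",
           header="x,y,z", comments="")
```

Most CFD/FSI preprocessors can read a plain comma-separated point list like this, or you can adapt the format string to whatever your solver expects.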
Thanks for your reply!
I’m quite a newbie to Slicer and Python. Do I need to run a complete Python script to get the array, or can I just run slicer.util.arrayFromModelPoints(modelNode) in the Python Interactor in 3D Slicer?
Btw, does modelNode mean the MRML ID shown in the Data module?
You can open the Python console in Slicer (or use the Jupyter notebook interface via SlicerJupyter) and run slicer.util.arrayFromModelPoints there. You can get modelNode using the slicer.util.getNode() function. You can find many useful code snippets in the script repository.
To get started with Python programming in Slicer, you can check out the PerkLab Slicer programming tutorial.
Hi Lasso,
Sorry to bother you again, but I’m really a newbie to Python and I still have some confusion about the mesh point positions.
I got the numpy array using the method you suggested; however, I don’t think I understand what this array actually means. I think the three numbers in each row are the position of one mesh point, but I don’t know what the corresponding time point of these coordinates is. In addition, are these RAS or LPS coordinates?
I’d like to write these values into a table-like format. For example, at t = 0 s the coordinates of each mesh point are [x1,y1,z1] [x2,y2,z2] …; at t = 1 s they are [x1’,y1’,z1’] [x2’,y2’,z2’] …
What should I do? Thanks a lot!
A quick update: I saved the numpy array as a CSV file and found that it contains 91933 coordinates, which is exactly the number of mesh points. So I think these are only the coordinates at time = 0 s? How can I get the coordinates at the other time points?
You can step through the time sequence and export coordinates at each timepoint. You can find an example of how to step through all timepoints here.
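Once you can read the point array at each timepoint, assembling the table layout you described is plain bookkeeping. A minimal sketch with made-up data (two timepoints, three points; the arrays stand in for what arrayFromModelPoints would return at each step, and the file name is arbitrary):

```python
import numpy as np

# Stand-ins for arrayFromModelPoints results at two timepoints
coords_t0 = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
coords_t1 = coords_t0 + 0.5  # points moved between timepoints
times = [0.0, 1.0]

# Long format: one row per (time, point): t, point_index, x, y, z
rows = []
for t, coords in zip(times, [coords_t0, coords_t1]):
    for i, (x, y, z) in enumerate(coords):
        rows.append((t, i, x, y, z))
table = np.array(rows)

np.savetxt("point_trajectories.csv", table, delimiter=",",
           header="t,point_index,x,y,z", comments="")
```

Because the mesh topology does not change between timepoints, the point index is a stable identifier, so each point's trajectory can be recovered by filtering on the point_index column.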
I read through that sample and realized that in order to step through all timepoints, it needs to browse a sequence. I copied part of the sample code below; does this mean I can only get the coordinates of the volume at different time points? I’d like to get the coordinates of a model instead, not the volume.
Thanks!
# Get currently displayed volume node voxels as numpy array
volumeNode = browserNode.GetProxyNode(sequenceNode)
voxelArray = slicer.util.arrayFromVolume(volumeNode)
The browser node iterates through all the associated sequences - volumes, transforms, models, etc.
Sorry, but I can’t really get it. How can I connect my modelNode to the sequenceNode or browserNode?
When I jumped to a selected sequence item using browserNode.SetSelectedItemNumber() and tried to get the currently displayed model node using slicer.util.getNode() and slicer.util.arrayFromModelPoints(), it still output the coordinates at the initial time point.
I don’t think you have a model sequence. I guess you have a transform sequence, so you would iterate through that. At each timepoint you clone your model node (to keep the original non-transformed point coordinates), apply the current transform to the model, then save the transformed model.
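For a linear transform, the "apply the transform" step boils down to multiplying each point by the transform's 4x4 homogeneous matrix (in Slicer this is what hardening a linear transform on the model does). A minimal numpy sketch, where the matrix is a made-up translation rather than one from your transform sequence:

```python
import numpy as np

def apply_transform(points, matrix):
    """Apply a 4x4 homogeneous transform matrix to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    transformed = homogeneous @ matrix.T                              # rows become matrix @ [x, y, z, 1]
    return transformed[:, :3]

# Made-up example transform: translation by (10, 0, -5)
T = np.array([[1.0, 0.0, 0.0, 10.0],
              [0.0, 1.0, 0.0,  0.0],
              [0.0, 0.0, 1.0, -5.0],
              [0.0, 0.0, 0.0,  1.0]])

pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 2.0, 3.0]])
moved = apply_transform(pts, T)
```

This only illustrates the geometry; in practice you would let Slicer do it (apply the transform node to the cloned model and harden it), which also handles non-linear transforms correctly.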
@lassoan I found an example here, so I need to jump to each selected sequence item using browserNode.SetSelectedItemNumber() and apply the transform? Do I first need to convert the model into a segmentation in order to apply the transform? Thanks a lot.
That example is perfect. It does not clone the model node but resets the model node content from the segmentation node at each timepoint.
You don’t have to use the browser node, you can just get the transform node from the sequence as it is shown in the example.
When I try to run the for loop in that example, I get the AttributeErrors shown below. I can’t work out what these errors indicate; I used exactly the same code as in that example.
AttributeError: 'MRMLCorePython.vtkMRMLTransformNode' object has no attribute 'GetNumberOfDataNodes'
AttributeError: 'MRMLCorePython.vtkMRMLTransformNode' object has no attribute 'GetNthDataNode'
The code is correct. You got the error because you provided a simple transform node as transformSequenceNode.
This transform node that you got from the scene is most likely a “proxy node” for the transform sequence, i.e., a node that exposes the transform at the selected timepoint in the scene. You can get the sequence node from a proxy node like this:
proxyNode = getNode(...)
browserNode = slicer.modules.sequences.logic().GetFirstBrowserNodeForProxyNode(proxyNode)
sequenceNode = browserNode.GetSequenceNode(proxyNode)
Hi @lassoan, thanks for your help with extracting the positions. However, I’ve run into another problem: I want to calculate the strain tensor of the boundary. I read your suggestion here, so I’m trying to export the displacement field.
I found an example for extracting a displacement field here. However, I only want the displacement field on the segmentation/model, not over the entire volume. Can I convert a segmentation into a new volume, so that I can export the displacement magnitude of the transform as a volume and visualize it? Or do you have any other ideas for getting the displacement field of a model?
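To illustrate what I am after: given the surface point positions at two timepoints, the per-point displacement vectors and their magnitudes are just a subtraction (the numbers below are made up, standing in for arrayFromModelPoints results):

```python
import numpy as np

# Point positions at two timepoints (stand-ins for arrayFromModelPoints results)
points_t0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
points_t1 = np.array([[0.0, 3.0, 4.0], [1.0, 0.0, 1.0]])

displacement = points_t1 - points_t0               # (N, 3) displacement vectors
magnitude = np.linalg.norm(displacement, axis=1)   # per-point displacement magnitude
```

So what I need is this displacement field restricted to the model surface, not sampled over the whole image volume.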
Thanks a lot!