2 questions / inquiries regarding transforms:
- I’m having issues loading transforms using the slicer.util.loadTransform() method. No matter which file type I save in Slicer (.txt, .tfm, .h5, etc.), I have been unable to save a transform file and then reopen it the “conventional” way. Note that I can still drag and drop the transform into Slicer and it opens properly, so I know it is not a file-type issue.
I have no issues loading segmentations or volumes using the analogous slicer.util.loadVolume(), etc.
- In the past I have written code that parses the text file and then “recreates” a transform node in Slicer, but I’m getting some very strange results: when I load the raw text file into Slicer, it seemingly changes the values of the transform.
I have attached a screenshot showing the raw text file I saved (using Slicer! Note that the format in which it saves isn’t perfectly aligned with what Slicer displays, but I’ve highlighted 2 of the numerous discrepancies between what was saved and what is opened) and the transform when loaded back into Slicer… Why are the values different? Clearly there is some rounding going on, but surely that wouldn’t explain a 5 mm difference between two values.
Any idea what’s going on here?
Any help would be greatly appreciated.
Edit: It’s also probably worth noting that Slicer’s interpretation of the data is “correct”, i.e. a perfect overlap (top). When I “manually recreate” the transform using the raw numbers from the text file, I get the “wrong” answer, i.e. not a perfect overlap (bottom).
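For reference, my “recreate from the text file” step looks roughly like this minimal sketch (the helper name and the sample values are illustrative, not my exact code); it just reads the numbers verbatim into a 4x4 matrix:

```python
import numpy as np

def parse_itk_linear_transform(text):
    # The "Parameters" line holds 9 rotation entries (row-major)
    # followed by 3 translation entries.
    for line in text.splitlines():
        if line.startswith("Parameters:"):
            vals = [float(v) for v in line.split(":", 1)[1].split()]
            m = np.eye(4)
            m[:3, :3] = np.reshape(vals[:9], (3, 3))
            m[:3, 3] = vals[9:12]
            return m
    raise ValueError("no Parameters line found")

sample = """#Insight Transform File V1.0
Parameters: 1 0 0 0 1 0 0 0 1 10 20 15
FixedParameters: 0 0 0"""
matrix = parse_itk_linear_transform(sample)
print(matrix[:3, 3])  # translation column: [10. 20. 15.]
```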
That always confused me until I understood that the conventions are different. Here is one random translation I created in Slicer:
That got written into the file as:
#Insight Transform File V1.0
Parameters: 1 0 0 0 1 0 0 0 1 -12 14 -6
FixedParameters: 0 0 0
You can see the sign change from 6 to -6. That’s because the transform written into the ITK file is the inverse of what Slicer is displaying. So if you scroll down in the Transforms module and click the
Invert button, you will see that the displayed transform now matches the file (Slicer now displays a negative sign (-) to indicate that the transform is inverted):
I never tried loading a transform from Python into Slicer, but inverting the transform before applying it might be a fix.
The Slicer GUI shows the modeling transform in the RAS coordinate system. The ITK transform file convention is a resampling transform in the LPS coordinate system. See the detailed explanation and a link to conversion scripts in the Transforms module’s developer documentation.
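The convention change can be sketched in plain numpy (a minimal sketch of the idea, not the full conversion scripts referenced in the documentation): flip the first two axes (LPS to RAS) and invert (resampling to modeling). Applied to the example translation file shown earlier in the thread, it recovers the Slicer-displayed values:

```python
import numpy as np

def itk_lps_to_slicer_ras(itk_matrix):
    # ITK files store a resampling transform in LPS; Slicer displays
    # a modeling transform in RAS. Change basis, then invert.
    ras2lps = np.diag([-1.0, -1.0, 1.0, 1.0])
    return np.linalg.inv(ras2lps @ itk_matrix @ ras2lps)

# Translation from the example file: Parameters ... -12 14 -6
itk = np.eye(4)
itk[:3, 3] = [-12, 14, -6]
print(itk_lps_to_slicer_ras(itk)[:3, 3])  # recovers (-12, 14, 6) on the Slicer side
```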
Please copy here the code snippets that you use for saving and loading transforms and we’ll have a look at why they don’t work as you expect.
Hello all,
I’m feeling particularly dense at the moment [waiting for admission for deeper diagnosis of SIH/CSF leak…]
I just can’t seem to get my foggy brain wrapped around 3D Slicer, and I would really like to be able to make a good skull/c-spine ‘picture’ using several MRI/DSA/CTA/contrast-MRI sources.
I have clear spinal stenosis, a probable CSF leak in the c-spine, and a very tricky positional ‘insult’/compression of the RVA…
Is there any really, really good help/tutorial? I’ve looked at the tutorials on the wiki, and have used Horos quite a bit, but there is just something in the 3D Slicer UI and ‘philosophy’ that is defeating me… I’m not a doctor, but, sadly, I’m getting better at reading MRIs than I would have dreamed. If you want to contact me off-list, that’s dandy; I don’t want to waste the bandwidth of all you experts!
3D Slicer is more flexible than typical medical image review applications because it is designed to deal with one-of-a-kind cases. Since there are no guide rails, it can be harder to figure out what you can do and how to do it.
If you write here any specific question then we should be able to help, either by pointing to an existing tutorial or by describing the steps that you can follow.
Thanks, sir… I’ll try to come up with specifics, but right now just getting a series loaded and a 3D view up is tough enough… Maybe I should start with registration first, in 2D, since that’s probably the first step in the process of building a more detailed/‘deeper’ image set.
As always, thanks for the insightful comment Andras! I think that I will be able to reverse engineer a solution that works for my problem from the code example provided.
W/R/T code snippets for saving and loading transforms, some brief snippets are attached. There’s not much to the loadTransform code, which really has me scratching my head.
#### creating and saving a transform node
import numpy as np
LR = 10  ### R translation (mm)
PA = 20  ### A translation (mm)
IS = 15  ### S translation (mm)
transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode")
transformMatrixNP = np.array([[1, 0, 0, LR],
                              [0, 1, 0, PA],
                              [0, 0, 1, IS],
                              [0, 0, 0, 1]])
# Update matrix in transform node
slicer.util.updateTransformMatrixFromArray(transformNode, transformMatrixNP)
myTransformStorageNode = transformNode.CreateDefaultStorageNode()
myTransformStorageNode.SetFileName("temp_tmax_in_ctp0.txt")
myTransformStorageNode.WriteData(transformNode)
##### loading a transform node (doesn't work with any saved file type, whether I save it via code or GUI)
fu_xform = slicer.util.loadTransform("temp_tmax_in_ctp0.txt")
Confirmed the LPS to RAS coordinate system conversion worked, so that part is officially resolved. Thank you so much!
Still having trouble loading transforms with slicer.util.loadTransform, but now that I can properly generate them from a 4x4 array I should be all set.
I’ve tested your code snippet and everything works well, except that you haven’t specified a full path for saving and loading, and you used APIs at different levels.
The two different APIs use a different path prefix for relative paths:
- MRML storage node: lower-level API, uses the current working directory (Slicer.exe location) as a basis for resolving relative paths
- slicer.util.loadTransform: higher-level API, uses the scene path as path prefix
I would recommend using full paths when saving or loading data to avoid ambiguity.
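For example, building the path with pathlib removes the ambiguity (the directory shown is just a placeholder):

```python
from pathlib import Path

# Build an unambiguous absolute path instead of relying on whichever
# directory (current working dir vs. scene dir) the API resolves
# relative paths against.
save_dir = Path.home() / "SlicerData"  # placeholder directory
full_path = save_dir / "temp_tmax_in_ctp0.txt"
print(full_path.is_absolute())  # True
# In Slicer, pass the absolute path as a string:
# fu_xform = slicer.util.loadTransform(str(full_path))
```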