Hello, friends. I am trying to perform a rigid alignment between a 2D gray-scale image and a label (segmentation) image.
I set up both moving and fixed fiducials (as in the picture), define the transform to save, and apply.
Then I go to the Data module and apply the generated transform, but NOTHING HAPPENS. What am I missing?
I am using the Fiducial Registration module.
I clicked Apply and got no error, warning, or message.
I was wondering… how does the program know that the first set of fiducials is attached to the gray-level image and that the other set is attached to the label image?
Just by placing the fiducials over them (image and label, respectively)?
The module does not need to know what the points are associated with; landmark registration just computes a transform that aligns the two point sets.
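If it helps to see what that means, here is a minimal numpy sketch of a least-squares rigid fit between corresponding point sets (the Kabsch method). This illustrates the idea; it is not Slicer's actual implementation:

```python
import numpy as np

def fit_rigid(moving, fixed):
    """Least-squares rigid transform (Kabsch) mapping moving -> fixed.

    moving, fixed: (N, D) arrays of corresponding points.
    Returns R, t such that fixed ~= moving @ R.T + t.
    """
    mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # +1 or -1; guards against reflection
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = fc - R @ mc
    return R, t
```

Note that the pairing between the two point lists is assumed to be known, and you need at least 3 non-collinear points for a unique rigid fit in 3D.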
Can you check the “Save transform_1” transform values in the Transforms module? Is it an identity matrix (1.0 on the diagonal and 0.0 everywhere else)?
Could you save the scene as a .mrb file, upload it somewhere and post the link here? (make sure no patient information is included, you can use any image that you find on the web)
All right. I got it.
It was a problem of defining the fiducials correctly. Rigid registration requires at least 3 fiducials (for both the fixed and moving sets, I guess). The other thing was defining the fiducial groups: clicking the fiducial icon (3 red stars) creates a list, and then I had to add the individual points by clicking the up-arrow icon.
One last question remains. Will the algorithm try to match the _1 fiducial of the moving image with the _1 fiducial of the fixed image? (If so, fiducials would have to be placed in a specific order.) Or does it not really matter?
Yes, and this is very important: the “Fiducial Registration” module requires point orders to match (the N-th moving fiducial will be matched to the N-th fixed fiducial).
The “Fiducial Registration” module is very basic. If you need automatic point matching (so that the order and number of fiducials in the fixed and moving lists do not have to match exactly), want to compute a warping transform, or just want a more convenient user interface, then I would recommend the “Fiducial Registration Wizard” module in the SlicerIGT extension.
I had a 2D image with a misaligned segmentation (.seg.nrrd). I used the Fiducial Registration Wizard, found the right transform, hardened it, and overwrote the .seg.nrrd file.
In the next stage, I load both the image and the .seg.nrrd into a Python notebook and use the nrrd package to manipulate the label in the segmentation. I then plot the image with the segmentation overlaid. It's a fast visual check in my application.
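Roughly, the notebook code looks like this (a minimal sketch; the file names are placeholders):

```python
import nrrd
import numpy as np
import matplotlib.pyplot as plt

# File names are placeholders.
image, image_header = nrrd.read("image.nrrd")
seg, seg_header = nrrd.read("segmentation.seg.nrrd")

# Overlay the label on the gray-scale image; the transpose is because
# nrrd stores the fastest axis first while imshow expects rows first.
plt.imshow(image.T, cmap="gray")
plt.imshow(np.ma.masked_where(seg == 0, seg).T, alpha=0.5)
plt.show()
```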
Here is the problem: the resulting plot (with matplotlib) is just as misaligned as before. Take a look at the images:
In 3D Slicer, after aligning, saving, and reopening:
I wonder if the problem is that matplotlib is aligning the images by their dimensions (i.e. by aligning a corner of the images) rather than by spatial information associated with them. This would be a normal approach for a system unused to dealing with images which need to be located in space. The Slicer registration and transform hardening modified only header information, not the pixel grid. If matplotlib is aligning based on the pixel grid, then it will look like the registration had no effect.
In the end, segment_1 should contain both the spatial and the grid info converted into an array, shouldn't it?
Maybe I would need an interpolation step, so that the grid would hold the new values produced by the transformation. I'm not sure it makes sense, but…
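Something like this is what I have in mind: applying the origin difference as a shift of the label grid (a rough sketch, assuming a 2D single-layer file with axis-aligned space directions; file names are placeholders):

```python
import nrrd
import numpy as np
from scipy.ndimage import shift

img, img_hdr = nrrd.read("image.nrrd")
seg, seg_hdr = nrrd.read("segmentation.seg.nrrd")

# Origin difference in mm, converted to voxels via the spacing
# (assumes axis-aligned "space directions", so spacing is the diagonal).
spacing = np.diag(np.asarray(seg_hdr["space directions"], dtype=float))
offset_vox = (np.asarray(seg_hdr["space origin"]) -
              np.asarray(img_hdr["space origin"])) / spacing

# Resample the label onto the image grid; order=0 keeps label values intact.
seg_on_image_grid = shift(seg, offset_vox, order=0)
```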
Perhaps I’m being unclear. Here is a better description of what I imagine may be happening:
Initial State:
Original Image is, for example, 100 pixels x 100 pixels with an origin at (0,0)
Original Labelmap/Segmentation is also 100 pixels x 100 pixels and has an origin at (0,0)
However, the original labelmap is not aligned with the original image, so you perform fiducial registration, and the resulting transform translates points up 10 mm and to the right 10 mm. You apply this transform to the original segmentation/labelmap and harden the transform. The modified segmentation/labelmap is still 100 pixels x 100 pixels, but it now has an origin at (10, 10) instead of at (0, 0).

When you save the modified version, the only change from the original is the different origin, and this information lives only in the nrrd header, not in the pixel grid. When you extract the pixel grid from the modified version, it is identical to the pixel grid from the unmodified version. Slicer shows the modified version as shifted because it reads the header and positions the pixel grid with respect to the origin. Matplotlib almost certainly does not do this, instead implicitly assuming that the (0,0) voxel of one image should be lined up with the (0,0) voxel of the other. The positioning information necessary to line up the image and the segment is contained in the header and is not accessible to matplotlib, which is only given the pixel grid.
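You can check this hypothesis directly from the headers, for example (file names are placeholders):

```python
import nrrd
import numpy as np

# Compare the original and the transform-hardened segmentation files.
orig, orig_hdr = nrrd.read("segmentation_original.seg.nrrd")
moved, moved_hdr = nrrd.read("segmentation_hardened.seg.nrrd")

print(orig_hdr["space origin"])   # e.g. [ 0.  0.]
print(moved_hdr["space origin"])  # e.g. [10. 10.] after hardening the translation

# If only the header changed, the pixel grids are identical:
print(np.array_equal(orig, moved))
```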
So, I think what you likely need is a version of the registered segmentation/labelmap which has been resampled into the same grid as the original image. Try the following:
From the Segmentations module, select your segmentation (with the transform already hardened) and go down to the section titled “Export/import models and labelmaps”.
Select “Export” and “Labelmap”, then open up the “Advanced” section. For “Reference volume”, select your original image. Then click “Export”.
Save the exported labelmap and try overlaying that in matplotlib with your original image. If that works, then what I’ve laid out above is probably the correct diagnosis of the problem.
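If you prefer to script it, the equivalent in the Slicer Python console should be something like this (node names are placeholders):

```python
# Run in the 3D Slicer Python console; node names are placeholders.
segNode = slicer.util.getNode("Segmentation")
referenceVolume = slicer.util.getNode("OriginalImage")

# Export visible segments to a labelmap resampled onto the reference grid.
labelmap = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segNode, labelmap, referenceVolume)

slicer.util.saveNode(labelmap, "exported_labelmap.nrrd")
```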