I am using a dry mandible so that pre-defect and post-defect periodontal bone changes can be superimposed and the difference detected in a color map. I converted the CBCT DICOM files to GIPL, did a manual approximation, segmented using ITK, surface-registered the STLs, hardened the transform, and ran Model to Model Distance, but I still get a color map with only one color (no post periodontal bone defect is shown).
Please let me know if you know what I am doing wrong.
The workflow I use, which works, is:
Import DICOM data
Register the two DICOM image sets with General Registration,
or go to the Segment Editor first, do the segmentation, create the segments, and then do the registration.
Use ITK if you really want to segment there, then import the STLs and register the models with
the IGT Fiducial Registration Wizard, or CMF Registration --> Surface Registration or ROI Registration.
I don't know which way of registering will be the most accurate, but we can see the accuracy of the registration once we have the color maps.
Use the Model to Model Distance module to measure the difference.
Load it in Shape Population Viewer (signed distance).
You will have to figure out the color map values to draw a meaningful conclusion in a case like yours, as you are dealing with difficult and highly subjective segmentation.
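As a rough illustration of what the Model to Model Distance step computes, here is a minimal NumPy sketch with hypothetical point clouds: an unsigned, brute-force nearest-neighbour distance per vertex (the actual module uses VTK and can produce signed distances; the function name and data are mine, not the module's):

```python
import numpy as np

def nearest_distances(src, dst):
    """Unsigned distance from each vertex in src to its nearest vertex in dst.

    Brute-force version of what a per-vertex distance filter computes;
    fine for a sketch, far too slow for real meshes."""
    diff = src[:, None, :] - dst[None, :, :]          # (N, M, 3) pairwise offsets
    return np.linalg.norm(diff, axis=2).min(axis=1)   # (N,) nearest distances

# Hypothetical example: a flat "pre" surface and a "post" surface with a 5 mm pit
pre = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
post = pre.copy()
post[12, 2] = -5.0                                    # defect at one vertex

d = nearest_distances(post, pre)
print(d.min(), d.max())                               # 0.0 5.0
```

Everywhere the surfaces coincide the distance is zero; only the defect vertex reports a ~5 mm distance, which is the pattern you should expect in the color map if registration is correct.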
Thanks for your input.
Does that mean I can just use 3D Slicer 4.1 and do all the registration and segmentation?
In other words, just follow steps 1 to 4 without using the Segment Editor, ITK, or CMF?
Please let me know.
Yes, you can use just 3D Slicer for everything you want to do, and the workflow will be much less complicated.
I agree with Manjula that the first step here is to see a bit more of what type of distances you have computed. Take into account that by default the color mapping in SPV maps the highest value in your distances to the color at one end of the colorbar on the right and the lowest to the other end.
Could you please look at the attributes/ranges tab in SPV, see what the min and max values in the distance map you have calculated are, and let us know?
You don't see much because you are visualizing the “normals” map. The magnitude of those maps goes from -1 to 1 (the magnitude of unit vectors pointing in or out).
In the attribute drop-down menu, please change to “Signed”; there your values go from roughly -8 to 8 mm. Play with changing the range to -5 to 5 and you should definitely see something.
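What “playing with the range” does is essentially clip and rescale: values outside the chosen window saturate at the ends of the colorbar, so a few extreme outliers no longer swallow all the color resolution. A minimal sketch of that idea (the function name is mine, not SPV's):

```python
import numpy as np

def colormap_position(signed_dist, vmin=-5.0, vmax=5.0):
    """Map signed distances (mm) to a 0..1 position on the colorbar,
    saturating anything outside [vmin, vmax] at the ends."""
    clipped = np.clip(np.asarray(signed_dist, dtype=float), vmin, vmax)
    return (clipped - vmin) / (vmax - vmin)

pos = colormap_position([-8.0, 0.0, 5.0])
print(pos)   # [0.  0.5 1. ] -- the -8 mm outlier saturates at the low end
```

Narrowing [vmin, vmax] spreads the interesting mid-range distances across more of the colormap, which is why saturating the range makes a 5 mm defect visible.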
Please let us know how it goes,
In my screenshot above I have circled in white where you need to change the attribute from normal to signed. Then change the value range to -5 to 5, or to whatever makes sense, in the range section I have pointed to.
Please see my screenshot below. The defect change, especially at #30 D, is about 5 mm, so I should see a more definite color (green or red), but I do not see it.
Please let me know!
Can you please add two points to the color map near the median with a different color and move them around a bit?
If that does not work, I guess there is some problem at the Model to Model Distance measurement step, or even at the registration step. If you could share some screenshots of the process, or share your models, someone might be able to help.
Agreeing with Manjula here: continue saturating the color map (go down to -1, 1, or lower) until you see something. There seems to be a difference in the crop between the two models, particularly in the mandibular area towards the chin, and that is driving most of the color interpolation.
Could you please take a snapshot of the two models overlaid with transparency and share that with us?
Thanks for the feedback!
May I have your email address? I will send you the pre and post STL models via Google Drive. I can also share the pre/post DICOM files, if you want. Perhaps registration and hardening are the issues, as you both stated.
I will also take screenshots of each step and send them. I would like to start from the DICOM files.
I tried with your data. I think it works well. I did not do a good registration, so the teeth are not well aligned. I think there is an error in your workflow. I will try to write up the workflow in the afternoon.
Looking forward to a more detailed workflow. Since I am working with CBCT scans, not CT or MRI, I was using SlicerCMF, FYI.
Manjula, thanks for that snapshot. The registration seems wrong, as illustrated by the pattern of positive distances parallel to the longest axis of the model. Erin, is it possible to see both of your models?
Also, have you tried manually approximating your models to see if you can generate a better color map?
Yes, the registration is wrong. I did not do the registration properly; it was just a quick test!
Sorry, I was a bit busy.
I used the models Erin sent for registration with CMF Registration --> Surface Registration,
with max iterations and max landmarks.
Please see the video.
The registration is not good. This is similar to a problem I encountered in my previous work.
I solved my problem with CloudCompare, but I wanted to stick to one software package for my work, and I got good results with CMF ROI Registration.
In this case I did not do the ROI registration because I again encountered the bug that I reported on this some time back.
I used these models in CloudCompare and, even with the default settings, I got good results.
Please check the color maps with both registration methods.
Erin, I think you can try the CMF ROI method. I think it should work pretty well.
Also, since you have the DICOM data, why don't you do the image registration (General Registration - BRAINS) and then do the segmentation? Then your segments will be created already properly registered. It would be great if @bpaniagua or Prof. @lassoan could tell us more about this.
Will registration on the DICOM images be more accurate than registering the surface models/segments?
Despite Prof. Lasso's previous answer to me on registration, I am still baffled by the accuracy of CloudCompare registration compared to CMF Surface Registration.
In any case, regarding the problem we set out to look at: it seems you are not measuring the model-to-model distance properly. Please see the video.
Image registration would try to align soft tissues as well, so you would need to apply masking, but that would be essentially the same kind of thresholding that you do to extract the surface. Image-based registration would also be more vulnerable to image artifacts (due to the presence of metal). So, unless you register a small region, such as a single tooth, surface registration is probably more appropriate. It also allows you to use the same method to register CBCT/CBCT and CBCT/intra-oral surface scans.
Rigid surface registration of low-noise surfaces with a good initialization is trivial using ICP. Maybe the problem is that the cut surface is included in the registration; it must of course be excluded, because the cuts are not exactly the same on the two models. If you cut those off (leaving the cut-off ends open), then the registration should be accurate.
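For the record, each ICP iteration alternates nearest-neighbour matching with a closed-form best-fit rigid transform for the current point pairs. That inner step (the Kabsch solution) can be sketched in NumPy; this is an illustration of the algorithm, not SlicerCMF's implementation, and the names and test data are mine:

```python
import numpy as np

def best_fit_rigid(P, Q):
    """Closed-form (Kabsch) rotation R and translation t minimizing
    ||(P @ R.T + t) - Q|| for paired (N, 3) points -- the inner step of ICP."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical check: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
P = rng.random((20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = best_fit_rigid(P, Q)
print(np.allclose(P @ R.T + t, Q))   # True
```

With exact correspondences this recovers the transform exactly, which is why ICP on clean, well-initialized surfaces is trivial; non-matching regions (like the cut surfaces) break the correspondence assumption and bias the fit.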
It would be nice to add a surface selection tool to SlicerCMF that would allow selecting part of a surface for registration.
Thank you for the explanations. The first question is answered and understood.
With regard to the second question, I am not sure what you meant by the cut surface.
In any case, the way I understood it, it is the anterior and posterior ends of the mandible.
So I just clipped around the teeth and did the CMF surface registration with 4000 iterations, and the results were much better. Then I applied the transform to the whole model and got a much better color map. I don't know if that is what you meant.
And of course, if we can select the surface, then that will absolutely solve this problem, I guess…
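The clip-register-harden pattern above can be sketched like this: fit the rigid transform on the clipped region only, then apply (harden) it to every vertex of the whole model. Again a NumPy illustration under assumed data, reusing the closed-form Kabsch fit, not the actual SlicerCMF code:

```python
import numpy as np

def best_fit_rigid(P, Q):
    # Closed-form (Kabsch) best-fit rotation and translation for paired points
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(1)
model = rng.random((100, 3)) * 50.0        # whole "post" model (mm), hypothetical
teeth = model[:30]                         # clipped teeth region used for registration

# Hypothetical misalignment: the post scan is shifted 3 mm
shift = np.array([3.0, 0.0, 0.0])
R, t = best_fit_rigid(teeth + shift, teeth)  # register using the clipped region only

aligned = (model + shift) @ R.T + t        # "harden": apply to the *whole* model
print(np.allclose(aligned, model))         # True
```

Because the transform is rigid, a fit computed on the reliable teeth region is valid for the rest of the mandible, so the defect region itself never biases the registration.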