There is no tutorial, since there is not much to the module. You specify the reference model from which the distances will be calculated (the fixed one in this case), then the target, and choose to create a new model file as output. I slightly modified the visualization so that anything within the -1 to 1 mm range is rendered green.
Depending on the question, you may skip the deformable registration. Rigid registration will only rotate and translate the mesh, whereas deformable registration will non-linearly deform the mesh to match the other. Whether you need deformable or not depends on what you want to measure.
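If you ever want to script this instead of clicking through the GUI, something along these lines should work. This is only a rough sketch: the parameter names in the dictionary and the module attribute name are my guesses from memory, so please check them against the ModelToModelDistance module documentation/XML before relying on them.

import slicer

fixed = slicer.util.getNode('FixedModel')    # reference model (node name is just an example)
moving = slicer.util.getNode('TargetModel')  # model to compare against the reference
output = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLModelNode', 'DistanceOutput')

# parameter names below are assumptions - verify against the module's documentation
params = {
    'vtkFile1': fixed.GetID(),
    'vtkFile2': moving.GetID(),
    'vtkOutput': output.GetID(),
    'distanceType': 'signed_closest_point',
}
slicer.cli.runSync(slicer.modules.modeltomodeldistance, None, params)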
Thank you for testing. Yes, I managed to get the same result with your settings. I was not using the settings properly, as I was confused by the tooltips. Many thanks for the help.
Thank you. Some great points here.
If you wouldn't mind, I had a couple of follow-up questions.
Is it possible for me to modify the -1 to 1 mm threshold? For example, I would like to be a little more precise, down to maybe the -0.3 to 0.3 mm range.
Is it possible for me to see the difference on the 'inside' of the model? I've tried reducing the opacity, however it doesn't seem to do much. For example, I have 2 tooth models - one of them has a hollow cavity preparation (it has been drilled), while the other has not. The model to model distance allows me to see the outside surface, however, I would like to see the distance INSIDE of the tooth as well if possible. Please see attached for reference.
Thanks manjula. Everything is fine with model-to-model distance, but somehow when I load the vtk file into Shape Population Viewer, I get a weird result and am thinking I must be doing something wrong. Would you be able to have a look?
I am not familiar with the Shape Population Viewer module. Instead, after running Model to Model Distance, I used the Scalars section of the Models module. There you can control which values are displayed by adjusting the Scalar Range Mode and Displayed Range settings.
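If you prefer to do it from the Python console, something like this should be roughly equivalent. It is a sketch only: I'm assuming the output model node is named 'output' and that its distance array is called 'Signed' - adjust both to whatever your output actually contains.

m = getNode('output')                     # the Model to Model Distance output model (name is an assumption)
d = m.GetDisplayNode()
d.SetScalarVisibility(True)
d.SetActiveScalarName('Signed')           # array name depends on the distance type you chose
d.SetScalarRangeFlag(slicer.vtkMRMLDisplayNode.UseManualScalarRange)
d.SetScalarRange(-0.3, 0.3)               # only map colors to the -0.3 .. 0.3 mm range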
Hello Arthur. This would be a great change as it would simplify the alignment process drastically.
Would you guys have any plans to work on this?
I am trying to see if I can create a UI where I can simply drag and drop two .ply models (without the need for landmark transfer) and get the alignment done in a single step.
Because ALPACA as a module is currently being reviewed in a paper, we are unlikely to make a change like that in the interim. Most likely, interactive alignment of meshes would become a separate module.
I see, thank you. I'm so glad I ran into ALPACA, it really works like magic.
In the meantime, as my research project is due this coming July, would you guys mind if I attempt to make progress myself?
Any direction & pointers as to how I can achieve it would be appreciated. I'm looking to:
Replace the file selectors with node selectors (e.g. drag & drop 3D models)
Remove the need for .fcsv file
I have some basic programming knowledge so I am not sure if I can achieve it, but as Andras has mentioned, it is likely a trivial change so I hope I will be able to make some progress.
Here's an example of how the drag-and-drop works in Python. If you search through the code for the Qt methods it uses, you'll see how this integrates with other ways drops are handled.
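As a very stripped-down illustration of the idea, something like the following should be close. This is only a sketch, assuming PythonQt lets you override the drag/drop event handlers in a Python subclass (as other script repository examples do); the class and node names are mine.

import qt, slicer

class ModelDropFrame(qt.QFrame):
    """Accepts .ply/.vtk files dropped onto it and loads them as model nodes."""
    def __init__(self):
        qt.QFrame.__init__(self)
        self.setAcceptDrops(True)

    def dragEnterEvent(self, event):
        # only accept drags that carry file URLs
        if event.mimeData().hasUrls():
            event.acceptProposedAction()

    def dropEvent(self, event):
        for url in event.mimeData().urls():
            path = url.toLocalFile()
            if path.lower().endswith(('.ply', '.vtk')):
                slicer.util.loadModel(path)   # adds a vtkMRMLModelNode to the scene
        event.acceptProposedAction()

w = ModelDropFrame()
w.setMinimumSize(300, 150)
w.show()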
Hi Sean,
A while ago, I started working on a module for pointcloud registration that I didn't have time to finish. Perhaps that would be a good starting point (and I can help you a bit in updating it).
It can be found at:
With regard to the io.read_point_cloud function, you can load a model from the scene with something like this:
import open3d as o3d              # Open3D needs to be pip-installed into Slicer's Python

m = getNode('Segment_1')          # fetch the model node from the scene (slicer.util helper)
p = arrayFromModelPoints(m)       # Nx3 numpy array of the model's vertex coordinates
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(p)
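And, just to illustrate where that goes, a rigid alignment of two such clouds could look roughly like this. The variable names are hypothetical; recent Open3D versions expose this under o3d.pipelines.registration, older ones under o3d.registration.

# source_cloud and target_cloud built as above from two model nodes
result = o3d.pipelines.registration.registration_icp(
    source_cloud, target_cloud,
    max_correspondence_distance=1.0,   # same units as the models (mm)
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)           # 4x4 matrix mapping source onto target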
Anyway, if you want to start working on this, I can either give you access to the repo or you can create a branch and I can make some commits to it.
Thank you all for the feedback. I've spent some time digesting the code from the scripts this past week - however, I will admit that, as a dental student with only a basic coding background, it has not been easy creating modifications.
By any chance, would any of you be interested in developing extensions to ALPACA with some funding? I do have some grants available that I can apply for from my university - and if it means accelerating the timeline of this project & improving the quality of the final product, I am happy to look into this. The funding amount varies from $500-$1000 USD.
Here are some of the functionalities that I'm looking for:
Enable drag-and-drop feature (something similar to attached pic)
Upon drag-and-drop and alignment, highlight areas for correction (e.g. red = needs to be trimmed down, blue = must be added). I think this is possible via Shape Population Viewer already - it would be nice to integrate this with ALPACA.
General UI simplification (e.g. remove unnecessary toolbar items). I was thinking a slicelet could be appropriate for this?
Bonus: provide a score (%) based on total difference. The less, the better.
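For that last point, this is roughly what I have in mind, computed from the distance scalars that Model to Model Distance writes onto the output model. I'm assuming the output node is named 'output' and its array is called 'Signed'; the names may differ depending on the distance type.

import numpy as np

out = getNode('output')                         # the distance output model (name is an assumption)
dist = arrayFromModelPointData(out, 'Signed')   # per-vertex distances as a numpy array
tolerance = 0.3                                 # mm
score = 100.0 * np.count_nonzero(np.abs(dist) <= tolerance) / dist.size
print('%.1f%% of surface points are within +/- %.1f mm' % (score, tolerance))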
Any feedback & interest would be appreciated.
If this project is successful, it is sure to make a difference in the dental field.
Can you tell a bit more about this application? How does the colored overlay display help the dentist?
Do you see potential further improvements, additional features that could be useful?
The amount of funding that you mentioned might be sufficient for a software developer/engineering student looking for a summer project. If you cannot find a suitable student, then you could contact one of the Slicer commercial partners, but they would probably ask for about an order of magnitude more funding to get started on a new project. There seem to be many dental applications of Slicer, so it could also make sense to coordinate with others in the community who work in this field, see if you can find some good common topics to work on, and join forces to find funding and developers together.
From my research, it looks like it may be beneficial to create a custom slicelet combining the drag-and-drop function with the ALPACA function. Could someone please confirm?
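For reference, this is the kind of UI trimming I was imagining for a slicelet-style setup. It is untested; I believe these slicer.util helpers exist in current Slicer versions, but please correct me if not.

import slicer

# hide the parts of the main window that a dental user would not need
slicer.util.setToolbarsVisible(False)
slicer.util.setMenuBarsVisible(False)
slicer.util.setStatusBarVisible(False)
slicer.util.setModuleHelpSectionVisible(False)
slicer.util.setDataProbeVisible(False)
slicer.util.setApplicationLogoVisible(False)

# jump straight to the registration module
slicer.util.selectModule('ALPACA')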