Hello! I was placing some landmarks on a PLY surface and noticed that when I place or reposition a landmark at my desired position (Fig. 2), the landmark is not actually touching the model surface (Fig. 2). Is this normal, or am I doing something wrong?
I have the 3D Display placement setting as “snap to visible surface,” which I thought meant the landmark would be placed on the surface. Am I misunderstanding this as well?
One possibility is that you have a single vertex that’s floating in space and is too small to see (though from your close-up I don’t think that’s the case). Regardless, run the model through the Surface Toolbox with the “extract largest component” option enabled, and retry with the new model.
If that doesn’t work, share your model and let us know which platform you are running Slicer on and the release number.
I ran the model through the Surface Toolbox as suggested and it improved the placement (Fig. 1 & 2), but when I choose “Refocus camera on point” and zoom in further, I still get a floating point (Fig. 3).
The only time I can sort of see this is if I zoom in unrealistically close like this, at which point I think the rendering breaks down. I don’t even see the scale anymore; if it were displayed, it would be at a scale of nanometers for this data. I wouldn’t worry about this…
Interesting… Did you run the model through the Surface Toolbox, or is this the original?
I updated to Slicer 5.9.0, extracted the surface with the Surface Toolbox, and tried landmarking again. I added the scale to compare to your images, and I’m getting the floating issue at a wider zoom of around 100 µm.
I tried this with the original as well. Following the same procedure (place the landmark, then zoom in to adjust its position), I get a very similar result.
If you place the point manually on the model via Markups, it should snap to the surface. However, any downstream modification (manually repositioning it) may move it off. To me it still looks like a camera/rendering issue at close range, but someone more familiar with that needs to comment.
We use vtkCellPicker with its tolerance set to 0.005 to find a point on a visible surface (the tolerance allows picking a triangle even if the picking line just misses the edge of a cell). This tolerance value is internally scaled with the window size, so it is hard to tell what physical distance from the surface it can lead to, but I can confirm that reducing the tolerance from 0.005 to 0.00005 made the point snap to the surface much more accurately (I could no longer zoom the view in far enough to see any separation from the surface).
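To get a feel for the numbers: per VTK’s documentation, vtkCellPicker’s tolerance is a fraction of the render-window diagonal. A back-of-the-envelope sketch (the window size and zoom level below are assumed, not measured from Slicer):

```python
import math

# Assumed numbers for illustration only:
tolerance = 0.005          # Slicer's current vtkCellPicker tolerance
win_w, win_h = 1000, 800   # a typical render-window size, in pixels
mm_per_pixel = 0.01        # assumed zoom: 10 um of model per screen pixel

diagonal_px = math.hypot(win_w, win_h)     # VTK scales tolerance by this
pick_radius_px = tolerance * diagonal_px   # pick radius on screen
pick_radius_mm = pick_radius_px * mm_per_pixel

print(f"pick radius: {pick_radius_px:.1f} px ~ {pick_radius_mm * 1000:.0f} um")
# prints: pick radius: 6.4 px ~ 64 um
```

At that assumed zoom, a tolerance of 0.005 allows the pick to land tens of micrometers off the surface, which is consistent with the floating points appearing only at very close zoom on small structures.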
If this tolerance value is too high for your application (e.g., because you work with microscopic structures and your entire model’s physical size is just a few tenths of a millimeter), then you can rescale your model (using the Transforms module) and adjust the length unit in the scene accordingly. However, the implementation of custom units is not fully complete (see the known limitations here), so it may be better to just rescale your model, leave the length unit unchanged, and remember to divide the displayed values by 10/100/1000 to get the correct physical values.
Alternatively, we could change this tolerance value in Slicer. It probably would not cause a perceivable difference in the speed or quality of the pick, but below a certain tolerance value numerical inaccuracies may cause picking failures, so we would need to spend some time testing before reducing it. I’ve added an issue to track this:
Sorry, I missed this question. The answer is a solid no. You (as the individual) will make a far larger error than 25 microns when you try to pick the same spot on the same model on a different day. This technical error is therefore likely far smaller than your individual error, and you don’t need to be concerned about it.