I am trying the SlicerMorph Photogrammetry module with a student. The photos were taken with a cell phone from a single perspective. However, the masking behaved very strangely.
I'm also not sure about the orientations. The photos were displayed horizontally (landscape) in Slicer, and the output masks were also horizontal, even though the original photos are vertical (portrait).
Yes, if that's the masking result you got, photogrammetry isn't likely to work on it. I have never seen patterns like that. Are you using the ROI tool to specify what to keep and then fine-tuning with points? Are there any errors in the console log?
Even if you can mask it, I suspect you are not going to get very good results, since your model's shape is very uniform and doesn't have a lot of features. The most important step of photogrammetry is feature extraction and matching. It will actually help to use something that has more texture than what you have.
For SlicerMorph Photogrammetry, accurate masking is crucial because it improves the reconstruction as well as speeds up the process, and we have spent a lot of time trying to integrate SAM to achieve that. If you want to skip the masking and upload your photos as-is, the best option would probably be to launch ODM on your own (without using the extension).
I see. I think this model should be fine for regular photogrammetry. I will ask the student to take photos with some background to see if it works. If not, using WebODM might be an option.
Also, I wonder if the phone is embedding some rotation info in the JPEG header about whether it was in landscape or portrait mode when the photo was taken, and whether that needs to be taken into account.
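A quick way to check would be to read the EXIF orientation tag with Pillow. This is just a minimal sketch; the filename is a placeholder:

```python
from PIL import Image

# EXIF tag 0x0112 is "Orientation". Values 1-8 encode the rotation/flip
# a viewer should apply to display the image upright
# (e.g. 1 = already upright, 6 = rotate 90 degrees clockwise).
with Image.open("IMG_0001.jpg") as im:  # placeholder filename
    orientation = im.getexif().get(0x0112)
    print("Orientation tag:", orientation)
```

If the portrait photos report a value other than 1, the pixel data is stored sideways and only looks upright in EXIF-aware viewers.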
That makes sense to me, because my iPhone photos are sometimes rotated when I import them into my computer. I suspect the output masks may not match the photos, which led to the failure.
For example, for some reason I got very good masking results, but the task still failed:
As for your second set of masks, yes, it looks like there is some kind of a rotation. The masks should inherit the image information from the photographs, but we have never worked with cellphone photos. I will ask @oothomas to take a look. It would be great if you could provide the originals as saved by the camera (prior to any processing).
It would be cool to be able to use a cellphone camera if you can figure out how to account for the orientation tags.
Also, I think Murat's point about the surface of the object needing more texture for feature matching makes sense. Maybe try a different toy for testing.
Regarding the first images you shared with the masking issue: I've run into this before when the bounding box isn't tight enough around the objects to be masked. Were you using the include/exclude options at all?
I think currently the issue is that even when the mask is good, it seems to be rotated with respect to the original image, which might indicate some kind of portrait/landscape flip in the image headers that we are missing. But I think we will need the original photos from the phone to track down where the difference is coming from.
Thanks, @oothomas. Yes, I used a big ROI box since the object was not in a stable position. I did not try placing the ROI for a single object. I'll wait for the original photos and then give it a try.
Since the same ROI is used for all images, setting a very large ROI to accommodate everything gives less than ideal results. If you don't have too many pictures, you can try to set the ROI manually per photo, which is obviously a bit tedious.
Alternatively, there is ClusterPhotos, which tries to re-organize your photos into sets with similar orientation to deal with the subject moving in the picture frame. You can also give that a try.
With the help of ChatGPT: yes, the iPhone adds a transpose (orientation) flag to the EXIF header when photos are taken in portrait mode. Slicer is not a photo-editing app, so it reads the raw pixel data without applying the EXIF orientation and displays the image in landscape orientation.
Here is a simple script using Pillow to transpose the image to counter the flag (a minimal sketch built around `ImageOps.exif_transpose`; the directory paths and JPEG glob are placeholders):
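```python
import sys
from pathlib import Path

from PIL import Image, ImageOps


def fix_orientation(src_dir: str, dst_dir: str) -> None:
    """Save upright copies of all JPEGs, baking the EXIF orientation into the pixels."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.jpg")):  # placeholder glob pattern
        with Image.open(path) as im:
            # exif_transpose rotates/flips the pixel data according to the
            # EXIF orientation tag and resets the tag, so tools that ignore
            # EXIF (like Slicer) will see the image upright.
            upright = ImageOps.exif_transpose(im)
            upright.save(dst / path.name, quality=95)


if __name__ == "__main__":
    # usage: python fix_orientation.py <input_dir> <output_dir>
    fix_orientation(sys.argv[1], sys.argv[2])
```

Running this on the folder of originals before loading them into the module should make the photos and the output masks agree on orientation.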