- In addition, in one of the sets (“uh-dimensions”) the “fill between slices” function in the segmentation editor does not work well. Is there maybe a connection to the image dimensions?
Image dimensions and the total size of the image do not depend on the specimen size, only on the imaging protocol. The physical size of an object (= object size in voxels multiplied by the voxel size) is expected to be similar if the same specimen is imaged.
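The voxel-size arithmetic above can be sketched with NumPy; the spacing values and mask below are made up for illustration, not taken from your images:

```python
import numpy as np

# Hypothetical example: estimate the physical size of a segmented object.
# 'spacing' is the voxel size in mm (read from the image header in practice);
# 'mask' stands in for a binary segmentation array.
spacing = (0.5, 0.5, 2.0)           # mm per voxel along each axis
mask = np.zeros((10, 10, 1), dtype=bool)
mask[2:8, 2:8, 0] = True            # 36 segmented voxels

voxel_volume_mm3 = np.prod(spacing)               # 0.5 mm^3 per voxel
physical_volume_mm3 = mask.sum() * voxel_volume_mm3
print(physical_volume_mm3)                        # 18.0 mm^3
```

Two images of the same specimen acquired with different protocols will have different voxel counts, but this product (voxel count × voxel volume) should come out roughly the same.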
Note that both images are single-slice volumes, so you cannot really do any volume measurements, only areas and distances. It is also possible that the slices are not in exactly the same anatomical location, so you may find significant size differences.
What is your overall goal with these images? What is the clinical application? What would you like to measure or compare?
Hey lassoan, thanks for the response.
I am creating a 3D model of the heart in two stages of a heart condition (HFpEF) by segmenting the scans in the healthy and unhealthy states (the two scans in the previous message). This is why the differences in dimensions presented in the previous message are odd. Any insights?
It seems you have 2D (maybe 2D+t) images. How do you plan to create a 3D model from them? Do you have 3D volumes as well?
2D images can be taken of different parts of the heart, so it is normal to have some size differences, but the physical size should be at least the same order of magnitude in all images.