I’m using a publicly available dataset that consists of CT volumes and corresponding segmentation masks in NIfTI format. Each segmentation mask contains the following label categories:
0: Background (None of the following organs)
1: Liver
2: Bladder
3: Lungs
4: Kidneys
5: Bone
6: Brain
I need to generate a volumetric liver mesh for each CT volume and then project its vertex coordinates onto the DRR image plane.
As the first step, I converted the multi-organ segmentation mask into a 3D binary liver mask for the first CT volume by setting all non-liver labels to zero in every slice. I then fed this mask into 3D Slicer and generated a TetGen-based tetrahedral (volumetric) mesh for the liver using the Segment Mesher extension (see the attached images).
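The binarization step above can be sketched as follows. This is a minimal sketch using numpy with a small synthetic label array; in practice the segmentation would be loaded from the NIfTI file (e.g. with `nibabel.load(...).get_fdata()` or SimpleITK), and the array shape here is made up for illustration:

```python
import numpy as np

# Hypothetical multi-organ label volume; in practice, load the NIfTI
# segmentation, e.g. labels = nib.load("seg.nii.gz").get_fdata()
labels = np.zeros((4, 4, 4), dtype=np.uint8)
labels[1:3, 1:3, 1:3] = 1   # liver voxels (label 1)
labels[0, 0, 0] = 4         # a kidney voxel (label 4)

LIVER_LABEL = 1

# Binary mask: 1 inside the liver, 0 for background and all other organs
liver_mask = (labels == LIVER_LABEL).astype(np.uint8)

print(liver_mask.sum())  # number of liver voxels
```

The same header (affine/origin/spacing) should be copied onto the binary mask when saving it, so that 3D Slicer sees the mask in the same physical space as the CT volume.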
After that, I projected the vertex coordinates of the volumetric liver mesh onto the DRR image plane of the corresponding CT volume (the DRR was generated by applying the Siddon-Jacobs ray-tracing algorithm in ITK), but they are not aligned, as you can see in the attached image.
My question is: when 3D Slicer produces the volumetric mesh, does it take the original image origin and pixel spacing into account? Or does it add any offset to the volumetric mesh?
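For reference, this is the voxel-index-to-physical-point mapping I assume on the ITK side when comparing against the mesh vertex coordinates. The origin, spacing, and direction values below are made up for illustration; in practice they come from the image header (e.g. `img.GetOrigin()`, `img.GetSpacing()`, `img.GetDirection()` in SimpleITK):

```python
import numpy as np

# Hypothetical image geometry; in practice, read these from the NIfTI/CT header
origin = np.array([-200.0, -150.0, 50.0])   # mm
spacing = np.array([0.7, 0.7, 2.5])         # mm per voxel
direction = np.eye(3)                       # identity orientation matrix

def index_to_physical(ijk):
    """ITK-style mapping of a (continuous) voxel index to a physical point."""
    return origin + direction @ (spacing * np.asarray(ijk, dtype=float))

print(index_to_physical([0, 0, 0]))   # the image origin
```

If the mesh vertices were generated without this mapping (or with a different origin), every projected vertex would be shifted by a constant offset, which is consistent with the misalignment I see.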
(I have implemented an algorithm to project each 3D vertex coordinate of the volumetric mesh onto the DRR plane in the same way the Siddon-Jacobs algorithm does. I am convinced that this vertex projection is correct because I have also tested it manually.)
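For completeness, the perspective projection I use is essentially the following: each vertex is projected along the ray from the X-ray source through the vertex onto the detector plane, and the intersection is expressed in detector (u, v) coordinates. This is a minimal numpy sketch; the function name and the source/detector values in the demo are my own and would be replaced by the actual Siddon-Jacobs geometry parameters:

```python
import numpy as np

def project_to_drr(vertices, source, plane_origin, u_axis, v_axis):
    """Perspective-project 3D points onto a detector plane.

    vertices     : (N, 3) array of mesh vertex coordinates (physical space)
    source       : (3,) X-ray source (focal point) position
    plane_origin : (3,) a point on the detector plane (its origin)
    u_axis, v_axis : (3,) orthonormal in-plane detector axes
    Returns (N, 2) detector coordinates.
    """
    n = np.cross(u_axis, v_axis)               # detector plane normal
    d = vertices - source                      # ray directions, one per vertex
    t = np.dot(plane_origin - source, n) / (d @ n)   # ray-plane intersection params
    p = source + t[:, None] * d                # intersection points on the plane
    return np.stack([(p - plane_origin) @ u_axis,
                     (p - plane_origin) @ v_axis], axis=1)

# Demo with hypothetical geometry: source on the +z axis, detector in the z=0 plane
source = np.array([0.0, 0.0, 100.0])
plane_origin = np.zeros(3)
u_axis = np.array([1.0, 0.0, 0.0])
v_axis = np.array([0.0, 1.0, 0.0])
vertices = np.array([[10.0, 0.0, 50.0]])       # halfway between source and plane

print(project_to_drr(vertices, source, plane_origin, u_axis, v_axis))
```

With this geometry the test point lands at (20, 0): halfway between source and detector, so its in-plane offset doubles, which matches the manual checks I did.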