I am working on a deep learning project where the network generates a semantically segmented mask from .dcm CT input. For instance, if I want to separate out the kidneys, with the input being a 200x512x512 3D array, the output would be a 3D array of the same shape in which all the voxels belonging to the kidneys are labelled 1 and everything else is labelled 0.
The problem I am having is that when I import this .seg.nrrd mask into Slicer, the mask doesn't overlap the original CT scan: it appears shifted, and truncated in the depth dimension as well. Does anyone have experience with this? Any pointers would be greatly appreciated. Thanks in advance.
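For context, this kind of misalignment is usually caused by the NRRD file being written without the CT's geometry (voxel spacing, origin, axis directions), so Slicer falls back to defaults. Below is a minimal, hand-rolled sketch of writing a label volume as a plain NRRD with those fields filled in; the function name, spacing, and origin values are illustrative, and in practice a library like pynrrd or SimpleITK would do this for you. Note that NumPy arrays built from a DICOM slice stack are typically ordered (z, y, x), while the NRRD `sizes` field lists the fastest-varying axis first (x, y, z).

```python
import numpy as np

def write_labelmap_nrrd(path, volume, spacing, origin):
    """Write a (z, y, x)-ordered uint8 label volume as an attached-data NRRD.

    spacing: (sx, sy, sz) voxel size in mm, e.g. from DICOM PixelSpacing
             and the slice-to-slice distance (illustrative values below).
    origin:  (ox, oy, oz) position of the first voxel in mm, e.g. from the
             first slice's ImagePositionPatient (DICOM uses LPS, matching
             the `space` field here).
    """
    z, y, x = volume.shape
    sx, sy, sz = spacing
    header = "\n".join([
        "NRRD0004",
        "type: uint8",
        "dimension: 3",
        "space: left-posterior-superior",
        f"sizes: {x} {y} {z}",                     # fastest axis first
        f"space directions: ({sx},0,0) (0,{sy},0) (0,0,{sz})",
        f"space origin: ({origin[0]},{origin[1]},{origin[2]})",
        "kinds: domain domain domain",
        "endian: little",
        "encoding: raw",
        "",  # blank line terminates the header
        "",
    ])
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        # C-order bytes of a (z, y, x) array vary x fastest, matching sizes.
        f.write(np.ascontiguousarray(volume.astype(np.uint8)).tobytes())

# Hypothetical usage: geometry copied from the source DICOM series.
mask = np.zeros((200, 512, 512), dtype=np.uint8)
write_labelmap_nrrd("kidneys.nrrd", mask,
                    spacing=(0.7, 0.7, 2.5),
                    origin=(-175.0, -180.0, -60.0))
```

A file written this way can be loaded in Slicer as a labelmap (or imported into a segmentation node) and should line up with the CT, provided the spacing and origin really come from the same DICOM series. If the mask is still truncated in depth, it is worth checking that the array was not cropped or reordered along z before saving.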