Match segmentation to volume for deep learning training

Hi,
I have a question regarding the alignment of 3D images (.nii.gz). Say my volume dimensions are [512, 512, 273] and my segmentation is [183, 263, 374], and they have different origins and coordinate systems. I want to resample the segmentation so that it matches the volume for deep learning training. Any suggestions?

Thank you so much!!!

If you use a recent version of Slicer, exported segmentations should match the geometry of the source volume by default.
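
If you need to do this outside Slicer, a minimal sketch with SimpleITK is shown below. It assumes the volume and segmentation files are named `volume.nii.gz` and `segmentation.nii.gz` (placeholder names) and that the two images already overlap correctly in physical space; the segmentation is then resampled onto the volume's grid (size, spacing, origin, direction) with nearest-neighbor interpolation so label values are preserved.

```python
import SimpleITK as sitk

# Hypothetical file names - replace with your own paths.
volume = sitk.ReadImage("volume.nii.gz")
segmentation = sitk.ReadImage("segmentation.nii.gz")

# Resample the segmentation onto the volume's geometry.
# Identity transform: we rely on the physical-space information
# (origin, spacing, direction) stored in both headers.
seg_on_volume_grid = sitk.Resample(
    segmentation,
    volume,                     # reference image defines the output grid
    sitk.Transform(),           # identity transform
    sitk.sitkNearestNeighbor,   # keep integer label values intact
    0,                          # background value outside the segmentation extent
    segmentation.GetPixelID(),
)

sitk.WriteImage(seg_on_volume_grid, "segmentation_resampled.nii.gz")
```

After this, both arrays have the same shape ([512, 512, 273] in your example) and voxel-to-voxel correspondence, so they can be fed to a training pipeline together. If the two files do not actually overlap in physical space, resampling alone is not enough and you would need to register them first.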