Also, as above, I wouldn’t augment data by exporting new datasets through Slicer, but rather augment them dynamically and randomly through PyTorch DataLoader transforms. That way you can even implement, e.g., adaptive histogram equalization with randomly assigned parameters from skimage, or other interesting solutions.
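To illustrate what I mean, here is a minimal sketch of a map-style dataset that draws a fresh random parameter on every access. The class names and parameter ranges are hypothetical; note that PyTorch's DataLoader only requires `__len__` and `__getitem__`, so the sketch uses plain numpy (in practice you would subclass `torch.utils.data.Dataset` and could call `skimage.exposure.equalize_adapthist` with a randomly drawn `clip_limit` instead of the simple gamma transform used here as a stand-in):

```python
import numpy as np

class AugmentedVolumeDataset:
    """Hypothetical map-style dataset: a torch DataLoader only needs
    __len__ and __getitem__, so nothing here depends on torch itself."""

    def __init__(self, volumes, rng=None):
        self.volumes = volumes  # list of 3D numpy arrays scaled to [0, 1]
        self.rng = rng or np.random.default_rng()

    def __len__(self):
        return len(self.volumes)

    def __getitem__(self, idx):
        vol = self.volumes[idx].astype(np.float32)
        # A new random parameter is drawn each time the sample is fetched,
        # so every epoch sees a differently augmented version of the volume.
        # Gamma correction stands in for CLAHE here; swap in
        # skimage.exposure.equalize_adapthist(vol, clip_limit=...) as needed.
        gamma = self.rng.uniform(0.7, 1.5)
        return vol ** gamma

ds = AugmentedVolumeDataset([np.random.rand(8, 8, 8) for _ in range(4)])
sample = ds[0]  # shape (8, 8, 8), values still in [0, 1]
```

Because the augmentation happens inside `__getitem__`, you never materialize augmented copies on disk, and the DataLoader's worker processes apply it in parallel for free.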
@Prashant_Pandey, @JanWitowski Thanks for your quick response. If I am not wrong, Albumentations doesn’t seem to support data augmentation on 3D volumetric data.
Yes, that’s what it looks like to me as well. Perhaps the architecture is flexible enough that we could plug in a 3D augmentation path? If not, we could follow its general style but design with 3D in mind (or N-D, for that matter).
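As a rough sketch of what an N-D path in that style could look like, here is a hypothetical transform written as a callable class (mirroring the probability-`p` convention Albumentations uses); the class name and parameters are my own invention, and numpy's `np.flip` does the actual work, so the same object handles 2D slices, 3D volumes, or higher-dimensional arrays unchanged:

```python
import numpy as np

class RandomFlipND:
    """Hypothetical N-D analogue of an Albumentations-style transform:
    flips the input along each axis independently with probability p."""

    def __init__(self, p=0.5, rng=None):
        self.p = p
        self.rng = rng or np.random.default_rng()

    def __call__(self, volume):
        # Works for any number of dimensions: 2D images, 3D volumes, N-D.
        for axis in range(volume.ndim):
            if self.rng.random() < self.p:
                volume = np.flip(volume, axis=axis)
        # np.flip returns a view; make it contiguous before handing it
        # to a framework that expects contiguous memory.
        return np.ascontiguousarray(volume)

flip = RandomFlipND(p=1.0)  # p=1.0 flips every axis, handy for testing
vol = np.arange(27).reshape(3, 3, 3)
flipped = flip(vol)
```

With `p=1.0` every axis is reversed, so `flipped[0, 0, 0]` equals `vol[-1, -1, -1]`; the spatial shape is always preserved, which keeps the transform safe to drop into an existing pipeline.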
Yes, for sure it would be good to follow this approach.