When Slicer imports a raw CT DICOM dataset, my recollection is that the input is a set of 2D image ‘stacks’ acquired along three orthogonal axes - is that correct? And that Slicer uses MPR to convert this into a volumetric dataset, so you can reslice it any way and do volume rendering.
Is this volumetric data literally just a 3D array of voxel values? Is that what gets saved in NRRD, or are the original slices retained?
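To make the question concrete, here is a minimal NumPy sketch of what I imagine the in-memory representation to be: a plain 3D array of scalar voxel values, where reformatting along the original axes is just array indexing. The array dimensions and the Hounsfield value range are hypothetical placeholders, and real oblique reslicing would obviously need interpolation rather than plain indexing.

```python
import numpy as np

# Hypothetical CT volume: 100 slices of 512 x 512 voxels, values in a
# rough Hounsfield-unit range. A real volume would come from the scanner.
rng = np.random.default_rng(0)
volume = rng.integers(-1000, 3000, size=(100, 512, 512)).astype(np.int16)

# If the volume really is just a 3D array, axis-aligned "multiplanar
# reformatting" reduces to indexing:
axial    = volume[50, :, :]   # one original slice
coronal  = volume[:, 256, :]  # reformatted plane
sagittal = volume[:, :, 256]  # reformatted plane
```

Is that mental model roughly right for what NRRD stores (plus header metadata like spacing and orientation)?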
In the consumer GPU world, 3D textures are now widely supported for polygon rendering, so I'm wondering how trivial it would be to export/convert a CT scan into a format that could be used in regular OpenGL or DirectX. Say I wanted to use a brain scan in a video game, for example.
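For what it's worth, here is the kind of conversion I have in mind, sketched in NumPy: map the signed 16-bit Hounsfield values into 0..255 with a window/level transfer function and pack them into a contiguous byte buffer, which I assume is what a `GL_R8` 3D texture upload via `glTexImage3D` would want. The function name and the window/level defaults are my own placeholders, not anything Slicer provides.

```python
import numpy as np

def volume_to_texture_bytes(volume, window_center=40.0, window_width=400.0):
    """Map Hounsfield units into 0..255 with a window/level transfer
    function, returning a contiguous uint8 buffer of the same shape.
    (Window defaults here are arbitrary soft-tissue-ish values.)"""
    lo = window_center - window_width / 2.0
    hi = window_center + window_width / 2.0
    scaled = np.clip((volume.astype(np.float32) - lo) / (hi - lo), 0.0, 1.0)
    return np.ascontiguousarray((scaled * 255.0).astype(np.uint8))

# Hypothetical 64^3 CT block standing in for a real scan.
ct = np.random.default_rng(1).integers(-1000, 3000, size=(64, 64, 64)).astype(np.int16)
tex = volume_to_texture_bytes(ct)

# tex.tobytes() could then feed something like:
# glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, 64, 64, 64, 0,
#              GL_RED, GL_UNSIGNED_BYTE, tex)
```

Is the real pipeline much more involved than that, e.g. because of anisotropic voxel spacing or orientation metadata that the game engine would need to respect?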