I use Slicer to segment CT/MRI images, and now I would like to try deep learning for medical image segmentation.
For deep learning, we first need to prepare the training data. In particular, if one CT scan consists of 200 slices (so the input is 200×height×width), then we also need a labeled image set of the same size (200×height×width) for that CT.
The problem: I can segment the CT manually now. However, does anyone know how to export the segmentation result (e.g., 200×height×width) as a set of 2D images (e.g., 200 images, each of size height×width)?
(Exporting it to STL and then slicing that is an option, but it is inconvenient and time-consuming.)
I have only been using Slicer for 3 months. Thank you for your help.
I’m not sure that dumping image slices into a series of 2D files would be the best approach. Consumer file formats (JPEG, PNG, etc.) are not well suited for storing metadata that is essential in medical image computing (axis directions, origin, pixel and slice spacing), they may not support 16-bit grayscale images, etc.
I would recommend having a look at existing approaches for file storage, data normalization, and augmentation for deep learning on medical images.
@raul’s group uses the HDF file format for deep learning, which allows structured storage of 3D medical images (including all metadata, arbitrary pixel types, etc.).
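To illustrate the idea, here is a minimal sketch of storing a CT volume and its labelmap in one HDF5 file with `h5py`, keeping the geometric metadata (spacing, origin) as attributes; the shapes, values, and attribute names are made up for illustration, not @raul’s actual layout:

```python
import numpy as np
import h5py

# Made-up small volume for illustration; a real CT might be 200×512×512 int16.
volume = np.random.randint(-1000, 3000, size=(20, 64, 64)).astype(np.int16)
labels = np.zeros_like(volume, dtype=np.uint8)  # segmentation labelmap, same shape

with h5py.File("ct_case_001.h5", "w") as f:
    image_dset = f.create_dataset("image", data=volume, compression="gzip")
    f.create_dataset("label", data=labels, compression="gzip")
    # Metadata travels with the arrays as HDF5 attributes (hypothetical names).
    image_dset.attrs["spacing"] = (2.5, 0.97, 0.97)  # mm, (slice, row, col)
    image_dset.attrs["origin"] = (0.0, 0.0, 0.0)
```

This keeps image, label, and geometry together in a single file, which is exactly what 2D consumer formats cannot do.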
You may also check out DeepInfer and the DeepInfer Slicer extension to see how models are created, etc.
You can get some ideas from NiftyNet, too.
Do you know how to export the segmentation results to HDF format?
I’ll be participating in a project that will explore this, but we’ll only get there in a few months. Maybe @raul can comment on how they create them now.
What you can do easily now is export the segmentation as a 4D NRRD file (each segment is a 3D subvolume). If the segments don’t overlap, you can also export the segmentation to a labelmap node and save it as a 3D NRRD file.
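Once you have the labelmap loaded as a 3D array (pynrrd or SimpleITK can read the NRRD file into NumPy), splitting it into the per-slice 2D label images you asked about is trivial; a minimal sketch with made-up shapes and a fake segment:

```python
import numpy as np

# Pretend labelmap: 200 slices of 512×512, as if loaded from the exported NRRD.
labelmap = np.zeros((200, 512, 512), dtype=np.uint8)
labelmap[50:150, 100:400, 100:400] = 1  # fake segment spanning slices 50..149

# A 200×height×width volume is just a stack of 200 2D label images along axis 0.
slices_2d = [labelmap[i] for i in range(labelmap.shape[0])]
```

Each element of `slices_2d` is a height×width array that you could feed to a 2D network, though as noted above, keeping the data in 3D formats (NRRD, HDF5) preserves the spacing/origin metadata that separate 2D files would lose.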
How did you solve this problem in the end?