I would appreciate your help if you could reply to the following requests:
1. I am trying to install the stable release of 3D Slicer on Ubuntu 16.04 but I couldn't, and when I checked the tutorial I found that it requires Ubuntu 18.04 or later. What can I do to install it?
2. I want to use 3D Slicer to convert the ProstateX dataset to nrrd format, then use pyradiomics to extract the features. Is it possible to convert the whole dataset, and also the true masks, using 3D Slicer?
3. My last question is about the dataset itself: I couldn't figure out which file represents the segmentation label for each patient. If anyone has an idea, it would be a huge help for me.
Sorry if I asked a lot; I have never worked on medical images before and I am getting confused by the concepts and modalities.
Looking forward to your response.
I don’t think there is anything that would prevent Slicer from running on Ubuntu 16.04 (see the Linux install instructions). We just don’t test with this Ubuntu version anymore. We haven’t heard other users complaining about this, so either not many people use Ubuntu 16 anymore or Slicer works for most of them.
Yes, you can do everything either using the GUI or using Python scripting. See examples in the script repository.
This works well (I’ve just tested with a recent Slicer Preview Release on PROSTATEx-0141). What you have probably missed is installing the Quantitative Reporting extension, which is needed to load DICOM segmentation objects.
Thank you @lassoan, I appreciate your help.
I finally managed to install 3D Slicer and could load a sample from the dataset, but about the segmented image, I mean the true label: how did you do it?
For example, in the ISBI 2013 dataset the raw images are saved as DICOM files while the true masks are saved as nrrd files, and I could visualize both the original slices and their segmentations. But for ProstateX, what is the true mask, and how can I load it into 3D Slicer and then save both as nrrd files?
Another question, please: how can I convert the whole dataset from DICOM to nrrd following the same instructions?
Looking forward to hearing from you.
Slicer can load images and segmentations in many formats (nrrd, nifti, DICOM segmentation object, DICOM RT structure set, …) and save in nrrd or other formats.
Any operation that you can do through the user interface, you can also do using Python scripting. For example: import from DICOM and save volume as nrrd.
There are 346 series in PROSTATEx but only 98 segmentations. On the TCIA website you can search for segmentations (image modality: SEG) then download all the subjects that have segmentations.
I was looking for the segmentation masks as you said, but the ProstateX (or ProstateX-2) data consists of a total of 204 mpMRI DICOM studies (one per patient) including the following sequences: T2-weighted (T2), diffusion-weighted (DW) with b-values of 50, 400, and 800 s/mm², an apparent diffusion coefficient (ADC) map (calculated from the b-values), Ktrans images, and lesion information in CSV files.
When I convert the DICOM images to nrrd using 3D Slicer, I obtain 10 nrrd files corresponding to the mpMRI sequences. However, for my task I only need the transversal T2 MRI.
Should I take just the t2-tra.nrrd file?
As for the masks, I took them from this repository: GitHub - rcuocolo/PROSTATEx_masks (lesion and prostate masks for the PROSTATEx training dataset, after a lesion-by-lesion quality check).
I will attach an image to show you what each DICOM study contains and what I obtained after converting.
Please let me know.
I appreciate your time and your help.
You are free to choose whichever sequences you find useful and ignore the rest. For example, localizers are rarely usable for anything.
I loaded the DICOM images and the true mask into 3D Slicer and wanted to save the slices for each patient, together with the corresponding ground truth, as PNG images. However, I obtained just a single PNG file for the DICOM images and a single PNG for the label; the DICOM slice is blurry, and the true mask image is totally black (the gland does not appear there). You can check the attached files.
Please let me know why.
I appreciate your time. Thank you.
I would not recommend using the PNG file format for storing medical images. PNG is not suitable for storing high-bit-depth 3D medical images along with their essential metadata (image position, spacing, and axis directions). The deep learning tutorials that you find on the web use the PNG format because they were developed for computer vision tasks (where native images are stored as PNG) or were copied from computer vision tutorials.
Instead, follow the TorchIO and MONAI tutorials to learn how to use medical images for deep learning.
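The bit-depth problem is easy to demonstrate with a toy example (plain numpy; the intensity values are illustrative, not taken from ProstateX):

```python
import numpy as np

# MRI/CT voxels are typically 12-16 bit; an 8-bit PNG channel only
# holds 0-255, so distinct intensities collide when cast naively.
slice_16bit = np.array([[0, 44], [300, 4095]], dtype=np.uint16)

# astype(np.uint8) wraps modulo 256: 300 -> 44 and 4095 -> 255,
# so the voxel values 44 and 300 become indistinguishable.
naive = slice_16bit.astype(np.uint8)

# A window/level rescale at least preserves relative contrast, but
# position, spacing, and axis directions are still lost in a PNG.
lo, hi = slice_16bit.min(), slice_16bit.max()
rescaled = ((slice_16bit - lo).astype(np.float32) / (hi - lo) * 255).astype(np.uint8)
```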
Thank you again for your quick reply.
I am not working with the PNG format; I work with the SimpleITK, pynrrd, and pydicom libraries.
I just wanted to get a sample slice as PNG. However, I tried with a Python script and got the same result: a black image for the label, while the raw images are clear.
I am not working with png format,
just i wanted to get sample of slice as png
If you save a 3D volume into a PNG file, then only the very first slice is saved (you can get the slice axis from the image direction matrix).
The easiest way to access volume voxels in a Python script is to get the volume as a numpy array and use indexing. See examples in the script repository.
You can also save the displayed slice as shown here.
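Putting these points together, here is a minimal numpy sketch of why the exported label looked black (synthetic arrays stand in for what `slicer.util.arrayFromVolume` would return; the shapes and values are made up):

```python
import numpy as np

# Synthetic stand-in for a label volume: (slices, rows, columns).
label = np.zeros((20, 64, 64), dtype=np.uint8)
label[10, 20:40, 20:40] = 1   # a lesion segmented on slice index 10 only

# Saving the 3D array straight to PNG keeps only the first slice,
# which here contains no lesion at all -- an entirely black image.
first_slice = label[0]

# Even a slice that does contain the mask looks black, because label
# value 1 is nearly invisible on a 0-255 grayscale; rescale it first.
middle_slice = label[10]
visible = (middle_slice.astype(np.float32) / middle_slice.max() * 255).astype(np.uint8)
```

Picking the slice with the largest mask area (or looping over all slices) and rescaling like this should make the gland visible in the preview images.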