Thanks a lot @fedorov. I'll try to modify it manually. Just one question:
I understand that sys.argv[1] holds the first command-line argument passed to my script, but how can I pass my image as the first argument? Should I do it from the command line, or from the Python IDLE?
Sorry for the lack of usage instructions… You can save that code in a file (for example converter.py) and then run it from the command line as follows:
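Something along these lines (the file names below are just placeholders; the first argument after the script name ends up in sys.argv[1], and if the script also expects an output path it would be read from sys.argv[2]):

```
python converter.py /path/to/input_image.dcm /path/to/output_image.nrrd
```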
The .nrrd image that is now created has the same pixel spacing as the original DICOM one. One odd thing remains, though: in Slicer the pixel spacing is correct and in mm (as in the original), while in ImageJ the pixel spacing is correct and the size is also correct (it was modified to 512, 512, 1), but the unit of measure is still microns… maybe it really is just an ImageJ bug!
Anyway, this brings us back to the question I initially asked @JoostJM in PyRadiomics:
Now that the pixel spacing is preserved during the conversion to .nrrd, but my images have different pixel spacings, is there a rule for deciding the resampling with imageoperations.resampleImage? For example, if some images have a pixel spacing of (0.89, 0.89) mm and others a pixel spacing of (1.2, 1.2) mm, should I just resample them all to the mean pixel spacing? Or should I choose the smallest spacing and resample all images to that?
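(For reference, this is roughly how I am checking the spacings across my dataset, just a sketch using SimpleITK with placeholder paths, in case it helps to see what I mean:)

```python
# Sketch: print the pixel spacing of each converted image to see how much it varies.
# The path pattern is a placeholder for my own data.
import glob
import SimpleITK as sitk

for path in glob.glob('/path/to/converted_images/*.nrrd'):
    spacing = sitk.ReadImage(path).GetSpacing()  # (x, y, z) spacing, in mm
    print(path, spacing)
```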
ImageJ is mostly used for microscopy imaging and ignores most DICOM metadata, so it is not surprising that it blindly assumes the unit is always microns.
In general, in medical image computing, visualization and analysis are always performed in physical space, so the resolution of the input images does not matter much. @JoostJM can confirm whether pyradiomics properly takes image spacing into account when computing metrics.
One example where units may matter in pyradiomics is the smoothing filter: its sigma is defined in mm, which will be a problem if the defaults are used and the actual units are microns.
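For illustration, this is roughly where that sigma shows up in the settings (just a sketch of the settings-based interface, which I have not tested; the exact class name may differ between pyradiomics versions):

```python
# Sketch: LoG sigma values in pyradiomics are interpreted in millimetres,
# so values like these will be far too large if the image spacing is actually in microns.
from radiomics import featureextractor

settings = {'sigma': [1.0, 3.0, 5.0]}  # mm, not voxels
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableImageTypeByName('LoG')  # enable Laplacian of Gaussian filtered images
```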
I agree with @lassoan - it is likely ImageJ just assumes microns. That image is a human head, right?
There is not really a “correct” answer to this. As a general rule, you will lose information by resampling. As another general rule, a harmonized acquisition protocol is recommended for this kind of study. The rest is a decision you need to make based on your application, your data, and your analysis of the relevant literature.
Welcome to the world of research with no ground truth! We are here to help you get started with the tools, but the rest is up to you.
@Tommaso_Di_Noto, what you can also try is to export the stack instead of exporting a single image in syngo via. If I'm correct, this will export the entire volume and may prevent the conversion to secondary capture.
As to your other points, I agree with @fedorov, with the small addition that, especially for texture features, a ‘common’ protocol is advised. Specifically for image spacing, this means that you are looking at the same level of texture (i.e. fine or coarse). There is also some literature showing that some features are dependent on the voxel size (although this is usually due to an interaction with volume; the true dependency is the number of voxels in the VOI).
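If you do decide on a common spacing, you can also let the extractor resample for you via the resampledPixelSpacing setting instead of calling imageoperations.resampleImage yourself. A minimal sketch (the spacing value and file names are only examples, not a recommendation):

```python
# Sketch: resample image and mask to a fixed spacing (in mm) before feature
# extraction. The spacing chosen here is only an example; base your choice on
# your own data and the relevant literature.
from radiomics import featureextractor

settings = {
    'resampledPixelSpacing': [1.0, 1.0, 1.0],  # target spacing in mm
    'interpolator': 'sitkBSpline',
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
features = extractor.execute('image.nrrd', 'mask.nrrd')  # placeholder file names
```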