I am extracting radiomics features from whole-body PET/CT using Pyradiomics. My ROI is whole-body fat.
The problem is that it gets stuck at some point and produces no output even after running overnight. However, when I separate the whole-body fat into two parts, upper and lower, the extraction finishes.
My question is: is this because the configuration does not allow processing too many slices at a time, as in whole-body images? If splitting the whole-body structure into two parts is the only solution, is there any way to combine the outputs into a single result for the whole structure afterwards?
Even though this is possible for some radiomic features, in general combining the results would not be valid. Higher-order features are based on the spatial dependency of voxels; when you split the ROI into two parts, you break this spatial relationship.
Have you tried monitoring your RAM utilization during runtime? Consider increasing available RAM if this is your bottleneck.
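If you want to check memory usage from inside the extraction script itself rather than watching an external monitor, here is a minimal stdlib-only sketch; the extractor call in the comment is a hypothetical placeholder for your own code, and note that the `resource` module is Unix-only and reports `ru_maxrss` in kilobytes on Linux (bytes on macOS):

```python
import resource  # Unix-only standard-library module


def peak_rss_mb():
    """Return this process's peak resident set size in MB (Linux reports KB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024


# Call this after the expensive step to see the high-water mark, e.g.:
# result = extractor.execute(image_path, mask_path)  # your extraction call
print(f"Peak RSS so far: {peak_rss_mb():.1f} MB")
```

If the reported peak stays well below your 32 GB while the run is stuck, RAM is likely not the bottleneck.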
Thank you so much for your comments. Actually, I was thinking the same: splitting the ROI makes it impossible to combine the results again.
Yes, I have checked: RAM (32 GB) is at ~50% usage and CPU at ~15% while it runs. Is there any configuration in the package that limits the total number of images that can be processed? Otherwise, I cannot think of anything to fix it.
You can also consider this setting in the PyRadiomics parameter file: force2D: true
This results in per-slice radiomics computation, with averaging at the end. The resulting features will be more stable, but you will lose some of the 3D information.
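In a PyRadiomics parameter file, this would look like the fragment below; the force2Ddimension value shown is an assumption matching the usual axial slice orientation:

```yaml
setting:
  force2D: true         # compute features per slice, then aggregate
  force2Ddimension: 0   # dimension along which slices are taken (commonly axial)
```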
Is there any configuration that sets a limit on RAM/CPU usage, or a maximum number of slices processed per ROI? It gets stuck on the whole-body ROI but finishes after the split, which still puzzles me.
I do not think there is a setting for a maximum number of slices, but you can set the maximum number of voxels processed in a single batch with voxelBatch: X, where X is an integer > 0.
I also think it would be a good test to first restrict your computation to a single feature class and see if that works, say:

featureClass:
  firstorder:

This may give you some insight into whether it fails only with higher-order features.
I was also wondering whether this is due to the large number of voxels it has to process and store temporarily, which exceeds some limit before the final result is generated.
For voxelBatch: X, is X a minimum or a maximum? Should I set it higher in my case?
Thanks
It is a maximum. So try setting it to a large value that is still smaller than half the number of voxels in your largest ROI. The actual parameter setting would look something like this:
voxelSetting:
  voxelBatch: 500
where I have chosen 500 to be the max number of voxels in a batch.
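Putting the suggestions from this thread together, a complete parameter file might look like the sketch below. The values are illustrative only; also note that, per the PyRadiomics documentation, entries under voxelSetting such as voxelBatch only take effect in voxel-based extraction (i.e. when execute() is called with voxelBased=True), not in the default segment-based mode:

```yaml
# params.yaml -- illustrative PyRadiomics parameter file (values are examples)
setting:
  force2D: true          # per-slice computation, aggregated at the end
  force2Ddimension: 0    # slice along the first (axial) dimension
imageType:
  Original: {}           # extract from the unfiltered image
featureClass:
  firstorder:            # start with first-order features only as a test
voxelSetting:
  voxelBatch: 500        # max voxels per batch (voxel-based extraction only)
```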