I am using SlicerRT (v4.8) to try and accumulate dose for a 30 fraction treatment.
I am hoping someone may be able to explain how SlicerRT dose accumulation weights dose distributions. It seems to NOT be a simple linear weighting. For example, if I weight a single dose distribution with a factor of 1.0 and calculate DVH parameters, I get a different result than if I weight the same single dose distribution with a factor of 0.5 (and then multiply the DVH parameters by 2).
Probably I’m missing something obvious but would appreciate any help!
A simple linear weighting is performed. Each dose volume is resampled to the reference volume node's geometry, multiplied by the specified factor, and added to the accumulated dose volume.
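The accumulation step described above can be sketched with NumPy. This is only an illustration of the weighted sum, not SlicerRT's actual implementation; the array values are hypothetical and resampling to the reference geometry is assumed to have already happened:

```python
import numpy as np

# Hypothetical per-fraction dose grid, already resampled to the
# reference volume's geometry (values in Gy).
dose_fraction = np.array([[1.8, 2.0],
                          [2.1, 1.9]])

# Accumulation is a plain weighted sum: accumulated += weight * dose.
weights = [1.0, 1.0]
accumulated = np.zeros_like(dose_fraction)
for w in weights:
    accumulated += w * dose_fraction

# At the voxel level, weighting by 0.5 and doubling afterwards gives
# exactly the same values as weighting by 1.0 ...
assert np.allclose(2 * (0.5 * dose_fraction), 1.0 * dose_fraction)
# ... so any DVH difference must arise in a later step, not here.
```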
Your reply makes sense of course. But for some reason I still get differences when doing the above:
If I weight a single dose volume with a factor of 1.0 and calculate DVH parameters, I get a different result than if I weight the same single dose volume with a factor of 0.5 (and then manually multiply the DVH parameters by 2).
I also keep the reference dose volume and structures constant. So I’m a little perplexed as to why I get the differences. In any case, if it’s not a known bug I will re-examine my method.
You cannot simply scale the DVH parameters (V20, D95, etc.). Dose accumulation involves adding up different dose distributions that, when summed and weighted, will look quite different than any of the individual input dose distributions. At the same time, even if you use the same structure set, the boundaries may be different after resampling. So you cannot expect to be able to just linearly scale up the metrics.
@cpinter
Of course one cannot simply scale DVH parameters when using different dose distributions.
But I'm confused as to why there would be a difference in DVH parameters (D98 etc.) for one single dose distribution when it is weighted with a factor of 1.0, versus the same single dose distribution weighted with a factor of 0.03 with the resulting DVH parameters manually multiplied by 33.333.
Note that the same structure set and dose distribution are being used both times, so the boundaries are not different.
By downscaling the dose by a factor of 33.3x, computing the histogram, and then upscaling the result, you essentially scale up histogram quantization error by 33.3x.
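The effect can be demonstrated with a small NumPy experiment. This is a sketch, not SlicerRT code: the voxel doses are random hypothetical values, and `quantize` stands in for the binning that any fixed-bin-width histogram performs:

```python
import numpy as np

rng = np.random.default_rng(0)
dose = rng.uniform(0, 60, size=100_000)  # hypothetical voxel doses in Gy

bin_width = 0.2  # fixed histogram bin width (DVH step size) in Gy

def quantize(values, width):
    """Snap values to the centers of histogram bins of the given width."""
    return (np.floor(values / width) + 0.5) * width

# Histogramming the full-scale dose: error is at most half a bin width.
err_full = np.abs(quantize(dose, bin_width) - dose).max()

# Downscale by 33.3x, histogram with the SAME bin width, then upscale:
# the quantization error is upscaled together with the dose.
scale = 1 / 33.3
err_scaled = np.abs(quantize(dose * scale, bin_width) / scale - dose).max()

print(err_full)    # close to 0.1 Gy (half of bin_width)
print(err_scaled)  # roughly 33.3x larger
```

This is why DVH metrics computed from the downscaled-then-rescaled histogram drift away from the directly computed ones, and why shrinking the bin width (as described next) restores the accuracy.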
To maintain the error at the same level despite this aggressive scaling, you need to set a smaller histogram bin width (= DVH step size). You can get the current step size by opening the Python console (Ctrl-3) and typing: