Gel Dosimetry Slicelet - Calibration



In the calibration step of the slicelet (from the Wiki) I see step 10:

  1. Get mean optical densities from the central cylinder of the CALIBRATION volume

Must an MeV (electron) irradiation be performed down the central axis of a jar to compare with the imported databook/commissioning PDD values, or can a lateral irradiation be used? Can an MV (photon) irradiation be used, or does calibration require the full depth-dose range that an MeV beam would provide? I'm assuming the step relies on the typical axial view looking down the length of the jar to determine this?

Thank you!

The Gel Dosimetry application has a fixed workflow, so you must follow that. See details here


You need to use the very same type of beam for calibration and treatment evaluation.

@kmalexander please confirm if you have a minute.


@cpinter @SmHoop The reason we use an electron beam calibration is that it produces the greatest range of dose values within the length of our gel jar. You can certainly try using a 6 MV photon beam for calibration. What size jar are you using? If you’re using the comparable 1L jars as we do (around 15 cm tall), your dose values for calibration will be limited from PDD100% to roughly PDD60%. Do note: The automatic scaling and shifting of the curves will likely not produce a very good match, since you’re in essence lining up two straight lines. I’d encourage you to think about placing some sort of optical marker at a known depth so that you can see the spike in your PDD and can align that to the known depth. Hope this helps a bit!


Yes, thank you both, this is very helpful information. I think for this data set the best course is to find the manual calibration coefficients. Future experiments will have to be set up much more carefully to be in line with the workflow. I did achieve a 94.76% gamma pass rate at 3%/3 mm using a CAX MV irradiation as the calibration, applied to a lateral MV irradiation (compared with the Eclipse plan), but that hasn't worked for anything else so far.

@kmalexander @cpinter

Quick follow-up if you have time!

I've manually determined a calibration fit in Matlab by taking a 1 cm diameter cylinder (to reduce noise) down the center of the MeV irradiation and computing the median value of each slice along its length. (This image shows the curve overlaid on the dose profile taken from the Eclipse plan; it does not show the alignment tweaks used to find the range of best fit.)
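For anyone trying the same thing, the per-slice median extraction described above can be sketched as follows. This is an illustrative NumPy version (not the original Matlab script); the function name, the assumption that slices run along the jar axis, and the array layout are all mine:

```python
import numpy as np

def per_slice_median(volume, radius_mm, spacing_mm):
    """Median value per slice within a central cylinder.

    volume:     3-D array indexed (slice, row, col), slices along the jar axis.
    radius_mm:  cylinder radius (5 mm for a 1 cm diameter cylinder).
    spacing_mm: (row, col) in-plane pixel spacing.
    """
    n_slices, n_rows, n_cols = volume.shape
    # In-plane physical coordinates centered on the image axis
    rows = (np.arange(n_rows) - (n_rows - 1) / 2.0) * spacing_mm[0]
    cols = (np.arange(n_cols) - (n_cols - 1) / 2.0) * spacing_mm[1]
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    mask = rr**2 + cc**2 <= radius_mm**2  # disk around the central axis
    # Median of the masked pixels in each slice, down the jar length
    return np.array([np.median(volume[k][mask]) for k in range(n_slices)])
```

The per-slice medians can then be aligned against the planned depth-dose profile to fit the calibration coefficients.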

When I register the dosimeter in Slicer and apply these manual coefficients, it blows the dosimeter values up to essentially the first coefficient everywhere, as seen in the picture below (calibrated volume overlaid on the Eclipse dose, with the value shown in the Data Probe at bottom left):

When I use these values to scale the attenuation coefficients in Matlab I see a reasonable fit (unsurprisingly given the linear fit).

Any ideas on why Slicer is having a hard time with this? Thank you!

I just realized that the manual dose calibration in Gel Dosimetry expects the coefficients in the order y = b + mx, not y = mx + b. That solved my issue.
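To make the failure mode above concrete, here is a small sketch. The numbers and the `apply_calibration` helper are hypothetical; the point is just that if the tool evaluates coefficients in ascending powers (constant term first) and you enter them slope-first, every voxel lands near the value of the first coefficient:

```python
def apply_calibration(coeffs, od):
    """Evaluate dose = coeffs[0] + coeffs[1]*od + coeffs[2]*od**2 + ...
    (ascending powers: constant term first)."""
    return sum(c * od**i for i, c in enumerate(coeffs))

m, b = 120.0, 0.5   # hypothetical slope and intercept from a linear fit
od = 0.02           # a sample optical density value

correct = apply_calibration([b, m], od)  # b + m*od = 2.9
wrong = apply_calibration([m, b], od)    # m + b*od = 120.01, ~ the first coefficient
```

Entering (b, m) gives a sensible dose; entering (m, b) makes the output nearly constant at the slope value, which matches the "blown up" dosimeter seen earlier.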
