# Bone Mineral Density (BMD) measurement method

I have been told a method of calculating BMD from micro-CT data using proprietary software, and I am wondering about the accuracy of the method and about replicating it in 3D Slicer.

The way it was done in the software was: we used 3 phantoms of known density (water, 0.25, and 0.75), and the mean Hounsfield values were

water (0) = -575.936
0.25 = 1142.4
0.75 = 3267.8

so the equation was

X = (y + 11.708) / 4381.1
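For comparison, a minimal sketch of the same kind of line fit in Python with NumPy (assuming y is a mean HU value and X a density; the slope and intercept below are refitted from the three points above, so they will not exactly match the proprietary software's constants):

```python
import numpy as np

# Mean HU of the three calibration phantoms and their known densities
hu = np.array([-575.936, 1142.4, 3267.8])
density = np.array([0.0, 0.25, 0.75])

# Least-squares line fit: density = slope * HU + intercept
slope, intercept = np.polyfit(hu, density, 1)

def hu_to_density(y):
    """Convert a mean HU value to an estimated density."""
    return slope * y + intercept
```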

So I tried to do the same in 3D Slicer, and my problems are:

1. In the Segment Statistics module, is the value reported as the mean in the Scalar Volume Statistics the mean Hounsfield value?

2. Is this method of arriving at BMD accurate?

3. What is the reason for the slight difference in the mean values between 3D Slicer and the other program?

Thank you.

You get results in the same unit as the input image. For a calibrated CT, that is Hounsfield units; for a non-calibrated system, it can be anything.

3 measurement points are very few. At least for initial exploration, I would add at least 5-10 points, just to have an idea about shape and amount of noise in the curve. It is highly unlikely that the calibration curve would be a line, so fitting a line to 3 points is a potential source of error.
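To illustrate why more points help, here is a small sketch with hypothetical, made-up calibration data (a real phantom series would replace these values): with 8 points you can compare a line fit against a quadratic fit and look at the residuals to judge whether the curve is really linear.

```python
import numpy as np

# Hypothetical (HU, density) calibration measurements -- made up for
# illustration only, slightly curved on purpose.
hu = np.array([-600.0, -100.0, 400.0, 900.0, 1500.0, 2100.0, 2700.0, 3300.0])
density = np.array([0.0, 0.06, 0.14, 0.23, 0.35, 0.48, 0.61, 0.75])

def rms_residual(degree):
    """RMS residual of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(hu, density, degree)
    return np.sqrt(np.mean((np.polyval(coeffs, hu) - density) ** 2))

linear_error = rms_residual(1)
quadratic_error = rms_residual(2)
# If quadratic_error is clearly smaller than linear_error,
# a straight line is probably not a good calibration model.
```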

The GelDosimetry extension contains an automatic image intensity calibration and evaluation workflow that should be directly usable. @cpinter do you have any advice?

There could be many reasons, but I don’t know what that software does. Is that software open-source? Do they describe somewhere what they do?

How much is the difference? If it’s small then it may be due to forcing the calibration curve to be a line.


The one we used is Bruker.

The method it describes was way too complex for me, but the way it was done is very simple.

You segment out the 3 phantoms, then calculate the mean values (they are given automatically, just like in the Segment Statistics module).

Then we put the 3 values on a graph and calculate the equation of the line with LibreOffice Calc.

So from the trend line we can calculate the density for any known HU value.

As you can see, the difference between 3D Slicer and the Bruker software is very small, I guess…

Do you mean we should take a few more segments from each of the 3 phantoms, or should we use more phantoms with different densities?

GelDosimetry allows you to automatically compute the curve on many samples and also evaluates the error and how it is distributed in the image (error may be different in different regions of the phantom). These additional features may also mean more complexity - most importantly, you need a ground truth image of the phantom, which can come from a calibration CT scanner or you can construct it manually.

I would recommend using a phantom with 5-10 different densities to get an idea of the shape of the intensity calibration function. That will help you decide whether a line fit is sufficient or you need a polynomial fit.


The GelDosimetryAnalysis application is closely tailored to the gel dosimetry workflow, which includes one phantom for calibration and one for the measurement. One calibration phantom is enough, because it measures the photon beam attenuation as a function of depth, and the measurement along the central line is matched with the given percent depth dose curve. So it is quite a different workflow than what is needed here. The FilmDosimetryAnalysis application is more similar, in that it manages a list of calibration images for each dose level, but it handles 2D images (film scans) and the workflow is quite fixed.

I think your method makes sense for the current scenario with three phantoms.

Most of the difference between the two programs is probably due to a different selection of calibration ROI. Since the regions contain a lot of noise, the size and position of the segments that you draw may change the values significantly.
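The ROI-noise effect can be illustrated numerically (a hypothetical sketch with made-up numbers, not the actual phantom data): the standard deviation of the mean of N noisy voxels shrinks roughly as sigma/sqrt(N), so a smaller or shifted segment gives a noticeably noisier mean.

```python
import numpy as np

rng = np.random.default_rng(0)
true_hu, noise_sigma = 1142.4, 200.0  # hypothetical phantom mean and voxel noise

def std_of_roi_mean(n_voxels, n_trials=2000):
    """Empirical standard deviation of the ROI mean over many simulated ROIs."""
    means = rng.normal(true_hu, noise_sigma, size=(n_trials, n_voxels)).mean(axis=1)
    return means.std()

small_roi = std_of_roi_mean(100)    # e.g. a small segment
large_roi = std_of_roi_mean(10000)  # e.g. a generous segment
# small_roi is about 10x larger than large_roi: sigma/sqrt(N) scaling
```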

Fitting a polynomial on the data points is quite easy, see for example the way it is done for gel dosimetry.
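A minimal sketch of such a polynomial fit with `numpy.polyfit` (not the actual gel dosimetry code; the data are just the three HU/density values from earlier in this thread, which a quadratic passes through exactly):

```python
import numpy as np

hu = np.array([-575.936, 1142.4, 3267.8])
density = np.array([0.0, 0.25, 0.75])

# Second-degree polynomial: with exactly 3 points it interpolates them exactly.
coeffs = np.polyfit(hu, density, 2)

def hu_to_density(y):
    return np.polyval(coeffs, y)
```

Note that with only 3 points the quadratic has no redundancy left to reveal noise, which is one more reason to measure more calibration densities.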
