DICOM image intensity rescaling using non-linear regression based fitting

Hi everybody !

I want to perform a value conversion on a DICOM image by solving an equation with two unknowns (a and b):
output value = (a - input value) / (b + input value)

To do that, we decided to perform least-squares error minimization using non-linear regression based on the Levenberg-Marquardt algorithm.

To calibrate the model, we use two segmented volumes in the image where the value is known to be fixed.

Any advice for this? Thanks for your help!

You can use the arrayFromVolume and arrayFromSegment methods to get voxels as NumPy arrays. You can find examples in the script repository.

You can implement the computation in a Jupyter notebook using Slicer Jupyter, or as a scripted module.
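A minimal sketch of the masking step. In Slicer the arrays would come from `slicer.util.arrayFromVolume(volumeNode)` and `slicer.util.arrayFromSegment(segmentationNode, segmentId)`; small synthetic arrays stand in for them here so the snippet runs anywhere:

```python
import numpy as np

# In Slicer these arrays would come from, e.g.:
#   image = slicer.util.arrayFromVolume(volumeNode)
#   mask  = slicer.util.arrayFromSegment(segmentationNode, segmentId)
# Small synthetic arrays stand in for them here.
image = np.array([[10.0, 20.0], [30.0, 40.0]])
mask1 = np.array([[1, 0], [0, 0]], dtype=bool)  # first calibration region
mask2 = np.array([[0, 0], [0, 1]], dtype=bool)  # second calibration region

# Voxel values inside each calibration region, and their means
values1 = image[mask1]
values2 = image[mask2]
mean1, mean2 = values1.mean(), values2.mean()
print(mean1, mean2)  # 10.0 40.0
```

Boolean indexing with the segment mask gives a flat array of only the voxels inside the region, which is usually what you want to feed into the calibration.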

Hi, thanks for your help!

Do you think it is easier (code-wise) to use a DICOM mask OR a segmentation volume to identify the “calibration volume”?

What do you mean by DICOM mask and segmentation volume?

I mean: should the algorithm be linked to a DICOM mask (an image?) or to segmentations (contours?)

There are many ways to store segmentations in DICOM. Slicer can import DICOM RT Structure Set or DICOM Segmentation Object information objects.

Yes, I can use all of them: RT Structure Set, binary labelmap... but is it better to use a “masked image”?

RT structure set is somewhat less deterministic (due to the complex rasterization procedure, which includes contour interpolation, branching, end-capping, and keyhole resolution), but in most cases they should give equivalent results.

OK, I prefer to use RT Structure Set!

I've found this function. What do you think about it?


The scipy.optimize.least_squares function is suitable for non-linear curve fitting.
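For the model output = (a - input) / (b + input), a residual function for least_squares could look like this; the calibration numbers below are made up for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical calibration data: mean input values measured in the two
# segmented regions, and the fixed output values they should map to.
x = np.array([10.0, 40.0])   # measured input values (placeholders)
y = np.array([2.0, 0.5])     # known output values (placeholders)

def residuals(params, x, y):
    a, b = params
    return (a - x) / (b + x) - y

# Constrain b >= 0 so the denominator stays away from the poles at -x
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y),
                    bounds=([-np.inf, 0.0], [np.inf, np.inf]))
a, b = fit.x
print(a, b)  # for this data the exact solution is a = 70, b = 20
```

With two calibration regions and two unknowns the system is exactly determined, but least_squares still works and generalizes naturally if you later add more calibration points.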

Is it better than the arrayFromVolume and arrayFromSegment methods you described to me previously?

They serve different purposes. You first need to get the data using the arrayFrom* methods, probably apply some masking operation, and finally fit your model using least_squares or a similar method.
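Putting the three steps together in one runnable sketch; synthetic arrays and made-up calibration values stand in for the real data (in Slicer they would come from slicer.util.arrayFromVolume and slicer.util.arrayFromSegment):

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic stand-ins for arrayFromVolume / arrayFromSegment output
image = np.array([[10.0, 20.0], [30.0, 40.0]])
mask1 = np.array([[1, 0], [0, 0]], dtype=bool)
mask2 = np.array([[0, 0], [0, 1]], dtype=bool)
known_outputs = np.array([2.0, 0.5])  # fixed values in the two regions (placeholders)

# Step 1: mean input value in each calibration region
x = np.array([image[mask1].mean(), image[mask2].mean()])

# Step 2: fit a and b of the model output = (a - input) / (b + input)
def residuals(p, x, y):
    a, b = p
    return (a - x) / (b + x) - y

a, b = least_squares(residuals, x0=[1.0, 1.0], args=(x, known_outputs),
                     bounds=([-np.inf, 0.0], [np.inf, np.inf])).x

# Step 3: apply the fitted conversion to the whole image
output = (a - image) / (b + image)
```

The fitted output array could then be written back to a new volume node (e.g. with slicer.util.updateVolumeFromArray) for display in Slicer.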