To get started, you don’t need any Python scripting: you can use the Threshold and Mask volume effects to replace very dark and very bright values.
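(For completeness, if you did want to do the same clipping from the Python console, a minimal sketch could look like the one below. The node name and the threshold values are just placeholders to adjust for your data.)

```python
import numpy as np
import slicer

# Placeholder node name and clipping thresholds - adjust for your data
volumeNode = slicer.util.getNode("MyVolume")
lowerThreshold = 50    # values below this are considered "very dark"
upperThreshold = 300   # values above this are considered "very bright"

# Get the voxels as a numpy array (a view into the volume's data buffer)
voxels = slicer.util.arrayFromVolume(volumeNode)

# Clip the intensity range in place, then notify Slicer that the volume changed
np.clip(voxels, lowerThreshold, upperThreshold, out=voxels)
slicer.util.arrayFromVolumeModified(volumeNode)
```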
However, I don’t see how that would help with segmenting anything: clipping the intensity range only removes detail in the very bright and very dark regions, which should not matter, because the object boundary you are interested in will not lie outside the clipping thresholds. Moreover, the “Level Tracing” effect is of questionable value here, as it performs only 2D segmentation, so you would need to step through a large number of slices manually to segment anything with it.
I would recommend experimenting further with the existing segmentation tools (e.g., the “Grow from seeds” effect with a reduced “Editable intensity range”) to segment 30-50 images manually, then using those segmentations to train a neural network (MONAI Auto3DSeg, nnU-Net, etc.) so that all further segmentations become fully automatic.
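Once you have those 30-50 manual segmentations, launching training with MONAI Auto3DSeg can be as simple as the sketch below. The paths and the modality are placeholders, and you should check the Auto3DSeg documentation for the exact datalist JSON format expected by your MONAI version.

```python
from monai.apps.auto3dseg import AutoRunner

# Placeholder configuration: datalist.json lists your image/label pairs,
# dataroot is the folder containing the image and label volumes
input_config = {
    "modality": "CT",                      # or "MRI", depending on your images
    "datalist": "/path/to/datalist.json",  # training/validation split of the manually segmented cases
    "dataroot": "/path/to/dataroot",       # folder with the image and label files
}

# AutoRunner analyzes the data, generates algorithm configs, then trains and ensembles them
runner = AutoRunner(work_dir="./auto3dseg_work_dir", input=input_config)
runner.run()
```

The trained model can then be used to segment all new images automatically, and you only need to review and fix the occasional failure case.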