Generating augmented training data from nrrd file using random translations, rotations, and deformations

Hi everybody,
I can’t find any tutorial on how to connect Python and 3D Slicer without using the Python Interactor.
After doing segmentation in 3D Slicer I read the file in Python with pynrrd:
readdata, header = nrrd.read(filename)

I want to use the nrrd file to view the segmentation in Python. I have the data array and the header, but I don’t know how to display the segmentation.
Can someone guide me to some tutorial or help me to understand what to do now.

I’m very new to 3D Slicer, and I don’t want to use the Python Interactor.

Not sure I understand this part… Do you mean you want to use a different Python interpreter and not Slicer? In that case you would just use matplotlib or whatever you want to visualize the array.
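For instance, here is a minimal sketch of inspecting a loaded label array. The array here is a synthetic stand-in (the file name in the comment is illustrative); with real data you would get `readdata` from pynrrd’s `nrrd.read`:

```python
import numpy as np

# Stand-in for the array returned by pynrrd; with a real file you would do:
#   readdata, header = nrrd.read('Segmentation_1.seg.nrrd')
readdata = np.zeros((64, 64, 64), dtype=np.uint8)
readdata[10:40, 10:30, 10:30] = 1   # fake segment 1
readdata[20:50, 35:50, 35:50] = 2   # fake segment 2

# Take the middle slice along the first axis and list the labels in it;
# this 2D array is exactly what you would pass to matplotlib's plt.imshow()
middle = readdata[readdata.shape[0] // 2]
labels = np.unique(middle)
```

From there, `plt.imshow(middle)` in any plain Python interpreter shows one cross-section of the segmentation.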


Can you tell us a bit more about what you are trying to achieve?

Slicer segmentations are standard 3D or 4D nrrd files with additional metadata, which can be read with any nrrd reader. For 3D medical image visualization in Python, I would recommend using Slicer. I have not seen anything that would even come close to it. You can even use it from Jupyter notebooks.


Sorry I was not so clear; I’ve uploaded a picture of the model and the files.

I want to load this model in Python (using PyCharm) and do data augmentation (rotate the 3D image), and after that decompose the model into 3 images (what I segmented).
I don’t know how to use it and this is the reason I ask for a tutorial.

Applying random linear and warping transforms to a 3D volume is quite easy using Slicer modules, as shown in this Python code snippet; you can then use the augmented data sets to train your network as you would in any other Python environment.

Note that since Slicer includes a full Python environment, you can do everything without leaving Slicer. The advantages are that (1) you have access to full, interactive 3D visualization of many 3D data types, and (2) you can use Slicer’s built-in features to specify inputs, verify outputs, and quantify and quality-control results. We have integrated all of this with the usual Python tools, so while Slicer is running you can still connect to it from PyCharm, either through the debugging interface or the Jupyter notebook interface.
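For intuition on the “random linear transform” part, independent of Slicer’s transform nodes: augmentation amounts to drawing a random rotation and translation. Here is a small numpy sketch using the Rodrigues rotation formula; the function name and parameter ranges are illustrative, not Slicer defaults:

```python
import numpy as np

def random_rigid_transform(max_rotation_deg=15.0, max_translation_mm=10.0,
                           seed=None):
    """Build a random 4x4 rigid (rotation + translation) matrix.
    Hypothetical helper; ranges are illustrative."""
    rng = np.random.default_rng(seed)
    # Random unit rotation axis and bounded angle
    axis = rng.uniform(-1.0, 1.0, 3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-max_rotation_deg, max_rotation_deg))
    # Rodrigues formula: R = I + sin(a) K + (1 - cos(a)) K^2
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.uniform(-max_translation_mm, max_translation_mm, 3)
    return T

T = random_rigid_transform(seed=0)
# The 3x3 block is orthonormal, so T is a proper rigid transform
print(np.allclose(T[:3, :3] @ T[:3, :3].T, np.eye(3)))  # → True
```

In Slicer you would put such a matrix into a transform node and harden it on the volume; the linked script does this (plus warping) for you.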

Thank you for your help.
You convinced me, and I will try to do data augmentation with 3D Slicer. But do you know how I can do it automatically? I want to create about 10,000 samples, and if I did it by hand it would take years.

I want to take the CT cube (slices, h, w), transform it (rotate, affine, etc.), and save the result to a file on my computer. After that, load it in PyCharm and train on it.

@eran_bam this isn’t exactly what you are looking for, but it will give you an idea of how you might approach this. The function creates a numpy array with a bunch of randomly positioned and oriented slices from a volume. The array could then be fed into a machine learning network. If you study the scripts that Andras linked above, you can modify this to perturb the parameters of transforms and also extract volume arrays instead of just slices. This kind of code could also be wrapped in a generator so your network can get new training batches on demand rather than creating them all in advance.

import numpy
import vtk.util.numpy_support
import slicer

def randomSlices(volume, sliceCount, sliceShape):
    # Drive the Red slice view to extract randomly oriented slices
    layoutManager = slicer.app.layoutManager()
    redWidget = layoutManager.sliceWidget('Red')
    sliceNode = redWidget.mrmlSliceNode()
    sliceNode.SetDimensions(*sliceShape, 1)
    sliceNode.SetFieldOfView(*sliceShape, 1)
    bounds = [0]*6
    volume.GetRASBounds(bounds)
    imageReslice = redWidget.sliceLogic().GetBackgroundLayer().GetReslice()

    sliceSize = sliceShape[0] * sliceShape[1]
    X = numpy.zeros([sliceCount, sliceSize])

    for sliceIndex in range(sliceCount):
        # Random slice position inside the volume bounds (RAS coordinates)
        position = numpy.random.rand(3)
        position = [bounds[0] + (bounds[1]-bounds[0]) * position[0],
                    bounds[2] + (bounds[3]-bounds[2]) * position[1],
                    bounds[4] + (bounds[5]-bounds[4]) * position[2]]
        # Random slice normal and a transverse vector perpendicular to it
        normal = numpy.random.rand(3) * 2 - 1
        normal = normal / numpy.linalg.norm(normal)
        transverse = numpy.cross(normal, [0,0,1])
        orientation = 0
        sliceNode.SetSliceToRASByNTP(normal[0], normal[1], normal[2],
                                     transverse[0], transverse[1], transverse[2],
                                     position[0], position[1], position[2],
                                     orientation)
        if sliceIndex % 100 == 0:
            slicer.app.processEvents()  # keep the UI responsive on long runs
        imageReslice.Update()
        imageData = imageReslice.GetOutputDataObject(0)
        array = vtk.util.numpy_support.vtk_to_numpy(imageData.GetPointData().GetScalars())
        X[sliceIndex] = array
    return X

I’ve updated the volume augmentation script to apply random translations and rotations in addition to the random deformations, and to also take screenshots so that you can get a quick overview of what the augmented data looks like.

Gallery of example output:

Script that generated it:


Hi Steve and lassoan,

I tried to do everything you told me, but it does not work.
To simplify the question, can you guide me through these steps:

  1. Load the nrrd files: images.nrrd and Segmentation_1.seg.nrrd -> (cube, cube_seg)
  2. Translate both cubes with the same matrix (which matrix, never mind) and multiply them element-wise (translated_cube * translated_cube_seg).
  3. Sum along one axis (a bunch of slices, height, width) of the multiplied cube to create a 2D image
    (like: img = sum(multi_cube[:, i, :])
    image[i, :] = img).
  4. Save that image.

I think it’s very simple, but I don’t know how to do this with the 3D Slicer package.
I tried to do this with regular Python packages, but it did not work in 3D.
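The four numbered steps above can be sketched in plain numpy. This is a stand-in sketch: the nrrd loading lines are shown as comments because the file names are specific to your data, and `numpy.roll` (an integer translation) stands in for a real resampling transform:

```python
import numpy as np
import os
import tempfile

# Step 1: load the pair (in practice with pynrrd):
#   cube, _ = nrrd.read('images.nrrd')
#   cube_seg, _ = nrrd.read('Segmentation_1.seg.nrrd')
cube = np.random.default_rng(0).random((16, 32, 32))   # fake CT cube
cube_seg = np.zeros_like(cube)
cube_seg[:, 8:24, 8:24] = 1                            # fake segmentation mask

# Step 2: apply the SAME translation to both cubes, then multiply,
# which masks the CT values to the segmented region
shift = (0, 3, -2)
translated_cube = np.roll(cube, shift, axis=(0, 1, 2))
translated_seg = np.roll(cube_seg, shift, axis=(0, 1, 2))
masked = translated_cube * translated_seg

# Step 3: sum the slices (axis 0) of the masked cube into one 2D image,
# equivalent to the per-column loop in the question
img = masked.sum(axis=0)

# Step 4: save the result (np.save here; any image writer would do)
out_path = os.path.join(tempfile.mkdtemp(), 'augmented_projection.npy')
np.save(out_path, img)
```

For rotations and general affine transforms you need real resampling (e.g. `scipy.ndimage.affine_transform`, or Slicer’s transform + resample modules), but the multiply/sum/save steps stay exactly like this.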

Thanks for helping me.

Have you tried to copy-paste the complete example that I posted above?

If it works, then you can modify the input parameters one by one. For example, use your input volume instead of the downloaded sample data.

I’ve updated the example to also transform a segmentation (or label volume) that corresponds to the same volume. This way you get pairs of volume & label that you can directly use for training.
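The key point of volume/label pairs, sketched with numpy stand-ins (a toy integer translation instead of a real resampling): the identical spatial transform must be applied to both arrays, and real label resampling must use nearest-neighbor interpolation so label values are never blended:

```python
import numpy as np

rng = np.random.default_rng(42)
volume = rng.random((8, 8, 8))
labels = (volume > 0.7).astype(np.uint8)   # toy label map derived from volume

# Apply the SAME transform (here a toy integer translation) to both arrays.
# With real resampling, the label volume needs nearest-neighbor
# interpolation so that no fractional label values are produced.
shift = (1, -2, 3)
volume_aug = np.roll(volume, shift, axis=(0, 1, 2))
labels_aug = np.roll(labels, shift, axis=(0, 1, 2))

# Because both moved identically, the voxel-wise correspondence holds:
print(np.array_equal(labels_aug, (volume_aug > 0.7).astype(np.uint8)))  # → True
```

This is exactly the property the updated script preserves, so each augmented volume/label pair can go straight into training.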

Let us know how it works.

I think I will try another way. This is what I get:

I tried to figure it out, but failed.

Does the example work, if you run it as is?

Does the example work if you just change the input volume?

If the 3D volume appears off-center in the screenshots, as in the screenshot above, that’s just a cosmetic issue: all the generated volumes are still good. You can fix the screenshots by centering the 3D view before starting the script.

Make sure you use a very recent Slicer Preview Release.