@eran_bam this isn’t exactly what you are looking for, but it should give you an idea of how you might approach the problem. The function below creates a numpy array containing a number of randomly positioned and oriented slices from a volume. That array could then be fed into a machine learning network. If you study the scripts that Andras linked above, you can modify this to perturb the parameters of transforms, and also to extract volume arrays instead of just slices. This kind of code could also be wrapped in a generator so your network gets new training batches on demand rather than creating them all in advance.

```
import numpy
import vtk.util.numpy_support

def randomSlices(volume, sliceCount, sliceShape):
  layoutManager = slicer.app.layoutManager()
  redWidget = layoutManager.sliceWidget('Red')
  sliceNode = redWidget.mrmlSliceNode()
  sliceNode.SetDimensions(*sliceShape, 1)
  sliceNode.SetFieldOfView(*sliceShape, 1)
  bounds = [0]*6
  volume.GetRASBounds(bounds)
  imageReslice = redWidget.sliceLogic().GetBackgroundLayer().GetReslice()

  sliceSize = sliceShape[0] * sliceShape[1]
  X = numpy.zeros([sliceCount, sliceSize])
  for sliceIndex in range(sliceCount):
    # pick a random point inside the volume's RAS bounding box
    position = numpy.random.rand(3)
    position = [bounds[0] + (bounds[1]-bounds[0]) * position[0],
                bounds[2] + (bounds[3]-bounds[2]) * position[1],
                bounds[4] + (bounds[5]-bounds[4]) * position[2]]
    # pick a random unit normal for the slice plane
    normal = numpy.random.rand(3) * 2 - 1
    normal = normal / numpy.linalg.norm(normal)
    # pick an in-plane axis (degenerate if normal is parallel to [0,0,1])
    transverse = numpy.cross(normal, [0,0,1])
    orientation = 0
    sliceNode.SetSliceToRASByNTP(normal[0], normal[1], normal[2],
                                 transverse[0], transverse[1], transverse[2],
                                 position[0], position[1], position[2],
                                 orientation)
    # keep the application responsive during long runs
    if sliceIndex % 100 == 0:
      slicer.app.processEvents()
    imageReslice.Update()
    imageData = imageReslice.GetOutputDataObject(0)
    array = vtk.util.numpy_support.vtk_to_numpy(imageData.GetPointData().GetScalars())
    X[sliceIndex] = array
  return X
```
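
The generator wrapper mentioned above could be sketched roughly like this. Note this is just a sketch: `sliceBatchGenerator` and `makeBatch` are hypothetical names, and in practice `makeBatch` would be something like `lambda n: randomSlices(volume, n, sliceShape)` so that each batch is freshly resampled inside Slicer.

```python
import numpy

def sliceBatchGenerator(makeBatch, batchSize):
  """Yield fresh training batches on demand.

  makeBatch: a callable taking a slice count and returning a
  (count, sliceSize) numpy array, e.g. built from randomSlices above.
  """
  while True:
    yield makeBatch(batchSize)

# Usage sketch with a stand-in batch maker (replace with randomSlices):
gen = sliceBatchGenerator(lambda n: numpy.zeros([n, 64*64]), 32)
batch = next(gen)  # a fresh (32, 4096) array each call
```

Frameworks such as Keras can consume a generator like this directly (e.g. via `fit`), so no precomputed dataset needs to fit in memory.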