Memory Usage Increase in Looped Script

Hello,

I’ve written a script that calls slicer.util.loadVolume() to open a scan in each iteration of a loop, and despite calling slicer.mrmlScene.RemoveNode() each time, the memory usage still climbs in Task Manager, suggesting that the VTK object is still held in memory somewhere and never freed. Once memory fills up completely, Slicer crashes, so I guess I can’t count on the Python garbage collector to take care of it either.

I suspected that an extra reference count might be responsible, but I observed exactly the same effect even after calling UnRegister() to decrement it manually. Is there a proper way to free the memory associated with a loaded volume?
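The pattern is roughly the following (a minimal sketch; the path and node name are placeholders, not my actual data):

for i in range(100):
  slicer.util.loadVolume("/path/to/scan.nrrd")  # placeholder path
  n = slicer.util.getNode("scan*")  # placeholder node name
  slicer.mrmlScene.RemoveNode(n)  # memory keeps growing even after this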

Thanks

It may be simpler to clear the scene, but if that’s not an option, then delete not just the volume node but also its display node and storage node.
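For the scene-clearing option, a one-liner is enough (a minimal sketch; the 0 argument keeps singleton nodes so the application stays in a usable state):

slicer.mrmlScene.Clear(0)  # removes all loaded data nodes, keeps singletons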

I was running a quick test just as Andras replied: the memory does stay stable if you also delete those two nodes.

>>> def loadLoop():
...   # load the sample volume, then remove the storage and display
...   # nodes along with the volume node itself so nothing holds a
...   # reference to the image data
...   slicer.util.loadVolume("/var/folders/7l/2qsp6sqx4_b5x9kf_yzzm4lc0000gn/T/Slicer/RemoteIO/MR-head.nrrd")
...   n = getNode("MR-head*")
...   slicer.mrmlScene.RemoveNode(n.GetStorageNode())
...   slicer.mrmlScene.RemoveNode(n.GetDisplayNode())
...   slicer.mrmlScene.RemoveNode(n)
... 
>>> for i in range(100):
...   loadLoop()

If you want to find out which nodes get created for a given data type, you can compare slicer.mrmlScene.GetNodes() before and after loading, with something like:

def getNodes():
  # return the IDs of every node currently in the scene
  nodes = slicer.mrmlScene.GetNodes()
  return [nodes.GetItemAsObject(i).GetID() for i in range(nodes.GetNumberOfItems())]

nodes1 = getNodes()
# ... do your load operation
nodes2 = getNodes()
[x for x in nodes2 if x not in nodes1]  # IDs of the newly added nodes
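Since node IDs are unique, a set difference gives the same answer more compactly:

set(nodes2) - set(nodes1)  # IDs of the nodes created by the load operation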