Slicer crashes when calling multiprocessing

I want to use Python multiprocessing, but it makes Slicer crash.

Traceback (most recent call last):
  File "C:\Program Files\Slicer 4.7.0-2017-08-16\lib\Python\Lib\multiprocessing\", line 249, in _bootstrap
AttributeError: 'PythonQtStdInRedirect' object has no attribute 'close'

Does slicer have plans to fully support Python3 and PyQt5?

You can already build Slicer with Qt5; we'll probably switch to Qt5 by default in a couple of weeks.

We plan to move to Python3 when all Slicer’s dependencies support Python3.

PyQt uses GPL license, which is incompatible with Slicer’s more permissive BSD-type license (we use PythonQt, which is LGPL).

I found too many bugs in PythonQt; why not replace it with PySide? It's also LGPL.

PySide seems to support only Qt4. Is PySide2 complete and stable, with support for Python3 and the latest Qt5?

We don’t experience any issues with Qt wrapping using PythonQt, so unless we run into some major problems in the future, it is not likely Slicer would switch.

Do you know what the advantages of using PySide2 over PythonQt would be? What "bugs" have you experienced with PythonQt? Are you sure you are using it correctly? Have you reported the errors to the PythonQt developers?

  1. multiprocessing does not work
  2. QAbstractItemModel createIndex override error
  3. setItemDelegateForColumn crash
  4. Delegates do not work normally

The code runs fine on PyQt

The issues that you describe do not seem to be difficult to address; except the last one, which I don't understand. I would suggest submitting your questions (with a more specific description of each problem) to the PythonQt project.

There are some differences compared to PySide, but there doesn't seem to be any major limitation in PythonQt. The beauty of PythonQt is that it is so small and simple that it is easy to develop and maintain, even for a small team. PySide has some advantages, mainly the larger user community, but I don't think that would justify the huge amount of work required to migrate Slicer and all extensions. It is probably a better investment to spend that time fixing issues or adding more features to Slicer.

The code runs fine on PyQt

PyQt is irrelevant for us due to its restrictive license.

I have already submitted a question on sourceforge.
Model/View/Delegate some problems!


To increase the chance that your questions will be answered on sourceforge, post only one question per topic and provide much more detail, including a complete example that reproduces the problem.


You can work around the Slicer PythonQtStdInRedirect multiprocessing crash by temporarily changing sys.stdin. For example, the multiprocessing example below works in Slicer:

from multiprocessing import Pool
import sys, os
import math

original_stdin = sys.stdin
try:
    sys.stdin = open(os.devnull)
    p = Pool(5)
    # math.sqrt stands in for the worker; any picklable function works here
    print(, [4, 8, 16]))
finally:
    sys.stdin = original_stdin

Thanks @John_DiMatteo for sending the Pool recipe. Did you also get it working for multiprocessing.Process? I’m looking for something that will work robustly on all platforms and I’m thinking I’ll use subprocess with PythonSlicer.

This trick does not work on Windows and never will, although it works perfectly on macOS and Linux.

Thanks for the info @Alex_Vergara. Yes, I did some testing and now I’m thinking I’ll use QProcess since it has a very clean API and is already integrated with the signals/slots event loop. Turns out it’s pretty straightforward to start a PythonSlicer process and send pickled data back and forth through stdin/stdout. Will share some sample code if I get something working well on all platforms.


You will end up facing the same problem on Windows:

There is simply no solution available on Windows. The multiprocessing work has to be done in a separate script, guarded by the if __name__ == "__main__" trick.

In this last case the script itself must be invoked so that it runs as the main module; otherwise the multiprocessing spawn will just create several non-functioning processes. This is a real flaw in the Windows process model. The best explanation is this

Agreed, Windows and Unix took very different approaches years ago and it’s caused a lot of complexity.

For my particular use case the QProcess approach is working well on both Windows and Mac. In particular, we have some computationally intense single-threaded Python code we want to apply to a set of data already loaded in Slicer, so launching PythonSlicer instances and sending them the data will work. I don't want to fork the whole Slicer process anyway, and may at some point want to run these processes on another machine, e.g. via ssh. I'm still working on complete examples and then I'll push some code.

Here’s the example using QProcess:


I have successfully translated your code into mine with some caveats:

  • Basically you need to decode the stdout to the correct data type, it is not enough to ask pickle to do it
  • You need to deepcopy your input data
  • You can add a progress bar :wink:

import qt
import json
import pickle
import slicer
import copy
import numpy as np

from Logic import logging, utils
from slicer.ScriptedLoadableModule import *

# ProcessesLogic

class ProcessesLogic(ScriptedLoadableModuleLogic):

    def __init__(self, parent=None, maximumRunningProcesses=None, completedCallback=lambda: None):
        ScriptedLoadableModuleLogic.__init__(self, parent)
        self.results = {}
        if not maximumRunningProcesses:
            cpu_cores, cpu_processors, lsystem = utils.getSystemCoresInfo()
            self.maximumRunningProcesses = cpu_cores
        else:
            self.maximumRunningProcesses = maximumRunningProcesses
        self.completedCallback = completedCallback

        self.QProcessStates = {0: 'NotRunning', 1: 'Starting', 2: 'Running',}

        self.processStates = ["Pending", "Running", "Completed"]
        self.processLists = {}
        for processState in self.processStates:
            self.processLists[processState] = []

        self.canceled = False
        self.ProgressDialog = slicer.util.createProgressDialog(
                parent=None, value=0, maximum=100)
        labelText = "Processing ..."
        self.ProgressDialog.labelText = labelText

    def setMaximumRunningProcesses(self, value):
        self.maximumRunningProcesses = value

    def saveState(self):
        state = {}
        for processState in self.processStates:
            state[processState] = [ for process in self.processLists[processState]]
        self.getParameterNode().SetAttribute("state", json.dumps(state))

    def state(self):
        return json.loads(self.getParameterNode().GetAttribute("state"))

    def addProcess(self, process):
        self.processLists["Pending"].append(process)
        self.ProgressDialog.maximum = len(self.processLists["Pending"])
        self.saveState()

    def run(self):
        while len(self.processLists["Pending"]) > 0:
            if len(self.processLists["Running"]) >= self.maximumRunningProcesses:
                break  # wait for a running process to finish before starting more
            process = self.processLists["Pending"].pop()
            self.processLists["Running"].append(process)
        self.saveState()

    def onProcessFinished(self, process):
        self.ProgressDialog.value += 1
        self.processLists["Running"].remove(process)
        self.processLists["Completed"].append(process)
        self.saveState()
        if len(self.processLists["Running"]) == 0 and len(self.processLists["Pending"]) == 0:
            for process in self.processLists["Completed"]:
                k, v = process.result[0], process.result[1]
                self.results[k] = v
            self.completedCallback()
        if self.ProgressDialog.wasCanceled:
            self.canceled = True
        else:
  # keep launching any pending processes

class Process(qt.QProcess):
    """Runs a script in a separate PythonSlicer process via QProcess"""

    def __init__(self, scriptPath):
        super().__init__() = "Process"
        self.processState = "Pending"
        self.scriptPath = scriptPath
        self.debug = False
        self.logger = logging.getLogger("Dosimetry4D.qprocess")

    def run(self, logic):
        self.connect('stateChanged(QProcess::ProcessState)', self.onStateChanged)
        self.connect('started()', self.onStarted)
        finishedSlot = lambda exitCode, exitStatus: self.onFinished(logic, exitCode, exitStatus)
        self.connect('finished(int,QProcess::ExitStatus)', finishedSlot)
        self.start("PythonSlicer", [self.scriptPath,])

    def onStateChanged(self, newState):'-' * 40)'qprocess state code is: {self.state()}')'qprocess error code is: {self.error()}')

    def onStarted(self):"writing")
        if self.debug:
            with open("/tmp/pickledInput", "wb") as fp:
                fp.write(self.pickledInput())
        self.write(self.pickledInput())
        self.closeWriteChannel()

    def onFinished(self, logic, exitCode, exitStatus):'finished, code {exitCode}, status {exitStatus}')
        stdout = self.readAllStandardOutput()
        logic.onProcessFinished(self)

class ConvolutionProcess(Process):
    """This is an example of running a process to operate on model data"""

    def __init__(self, scriptPath, initDict, iteration):
        super().__init__(scriptPath)
        self.initDict = copy.deepcopy(initDict)
        self.iteration = iteration = f"Iteration {iteration}"

    def pickledInput(self):
        return pickle.dumps(self.initDict)

    def usePickledOutput(self, pickledOutput):
        output = np.frombuffer(pickledOutput, dtype=float)
        #output = pickle.loads(pickledOutput)
        self.result = [self.iteration, output]

One remaining question: have you tested this on Windows?

It should be possible to define custom pickling methods for VTK and MRML classes. It would be nice if you could look into this. It may be more elegant than implementing some custom serialization solution for each project.
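One way to do that, sketched here with a hypothetical stand-in class rather than a real VTK/MRML node, is to register a reducer with copyreg so pickle knows how to take the object apart into plain Python data and rebuild it on the other side:

```python
import copyreg
import pickle

import numpy as np

class VolumeData:
    """Hypothetical stand-in for a VTK/MRML image class."""
    def __init__(self, array, spacing):
        self.array = array
        self.spacing = spacing

def _reduce_volume(vol):
    # reduce the object to picklable primitives; on load, pickle calls
    # VolumeData(array, spacing) again to reconstruct it
    return VolumeData, (vol.array, vol.spacing)

copyreg.pickle(VolumeData, _reduce_volume)

vol = VolumeData(np.arange(4.0).reshape(2, 2), spacing=(1.0, 1.0, 2.0))
restored = pickle.loads(pickle.dumps(vol))
print(restored.spacing)  # (1.0, 1.0, 2.0)
```

For a real vtkMRMLScalarVolumeNode the reducer would have to extract the voxel array and the geometry (spacing, origin, directions) into numpy/tuple form, which is the part that would need to be written once per node type.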

In my case I just output a numpy array from the CPU-intensive calculation. But you are right, standard pickling methods would be great, especially for generic intensive computations.

This algorithm can be made generic for any process: basically, a class that accepts a dictionary as input (any kind of data wrapped into a dictionary). See my convolution class:

import numpy as np
import pickle
import sys
import Logic.logging as logging

class Convolution:
    '''Convolution class to be run either sequentially or in parallel.
    For parallel execution a wrapper function must be implemented as:
        def multi_run_wrapper(args):
            convolution = Convolution(*args)
            return convolution.convolute()
    where args must be a tuple with
        args = (indexes, p0, DVK, distance_kernel, dens_array, act_array, num, boundary)
    '''

    def __init__(self, indexes, p0, DVK, distance_kernel, dens_array, act_array, num, boundary):
        self.logger = logging.getLogger('Dosimetry4D.convolution')
        self.indexes = indexes
        # ... store the remaining arguments the same way ...

    def convolute(self):
        ''' Non-homogeneous convolution over a list of voxels (vectorized),
            with variable density correction by distance
        '''
        res = np.zeros_like(self.indexes, dtype=float)
        # .....    Computation     .....
        return res

if __name__ == "__main__":
    try:
        pickledInput =
        input = pickle.loads(pickledInput)

        convolution = Convolution(**input)  # the input dictionary is picklable
        output = convolution.convolute()  # run the computation in this process
        sys.stdout.buffer.write(output.tobytes())  # raw bytes, decoded with np.frombuffer
    except Exception as e:
        print(e, file=sys.stderr)

And the calling method is

                    logic = ProcessesLogic()
                    scriptPath = self.script_path / "Logic" / ""

                    chunks = 100  # to avoid overhead
                    iteration = 0
                    acc = 0
                    args1 = {  # wrap arguments for multiprocessing
                            'num': num,
                            # ... the other Convolution arguments ...
                    }
                    try:
                        while acc < length1:
                            stepsize = max(1, min(int(length1/chunks), length1-acc))
                            lindex = copy.deepcopy(args1)
                            lindex['indexes'] = np.array(indexes[acc:acc + stepsize])
                            convolutionProcess = ConvolutionProcess(scriptPath, lindex, iteration)
                            logic.addProcess(convolutionProcess)
                            acc += stepsize
                            iteration += 1"Launching convolution in {self.CpuCores} CPU cores")
                        canceled = logic.canceled
                    except Exception as e:
                        return False

For reference, if you use pickle to decode the output you will usually get the following error:

  File "/Users/alexvergaragil/Documents/GIT/dosimetry4d/Dosimetry4D/Logic/", line 131, in usePickledOutput
    output = pickle.loads(pickledOutput)
_pickle.UnpicklingError: could not find MARK

This only means that pickle does not know how to read the output.
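A two-line check makes the failure mode concrete: raw numpy bytes are not a pickle stream, so pickle.loads raises UnpicklingError (the exact message, "could not find MARK" or "invalid load key", depends on the leading bytes), while np.frombuffer decodes them fine:

```python
import pickle

import numpy as np

# simulate the child process output: raw array bytes, not a pickle stream
raw = np.array([1.0, 2.0, 3.0]).tobytes()

try:
    pickle.loads(raw)  # not pickled data, so this raises UnpicklingError
except pickle.UnpicklingError as exc:
    print("pickle failed:", exc)

print(np.frombuffer(raw, dtype=float))  # [1. 2. 3.]
```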