Can't use `launchConsoleProcess`

Hi,

Following this topic, I’m trying to run a simple command and see its output in the Slicer Log Messages window:

slicer.util.launchConsoleProcess([sys.executable, '-c', 'print("Hi")'])

In the Python console I get the output <Popen: returncode: None args: ['C:/C/r/python-install/bin/PythonSlicer.exe'...>

But I hoped to see Hi in Log Messages.

Maybe I misunderstand something?

Or it seems I need to read the data from the returned Popen object:

p = slicer.util.launchConsoleProcess([sys.executable, '-c', 'print("Hi")'])
print(p.stdout.read())

But is there a way to run a subprocess with its stdout redirected to the Slicer Log Messages?
Something similar to:

subprocess.run(my_cmd, stdout=slicer.stdout)

slicer.util.launchConsoleProcess is typically used with slicer.util.logProcessOutput to wait for the execution to complete and forward the process output to the application log.
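For reference, here is a plain-Python sketch of that pairing. The forwarding loop below is only an approximation of what slicer.util.logProcessOutput does (reading the child's stdout line by line and sending each line to the application log via Python's logging module); the real helper also handles interruption and error reporting:

```python
import logging
import subprocess
import sys

def log_process_output(proc):
    # Forward the child's stdout line by line to the application log.
    # Approximation of slicer.util.logProcessOutput; the collected
    # lines are returned here only to make the sketch easy to inspect.
    lines = []
    for line in proc.stdout:
        text = line.rstrip()
        logging.info(text)
        lines.append(text)
    proc.wait()
    return lines

# launchConsoleProcess essentially wraps Popen with a piped stdout
proc = subprocess.Popen(
    [sys.executable, "-c", 'print("Hi")'],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
log_process_output(proc)
```

Note that this loop blocks until the child exits, which is why logProcessOutput freezes the application for long-running commands.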


Thank you,

Is there a way to do such a thing in the background, i.e. without blocking Slicer?

I’ve seen a similar topic — would @pieper 's solution with the SlicerProcesses extension help me? I’m going to execute a heavy computation that may take a long time, and it would be very attractive to run it in the background and see all output somewhere in the Slicer GUI.
I haven’t tested SlicerParallelProcessing yet.

And the last option is probably the Python CLI module mechanism, but it scares me a little since it requires describing the GUI in XML :slight_smile:

If you only need to run one long-running process at a time then you can use the CLI infrastructure (calling either C++ or Python CLI modules). If you start multiple CLI modules then they are executed one after the other, so if you want to run several of them in parallel then you would need to improve the CLI infrastructure or use another mechanism.

SlicerProcesses manages many long-running tasks in parallel, so you can use this for managing your background processes.

You can also manage processes manually: launch them using launchConsoleProcess, then monitor their status and gather their output using customized versions of logProcessOutput (see SlicerElastix for an example).

Probably SlicerProcesses would be the best solution for your needs.

1 Like

Is it OK to run Python threads within Slicer?
For example, create a Python module and start a thread:

  from threading import Thread
  from time import sleep

  def threaded_function(self, arg):
    for i in range(arg):
      print("running")
      sleep(1)

  thread = Thread(target=self.threaded_function, args=(10,))
  thread.start()

At first glance it works: the text “running” appears with some time step (definitely not 1 second, but still) and I’m able to use Slicer while it is executing.

I would only use threads for very short, very frequent operations (where launching a separate process would be significant overhead), for tasks that need complex interaction with the scene, or when operating on extremely large data (where copying it into another process would cause significant extra memory usage). For most other cases, launching a separate process for background processing is preferable, because it is:

  • interruptible (you cannot force a thread to stop from the same process, but you can always terminate a separate process)
  • safer (if an algorithm crashes in a subprocess then it won’t crash the main application, but if an algorithm crashes in a background thread of the main application then it will make the main application crash)
  • more versatile (you can run any binary, you can run code in a different virtual Python environment, etc.)
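To illustrate the interruptibility point: a Python thread cannot be forcibly stopped from within the same process, but a child process can always be terminated from the main application. A minimal plain-Python sketch (not Slicer-specific):

```python
import subprocess
import sys
import time

# Start a child process standing in for a long-running computation
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(1000)"]
)

time.sleep(0.5)              # give it a moment to start
assert proc.poll() is None   # still running

proc.terminate()             # stopping a process is always possible
proc.wait(timeout=10)        # reap it; returncode is now set
```

In Slicer the same terminate() call could be wired to a Cancel button, something a background thread cannot offer.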

That makes sense.
Thank you for sharing these ideas; I hadn’t thought about that.

It seems I’ve found a neat solution to the issue: QProcess.
Advantages: signals/slots fire when new data is available on the stream, when the process finishes, etc.

# somewhere in setup
self.qprocess = qt.QProcess()
# without qt.QProcess.MergedChannels, onProcessReadyReadStandardOutput is
# called only once, at the end of the subprocess execution
self.qprocess.setProcessChannelMode(qt.QProcess.MergedChannels)
self.qprocess.connect('readyReadStandardOutput()', self.onProcessReadyReadStandardOutput)
self.qprocess.connect('errorOccurred(QProcess::ProcessError)', self.onProcessErrorOccurred)
self.qprocess.connect('finished(int)', self.onProcessFinished)

#.........................................................................

# slots
def onProcessReadyReadStandardOutput(self):
  print("QProcess output:\t", str(self.qprocess.readAllStandardOutput()))

def onProcessErrorOccurred(self, err: qt.QProcess.ProcessError):
  print("QProcess error:\t", str(err))

def onProcessFinished(self, exitCode: int):
  print("QProcess finished:\t", str(exitCode))

def onStartButtonClicked(self):
  self.qprocess.setProgram('C:\\my.exe')
  self.qprocess.setArguments(['-E', 'my_command'])
  # startDetached() could be used instead: as I understand it, a detached
  # process keeps running even after the application is closed
  self.qprocess.start()
  # execution continues here without blocking

This doesn’t block Slicer or Python, and the output appears in the console as it becomes ready.

Are there any disadvantages?

A few links that helped me:

P.S. Another option is to use a QTimer and emit a signal every, say, 10 seconds to do the updates (read the output), but around each update the GUI may freeze for half a second (probably when the signal is emitted)
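For what it's worth, the freeze in a timer-based approach usually comes from blocking reads on the pipe. One way around it is a background reader thread that pushes output lines into a thread-safe queue, so the periodic slot only drains the queue and never blocks. A plain-Python sketch of this pattern (not Slicer-specific; the helper name is mine):

```python
import queue
import subprocess
import sys
import threading

def start_reader(proc):
    # Pump the child's stdout into a queue from a daemon thread so a
    # periodic (e.g. QTimer-driven) slot can drain it without blocking.
    q = queue.Queue()

    def pump():
        for line in proc.stdout:
            q.put(line.rstrip())
        q.put(None)  # sentinel: the stream has closed

    threading.Thread(target=pump, daemon=True).start()
    return q

proc = subprocess.Popen(
    [sys.executable, "-c", 'print("step 1"); print("step 2")'],
    stdout=subprocess.PIPE, text=True,
)
q = start_reader(proc)

# In Slicer this drain loop would live in a QTimer timeout slot and use
# q.get_nowait(); here we simply drain until the sentinel to demonstrate.
lines = []
while (item := q.get(timeout=10)) is not None:
    lines.append(item)
```

With get_nowait() in the timer slot, each tick only takes however long it takes to empty the queue, so the GUI stays responsive.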

Yes, a SlicerParallelProcessing Process is a subclass of QProcess so it has all the advantages of using signals and slots to integrate with the event loop of the app.

A QTimer could be helpful to give some progress information to the user, but yes, I’d suggest relying on the signals to determine when there is data ready to read so you don’t block.
