I would like to move some jobs to another QThread to avoid freezing the main (GUI) thread.
I tried Python's ThreadPoolExecutor to create a thread, but the child thread runs too slowly. Following the suggestion in multithreading-in-extension, I tried QTimer; it also froze the GUI.
If I click the button to test it, slicer crashes with the following error:
"
Received signal 11 SEGV_MAPERR 559d00000002 #0 0x7f58faea558f #1 0x7f58f98d785d #2 0x7f58faea5a9e #3 0x7f58e49ff890 #4 0x7f58eaddfe2d QThreadPoolThread::run() #5 0x7f58eade9554 QThreadPrivate::start() #6 0x7f58e49f46db start_thread #7 0x7f58d877e88f clone
"
I have no idea what is wrong.
Could anyone point me to how to create a QThread in Slicer with Python? Thanks.
Python multi-threading is really messy in general and it is further complicated by using a Python interpreter embedded in an application.
I would recommend running background processing in a Python CLI module (which runs processing in a separate process) or implementing it in a C++ loadable module (where you can use QThread).
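To illustrate the separate-process idea with the standard library only: the heavy work runs in another process, and the GUI thread merely polls for completion. This is a minimal sketch, not Slicer's CLI mechanism; `heavy_work` is a hypothetical stand-in for your job, and in Slicer you would check the future from a QTimer callback instead of the polling loop shown here.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def heavy_work(n):
    # Placeholder for CPU-bound processing that would freeze the GUI thread.
    return sum(i * i for i in range(n))

def main():
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(heavy_work, 100000)
        # In a GUI, a QTimer callback would call future.done() periodically
        # and update the interface; here we just wait in short sleeps.
        while not future.done():
            time.sleep(0.01)
        return future.result()

if __name__ == "__main__":
    print(main())
```

Because the work happens in a child process, it cannot block the interpreter embedded in the application, which is exactly what the Python CLI module approach gives you.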
yep, if you try to do this more than once, Slicer crashes, but if you wait until the end it works pretty well. So you have to enforce some kind of lock yourself.
Oh, and this only works with Python 3; I forgot to mention this.
To elaborate: SlicerPython is attached to the console, which is locked by Slicer itself. This procedure temporarily frees the console input and sets a virtual input with no owner (no GIL contention). If you try to use the Slicer console while it is calculating, SlicerPython will not read from it (not that bad). If you try to execute the pool again while it is already running, the system will see colliding threads and will kill Slicer.
Instead of using terminate and join, try using quit and wait respectively, as they are the recommended methods with QThreadPool.
Anyway, if you want to try the multiprocessing module (note that `Worker` is your own class and `self.CpuCores` comes from the surrounding module; also, `imap` expects one argument tuple per job, not a single string):

```python
import logging
import os
import sys
from multiprocessing import Pool

def worker_wrapper(args):
    worker = Worker(*args)  # Worker is your own class implementing run()
    return worker.run()

original_stdin = sys.stdin       # Unlock SlicerPython's hold on stdin
sys.stdin = open(os.devnull)

args = [('your arguments here',)]  # one tuple of arguments per job
p = Pool(self.CpuCores)
try:
    # Start producing results; imap returns a lazy iterator
    iresults = p.imap(worker_wrapper, args)
    p.close()
    for i, result in enumerate(iresults):
        print("performed {} jobs".format(i + 1))  # follow progress here ;)
except Exception as e:
    # If something goes wrong, force the pool to terminate
    canceled = True  # necessary, no "return False" allowed here
    p.terminate()
    logging.error(e)
finally:
    p.join()
    sys.stdin.close()
    sys.stdin = original_stdin
```
This sounds very fragile. The Python CLI or C++ module options are much more reliable. Currently, Slicer’s scheduler runs Python CLIs one by one, but it would be possible to change this so that tasks that do not depend on completion of other tasks could all run in parallel.
I found this poster here talking about using multiprocessing.Manager. Due to the stdin problem, it does not work in Slicer.
I simply set a flag before calling the subprocess and reset it in the callback function.
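That guard-flag pattern can be sketched like this. The names (`JobGuard`, `startJob`, `onJobFinished`) are illustrative, not Slicer API; in Slicer the callback would be the completion observer you attach to the CLI node.

```python
# Minimal sketch of the guard-flag pattern: refuse to launch a second job
# while one is still running, and reset the flag from the completion callback.
class JobGuard:
    def __init__(self):
        self.isRunning = False

    def startJob(self, job):
        if self.isRunning:          # a job is still in flight: refuse to start
            return False
        self.isRunning = True       # set the flag before launching
        job(self.onJobFinished)     # launch the job, handing it the callback
        return True

    def onJobFinished(self):
        self.isRunning = False      # reset the flag so a new job may start

# Usage with a synchronous stand-in job that finishes immediately:
guard = JobGuard()

def fake_job(done):
    done()

guard.startJob(fake_job)  # runs and resets the flag via the callback
```

This is exactly the kind of lock mentioned above: it prevents the colliding second pool execution that crashes Slicer.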