I am currently building a module for image segmentation using a U-Net model that has already been trained. When I test the module, execution blocks on the call to tensorflow.keras.models.load_model() and Slicer stops responding. Unfortunately, there is no error message.
When I load the model with the same function in PyCharm, using Slicer's Python as the interpreter, everything works. I also checked the environment variables: PYTHONHOME and PYTHONPATH are the same whether the code runs inside or outside Slicer.
Has anyone had the same problem? How can I solve it?
Hard to say exactly, but if you've done anything that changes environment variables or paths, or that mixes Python libraries from different distributions, that could cause this kind of behavior.
TensorFlow has always worked for me, including loading models, by using pip_install("tensorflow") in Slicer's Python. I would start fresh with a new Slicer install and report what you did step by step.
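For example, something like this in Slicer's Python console is usually enough (a minimal sketch; the version check at the end is just for illustration):

```python
# Run inside Slicer's Python console (Python Interactor).
from slicer.util import pip_install

pip_install("tensorflow")

# Quick sanity check that the package is importable afterwards.
import tensorflow as tf
print(tf.__version__)
```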
In fact, I have tried what you suggested. First, I deleted Slicer and installed a fresh copy. Then I pasted my model-loading code into the script of a new Slicer module and installed all the necessary packages (tensorflow, cv2, tqdm, skimage, etc.) with pip_install("...") in Slicer's Python Interactor. When I tested the new module, the problem was still there: execution blocked at the model-loading step, i.e. the call to tensorflow.keras.models.load_model().
I also tried to debug with Slicer's DebuggingTools extension and PyCharm. PyCharm reported "no frame is available" when I tried to step into model_config = f.attrs.get('model_config') in hdf5_format.py. I hope this information helps…and thanks again for your help.
My guess would be an incompatible HDF5 library being pulled in by one of the pip packages. You won't be able to debug that with PyCharm, but there are system tools (such as Sysinternals) that can help. If nothing works, maybe you could share a reproducible example, e.g. a model file and the script that does the install and the load that hangs; then maybe someone can trace what's happening.
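For what it's worth, the repro could be as small as this (a sketch only; the model path and file name are placeholders, to be replaced with the actual file that hangs):

```python
# Hypothetical minimal script, run from Slicer's Python console.
# Share this together with the model file that reproduces the hang.
from slicer.util import pip_install

pip_install("tensorflow")

import tensorflow as tf
print("TensorFlow", tf.__version__)

# Placeholder path; the reported hang happens on this call.
model = tf.keras.models.load_model("/path/to/unet_model.h5")
model.summary()
```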
HDF5 is a very troublesome library; there are problems with it all the time due to binary incompatibility issues. One solution is to save your model in the recommended SavedModel format instead of the problematic legacy HDF5 format (referred to as H5 in the Keras documentation).
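If you can re-export the model in an environment where loading still works (e.g. your training setup or PyCharm), the change is minimal. A sketch, with illustrative file names:

```python
import tensorflow as tf

# In an environment where the H5 file still loads fine (e.g. outside Slicer):
model = tf.keras.models.load_model("unet_model.h5")  # legacy HDF5 file

# Re-save in the SavedModel format: a directory, no .h5 extension needed.
model.save("unet_model_savedmodel", save_format="tf")

# In Slicer, load the SavedModel directory instead of the H5 file,
# so the HDF5 runtime is no longer involved at load time.
reloaded = tf.keras.models.load_model("unet_model_savedmodel")
```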