Hi everyone!
When I tried to use a custom pre-trained Torch model in a scripted module, I came across an inconsistency that I can't explain. Following this guide, I exported my model using `torch.jit.script` and `torch.jit.save`, and I am now trying to reload it using `torch.jit.load`.
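For reference, here is a minimal sketch of the round trip I'm doing. `TinyModel` is a hypothetical stand-in; my actual architecture is different, but the export/reload calls are the same:

```python
import os
import tempfile

import torch
import torch.nn as nn


# Hypothetical stand-in for the real pre-trained model (architecture unknown here).
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


# Export: script the module and serialize it to a .pt file.
scripted = torch.jit.script(TinyModel())
path = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.jit.save(scripted, path)

# Reload: torch.jit.load is the counterpart of torch.jit.save.
# This is the call that fails with std::bad_alloc inside Slicer.
reloaded = torch.jit.load(path, map_location="cpu")
out = reloaded(torch.zeros(1, 4))
```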
However, when I try to do this in a scripted module, I get the following error:

```
File "/home/.../Slicer-5.8.0-linux-amd64/lib/Python/lib/python3.9/site-packages/torch/jit/_serialization.py", line 163, in load
    cpp_module = torch._C.import_ir_module(cu, os.fspath(f), map_location, _extra_files, _restore_shapes)  # type: ignore[call-arg]
MemoryError: std::bad_alloc
```
The same error occurs if I repeat the process in the Slicer Python terminal.
However, if I run PythonSlicer in a terminal and repeat the process, the model loads without any problems.
I have tried exporting the model as both a .zip and a .pt file.
I have more than enough free memory to load the model.
Slicer version: 5.8.1.
The model was exported using Torch 2.1.0.
The Torch version in Slicer is 2.5.1.
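To rule out an environment mix-up, this is the kind of check I can run both in the Slicer Python console and in PythonSlicer to confirm which Torch installation each one actually imports (the attribute names below are standard PyTorch, nothing Slicer-specific):

```python
import torch

# Compare these between the Slicer console and PythonSlicer:
print(torch.__version__)  # which Torch version is loaded (2.5.1 inside Slicer)
print(torch.__file__)     # which installation the interpreter actually imports
print(torch.version.debug)  # whether this build is a debug build
```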
Can anyone tell me what could be causing this problem/inconsistency?