Machine Learning Model Importing

Hi!

I am currently building a module that involves importing a deep learning model trained with Keras to predict segmentations. My initial question was how to run the keras/tensorflow packages inside the module I would like to create. I found an answer to that question here: Loading a deep learning model to a scripted module

What I’m wondering is whether this approach remains feasible if I package these modules into an extension. If a user downloaded the extension on a different computer and used it there, would the packages still install into their copy of Slicer’s Python? If not, do you have any recommendations or resources on how to approach this?

Thanks!

It can be hard to distribute models given the constraints of computing environments. We think it will be easier once Slicer moves to Python 3 (maybe as soon as next week), but there are still some potential problems. You might look at bundling through DeepInfer, which uses Docker for the computation.

http://www.deepinfer.org/

Any updates on the “best practice” for developing a Slicer module that uses tensorflow under the hood? We’re looking to implement https://arxiv.org/abs/1908.05782 as a module and would like to start with our best foot forward. The DeepInfer approach with a Docker backend seems attractive, but it would add appreciable setup overhead on Windows client machines, which are our target.

Hi -

With the Slicer nightly you can use the slicer.util.pip_install() command to install tensorflow and many other standard packages directly. This opens the door to a lot of interesting things, but the exact practices haven’t yet been explored. It would be great to see people try some experiments. I’d love to see how things go, because getting some of these models running directly in Slicer has been on my wishlist for quite a while.
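
For example, a common pattern in scripted modules (a minimal sketch, not an official recipe) is to install the dependency lazily on first import:

```python
import slicer

try:
    import tensorflow as tf
except ImportError:
    # Installs into Slicer's bundled Python; requires network access
    # and only needs to run once per Slicer installation.
    slicer.util.pip_install("tensorflow")
    import tensorflow as tf
```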

We might be those guinea pigs… Our decision point hinges on the most robust way to support both CPU and GPU hardware with the package install, since the latter would need to play nicely with CUDA library installs, etc. We already have an existing pain point when doing that on a machine-to-machine basis, and Docker solves a lot of those problems. Some packages, like (I think) pytorch, bundle the necessary libraries with the package install, which would be another approach.

I believe it should work if the user has the other required dependencies installed (e.g. CUDA). I’ve tested other Python libraries pip-installed in Slicer that access the GPU, and they worked.

We use both tensorflow CPU and GPU in Slicer on Windows without Docker. You may need to copy some CUDA DLLs manually to make the GPU work, which should be no problem for developers, but it may take some work to make this fully automatic and robust for non-technical users.
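
If you want to detect a missing runtime up front, something like this could work (a hypothetical check; the exact DLL name depends on the CUDA version your tensorflow build expects, and cudart64_100.dll here assumes CUDA 10.0):

```python
import ctypes

try:
    # Try to load the CUDA runtime from the DLL search path (Windows only).
    # cudart64_100.dll is the CUDA 10.0 runtime; adjust for your CUDA version.
    ctypes.WinDLL("cudart64_100.dll")
    print("CUDA runtime found")
except OSError:
    print("CUDA runtime DLL not found on the DLL search path")
```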

Overall, there are many deployment options to choose from, with several examples you can follow (DeepInfer, TOMAAT, NVIDIA Clara segment editor effect, AIGT). The best option depends on your constraints: what hardware you need to support, how computationally demanding your application is, your time constraints, network connectivity, privacy concerns, how many users you need to serve, who your users are, etc.

I have been doing some experiments connecting a few automatic segmentation and object detection architectures (e.g. U-Net, RetinaNet) with TOMAAT, and it seems like a simple tool for prototyping. I might post a short video of initial results next week.

@Lassoan do you have specific instructions to get tensorflow with GPU enabled under Slicer?

If you pip install tensorflow 2 on Windows or Linux, you can use both CPU and GPU. No special instructions are needed (other than that setting up CUDA on Linux can be tricky in general). See details in the release notes.
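
For a quick sanity check (a minimal sketch; tf.config.list_physical_devices is available in recent tensorflow 2 releases), you can verify that the GPU is visible after installing:

```python
import slicer

# Install tensorflow 2 into Slicer's Python environment; on Windows/Linux
# the standard wheel supports both CPU and GPU execution.
slicer.util.pip_install("tensorflow")

import tensorflow as tf
print(tf.__version__)
# An empty list means no GPU was detected and tensorflow will run on CPU.
print(tf.config.list_physical_devices("GPU"))
```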