Access mrmlScene from terminal using PythonSlicer

Hi everybody.
I have used PythonSlicer from the terminal before to run scripts with the environment and dependencies that come with Slicer, e.g. ./PythonSlicer ~/test.py

Now I want to run a Python script using PythonSlicer, but I want to be able to use mrmlScene from within the script. I do not need to access any buttons, slider widgets, etc.; those values can be hardcoded. For example, I want to be able to run the following:
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")

Basically, I want to test Python scripts in Slicer without having to launch the application, so that the tests can run in a Docker image.
Please suggest ways to make this happen.

Also, I have observed that various modules, such as the Vascular Modeling Toolkit, usually have a Reload and Test button whose code is written in a test class within the module's .py file.
Is there any way to run this test without launching the Slicer application?

Hello,

I'm having the same problem; has any solution to this been found? The reason I care about PythonSlicer.exe is that it provides live feedback, unlike running scripts in Slicer.exe. Is there any way to get the same functionality as in Slicer.exe?

As far as I know, you will need to launch the Slicer application itself one way or another, not just the Python environment. Something like this could work:

[path/]Slicer.exe --no-main-window --python-script [path/]YourScript.py
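
For example, YourScript.py could be a minimal sketch like this (untested; slicer.util.exit() should close the application once the script is done):

import slicer

# Create a segmentation node in the MRML scene, as in the original question
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
print("Created node:", segmentationNode.GetID())

# Quit the application so control returns to the terminal (or the Docker build)
slicer.util.exit()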

But let’s wait for people who are more expert on Slicer’s Python environment than I…

Yes, if you need the functionality of Slicer then you need to use the full application as your Python environment.
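
If the goal is to run a module's self-test (the code behind the Reload and Test button) without the GUI, something along these lines might work; MyModule and MyModuleTest are placeholders for the actual module and its test class, and the module must already be installed in your Slicer:

[path/]Slicer.exe --no-main-window --python-code "import MyModule, slicer; MyModule.MyModuleTest().runTest(); slicer.util.exit()"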

In the bigger picture, I was trying to use PythonSlicer as a “weak link” between Slicer's capabilities and, in this case, a Unity application: I would send code, it would send feedback and store results in a designated folder, where I would pick them up. The lack of feedback from Slicer.exe is unfortunate imo.
I expected a “strong link” to be much more labor intensive, that is, sharing a direct data stream, storing data only in memory, and perhaps sharing a process that can be seen and manipulated from both the Slicer GUI and the Unity app.
Your responses have redirected me towards that approach, which I still need to research. Thanks.

Why not use the Slicer WebServer module for this link? Or maybe OpenIGTLink?

Because I, correctly, assumed they would be a mountain of work for me. With this post, I aim to check my approach and avoid needlessly committing to the mountain. This need comes from my inability to find easily understood existing discussion/documentation in this area, so I expect it to be useful for others as well.

Generally, I was hoping to cheaply test a way to build an AR/VR viewer/simulator for volumes/segmentations in Unity. Unity rather than SlicerVR because I see easier future expansion by using its physics engine and performance-efficient realism, or by making the app standalone for narrower applications. I am also guided by a vision of using the 3D Slicer desktop UI (keyboard + mouse) while viewing models in VR, similar to what the Desktop+ and BlenderVR combo can do.

Trying to link the applications, I went for OpenIGTLink, as suggested, because its protocol (by my understanding) is more suitable than the WebServer's HTTP for exchanging live feeds, e.g. of transformations or from multiple users in one session.
The currently targeted process looks like this:

  1. Launch AR/VR app
  2. To connect to Slicer, find the .exe (currently by entering the path manually)
  3. Launch Slicer.exe with a Python script that launches an OpenIGTLink server (alternatively, enter the server details in the Unity app interface); a sketch of such a script follows this list
  4. Launch any Python script from the AR/VR interface to be executed in Slicer
  5. Get an AR/VR view of any mesh that is in the 3D view in Slicer (later maybe other node types)
  6. Interact by transforming, slicing, and measuring the model in AR/VR, while any changes are synchronized with 3D Slicer, which displays their equivalent (if available) in the 3D view.
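
A minimal sketch of the server-launching script from step 3 might look like this (untested; it assumes the SlicerOpenIGTLink extension is installed, and 18944 is the conventional OpenIGTLink port):

import slicer

# Create a connector node and run it as a server for the Unity app to connect to
connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeServer(18944)
connector.Start()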

Critique of the above approach is welcome.

Finally, I am faced with a steep learning curve that comes from trying to use OpenIGTLink to receive and execute scripts and later send polydata, while being a network noob and C++ illiterate. I am trying to learn by reading through the AR Planner code, but would welcome any beginner-friendly resources on this topic.
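
From what I have gathered so far, sending a model's polydata from Slicer might be as simple as registering the node with a running connector. A sketch, assuming the connector from the snippet above is running in the same session and “MyModel” is a placeholder node name:

import slicer

# Look up an existing model node and push its polydata to connected clients
modelNode = slicer.util.getNode("MyModel")
connector.RegisterOutgoingMRMLNode(modelNode)
connector.PushNode(modelNode)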