Hi there!
I want to stream 2 different 3D views over a web server for a 3D (stereo) augmented reality application in Python. It already works for one 3D view.
How can I do this? Is there a better/faster way than using a web server?
thank you!
Right now the threeD view endpoint doesn’t take a view id parameter like the slice view one does, but this would be easy to extend if you wanted to.
In general, though, the web server isn't going to be the highest-performance option, so you may want to explore something like the HoloLens remoting system.
Thank You!
For a first proof of concept the plan was to go with the web server.
The documentation says: "Get screenshot of the first 3D view after applying parameters." Does "after applying parameters" mean "the active 3D view"?
If yes: I changed the active threeD view and verified the change by putting the following code in the console:
layoutManager = slicer.app.layoutManager()
targetThreeDViewIndex = 2
targetThreeDWidget = layoutManager.threeDWidget(targetThreeDViewIndex)
targetThreeDViewNode = targetThreeDWidget.mrmlViewNode()
slicer.app.applicationLogic().GetSelectionNode().SetActiveViewID(targetThreeDViewNode.GetID())
slicer.app.applicationLogic().GetSelectionNode().GetActiveViewID()
→ But the webserver always delivers the same threeD view.
Why doesn’t it work?
I think @pieper comment above explains what you are observing:
To better understand, consider reviewing the implementation of the corresponding endpoint in Modules/Scripted/WebServer/WebServerLib/SlicerRequestHandler.py#L1009-L1077
corresponding to the endpoint documented at https://slicer.readthedocs.io/en/latest/user_guide/modules/webserver.html#get-threed
Yes, @jcfr is correct.
Interestingly, I see now that the view parameter is being parsed, but then later it's not being used (0 is hard-coded).
So this would be an easy fix if someone wants to give it a try.
Does "after applaying parameters mean “the active 3D view”?
This means that any view options specified in the url parameters, like changing the view direction, are applied before the screenshot is returned.
Hi!
I have started to explore the HoloLens remoting system. In my opinion, it is too extensive for my requirements. I also do not have a HoloLens, so it is cumbersome for me to understand the code.
In principle, I just need to broadcast 2 different threeD views from 3D Slicer to an external Python programme. I've sketched my setup in the following image:
Do you have any tips on how to do this and get better performance than with the WebServer?
Thank you!
I would suggest starting with the WebServer implementation and then optimizing if it's not fast enough. On a local machine or local network it can provide images pretty quickly, although it depends on how big they are, etc. Surgical microscopes usually don't move very quickly, so it could be okay, and it is probably easiest to just make the small fix suggested above. If you do need to optimize with a dedicated connection, you should first determine exactly what the bottlenecks are and what performance you actually need.
Ok, thank you for your input!
Hi,
The following code shows how the ThreeD View image is loaded into the external Python programme:
import cv2
import numpy as np
import urllib.request

def load_from_3DSlicer(url):
    # Download the image, convert it to a NumPy array, and decode it to OpenCV format
    response = urllib.request.urlopen(url)
    image = np.asarray(bytearray(response.read()), dtype="uint8")
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    return image

def main():
    img_url = "http://localhost:2016/slicer/threeD"
    threeDslicer_img = load_from_3DSlicer(img_url)
    cv2.imshow('3D Slicer image 3D', threeDslicer_img)
    cv2.waitKey()

if __name__ == '__main__':
    main()
Is there a much faster/better approach in general? If yes, could you please give me a hint?
The whole function "load_from_3DSlicer" takes about ~2.2 s on the computer currently used, of which ~2.12 s are spent in the line "response = urllib.request.urlopen(url)".
thank you!
Update:
I tried it now with asynchronous requests, e.g.:
import asyncio

import aiohttp
import cv2
import numpy as np

async def fetch_image(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            content = await response.read()
            return content

url = "http://localhost:2016/slicer/threeD"
loop = asyncio.get_event_loop()
image = loop.run_until_complete(fetch_image(url))
image = np.asarray(bytearray(image), dtype="uint8")
image = cv2.imdecode(image, cv2.IMREAD_COLOR)
cv2.imshow('3D Slicer img Slice', image)
cv2.waitKey()
This works much faster; it takes about a quarter of the computation time.
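If you need to fetch frames repeatedly (e.g. to drive the stereo display), one further optimization worth trying is to keep a single aiohttp session open across requests instead of creating a new one per frame, so the connection is reused. Below is a minimal sketch of that idea; the URL, frame count, and window name are placeholders taken from the examples above, not anything defined by the WebServer module:

import asyncio

import aiohttp
import cv2
import numpy as np

async def stream_frames(url, num_frames=100):
    # Reuse one ClientSession (and its underlying connection) for all
    # requests instead of creating a new session per frame.
    async with aiohttp.ClientSession() as session:
        for _ in range(num_frames):
            async with session.get(url) as response:
                content = await response.read()
            frame = cv2.imdecode(np.frombuffer(content, dtype=np.uint8), cv2.IMREAD_COLOR)
            cv2.imshow('3D Slicer threeD stream', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

asyncio.run(stream_frames("http://localhost:2016/slicer/threeD"))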
Now I’d like to switch between the threeD views (left and right camera) by passing a parameter to the function “threeD” in the SlicerRequestHandler.py of the WebServer module.
How is it possible to execute the "threeD" function via the 3D Slicer Python console or via an external Python console?
You just need to implement the fix described above (change the hard-coded 0 to use the view id).
Not sure what you are asking here.
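For reference, here is a minimal sketch of the idea behind that fix, runnable in the Slicer Python console: select the 3D view widget by index instead of always using index 0, then capture it with the ScreenCapture module logic. The function name, index, and output filename are illustrative; this is not the actual webserver handler code.

import ScreenCapture
import slicer

def capture_threeD_view(viewIndex=0, filename='/tmp/threeD.png'):
    # The webserver handler currently calls threeDWidget(0); the fix is to use
    # the view index parsed from the url parameters instead of the hard-coded 0.
    layoutManager = slicer.app.layoutManager()
    threeDWidget = layoutManager.threeDWidget(viewIndex)
    threeDView = threeDWidget.threeDView()
    ScreenCapture.ScreenCaptureLogic().captureImageFromView(threeDView, filename)

capture_threeD_view(viewIndex=1)  # e.g. the second 3D view (right virtual camera)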
Yes, but I have to pass the information about which threeD view I want to receive into the threeD function, right?
For example:
threeD(self, request, view_idx = 0) → left virtual cam
threeD(self, request, view_idx = 1) → right virtual cam
You would need to encode the view id as a url parameter.
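For example, once the fix is in place, the client side could look something like the sketch below. The "view" parameter name and the index-to-camera mapping are assumptions (the parameter is already parsed as "view" in the current handler, but nothing uses it yet):

import cv2
import numpy as np
import urllib.request

def fetch_view(view_index):
    # Hypothetical "view" url parameter: assumes the webserver fix described
    # above is applied so that the requested 3D view index is actually used.
    url = "http://localhost:2016/slicer/threeD?view={}".format(view_index)
    data = urllib.request.urlopen(url).read()
    return cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)

left_img = fetch_view(0)   # e.g. left virtual camera
right_img = fetch_view(1)  # e.g. right virtual camera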