We’re trying to get live streaming into 3D Slicer using Plus Server, but we’re having a bit of trouble. We tried to follow the instructions under “Show live images in 3D Slicer” at this link: http://perk-software.cs.queensu.ca/plus/doc/nightly/user/ProcedureStreamingToSlicer.html, but were not able to get live streaming. Essentially, we launch Slicer on its own, load the Image Reference mentioned in the link above, and then follow the steps.
We’re currently using the Plus Server Launcher, and the instructions refer to an Image Reference; we used the Image_Image.nrrd file. We’re not sure whether we’re missing a step or doing something incorrectly.
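For reference, the Slicer-side connection we are trying to make amounts to something like the following (the Python console equivalent of the OpenIGTLinkIF GUI steps; port 18944 is just the Plus default and our assumption, and the SlicerOpenIGTLink extension needs to be installed):

```python
# Rough sketch of the Slicer side of the setup (Slicer Python console).
# Assumes the SlicerOpenIGTLink / OpenIGTLinkIF extension is installed and
# that Plus Server is running locally on its default port 18944.
connector = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connector.SetTypeClient("localhost", 18944)
connector.Start()

# Once frames arrive, an image node should appear in the scene; its name
# depends on the Plus configuration (e.g. something like "Image_Reference").
```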
Hi, Sunderlandkyl!
I have been trying the “PlusServer: Video for Windows video capture device” configuration. But when I connect the US device, the image shown in 3D Slicer is upside-down compared with what the US device itself displays. To present the US imaging data correctly, I tried the Transforms module in Slicer, but it didn’t work.
So, could you offer me some solutions?
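To be concrete, what I was hoping to achieve is something like this hard-coded flip in the Python console (the node name is only a placeholder for whatever the incoming image is called in the scene):

```python
import vtk

# Sketch only: mirror the live US image along one axis by observing a
# transform with -1 on that axis. "Image_Image" is a placeholder name.
imageNode = slicer.util.getNode("Image_Image")
flipTransform = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "FlipImage")
flipMatrix = vtk.vtkMatrix4x4()
flipMatrix.SetElement(1, 1, -1)  # flip along the second (A-P) axis
flipTransform.SetMatrixTransformToParent(flipMatrix)
imageNode.SetAndObserveTransformNodeID(flipTransform.GetID())
```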
Thanks, Kyle! And sorry for the late reply.
But actually, I only have the Cannon US device, which is not in the list of supported devices. So I just treated the US probe as a plain video capture source to test the “PlusServer: Video for Windows video capture device” configuration, and the images showed up upside-down.
I also tried the method you suggested, fCal, but it doesn’t work. However, I found that the Volume Reslice Driver can flip the image, although I suspect that may not reflect the actual orientation of the data.
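Maybe a quick check like this in the Python console would show whether the flip is only cosmetic or actually stored in the image geometry (again, the node name is a placeholder):

```python
import vtk

# Print the IJK-to-RAS matrix of the incoming volume to see the geometry
# actually stored on the image, independent of any reslice-driver flip.
imageNode = slicer.util.getNode("Image_Image")
ijkToRas = vtk.vtkMatrix4x4()
imageNode.GetIJKToRASMatrix(ijkToRas)
print(ijkToRas)
```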
Sorry, one more question. The most attractive features of the PLUS toolkit for us are volume reconstruction and the flexible support for different devices. We have the Cannon US device and a robotic arm, and neither is in the list of supported devices. Is it possible to use the PLUS toolkit for real-time volume reconstruction with the probe held by the robotic arm? Thank you for your time and consideration again!
Are you tracking the ultrasound or sending the transforms from the robotic arm to Plus or Slicer? If the ultrasound is tracked in some way, then you should be able to do live volume reconstruction with the current setup.
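If the robot controller can be scripted, one option is to stream its pose out as OpenIGTLink TRANSFORM messages so that Plus or Slicer can treat the probe as tracked. A very rough sketch using the pyigtl Python package follows; the port, the device name, and get_robot_pose are placeholders, not a verified setup:

```python
import time
import numpy as np
import pyigtl  # pip install pyigtl

def get_robot_pose():
    # Placeholder: return the current 4x4 probe pose from the robot controller.
    return np.eye(4)

# Serve TRANSFORM messages that Plus or Slicer can connect to as a client.
server = pyigtl.OpenIGTLinkServer(port=18945)

while True:
    message = pyigtl.TransformMessage(get_robot_pose(), device_name="ProbeToTracker")
    server.send_message(message)
    time.sleep(0.02)  # roughly 50 Hz
```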
Sorry for the late reply. Actually, we simply mounted the US probe on the robotic arm, so the spatial pose of the probe can be obtained from the robotic arm.
I will try to follow that setup to do the live volume reconstruction. Thank you so much!