Fetching real-world position/orientation of the HTC Vive controller/HMD

Hello!
I am trying to create slices using the position and orientation of the HTC Vive HMD/controller for some custom visualization. Are the position/orientation of the HTC Vive hardware available through Python code using the SlicerIGT or SlicerVirtualReality extensions?

The Features section of the SlicerVirtualReality documentation page (GitHub - KitwareMedical/SlicerVirtualReality: A Slicer extension that enables user to interact with a Slicer scene using virtual reality.) says

Make position of controllers available as transforms in the Slicer scene. These transforms can be used in custom modules to reslice volumes (using Volume Reslice Driver module in SlicerIGT extension) or transform any nodes in the scene.

But I don’t seem to be able to find any way to access it.

Please help me with it.

Cheers!

Controller transforms can be exposed as transform nodes in the scene (in the world coordinate system) as described in the SlicerVirtualReality extension module documentation.

Controller poses could be made available in the physical coordinate system, too. The best option depends on what you would use that information for and how, so if you need this, please provide more details about what you would like to do.

I am sorry, I do not have much experience working with C++ variables exposed through Python, so I basically wanted to understand how to go about viewing the controller pose in Python. After my query, I took the bit of code "vrView=getNode('VirtualRealityView')" on the page SlicerVirtualReality/DeveloperGuide.md at master · KitwareMedical/SlicerVirtualReality · GitHub as my starting point and realized that I need to fetch the node for one of the controllers (the left one in my case). I will post the process I followed, in case someone is stuck like I was.

The first thing I tried was printing the vrView variable above. It contained a node called

LeftController: vtkMRMLLinearTransformNodeVirtualReality.LeftController

I used that node name to get "LCon = getNode('vtkMRMLLinearTransformNodeVirtualReality.LeftController')"

Printing the LCon variable gave me a first glimpse of the transformation matrix for the left controller. Then I was stuck for a long time trying to extract that matrix, until dir(LCon) came to my rescue and I was able to extract it using:

TMatrix=LCon.GetMatrixTransformToParent()

This TMatrix is again an object whose individual elements have to be accessed by:

OneElement=TMatrix.GetElement(x,y)

where x, y are the row and column indices. I am still not able to get the whole 4x4 matrix in a single go.
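Putting the steps above together into a single snippet (the node name is the one from the printout above; it may differ between SlicerVirtualReality versions):

# Get the transform node that SlicerVirtualReality created for the left controller
LCon = getNode('vtkMRMLLinearTransformNodeVirtualReality.LeftController')

# Fetch the 4x4 transform-to-parent matrix (a vtkMatrix4x4 object)
TMatrix = LCon.GetMatrixTransformToParent()

# Read the elements one by one into a nested Python list
matrix = [[TMatrix.GetElement(r, c) for c in range(4)] for r in range(4)]
print(matrix)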

Phew! Now, my question is, is there a shorter way to go about it?

The main reason I am doing this is to get 3 points in the horizontal plane of the controller so that I can pass them as fiducials to the code at Documentation/Nightly/ScriptRepository - Slicer Wiki and display a slice based on the pose of the controller.

Any ideas how I can go about getting those fiducial points from the transformation matrix?
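For concreteness, here is roughly what I have in mind (a sketch only; the point offsets and the choice of which local plane of the controller counts as "horizontal" are my assumptions):

import vtk

LCon = getNode('vtkMRMLLinearTransformNodeVirtualReality.LeftController')

# Get the controller transform as a 4x4 matrix in world coordinates
toWorld = vtk.vtkMatrix4x4()
LCon.GetMatrixTransformToWorld(toWorld)

# Three points spanning a plane in the controller's local coordinate system
# (homogeneous coordinates; assuming the local X-Y plane here)
localPoints = [(0.0, 0.0, 0.0, 1.0), (10.0, 0.0, 0.0, 1.0), (0.0, 10.0, 0.0, 1.0)]

# Transform them to world coordinates; these would become the fiducials
worldPoints = [toWorld.MultiplyPoint(p)[:3] for p in localPoints]
print(worldPoints)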

Cheers!

To show a custom model or markup at the controller’s position you don’t even need to do any programming. You can apply a transform to any transformable node by drag-and-dropping the node under the transform node in Data module / Transform hierarchy (or use Transforms module).

To apply a transform to a node programmatically, you can use the SetAndObserveTransformNodeID method, as described in the Transforms module documentation.
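For example (a minimal sketch; the model node name is just a placeholder):

# Apply the controller transform to any transformable node programmatically
modelNode = getNode('MyModel')  # placeholder: any model/markup/volume node
controllerTransform = getNode('vtkMRMLLinearTransformNodeVirtualReality.LeftController')
modelNode.SetAndObserveTransformNodeID(controllerTransform.GetID())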

I beg your pardon, I should have clarified before. I am not trying to show a custom model or create a markup at the controller's position. In fact, I am not trying to display anything in VR at all.

Let me try to explain with an analogy: I am trying to use the controller as a sword that slices the 3D model (in this case, horizontally relative to the pose of the controller) and displays the slice, continually updating it as the position and orientation of the controller change.

Please clarify another doubt that I have: is the transformation matrix obtained above the same as a quaternion? I have always run away from understanding how quaternions work, but if this requires it, I will give it another shot.

Thank you for your patience.

Cheers!

I’m not sure what you would like to do. What do you mean by VR? Volume rendering or virtual reality?

If you mean that you would like to reslice the volume using the controllers, then use the controller transforms in the SlicerIGT extension's Volume Reslice Driver module.
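If you prefer to set this up from Python rather than the module GUI, something along these lines should work (this assumes the Volume Reslice Driver logic methods SetDriverForSlice and SetModeForSlice; please verify against the SlicerIGT documentation for your version):

# Drive the Red slice view with the left controller transform (sketch)
sliceNode = slicer.app.layoutManager().sliceWidget('Red').mrmlSliceNode()
controllerTransform = getNode('vtkMRMLLinearTransformNodeVirtualReality.LeftController')
resliceLogic = slicer.modules.volumereslicedriver.logic()
resliceLogic.SetDriverForSlice(controllerTransform.GetID(), sliceNode)
resliceLogic.SetModeForSlice(resliceLogic.MODE_TRANSVERSE, sliceNode)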

Using the HTC Vive to track hand pose is huge overkill. You can use a webcam and a simple 2D barcode to define transforms, an inertial measurement unit (via the PLUS toolkit), a LeapMotion controller, or a low-cost professional-grade pose tracker (such as the OptiTrack Duo), all of which have various advantages over an HTC Vive for this.

I found exactly what I am trying to achieve. I would like to replicate the project shown in the following video:

I played around with the SlicerIGT extension's Volume Reslice Driver module, but I am not able to replicate the above effect using the module. Did I miss anything?

Cheers!

I confirm that you can do the above by using the Volume Reslice Driver module, without writing a single line of code. In fact, this was one of the demos that we showed last week at RSNA - you can download the Slicer scene from here. Just load the scene, enable the virtual reality view, and move your hand into the volume to see the slices. You can switch to a one-up slice view to see the slice full-screen on your tablet.

Yes, I think so. We can only help if you describe exactly what you did, what you expected to happen, and what happened instead.