Controlling Slicer using hand gestures

For the SendMessage command, the handler parameter needs to be connected to the Python interactor process. In order to get C# to send Python code to 3D Slicer, how do you get 3D Slicer to act as the process and set up the Python interactor to act as the handler in the SendMessage method?

What SendMessage command do you mean? Would you like to set up socket communication between Slicer and another application? Slicer can already receive messages to update any nodes in the scene and can send updates to connected applications through the OpenIGTLink protocol.
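
As a rough sketch (assuming the OpenIGTLinkIF extension is installed; the node name and port are just examples), this is how Slicer can be made to listen for incoming OpenIGTLink messages from the Slicer Python console:

```python
import slicer

# Create an OpenIGTLink connector node and run it as a server on port 18944
# (the default OpenIGTLink port); your C# application would connect as a client.
connectorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
connectorNode.SetName("KinectConnector")  # example name
connectorNode.SetTypeServer(18944)
connectorNode.Start()
```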

If you describe what you would like to achieve (what is the background and overall goal of your project) then we can give more specific help.

Hi,
It's the command that lets you use the keyboard to type in a specified application. I'll look into OpenIGTLink.
It's to send Python instructions from a C# program (Kinect) to 3D Slicer. I am basically interfacing Kinect with 3D Slicer in order to manipulate medical images by navigating through them with gestures.

I have been trying to send commands to 3D Slicer, but for some reason it is not responding.

I would not recommend trying to control Slicer by simulating keyboard events; it would be extremely fragile. Instead, you can send transforms through OpenIGTLink and use them to adjust camera position/orientation and slice position/orientation.
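
For example, once a transform sent from your tracking application shows up in the scene, you can observe it and use it to drive a slice view. A minimal sketch, assuming an incoming transform node named "HandTransform" and that its Z translation (in mm) should control the red slice offset:

```python
import vtk
import slicer

transformNode = slicer.mrmlScene.GetFirstNodeByName("HandTransform")  # assumed node name
sliceLogic = slicer.app.layoutManager().sliceWidget("Red").sliceLogic()

def onTransformModified(caller, event):
    # Read the latest transform and use its Z translation as the slice offset
    matrix = vtk.vtkMatrix4x4()
    transformNode.GetMatrixTransformToParent(matrix)
    sliceLogic.SetSliceOffset(matrix.GetElement(2, 3))

transformNode.AddObserver(slicer.vtkMRMLTransformNode.TransformModifiedEvent, onTransformModified)
```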

In the long term, I would recommend not using Kinect, as the product is discontinued and it may stop working at any time. For short-distance interactions (for example, when the sensor is fixed to the surgeon's head, the OR table side-rail, or a monitor boom), you can readily use a LeapMotion controller (see the Slicer module here: https://github.com/lassoan/SlicerLeapMotionControl).

For Kinect-like room-scale tracking, you can use a webcam or other camera with a 2D barcode attached to the hand or tool. Marker tracking is provided by the Plus toolkit and the SlicerIGT extension.

If you need markerless tracking, then I would recommend Intel RealSense cameras. We don't have a readily available body model or gesture recognition, so you would need to find a solution yourself.

We have experimented with these touchless user interfaces over the past couple of years, but our conclusion was that (at least for our intra-operative applications) the easiest-to-learn and richest control is achieved by running Slicer on a touch-screen tablet placed in a sterile bag. We create a custom user interface for our Slicer module with large buttons that are easy to see and push through the sterile bag while wearing gloves.
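
As an illustration only (not our actual module code), a large glove-friendly button can be created from the Slicer Python console like this:

```python
import qt
import slicer

# A single oversized button that switches to a one-up 3D view;
# in a real module these buttons would live in the module widget layout.
button = qt.QPushButton("3D view")
button.setMinimumSize(qt.QSize(300, 120))
button.setStyleSheet("font-size: 28px; font-weight: bold;")
button.connect("clicked()", lambda: slicer.app.layoutManager().setLayout(
    slicer.vtkMRMLLayoutNode.SlicerLayoutOneUp3DView))
button.show()
```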