Controlling 3D objects in the 3D view

Operating system: Windows 10
Slicer version: 4.10.0 / 4.10.1

Hi everyone. Recently I had an idea about controlling 3D objects in the 3D view: making them rotate, translate, and scale.

My first step is to develop a function that allows me to select an object, for example the yellow tumor in the picture.

Then I want to move the tumor out of the green liver and apply rotation … to the tumor, while the liver stays still.

(I did some experiments:

In Slicer's 3D views, 3D objects (or 3D segments) appear to move because the camera moves; but if we move the camera, all visible segments move together, which is not what I want.

Therefore, I created another 3D view window and tried to separate the segments so that each window contains only one segment, but I failed because the two windows are synchronized. Is there any way to make these two windows not synchronized?)
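
For reference, one way to attempt this separation is to restrict a display node to specific view nodes. A minimal sketch for the Slicer Python console; the node name "Segmentation" and the view name "View2" are assumptions for illustration:

```python
# Restrict a segmentation to a single 3D view (sketch; node/view names assumed).
segmentationNode = slicer.util.getNode("Segmentation")
displayNode = segmentationNode.GetDisplayNode()

# "View2" is the default name of the second 3D view node in a two-view layout.
viewNode = slicer.mrmlScene.GetFirstNodeByName("View2")

# After this call the segmentation is shown only in that 3D view.
displayNode.AddViewNodeID(viewNode.GetID())
```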

Is my idea feasible in Slicer? (These functions, such as rotation, translation, and scaling, are usually found in modeling software.)

Any help would be appreciated.

Your version of Slicer looks quite old. Can you use the Transforms module to move the tumor out of the liver?

You may find some of the tips in this video helpful for using the Transforms module. From about 5:30 she covers using the Transforms module to move things around.

I see that both segments are under the same segmentation node. If you want to move just the yellow tumor with the Transforms module, then I think you will have to move the yellow tumor segment to a separate segmentation node. You can use the copy/move segments section of the Segmentations module to move segments to a new node (I can see it in your picture above).
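
For reference, this workflow can also be scripted. A minimal sketch for the Slicer Python console; the node name "Segmentation" and segment name "tumor" are assumptions for illustration:

```python
# Move one segment into its own segmentation node, then attach a transform to it.
sourceSegmentationNode = slicer.util.getNode("Segmentation")
sourceSegmentation = sourceSegmentationNode.GetSegmentation()
tumorSegmentId = sourceSegmentation.GetSegmentIdBySegmentName("tumor")

# Create a separate segmentation node and move the tumor segment into it.
tumorSegmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
tumorSegmentationNode.CreateDefaultDisplayNodes()
tumorSegmentationNode.GetSegmentation().CopySegmentFromSegmentation(
    sourceSegmentation, tumorSegmentId, True)  # True = remove from source

# Apply a linear transform to the tumor only; the liver stays still.
transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode")
tumorSegmentationNode.SetAndObserveTransformNodeID(transformNode.GetID())
```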


Thanks a lot for the reply.

I can use the Transforms module to do the things I want, but the whole operation seems really time-consuming and inefficient.

Yes, it seems strange at first and takes some getting your head around. There are some little tricks to make it work well, but you quickly get used to it. I have found there are some really good advantages to the way the Transforms module works compared to other 3D CAD programs: it is really handy to have direct access to the linear transform matrix, and it can be really useful to be able to bring groups of objects in and out of a transform and choose whether or not to harden them.
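
To illustrate what direct access to the transform matrix looks like, here is a minimal sketch for the Slicer Python console; the node names "LinearTransform" and "tumor" are assumptions for illustration:

```python
import vtk

# Read the current 4x4 matrix, translate 20 mm along x, and write it back.
transformNode = slicer.util.getNode("LinearTransform")
matrix = vtk.vtkMatrix4x4()
transformNode.GetMatrixTransformToParent(matrix)
matrix.SetElement(0, 3, matrix.GetElement(0, 3) + 20.0)
transformNode.SetMatrixTransformToParent(matrix)

# Hardening bakes the transform into the transformed node's geometry.
tumorNode = slicer.util.getNode("tumor")
tumorNode.SetAndObserveTransformNodeID(transformNode.GetID())
slicer.vtkSlicerTransformLogic().hardenTransform(tumorNode)
```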

Slicer is extremely flexible; there are always several ways to do things, depending on what your constraints are. We were just trying to guess what would work for you, but if you can tell us more about what you would like to achieve and why you find the proposed workflow inefficient, then we may be able to give more specific advice.

For example, if you want to freely grab and move/translate objects around, then I would recommend doing it in virtual reality (you can get a headset and controllers for a few hundred dollars); it is much faster than working on a 2D screen with a mouse and keyboard. See this demo:

Thanks a lot for the reply. What I want to achieve is exactly what is shown in the video demo you recommended above.

I want to achieve the functions displayed in that video, including grabbing, moving, etc., but I want to do it in the 3D view with a mouse and keyboard.

Could you tell me why it is hard or inconvenient to achieve these functions in the 3D view?

I tried to find the reasons, and I found this in the source code, but I am still not sure I totally understand it.

Why is it expensive?

(screenshot of the source code in question)

Is it possible to obtain the accurate (x, y, z) coordinates of the cursor in 3D views?

Because a mouse has only 2 degrees of freedom (up/down, left/right), while a virtual reality controller has 6 degrees of freedom (translation along and rotation around 3 axes), and we typically use two of these controllers.

Setting position/orientation accurately using a mouse is so tedious that we almost never do it; instead, we use image-intensity, landmark, or surface-matching based registration. Registration methods are usually much more accurate and faster than manual visual alignment.

Determining what is displayed at the current mouse position is a computationally expensive operation, because a model may consist of hundreds of thousands of triangles, each of which needs to be checked for intersection with the view ray. There are various tricks to make this fast, but it is not a simple operation.
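
For reference, this kind of view-ray picking can be tried from the Slicer Python console with VTK's cell picker; a minimal sketch, which also shows one way to get the (x, y, z) world coordinates under a given display position:

```python
import vtk

# Grab the renderer of the first 3D view.
threeDView = slicer.app.layoutManager().threeDWidget(0).threeDView()
renderer = threeDView.renderWindow().GetRenderers().GetFirstRenderer()

picker = vtk.vtkCellPicker()
picker.SetTolerance(0.005)

# Pick at a display (pixel) coordinate, e.g. the center of the view.
x = threeDView.width // 2
y = threeDView.height // 2
if picker.Pick(x, y, 0, renderer):
    print("World position:", picker.GetPickPosition())  # (x, y, z)
```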

We plan to improve model picking for virtual reality, and as a side effect desktop/mouse-based picking may be improved as well; probably within a year you will be able to select a model by clicking on it. Still, translation/rotation using the mouse would work essentially the same way as it does now with the transform widget: very cumbersome compared to directly grabbing and moving with 6-DOF controllers.

If you can give us information about the high-level goal you would like to achieve (what anatomy, disease, treatment method, etc.) then we can give more specific advice.


This advice is really helpful. Thanks again.

For now, I do not have a specific goal to achieve. I admit that we can use the HTC Vive controllers (the VR equipment I am using) to grab 3D objects in the VR view, and the VR view then synchronizes to the 3D view in Slicer, which is amazing.

But there is a scenario I am considering: a doctor who wants to show some visualization data to his or her patient. The doctor would perform operations only in the 3D view, and these operations would synchronize to the VR view (with a VR headset on the patient's head).

In this scenario, the doctor can easily show the visualization data to the patient using a mouse instead of a controller, and the patient just needs to focus on what he or she sees in the VR view and can follow each step/operation the doctor performs.

Based on this scenario, I came up with the question I posted.

Patient education and collaborative review/planning are indeed important use cases for virtual reality, and this is already available in Slicer (by loading the same scene on two computers and sharing a few transforms between them using OpenIGTLink):
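
For reference, a minimal sketch of the transform-sharing part, assuming the OpenIGTLinkIF module is available; the port, hostname, and node name are assumptions for illustration:

```python
# On the "doctor" computer: run a server and send the transform.
serverNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
serverNode.SetTypeServer(18944)
serverNode.Start()
transformNode = slicer.util.getNode("LinearTransform")
serverNode.RegisterOutgoingMRMLNode(transformNode)

# On the "patient" computer: connect as a client; the transform node is added
# to the scene automatically and updates in real time.
clientNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")
clientNode.SetTypeClient("doctor-computer-hostname", 18944)
clientNode.Start()
```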


This video is great.

But personally, I do not think doctors are willing to put on a headset to do a demonstration for their patients.

The headset is somewhat heavy and not convenient to use while controlling 3D objects with the controller, whereas the mouse is usually more accurate than the controller and easier to manipulate.

I guess the best way to do interactive transforms with just a mouse, as you say, without changing the way Slicer works (for now) is to create a transform for each selected node in the 3D viewer and then change its matrix, without hardening the transform, as you drag the axes in the 3D view, as you would in Blender or other modeling software.
The caveat, though, is that this method (even in Blender, where it is already available) is not very efficient for explaining things to a patient interactively, and you would be better off using a VR controller (even without a headset), or some other kind of stereotactic device, to freely move your models.
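
A minimal sketch of this approach for the Slicer Python console; the model node name "tumor" is an assumption for illustration:

```python
# Attach a transform to a model and show the interactive transform widget
# (draggable handles in the 3D view), without hardening.
tumorModel = slicer.util.getNode("tumor")

transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode")
tumorModel.SetAndObserveTransformNodeID(transformNode.GetID())

# Enable the in-view editor so the transform can be moved/rotated with the mouse.
transformNode.CreateDefaultDisplayNodes()
transformNode.GetDisplayNode().SetEditorVisibility(True)
```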


The idea is that both patients and doctors would be in the virtual world. If the doctor is not willing to put on a headset, then just using the VR controllers could be an option, as @Amine suggests. You can also use scene views to save pre-configured views and switch between them (it may then be enough to move/rotate the camera). If eye contact with the patient is important and visualization of static models is enough, then you may export the segmentation to an OBJ model file and view it on an augmented reality headset (converting and uploading models takes some effort, and I'm not sure if a multi-user model viewer is readily available).
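
For reference, a minimal sketch of the scene view part from the Slicer Python console; the scene view name is an assumption for illustration:

```python
# Save the current scene state as a named scene view, then restore it later.
sceneViewNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSceneViewNode")
sceneViewNode.SetName("TumorOverview")
sceneViewNode.StoreScene()    # snapshot the current scene state

# ... later, switch back to the saved state:
sceneViewNode.RestoreScene()
```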
