Change interaction speed in 3D view

Hi,

I’m looking for a way to change view movement/rotation speed.
The problem is that when I change the camera aspect ratio in the 3D view (by adding a transform with scaling to the camera), interaction along the scaled axis becomes slower (rotation/movement), I guess proportionally to the scale factor that was set.

I need to compensate for this speed change.

Have a look at vtkMRMLCameraWidget and see if you can come up with a solution that works for both equal and distorted view aspect ratios.

I’m surprised that the rotation is impacted, because mouse event position changes are evaluated in the display coordinate system and scaled with this->Renderer->GetRenderWindow()->GetSize(), and I would have thought that neither of those is impacted by your changes. How did you change the camera aspect ratio? Did it result in a change of the render window size or the display coordinates?


Hi,

Thank you for the reply, I will take a look at vtkMRMLCameraWidget soon.

All I do is set a transform on the camera node. Here is the code, it’s pretty simple.
So it doesn’t change the render window.

The code is indeed just a few lines, but it modifies the camera at a very low level. We do not anticipate such changes when we implement Slicer core features. I’m surprised that this rotation speed change is the only side effect that you encountered.


Probably there are more, I just haven’t noticed them yet :slight_smile:

It seems that vtkMRMLCameraWidget can’t control interaction speed.

Does VTK in general provide an API to control spin/move speed? I can’t find this information on the internet.

The interaction speed is controlled in the vtkMRMLCameraWidget. See for example how rotation speed is determined:
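As an illustration only (this is a hedged pure-Python sketch of the common VTK trackball-style logic, not the exact Slicer source, and the constant 20.0 and `motion_factor` name are assumptions), the rotation increment is derived from the mouse displacement in display coordinates, scaled by factors based on the render window size:

```python
# Sketch of trackball-style rotation speed: the per-pixel angular
# increments shrink as the render window grows, so a drag across the
# window rotates by roughly the same total angle at any window size.
def rotation_angles(dx, dy, window_width, window_height, motion_factor=10.0):
    """Return (azimuth, elevation) in degrees for a mouse move of (dx, dy) pixels."""
    delta_azimuth = -20.0 / window_width
    delta_elevation = -20.0 / window_height
    azimuth = dx * delta_azimuth * motion_factor
    elevation = dy * delta_elevation * motion_factor
    return azimuth, elevation

# A drag of (50, 40) pixels in a 1000x800 window:
print(rotation_angles(50, 40, 1000, 800))  # (-10.0, -10.0)
```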


Thank you for the hint,

To fix the interaction issue when the aspect ratio is changed, it is enough to modify the function bool vtkMRMLCameraWidget::ProcessTranslate(vtkMRMLInteractionEventData* eventData) by multiplying each translation component (dx, dy, dz) by the scaling factor from the matrix:

  // Camera motion is reversed

  vtkMatrix4x4* M = camera->GetModelTransformMatrix(); 
  double motionVector[3] =
    {
    (oldPickPoint[0] - newPickPoint[0]) * M->GetElement(0,0),
    (oldPickPoint[1] - newPickPoint[1]) * M->GetElement(1,1),
    (oldPickPoint[2] - newPickPoint[2]) * M->GetElement(2,2)
    };

Would the Slicer community like this to be contributed via a PR?

Also, the vtkMRMLViewDisplayableManager::AxisLabelTexts shown in the picture are affected by the aspect ratio (I set the Z scale to 20 in the picture). I don’t know how to fix that, but for vtkCubeAxesActor a solution is provided here

Nice progress. You can submit a pull request for the interaction scaling issue. Instead of using only the diagonal elements of the camera matrix, you need to use the column norm.
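To make the column-norm suggestion concrete, here is a small pure-Python sketch (my own illustration with a nested-list matrix, not Slicer code; in C++ you would read the elements with GetElement). The per-axis scale factors of the transform are the Euclidean norms of the first three columns; for a pure scale matrix this reduces to the diagonal, but unlike the diagonal it stays correct when the transform also contains a rotation:

```python
import math

# Per-axis scale factors of a 4x4 transform matrix (nested lists),
# taken as the norms of the first three columns of the 3x3 part.
def column_norms(m):
    return [math.sqrt(sum(m[row][col] ** 2 for row in range(3))) for col in range(3)]

# Scale of (2, 3, 4) combined with a 90-degree rotation around X:
# the diagonal would give (2, 0, 0), but the column norms recover the scale.
m = [[2, 0,  0, 0],
     [0, 0, -4, 0],
     [0, 3,  0, 0],
     [0, 0,  0, 1]]
print(column_norms(m))  # [2.0, 3.0, 4.0]
```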

One option is to add a mode that shows the axis codes in the top/bottom/left/right annotations instead of painting them on the 3D cube faces (this would be a very useful orientation marking option anyway):

You may also use orientation markers in the lower-right corner.

You can fix the issue by using the camera transform to adjust the AxisLabelText transforms in vtkMRMLViewDisplayableManager (you will need to experiment with exactly how).

It would be important to add a Python-scripted test that adjusts the scale of a 3D view the same way as you do in your application and performs basic checks. This would ensure that the changes that you contribute remain functional as Slicer code evolves. It would also make it easier for anyone to enable the distorted rendering, in case they want to test if some changes are compatible with this rendering mode.


Yes, you may be right (I guess we are talking about matrix multiplication) and I think it could be rewritten in the following way:

  // Camera motion is reversed

  double motionVector[3] =
    {
    oldPickPoint[0] - newPickPoint[0],
    oldPickPoint[1] - newPickPoint[1],
    oldPickPoint[2] - newPickPoint[2]
    };

  // If any transform is applied to the camera then we need to
  // take that into account. The components are copied first so
  // that the later rows use the original values instead of the
  // already-updated ones.
  vtkMatrix4x4* M = camera->GetModelTransformMatrix();
  double v0 = motionVector[0];
  double v1 = motionVector[1];
  double v2 = motionVector[2];
  motionVector[0] = v0*M->GetElement(0,0) + v1*M->GetElement(1,0) + v2*M->GetElement(2,0);
  motionVector[1] = v0*M->GetElement(0,1) + v1*M->GetElement(1,1) + v2*M->GetElement(2,1);
  motionVector[2] = v0*M->GetElement(0,2) + v1*M->GetElement(1,2) + v2*M->GetElement(2,2);

Is that correct now?
I don’t use the internal vtkMatrix4x4 methods to multiply the values because I don’t want to introduce a new vector variable; I only copy the scalar components so that each row of the product is computed from the original vector.

It is easy to understand how the interaction should behave when we simply want to scale the camera (as I do). But what if the user wants to apply some rotation, for example (even though I don’t know why)? How should the interaction behave then?
Here is Python code that can be used to try that (it should probably be used together with the vtkMRMLCameraWidget::ProcessTranslate() modification):

#-------PREPARING DATA-------
# Prepare data for volume
nodeName = "MyNewVolume"
imageSize = [10, 10, 10]
imageOrigin = [0.0, 0.0, 0.0]
imageSpacing = [1.0, 1.0, 1.0]

scalars = vtk.vtkDoubleArray()
scalars.SetName("my_scalars")

for i in range(imageSize[0] * imageSize[1] * imageSize[2]):
    scalars.InsertNextValue(i)

# Create an image volume
imageData = vtk.vtkImageData()
imageData.SetDimensions(imageSize)
imageData.GetPointData().SetScalars(scalars)

# Create volume node
volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", nodeName)
volumeNode.SetOrigin(imageOrigin)
volumeNode.SetSpacing(imageSpacing)
volumeNode.SetAndObserveImageData(imageData)
volumeNode.CreateDefaultDisplayNodes()
volumeNode.CreateDefaultStorageNode()

#-------PREPARING CAMERA-------
# Getting camera from 3D view
layoutManager = slicer.app.layoutManager()
view = layoutManager.threeDWidget(0).threeDView()
threeDViewNode = view.mrmlViewNode()
cameraNode = slicer.modules.cameras.logic().GetViewActiveCameraNode(threeDViewNode)

# Getting camera
renderWindow = view.renderWindow()
renderers = renderWindow.GetRenderers()
renderer = renderers.GetItemAsObject(0)
camera = cameraNode.GetCamera()

#-------PREPARING CAMERA TRANSFORM-------
# Rotate the camera by 90 degrees around the LR axis:
# 1 0 0
# 0 0 -1
# 0 1 0
m = camera.GetModelTransformMatrix()
m.SetElement(0,0, 1)
m.SetElement(0,1, 0)
m.SetElement(0,2, 0)
m.SetElement(1,0, 0)
m.SetElement(1,1, 0)
m.SetElement(1,2, -1)
m.SetElement(2,0, 0)
m.SetElement(2,1, 1)
m.SetElement(2,2, 0)

camera.SetModelTransformMatrix(m)

Thus if we apply a scale to the camera, then everything is fine.
But when we apply a rotation, it is difficult to understand what is going on when we try to translate the volume. I think this is important for the Slicer community.

I completely agree, but I don’t have any experience writing tests in Slicer so far. Are there any test examples close to my task? I will look through the Slicer documentation for information on how to write tests.

Thank you!
I like the idea of labeling the axes near the window boundaries. I will come back to that labeling in the future (for now it is not the most important thing, I need to work on the fundamental parts first).

Did you try the code below? I don’t know if it would work but I think you are using the transpose of the rotation matrix instead of the rotation matrix.

  // Camera motion is reversed

  double motionVector[3] =
    {
    oldPickPoint[0] - newPickPoint[0],
    oldPickPoint[1] - newPickPoint[1],
    oldPickPoint[2] - newPickPoint[2]
    };

  // If any transform is applied to the camera then we need to
  // take that into account (copy the components first so that
  // the later rows use the original values instead of the
  // already-updated ones)
  vtkMatrix4x4* M = camera->GetModelTransformMatrix();
  double v0 = motionVector[0];
  double v1 = motionVector[1];
  double v2 = motionVector[2];
  motionVector[0] = v0*M->GetElement(0,0) + v1*M->GetElement(0,1) + v2*M->GetElement(0,2);
  motionVector[1] = v0*M->GetElement(1,0) + v1*M->GetElement(1,1) + v2*M->GetElement(1,2);
  motionVector[2] = v0*M->GetElement(2,0) + v1*M->GetElement(2,1) + v2*M->GetElement(2,2);

Thank you for the response,

I’m not 100% sure, but I think that according to the matrix multiplication rule, if we want to multiply a vector by a matrix we can write it in the form:

# v = motionVector; M - transform matrix
v * M = [x, y, z] * [a00, a01, a02; a10, a11, a12; a20, a21, a22] # `a21` means the element at row 2, column 1 (zero-based). `;` (semicolon) is a row delimiter, `,` (comma) is a column delimiter

# multiplying vector to each column gives:
v * M = [x*a00 + y*a10 + z*a20, x*a01 + y*a11 + z*a21, x*a02 + y*a12 + z*a22]

As I understand it, you propose to multiply the matrix M by the vector motionVector. Maybe that is correct, but I have some doubts :slight_smile:

Anyway, I just tried your suggestion and the behaviour is still hardly predictable when I apply a rotation transform to the camera.
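To make the difference between the two conventions concrete, here is a small pure-Python check (my own illustration, not Slicer code): a row vector times M equals the transpose of M applied to a column vector, so the two code variants give different results exactly when M is not symmetric:

```python
# Row-vector convention: v * M (dot v with each COLUMN of M)
def row_times_matrix(v, m):
    return [sum(v[i] * m[i][j] for i in range(3)) for j in range(3)]

# Column-vector convention: M * v (dot each ROW of M with v)
def matrix_times_col(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# 90-degree rotation around the X axis (not symmetric)
m = [[1, 0,  0],
     [0, 0, -1],
     [0, 1,  0]]
v = [1, 2, 3]

print(row_times_matrix(v, m))  # [1, 3, -2]
print(matrix_times_col(m, v))  # [1, -3, 2]
```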

To ensure that the computations are correct, please use good names for variables. Add the coordinate system name as a suffix to vector variables, for example motionVector_RAS (or motionVector_World) instead of just motionVector. Transforms must be named after the coordinate systems they transform between; for example, instead of M you should use something like worldToSliceTransform (I’m not sure M is actually a transform between these, it is just an example).

Please use the multiply method in vtkMatrix4x4. For a single mouse move, you perform hundreds of memory allocations and thousands of multiplications, so reducing readability and increasing the risk of introducing bugs is just not worth it. It is only acceptable to reimplement basic methods like this if you can prove with performance profiling data that it leads to a significant performance improvement (>10%).
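For reference, the relevant method is vtkMatrix4x4::MultiplyPoint, which multiplies the matrix by a homogeneous 4-component point (column-vector convention). Here is a pure-Python sketch of what it computes, for anyone who wants to check values outside of Slicer (the function name and nested-list matrix are my own illustration, not the VTK API):

```python
# Sketch of vtkMatrix4x4::MultiplyPoint semantics: multiply a 4x4 matrix
# (nested lists) by a homogeneous point [x, y, z, w], column-vector style.
def multiply_point(m, point4):
    return [sum(m[i][j] * point4[j] for j in range(4)) for i in range(4)]

# A scale of (1, 1, 10) applied to the point (2, 3, 4):
m = [[1, 0,  0, 0],
     [0, 1,  0, 0],
     [0, 0, 10, 0],
     [0, 0,  0, 1]]
print(multiply_point(m, [2, 3, 4, 1]))  # [2, 3, 40, 1]
```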

@mau_igna_06 you were right about matrix multiplication, I checked that. Thank you!

I have written a test in Python, but I’m not sure where I should put it. I can see that d/slicersources-src/Libs/MRML/DisplayableManager has a Testing subfolder, but it currently contains only C++ tests.
Is it OK if I add a Python folder with my test there?

# Get all necessary objects
threeDViewWidget = slicer.app.layoutManager().threeDWidget(0)
view = threeDViewWidget.threeDView()
viewNode = view.mrmlViewNode()
renderer = view.renderWindow().GetRenderers().GetItemAsObject(0)
camera = renderer.GetActiveCamera()
cameraDisplayableManager = view.displayableManagerByClassName("vtkMRMLCameraDisplayableManager")
cameraWidget = cameraDisplayableManager.GetCameraWidget()


# Automatically send events to make a translation like user usually do interactively
def do_translate():
    # Process CLICK ON (click mouse 1 btn)
    ev = slicer.vtkMRMLInteractionEventData()
    ev.SetRenderer(renderer)
    ev.SetViewNode(viewNode)
    ev.SetType(12)
    ev.SetKeySym("Shift_L")
    ev.SetModifiers(1)	# Shift btn
    ev.SetMouseMovedSinceButtonDown(False)
    ev.SetWorldPosition([0, 0, 0]) 
    ev.SetDisplayPosition([0, 0])
    cameraWidget.ProcessInteractionEvent(ev)
    
    # Process DRAG
    ev = slicer.vtkMRMLInteractionEventData()
    ev.SetRenderer(renderer)
    ev.SetViewNode(viewNode)
    ev.SetType(26)
    ev.SetKeySym("Shift_L")
    ev.SetModifiers(1)	# Shift btn
    ev.SetMouseMovedSinceButtonDown(True)
    ev.SetWorldPosition([0, 0, 10])
    ev.SetDisplayPosition([0, 100])
    cameraWidget.ProcessInteractionEvent(ev)
    
    # Process CLICK OFF (release the mouse 1 btn)
    ev = slicer.vtkMRMLInteractionEventData()
    ev.SetRenderer(renderer)
    ev.SetViewNode(viewNode)
    ev.SetType(13)
    ev.SetKeySym("Shift_L")
    ev.SetModifiers(1)	# Shift btn
    ev.SetMouseMovedSinceButtonDown(True)
    ev.SetWorldPosition([0, 0, 10])
    ev.SetDisplayPosition([0, 100])
    cameraWidget.ProcessInteractionEvent(ev)


# list_out = list_a - list_b
def substract_lists(a: list, b: list) -> list:
    if len(a) != len(b):
        raise ValueError("lists must have the same length")
    return [a[i] - b[i] for i in range(len(a))]


# Calculate `delta_cam_pos` before changing aspect ratio
cam_pos_old = camera.GetPosition()
do_translate()
cam_pos_new = camera.GetPosition()
delta_cam_pos = substract_lists(cam_pos_new, cam_pos_old)

# Change aspect ratio
transform = vtk.vtkTransform()
transform.Scale(1, 1, 10)
camera.SetModelTransformMatrix(transform.GetMatrix())

# Calculate `delta_translated_cam_pos` after changing aspect ratio
translated_cam_pos_old = camera.GetPosition()
do_translate()
translated_cam_pos_new = camera.GetPosition()
delta_translated_cam_pos = substract_lists(translated_cam_pos_new, translated_cam_pos_old)


# return `True` if `list_a` contains almost the same elements as `list_b`
def compare_lists(a: list, b: list) -> bool:
    from math import isclose
    if len(a) != len(b):
        return False
    return all(isclose(a[i], b[i]) for i in range(len(a)))


# Delta translations calculated before and after scaling the camera should be equal
assert compare_lists(delta_cam_pos, delta_translated_cam_pos)

I added a PR with the C++ test.

Also, could someone check my other PR? It has been hanging for almost three weeks.
