Yes, you are probably right (I assume we are talking about matrix multiplication), and I think it could be rewritten in the following way:
// Camera motion is reversed
double motionVector[3] =
{
  oldPickPoint[0] - newPickPoint[0],
  oldPickPoint[1] - newPickPoint[1],
  oldPickPoint[2] - newPickPoint[2]
};
// If any transform is applied to the camera then we need to
// take that into account
vtkMatrix4x4* M = camera->GetModelTransformMatrix();
// Cache the original components first: each output component depends
// on all three inputs, so they must not be overwritten mid-computation
double mx = motionVector[0];
double my = motionVector[1];
double mz = motionVector[2];
motionVector[0] = mx*M->GetElement(0,0) + my*M->GetElement(1,0) + mz*M->GetElement(2,0);
motionVector[1] = mx*M->GetElement(0,1) + my*M->GetElement(1,1) + mz*M->GetElement(2,1);
motionVector[2] = mx*M->GetElement(0,2) + my*M->GetElement(1,2) + mz*M->GetElement(2,2);
Is that correct now? I don't use the internal vtkMatrix4x4 methods to multiply the values because I don't want to introduce a new vector variable.
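To illustrate why the intermediate results matter here: each output component of the product depends on all three original components, so updating the vector in place without caching them first gives a wrong result. A minimal pure-Python sketch (no VTK required; the 3x3 matrix is just a stand-in for the upper-left block of the camera's model transform):

```python
# 90-degree rotation about the x axis, a stand-in for the upper-left
# 3x3 of the camera's model transform matrix
M = [[1, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

def transform_correct(v, M):
    # Cache the original components before overwriting: row-vector
    # times matrix, matching the C++ snippet's index order
    x, y, z = v
    return [x*M[0][0] + y*M[1][0] + z*M[2][0],
            x*M[0][1] + y*M[1][1] + z*M[2][1],
            x*M[0][2] + y*M[1][2] + z*M[2][2]]

def transform_in_place_buggy(v, M):
    # Same arithmetic, but v[0] is overwritten before it is used
    # to compute v[1] and v[2]
    v = list(v)
    v[0] = v[0]*M[0][0] + v[1]*M[1][0] + v[2]*M[2][0]
    v[1] = v[0]*M[0][1] + v[1]*M[1][1] + v[2]*M[2][1]
    v[2] = v[0]*M[0][2] + v[1]*M[1][2] + v[2]*M[2][2]
    return v

print(transform_correct([1.0, 2.0, 3.0], M))         # [1.0, 3.0, -2.0]
print(transform_in_place_buggy([1.0, 2.0, 3.0], M))  # [1.0, 3.0, -3.0]
```

The last component differs because the buggy version reads the already-updated v[1] when computing v[2].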
It is easy to understand what behaviour the interaction should have when we simply want to scale the camera (as I do). But what if the user wants to apply some rotation, for example (even though I don't know why they would)? How should the interaction behave then?
Here is Python code that can be used to try this out (it should probably be used together with the vtkMRMLCameraWidget::ProcessTranslate() modification applied):
#-------PREPARING DATA-------
# Prepare data for volume
nodeName = "MyNewVolume"
imageSize = [10, 10, 10]
imageOrigin = [0.0, 0.0, 0.0]
imageSpacing = [1.0, 1.0, 1.0]
scalars = vtk.vtkDoubleArray()
scalars.SetName("my_scalars")
for i in range(imageSize[0]*imageSize[1]*imageSize[2]):
    scalars.InsertNextValue(i)
# Create an image volume
imageData = vtk.vtkImageData()
imageData.SetDimensions(imageSize)
imageData.GetPointData().SetScalars(scalars)
# Create volume node
volumeNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", nodeName)
volumeNode.SetOrigin(imageOrigin)
volumeNode.SetSpacing(imageSpacing)
volumeNode.SetAndObserveImageData(imageData)
volumeNode.CreateDefaultDisplayNodes()
volumeNode.CreateDefaultStorageNode()
#-------PREPARING CAMERA-------
# Getting camera from 3D view
layoutManager = slicer.app.layoutManager()
view = layoutManager.threeDWidget(0).threeDView()
threeDViewNode = view.mrmlViewNode()
cameraNode = slicer.modules.cameras.logic().GetViewActiveCameraNode(threeDViewNode)
# Getting camera
renderWindow = view.renderWindow()
renderers = renderWindow.GetRenderers()
renderer = renderers.GetItemAsObject(0)
camera = cameraNode.GetCamera()
#-------PREPARING CAMERA TRANSFORM-------
# Rotate the camera by 90 degrees around the LR axis:
# 1 0 0
# 0 0 -1
# 0 1 0
m = camera.GetModelTransformMatrix()
m.SetElement(0,0, 1)
m.SetElement(0,1, 0)
m.SetElement(0,2, 0)
m.SetElement(1,0, 0)
m.SetElement(1,1, 0)
m.SetElement(1,2, -1)
m.SetElement(2,0, 0)
m.SetElement(2,1, 1)
m.SetElement(2,2, 0)
camera.SetModelTransformMatrix(m)
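To make the interaction question more concrete, here is a quick pure-Python computation (no Slicer needed) of what this 90-degree LR rotation does to a few unit motion vectors, using the same row-vector convention as the C++ snippet above:

```python
# Upper-left 3x3 of the model transform set in the script above:
# a 90-degree rotation around the LR (x) axis
M = [[1, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

def apply_model_transform(v, M):
    # Row-vector times matrix, as in the C++ snippet
    x, y, z = v
    return [x*M[0][0] + y*M[1][0] + z*M[2][0],
            x*M[0][1] + y*M[1][1] + z*M[2][1],
            x*M[0][2] + y*M[1][2] + z*M[2][2]]

# A motion along LR is unchanged, while motions along the other two
# axes are swapped (with a sign flip) -- which is why a plain drag
# looks surprising once a rotation is set on the camera
print(apply_model_transform([1.0, 0.0, 0.0], M))  # [1.0, 0.0, 0.0]
print(apply_model_transform([0.0, 1.0, 0.0], M))  # [0.0, 0.0, -1.0]
print(apply_model_transform([0.0, 0.0, 1.0], M))  # [0.0, 1.0, 0.0]
```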
Thus, if we apply a scale to the camera, everything is fine. But when we apply a rotation, it is difficult to understand what is going on when we try to translate the volume. I think this is important for the Slicer community.
I completely agree, but I don't have any experience writing tests in Slicer so far. Are there any test examples close to my task? I will look through the Slicer documentation to find information on how to write tests.
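As a starting point, Slicer's scripted-module self tests are built on Python's standard unittest module, so the transform math can be tested in isolation first. A minimal standalone sketch (the function name and matrices are hypothetical, not Slicer API):

```python
import unittest

def transform_motion_vector(v, M):
    # Row-vector times the upper-left 3x3 of a model transform
    # matrix, caching the components before overwriting
    x, y, z = v
    return [x*M[0][0] + y*M[1][0] + z*M[2][0],
            x*M[0][1] + y*M[1][1] + z*M[2][1],
            x*M[0][2] + y*M[1][2] + z*M[2][2]]

class TransformMotionVectorTest(unittest.TestCase):
    def test_identity_leaves_vector_unchanged(self):
        I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
        self.assertEqual(transform_motion_vector([1.0, 2.0, 3.0], I),
                         [1.0, 2.0, 3.0])

    def test_lr_rotation_swaps_axes(self):
        # 90-degree rotation around the LR axis, as in the script above
        R = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]
        self.assertEqual(transform_motion_vector([0.0, 1.0, 0.0], R),
                         [0.0, 0.0, -1.0])

if __name__ == "__main__":
    unittest.main(argv=["transform_motion_vector_test"], exit=False)
```

The same assertions could later be moved into a scripted module's self-test class once the camera widget change is in place.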
Thank you!
I like the idea of how you labeled the axes near the window boundaries. I will refer to that labeling approach in the future (for now it is not the most important thing; I need to work on the fundamental stuff first).