How to divide the view area of 3D Slicer

Operating system: Windows 11
Slicer version: Slicer 4.13.0

I am developing a Python extension with a function that generates a depth map for markups points.
I want to display this depth map image at a small size inside the 3D Slicer view area.
What should I do to achieve this?


[image: sketch explaining the desired feature]

I want to make it look like the layout shown above. Is there any way to do this?

Hello,

for the Slicer-Liver extension we have been playing lately with the same idea (an overlay that shows 2D information on top of the 3D viewer). I don't think you can achieve this in Python, though. Our approach has been to create our own MRML nodes and MRMLDisplayableManagers (MRML Overview — 3D Slicer documentation) for the 3D view in C++.

Alternatively, you can generate the depth maps as volumes and display them in one of the 2D slice views. Along these lines, multi-monitor support will become available shortly (you can follow the progress at Multi monitor layout by lassoan · Pull Request #6776 · Slicer/Slicer · GitHub), which allows spawning additional visualization windows (i.e., a single floating 2D window, separate from the main Slicer window) and can probably get you closer to what you want in Python.

One more approach that I have seen implemented (I don't remember where, though) is adding a slice view widget to the side panel, together with the rest of the module's widgets. In this way you can have a single 3D view in the visualization area and the depth maps next to the module's widgets. This should be doable in Python (see the sketch below).
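
A rough sketch of that last idea, loosely following the "Show a slice view outside of the view layout" example in the script repository (the layout name "DepthMapSlice", the label "DM", and the module-panel placement are just illustrative assumptions):

import slicer

# Create a slice node that is not managed by the layout manager.
# The owner node can be any node in the scene; it just marks who "owns" this view.
viewOwnerNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScriptedModuleNode")

viewLogic = slicer.vtkMRMLSliceLogic()
viewLogic.SetMRMLScene(slicer.mrmlScene)
sliceNode = viewLogic.AddSliceNode("DepthMapSlice")  # layout name must not clash with existing views
sliceNode.SetLayoutLabel("DM")
sliceNode.SetAndObserveParentLayoutNodeID(viewOwnerNode.GetID())

# Create a slice widget and attach it to the new slice node
sliceWidget = slicer.qMRMLSliceWidget()
sliceWidget.setMRMLScene(slicer.mrmlScene)
sliceWidget.setMRMLSliceNode(sliceNode)

# Show it standalone, or add it to a scripted module's panel,
# e.g. self.layout.addWidget(sliceWidget) inside the module widget's setup()
sliceWidget.show()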

I hope this helps.


Hmm… it seems difficult.

I'll try my best.

Yes, it is. Basically, Slicer does not allow non-rectangular division of the layout: you can divide the layout vertically or horizontally, but that is it. Also, Qt does not support layers, so you cannot show widgets on top of other widgets in a controlled way (if you create a widget with a parent but don't add it to the layout, it will show up in the corner, but it may have unwanted consequences; it is not safe). What you can do is tweak the 3D rendering as @RafaelPalomar suggests, with MRML displayable managers and such, or add any widget to the module panel.
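
To illustrate the rectangular-division point, custom layouts can be registered from Python with a layout XML description (this follows the "Customize view layout" examples in the script repository; the layout ID 501 and the particular arrangement are arbitrary):

import slicer

# A custom layout with a 3D view and the red slice view side by side.
# Only nested horizontal/vertical splits of rectangular areas are possible.
customLayout = """
<layout type="horizontal" split="true">
  <item>
    <view class="vtkMRMLViewNode" singletontag="1"/>
  </item>
  <item>
    <view class="vtkMRMLSliceNode" singletontag="Red">
      <property name="orientation" action="default">Axial</property>
      <property name="viewlabel" action="default">R</property>
      <property name="viewcolor" action="default">#F34A33</property>
    </view>
  </item>
</layout>
"""

customLayoutId = 501  # any ID not used by the built-in layouts
layoutManager = slicer.app.layoutManager()
layoutManager.layoutLogic().GetLayoutNode().AddLayoutDescription(customLayoutId, customLayout)
layoutManager.setLayout(customLayoutId)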


No matter how much I think about it, this seems too complicated, so I thought of another way.
I know that 3D Slicer has 4 views (one for the 3D model, the others for DICOM data).

# Get the view node with singleton tag "1" and its active camera
_view_node = slicer.mrmlScene.GetSingletonNode("1", "vtkMRMLViewNode")
_camera_node = slicer.modules.cameras.logic().GetViewActiveCameraNode(_view_node)

Through the code above, I would draw a depth map from the view node with tag 1 and save it as a DICOM image.
If I then display the saved DICOM image in the view node with tag 2, would there be any problem?

  • I know the depth map image isn't a DICOM file,
    but I'm doing this to 'consider ease of use from a UX point of view',
    so that I can immediately check the results of the depth map I drew.

The suggestion above to add a widget seems similar to what I am trying to do, but I think this method will be a little easier.

If there is a better way, please let me know.

Thinking in DICOM terms is useful when you are going to load/save data to disk. If you are thinking of getting a sort of interactive depth map visualization while you are modifying Markups, it is more useful to think in terms of vtkMRML nodes, which is what the Slicer application uses internally when it runs.

At a high level, the 3D Slicer model is not based on direct manipulation of the views, but on the manipulation of vtkMRML nodes and managers that dictate how these nodes are translated into visualizations in the different views. In C++, you have the possibility to create new vtkMRMLDisplayableManager classes that define how certain nodes are displayed; in Python, unfortunately, you don't have that option.

My first approach to this problem (in Python) would be to create a vtkMRMLScalarVolumeNode and configure it so that it has a single slice where I can put my data. By modifying the data that the node holds, you will get an updated visualization. Later you can also save this data in various formats.
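
A minimal sketch of that approach, assuming the depth map is already available as a 2D NumPy array (the array contents, the node name "DepthMap", and the computeDepthMap() helper are placeholders):

import numpy as np
import slicer

# Placeholder depth map; in practice this would be computed from the markups points
depth = np.random.rand(200, 200).astype(np.float32)

# Volume nodes hold 3D arrays (slices, rows, columns), so wrap the 2D map in a single slice
depthVolume = slicer.util.addVolumeFromArray(depth[np.newaxis, :, :], name="DepthMap")

# Show the volume in the slice views and fit it to the view size
slicer.util.setSliceViewerLayers(background=depthVolume, fit=True)

# When the markups change, recompute the depth map and update the same node in place
depth = computeDepthMap()  # hypothetical update function
slicer.util.updateVolumeFromArray(depthVolume, depth[np.newaxis, :, :])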

There is a good set of examples on how to work with volume nodes in 3D Slicer using Python in the Volumes section of Script repository — 3D Slicer documentation.

I hope this can set you in a useful direction.


This is a hack, but you can also potentially show any widget in a layout view. For example, in a layout that has a table view, you can replace it with your widget like this:

layoutManager = slicer.app.layoutManager()
# Get the table widget at the given index and hide its own content
tableWidget = layoutManager.tableWidget(tableViewIndex)
tableWidget.visible = True
tableWidget.tableController().visible = False
tableWidget.tableView().visible = False
# Add the new widget into the table widget's layout and make sure it is visible
tableWidget.layout().addWidget(myWidget)
myWidget.visible = True

Actually, this feature is already available. You can show any model as a corner annotation. It even keeps the orientation of the model aligned with the current view orientation. The feature is originally added for displaying orientation markers, but it can be used for displaying complex, dynamically changing content.

All you need to do is to set the model as orientation marker in all 3D views:

# 'MyDepthMap' is the name of the model node that holds the depth map geometry
orientationMarkerNode = slicer.util.getNode('MyDepthMap')
orientationMarkerNode.GetDisplayNode().SetVisibility(False)  # hide it from the viewers, use it only as an orientation marker

viewNodes = slicer.util.getNodesByClass("vtkMRMLAbstractViewNode")
for viewNode in viewNodes:
  viewNode.SetOrientationMarkerHumanModelNodeID(orientationMarkerNode.GetID())
  viewNode.SetOrientationMarkerType(slicer.vtkMRMLAbstractViewNode.OrientationMarkerTypeHuman)
  viewNode.SetOrientationMarkerSize(slicer.vtkMRMLAbstractViewNode.OrientationMarkerSizeMedium)

I tried to summarize what you said; please check whether it is correct.
The depth map I created is in the form of an array.
So my procedure would be as follows (a rough sketch is shown after the list).

  1. Convert the array to an MRML node.

  2. Set up a view node to render the depth map that has been turned into an MRML node.

  3. Render the MRML depth map node in the corresponding view.
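
Here is a rough sketch of steps 2 and 3 as I understand them, assuming the depth map volume node from step 1 is named "DepthMap" (the view name "Red" is just an example):

import slicer

# Step 1 result: the depth map stored in a volume node (created earlier)
depthVolume = slicer.util.getNode("DepthMap")

# Steps 2-3: make a specific slice view show that volume as its background layer
redSliceWidget = slicer.app.layoutManager().sliceWidget("Red")
compositeNode = redSliceWidget.mrmlSliceCompositeNode()
compositeNode.SetBackgroundVolumeID(depthVolume.GetID())

# Re-center the slice views on the new content
slicer.util.resetSliceViews()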

In 3D Slicer, many classes come together to form one piece of metadata, and it seems that I could handle things the way I want if I extracted only the desired properties from that metadata, but it's frustrating because I feel like I'm not good at it yet.

I'll proceed step by step with the opinions of everyone who commented.
Thanks for everyone's help.

Yes, that sounds like the basic steps. There is a lot to learn in order to implement custom functionality, and it may take some time to feel comfortable, but the advantage of Slicer is that all the code is available for inspection and you can learn from lots of examples.

You can also expect that commonly needed operations are already implemented in Slicer.

For example, acquiring images from a depth camera such as the Intel RealSense, and live display of the resulting point cloud, are already available as free, open-source software.

Plus toolkit (www.plustoolkit.org) acquires the data and sends it to Slicer via OpenIGTLink, the SlicerOpenIGTLink extension receives the data, and the DepthImageToPointCloud extension can display the point cloud. The point cloud is already a model node, so you can choose to display it as an orientation marker.
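
For reference, a minimal sketch of the receiving side in Slicer, assuming the SlicerOpenIGTLink extension is installed and a Plus server is streaming on the default OpenIGTLink port (the host name and port are assumptions):

import slicer

# Create an OpenIGTLink connector node (node class provided by the SlicerOpenIGTLink extension)
connectorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLIGTLConnectorNode")

# Connect as a client to the Plus server (18944 is the conventional OpenIGTLink port)
connectorNode.SetTypeClient("localhost", 18944)
connectorNode.Start()

# Incoming images and transforms then appear in the scene as MRML nodes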