Operating system: Windows 11
Slicer version: Slicer 4.13.0
I am developing a Python extension with a function that generates a depth map from markups points.
I want to display this depth map image at a small size inside the 3D Slicer view area,
but what approaches are available to do this?
For the Slicer-Liver extension we have been playing lately with the same idea (an overlay that shows 2D information on top of the 3D viewer). I don't think you can achieve this in Python, though. Our approach has been to create our own MRML nodes and MRMLDisplayableManagers (MRML Overview - 3D Slicer documentation) for the 3D view in C++.
Alternatively, you can generate the depth maps as volumes and display them in one of the 2D slice views. Along these lines, multi-monitor support will be available shortly (you can follow the progress at Multi monitor layout by lassoan · Pull Request #6776 · Slicer/Slicer · GitHub), which allows spawning additional visualization windows (i.e., a single floating 2D window, separated from the main Slicer window) and can probably get you closer to what you want in Python.
One more approach that I have seen implemented (don't remember where, though) is adding a Slice view widget to the side panel, together with the rest of the module's widgets. In this way you can have a single 3D view in the visualization area and the depth maps together with the module's widgets. This should be doable in Python.
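For reference, a rough sketch of this approach in Python, loosely following the show-a-slice-view-outside-the-layout pattern from the Slicer script repository (the layout name, label, and color are placeholders, and the exact calls may vary between Slicer versions):
layoutName = "DepthMapSlice"  # must not clash with view names managed by the layout manager
layoutLabel = "DM"
layoutColor = [1.0, 1.0, 0.0]
# Any node in the scene can own the view instead of the layout manager
viewOwnerNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScriptedModuleNode")
# Create the slice node through a slice logic
viewLogic = slicer.vtkMRMLSliceLogic()
viewLogic.SetMRMLScene(slicer.mrmlScene)
viewNode = viewLogic.AddSliceNode(layoutName)
viewNode.SetLayoutLabel(layoutLabel)
viewNode.SetLayoutColor(layoutColor)
viewNode.SetAndObserveParentLayoutNodeID(viewOwnerNode.GetID())
# Create the slice widget and hook it up to the application's slice logics
viewWidget = slicer.qMRMLSliceWidget()
viewWidget.setMRMLScene(slicer.mrmlScene)
viewWidget.setMRMLSliceNode(viewNode)
sliceLogics = slicer.app.applicationLogic().GetSliceLogics()
viewWidget.setSliceLogics(sliceLogics)
sliceLogics.AddItem(viewWidget.sliceLogic())
# Show as a standalone window; in a scripted module, self.layout.addWidget(viewWidget)
# would embed it in the module panel instead
viewWidget.show()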
Yes, it is. Basically, Slicer does not allow non-rectangular division of the layout. You can divide the layout vertically or horizontally, but that is it. Also, Qt does not support layers, so you cannot show widgets on top of widgets in a controlled way (if you add a widget with a parent but don't add it to the layout, it will show up in the corner, but it may have unwanted consequences; it is not safe). What you can do is tweak the 3D rendering as @RafaelPalomar suggests with MRMLDMs and such, or you can add any widget in the module panel.
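For reference, such rectangular splits can be registered from Python as custom layouts; a minimal sketch (the layout XML and the custom layout ID below are placeholder examples):
customLayout = """
<layout type="horizontal" split="true">
  <item>
    <view class="vtkMRMLViewNode" singletontag="1">
      <property name="viewlabel" action="default">1</property>
    </view>
  </item>
  <item>
    <view class="vtkMRMLSliceNode" singletontag="Red">
      <property name="orientation" action="default">Axial</property>
      <property name="viewlabel" action="default">R</property>
      <property name="viewcolor" action="default">#F34A33</property>
    </view>
  </item>
</layout>
"""
customLayoutId = 501  # any ID not used by the built-in layouts
layoutManager = slicer.app.layoutManager()
layoutManager.layoutLogic().GetLayoutNode().AddLayoutDescription(customLayoutId, customLayout)
layoutManager.setLayout(customLayoutId)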
No matter how much I think about it, this seems too complicated, so I thought of another way.
I know that 3D Slicer has 4 views (one for the model, the others for DICOM data).
Through the code above, I draw a depth map from the "Tag: 1" view node and save it as a DICOM image.
And if I then display that saved DICOM image in the "Tag: 2" view node, will there be a problem?
I know the depth map image isn't a DICOM file,
but I'm doing this to consider ease of use from a UX point of view,
so that I can immediately check the results of the depth map I drew.
The suggestion above to add a widget seems similar to what I am trying to do, but I think this method will be a little easier.
Thinking in DICOM terms is useful when you are going to load/save data to disk. If you are thinking of getting a sort of interactive depth map visualization while you are modifying Markups, it is more useful to think in terms of vtkMRML nodes, which is what the Slicer application uses internally when it runs.
At a high level, the 3D Slicer model is not based on direct manipulation of the views, but on the manipulation of vtkMRML nodes and managers that dictate how these nodes are translated into visualizations in the different views. In C++, you have the possibility to create new vtkMRMLDisplayableManager classes that can define how certain nodes will be displayed; in Python, unfortunately, you don't have that option.
My first approach to this problem (in Python) would be to create a vtkMRMLScalarVolumeNode and configure it so it has a single slice where I can put my data. By modifying the data that the node is holding, you will get an updated visualization. Later you can also save this data in various formats.
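A minimal sketch of this idea, assuming the depth map is a 2D NumPy array (the arrays and node name below are placeholders):
import numpy as np

depthMap = np.random.rand(256, 256).astype(np.float32)  # replace with your own depth map

# Wrap the 2D array as a single-slice scalar volume node
volumeNode = slicer.util.addVolumeFromArray(depthMap[np.newaxis, :, :], name="DepthMap")

# Show it in the slice views
slicer.util.setSliceViewerLayers(background=volumeNode, fit=True)

# When the markups change, recompute the depth map and update the same node in place
newDepthMap = np.random.rand(256, 256).astype(np.float32)
slicer.util.updateVolumeFromArray(volumeNode, newDepthMap[np.newaxis, :, :])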
This is a hack, but you can also potentially show any widget in a layout view. For example in a layout that has a table view, you can replace it with your widget like this:
layoutManager = slicer.app.layoutManager()
tableViewIndex = 0  # index of the table view in the current layout (the layout must contain a table view)
tableWidget = layoutManager.tableWidget(tableViewIndex)
tableWidget.visible = True
# Hide the built-in table controller and table view so only your widget remains
tableWidget.tableController().visible = False
tableWidget.tableView().visible = False
# Add new widget, make sure it is visible
myWidget = qt.QLabel("Depth map preview")  # example: replace with your own widget
tableWidget.layout().addWidget(myWidget)
myWidget.visible = True
Actually, this feature is already available. You can show any model as a corner annotation. It even keeps the orientation of the model aligned with the current view orientation. The feature is originally added for displaying orientation markers, but it can be used for displaying complex, dynamically changing content.
All you need to do is to set the model as orientation marker in all 3D views:
orientationMarkerNode = getNode('MyDepthMap')
orientationMarkerNode.GetDisplayNode().SetVisibility(False)  # hide from the viewers, just use it as an orientation marker
viewNodes = slicer.util.getNodesByClass("vtkMRMLAbstractViewNode")
for viewNode in viewNodes:
    viewNode.SetOrientationMarkerHumanModelNodeID(orientationMarkerNode.GetID())
    viewNode.SetOrientationMarkerType(slicer.vtkMRMLAbstractViewNode.OrientationMarkerTypeHuman)
    viewNode.SetOrientationMarkerSize(slicer.vtkMRMLAbstractViewNode.OrientationMarkerSizeMedium)
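Note that getNode('MyDepthMap') assumes a model node with that name already exists in the scene. A hypothetical sketch for building one from a depth map array, purely as an illustration (the array and height scaling are placeholders):
import numpy as np
import vtk

depthMap = np.random.rand(64, 64)  # replace with your own depth map

# Build a simple point cloud where the depth value becomes the height of each point
points = vtk.vtkPoints()
for r in range(depthMap.shape[0]):
    for c in range(depthMap.shape[1]):
        points.InsertNextPoint(c, r, depthMap[r, c] * 10.0)
polyData = vtk.vtkPolyData()
polyData.SetPoints(points)

# Turn the points into vertices so they are actually rendered
vertexFilter = vtk.vtkVertexGlyphFilter()
vertexFilter.SetInputData(polyData)
vertexFilter.Update()

modelNode = slicer.modules.models.logic().AddModel(vertexFilter.GetOutput())
modelNode.SetName("MyDepthMap")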
I tried to summarize what you said, please check if it is correct.
The depth map I created has the form of an array.
So, my procedure is as follows.
Convert the array into an MRML node.
Configure the view node to render the depth map MRML node.
Render the MRML depth map node in the corresponding view.
In 3D Slicer, many classes come together to form one piece of metadata, and it seems that I can handle it the way I want if I extract only the desired properties from that metadata, but it's frustrating because I feel like I'm not good at it yet.
I'll proceed step by step with the opinions of everyone who commented.
Thanks for everyone's help.
Yes, that sounds like the basic steps. There is a lot to learn in order to implement custom functionality and it may take some time to feel comfortable but the advantage of Slicer is that all the code is available for inspection and you can learn from lots of examples.
You can also expect that commonly needed operations are already implemented in Slicer.
For example, acquiring images from a depth camera, such as an Intel RealSense, and live display of the point cloud is already available as free, open-source software.
Plus toolkit (www.plustoolkit.org) acquires the data and sends it to Slicer via OpenIGTLink, the SlicerOpenIGTLink extension receives the data, and the DepthImageToPointCloud extension can display the point cloud. The point cloud is already a model node, so you can choose to display it as an orientation marker.