I am writing a CLI module in which I am trying to pass an .stl file chosen in a file browser created from the module's .xml file.
In the .cxx file, I read the chosen file, convert the data to polydata, and add it to a new vtkMRMLModelNode.
When I try to add the model to the current scene to see it in the 3D view, it doesn't work. I would appreciate it if anyone could help me. The code that I am using follows the pattern sketched below.
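(Minimal sketch of the approach described; the original snippet is not shown, so the reader class and names are assumptions.)

```cpp
#include <string>
#include <vtkNew.h>
#include <vtkSTLReader.h>
#include <vtkMRMLModelNode.h>
#include <vtkMRMLScene.h>

// inputFile: path from the CLI's file-browser parameter (name assumed).
void LoadModel(const std::string& inputFile)
{
  // Read the chosen .stl file.
  vtkNew<vtkSTLReader> reader;
  reader->SetFileName(inputFile.c_str());
  reader->Update();

  // Put the polydata into a new model node.
  vtkNew<vtkMRMLModelNode> modelNode;
  modelNode->SetAndObservePolyData(reader->GetOutput());

  // Add the node to a scene created inside the CLI process.
  vtkNew<vtkMRMLScene> scene;
  scene->AddNode(modelNode);
}
```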
You create a new scene in your CLI and put the node into that, but that scene is not related to the scene that the Slicer application uses.
If you generate one or a few models, specify them as output data, as is done in the Grayscale Model Maker module.
If you generate many models (or you don't know in advance how many models the output will have), export them into a mini-scene, as is done in the Model Maker module.
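For the first case, the key point is that a CLI geometry output parameter arrives in the .cxx as a plain file path: the module writes its polydata to that path, and Slicer loads the file into the output model node when the module finishes. A minimal sketch, with assumed parameter names inputModel and outputGeometry:

```cpp
#include "MyModuleCLP.h" // generated from the module XML (name assumed)

#include <cstdlib>
#include <vtkNew.h>
#include <vtkSTLReader.h>
#include <vtkXMLPolyDataWriter.h>

int main(int argc, char* argv[])
{
  PARSE_ARGS; // defines std::string inputModel, outputGeometry (names assumed)

  // Read the input mesh from the path Slicer passed in.
  vtkNew<vtkSTLReader> reader;
  reader->SetFileName(inputModel.c_str());

  // Write the result to the output geometry path; Slicer loads this
  // file into the output model node after the CLI completes.
  vtkNew<vtkXMLPolyDataWriter> writer;
  writer->SetInputConnection(reader->GetOutputPort());
  writer->SetFileName(outputGeometry.c_str());
  return writer->Write() ? EXIT_SUCCESS : EXIT_FAILURE;
}
```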
I'm trying to do it like the Grayscale Model Maker, but I can't pass the polydata from the .stl model file to the output node created for the output geometry. Also, the output geometry node is created when I apply the CLI, but it isn't displayed in the scene, and I don't have a handle to the output geometry model node that is created by default, so I can't pass the information to it. How can I pass the model's polydata to the output geometry?
I want to pass a .stl file, chosen in a file browser in the CLI module, and put it in the scene so I can modify its position with a transform. I have two parameters in my xml file for doing this. I expect to set the path of the .stl file in the first parameter, create a new output geometry in the second, and then, when I apply the module, have the chosen .stl model in my 3D view. Currently, when I press Apply, the module returns "Status: Completed with errors"; the vtkMRMLModelNode created with the Output Geometry label appears in the Data module, but it has no polydata and can't be visualized.
The parameters of the xml file are:
<file fileExtensions=".stl">
  <longflag>InputVolume</longflag>
  <description><![CDATA[Input file containing the camera model]]></description>
  <label>Camera model file</label>
  <channel>input</channel>
</file>
<geometry type="model" reference="InputVolume">
  <name>OutputGeometry</name>
  <label>Output Geometry</label>
  <channel>output</channel>
  <index>1</index>
  <description><![CDATA[Output that contains geometry model.]]></description>
</geometry>
Do you have volume input/output or is that only a mistake in the xml file (InputVolume) and in the cxx file (vtkITKArchetypeImageSeriesReader)?
Instead of the file tag, use geometry for models (surface meshes) or image for volumes. If you use file, it means the input is read from a file (not from the scene) and the result is written to a file (not loaded into the scene).
If you need to transform a mesh then it’s better to leave it to Slicer, as it can apply the transform to any loaded data types (volumes, models, markups, other transforms) dynamically. For that, have a transform output element. You can apply a transform by drag-and-dropping your model under the transform in the Data module (or in the Transforms module: select your transform node at the top, then use the Apply transform section to apply it to a node).
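For reference, a CLI transform output is also just a file path: you would write an ITK transform file there, and Slicer loads it as a transform node when the module finishes. A minimal sketch, assuming the parameter is named outputTransform:

```cpp
#include <string>
#include <itkAffineTransform.h>
#include <itkTransformFileWriter.h>

// outputTransform: file path generated by Slicer for the CLI's
// transform output parameter (name assumed).
void WriteOutputTransform(const std::string& outputTransform)
{
  // The transform your module computes (identity placeholder here).
  using TransformType = itk::AffineTransform<double, 3>;
  TransformType::Pointer transform = TransformType::New();

  // Slicer loads this file as a transform node when the CLI finishes.
  itk::TransformFileWriter::Pointer writer = itk::TransformFileWriter::New();
  writer->SetInput(transform);
  writer->SetFileName(outputTransform);
  writer->Update();
}
```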
There are many registration methods already available in Slicer between images, models, points, etc. Have you checked whether there is a similar module that does what you need? You could get started much more easily by modifying and improving an existing module.
What I want to do in my module is to have an input model and an input transform, choose them, and, when I apply, see the transformed model in the scene, but I can't find any module that does this. Afterwards, I want to pass a transform from other software (ORBSLAM2) to move my model in real time.
It’s very good to know what you actually want to achieve.
CLI modules are not ideal for real-time processing, as there is some overhead with writing/reading parameters and running the CLI module.
You could implement camera-based tracking more efficiently in a loadable module, because it has direct access to all objects in the application. However, we implement real-time imaging and tracking applications in Slicer using an external process that performs all the data acquisition from hardware and the pre-processing (computing transforms, etc.) and communicates with Slicer using the OpenIGTLink protocol.
You can either implement the separate application from scratch and link the very simple and small OpenIGTLink library to be able to send transforms to Slicer in real time, or you can use an existing generic application such as the Plus toolkit, which can already connect to a wide range of cameras, other imaging and tracking devices, inertial measurement units, and other sensors; compute and combine transforms; calibrate and fuse data streams; and send them to Slicer through OpenIGTLink. Once you have transforms updating in real time in Slicer, you can use the SlicerIGT extension to implement a complete surgical navigation or guidance system without any additional programming (there are several tutorials on the SlicerIGT website that describe how to do this, with step-by-step instructions).
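As an illustration, a minimal OpenIGTLink client that sends a single transform to Slicer might look like this (assuming the OpenIGTLinkIF module is listening as a server on the default port 18944; the device name is arbitrary):

```cpp
#include "igtlClientSocket.h"
#include "igtlMath.h"
#include "igtlTransformMessage.h"

int main()
{
  // Connect to Slicer's OpenIGTLinkIF server (default port 18944).
  igtl::ClientSocket::Pointer socket = igtl::ClientSocket::New();
  if (socket->ConnectToServer("localhost", 18944) != 0)
  {
    return 1; // connection failed
  }

  // Fill a 4x4 matrix; a real tracker (e.g., ORBSLAM2) would update
  // this in a loop at the camera frame rate.
  igtl::Matrix4x4 matrix;
  igtl::IdentityMatrix(matrix);

  igtl::TransformMessage::Pointer message = igtl::TransformMessage::New();
  message->SetDeviceName("CameraToReference"); // node name in Slicer (assumed)
  message->SetMatrix(matrix);
  message->Pack();

  socket->Send(message->GetPackPointer(), message->GetPackSize());
  socket->CloseSocket();
  return 0;
}
```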
Probably the simplest way to get a complete RGBD-camera-based guidance system is to add a new device class to the Plus toolkit that connects to a camera directly, or uses the output of a Plus camera device, and computes transforms (see, for example, how our IntelRealSense-based tracking device works in Plus). Plus can take care of sending those transforms to Slicer, and SlicerIGT takes care of calibrating and visualizing tools and surfaces. This summer we'll add better support in Plus for Intel RealSense RGBD cameras and will implement marker tracking (using OpenCV and probably ArUco). In the future we will be interested in surface-based tracking, so we would be happy to help you and see what you can get from ORB SLAM2.
Feel free to ask any Plus related questions on the Plus website and any Slicer or SlicerIGT related questions on this forum.