Thank you for the detailed explanation. I think I now understand your needs better.
- Updating of CAD models
It can certainly be useful to update your CAD model as you combine it with anatomical shapes. We plan to implement real-time synchronization with selected modeling software (Blender and FreeCAD) so that if you modify your CAD model, the changes show up immediately in Slicer. This would make it easier to create an updated volumetric mesh whenever you adjust your CAD model.
- Mesh generation
COMSOL’s mesh generation capabilities are very nice, but you don’t need to use them if you generate your volumetric mesh in other software. COMSOL can load volumetric meshes in COMSOL or NASTRAN format (see the “Importing COMSOL Meshes” slide here). This is the approach that Materialise Mimics and Simpleware ScanFE use, and you can do it with Slicer, too: save the volumetric mesh as a VTK unstructured grid file (.vtu) and convert it to COMSOL or NASTRAN format using FEconv. With a little work, FEconv could be added to Slicer as an extension, and then you could save the volumetric mesh directly from Slicer to a variety of FE mesh file formats.
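As a rough sketch, the FEconv step could be scripted from Python. The file names, the output format, and the exact FEconv command line below are assumptions; check the FEconv documentation for the formats and options your build supports:

```python
import subprocess

# Hypothetical file names; FEconv picks the formats from the file extensions.
input_mesh = "volumetric_mesh.vtu"   # VTK unstructured grid saved from Slicer
output_mesh = "volumetric_mesh.bdf"  # NASTRAN bulk data format (assumed extension)

# Assumed invocation shape: feconv <input> <output>
cmd = ["feconv", input_mesh, output_mesh]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment once FEconv is on the PATH
```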
- Converting surface meshes to NURBS/BRep solids
We are using SolidWorks and Autodesk Fusion 360, and of course they cannot automatically convert meshes to solids (NURBS or BRep). This is similar to how vector graphics programs deal with bitmap images: they can load them, display them nicely, and can convert them to vector graphics only approximately, with some data loss.
Reverse engineering tools sometimes work for simple geometric shapes (Fusion 360 sets the limit at about 10000 facets), but if you want a usable result then you have to re-create the object manually (see for example this tutorial).
@moondrake99 thank you very much for the way you carried out the discussion. I’m sorry for my absence in the meantime, but I found that you described the problem more exhaustively than I could have.
Let me try to explain my simpler challenge again, and the reason why I need geometry: for anatomical structures, particularly in the knee and for contact problems in FEM, it would be better to have volumetric HEXAHEDRAL meshes than tetrahedral meshes. More generally, I need a certain amount of control over the meshing process (number and TYPE of elements): with free software, I am having trouble converting native triangular (tetrahedral) meshes into smooth surfaces (CAD geometries, i.e. NURBS) that I can mesh the way I want (ideally hexahedral meshes, generated automatically: the “I.A.FEMesh” software does this, but mostly manually).
Here is a recent reference strictly related to my problem:
“Automated hexahedral meshing of knee cartilage structures – application to data from the osteoarthritis initiative”
I was looking at this extension. Does it need a segmented image as input to produce a mesh? Is there any explanation on how to use the Segment Mesher extension? Is there any extension available that can create volumetric tetrahedral meshes directly from images in 3D Slicer?
You need a segmentation node as input. You can create a segmentation node from an existing labelmap volume or model node (surface mesh). If all you have is a grayscale input volume, then you can create a segmentation node using the Segment Editor module.
Yes. See documentation here: GitHub - lassoan/SlicerSegmentMesher: Create volumetric mesh from segmentation using Cleaver2 or TetGen
Without segmentation? Would you like to fill the image volume with uniformly sized tetrahedra? If you segment structures then you can create a volumetric mesh for those using Segment mesher extension.
I tried to use the segment mesher extension. Followed these steps:
Downloaded Braintumor data
Labeled tumor and brain using editor in segmentation
Mesh generation >> Cleaver
Selected labels as inputs
Output models new >> output model1 and output model 2
Apply and got the results you can see in the image in 3D view.
Can I get mesh generation for each of the labels separately?
How can I extract the nodes of the mesh generated for the structure in Python?
In segmentMesher I did the following. Please see the screen shot.
In segmentation I can only select segmentation. I do not see any other option.
What am I doing wrong? Please tell me.
Use a recent nightly version of Slicer. You can segment using the Segment Editor module and create a volumetric mesh using the Segment Mesher extension.
I did the segmentation with Segment Editor for the brain portion.
Then Segment Mesher generated the following.
How can I access all the nodes (points) of the mesh generated for the brain portion?
The segment index is stored in the mesh for each cell as cell scalar data. If n is the output model node, you can get the mesh with:
mesh = n.GetMesh()
Dear Lasso, I need to understand its architecture.
The aim is to extract the vertices and their x, y, z locations for only the brain portion of the generated mesh. Please help.
Is there any way to get a description of the methods and the pipeline to use?
You can extract any portion of the mesh in Slicer’s Python console, using VTK filters that extract cells whose scalars are in a specified range. Probably the threshold filter would work, and it takes only a few lines of Python code.
If you are not familiar with VTK then read the VTK textbook (https://blog.kitware.com/vtk-textbook-and-users-guide-now-available-for-download/) or use other mesh pre-processing software that can extract sub-meshes, such as Paraview or MeshLab.
Thank you Andras. I will contact you once I am done.
for j in range(mesh.GetNumberOfCells()):
...   cellObject = mesh.GetCell(j)
...   pts = cellObject.GetPoints()
...   data = pts.GetData()
...   for i in range(data.GetNumberOfTuples()):
...     print("%s",data.GetTuple(i))
...
('%s', (-72.65650177001953, 62.96849822998047, 56.67499923706055))
('%s', (-72.65650177001953, 89.84349822998047, 56.67499923706055))
('%s', (-91.40650177001953, 89.84349822998047, 56.67499923706055))
('%s', (-72.65650177001953, 68.21849822998047, 78.30000305175781))
('%s', (-72.65650177001953, 116.71849822998047, 56.67499923706055))
('%s', (-72.65650177001953, 116.71849822998047, 29.799999237060547))
('%s', (-91.40650177001953, 97.96849822998047, 48.54999923706055))
('%s', (-91.40650177001953, 116.71849822998047, 48.54999923706055))
Dear Andras, is this right?
These are the point locations (x, y, z) for every tetrahedron, which I understand.
Please tell me if I am doing anything wrong.
Another thing: I want to color a specific cell in the generated mesh. How can I do this?
I am also looking into vtkExtractCells.
I cannot really follow what you are trying to do, but you can find a complete example of extracting a submesh corresponding to a specific material here:
from vtk import vtkThreshold, vtkDataObject, vtkDataSetAttributes

# Extract submeshes corresponding to 2 different material Ids
cellData = mesh.GetCellData()
cellData.SetActiveScalars("Material Id")

threshold2 = vtkThreshold()
threshold2.SetInputData(mesh)
threshold2.SetInputArrayToProcess(0, 0, 0, vtkDataObject.FIELD_ASSOCIATION_CELLS, vtkDataSetAttributes.SCALARS)
threshold2.ThresholdByLower(2)
threshold2.Update()
meshMat2 = threshold2.GetOutput()

threshold3 = vtkThreshold()
threshold3.SetInputData(mesh)
threshold3.SetInputArrayToProcess(0, 0, 0, vtkDataObject.FIELD_ASSOCIATION_CELLS, vtkDataSetAttributes.SCALARS)
threshold3.ThresholdByUpper(3)
threshold3.Update()
meshMat3 = threshold3.GetOutput()
I am trying to extract every point ID along with its x, y, z coordinates.
Specifically, the aim is to extract the locations of all the cells of the segmented portion of the mesh.
You can extract a sub-mesh using the VTK threshold filter and then access all the point coordinates in a numpy array.
Is it possible to generate a surface mesh for the brain using Segment Mesher? What would be the steps for doing it?
You can get a surface mesh by exporting the segmentation to a model node. If you need the surface mesh corresponding to the volumetric mesh, with direct correspondence to the volumetric mesh point IDs, then you can use the VTK geometry filter on the volumetric mesh.
Yes I need to generate a surface mesh corresponding to the volumetric mesh. Can you please give details on how to do it?
getNode -> getMesh -> SetOutputData for extract filter = mesh -> ExtractInsideOn -> ??? -> mapper -> actor -> renderer -> render window -> render window interactor
Every time the mesher is used, a cube is generated within which the geometry of interest (brain) exists. Is it possible to get the volumetric mesh for only the segmented portion of the geometry, or do you have to extract it from the cube?