tldr: I would like to cut a Model in “half” using a non-flat planar surface I created with Markup > Geometric Surface Grid. Is it possible? Further explanation below.
I have an asymmetrical brain Volume from a CT scan that I have made a Model out of. I am trying to cut it into two hemispheres so that I can measure the actual volume of each.
Because of the asymmetry a straight vertical cut is unsatisfactory, and it seems the best way to follow the longitudinal fissure is with homologous points on the Geometric Surface Grid (I need repeatability, and the workflow will later be automated by some Python gurus with the help of a named template).
I would like to cut the resulting Model with a GSG, or, failing that, cut the Volume with a GSG and make two Models from the result so that I can measure their actual volumes.
Is it possible?
I think this is possible even without scripting. Once established, a simple module can be created easily based on the workflow, especially with the current capabilities of agents.
1. The grid surface needs a surface model node, so that the surface patch is in a data structure that can be used more easily later.
2. The model node containing the patch needs to be extruded so that it becomes a closed surface containing the half you need to keep. I haven’t used it yet, but it seems that the Extrude tool in the Dynamic Modeler module can do this.
3. Get the intersection of the two models using the Combine Models module in the Sandbox extension. I’m not sure how robust it is, but this would be my first option.
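For step 2, here is a rough sketch of driving the extrusion from Slicer’s Python console. This is untested: the tool name, the reference role strings, and the node name `plane` are all assumptions, so check them against the Dynamic Modeler module GUI before relying on this.

```python
import slicer

# "plane" is an assumed node name for the model made from the grid surface patch.
gridModel = slicer.util.getNode("plane")

extruder = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLDynamicModelerNode")
extruder.SetToolName("Extrude")  # assumed tool name, as listed in the Dynamic Modeler module
extruded = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "extruded")

# The reference role strings below are assumptions; inspect the tool's
# node references in the module to confirm the exact names.
extruder.SetNodeReferenceID("Extrude.InputModel", gridModel.GetID())
extruder.SetNodeReferenceID("Extrude.OutputModel", extruded.GetID())
slicer.modules.dynamicmodeler.logic().RunDynamicModelerTool(extruder)
```

The intersection in step 3 can then be run on `extruded` and the brain model in the Combine Models module GUI.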
I can’t get that to work.
I made a model called “plane” and extruded it away from a plane, as you can see in the screenshot.
Then I tried the various operations in Combine Models, including Intersection, on the brain model and the extruded model, but it just thinks for five minutes or so and then produces an intersection model that is always empty.
I am at a loss. Is there an easier way? Could I use a model or the grid markup as a border for a segmentation instead?
If surface mesh operations are not reliable (as I mentioned above, I was not sure how robust this is, because it is a very hard problem; some tools were added by @mau_igna_06 which I haven’t used yet), then you could use the Segment Editor: convert both the brain model and the extruded model to segmentations, and create the intersection with the Logical operators effect. Actually, I’d be surprised if your brain model was not a segmentation before you made it a model.
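In Slicer’s Python console this conversion-plus-intersection can be scripted roughly like this, following the usual Segment Editor scripting pattern from the Slicer script repository. A sketch, not tested here; the node names `brain`, `extruded`, and `CT` are assumptions you would replace with your own:

```python
import slicer

# Assumed node names; replace with your own.
brainModel = slicer.util.getNode("brain")
extrudedModel = slicer.util.getNode("extruded")
volumeNode = slicer.util.getNode("CT")  # reference geometry for the binary labelmaps

# Import both models into one segmentation node.
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
segLogic = slicer.modules.segmentations.logic()
segLogic.ImportModelToSegmentationNode(brainModel, segmentationNode)
segLogic.ImportModelToSegmentationNode(extrudedModel, segmentationNode)
segmentationNode.CreateBinaryLabelmapRepresentation()

# Set up a Segment Editor and apply Logical operators -> Intersect.
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setSourceVolumeNode(volumeNode)  # setMasterVolumeNode in older Slicer

brainSegId = segmentationNode.GetSegmentation().GetNthSegmentID(0)
extrudedSegId = segmentationNode.GetSegmentation().GetNthSegmentID(1)
segmentEditorNode.SetSelectedSegmentID(brainSegId)
segmentEditorWidget.setActiveEffectByName("Logical operators")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("Operation", "INTERSECT")
effect.setParameter("ModifierSegmentID", extrudedSegId)
effect.self().onApply()

# Read out the volume of the intersected segment with Segment Statistics.
import SegmentStatistics
segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()
print(stats[brainSegId, "LabelmapSegmentStatisticsPlugin.volume_cm3"])
```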
By the way, looking at your first picture (on the left side):
That brain model has a lot of holes, which makes the CombineModels algorithm work harder and increases the chance that it will fail. I suggest you save both models, the brain and the grid, as .vtk files and open an issue for the maintainer of the algorithm → here
I would try to measure the volume on the labelmap domain, as @cpinter suggested, since it is straightforward; you may need to check whether your hardware is powerful enough to reach a measurement quality that makes the computed volume error negligible.
You may also need to upsample your image/segmentation to achieve smaller voxels, which increases the measurement quality at the expense of more computing time and memory.
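To see why voxel size matters, here is a small self-contained illustration (plain Python, no Slicer needed): voxelize a 20 mm radius sphere at two spacings and compare the counted volume against the analytic value. The partial-volume error shrinks as the voxels get smaller, at the cost of many more voxels to process.

```python
import math

def voxelized_sphere_volume(radius_mm, spacing_mm):
    """Count voxels whose center lies inside the sphere; volume = count * voxel volume."""
    n = int(math.ceil(radius_mm / spacing_mm)) + 1
    count = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                x, y, z = i * spacing_mm, j * spacing_mm, k * spacing_mm
                if x * x + y * y + z * z <= radius_mm ** 2:
                    count += 1
    return count * spacing_mm ** 3

true_volume = 4.0 / 3.0 * math.pi * 20.0 ** 3   # analytic volume of the sphere
coarse = voxelized_sphere_volume(20.0, 2.0)     # 2 mm voxels
fine = voxelized_sphere_volume(20.0, 0.5)       # 0.5 mm voxels: 64x the voxel count
print("relative error at 2 mm:  ", abs(coarse - true_volume) / true_volume)
print("relative error at 0.5 mm:", abs(fine - true_volume) / true_volume)
```

The same trade-off applies to a real segmentation: upsampling the labelmap before measuring shrinks the error at the voxel boundary, but the memory and runtime grow with the cube of the upsampling factor.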
Yes, the holes were a contrast issue I introduced early on when playing with the volumes. I just didn’t go back and fix it, as this was mainly experimenting with a technique and I wasn’t concerned, though maybe I should have been.
cpinter’s second suggestion worked after converting the models back into segmentations, putting them into a single segmentation, and using the Logical operators Intersect effect properly.