I have separated one vertebra into its own segmentation by copying the segment from the main segmentation.
So far I have been able to generate a volumetric mesh in Segment Mesher with Cleaver, but it looks like the interior volumes are not being meshed correctly and the cortex is very thick:
In most cases, you can fill internal holes in a segment using the Wrap Solidify effect (provided by the SurfaceWrapSolidify extension). This tool can deal with quite complex shapes, but each vertebra has a hole in it and the effect cannot deal with that. So, you have to split the segment into two along either the coronal or sagittal axis, solidify each, then merge them into one segment again. This should work if you only need to solidify a few vertebrae.
If you need to segment many vertebrae, then splitting & merging each could be tedious. In this case, you can segment the spine so that you don’t have internal holes in the first place. You can achieve this by using the Grow from seeds effect. If you paint seeds inside the bone, then the hypodense regions in the bone will be filled, too.
Let me add some details to Michelle’s post. The spine was segmented using the Grow from seeds effect, which works remarkably well. For the segmentation, the volume is cropped and resampled to isotropic 0.3125 mm spacing prior to using Grow from seeds. These are cancer spines, thus requiring hours of fixing due to the lack of resolution (0.3125 mm in-plane by 0.6-1.5 mm slice spacing), resulting in the need to separate the facet joints, destroyed geometry, etc.
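Just to make the resampling step concrete for readers following along: making the 0.6-1.5 mm slice spacing match the in-plane spacing is normally done with Slicer's Crop Volume module. Purely to illustrate the interpolation involved, here is a minimal 1-D sketch (function name and sample values are invented for the example):

```python
def resample_1d(values, old_spacing, new_spacing):
    """Linearly resample a 1-D intensity profile onto a new grid spacing.

    Slicer's Crop Volume module does the real (3-D) job; this sketch only
    shows the interpolation idea behind making slice spacing isotropic.
    """
    length = (len(values) - 1) * old_spacing
    n_new = int(length / new_spacing) + 1
    out = []
    for k in range(n_new):
        x = k * new_spacing / old_spacing      # position in old-index units
        i = min(int(x), len(values) - 2)       # left neighbor index
        t = x - i                              # interpolation weight
        out.append(values[i] * (1 - t) + values[i + 1] * t)
    return out

profile = [0.0, 100.0, 200.0]           # samples at 1.0 mm spacing
fine = resample_1d(profile, 1.0, 0.5)   # resample to 0.5 mm spacing
```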
One of the problems that we are facing is that the spines are highly lytic, with the cancellous bone often having a range of 50-150 HU, which is remarkably low and close to the discs (40-115 HU), tumor tissue, and even muscle tissue. I have found that setting the threshold to about 150 allows the effect to perform a good initial segmentation of the geometry, with seed locality set to 6.5-7.5 (up to 9) to keep the bleed-out into the surrounding discs and tissue to a minimum. As a result, the cancellous bone within the vertebral geometry is not fully captured, as can be seen on the segment masks.
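As a side note on why a 150 HU threshold misses the lytic cancellous bone: here is a deliberately simplified toy region-grow. This is a plain flood fill with an intensity cutoff, not the GrowCut-style algorithm Slicer's Grow from seeds actually uses, and the grid values and thresholds are invented for the illustration:

```python
from collections import deque

def grow_from_seed(image, seed, threshold):
    """Toy flood fill: collect connected pixels with intensity >= threshold.

    NOT Slicer's Grow from seeds algorithm (which is a competitive,
    GrowCut-style region growing); this only illustrates why a high
    threshold excludes low-HU cancellous voxels from the result.
    """
    rows, cols = len(image), len(image[0])
    visited = {seed}
    queue = deque([seed])
    region = []
    while queue:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                if image[nr][nc] >= threshold:   # below-threshold pixels are never enqueued
                    visited.add((nr, nc))
                    queue.append((nr, nc))
    return region

# Toy "CT slice": ~400 HU cortex ring around a lytic ~100 HU core,
# surrounded by ~40 HU soft tissue.
slice_hu = [
    [ 40,  40,  40,  40,  40],
    [ 40, 400, 400, 400,  40],
    [ 40, 400, 100, 400,  40],
    [ 40, 400, 400, 400,  40],
    [ 40,  40,  40,  40,  40],
]
high = grow_from_seed(slice_hu, (1, 1), threshold=150)  # misses the 100 HU core
low = grow_from_seed(slice_hu, (1, 1), threshold=50)    # captures it, but a lower
                                                        # threshold risks leaking into tissue
```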
I have used Wrap Solidify with the threshold set to 0 (outer surface, carve holes at 7 mm) to fully segment the vertebra, as can be seen in the image. The resulting model seems to be only a surface mesh.
This is one of the models that we are trying to use as the input for the meshing. I am not sure what parameter I am missing in order to get the volume model from this module.
As we are trying to figure out the pipeline (CT-FEA), how best should we proceed so that we can apply Segment Mesher to create the volumetric tet mesh? How do we apply material models?
Once you have the solidified (and merged) segment, you can use SegmentMesher module to create a volumetric mesh. Or export as a surface mesh and use any other mesher.
You can specify material models, loads, boundary conditions, etc. in the preprocessor of your solver. If it helps, you can save the CT density values at each mesh point using “Probe volume with model” module.
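To sketch that last point: a common CT-FEA convention maps the probed HU values to apparent density with a linear calibration and then to Young's modulus with a power law. Everything below is illustrative only; the coefficients are placeholders that would need to be calibrated for your scanner (e.g., with a density phantom) and validated for vertebral bone:

```python
def hu_to_density(hu, slope=0.0008, intercept=0.0):
    """Linear HU -> apparent density (g/cm^3) calibration.

    slope/intercept are PLACEHOLDERS; calibrate against a density phantom
    scanned with your own protocol. Clamped at zero for air/fat voxels.
    """
    return max(slope * hu + intercept, 0.0)

def density_to_modulus(rho, a=6850.0, b=1.49):
    """Power-law density -> Young's modulus (MPa): E = a * rho**b.

    Coefficients are illustrative; use a density-modulus relationship
    validated for vertebral trabecular bone in your own pipeline.
    """
    return a * rho ** b

# Map the HU value probed at each mesh point to a modulus:
hu_at_points = [150.0, 400.0, 1200.0]
E = [density_to_modulus(hu_to_density(hu)) for hu in hu_at_points]
```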
People have asked about vertebral FEM modeling many times on this forum, but unfortunately we did not get many updates from them later, so we don’t know what workflow they ended up implementing - but a few tips:
Not many FEM tools can read VTK unstructured grids, but FEBio/FEBio Studio can, and it is free and works quite nicely for biomedical applications.
If you want to use SegmentMesher output in software that cannot read VTK unstructured grid files, then you can use the meshio Python package to convert to other formats.
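For example, targeting Abaqus: meshio itself does the conversion in one call (e.g., `meshio.write("mesh.inp", meshio.read("mesh.vtu"))`). The hand-written sketch below is only meant to show what a minimal converted Abaqus input deck looks like for a single tetrahedron; a real model would also need node/element sets, materials, and steps:

```python
def write_abaqus_inp(nodes, tets, path):
    """Write nodes and 4-node tetrahedra as a minimal Abaqus .inp file.

    Hand-rolled sketch for illustration; in practice the meshio package
    performs this conversion (and many others) directly.
    """
    lines = ["*NODE"]
    for i, (x, y, z) in enumerate(nodes, start=1):
        lines.append(f"{i}, {x}, {y}, {z}")
    lines.append("*ELEMENT, TYPE=C3D4")  # linear tetrahedron
    for i, (a, b, c, d) in enumerate(tets, start=1):
        # Abaqus uses 1-based node indices.
        lines.append(f"{i}, {a + 1}, {b + 1}, {c + 1}, {d + 1}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

nodes = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tets = [(0, 1, 2, 3)]
write_abaqus_inp(nodes, tets, "single_tet.inp")
```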
Thank you for the very quick reply. We will be more than happy to update the community. We currently have 45 spines segmented, with each one having (or due to have) multiple follow-up scans at 3, 6, 9, and 12 months.
We have many vertebrae to do per spine, as we are looking at the effect of cancer on vertebral strength, so the split/merge approach may not work. If I try to use values lower than 150 for Grow from seeds, then the whole volume (entire torso) gets painted. Would it be possible to run a second segmentation within the bounds of the initial segmentation, with a lower seed threshold, to solve this problem?
It’s a lot of work, but I look at this as the hard road to getting truly labeled models for the development of an AI approach; we expect to have 430 spines * 4 time points.
I think I may have somewhat “hijacked” Michelle’s original question regarding the thick cortex at the vertebral body, which is much greater than what we are able to visualize on the CT. Ron
If you have already spent a lot of time segmenting the vertebrae, then splitting, solidifying, and merging should be less work than segmenting again. You can fully automate the process with Python scripting.
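To make the automation idea concrete, the core splitting rule can be sketched in plain Python. In Slicer you would drive Segment Editor effects (e.g., Scissors, then Wrap Solidify, then Logical operators to merge) from a script instead; the voxel-set representation and axis choice below are toy assumptions for the illustration:

```python
def split_along_axis(voxels, axis=1):
    """Split a set of (i, j, k) voxel coordinates into two halves at the
    midpoint of their bounding box along the given axis.

    In Slicer the equivalent step would be done with scripted Segment
    Editor effects; this pure-Python sketch only shows the splitting rule.
    """
    coords = [v[axis] for v in voxels]
    mid = (min(coords) + max(coords)) / 2.0
    front = {v for v in voxels if v[axis] <= mid}
    back = voxels - front
    return front, back

# Toy 2x4x1 block of voxels standing in for a vertebra segment:
segment = {(i, j, 0) for i in range(2) for j in range(4)}
front, back = split_along_axis(segment, axis=1)
# After solidifying each half (Wrap Solidify), merge them back into one segment:
merged = front | back
```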
This is the expected behavior. You can add an “other” segment that contains seeds outside the vertebrae, and remove any leaks by painting this “other” segment nearby. You can even use Grow from seeds if there is no contrast difference; you just need to paint more seeds (and always paint seeds on both sides of places where no intensity difference exists but you still want to have a contour).
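On the earlier question about running a second segmentation within the bounds of the first: one way to do this in Slicer is the Segment Editor masking setting (restrict the editable area to inside an existing segment). A toy sketch of the idea, with invented HU values, restricting a lower threshold to the solidified segment:

```python
def threshold_within_mask(image, mask, threshold):
    """Second-pass thresholding restricted to an existing mask.

    Mirrors restricting the editable area to an existing segment in the
    Segment Editor: pixels outside the first-pass (solidified) mask can
    never be added, so a lower threshold cannot leak into nearby tissue.
    """
    return {(r, c) for (r, c) in mask if image[r][c] >= threshold}

# Toy slice: vertebra occupies rows/cols 1..3; the interior (2,2) is a
# lytic 100 HU region, the cortex is 400 HU, soft tissue is 60 HU.
slice_hu = [
    [60,  60,  60,  60, 60],
    [60, 400, 400, 400, 60],
    [60, 400, 100, 400, 60],
    [60, 400, 400, 400, 60],
    [60,  60,  60,  60, 60],
]
# First pass (threshold 150) + Wrap Solidify would give the filled footprint:
solidified = {(r, c) for r in range(1, 4) for c in range(1, 4)}
# Second pass: lower threshold, but only inside the solidified segment, so
# the 60 HU tissue (which also passes threshold 50) stays excluded.
refined = threshold_within_mask(slice_hu, solidified, threshold=50)
```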
This is a large number, so it may make sense to spend some time experimenting with various methods and automate things as much as possible using Python scripting. For segmenting a new time point of the same patient, you may utilize image registration (e.g., SlicerElastix).
You can make the volumetric mesh resolution very fine to capture arbitrarily small details, but since this would lead to very large and complex meshes, most commonly you create a simple mesh and just vary material properties across the mesh.
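One common way to vary material properties across a simple mesh, without assigning a unique property per element, is to bin elements into a few discrete material sets by their HU values. A minimal sketch; the bin count and HU range here are arbitrary placeholders:

```python
def bin_materials(element_hu, n_bins=4, hu_min=0.0, hu_max=1600.0):
    """Assign each element a discrete material ID by binning its mean HU.

    A common CT-FEA simplification: elements are grouped into a handful of
    material sets whose properties are then computed from each bin's
    midpoint HU. Bin count and HU range are illustrative placeholders.
    """
    width = (hu_max - hu_min) / n_bins
    ids = []
    for hu in element_hu:
        b = int((hu - hu_min) / width)
        ids.append(min(max(b, 0), n_bins - 1))  # clamp into [0, n_bins-1]
    return ids

mats = bin_materials([50.0, 390.0, 410.0, 1550.0])
```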
Alas, I do not know how to program in Python. I believe Michelle does. May I trouble you to help Michelle in this regard?
I will try this approach.
Yes, we fully expect to have to get to registration at a later stage of the project, thanks for the initial direction.
We may have to go down this route, as vertebral mechanics is very sensitive to cortex thickness, and the picture gets much more complicated once we deal with vertebrae with mixed lesions (breast, lung, and prostate) or osteosclerotic lesions (breast, prostate), for which the association between bone material and architecture starts to break down. Some of these bones have up to 90% bone volume vs. the normal 10-15%.
Andras
Thank you for your response. As you may see, in collaboration with MIT, we are trying to establish the pipeline for segmenting cancer spines (both at radiation planning and longitudinal follow-up) and converting the multi-vertebral segmentation to FE. We will include disc models later, with the hope of incorporating DTI-to-FE-based models as well as muscle-based models. I have used Mimics, TrueGrid, and ABAQUS to do portions of the pipeline, but never at this scale. I would very much like to try using Slicer for this development, not only for the issue of cost, but as a platform for developing tools for clinical use and better management of these very infirm patients. We have multiple questions regarding the imaging and the development of code to carry out operations on the image data (e.g., how best to reslice the vertebrae for analytical computation of strength) and establishing AI/atlas-based approaches for segmentation. My question: is it worth opening a topic for the spine? I will welcome any interest in collaboration on this project. Alternatively, we can just post issues as they arise. Thanks
We can continue the discussion here but it is probably more clear if you create a new topic for each specific question you have. You can also join one of the weekly Slicer teleconferences to discuss in person.
@Michael_Hardisty could also have some useful inputs, as they did quite a bit of work on this topic using Slicer.
This sounds like a very interesting project and I hope Slicer becomes a useful part of your workflows. If you end up wanting to develop and disseminate a customized set of tools for this problem domain, such as a SlicerSpine extension, we could also create a top level Slicer Community like we have for SlicerHeart, SlicerDMRI, etc.