I would like to use a statistical shape model that I created in order to segment new structures from DICOM files.
Basically I have a shape model (VTK polydata) controlled by 26 scalar parameters. The mathematical operations are very simple: each parameter w_i is just a weight multiplied by a basis vector v_i of 20000 elements, and the sum X = sum_i (w_i * v_i) gives the positions of the points of the mesh (20000 points).
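For what it's worth, evaluating such a model is essentially a one-liner in numpy. A minimal sketch, assuming the basis vectors are stacked as rows of a matrix and each one stores flattened (x, y, z) coordinates (the array names and sizes here are placeholders, not my actual code):

```python
import numpy as np

# Stand-in sizes and arrays: 26 modes, 20000 mesh points (x, y, z per point, flattened)
n_modes, n_points = 26, 20000
V = np.zeros((n_modes, n_points * 3))   # rows are the basis vectors v_i
w = np.zeros(n_modes)                   # the 26 weights w_i to be optimized

# X = sum_i w_i * v_i, reshaped back to (n_points, 3) point coordinates
X = (w @ V).reshape(n_points, 3)
```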
I would like to adapt this model to a DICOM file by running an optimization over these 26 parameters.
Do you know of something ready to use to do this? The metric to optimize could be based on the gradient of the DICOM images, highlighting edges in the images and trying to bring the surface mesh and these edges as close together as possible.
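To make that idea concrete, here is a very rough sketch of such an edge-attraction metric using SciPy. It ignores the DICOM origin and direction matrix for simplicity, and the function and variable names are just placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, map_coordinates

def edge_attraction_metric(volume, spacing, points_xyz):
    """Mean gradient magnitude sampled at the mesh points (to be maximized).

    volume     : 3D numpy array from the DICOM series (z, y, x order assumed)
    spacing    : voxel spacing (z, y, x) in mm
    points_xyz : (N, 3) mesh point coordinates, assumed already in the volume frame
    """
    # Edge map: gradient magnitude of a slightly smoothed image
    edges = gaussian_gradient_magnitude(volume.astype(float), sigma=1.0)

    # Convert physical coordinates to continuous voxel indices (origin/direction ignored here)
    idx = points_xyz[:, ::-1] / np.asarray(spacing)   # (x, y, z) -> (z, y, x) index space

    # Linear interpolation of the edge map at the mesh point locations
    sampled = map_coordinates(edges, idx.T, order=1, mode='nearest')
    return sampled.mean()
```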
I would like to implement this in Slicer in order to actively watch the superposition between the model and the DICOM file, but I have never used Slicer with the Python interpreter.
The only methods to do this right now are outside the realm of 3D Slicer, to my knowledge. You need to create both an intensity-based statistical model and a morphometric-based model. You can do this in Deformetrica, ShapeWorks, Scalismo (Scala programming), RvtkStatismo (R programming), and now in MATLAB if you search for Active Appearance or Active Shape Model code.
Thank you for the answer. I’m looking into what you suggested. I did not know ShapeWorks, but it seems very powerful. About the other frameworks, I have worked with some of them, but I felt they were a little bit too “narrow”. I would like to get my hands dirty with some code, but, for example, Deformetrica does not allow much flexibility.
So RvtkStatismo and Scalismo both offer much more flexibility. By that I mean Scalismo allows you to build your own kernels, such as change point kernels and symmetry-based kernels, to do parametric non-rigid registration, and allows you to do Markov chain Monte Carlo shape completion. RvtkStatismo appears not to need an a priori or a posteriori shape based on any Gaussian kernels to do it. Scalismo is the better performer of the two, lets you visualize your models far better than RvtkStatismo (something you were interested in), and has built-in functions for Active Appearance and Active Shape Models.
@scarpma Do you want to use SSM only to have a baseline for comparison with deep learning based methods, or would you like to use SSM for image segmentation? I’m just curious, because what I see is that researchers who invested decades in developing SSM-based image segmentation methods have mostly switched to deep learning. However, it is hard to tell if this is because deep learning is so much better or because nowadays you need to do “AI” to get research funding.
Hello Mauro, thanks for your very rich reply. I built my statistical shape model by non-rigidly aligning the various polygonal meshes that I have and then doing a PCA. I did this in Python, and also tried Scalismo for reference. The code I used is very simple, nothing special really. The most difficult part is the non-rigid registration; for that I implemented a gradient-descent-based approach in PyTorch and PyTorch3D.
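Just to illustrate the PCA step (not my actual code): assuming the meshes are already non-rigidly aligned and in point-to-point correspondence, and using placeholder data, it boils down to something like this:

```python
import numpy as np

# Placeholder training set: M aligned meshes in point-to-point correspondence,
# each flattened to (x1, y1, z1, x2, ...) of length N*3
M, N = 50, 20000
shapes = np.random.rand(M, N * 3)

x_mean = shapes.mean(axis=0)
centered = shapes - x_mean

# SVD of the centered data matrix gives the PCA modes without forming the huge covariance
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

eigenvalues = (s ** 2) / (M - 1)  # variance explained by each mode
P = Vt[:26]                       # keep the first 26 modes as the model basis
```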
Thank you very much for the reference. I will look into it for both the theoretical and technical part. I will keep this post updated in the future.
By the way, so you are suggesting ITK? I have zero experience with it, but I have read about it many times. I imagine it would be easily integrated into 3D Slicer, no?
Thanks for the reply. In fact I’ve seen some active shape models done in a Scalismo tutorial. I should give it a try because it does not require much effort, at least to begin with some simple tests.
My main concerns with Scalismo are:
- I have zero confidence with the Scala language and I don’t know if I want to invest in learning it (maybe that could be useful, I don’t know).
- The Scalismo framework seems useful, but I don’t know how scalable it is or for how long it will be maintained. My doubt is whether 3D Slicer is more flexible, being Python/C++ based and having a bigger community (I’m not completely sure of this, maybe you can confirm…).
@scarpma can you describe your segmentation goals? The article you reference is about vertebrae and this is something of interest to several of us as well, and there is work going on to make a robust segmenter in Slicer so perhaps we can join forces on that.
@pieper Actually I’m working with thoracic aortas. I would like to build a workflow similar to the one in the article for the aorta. The aorta is a bit complicated because I want to segment the main body of the thoracic aorta (ascending, arch, and descending) as well as the first part of the supra-aortic vessels.
Yes. I understand that for your application (aorta segmentation) you need non-rigid alignment as preprocessing.
By the way, so you are suggesting ITK? I have zero experience with it, but I have read about it many times. I imagine it would be easily integrated into 3D Slicer, no?
Yes, ITK is available in Slicer’s Python interpreter if you pip-install it, so it is really easy to use, I think. Although most ITK examples are in C++, so you would need to port them yourself.
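If it helps, installing and trying ITK from Slicer’s Python console looks roughly like this (the file path and the filter choice are just examples):

```python
# One-time install of ITK's Python wrapping, run in Slicer's Python console
import slicer
slicer.util.pip_install("itk")

# After that, ITK can be imported and used directly, e.g. to build an edge map
import itk
image = itk.imread("/path/to/volume.nrrd")          # example path
edges = itk.gradient_magnitude_image_filter(image)  # functional, snake_case ITK API
```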
Also, you can look at this possibility of using some ITK filters (I’ve used them in the past).
I was thinking of building a statistical shape model of vertebrae from the VerSe dataset, just out of curiosity, to see if I could achieve some good segmentation results, although I do this in my free time only. @pieper we could share results; a bounding box detection algorithm would be really useful if we wanted to use the active shape model to do segmentation as well.
I was thinking about an idea. I’m not sure it is correct, but it would be great if it was. If you have an SSM of the vertebrae (or any anatomy), and you mark landmarks on the mean model and on the mean + each eigenvector model, you could get the landmark positions for any registration you do. So this would be useful for automatic landmarking, I think.
If you get your SSM before I do, I could help you test this hypothesis if you are interested, @scarpma.
So, if I understand correctly, your idea is to know (in some way) how some landmarks move in real space while changing the shape in the PCA space.
If I got this right, then it is really easy. Since the mesh always keeps the same connectivity, you can define landmarks on the mean shape and then you will always know where these points are (just by knowing their index in the mesh).
In fact, we could cast this problem into an easier one by letting the user define some fixed landmarks that must match predefined landmarks on the active shape. This optimization would be carried out in a far smaller space and could give a good initialization for the segmentation.
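A minimal sketch of what that small optimization could look like, solving for the PCA weights from a few user-placed landmarks by regularized least squares (all names and the array layout are assumptions, not an existing implementation):

```python
import numpy as np

def fit_parameters_to_landmarks(x_mean, P, landmark_indices, target_points, reg=1e-3):
    """Least-squares estimate of the PCA weights b so that the model landmarks
    (the points at landmark_indices) land near the user-placed target_points.

    x_mean           : (N*3,) flattened mean shape
    P                : (K, N*3) matrix whose rows are the PCA modes
    landmark_indices : list of point indices of the landmarks on the mesh
    target_points    : (L, 3) user-placed landmark positions in image space
    reg              : small Tikhonov term to keep the solution well-behaved
    """
    # Columns of the flattened arrays that correspond to the landmark coordinates
    cols = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in landmark_indices])

    A = P[:, cols].T                          # (3L, K) sub-matrix of the modes
    r = target_points.ravel() - x_mean[cols]  # residual w.r.t. the mean landmark positions

    b = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ r)
    return b
```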
For my case, though, it is not very simple to define landmarks on the aorta. Maybe with vertebrae it is simpler (?)
I don’t know much about PCA on meshes, but it should work too, I think. The idea was about using images (labelmaps) for the PCA. I have seen some examples and references of it being done with ITK, so it’s possible.
This is a schema of my idea:
What I mean is this:
Let’s suppose that you have a landmark on the active model, say the point Y = (Y_x, Y_y, Y_z), which will have an index k_Y (with respect to all the points of the active shape). Since PCA acts as a deformation of the mean mesh, each point is moved through space, but the indices remain the same! So, after the deformation of the mesh, the new position of the landmark Y' will be X'[k_Y] (if you see X as the array of coordinates of the points of the active shape). Here X' = X_mean + P b' (using the notation you used).
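As a small sketch of that index bookkeeping, with the same notation and assuming the shapes are stored flattened with (x, y, z) per point and the modes as rows of P:

```python
import numpy as np

def deformed_landmark(x_mean, P, b, k_Y):
    """Position of the landmark with point index k_Y after the deformation X' = X_mean + P b.

    x_mean : (N*3,) flattened mean shape
    P      : (K, N*3) matrix whose rows are the PCA modes
    b      : (K,) current shape parameters
    """
    x_prime = x_mean + b @ P          # deformed shape, still flattened
    X_prime = x_prime.reshape(-1, 3)  # back to (N, 3) point coordinates
    return X_prime[k_Y]               # same index, new position
```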