I want to reconstruct the whole vertebra volume from the ultrasound reconstruction (Img. 1). For this I'm using the ALPACA module from SlicerMorph, with another vertebra reconstructed from CT as the source (Img. 2), but the TPS warped source model comes out distorted (Img. 3). I've tried cutting the CT reconstruction to keep only the posterior part, and that works, but when I use the US reconstruction it doesn't. What can I do?
Do you see the entire vertebra in ultrasound? Is this in humans?
Many research groups have been working on this problem, including our lab. You can also find relevant PhD dissertations by students of Purang Abolmaesumi. All of these were classic methods: image registration constrained by ultrasound-to-CT conversion, by statistical shape models, or by biomechanics simulation, using free-form deformation, thin-plate splines, etc. Ultimately, all of these attempts failed to reach the quality needed for clinical usability.
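For context, the thin-plate-spline warp that these classic methods (and ALPACA's final step) rely on can be illustrated in a few lines with SciPy. This is a minimal sketch with synthetic landmark data, not anyone's actual pipeline; the point is just that the warp is fully determined by the landmark correspondences, so noisy or one-sided (posterior-only) landmarks distort the whole model:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic matched landmarks: source (CT model) and target (US surface).
source_lm = rng.uniform(0.0, 1.0, (20, 3))
target_lm = source_lm + 0.05 * rng.standard_normal((20, 3))

# A vector-valued TPS interpolant fit on the landmark correspondences.
tps = RBFInterpolator(source_lm, target_lm, kernel="thin_plate_spline")

# Apply the warp to all vertices of the (here, random) source mesh.
surface_pts = rng.uniform(0.0, 1.0, (1000, 3))
warped = tps(surface_pts)
print(warped.shape)  # (1000, 3)
```

Because the interpolant extrapolates smoothly away from the landmarks, regions of the source mesh with no nearby correspondences (e.g. the vertebral body when only the posterior arch is visible in US) can be dragged arbitrarily, which is consistent with the distortion described above.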
Instead of retrying all these methods (PCA, thin-plate splines, etc.), I would recommend trying deep-learning-based approaches. You could probably manually/semi-automatically segment a few hundred vertebrae, using partial 3D ultrasound volumes as input and CT as output.
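To make the suggestion concrete, here is a minimal sketch of the kind of volume-to-volume network such an approach might start from, assuming paired, co-registered US/CT volumes. The tiny architecture, shapes, and random tensors below are placeholders for illustration only, not a validated model:

```python
import torch
import torch.nn as nn

class USToCTNet(nn.Module):
    """Toy 3D encoder-decoder: partial-US volume in, CT-like volume out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # 32^3 -> 16^3
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),  # back to 32^3
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = USToCTNet()
us = torch.randn(2, 1, 32, 32, 32)  # stand-in for partial-US volumes
ct = torch.randn(2, 1, 32, 32, 32)  # stand-in for paired CT volumes

# One supervised training step on the paired volumes.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(net(us), ct)
opt.zero_grad()
loss.backward()
opt.step()
print(net(us).shape)  # torch.Size([2, 1, 32, 32, 32])
```

In practice one would use a deeper U-Net-style architecture, patch-based sampling, and a loss suited to the task, but the input/output framing (segmented partial US in, full CT shape out) is the part that matters here.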
Yes, it is in humans. In the ultrasound I can only visualize the posterior aspect of the vertebra, and that is what I'm using for the SlicerIGT reconstruction.
Could you share with me any previous research conducted on this topic? Or perhaps suggest how I might find such studies?
I plan to explore the deep learning approach and determine how to integrate it into my research. Thank you!
While reviewing some of the spine-related papers you shared with me, I came across one (Towards real-time, tracker-less 3D ultrasound guidance for spine anaesthesia) with results similar to what I'm aiming for, particularly the fitting of the ultrasound surface to a shape model. I noticed you're one of the authors of this paper. Is the code for the registration logic available?