Remove deformation

Hello.
I have a skull with a deformation in the area of the coronal suture. How can I remove it so that the parietal and frontal bones match?


Thank you in advance.

If you create a separate segment for the frontal and export both segments as 3D models, you can use Meshlab or Meshmixer to shift the frontal. Is this a CT or surface scan?

I am not sure how it will turn out, but at least you can try something like this fairly easily.

  1. Create a curve along the coronal suture (with a fairly high number of points; you can use the curve’s resample feature for that). Rename this curve “original”.
  2. Clone that curve and move its points to where you would like the coronal suture to end up. Rename this curve “modified”.
  3. Use the Fiducial Registration Wizard (available in SlicerIGT) to create a warping transformation, using the original curve as the “From” points and the modified curve as the “To” points (a scripted version is sketched after this list).
  4. Apply this transformation to your model (you may want to clone the model before you do this, so that you can see the effect of the warp).
  5. Keep adjusting the points on the modified curve until you achieve the result you are looking for.
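If the wizard UI gives you trouble, here is a minimal Python console sketch of steps 3-4 using VTK’s thin-plate spline transform directly (a common choice for this kind of landmark warp). The node names “original”, “modified”, and “Skull” are placeholders for whatever your scene uses:

```python
import vtk
import slicer

# Placeholder node names - adjust to match your scene.
originalCurve = slicer.util.getNode("original")
modifiedCurve = slicer.util.getNode("modified")
modelNode = slicer.util.getNode("Skull")

def curvePoints(curveNode):
    """Collect a curve's control point positions into a vtkPoints set."""
    points = vtk.vtkPoints()
    for i in range(curveNode.GetNumberOfControlPoints()):
        pos = [0.0, 0.0, 0.0]
        curveNode.GetNthControlPointPositionWorld(i, pos)
        points.InsertNextPoint(pos)
    return points

# Thin-plate spline warp that maps the original curve onto the modified one.
tps = vtk.vtkThinPlateSplineTransform()
tps.SetBasisToR()  # the R basis is the usual choice for 3D warping
tps.SetSourceLandmarks(curvePoints(originalCurve))
tps.SetTargetLandmarks(curvePoints(modifiedCurve))

# Wrap the warp in a transform node and apply it to the skull model.
transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLTransformNode", "SutureWarp")
transformNode.SetAndObserveTransformToParent(tps)
modelNode.SetAndObserveTransformNodeID(transformNode.GetID())
```

After this, editing the control points of either curve and re-running the landmark setup updates the warp; the model can be hardened under the transform once you are happy with the result.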

Hi. It is a surface scan.


Thanks for the advice, but I can’t add the curves in the Fiducial Registration Wizard to create a warping transformation.
How can I put the “original curve” and the “modified curve” into the “From” and “To” slots?
The resample is not working for me either; I can only create the curves with the points tool.

Did you scan the frontal separately and attach it digitally? I ask because I can see displacement of the frontal at the right frontozygomatic suture.

I scanned the frontal, the parietal, and the temporal separately and fused them later.
Any advice or recommendations?

You are right, it doesn’t seem to accept the curve markups type. While you can copy and paste points from a curve to a fiducial markups list, I don’t think this will work well for you. Slicer is not really a mesh editing software and as such has limited capabilities for this kind of fusion. You can put every individual bone under a transform and manually translate/rotate them (a small sketch of setting this up is below), or you can try the osteotomy planner.
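For the manual route, something like this in the Python console puts each bone under its own transform, which you can then adjust interactively in the Transforms module (the model names are placeholders):

```python
import slicer

# Placeholder model names - adjust to match your scene.
for name in ["Frontal", "Parietal", "Temporal"]:
    modelNode = slicer.util.getNode(name)
    transformNode = slicer.mrmlScene.AddNewNodeByClass(
        "vtkMRMLLinearTransformNode", name + "Transform")
    # Each bone gets its own transform, adjustable in the Transforms module.
    modelNode.SetAndObserveTransformNodeID(transformNode.GetID())
```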

Fiducial registration wizard (in SlicerIGT) should take care of this. 6-8 anatomical landmarks should align things well if there is only rigid misalignment (translation/rotation).
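If anyone prefers to script this, here is a minimal sketch of the equivalent rigid landmark registration using plain VTK, assuming two markups point lists named “From” and “To” that contain corresponding landmarks in the same order (both names are placeholders):

```python
import vtk
import slicer

# Placeholder list names; both lists must contain matching landmarks
# in the same order.
fromNode = slicer.util.getNode("From")
toNode = slicer.util.getNode("To")

fromPoints = vtk.vtkPoints()
toPoints = vtk.vtkPoints()
for i in range(fromNode.GetNumberOfControlPoints()):
    pos = [0.0, 0.0, 0.0]
    fromNode.GetNthControlPointPositionWorld(i, pos)
    fromPoints.InsertNextPoint(pos)
    toNode.GetNthControlPointPositionWorld(i, pos)
    toPoints.InsertNextPoint(pos)

# Rigid (rotation + translation only) landmark registration.
landmarkTransform = vtk.vtkLandmarkTransform()
landmarkTransform.SetModeToRigidBody()
landmarkTransform.SetSourceLandmarks(fromPoints)
landmarkTransform.SetTargetLandmarks(toPoints)
landmarkTransform.Update()

# Store the result in a transform node so it can be applied to models.
transformNode = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLLinearTransformNode", "RigidAlignment")
transformNode.SetMatrixTransformToParent(landmarkTransform.GetMatrix())
```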

If you need a warping transform, then complete the rigid registration and harden the transform. Then place 10-20 markups fiducial points in the region that you want to warp and about 10-20 additional landmarks distributed evenly in areas that you don’t want to warp. Use these as “from” points in the Fiducial Registration Wizard module. Clone this markups fiducial list and choose the clone as “to” points. Hide the “from” points (to prevent them from moving) and adjust the “to” points to warp the mesh.
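A small sketch of the hardening and cloning steps in the Python console (all node names are placeholders; here the “from” list is locked instead of hidden, which serves the same purpose of keeping its points from moving):

```python
import vtk
import slicer

# Placeholder node names - adjust to match your scene.
modelNode = slicer.util.getNode("Skull")

# Harden the rigid registration that is currently applied to the model.
slicer.vtkSlicerTransformLogic().hardenTransform(modelNode)

# Clone the "from" list so the copy can be adjusted as the "to" points.
fromNode = slicer.util.getNode("WarpFrom")
toNode = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLMarkupsFiducialNode", "WarpTo")
for i in range(fromNode.GetNumberOfControlPoints()):
    pos = [0.0, 0.0, 0.0]
    fromNode.GetNthControlPointPositionWorld(i, pos)
    toNode.AddControlPoint(vtk.vtkVector3d(pos[0], pos[1], pos[2]))

# Lock the "from" list so its points cannot be dragged by accident.
fromNode.SetLocked(True)
```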

If you have open surfaces then you can merge them using “Merge models” module. If you have closed surfaces then you can merge them using “Combine models” module (provided by Sandbox extension).
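For open surfaces, the merge can also be done with a few lines of Python using vtkAppendPolyData, similar in spirit to what “Merge models” does (the model names are placeholders):

```python
import vtk
import slicer

# Placeholder model names - adjust to match your scene.
append = vtk.vtkAppendPolyData()
for name in ["Frontal", "Parietal", "Temporal"]:
    append.AddInputData(slicer.util.getNode(name).GetPolyData())
append.Update()

# Put the combined mesh into a new model node.
mergedNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLModelNode", "MergedSkull")
mergedNode.SetAndObservePolyData(append.GetOutput())
mergedNode.CreateDefaultDisplayNodes()
```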

Marta, I sent you a direct message about how to make the adjustments to the separate pieces in Meshlab and Meshmixer. If I have enough time when scanning skulls for facial approximation, I scan the fragments separately and also tape them together (if they fit) to get at least one 360 scan to help with digitally fitting the fragments together.

It is too bad you don’t find the workflow in Slicer convenient enough. Could we do something to improve it?

I’ve just always used Meshlab for working with surface scans. I can use the alignment tool for fitting fragments or just grab the pieces and move them where I need them. I haven’t tried to do the same things in Slicer yet.

For many years, we did not really add surface manipulation capabilities to 3D Slicer. However, some of our funded projects (in neuroimaging, cardiac modeling, etc.) increasingly work in the surface domain. Moreover, there have been huge improvements in Slicer and VTK while almost no progress in some of the commonly used mesh editing tools (MeshMixer and MeshLab). Therefore it makes sense to expose more surface manipulation features in Slicer.

If you let us know what surface manipulation features you need (especially those that are often used along with imaging) then we can take that information into account when deciding what features to prioritize.


We don’t do surface scanning, but a lot of my colleagues working with large-ish things do. I think the available tools for manipulating individual objects (such as transforms and fiducial registration) are OK for manual alignment. What I think would entice the surface scanning crowd in biology to start using Slicer more is a way to register individually scanned mesh segments more automatically and then fuse them.

That, and being able to import/display textures more easily are the common requests we hear about.


Displaying RGB color when loading PLY textures should have improved usability already. Loading textures from separate image files would be quite easy to implement, too.
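For reference, attaching a texture image to a model can already be scripted, similar in spirit to what the SlicerIGT “Texture model” module does (the file paths below are placeholders, and the model must already contain texture coordinates):

```python
import slicer

# Placeholder file paths - adjust to your data.
modelNode = slicer.util.loadModel("C:/data/skull.obj")
textureNode = slicer.util.loadVolume("C:/data/skull_texture.png",
                                     {"singleFile": True})

# Attach the image to the model's display node as a texture.
modelNode.GetDisplayNode().SetTextureImageDataConnection(
    textureNode.GetImageDataConnection())
```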

There have been modules developed for automatic bone segment registration, but the developers have moved on. You would need funding or research groups that are interested in making their automatic alignment tools available for Slicer.

For manual alignment, virtual reality works amazingly well. It is very similar to holding the physical pieces and putting them together with your hand, but it is actually easier, because the pieces stay in mid-air where you left them, so you don’t need glue. You can also rotate and zoom the world around very easily, so you are not limited by your physical eyesight or steadiness of your hand. If anyone needs to frequently put together models from multiple pieces, investing $1000 into a virtual reality setup is absolutely worth it (you also need to get a desktop computer or a gaming laptop, because most laptops do not support virtual reality). You can show the current 3D view in virtual reality in Slicer by a single button click.


I agree. I haven’t had a chance to test this yet, but I’m looking forward to it.

Do you have a demo/tutorial of it somewhere? I tried the Oculus VR with Slicer, but wasn’t able to do much beyond slicing through MRHead via head gestures and using the controllers to manipulate things in space. It was my first time with VR, so perhaps that’s part of it.

This is the closest demo video:

Lots of Slicer features are available; it is just not obvious to users, because there is no module to conveniently set up the scene for various use cases. For now, you need to set things up manually based on the instructions here. I think @cpinter got a grant recently that will address this shortcoming.

I use Meshlab to make final meshes from my surface scans–deleting unwanted parts via surface painting or drawing boxes, aligning scans or parts, flipping normals, Poisson surface reconstruction for a watertight mesh, decimation, smoothing, adding color/shading. I can quickly toggle vertex colors or textures on/off and transfer them between meshes. I also apply Ambient Occlusion shading for pathological CT/microCT specimens to highlight the surface details and use the same function to remove internal surfaces if needed.

Even compared to Meshmixer, the manual alignment in Meshlab is easier for me–I can select a single axis to translate/rotate or just click on the model and move it (I usually need this to align separate pieces or 2 surfaces of a flat object). The ICP alignment using 4 or more points is quick and sometimes works for articulating matching edges on fragments or sutures.

I have my workflows down for Meshlab, so it’s just a matter of sitting down with Slicer to see if I can do the same things. For the microCT models, I have started using the Surface Toolbox more than Meshlab to decimate. And I really use a combination of Slicer, Meshlab, and Meshmixer for surface scans and CT/microCT models, depending on what I need!
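(For the decimation step in particular, a scripted pass with a VTK filter in Slicer’s Python console looks like this; the node name and reduction factor are placeholders:)

```python
import vtk
import slicer

# Placeholder node name and reduction factor - adjust as needed.
modelNode = slicer.util.getNode("Skull")

decimate = vtk.vtkQuadricDecimation()
decimate.SetInputData(modelNode.GetPolyData())
decimate.SetTargetReduction(0.9)  # keep roughly 10% of the triangles
decimate.Update()

# Replace the model's mesh with the decimated version.
modelNode.SetAndObservePolyData(decimate.GetOutput())
```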

I have recently started working with post-autopsy CT scans, so the areas of interest are not in the correct places. I know I can use Split Volumes in the Segment Editor to create separate volumes for each segment, but can I then move these to the correct anatomical positions and then create a new CT volume? I can move the exported models in Meshlab, but then the CT volume doesn’t match. I don’t know at this point whether I need the CT volume to match the 3D models, but for these cases it would be nice to have the full workflow in Slicer.