Hi, I am an archaeologist experimenting with photogrammetry and 3D modelling. I came across your software while trying to find ways to interrogate my models for dimensional information. I was wondering if there is a way to change the units of measurement within your application to match my models. They were captured in meters; is there a way to change or amend the scale within the app?
Also, is there a way to cross-section a 3D model, essentially doing the reverse of the segmentation approach of your software?
If you know for a fact that the units of your 3D models are in meters, then you can set the default units from millimeters to meters in Slicer (Edit->Application Settings->Units).
Unfortunately, not all modules of Slicer support these custom units, so your mileage may vary.
Another approach that is guaranteed to work is to create a meter-to-millimeter transform (go to the Transforms module, create a new transform called m2mm, and set the first three diagonal values to 1000). You can then assign your 3D model to this transform, and the distances you measure should be reported in correct mm values.
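If you are comfortable with the Python console, here is a minimal sketch of that transform approach; "FlintTool" is just a placeholder for whatever your loaded model node is called:

```python
import numpy as np
import slicer

# Create a linear transform whose diagonal scales meters to millimeters (x1000).
m2mm = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "m2mm")
scaleMatrix = np.diag([1000.0, 1000.0, 1000.0, 1.0])
m2mm.SetMatrixTransformToParent(slicer.util.vtkMatrixFromArray(scaleMatrix))

# Put the model under the transform ("FlintTool" is a placeholder node name).
modelNode = slicer.util.getNode("FlintTool")
modelNode.SetAndObserveTransformNodeID(m2mm.GetID())
```

If you want the scaling permanently applied to the model, you can also harden the transform afterwards (right-click the model in the Data module and choose Harden transform).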
Can you please elaborate? I’m trying to imagine the reverse of segmentation, but it escapes me. The only thing that comes to mind for “cross section” is the 2D (MPR) views, but you can already see those.
This is an interesting use case. If it’s not an inconvenience I’d love to hear a bit about the project.
Hi, of course, I can elaborate. I’m not sure I’m getting it right, but I assume the software builds a 3D model from medical scan data of some sort. I am essentially starting from the end. I have built 3D scans of archaeological artefacts, namely flint tools, and would like to be able to view them in cross-section if that makes sense. I am mainly trying to gain as much dimensional info about the artefacts as possible to help determine if physical dimensions are a factor in the tools selected by early humans.
I could be getting this all wrong, but if I am, please let me know, and thanks for getting back to me.
To see the cross-sections of a 3D model in the slice views, all you have to do is enable the model’s visibility in the slice views (its slice/2D display visibility, e.g. in the Models module).
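If you prefer the Python console, a minimal sketch of the same thing, assuming the model is already loaded and "FlintTool" is a placeholder for its node name:

```python
import slicer

# Get the loaded model node ("FlintTool" is a placeholder name).
modelNode = slicer.util.getNode("FlintTool")

# Show the model's intersection contour in the 2D slice views.
displayNode = modelNode.GetDisplayNode()
displayNode.SetVisibility2D(True)             # same as the slice/2D visibility checkbox
displayNode.SetSliceIntersectionThickness(2)  # thicker contour, easier to spot
```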
What modality are these 3D models generated from? If it is something like photogrammetry or another surface-scanning technique, there won’t be any internal structure to see, simply a shell.
If they were CT scanned somewhere, then there would be some internal detail, but most of that information is lost when the volumetric segmentation is converted to a closed-surface representation. You can expect to see something like this:
Hi,
That’s great to know. My models were built using photogrammetry and are of stone artefacts, so there’s no internal structure per se, but the cross-sections may give me a more visual representation of the sharp cutting edges vs. the blunt or hafted sides, etc. I had loaded models in already but couldn’t see a cross-section in those panels; is there something I need to enable?
So you basically have STLs (or OBJs or similar)? It is complicated to get cross-sections of models without images, as Slicer’s assumption is that every workflow starts with an image. There have been other topics here on the forum about this, but I think there was no good solution.
A simple hack that I suggest is the following (a scripted sketch of the first couple of steps is shown after the list):
1. Load CTChest from the Sample Data module
2. Volume render it
3. Enable interaction on the CTChest node (right-click the eye and choose Interaction)
4. Move it so that it fully overlaps your model
5. Window/level it so that it’s all black (to remove the confusing medical data)

Now you can slice away.
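If it’s easier, here is a minimal scripted sketch of the first two steps (loading the sample volume and turning on volume rendering); the interaction, repositioning, and window/level steps are probably easiest to do in the GUI:

```python
import SampleData
import slicer

# Step 1: load the CTChest sample volume (downloaded on first use).
ctVolume = SampleData.downloadSample("CTChest")

# Step 2: enable volume rendering for it with default settings.
vrLogic = slicer.modules.volumerendering.logic()
vrDisplayNode = vrLogic.CreateDefaultVolumeRenderingNodes(ctVolume)
vrDisplayNode.SetVisibility(True)
```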
Let us know if this is good enough, or if you need higher resolution or encounter other problems.
Sharing one would be useful. I suggest dropping a Google Drive link, or a link to whatever cloud storage you use.
Another option is to load a copy of the STL as a segmentation object, and then convert the segmentation to a labelmap. That will give you a volume and slice views that you can interact with.
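For reference, a minimal scripted sketch of those two steps; the file path and node name are placeholders:

```python
import slicer

# Load the surface file directly as a segmentation node (path is a placeholder).
segmentationNode = slicer.util.loadSegmentation("/path/to/flint_tool.stl")

# Export the segmentation into a binary labelmap volume node.
labelmapNode = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLLabelMapVolumeNode", "FlintToolLabelmap")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapNode)
```

If the resulting slices look too coarse, the segmentation’s internal binary labelmap resolution can be increased before exporting.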
We have a tool in SlicerMorph to combine those two steps. I haven’t looked at it for a while, but it is called ImportSurfaceToSegment.
Thanks for that. I’ve had a play around with what you suggested, but I think I’m missing a step or something and can’t quite get it to work, as I’m not familiar with the software. I have added a couple of models below so that you can have a look. I’ll keep trying, though.
In ‘Application settings / Units’, if you enable ‘Advanced options’, select the ‘Meter’ preset for ‘Length’, and set a coefficient of 1000, measurements appear in ‘m’ as below.
A third option is what @chir.set suggested: modifying the coefficient field of the Units settings to 1000 while keeping the unit as mm (this actually works better than my suggestion of changing the unit to meters).
Each of these has its own drawbacks. With the first one, you may still have to mentally convert the reported values by multiplying them by 1000 in modules that don’t respect the custom units. With the second one, you have to remember to put the object under the transform every time you want to take a measurement. With the third one, every dataset that you load will have its coordinates multiplied by 1000 (not just your data).
So which one is the optimal solution for you depends on what you want to do.
Thanks very much for your detailed response. I shall try them out and see what works best. You’ve all been very helpful, and it’s great to have a community like this.