Hi, I am new to this app. I tried the Watershed effect for tooth segmentation. I am going to do a research project measuring the volume of the tooth, but my area of interest is the different structures of the tooth, specifically measuring the volume of the enamel and the root. Do you have any suggestions on how to segment them? I found that the enamel part (the red one) is very small and its border is hard to detect. I have attached a screenshot of my area of interest. Thank you for any help you can give.
To get the root volume, I would recommend segmenting the entire tooth as described in these topics:
You should then be able to split the segment between the root and the enamel using thresholding. If the image quality is not good enough, you may need to apply smoothing and/or manual touch-ups. You may also try the Grow from seeds and Watershed effects with seeds placed in the enamel and outside it.
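The threshold-based split described above can be sketched in plain Python. This is only a toy illustration of the idea (enamel is the densest, brightest tissue, so a high intensity cutoff separates it from the rest of the tooth); the threshold value 2500 is a made-up example, not calibrated to any scanner, and real segmentation would of course operate on the 3-D voxel array inside Slicer.

```python
# Toy sketch: split an already-segmented tooth into "enamel" and "root"
# by intensity. The cutoff (2500) is an illustrative, made-up value.

def split_tooth(voxels, enamel_threshold=2500):
    """Classify voxel intensities inside the tooth segment:
    values at or above the threshold become enamel, the rest root."""
    enamel = [v for v in voxels if v >= enamel_threshold]
    root = [v for v in voxels if v < enamel_threshold]
    return enamel, root

# Toy intensities for voxels inside an already-segmented tooth
tooth_voxels = [1800, 1900, 2600, 2700, 2100, 3000]
enamel, root = split_tooth(tooth_voxels)
print(len(enamel), len(root))  # → 3 3
```

In practice the Threshold effect in the Segment Editor does this interactively, and smoothing beforehand makes the cutoff much more reliable.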
@manjula have you tried to segment the enamel? Do you have any specific suggestions?
I have very limited experience isolating teeth. I have not tried to segment the enamel specifically, but I felt it was much easier than the roots, even with just thresholding.
The project I sought your help with before was measuring preoperative and postoperative root resorption using existing CBCT data.
We did a few cases, but we were not satisfied with the results we were getting and we abandoned the project. I think it was primarily due to the quality of the CBCT: the amount of resorption we were measuring was very small, and reliably segmenting the root alone was not very predictable.
From my limited experience, I feel the hardest part is separating the root dentine/cementum from the PDL. If we are segmenting above the alveolar bone, it should be much easier to segment out just the enamel, since it is surrounded by air.
Sorry I was not able to be of much help on this.
Thank you @lassoan and @manjula for your help, I really appreciate it. I’ll try that. I think I need to adjust the brightness of the image so I can clearly see the border between enamel and dentine.
Hi! I’ve been segmenting teeth with thresholding and Grow from seeds. I have not been able to distinguish cementum from dentine in the images at all (and it’s microCT), but I managed to segment enamel, dentine, and pulp quite easily with the Segment Editor.
It would be great if you could post a few screenshots to show what you can achieve using a microCT, and if you have any questions then let us know.
Yes, it would be great if you could share your workflow.
I have some to do next week; I will share some results then.
Hello, I just want to share the results of my efforts in measuring the volumes of the different structures of the tooth.
Those are looking nice.
If you want smoother results, you might want to increase the resolution of the segmentation relative to the CT. Search the forum for terms like “upsample” or “supersample” together with “segmentation” for examples. Depending on available memory, you might also need to use Crop Volume around the tooth.
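To illustrate why supersampling before segmenting helps, here is a 1-D toy: each sample of an "image" row is repeated on a finer grid, so segment boundaries can land between the original coarse voxels. This is only a nearest-neighbour sketch; Crop Volume actually resamples with interpolation, which is what produces the smoother result.

```python
# Toy 1-D nearest-neighbour upsampling: a rough stand-in for resampling
# the volume to a finer grid before segmentation. Real Crop Volume
# resampling interpolates; this just shows the grid-refinement idea.

def upsample_nearest(row, factor):
    """Repeat each sample 'factor' times (nearest-neighbour upsampling)."""
    return [v for v in row for _ in range(factor)]

row = [10, 200, 50]
print(upsample_nearest(row, 2))  # → [10, 10, 200, 200, 50, 50]
```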
I would like to ask how you reached this result. I have a microCT, and with thresholding and Paint I end up with the teeth partially embedded in the bone.
Would you please share your recipe?
Hey, after some hard training in segmenting teeth in microCT I have achieved some nice results.
This is how I do it:
- I use thresholding for dentine
- I use thresholding for enamel
- I use Islands to remove voxels immersed in tissue that does not belong to each segment
- I use Smoothing > Closing to fill the holes left by the erased little islands
- I use thresholding for the pulp (which also selects the background… but…)
- I start manually erasing all the connections between the background and the canal system (that takes time, depending on how complicated the anatomy is)
I also use some manual tools to make it look better and spend some time correcting little details.
I’ll see if I can post a picture here, and I invite you to share yours.
I’m also open to receiving ideas on how to improve my work.
Thanks, and good luck!
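The Islands step in the recipe above (removing stray specks so only the real structure remains) can be sketched as a connected-components pass. This pure-Python 2-D toy with 4-connectivity is only a stand-in for the Segment Editor's Islands effect, which works in 3-D on the real segmentation.

```python
# Rough sketch of the "Islands" step: keep only the largest connected
# component of a binary mask, discarding small specks (4-connectivity,
# 2-D toy grid; Slicer's Islands effect does the 3-D equivalent).
from collections import deque

def largest_island(mask):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    best = set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one connected component
                comp, queue = set(), deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    return [[(r, c) in best for c in range(cols)] for r in range(rows)]

mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],   # the lone 1 on the right is a speck to remove
    [0, 0, 0, 0],
]
cleaned = largest_island([[bool(v) for v in row] for row in mask])
print(sum(v for row in cleaned for v in row))  # → 4
```

The Smoothing > Closing step then fills the small holes this removal leaves behind.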
This looks really good, thanks for sharing the steps. Would it be possible to share sample data and the actual settings (e.g., threshold values) that go with it? How long does the segmentation take you?
Here I can share the acquisition and reconstruction details. The equipment was a Bruker Skyscan 1272:
Filename Index Length=8
Number Of Files= 315
Number Of Rows= 672
Number Of Columns= 1008
Image crop origin X=0
Image crop origin Y=0
Optical Axis (line)= 300
Camera to Source (mm)=273.83609
Object to Source (mm)=200.89000
Source Voltage (kV)= 80
Source Current (uA)= 125
Image Pixel Size (um)=26.413040
Scaled Image Pixel Size (um)=26.413040
Rotation Step (deg)=0.600
Use 360 Rotation=NO
Scanning position=19.949 mm
Frame Averaging=ON (1)
Random Movement=ON (30)
Flat Field Correction=ON
Type of Detector Motion=STEP AND SHOOT
Number Of Horizontal Offset Positions=1
Number of connected scans=2
Current scan number=2
Number of lines to be reconstructed=503
Study Date and Time=23 Sep 2020 14h:52m:29s
Maximum vertical TS=5.0
Program Version=Version: 188.8.131.52
Program Home Directory=C:\SkyScan1272\SkyScan1272
Engine version=Version: 1.7.3
Reconstruction from batch=No
Connected Reconstruction (parts)=2
Sub-scan post alignment =1.000000
Sub-scan post alignment =-0.500000
Sub-scan scan length =504
Sub-scan scan length =503
Used extra rotation per scan(deg)= 0.000 0.000
Used extra shift in X per scan(micron)= 0.000 13.629
Used extra shift in Y per scan(micron)= 0.000 10.525
Reconstruction servers= DESKTOP-A4MIU3L
Option for additional DICOM format=ON
Time and Date=
Reconstruction duration per slice (seconds)=0.162376
Total reconstruction time (505 slices) in seconds=82.000000
Section to Section Step=1
Result File Type=TIF
Result File Header Length (bytes)=12
Result Image Width (pixels)=448
Result Image Height (pixels)=416
Pixel Size (um)=26.41304
Reconstruction Angular Range (deg)=189.00
Angular Step (deg)=0.6000
Smoothing kernel=2 (Gaussian)
Ring Artifact Correction=2
Object Bigger than FOV=OFF
Reconstruction from ROI=ON
ROI Top (pixels)=700
ROI Bottom (pixels)=284
ROI Left (pixels)=208
ROI Right (pixels)=658
ROI reference length=1008
Filter cutoff relative to Nyquist frequency=100
Filter type description=Hamming (Alpha=0.54)
Threshold for defect pixel mask (%)=0
Beam Hardening Correction (%)=30
CS Static Rotation (deg)=0.00
CS Static Rotation Total(deg)=0.00
Minimum for CS to Image Conversion=0.000000
Maximum for CS to Image Conversion=0.080086
HU Calibration scale=65535
Cone-beam Angle Horiz.(deg)=7.582436
Cone-beam Angle Vert.(deg)=5.059059
Automatic matching in Z=50
Automatic matching in X/Y=50
Automatic matching in rotation=5.000000
I don’t think I can share the uCT, as it’s not my property; I just borrowed it to learn.
Although it looks good, I’m looking forward to improving it, as I’m not yet satisfied with the results.
I tried with another uCT from other teeth, but the threshold values are not always the same.
In this case I got these grey values.
I still don’t know what it means that the thresholding values are below 0, or whether it’s necessary to change that. The grey values start from -1000.
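For what it's worth, negative grey values usually just come from a Hounsfield-style linear rescale applied when the data is written out, typically HU = raw × RescaleSlope + RescaleIntercept. The slope and intercept below are common illustrative values, not ones read from this particular dataset:

```python
# Toy illustration of a Hounsfield-style linear rescale: with a typical
# intercept of -1000, a raw value of 0 (air) is displayed as -1000.
# Slope/intercept are illustrative defaults, not from this dataset.

def to_hu(raw, slope=1.0, intercept=-1000.0):
    return raw * slope + intercept

print(to_hu(0))     # → -1000.0  (air)
print(to_hu(1000))  # → 0.0      (water)
```

So the negative values are harmless; thresholds just need to be chosen on the rescaled scale.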
I know nothing about programming, but my aim is to create an all-in-one automatic segmentation workflow for teeth, so I know I still have a lot of work to do.
I tried using the Bruker software, but it’s licensed, so I’m trying to figure out how to do it with 3D Slicer so I can share it with my university community.
The images obtained from Bruker’s CTAn are amazing; it uses automatic thresholding to segment. Still, I find its 3D model viewer quite slow and poor. It also doesn’t let you save the scene, so you have to start and finish in the same session, and multirooted teeth can only be done by generating a model for each root. The pro is that you can save a list of tasks and run it automatically on a whole batch of CTs, so you can leave it running and go get some coffee.
Slicer is much better in that respect because you can save unfinished work, and apparently it’s full of useful tools I still don’t know how to use, hahah.
Still, I don’t understand how with simple thresholding in CTAn they can get the amazing images seen on Dr. Versiani’s blogspot. Maybe it has to do with the acquisition or reconstruction parameters.
That’s what I want to achieve with Slicer.
Hope you find this useful.
Keep in touch and share!
The first five steps of the segmentation might take me 15 minutes, but once I start manually correcting I can be there for hours.
The image looks very noisy. You can probably make the segmentation much easier by applying some filtering (e.g., the Curvature or Gradient Anisotropic Diffusion modules, or various filters in the Simple Filters module) before starting the segmentation.
If your only goal is visualization, you can get very high-quality pictures with a minimal amount of work (without any segmentation) using the Volume Rendering module. However, that does not help if you need to do quantitative analysis (measure volume, etc.).
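To show why pre-segmentation denoising helps, here is a deliberately crude 1-D moving-average filter: the noise around each plateau shrinks while the large edge survives, so a threshold separates the two regions more cleanly. The modules named above (Curvature Anisotropic Diffusion, etc.) are edge-preserving and far more sophisticated; this toy only conveys the principle.

```python
# Toy 1-D moving-average denoising: noise within each region is damped,
# the big edge between regions remains, so thresholding gets easier.
# Real Slicer filters are edge-preserving; this box filter is not.

def moving_average(signal, window=3):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

noisy = [0, 5, 0, 4, 100, 96, 101, 97]   # background ~0, tooth ~100
print([round(v, 1) for v in moving_average(noisy)])
```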
Based on the log file, you chose not to do any smoothing during reconstruction. As @lassoan commented, that’s why you have such a noisy, pixelated-looking dataset. I find a minimal amount of smoothing in NRecon to be beneficial for the quality of the data. You can also do that in Slicer, of course.
It looks like you reconstructed to DICOM and chose to use HU (Hounsfield units, Hounsfield Unit - StatPearls - NCBI Bookshelf). In that case -1000 represents the density of air, 0 the density of water, and so forth. It shouldn’t have an effect on threshold calculations. It has been a while, but as I recall the automatic thresholding tool in CTAn uses the Otsu method. That algorithm is also available in Slicer’s Segment Editor.
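Otsu's method, mentioned above as what CTAn reportedly uses, picks the threshold that maximizes the between-class variance of the intensity histogram. This is a small self-contained sketch of the idea on a toy sample list; production implementations (like the one behind Slicer's Threshold effect) work on binned histograms for speed.

```python
# Hedged sketch of Otsu's automatic thresholding: choose the cutoff t
# that maximizes between-class variance w0*w1*(m0 - m1)^2, where w/m are
# the weight and mean of the two classes split at t. Toy, unbinned version.

def otsu_threshold(values):
    """Return the threshold t that best separates 'values' into two
    classes (foreground: v > t), maximizing between-class variance."""
    candidates = sorted(set(values))
    best_t, best_var = candidates[0], -1.0
    for t in candidates[:-1]:
        lo = [v for v in values if v <= t]
        hi = [v for v in values if v > t]
        w0, w1 = len(lo) / len(values), len(hi) / len(values)
        m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Bimodal toy sample: background near 10, tooth near 100
samples = [8, 10, 12, 11, 9, 98, 100, 102, 99, 101]
print(otsu_threshold(samples))  # → 12
```

The appeal for batch work is that no manual threshold tuning is needed per scan, which likely explains how CTAn's automated pipelines stay hands-off.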
Hello Ugi, I would like to tell you about a project we could work on together. What do you think of the idea?