Semi-automatic segmentation tools for PET images (head and neck cancer)

Operating system: Windows 10 Enterprise
Slicer version: 4.13.0-2021-12-02
Hi

I would like to know which semi-automatic segmentation tools available in 3D Slicer could be used for segmenting tumours in head and neck cancer patients on PET images.
I came across watershed, grow from seeds, and thresholding (including automatic methods). Are there any other methods that I am not aware of, including extensions?
I have seen that grow from seeds is equivalent to the GrowCut algorithm.
I had also gone through this link: https://slicer.readthedocs.io/en/latest/user_guide/modules/segmenteditor.html
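For context, one of the simplest thresholding approaches in PET is a fixed fraction of SUVmax (often 42%). A minimal numpy sketch of the idea (my own illustration, not a Slicer API; in practice the maximum would be taken within a region of interest around the lesion):

```python
import numpy as np

def suvmax_fraction_mask(pet, fraction=0.42):
    """Binary mask of voxels at or above `fraction` of the image's SUVmax.

    `pet` is a numpy array of SUV values. In real use you would restrict
    the SUVmax search to a region of interest around the lesion rather
    than using the global maximum.
    """
    threshold = fraction * float(pet.max())
    return pet >= threshold

# Toy 1D "image": SUVmax is 10, so the threshold is 4.2
pet = np.array([0.0, 3.0, 5.0, 10.0])
mask = suvmax_fraction_mask(pet)
```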

Would appreciate any suggestions.

Thanks in advance.

There is a “PETTumorSegmentation” extension developed by the University of Iowa, as part of their PET Analysis tools: https://qin.iibi.uiowa.edu/

https://www.slicer.org/wiki/Documentation/Nightly/Extensions/PETTumorSegmentation

It’s a semi-automated tool that is available both as a (legacy) Editor effect and as a Segment Editor effect.
The video on the documentation website uses the legacy Editor, which was available in 3D Slicer until the most recent stable release.
It was replaced by the Segment Editor, which has a slightly different user interface.

You can use the PET Tumor Segment Editor effect either in the most recent stable release of Slicer or in the most recent nightly version. In the Slicer version 4.13.0-2021-12-02 that you mentioned, it was broken.

Best,
Christian

Thanks for the reply. The tool looks promising, and the video provided in the documentation is very easy to follow.

Would it be possible to calculate other textural features from the segmented region using the radiomics extension in 3D Slicer?
What is the algorithm called? Is it a graph-cut algorithm or JCI?

Would like to know.

Thanks

It would be very helpful if you could provide links to any tutorials (preferably videos) on how to implement Otsu thresholding and watershed segmentation on PET images.
I came across these links:
Watershed: https://discourse.slicer.org/t/watershed-fast-marching-and-flood-filling-effects-in-segment-editor/104
Otsu: https://www.slicer.org/wiki/Modules:OtsuThresholdSegmentation-Documentation-3.6

Thanks

What is the algorithm called? Is it a graph-cut algorithm or JCI?

You can find details of the algorithm and an evaluation in Beichel et al. (2016): “Semiautomated Segmentation of Head and Neck Cancers in 18F-FDG PET Scans: A Just-Enough-Interaction Approach”, in Medical Physics. http://dx.doi.org/10.1118/1.4948679
JEI stands for Just-Enough-Interaction, i.e., as few clicks as needed to get the job done.

Would it be possible to calculate other textural features from the segmented region using the radiomics extension in 3D Slicer?

Certainly. The PET Tumor Segment Editor effect works on segmentations like any other Segment Editor effect.

Thanks for the answers.

I had a quick look at the paper shown in the video before posting my previous questions, but I am still not sure about the name of the algorithm. Can it be called the JCI algorithm (I could see it as the JCI principle in the paper), or a modified graph-cut algorithm?

Thank you

In the paper we use the acronym JEI (Just-Enough-Interaction), not JCI, to refer to the overall principle of the segmentation approach.

The algorithm uses an optimal surface segmentation (OSS) approach, which is a variant of the LOGISMOS (layered optimal graph image segmentation of multiple objects and surfaces) segmentation framework:

Y. Yin, X. Zhang, R. Williams, X. Wu, D. D. Anderson, and M. Sonka, “LOGISMOS–layered optimal graph image segmentation of multiple objects and surfaces: Cartilage segmentation in the knee joint,” IEEE Trans. Med. Imaging 29, 2023–2037 (2010). doi:10.1109/TMI.2010.2058861

It’s not a modified graph cut. You may call it a LOGISMOS-based approach, if you really have to label it.

Thank you very much for your response and for the link to the paper. Now it is clear, which approach is used.

Yes, it is JEI and not JCI. Sorry about that.

Hi @chribaue
I have a PET image as below (image 1: manually segmented tumour and 3D view).


But on this image, when I use the JEI tool in 3D Slicer, only one of those regions can be selected (image 2). How can I select both regions? Do I need to move through different slices? Do I need to use the “Create new” option?

When I click on the other yellow region it becomes like this.

When I move through different slices, I can’t see the JEI-segmented regions on those slices; I can see only the manual one (yellow region), as shown below. Am I doing it correctly?

Below is another image. When I clicked (with the JEI tool) on the manually segmented tumour (yellow), only a small portion of the tumour was selected (red), which does not include the entire tumour. Do I need to click multiple times? Or do I need to change the interaction style and choose from the various options and advanced options?

Hope you would be able to help.

Thanks

The PETTumorSegmentation tool is semi-automatic. It needs the user to decide which high-uptake parts should be included, and it offers the user options to achieve the desired result with only a few clicks.

This means that the user has to be familiar with the tool’s options and it takes some training/experience to know how to best approach more complex cases.

Start with the documentation: Documentation/Nightly/Modules/PETTumorSegmentEditorEffect - Slicer Wiki
and read the paper “Semiautomated Segmentation of Head and Neck Cancers in 18F-FDG PET Scans: A Just-Enough-Interaction Approach” (Medical Physics, 2016)
again. Then you should be able to understand the different options and see that the tool is designed to segment blob-like structures.

To segment multiple disconnected blobs, or an elongated lesion that seems to consist of multiple parts, as one “Slicer Segment”, it’s best to segment those parts individually using “Create new”.

For the first example you showed above, you want to segment two different uptake areas and combine them into one mask. After segmenting the first lesion, you have to switch to “Create new” and segment the second one. From the image, it seems you used “Global refinement”, which is certainly not what you wanted.
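Conceptually, merging two separately segmented parts into one mask amounts to a voxel-wise OR of their binary masks. A numpy sketch of that idea (my own illustration, not how the Segment Editor stores segments internally):

```python
import numpy as np

# Two parts of a lesion segmented in separate "Create new" runs,
# represented here as toy binary masks on the same voxel grid
part_a = np.zeros((4, 4), dtype=bool)
part_a[0:2, 0:2] = True
part_b = np.zeros((4, 4), dtype=bool)
part_b[2:4, 2:4] = True

# One combined lesion mask: a voxel-wise OR of the parts
lesion = part_a | part_b
```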

For the second example you showed, it is also unclear to the algorithm whether the neighboring high-uptake regions are part of the same lesion or, e.g., inflammation or another lesion/lymph node. The user needs to make that decision. In your case, you want all these parts included. Probably the best approach is to use “Create new” for the additional regions.

Hope that helps.

Thank you very much. It was helpful indeed.

@chribaue
I have a follow-up question.
In the case of JEI or any other semi-automatic segmentation tool, is it necessary to identify each of the disconnected parts of the tumour individually, by moving through different slices (say, in the sagittal view), in order for the algorithm to segment the tumour correctly in an image like example 1? I have seen that if I locate the tumour on just one or two slices in a particular view, then after applying the segmentation the 3D view shows that the entire tumour was not segmented (including disconnected blobs or elongated lesions), and the volume of the semi-automatically segmented tumour is far too small compared to the manually segmented volume.
I read in the paper that one of the limitations of the algorithm is that in some cases more than ten user actions are necessary to produce a segmentation of a lesion. Is this the case with the PET image in example 1?

Hope you would be able to help.

Thank you

@MPhilip
I hope I understand your questions correctly.

…semi-automatic segmentation tool, is it necessary to identify each of the disconnected parts of the tumour individually…

I think with any semi-automated tool you would have to click into each disconnected part at least once.
If the algorithm were able to identify other regions by itself and classify which of them are part of the tumor you have in mind, it would basically be a fully automated algorithm.
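That point can be illustrated with a toy seeded region-growing sketch (pure numpy, my own illustration, not the PETTumorSegmentation algorithm): growth only reaches voxels connected to the seed, so a second, disconnected blob is never touched without its own seed.

```python
from collections import deque
import numpy as np

def grow_from_seed(image, seed, threshold):
    """Flood-fill all voxels connected to `seed` (face connectivity)
    whose intensity is at least `threshold`."""
    mask = np.zeros(image.shape, dtype=bool)
    if image[seed] < threshold:
        return mask
    mask[seed] = True
    queue = deque([seed])
    while queue:
        idx = queue.popleft()
        for axis in range(image.ndim):
            for delta in (-1, 1):
                nb = list(idx)
                nb[axis] += delta
                if not (0 <= nb[axis] < image.shape[axis]):
                    continue
                nb = tuple(nb)
                if not mask[nb] and image[nb] >= threshold:
                    mask[nb] = True
                    queue.append(nb)
    return mask

# Two disconnected high-uptake blobs; a seed in the first one
img = np.array([[5., 5., 0., 0.],
                [5., 0., 0., 5.],
                [0., 0., 5., 5.]])
grown = grow_from_seed(img, (0, 0), 3.0)  # only the top-left blob grows
```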

The doctors at our hospital specify a lesion to always be one connected region, and each lymph node in a lymph node chain or adjacent to the primary tumor is considered a separate lesion. Thus, your example 1 would be considered 4 separate lesions.
Lymph nodes are typically small and roundish; they can often be segmented with only one click (and the right setting: “splitting on”). More complex lesions might need more than one click.

I locate the tumour on just one or two slices in a particular view, after applying the segmentation…

You have to click roughly at the 3D center of the lesion that you want to segment.
Once you identify a lesion, you have to move the view/slice roughly to the 3D center of the lesion and select a center point there; otherwise you might specify a point at the periphery of the lesion, and the algorithm is not designed for that.

It’s best to first get a mental impression of the lesion in the CT scan before you start segmenting it. Turning “Slice intersections” on and holding the shift key while moving over one of the views to position the other two views helps me a lot to select proper axial, coronal, and sagittal views around the lesion’s center before I start segmenting.

Hi @chribaue

Thank you for your time in replying to my query in detail.
I’m not quite sure whether I got this right:

‘You have to click roughly at the 3D center of the lesion that you want to segment.
Once you identify a lesion, you have to move the view/slice roughly to the 3D center of the lesion and select a center point there’

In the case of a disconnected blob or elongated lesion, I find that it takes some time to locate the tumour on different slices/views. Is it right to locate the tumour in different views rather than on different slices?
Is the 3D view (shown below) what gives an overall picture of the tumour before using any semi-automatic segmentation tools?


I have done a semi-automatic segmentation using the JEI tool on this image, and it is segmented as below.

The 3D view is as below.
The volume appears smaller than the manually segmented tumour.
Can it be improved, or is this just right?
I experimented with the other available options but could not find any improvement.
I hope you would be kind enough to comment on this.

Thanks in advance.

Hi @MPhilip

Our training material/tutorials were mostly designed to teach our tools to medical professionals who are familiar with tracing lesions using standard clinical tools/viewers and who have the medical expertise to identify lesions. I’m not sure I can teach you all of that, but I’ll try my best to help with your questions:

In the case of a disconnected blob or elongated lesion…

You have to split such a case into regions that you segment separately.

I find that it takes some time to locate the tumour on different slices/views

Once I find the lesion in any of the 3 (red/green/yellow) slice views, it takes me just a few seconds to find axial/coronal/sagittal slices close to the lesion center. Using the “Slice intersections” feature is really very helpful in this process:

Turn it on via the Slicer main tool bar:

Example: Initial view of PET scan with a lesion and slice intersections turned on:

Find the lesion in one of the views, e.g., the yellow one in this case. Then move the mouse cursor there while holding the shift key down. When you’re there, release the shift key. This will get the red and green slices to show the lesion as well.

In the red/green slices you can see that the yellow slice is not close to the lesion center. Move it closer to the center by again holding down the shift-key and moving the mouse cursor to the center of the structure in the red or green slice. Now we have all views close to the lesion center:

Is the 3D view (shown below) what gives an overall picture of the tumour before using any semi-automatic segmentation tools?

I think to get a complete picture of the lesion and segmentation you need to move through all the slices it intersects, preferably in all 3 anatomic planes.

The volume appears smaller than the manually segmented tumour. Can it be improved, or is this just right?

When we designed the algorithm, we adjusted it to reflect the tracing behavior of our most experienced radiation oncologist (where he would set the boundary based on properties of the lesion and its surrounding area). But different medical experts have different preferences; some draw the contours slightly larger or smaller than others.

If, based on your medical experience, you think that after the initial segmentation the overall contour is too small, you can use the “Global refinement” option to adjust the gray-value level obtained by the algorithm. Switch to this option and click in the PET scan on a location for the surface that reflects the gray level you think is right. The whole surface will adjust to this gray level.
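As a toy illustration of the “pick a gray level by clicking” idea (my own sketch; the actual effect re-solves an optimal-surface problem rather than applying a plain threshold):

```python
import numpy as np

def refine_to_clicked_level(pet, clicked_voxel):
    """Toy version of gray-level refinement: use the uptake value at the
    clicked voxel as the new boundary level for the whole mask."""
    level = pet[clicked_voxel]
    return pet >= level

pet = np.array([[1.0, 2.0],
                [3.0, 4.0]])
mask = refine_to_clicked_level(pet, (1, 0))  # clicked voxel has value 3.0
```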

Hi @chribaue

Thank you very much for explaining everything in detail. This is very easy to follow, and the “Slice intersections” option is a time saver. Thanks for introducing it to me, as I am not a medical professional and have no medical expertise in tracing lesions. I am using 3D Slicer for my research for the first time.

In the image below, I was unable to segment all the disconnected blobs using the “PET tumour segmentation” tool, even after turning on the “Slice intersections” option. As you can see in the 3D view, the entire tumour is not identified (the green 3D view was segmented using the tool, compared with the yellow 3D view of the tumour, which was segmented manually).
The 3D view of the tumour segmented using the “PET tumour segmentation” tool appears as below:


The 3D view of the tumour segmented manually appears different, and it is clear that there are some unidentified regions that are visible only if I scroll through one of the slice views (sagittal/axial/coronal), as shown below.

Also, in the image below, I find it a bit difficult to identify the lesion centre.

Hope you would be able to guide me again.

Thanks in advance.

I’ve watched medical professionals annotating image data/segmenting lesions in PET.

They typically start at the brain and go down through all axial slices of the dataset using the mouse wheel. When they identify something they think might be a lesion, they might go up and down a couple of slices around that structure and use the slice intersections to look at it in the coronal and sagittal views, maybe also going forward and backward a few slices there, to get a good idea of the shape of the structure and whether it’s really tumor or some other high-uptake structure that should not be segmented. To make this decision they might also look at the subject’s medical records, other scan modalities, or the literature.

Once they decide that it actually is a lesion, they start the segmentation. They pick a point close to the center to start. For an oddly shaped lesion this is not the center of the bounding box, but somewhere well inside the high-uptake structure. After the initial segmentation they inspect the result carefully in at least 2 of the 3 planes, checking all slices that the structure intersects. If they are not satisfied with the segmentation, they use JEI to refine it. Once they approve the segmentation of that lesion, they keep going through the remaining dataset in a systematic manner. After they have gone through the whole dataset, they might double-check everything one more time to make sure they didn’t miss anything.

In your case, you will have to do that too, and you will still have to go through all slices of the image stack that contain the structures you are interested in. You will have to identify and segment all 4 disjoint parts separately and use “Global/Local refinement”, “Create new”, or “Splitting” to improve the segmentation where you think it is needed.

Our tool does not solve the high-level tasks of identifying lesions and deciding which parts to include in the segmentation. It helps the user segment the structures they decide to segment much more efficiently and consistently, compared with manual tracing or other semi-automated tools that, e.g., require adjusting thresholds.


Thank you very much @chribaue for the explanation. Thanks for your time.

Hi @chribaue. Could you please suggest a name I can use for this tool when describing it? In 3D Slicer, when I hover over the icon, I see the name “PET tumour segmentation”. Is it ideal to use that name? Can I use the name “tool based on the JEI principle”? I am looking for something like watershed, grow from seeds, 42% SUVmax threshold, etc.
I would also like to know whether this algorithm is based on the JEI principle or on LOGISMOS; you said in one of the replies above that it is based on the LOGISMOS principle.
Many thanks