This depends on whether your paint stroke starts inside the segment or outside; the term ‘smudge’ is meant to evoke finger painting.
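The rule described above can be sketched in a few lines of plain Python (a conceptual illustration, not Slicer's actual implementation): the value of the voxel under the first click of the stroke decides whether the stroke adds to or erases the segment.

```python
# Minimal sketch of the smudge rule: a stroke that starts on a voxel
# inside the segment adds; a stroke that starts outside erases.

def smudge_mode(labelmap, start_voxel):
    """Return 'add' if the stroke starts inside the segment, else 'erase'."""
    r, c = start_voxel
    return "add" if labelmap[r][c] != 0 else "erase"

# Toy 2D labelmap: 1 = inside the segment, 0 = background.
labelmap = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

print(smudge_mode(labelmap, (1, 1)))  # stroke starts inside -> 'add'
print(smudge_mode(labelmap, (0, 0)))  # stroke starts outside -> 'erase'
```

This is also why the edge case discussed below matters: the decision is made on the binary labelmap, which may differ by a fraction of a voxel from the smoothed surface shown on screen.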
Making new segment editor effects is a bit of work, but it can certainly be done in Python. Making something efficient may be easier with some C++ code, but it’s also possible that existing VTK or ITK classes could be used.
I can see in your video that you started the paint stroke from inside the segment, in which case it should expand the segmentation. However, your case was unusual/unlucky in several respects:
You switched the segmentation representation displayed in slice views to “Closed surface”, so you see the 3D surface interpolation of the segmentation in slice views, which may have subpixel differences from the labelmap. Since the binary labelmap representation is what is actually edited, it is probably a better choice to display that representation during editing.
Voxels in your image are very large (maybe it is some low-resolution sample data set?), therefore very small (considering voxel size) differences between various representations of the segmentation may become perceivable.
You happened to start the painting very close to the boundary.
If you change any of these then “repulsing” will work as expected. Probably the most robust solution is to switch to display the binary labelmap representation in slice views; or start the painting from inside the structure (not that close to the boundary).
We could address the unexpected behavior by switching the inside/outside check to use the closed surface representation if that is shown in slice views, but it is such an edge case that it would be quite low on the priority list. If you find that after trying the described alternative solutions this is still a significant issue for you then please submit an issue at https://issues.slicer.org .
In “Closed surface” representation, starting on the inside does not seem to work (and is also not the kind of interaction I need, as seen in the video at the top of this post; note there how it shows a + or -, depending on whether it’s inside or outside the contour).
Using the “Binary labelmap” representation makes the tool function as advertised.
However, such a pixelated representation is not something that clinicians are used to and I would still need to work in the closed surface representation.
It does surprise me that this is considered an edge case, as other commercial tools also have this (1, 2, 3, 4). Some others also use the closed surface representation to draw (5). Conversely, open source tools don’t seem to have this option (6).
I will raise an issue as suggested. Thank you for the detailed response!
Slicer allows not just adding/removing but moving boundaries between any two segments. Limiting it to editing the currently selected segment could be possible, but that would feel more like a limitation than a feature. Also, if you want to just paint/erase the currently selected segment then you switch between them using the space key (after clicking on the paint and erase effects, as space key toggles between the last two active effects).
By default, the resolution of the segmentation is the same as the resolution of the source image. 2mm voxel size indeed means quite low resolution, and so the voxel size of the segmentation can be quite visible.
It could be useful to discuss with the clinicians what their accuracy requirements are. They may not realize just how crude these 2x2x2mm images are and that the resulting contours can have several-millimeter errors. Maybe this error size is acceptable, but then they should have no problem with using this resolution for representing segmentations, too.
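A rough back-of-envelope calculation illustrates the point above (assuming isotropic 2mm voxels; actual errors also depend on acquisition blur and partial volume effects, which add further uncertainty):

```python
import math

voxel_mm = 2.0  # isotropic voxel size of the low-resolution volume

# A binary labelmap can only place the boundary at voxel faces, so the
# worst-case axis-aligned quantization error is half a voxel, and the
# worst case across a voxel corner is half the voxel diagonal.
half_voxel = voxel_mm / 2.0
half_diagonal = math.sqrt(3) * voxel_mm / 2.0

print(f"axis-aligned boundary error up to {half_voxel:.2f} mm")
print(f"diagonal boundary error up to {half_diagonal:.2f} mm")
```

So discretization alone contributes errors approaching 2mm, before any errors in the image itself are considered.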
It is an edge case because you need to use low-resolution input volume + use closed surface representation + click close to the boundary (in 3D, so in the slice view it may seem you are far from the edge, but you may be actually very close).
The labelmap representation is objectively better for representing 3D shapes than planar contour sets, and most modern tools use the labelmap representation for storing segmentations.
There are a number of exceptions in the field of radiation treatment planning, as medical physicists decades ago made the unfortunate decision to use the planar contour representation for storing segmentation results in DICOM. This effectively forced commercial treatment planning systems to work with planar contours, and changing that would mean enormous cost and risk for them.
Note on microscopy
Since you have included some videos of microscopy software, I would add that of course contour representation is an attractive option in this field, because:
Data sets are mostly just 2D, so you don’t need to worry about reconstructing a 3D shape from contours, and
They need to operate on a very wide range of scale, which is trivial if you use a contour representation but quite complex if you use a multi-resolution labelmap representation.
However, these don’t apply to radiology images, which are 3D and not super-high-resolution.
3D Slicer is exceptional in that it can work with multiple representations (binary labelmap, fractional labelmap, closed surface, planar contours, ribbons). Each representation can be stored losslessly, without having to convert to some hardcoded representation, and you can also choose which representations are used for visualization and quantification. Representations are computed automatically from the source representation as needed, and the user even has control over which conversion methods are used and with what parameters (and developers can add their own custom conversion methods and representation types). As a result, Slicer can import, process/edit, visualize, and export data in a wide range of formats, in a variety of workflows. This extreme flexibility also means that the user may be exposed to more information than usual and may need to make more decisions that have further consequences (e.g., if you choose to switch from the default binary labelmap representation then you may also need to consider adjusting the internal labelmap resolution if the source image had low resolution). In clinical software, there is no such flexibility because only routine clinical workflows need to be supported; and in most research projects there is simply no capacity to develop and maintain this level of flexibility.
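The on-demand conversion idea described above can be sketched conceptually (all names here are made up for illustration, not Slicer's actual API): one source representation is stored losslessly, and other representations are computed lazily by registered converter functions.

```python
# Conceptual sketch of on-demand representation conversion: the source
# representation is authoritative; others are derived when first requested.

class Segment:
    def __init__(self, source_name, source_data):
        self.source_name = source_name
        self.representations = {source_name: source_data}
        self.converters = {}  # (from_name, to_name) -> conversion function

    def register_converter(self, from_name, to_name, func):
        self.converters[(from_name, to_name)] = func

    def get_representation(self, name):
        if name not in self.representations:
            func = self.converters[(self.source_name, name)]
            self.representations[name] = func(self.representations[self.source_name])
        return self.representations[name]

# Toy converter: the "closed surface" here is just the set of boundary voxels.
def labelmap_to_surface(mask):
    rows, cols = len(mask), len(mask[0])
    def inside(r, c):
        return 0 <= r < rows and 0 <= c < cols and mask[r][c]
    return sorted(
        (r, c)
        for r in range(rows) for c in range(cols)
        if mask[r][c] and not all(
            inside(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
        )
    )

seg = Segment("binary labelmap", [[0, 0, 0], [0, 1, 0], [0, 0, 0]])
seg.register_converter("binary labelmap", "closed surface", labelmap_to_surface)
print(seg.get_representation("closed surface"))  # boundary voxels: [(1, 1)]
```

The key design point is that editing always happens on the source representation; derived representations can be invalidated and recomputed, which is why editing while displaying a derived representation can show subpixel discrepancies.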
I tested the Color Smudge option on another scan with an xy resolution of 0.9mm and the feature worked as advertised. So I see why you call it an edge case.
The CT I have used is actually paired with a PET scan, which in general has low resolution. So the clinicians are aware of the low resolution. But I think the issue is with getting them to use tools (in this case 3D Slicer) that do not match their daily clinical experience.
Of course, as a researcher, 3D Slicer suits my purpose, since I can extend it with deep learning based auto-contouring extensions. But I also do not wish to add a learning curve for clinicians, as that can be an additional factor affecting my experiments.
I agree with you, but with a slight modification. I believe labelmaps are better for storing 3D shapes (as they then align with the same discretization as the scan they are associated with). However, visualization/editing is just more pleasing with their smoothed versions (i.e., contour points). I guess this is where I fundamentally disagree with the approach taken by Slicer in the Segment Editor. Looks like similar concerns have been raised on this forum before (1, 2). Nevertheless, I am still an advocate of Slicer at my lab and really find the tool very convenient for exploring 3D data.
This is indeed a simple solution to my issue. I tried doing this and was able to edit existing masks with a higher resolution. However, I was unable to create additional masks on other slices, even when I switch off the “Color Smudge” option. Is this an error or am I using it incorrectly? Please see the video below.
Finally, thanks for explaining your design reasons. It’s been interesting to get the inside scoop on what motivated the features of these tools.
Thanks for pointing this out, yes, it’s a limitation of the smudge tool. Since you click on a place with no segmentation it gets set to erase mode. Maybe we should take this into account somehow, like if the slice changes then smudge mode ignores a click on an empty segment and instead uses the most recent segment value. There might be cases where that’s confusing too, but at least this case would be okay.
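The tweak suggested above could look something like this (a hypothetical sketch of the proposed fix, not current Slicer behavior): if the stroke starts on an empty voxel but the slice has changed, reuse the most recently smudged segment value instead of switching to erase.

```python
# Sketch of the proposed smudge tweak: remember the last painted segment
# value; on a new slice, an empty-voxel click paints that value instead
# of erasing.

class SmudgeState:
    def __init__(self):
        self.last_value = 0
        self.last_slice = None

    def stroke_value(self, clicked_value, slice_index):
        """Return the segment value to paint (0 means erase)."""
        slice_changed = slice_index != self.last_slice
        self.last_slice = slice_index
        if clicked_value == 0 and slice_changed and self.last_value != 0:
            return self.last_value  # keep painting the previous segment
        self.last_value = clicked_value
        return clicked_value  # 0 still means erase within the same slice

state = SmudgeState()
print(state.stroke_value(3, slice_index=10))  # paints segment 3
print(state.stroke_value(0, slice_index=11))  # new slice, empty click -> 3
print(state.stroke_value(0, slice_index=11))  # same slice, empty click -> 0 (erase)
```

As noted, there might be cases where this is confusing too (e.g., intentionally erasing as the first action on a new slice), but it would handle the reported case.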
Would it help if we added a shortcut to enable the smudge option, i.e., if you hold down Alt key when painting then it would switch to the segment under the mouse pointer before making the paintstroke? We could similarly add shortcut for erasing, i.e., holding down Shift while using the paint tool would always erase.
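The proposed shortcuts could be dispatched roughly like this (a hypothetical sketch of the suggestion above; these modifiers are not an existing Slicer feature, and the names are made up for illustration):

```python
# Sketch of the proposed modifier-key behavior: Alt switches to the
# segment under the pointer before painting; Shift always erases.

def resolve_action(current_segment, segment_under_pointer, alt=False, shift=False):
    """Return (segment_to_paint, mode) for a paint stroke."""
    if shift:
        # Shift: always erase, regardless of what is under the pointer
        return current_segment, "erase"
    if alt and segment_under_pointer is not None:
        # Alt: smudge-like pick of the segment under the mouse pointer
        return segment_under_pointer, "paint"
    return current_segment, "paint"

print(resolve_action("tumor", "liver", alt=True))    # ('liver', 'paint')
print(resolve_action("tumor", "liver", shift=True))  # ('tumor', 'erase')
print(resolve_action("tumor", None, alt=True))       # ('tumor', 'paint')
```

Note how Alt over an empty area falls back to the currently selected segment, which would also cover the "paint in areas where the segmentation does not exist" case.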
The +/- hint does not make much sense to me for smudge mode, as it assumes that you work with a single segment. Very often you want to adjust the contours between two segments, so you would need a hint (change color, display text, etc.) that indicates which segment would be painted there.
However, overall I’m not sure how common a need “segmentation touch up” is. If you create ground truth segmentations for AI training then you don’t want to introduce a bias by starting from some poor contours. For clinical work, it is hard to imagine that requiring manual touchup of segmentation using a paintbrush is a viable strategy (quick broad cuts using Scissors, yes, but it is really hard to imagine someone doing meticulous slice-by-slice painting in day-to-day clinical work).
@strider_hunter What is your use case? AI training data set generation or clinical work? What are your time constraints and accuracy requirements?
In my limited experience observing clinicians, they work with just one organ/tumor type at a time, and also use a single tool for painting/erasing (with shortcuts associated with that tool). So I can only comment on such a scenario.
Thus, having chosen a particular organ/tumor (and the paint tool with smudge option), my use case is to allow the clinician to edit an existing AI segmentation so that it can be improved to clinical quality. So then they add/erase parts of the segmentation. In areas where the segmentation does not exist, I think the Alt keypress to add segmentation would be useful (see the video above this post).
I believe the field of auto-segmentation is moving toward an “edit-and-QA” workflow rather than “drawing-from-scratch”. Given the maturity of deep learning models, large datasets, and the software ecosystem, it seems natural for this to be the next step. Check this video for example (they don’t edit segmentations though).
To summarize this really long thread, this is my main requirement (smoothed=Closed surface). But it seems that this requirement is not the direction that Slicer will take due to all the reasons cited in the comments above.