I agree that it looks cool, so I’ve reproduced something like this with the Watershed algorithm that Slicer uses. The result was completely disappointing: the method is essentially useless, especially for segmenting in 3D.
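For readers unfamiliar with the underlying idea, here is a minimal toy sketch of marker-based watershed flooding in pure Python. This is an illustration only, not Slicer's actual ITK implementation: the `watershed` helper, the toy image, and the seed placement are all made up for the example.

```python
import heapq

def watershed(image, markers):
    """Toy marker-based watershed on a 2D grid (4-connectivity):
    flood outward from the seeds in order of increasing intensity,
    so labels meet along high-intensity ridges."""
    h, w = len(image), len(image[0])
    labels = [row[:] for row in markers]  # 0 = unlabeled
    heap = [(image[y][x], y, x)
            for y in range(h) for x in range(w) if markers[y][x]]
    heapq.heapify(heap)
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]
                heapq.heappush(heap, (image[ny][nx], ny, nx))
    return labels

# Two flat basins separated by a high-intensity ridge at column 4.
img = [[9 if x == 4 else 1 for x in range(9)] for _ in range(5)]
seeds = [[0] * 9 for _ in range(5)]
seeds[2][1], seeds[2][7] = 1, 2   # one seed per structure
labels = watershed(img, seeds)    # labels 1 and 2 meet at the ridge
```

With one seed per structure the flood separates the two basins cleanly, which is exactly what makes the granularity of the cells, not the flooding itself, the limiting factor in the demoed tool.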
In 3D, you cannot make good decisions about which cell to add or remove by looking at a single slice. You need to inspect each cell in 3D, in orthogonal slices, to decide whether it should be added or not. This would require several seconds for each cell.
With large cell sizes you cannot segment accurately. You can see in their demo video that even in 2D the cells are often too coarse, and you cannot separate structures that would normally be trivial to segment (e.g., using the Grow from seeds or Watershed effects in Slicer). With small cell sizes, it becomes tedious to fill in the entire structure.
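The accuracy side of this tradeoff can be quantified with a small sketch: if the mask can only be edited in k x k cells, there is a hard lower bound on the pixel error for any structure thinner than a cell. The `best_cell_error` helper and the toy image below are illustrative assumptions, not anything from the demoed tool.

```python
def best_cell_error(truth, k):
    """Minimum number of mislabeled pixels when the mask can only be
    edited in k x k cells (each cell is wholly in or wholly out)."""
    h, w = len(truth), len(truth[0])
    err = 0
    for by in range(0, h, k):
        for bx in range(0, w, k):
            ys = range(by, min(by + k, h))
            xs = range(bx, min(bx + k, w))
            fg = sum(truth[y][x] for y in ys for x in xs)
            size = len(ys) * len(xs)
            err += min(fg, size - fg)  # best of: cell off vs cell on
    return err

# A one-pixel-wide vertical structure in a 6x6 image.
thin = [[1 if x == 2 else 0 for x in range(6)] for _ in range(6)]
fine_err   = best_cell_error(thin, 1)  # per-pixel cells are exact
coarse_err = best_cell_error(thin, 3)  # 3x3 cells cannot trace it
```

With per-pixel cells the error is zero, while 3x3 cells must either miss the thin structure or overfill around it no matter how the cells are toggled.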
In 3D, you need to review and fill in every slice. You cannot skip slices, as most often neighboring slices have a number of holes that you need to review and fill in. In the best case, segmenting a very simple shape on a single slice at unrealistically low resolution takes about 10 seconds (see the demo video), but with a more realistic shape and resolution it would probably take half a minute to a minute. Segmenting a single structure that spans 50-100 slices could take an hour or so. The same structure would be segmented in a few minutes using existing tools in Slicer (assuming the structure has reasonable contrast, which is a requirement for the demoed tool as well).
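A quick back-of-the-envelope check of that estimate, using the per-slice figures stated above (these are the assumed 30-60 s per slice, not measurements):

```python
# Rough per-structure time from per-slice fill time and slice count.
def total_minutes(seconds_per_slice, n_slices):
    return seconds_per_slice * n_slices / 60.0

best  = total_minutes(30, 50)    # 25.0 minutes
worst = total_minutes(60, 100)   # 100.0 minutes
# Both figures bracket the "an hour or so" estimate for a
# structure spanning 50-100 slices.
```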
I can imagine that there are a few special cases in which the tool shown in the demo may be usable, but it would be impractical for most segmentation problems. Therefore, I do not think it is worth the time to implement and maintain such a segmentation tool in Slicer.