I think segmentation itself is best done outside of Slicer. I used to install TensorFlow and PyTorch in Slicer’s Python environment and run segmentation in Slicer, but then it seemed better to run segmentation in everyone’s favorite AI environment and communicate with Slicer via OpenIGTLink. You can stream the ultrasound images from Slicer or PLUS to an Anaconda environment and receive the segmentations back in Slicer (or whatever your application is) in near real time.
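As a rough sketch of that loop on the Anaconda side: the pyigtl package (pip install pyigtl) can receive image messages and send predictions back. The host, port, and device names below are assumptions for illustration, and the exact pyigtl API may differ in your version; adjust to your setup.

```python
# Sketch of a receive-segment-send loop on the Python/Anaconda side.
# Assumes Slicer or PLUS streams images over OpenIGTLink on localhost:18944
# under the device name "Image_Image" -- both are assumptions, not defaults
# you can rely on.

def segment(image_rows):
    """Placeholder segmentation: threshold pixel intensities at 127.
    Replace this with your TensorFlow/PyTorch model's inference call."""
    return [[1 if px > 127 else 0 for px in row] for row in image_rows]

def stream_loop(host="127.0.0.1", port=18944):
    # Imports kept local so the helper above stays usable without pyigtl/numpy.
    import numpy as np
    import pyigtl  # hypothetical usage sketch; check the pyigtl docs

    client = pyigtl.OpenIGTLinkClient(host=host, port=port)
    while True:
        message = client.wait_for_message("Image_Image", timeout=5)
        if message is None:
            continue  # no image arrived within the timeout; keep waiting
        mask = segment(np.squeeze(message.image).tolist())
        # Send the prediction back under its own device name so the
        # receiving side can tell it apart from the raw image stream.
        client.send_message(
            pyigtl.ImageMessage(np.asarray(mask, dtype=np.uint8),
                                device_name="Prediction"))

# Call stream_loop() while the image stream is running; in Slicer, an
# OpenIGTLinkIF connector then picks up the "Prediction" images.
```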
I still use Slicer for manual segmentation, to generate training data, like this: https://youtu.be/zlrUFaP9q1w?si=4UgkGDA9U_Jpyz5S
And Slicer is great for visualization, 3D volume reconstruction (if you track your ultrasound images), and providing a custom user interface if necessary for clinical users.
For ultrasound segmentation AI training, I’m using this code (under development): https://github.com/SlicerIGT/aigt/tree/master/UltrasoundSegmentation