We are working on training a new U-Net segmentation model, which requires manual segmentation of ultrasound images to generate AI training data. The module “Single Slice Segmentation” from “SlicerIGT” seems very helpful because it generates .npy files. But the .npy files I exported contained only the ultrasound images and no segmentation information, and the file sizes of “_ultrasound.npy” and “_segmentation.npy” are the same. Did something go wrong?
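Incidentally, equal file sizes are expected when the image and segmentation arrays have the same shape and data type. A quick NumPy check (file names here are just examples, and the arrays are simulated to keep the snippet self-contained) shows whether a segmentation array actually contains any labels:

```python
import numpy as np

# Simulate an exported pair: image data and an empty (all-zero) segmentation
ultrasound = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
segmentation = np.zeros((128, 128), dtype=np.uint8)
np.save("scan_ultrasound.npy", ultrasound)
np.save("scan_segmentation.npy", segmentation)

us = np.load("scan_ultrasound.npy")
seg = np.load("scan_segmentation.npy")

# Same shape and dtype, so equal file sizes are expected
print(us.shape == seg.shape)  # True

# If np.unique returns only 0, the file contains no segmentation labels
print(np.unique(seg))  # [0]
```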
@ungi can you help with this?
I use this module regularly, and I have not run into the issue you describe.
The error message in your second screenshot is from a function that saves your segmentation. That code is not executed during data export, so the problem might have occurred while saving your segmentations. I don’t see anything wrong in your first screenshot, so I need more information to figure out the problem. Could you post your full Slicer log here?
Help / Report a bug / Copy log message to clipboard
It would be best if you put a Slicer scene with a short recorded ultrasound sequence in a shared folder and share that too. I would use that to segment and export the data on my computer to see if I can reproduce the problem.
Hello professor Ungi,
I am very glad to get your help. The steps were as follows: I set the input selections as shown in the first screenshot, added a new segmentation item, used the paint tool to define the segmentation area, and clicked “Capture”. Sometimes I clicked “Skip” to skip improper images.
Of the files I uploaded, one is the initially recorded ultrasound sequence and the other is the completed segmentation scene. This time, when I clicked export, no file was exported.
I’d appreciate it if you could help me see what went wrong, thank you!
Slicer log and scene files in OneDrive: Microsoft OneDrive - Access files anywhere. Create docs with free Office Online.
Most likely the problem was that your ultrasound image and your segmentation were in different positions. Segmentations use something called a “master volume”. It is an image that provides pixel/voxel locations for the segmentation to paint on. If your segmentation object is not perfectly overlapping with your master volume, then your segmentations will not work. It’s like trying to paint, but your paintbrush is not touching the canvas.
Your master volume is an ultrasound image that is on a moving transform. When you create the segmentation node, it copies the current location of the ultrasound image. But when you go to the next frame, the ultrasound image moves, so your paintbrush moves off of your canvas (the segmentation node). The solution is to always keep the master volume and the segmentation in the same place in the transform hierarchy. Since segmentations are created at the root of the transform hierarchy, you must put your ultrasound image at the root of the transform hierarchy when you first open the Segment Editor module. I know this is not user friendly, but there is no other way to do it, because the Segment Editor automatically creates the segmentation there when you first open that module.
Just to make sure you see every detail, I’ve recorded my screen while using your data. You will notice that in the beginning, I painted on untransformed images. But after a few frames I moved the image and the segmentation to the tracker transform and continued to paint there. The only important thing is to always keep these two together. Here is the video:
Let me know if this solves your problem.
Thank you very much, that is the perfect solution to my problem and I exported the segmentation sequence successfully. Thank you again!
There is one small problem: the images I exported seem distorted compared to the original ones, as if covered with mist. Is this a problem with my source file? Because I didn’t make any adjustments.
I’m glad it worked.
It’s not mist; the two images are identical. You are just using different viewer software with different brightness and contrast settings for the left and right images. Note that medical image viewers call brightness and contrast “level” and “window”. A higher level means lower brightness, and a higher window means lower contrast.
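If it helps, here is a small NumPy sketch of a simplified window/level mapping (not Slicer’s exact implementation) showing how identical pixel data can be rendered differently depending on the viewer’s settings:

```python
import numpy as np

def apply_window_level(image, window, level):
    """Map raw intensities to display intensities in [0, 1].
    Higher level -> darker display; higher window -> lower contrast."""
    low = level - window / 2.0
    return np.clip((image - low) / float(window), 0.0, 1.0)

pixels = np.array([50.0, 100.0, 150.0])

# The same pixel values rendered with two different window/level settings
print(apply_window_level(pixels, window=100, level=100))  # [0.  0.5 1. ]
print(apply_window_level(pixels, window=200, level=128))  # flatter, lower-contrast rendering
```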
I see, thank you for your help and guidance!
I am trying to use the Single Slice Segmentation extension too, and I think I have the same problem. I wanted to watch your video, but it isn’t available anymore. Would it be possible to share the video again?
Hi @ElkeC, interesting that the video link doesn’t work… I’ve created another link for the same file. Please try this: Microsoft OneDrive - Access files anywhere. Create docs with free Office Online.
I have a problem that has been bothering me lately. The manual segmentation results exported by 3D Slicer come in two file formats, .npy and .png, but our binary masks differ from the PerkLab open data: the gray values in our segmentation images range from 0 to 255 rather than 0 to 1, and somehow the gray values in our ultrasound images range from 0 to 1. After many attempts, we found that this affects the training outcome. So, is there something that can be changed in the Single Slice Segmentation module?
Thanks! Wish you best!
Hi, the intensity range is typically handled in your AI training script. If you load the segmentation data into a NumPy array (let’s call it S) and its values range between 0 and 255, then you can scale it to between 0 and 1 with one line of code:
S = S / 255.0
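For completeness, here is a minimal self-contained version of the same idea (the small array here just stands in for your exported .npy data):

```python
import numpy as np

# Example segmentation values as exported: 0 for background, 255 for foreground
S = np.array([[0, 255], [255, 0]], dtype=np.uint8)

# Cast to float before dividing so the labels become 0.0 and 1.0
S = S.astype(np.float32) / 255.0
print(S)  # [[0. 1.] [1. 0.]]
```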
I hope this helps.
OK I will try, thank you!