Dear all,
I am currently working on automatic segmentation in ultrasound imaging using the instructions provided on the AIGT GitHub repository.
I have successfully trained my own model by following all the steps outlined there. As a result, I obtained several files, all named unet-DiceCE.
However, I need the final segmentation model in .h5 format to use it for ultrasound segmentation with 3D Slicer. Could anyone advise on how to convert these files or generate the required .h5 format?
Dear Dr. @ungi, since you are actively involved in research on this topic, I would greatly appreciate your guidance.
Thank you very much.
Hi, if you are using this training script:
aigt/UltrasoundSegmentation/train.py at master · SlicerIGT/aigt
then the models are saved in traced format, so you don't need the source code that defines the model. You could change the lines where the traced model is saved to save in any other format you like. But I'm not sure why you would want to do that.
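In case it helps, here is a minimal sketch of what saving and loading in traced format looks like (with a placeholder network and an assumed input shape, not the actual U-Net from train.py):

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the U-Net that train.py builds
model = nn.Sequential(nn.Conv2d(1, 1, kernel_size=3, padding=1), nn.Sigmoid())
model.eval()

# Tracing records the operations on an example input, so the saved file
# is self-contained (the input shape here is just an assumption)
example_input = torch.rand(1, 1, 128, 128)  # (batch, channels, height, width)
traced = torch.jit.trace(model, example_input)
traced.save("model_traced_best.pt")

# Later, the file loads without the Python class that defined the model
reloaded = torch.jit.load("model_traced_best.pt")
```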
Thank you for answering my question.
Since the Segmentation UNet extension in 3D Slicer uses the .h5 file format, I want to convert the final model to .h5.
Do you mean this is not necessary?
I think I can picture what is happening based on the final results of this model. From what I learned in your PowerPoint file (U38-live AIrec), the file needs to be in .h5 format to be imported into the SegmentationUNet extension. However, I noticed there are newer extensions in your GitHub repository, such as torch_live_ultrasound
or "torch sequence segmentation". Should the final "model_traced_best.pt" be used directly with one of these extensions?
Oh, unfortunately that tutorial is outdated. It still works, but that is not how I would do things now.
For real-time applications, you will get the best performance if you stream the ultrasound data directly to a segmentation script running in a separate Python environment. That script then streams the segmentations back to Slicer. Here is an example of such a script: aigt/UltrasoundSegmentation/Inference/ScanConversionInference.py at master · SlicerIGT/aigt
And this example script expects traced models, so you don’t need to change the way models are saved.
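A rough sketch of that streaming pattern, assuming the pyigtl package and default OpenIGTLink ports (the device names, input shape, and preprocessing below are placeholders; see ScanConversionInference.py for the real implementation):

```python
import numpy as np
import pyigtl  # pip install pyigtl
import torch

# Receive images from the ultrasound stream (e.g. Plus or Slicer OpenIGTLink)
client = pyigtl.OpenIGTLinkClient(host="127.0.0.1", port=18944)
# Send predictions back to Slicer over a second connection
server = pyigtl.OpenIGTLinkServer(port=18945)

model = torch.jit.load("model_traced_best.pt")
model.eval()

while True:
    message = client.wait_for_message("Image_Image", timeout=5)  # device name is an assumption
    if message is None:
        continue
    # message.image is a numpy array; the shape handling here is simplified
    image = torch.from_numpy(message.image.astype(np.float32))
    image = image.reshape(1, 1, *message.image.shape[-2:]) / 255.0
    with torch.inference_mode():
        prediction = model(image)
    # Threshold to a binary mask and stream it back as an image message
    mask = (prediction.numpy() > 0.5).astype(np.uint8) * 255
    server.send_message(pyigtl.ImageMessage(mask.squeeze(0), device_name="Prediction"))
```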
Ah, I see. This likely means the SegmentationUNet extension is outdated. May I ask what steps I should follow if I want to run the auto-segmentation on a recorded sequence in 3D Slicer?
For that, you may use the Torch Sequence Segmentation module:
aigt/SlicerExtension/LiveUltrasoundAi/TorchSequenceSegmentation at master · SlicerIGT/aigt
First of all, thank you for the guidance so far.
However, I have encountered some problems using Torch Sequence Segmentation. With the settings configured as shown in the video, I selected my model, which is named model_traced_best. But when I enable 3D Volume Reconstruction and press start, nothing happens; instead, the image freezes on the current frame. The 3D view still updates according to the transforms, but the displayed ultrasound image remains unchanged.
Moreover, the generated prediction file doesn’t actually contain any predictions.
Is there anything specific I should pay attention to while using this extension?
I am attaching a video here to demonstrate what is happening.
It shows the behavior before and after starting Torch Sequence Segmentation.
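In the meantime, this is the kind of minimal offline check I plan to run to rule out the model itself (the input shape below is just my assumption about what the network expects):

```python
import torch

model = torch.jit.load("model_traced_best.pt")
model.eval()

# Dummy frame with the shape I assume the network expects
dummy = torch.rand(1, 1, 128, 128)
with torch.inference_mode():
    output = model(dummy)

print(output.shape, output.min().item(), output.max().item())
# If min and max are identical here as well, the problem is likely in the
# model export rather than in the Slicer extension
```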