Error while live AI volume reconstruction

Hello everyone,

I am trying to work with the live AI segmentation shown in the YouTube video and the related PowerPoint. Although I have followed the exact same procedure, after loading the .mrb file, when I try to run the segmentation through the SegmentationUNet module, I keep getting the error below.

Failed to start live segmentation: 'InputLayer' object has no attribute 'input_shape'

Additionally, my Python console shows:

[VTK] GetSliceOrientationPreset: invalid orientation preset name: Reformat
[FD] 2024-05-31 14:35:01.106830: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
[FD] To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[Python] Failed to start live segmentation: 'InputLayer' object has no attribute 'input_shape'
Traceback (most recent call last):
File "C:/Users/effatpar/Desktop/SlicerIGT aigt master SlicerExtension-LiveUltrasoundAi_SegmentationUNet/SegmentationUNet.py", line 296, in onApplyButton
self.logic.setRealTimePrediction(toggled)
File "C:/Users/effatpar/Desktop/SlicerIGT aigt master SlicerExtension-LiveUltrasoundAi_SegmentationUNet/SegmentationUNet.py", line 516, in setRealTimePrediction
model_input_shape = layer.input_shape[0]
AttributeError: 'InputLayer' object has no attribute 'input_shape'
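
From the traceback, the failure is at line 516 of SegmentationUNet.py, where the module reads the model input shape from the first layer. My guess is a Keras version mismatch: TensorFlow 2.16 and later ship Keras 3, which removed the Keras 2 input_shape attribute from InputLayer. Below is the compatibility shim I have been experimenting with; the batch_shape fallback is my assumption about the Keras 3 replacement, not a verified fix, and pinning tensorflow<2.16 may be the safer option:

```python
import tensorflow as tf

def get_model_input_shape(model):
    """Read a loaded Keras model's input shape across Keras versions."""
    layer = model.layers[0]
    try:
        return layer.input_shape[0]  # Keras 2 API (TensorFlow <= 2.15)
    except AttributeError:
        return layer.batch_shape     # assumed Keras 3 replacement (unverified)

# Hypothetical usage with a trained segmentation model file:
# model = tf.keras.models.load_model("SegmentationUNet_model.h5", compile=False)
# print(get_model_input_shape(model))
```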

I would be grateful for any help.

We moved on from TensorFlow to PyTorch and MONAI. Probably the best approach would be to train your own ultrasound segmentation model using MONAI.
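
As a starting point, a minimal MONAI training loop for 2D ultrasound segmentation could look like the sketch below. The network size, image dimensions, and the random placeholder batch are illustrative; you would replace them with your own data pipeline:

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Minimal sketch: 2D UNet for single-channel ultrasound with 2 output classes.
model = UNet(
    spatial_dims=2,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch; replace with a DataLoader over your images and labels.
images = torch.rand(4, 1, 128, 128)             # (batch, channel, H, W)
labels = torch.randint(0, 2, (4, 1, 128, 128))  # integer class labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```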

@ungi is now developing the Ultrasound extension for Slicer, which has useful tools to help with this.

Yes, the answer is a bit complex. We left the SegmentationUNet Slicer module in the aigt repository because some systems still use TensorFlow, and that example also shows how to start a separate process for inference from Slicer. But that is not how I would implement a new system today.
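
For reference, the launch pattern looks roughly like the following; the script path and command-line arguments are hypothetical placeholders, not the module's actual ones:

```python
import subprocess
import sys

# Hedged sketch: run inference in a separate process so a heavy framework
# (TensorFlow or PyTorch) does not block or destabilize the Slicer process.
# The script path and arguments below are hypothetical placeholders.
inference_script = "C:/path/to/inference_script.py"
process = subprocess.Popen(
    [sys.executable, inference_script, "--model", "model.pt", "--port", "18944"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
```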

In new systems, the ultrasound machine (or PLUS) directly streams the images to an inference program that runs the AI model on the images in real time (not in Slicer). The inference program can stream the segmentations to Slicer using the pyigtl package and an OpenIGTLink connector in Slicer.
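
A stripped-down version of that loop with pyigtl could look like this; the host, port numbers, device names, and the segment function are assumptions for illustration:

```python
import numpy as np
import pyigtl

# Hedged sketch: receive ultrasound frames over OpenIGTLink (e.g. from PLUS),
# run the model on each frame, and stream the segmentation back for Slicer.
# Host, ports, and the "Image"/"Prediction" device names are assumptions.
client = pyigtl.OpenIGTLinkClient(host="127.0.0.1", port=18944)  # image source
server = pyigtl.OpenIGTLinkServer(port=18945)  # Slicer's connector attaches here

def segment(image: np.ndarray) -> np.ndarray:
    """Placeholder for the real model inference."""
    return (image > image.mean()).astype(np.uint8)

while True:
    message = client.wait_for_message("Image", timeout=5)
    if message is None:
        continue
    prediction = segment(message.image)
    server.send_message(pyigtl.ImageMessage(prediction, device_name="Prediction"))
```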

We don’t have comprehensive tutorials yet, mainly because systems are still evolving. But an example inference program can be found here:
https://github.com/SlicerIGT/aigt/blob/master/UltrasoundSegmentation/Inference/ScanConversionInference.py

MONAI is a great help in creating and training PyTorch models. The example above does not depend on MONAI for inference, but we should adopt more MONAI modules. For example, the example above uses a separate custom config file to specify how to preprocess the images before they can be used by the trained model. It would be better to use standard methods and formats for that, to ensure that trained models are compatible across inference programs.
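
To illustrate, the preprocessing that the custom config file encodes could instead be expressed with standard MONAI transforms, which any MONAI-based inference program can reproduce; the specific chain below is illustrative, not the one the example uses:

```python
from monai.transforms import Compose, EnsureChannelFirst, Resize, ScaleIntensity

# Illustrative preprocessing chain; the actual steps for a given model
# should match its training configuration.
preprocess = Compose([
    EnsureChannelFirst(channel_dim="no_channel"),  # add a channel axis to a 2D frame
    Resize(spatial_size=(128, 128)),               # match the model's input size
    ScaleIntensity(minv=0.0, maxv=1.0),            # normalize intensities to [0, 1]
])

# Usage: prediction = model(preprocess(frame)[None])  # add a batch dimension
```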


Thank you very much for your reply. It's great to hear that the tooling is evolving. However, since I have already started a new study based on what I had learned from your current model, is there any way I can keep using the TensorFlow method in its current form? Alternatively, how can I adapt my own system to work with TensorFlow?

Honestly, I don’t have enough samples at the moment to train my own segmentation model.

Thanks in advance for your help.