Some steps further :-)
Still an error:
Processing started
Writing input file to C:/Users/Ronald/AppData/Local/Temp/Slicer/__SlicerTemp__2022-12-28_17+54+58.904/total-segmentator-input.nii
Creating segmentations with TotalSegmentator AI…
Total Segmentator arguments: ['-i', 'C:/Users/Ronald/AppData/Local/Temp/Slicer/__SlicerTemp__2022-12-28_17+54+58.904/total-segmentator-input.nii', '-o', 'C:/Users/Ronald/AppData/Local/Temp/Slicer/__SlicerTemp__2022-12-28_17+54+58.904/segmentation', '--ml', '--task', 'total', '--fast']
Please cite the following paper when using nnUNet:
Isensee, F., Jaeger, P.F., Kohl, S.A.A. et al. "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation." Nat Methods (2020). https://doi.org/10.1038/s41592-020-01008-z
If you have questions or suggestions, feel free to open an issue at https://github.com/MIC-DKFZ/nnUNet
preprocessing C:\Users\Ronald\AppData\Local\Temp\nnunet_tmp_8pv33ag1\s01.nii.gz
using preprocessor GenericPreprocessor
before crop: (1, 293, 200, 200) after crop: (1, 293, 200, 200) spacing: [3. 3. 3.]
no resampling necessary
no resampling necessary
before: {'spacing': array([3., 3., 3.]), 'spacing_transposed': array([3., 3., 3.]), 'data.shape (data is transposed)': (1, 293, 200, 200)}
after: {'spacing': array([3., 3., 3.]), 'data.shape (data is resampled)': (1, 293, 200, 200)}
(1, 293, 200, 200)
This worker has ended successfully, no errors to report
Traceback (most recent call last):
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Scripts\TotalSegmentator", line 201, in <module>
    main()
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Scripts\TotalSegmentator", line 179, in main
    seg = nnUNet_predict_image(args.input, args.output, task_id, model=model, folds=folds,
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\totalsegmentator\nnunet.py", line 232, in nnUNet_predict_image
    nnUNet_predict(tmp_dir, tmp_dir, task_id, model, folds, trainer, tta)
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\totalsegmentator\nnunet.py", line 106, in nnUNet_predict
    predict_from_folder(model_folder_name, dir_in, dir_out, folds, save_npz, num_threads_preprocessing,
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\inference\predict.py", line 668, in predict_from_folder
    return predict_cases_fastest(model, list_of_lists[part_id::num_parts], output_files[part_id::num_parts], folds,
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\inference\predict.py", line 493, in predict_cases_fastest
    res = trainer.predict_preprocessed_data_return_seg_and_softmax(d, do_mirroring=do_tta,
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\training\network_training\nnUNetTrainerV2.py", line 211, in predict_preprocessed_data_return_seg_and_softmax
    ret = super().predict_preprocessed_data_return_seg_and_softmax(data,
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\training\network_training\nnUNetTrainer.py", line 516, in predict_preprocessed_data_return_seg_and_softmax
    ret = self.network.predict_3D(data, do_mirroring=do_mirroring, mirror_axes=mirror_axes,
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\network_architecture\neural_network.py", line 147, in predict_3D
    res = self._internal_predict_3D_3Dconv_tiled(x, step_size, do_mirroring, mirror_axes, patch_size,
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\network_architecture\neural_network.py", line 384, in _internal_predict_3D_3Dconv_tiled
    predicted_patch = self._internal_maybe_mirror_and_pred_3D(
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\network_architecture\neural_network.py", line 510, in _internal_maybe_mirror_and_pred_3D
    result_torch = to_cuda(result_torch, gpu_id=self.get_device())
  File "C:\Users\Ronald\AppData\Local\NA-MIC\Slicer 5.2.1\lib\Python\Lib\site-packages\nnunet\utilities\to_torch.py", line 31, in to_cuda
    data = data.cuda(gpu_id, non_blocking=non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 356.00 MiB (GPU 0; 6.00 GiB total capacity; 4.75 GiB already allocated; 0 bytes free; 4.78 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Exception ignored in: <totalsegmentator.libs.DummyFile object at 0x0000020E346E5250>
AttributeError: 'DummyFile' object has no attribute 'flush'
Using 'fast' option: resampling to lower resolution (3mm)
Resampling…
Resampled in 4.12s
Predicting…
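The error message above suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce fragmentation. I have not tried it yet, but I assume it would look roughly like this (the 128 MiB value is just a guess, and I'm assuming the TotalSegmentator process inherits the environment it is launched from):

```python
import os

# Guess based on the error message: stop the caching allocator from splitting
# blocks larger than this size (in MiB), which can reduce fragmentation.
# Must be set before PyTorch allocates any GPU memory, e.g. in the Slicer
# Python console before starting the segmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```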
I got a CUDA out-of-memory error. Should I try installing the CPU version of PyTorch instead? I do have an NVIDIA card, but the amount of memory is quite low (6 GB)…
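For reference, this is roughly what I think I could run in the Slicer Python console to see how much GPU memory PyTorch actually has available (just a sketch on my side, untested; torch.cuda.mem_get_info needs a reasonably recent PyTorch):

```python
import torch

if torch.cuda.is_available():
    # Free and total memory on GPU 0, in bytes, as reported by the CUDA driver.
    free_b, total_b = torch.cuda.mem_get_info(0)
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"free: {free_b / 1024**3:.2f} GiB of {total_b / 1024**3:.2f} GiB total")
else:
    print("No CUDA device visible to PyTorch (inference would run on CPU).")
```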
Thanks
Ronald