The current code is the result of lots of feedback and troubleshooting on the computers of thousands of users. All those checks and warnings are guardrails that prevent users from reporting that “Slicer crashed” (it actually just hangs for a long time while the computation is in progress). So, simply removing these checks is not the best solution.
Adding a `silent` option (enabled by default, disabled when the module widget calls the logic) to skip any GUI interaction and just do what is specified in the input arguments would be a very simple change. It would be great if you could implement this, test that it works for you, and send a pull request.
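For example, a minimal sketch of what this could look like in the logic (the method signature and the `_hasCapableGpu` helper here are hypothetical, not the module's actual API):

```python
import slicer
from slicer.ScriptedLoadableModule import ScriptedLoadableModuleLogic


class TotalSegmentatorLogic(ScriptedLoadableModuleLogic):

    def process(self, inputVolume, outputSegmentation, fast=True, silent=True):
        """Run the segmentation.
        silent=True (default): no GUI interaction, just use the input arguments as given
        (e.g. when the logic is driven from a script or another module).
        silent=False: set by the module widget, so the existing confirmation popup
        still protects interactive users from very long runtimes.
        """
        if not silent and not fast and not self._hasCapableGpu():
            # _hasCapableGpu() is a hypothetical helper standing in for the existing hardware checks
            if not slicer.util.confirmOkCancelDisplay(
                    "Full-resolution segmentation on the CPU may take tens of minutes. Continue?"):
                return False
        # ... the actual segmentation computation runs here, unchanged ...
        return True
```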
The current location of the parameter checks and popups is also incorrect: all GUI interactions belong in the module widget, not in the logic. If you can implement this change so that it all works for you while the current behavior via the GUI does not change, that would be even better. Maybe instead of a popup, a warning icon should appear next to the “Apply” button when the runtime is likely to be excessive.
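As a rough sketch of the widget-side alternative (the `checkResourceWarning` method and the widget member names are made up for illustration, assuming the logic returns a warning string instead of showing dialogs itself):

```python
import qt
from slicer.ScriptedLoadableModule import ScriptedLoadableModuleWidget


class TotalSegmentatorWidget(ScriptedLoadableModuleWidget):

    def updateApplyButtonWarning(self):
        # Ask the logic for a warning message instead of letting the logic pop up a dialog.
        # checkResourceWarning() is a hypothetical logic method returning None when no warning is needed.
        warningText = self.logic.checkResourceWarning(fast=self.ui.fastCheckBox.checked)
        if warningText:
            # Show a standard warning icon on the Apply button and put the explanation in its tooltip
            warningIcon = self.ui.applyButton.style().standardIcon(qt.QStyle.SP_MessageBoxWarning)
            self.ui.applyButton.setIcon(warningIcon)
            self.ui.applyButton.toolTip = warningText
        else:
            self.ui.applyButton.setIcon(qt.QIcon())
            self.ui.applyButton.toolTip = "Start segmentation"
```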
If you have any suggestions for changing the wording of the messages then please send a pull request. I have also found that full-resolution segmentation on CPU can run on a 20-core workstation with 64GB RAM within a few minutes, but on an i7 laptop with 16GB RAM the typical runtime is around 40 minutes. Maybe some of this information could be added to the popup messages and documentation.
Also note that we are releasing a new extension tomorrow: MONAIAuto3DSeg. It offers similar models (low-res/high-res with similar hardware requirements) to TotalSegmentator, but it is better in many respects: it contains not just a few models, but over 30; not just healthy anatomy, but various abnormalities; not just CT, but also MRI (and some private models have been used successfully on ultrasound, too); it can accept not just a single input image but multiple images; it can run in the background and the segmentation can be interrupted at any time; it has fewer Python package dependencies, installs more cleanly, and can be trained faster than nn-UNet; etc. I would recommend checking it out.