I suggest implementing exactly what is shown in the paper you linked to. What is missing from the current implementation is the automatic physical spin of the detector (and the 90/180-degree image rotation in software) as you rotate the C-arm. The projection curve does not specify the detector spin. The spin is computed from simple rules that help clinicians orient themselves by aligning on-screen directions with anatomical directions. For a head-first supine position, the commonly used rules are (a code sketch follows the list):
- align screen up with patient superior direction
- align screen right with patient left direction (except near lateral images, where patient anterior direction is used instead)
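
To make these rules concrete, here is a minimal sketch of how the spin axes could be computed by projecting the anatomical axes onto the detector plane. The RAS axis conventions, the function name, and the 30-degree threshold for detecting a near-lateral view are my assumptions, not taken from the paper:

```python
import numpy as np

def detector_screen_axes(beam_dir, lateral_threshold_deg=30.0):
    """Return (up, right) screen direction unit vectors in patient RAS
    coordinates for a head-first supine patient (illustrative sketch)."""
    view = np.asarray(beam_dir, dtype=float)
    view = view / np.linalg.norm(view)

    superior = np.array([0.0, 0.0, 1.0])       # patient S in RAS
    patient_left = np.array([-1.0, 0.0, 0.0])  # patient L is -R in RAS
    anterior = np.array([0.0, 1.0, 0.0])       # patient A in RAS

    # Rule 1: screen up = projection of patient superior onto the detector
    # plane. (Near-axial views, where this projection degenerates, would
    # need a similar fallback; omitted here for brevity.)
    up = superior - np.dot(superior, view) * view
    up = up / np.linalg.norm(up)

    # Rule 2: screen right follows patient left, except for near-lateral
    # views (beam almost parallel to the patient L-R axis), where patient
    # anterior is used instead.
    angle_from_lr_deg = np.degrees(np.arccos(min(1.0, abs(np.dot(view, patient_left)))))
    right_ref = anterior if angle_from_lr_deg < lateral_threshold_deg else patient_left

    # Project the chosen reference onto the detector plane and
    # orthogonalize against "up" to get a clean in-plane basis.
    right = right_ref - np.dot(right_ref, view) * view
    right = right - np.dot(right, up) * up
    right = right / np.linalg.norm(right)
    return up, right
```

With the beam direction taken from the projection curve, these two vectors define the detector spin (and whether an extra 90/180-degree software rotation of the image is needed).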
Commercial software is generally a couple of years behind, so this should become available in most commercial systems within a few years.
Apple’s AI support is still limited. PyTorch (the toolkit used by most medical image computing AI) is gradually gaining hardware acceleration features on Apple platforms, but it is still quite slow. On the CPU or on Apple graphics hardware, segmentation may take several minutes, so what you experienced is the expected behavior.
If you need speed, then you can use a desktop computer with a strong NVIDIA GPU. Currently, the Cardiac TS2 model runs in 20-25 seconds on an NVIDIA GPU.
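
For reference, here is a small sketch of how a PyTorch-based tool typically picks the compute device (assuming PyTorch 1.12+ for the MPS backend); the toy Conv3d network is just a placeholder for illustration, not how Cardiac TS2 is actually implemented:

```python
import torch

def pick_device() -> torch.device:
    """Prefer CUDA (NVIDIA), then MPS (Apple), then fall back to the CPU.
    PyTorch's MPS backend is still incomplete: unsupported operators fall
    back to the CPU, which is part of why Apple hardware is still slow."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

# Example: run a toy network on the selected device. A real segmentation
# model would be moved to the device the same way.
device = pick_device()
net = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1).to(device)
volume = torch.rand(1, 1, 32, 32, 32, device=device)  # stand-in for a CT volume
with torch.no_grad():
    out = net(volume)
print(device, out.shape)
```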
Even if processing takes a few minutes, it should generally be acceptable, because fully automatic processing does not take any of the clinician's time. You could even configure a workflow in your hospital to process the CT image automatically right after it is acquired.
Sounds great! By then, hopefully the SlicerHeart cathlab simulator will be released, too, so your module could use/extend features provided by the simulator, or features from your module could be integrated into the simulator.