Optimal C-arm angulation along a plane from CT

Hi there. I’m an Interventional Cardiologist looking to use 3D Slicer for planning interventions based on CT. Current state-of-the-art software (3mensio) enables computing C-arm rotation angles along a plane, displaying a sinusoidal curve of LAO/RAO CAU/CRA angles. Is there any way of doing this in 3D Slicer? I read this post from a couple of years ago (Get C-arm angles from 3D view orientation) where lassoan (Andras Lasso) said they were working on this. I’m reasonably comfortable with Python, in case this is reasonably easy to implement.

Yes, we have a really nice cathlab simulator for Slicer that includes configurable C-arm models, realistic 3D rendering (for patient collision and field-of-view evaluation), DRR generation, segmentation-based opacification simulation, etc.

We’ll release this publicly as soon as the associated paper gets accepted for publication, which will probably take a few months.

It should not be hard to add computation of optimal fluoro angles from markup planes or lines, but we have not worked on that. It would be great if you could work it out in the next few months, and then when the cathlab simulator gets released we can help with integrating it into the module. You can use this code snippet to compute how the C-arm can be rotated around a line, or you can use any of the formulations described in the papers discussing optimal C-arm angulations.
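As a rough illustration, rotating the 3D view camera (the virtual C-arm) around a markups line could be done along these lines; this is a sketch rather than the exact snippet, and the line node name "L", the view index, and the 10-degree step are placeholder assumptions:

import slicer, vtk

# Get the rotation axis from a markups line node ("L" is a placeholder name)
lineNode = slicer.util.getNode("L")
points = slicer.util.arrayFromMarkupsControlPoints(lineNode)
axisPoint = points[0]
axisDirection = points[1] - points[0]

# Build a transform that rotates by angleDeg around that line
angleDeg = 10.0
transform = vtk.vtkTransform()
transform.Translate(axisPoint)
transform.RotateWXYZ(angleDeg, axisDirection)
transform.Translate(-axisPoint)

# Apply the rotation to the first 3D view's camera
viewNode = slicer.app.layoutManager().threeDWidget(0).threeDView().mrmlViewNode()
camera = slicer.modules.cameras.logic().GetViewActiveCameraNode(viewNode).GetCamera()
camera.SetPosition(transform.TransformPoint(camera.GetPosition()))
camera.SetFocalPoint(transform.TransformPoint(camera.GetFocalPoint()))
camera.SetViewUp(transform.TransformVector(camera.GetViewUp()))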

Thanks for the swift reply. I really do hope to see that in action soon. I’d argue that for interventions, we absolutely need to understand the planes of the desired target structures and display them accordingly. I’d love to create a simple module geared for TAVI and left atrial appendage occlusion in particular, because I believe existing commercial options are extremely expensive considering what they do.


In the meantime, I tried your code snippet and it works - thanks! But the problem is, it “collides” with another snippet I was using to get the angles of a fluoro, which I found in another post of yours:

threeDViewIndex = 0  # change this to show angles in a different 3D view

def positionerAngleFromViewNormal(viewNormal):
    # Convert a view normal (direction of projection, in RAS) into C-arm positioner angles.
    # According to https://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2014/Koch14-OVA.pdf
    import math
    nx = -viewNormal[0]  # L
    ny = -viewNormal[1]  # P
    nz =  viewNormal[2]  # S
    if abs(ny) > 1e-6:
        primaryAngleDeg = math.atan(-nx/ny) * 180.0 / math.pi
    elif nx >= 0:
        primaryAngleDeg = 90.0
    else:
        primaryAngleDeg = -90.0
    secondaryAngleDeg = math.asin(nz) * 180.0 / math.pi
    return [primaryAngleDeg, secondaryAngleDeg]

def formatPositionerAngle(positionerAngles):
    # Negative primary angle = RAO, negative secondary angle = CRA
    primaryAngleDeg, secondaryAngleDeg = positionerAngles
    text =  f'{"RAO" if primaryAngleDeg < 0 else "LAO"} {abs(primaryAngleDeg):.1f}\n'
    text += f'{"CRA" if secondaryAngleDeg < 0 else "CAU"} {abs(secondaryAngleDeg):.1f}'
    return text

def cameraUpdated(cameraNode, view):
    # Refresh the positioner angle annotation whenever the camera moves
    viewNormal = cameraNode.GetCamera().GetDirectionOfProjection()
    positionerAngleText = formatPositionerAngle(positionerAngleFromViewNormal(viewNormal))
    view.cornerAnnotation().SetText(vtk.vtkCornerAnnotation.UpperRight, positionerAngleText)
    view.cornerAnnotation().GetTextProperty().SetColor(1,1,0)  # yellow
    view.scheduleRender()

layoutManager = slicer.app.layoutManager()
view = layoutManager.threeDWidget(threeDViewIndex).threeDView()
threeDViewNode = view.mrmlViewNode()
cameraNode = slicer.modules.cameras.logic().GetViewActiveCameraNode(threeDViewNode)
cameraObservation = cameraNode.AddObserver(vtk.vtkCommand.ModifiedEvent, lambda caller, event, view=view: cameraUpdated(caller, view))

cameraUpdated(cameraNode, view)

# Execute the next line to stop updating the positioner angles in the view corner
# cameraNode.RemoveObserver(cameraObservation)

I understand one method actually rotates the volume while the other uses the camera view. My question now is: which would be the better option to build on - rotating the volume or the camera?

I would recommend rotating the C-arm (= camera) and keeping the image stationary.

Thanks for the tip and the great work you guys do with 3D Slicer. So I did end up building a module that I’m quite happy with.

It automatically calculates the ideal projection of a given plane. I geared it to the aortic valve, but it can be applied to any structure. It displays the projection curve, which can be clicked as well. I also added custom measurements.

This short clip shows it in action, with almost every feature.

I still need to review the code because it’s still quite messy, especially as I’m just an enthusiast Interventional Cardiologist, not a developer.

What’s now missing is a module for studying vascular access. I tried several modules but none seemed quite right. I’ll think about working on that as well.

Link: https://drive.google.com/file/d/1WlN7XExhRCvTYhapD8tRF8O7qRqjTpBK/view?usp=sharing


Amazing progress! The clickable plane projection curve is an especially nice touch (and could be useful when you work with multiple lines you want to rotate around).

I would recommend tuning the projection computation to auto-rotate the view (spin the image around the camera normal) so that the patient superior direction is approximately the up direction in the view (see implementation here), as is done on fluoro systems.
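A minimal sketch of that auto-rotation, assuming RAS patient coordinates and the cameraNode from the earlier snippet: project the patient superior axis into the view plane and use the result as the camera view-up.

import numpy as np

def viewUpForPatientSuperior(viewNormal):
    # Remove the component of patient superior (S axis in RAS) that is
    # parallel to the view normal, leaving its projection in the view plane
    n = np.asarray(viewNormal, dtype=float)
    n = n / np.linalg.norm(n)
    superior = np.array([0.0, 0.0, 1.0])
    up = superior - np.dot(superior, n) * n
    return up / np.linalg.norm(up)  # degenerate only for pure axial views

camera = cameraNode.GetCamera()
camera.SetViewUp(*viewUpForPatientSuperior(camera.GetDirectionOfProjection()))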

Also note that the annotations don’t need to be created manually anymore. For example, the “Cardiac TS2” model of the MONAIAuto3DSeg extension can segment all the relevant cardiac structures, and from that you can get your landmark points fully automatically. Maybe computation of the commissure points would not be completely trivial, but it should be doable, too.
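For example, once such a segmentation exists, segment centroids can serve as first landmark candidates. A sketch using Slicer’s Segment Statistics module, where segmentationNode is assumed to hold the model output:

import SegmentStatistics

# Compute per-segment statistics, including the centroid in RAS coordinates
segStatLogic = SegmentStatistics.SegmentStatisticsLogic()
segStatLogic.getParameterNode().SetParameter("Segmentation", segmentationNode.GetID())
segStatLogic.getParameterNode().SetParameter("LabelmapSegmentStatisticsPlugin.centroid_ras.enabled", str(True))
segStatLogic.computeStatistics()
stats = segStatLogic.getStatistics()
for segmentId in stats["SegmentIDs"]:
    # Each centroid is a candidate automatic landmark
    print(segmentId, stats[segmentId, "LabelmapSegmentStatisticsPlugin.centroid_ras"])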

If you think this module could be useful for others, too, then it could make sense to add it to an extension so that it is easily accessible.

Thanks! With regards to your observations:

  • I did the current angulation projection on purpose - I did have one at first like you suggested, but the way the module works precisely replicates what we use in clinical practice for TAVI (see https://www.sciencedirect.com/science/article/pii/S1936879814009236). That’s why the logic behind my implementation is getting all the angulations corresponding to all viewing directions perpendicular to the aortic plane’s normal (i.e., lying in the plane); see the sketch after this list.
  • Automatically getting the hinge points would be a “Holy Grail”, as it would make the process fully automatic. I’m not aware of any software that does this, but I suppose all it would take is a large set of cases where such measurements have already been done. I have already thought of finding a way to export our current measurements from the leading commercial software into 3D Slicer to form a training base. If I find a way, a collaborative approach with other institutions could follow for validating it.
  • I have experimented with MONAIAuto3DSeg and TotalSegmentator. They’re very impressive, but even on a MacBook M3 Max with 64 GB of RAM (what I’m running) they seem pretty slow, as inference takes a couple of minutes. I don’t know if that’s to be expected or not.
  • I’ll be testing my module locally with fellow Structural Heart Operators to see how they feel about it and to correct/find possible bugs. If they’re happy with it, I think it may make sense to either integrate it into an existing extension (like SlicerHeart or others) or provide an entirely new one.
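Roughly, the core of that projection logic can be sketched as below (simplified; the plane node name “AorticAnnulus” is a placeholder, and positionerAngleFromViewNormal is the function from the snippet earlier in this thread):

import numpy as np
import slicer

planeNode = slicer.util.getNode("AorticAnnulus")  # placeholder node name
planeNormal = [0.0, 0.0, 0.0]
planeNode.GetNormalWorld(planeNormal)
planeNormal = np.array(planeNormal)

# Build an orthonormal basis (u, v) of the aortic plane
helper = np.array([0.0, 0.0, 1.0])
if abs(np.dot(planeNormal, helper)) > 0.9:  # normal nearly along S: use another helper axis
    helper = np.array([1.0, 0.0, 0.0])
u = np.cross(planeNormal, helper)
u = u / np.linalg.norm(u)
v = np.cross(planeNormal, u)

# Sweep all viewing directions lying in the plane and convert each one to
# positioner angles; plotting these pairs gives the sinusoidal projection curve
curve = []
for thetaDeg in range(0, 360, 5):
    theta = np.radians(thetaDeg)
    viewNormal = np.cos(theta) * u + np.sin(theta) * v
    curve.append(positionerAngleFromViewNormal(viewNormal))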

I’ll keep in touch as I further develop this.

I suggest implementing exactly what is shown in the paper you linked to. What is missing from the current implementation is the automatic physical spin of the detector (and 90/180-degree image rotation in software) as you rotate the C-arm. The projection curve does not specify the detector spin. The spin is computed from simple rules that help the clinician stay oriented, by aligning on-screen directions with anatomical directions. For head-first supine position the commonly used rules are (sketched below):

  • align screen up with patient superior direction
  • align screen right with patient left direction (except near lateral images, in that case with patient anterior direction)
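One rough way the second rule could be encoded (the first rule is the superior-up projection sketched earlier); the 0.3 near-lateral threshold is an arbitrary assumption:

import numpy as np

def viewUpFromSpinRules(viewNormal):
    # viewNormal is the direction of projection, in RAS
    n = np.asarray(viewNormal, dtype=float)
    n = n / np.linalg.norm(n)
    left = np.array([-1.0, 0.0, 0.0])      # patient left (RAS)
    anterior = np.array([0.0, 1.0, 0.0])   # patient anterior (RAS)
    # Screen-right candidate: patient left projected into the view plane
    right = left - np.dot(left, n) * n
    if np.linalg.norm(right) < 0.3:  # near-lateral view: use anterior instead
        right = anterior - np.dot(anterior, n) * n
    right = right / np.linalg.norm(right)
    # View-up completes the right-handed screen frame (right x viewNormal = up)
    return np.cross(right, n)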

Commercial software is generally a couple of years behind. This will be available in most commercial software within a few years.

Apple’s AI support is still limited. PyTorch (the toolkit used by most medical image computing AI) is gradually getting hardware acceleration features on Apple, but it is still quite slow. On the CPU or on Apple graphics hardware, segmentation may take several minutes, so what you experienced is the expected behavior.

If you need speed, you can use a desktop computer with a strong NVIDIA GPU. Currently, the Cardiac TS2 model runs in 20-25 seconds on an NVIDIA GPU.

Even if processing takes a few minutes, it should generally be acceptable, because fully automatic processing does not take up any of the clinician’s time. You could even configure a workflow in your hospital to automatically process the CT image right after it is acquired.

Sounds great! By then hopefully the SlicerHeart cathlab simulator will be released, too, so your module could use/extend features provided by the simulator or features from your module could be integrated into the simulator.


Thanks for the feedback. I’m not exactly sure what you meant by the automatic physical spin of the detector, since the module computes the exact angulation of the C-arm as it rotates. Currently, the anatomical plane is viewed exactly as we view it in clinical practice, and in accordance with that paper. For example, when doing TAVI (aortic plane), we view the aorta on screen aligned but oblique, and use that to calculate the angle (how horizontal/vertical it is). I even tried applying it to other structures where I also perform interventions (such as the left atrial appendage) and it is working very nicely.

In the meantime, I’ll also compare what I’m getting to current state-of-the-art software (3mensio). If it looks good, I’ll challenge colleagues to conduct a multicentric validation study - that would make it more appealing.

Thanks for all your support, I’ll keep in touch. I’ve cleaned up the code a bit in the meantime, but I’m sure I’ll still find bugs that may require some help.

I am referring to image rotation:

For a specific anatomical orientation (RAO/LAO, CRA/CAU) the up direction in the image is not constrained by a simple sequential rotation around two orthogonal axes; the image can be rotated for several reasons:

  • Simple software rotation
  • C-arm base rotation: the up direction for a specific anatomical orientation depends on the C-arm base angle (or wig-wag). Floor-mounted systems automatically compute anatomical angles that take the base (L-arm) angle into account.
  • On some systems the detector can be rotated completely independently (e.g., Philips Allura, Siemens Artis Zee).

I see what you mean now. Actually, from a strictly interventional perspective, calculating the spin is not that relevant. As long as we know the RAO/LAO CAU/CRA angles, we already know which way is up. We seldom use it, except for applying filters, like the one in the image you sent. For example, I know just from looking at the image you sent that it is an RAO CAU projection of the left coronary artery. In that image, left is (roughly) posterior (in terms of heart anatomy) and vice versa.

The paper I sent you does not calculate the spin, only those two angles, and neither does the current structural intervention software we’re using in clinical practice (3mensio).

Having said that, the module is not entirely devoid of spin calculations - in fact, when the rotate button is pressed, the sagittal and coronal planes are rotated so that the specified plane faces perfectly upward and horizontal (relative to the plane, not the patient); the sketch below shows one way to do this kind of slice reorientation.
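For reference, reorienting a slice view to an arbitrary plane can be done with vtkMRMLSliceNode.SetSliceToRASByNTP; a sketch, where the normal, transverse direction, and position values are placeholders:

import slicer

# Reorient the red slice so its plane matches a given plane (all values in RAS)
sliceNode = slicer.app.layoutManager().sliceWidget("Red").mrmlSliceNode()
n = [0.0, 0.0, 1.0]  # plane normal (placeholder)
t = [1.0, 0.0, 0.0]  # in-plane direction mapped to the slice horizontal axis (placeholder)
p = [0.0, 0.0, 0.0]  # a point on the plane (placeholder)
sliceNode.SetSliceToRASByNTP(n[0], n[1], n[2], t[0], t[1], t[2], p[0], p[1], p[2], 0)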

Calculating the spin would be interesting, though. If I can find the time, I’ll consider doing it. But first, I’d like to see how colleagues react to the module before adding further features, as I’m sure I’ll run into bugs - like I said, I’m not a developer, just an enthusiast who loves these challenges.