Clipping 2D slices in an arbitrary shape

Hi,

I want to display only a part of the slice data (in any slice view) based on an existing geometric constraint, specifically the 2D projection of a surface model (attaching a sample image). Instead of showing all the voxels in a plane (axial, sagittal, coronal or an oblique plane), I just wish to show the intersection. Is there an existing way to do this through any module or extension? If not, any pointers would really help…
Thanks in advance!
Hitesh

If you have a threshold set on the background volume in a slice view, the foreground and label layers are only shown where the background volume is within the threshold bounds. So, I think the easiest way to achieve the effect you want is to

  1. convert your model to a labelmap, by importing it into a segmentation and exporting that to a labelmap (use the Segmentations module, or see the example code for this here). Then
  2. set this labelmap as the background volume in your slice views, and apply a lower threshold of 0.5 (to exclude background voxels) using the Volumes module. Then,
  3. set whatever volume you want masked as the foreground volume in the slice views and
  4. shift the foreground/background blend slider to show only the foreground (a scripted version of these steps is sketched below).
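
In case a scripted version is useful, here is a minimal sketch of the steps above (untested; node names such as “myModel” and “myMRI” are placeholders):

```python
# Minimal sketch of the steps above (untested; node names are placeholders)
import slicer

modelNode = slicer.util.getNode("myModel")   # surface model to use as the mask
mriNode = slicer.util.getNode("myMRI")       # volume you want masked

# Step 1: model -> segmentation -> binary labelmap (geometry taken from the MRI)
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
slicer.modules.segmentations.logic().ImportModelToSegmentationNode(modelNode, segmentationNode)
labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode", "MaskLabelmap")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapNode, mriNode)

# Steps 2-4: labelmap as background, MRI as foreground, foreground fully opaque
slicer.util.setSliceViewerLayers(background=labelmapNode, foreground=mriNode, foregroundOpacity=1.0)
# Finally, apply the 0.5 lower threshold to the background layer in the Volumes module
# (Display -> Threshold), as described in step 2.
```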

If you have any trouble implementing this, just post what step you are stuck on and I can provide some more details.

Thanks! I understand this and was able to quickly achieve it… However, while trying this I realized it would be better for my current need to achieve this using a markup (e.g. a closed curve) and get the same effect…
Is there a way to generate a similar labelmap for the background layer from a markup? Also, I want to have this markup act more like an extension of a 3D model (e.g. an ultrasound probe) and move with it (mimicking the ultrasound plane).
Is there a better way to do this other than observing transforms on a 3D model and applying them to a “child”?
Lastly, can I have more than one foreground layer over a labelmap (e.g. in this case have an MRI and a registered ultrasound volume over the background labelmap)? As far as I understand there is one background and one foreground layer…

Thanks for your help.

Take a look at the “Mask Scalar Volume” module. This would allow you to create masked versions of the MRI and ultrasound and use one as the background volume and one as the foreground volume. This should work OK to achieve the desired effect. If update time is not a major issue, I’m not sure there is much advantage to converting the slice through your 3D model to a markups curve and then trying to create a labelmap from that. It seems like you might as well just create a 3D labelmap from the model and let Slicer handle the slice display from that.
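
In case it is useful, running that module from Python could look roughly like this (untested sketch; the node names are placeholders and the parameter names are my best recollection of the CLI interface):

```python
# Rough sketch (untested): run Mask Scalar Volume from Python.
import slicer

mriNode = slicer.util.getNode("MRI")            # placeholder node name
maskNode = slicer.util.getNode("BeamLabelmap")  # placeholder node name
maskedNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLScalarVolumeNode", "MaskedMRI")

params = {
    "InputVolume": mriNode.GetID(),
    "MaskVolume": maskNode.GetID(),
    "OutputVolume": maskedNode.GetID(),
    "Label": 1,    # label value in the mask to keep
    "Replace": 0,  # value written outside the mask
}
slicer.cli.runSync(slicer.modules.maskscalarvolume, None, params)
```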

However, I wonder if you are asking about reducing to 2D in hopes of speeding up the display updates. If you are hoping to do this in real time, I expect this approach will be far too slow for you. If that’s the case, then you need a more sophisticated solution. I might suggest taking a look at Slicer IGT, which is used, for example, for tracking surgical instruments relative to registered images in real time.

Thanks for your reply and suggestions!

While filtering modules on scalar volumes like the “Mask Scalar Volume” module are good options to generate the desired effect, I think they will not help me much with creating real-time slice updates, like you said.

I am trying to build a small “torchlight”-like effect, where a 3D model (of a US probe) can be moved in real time and its “projection” or “beam” lights up specific voxels on a slice, and only those should be visible in that slice. So far, I have been able to build real-time tracking of an external sensor, through which I am able to generate specific slice data (of registered MR and US).

So, I just need to add the “torchlight”-like effect so that it mimics an ultrasound probe at that position and only the voxels in the area of the beam are visible. I hope I am able to explain what I want to achieve. I have played a bit with IGT before, but I am not sure if IGT can help here?

Please let me know if you have further ideas/suggestions. Thanks!

Unfortunately, I haven’t even experimented with any real-time updating in Slicer, so I don’t think I can be of much further help. @lassoan or @cpinter, might you be able to give some guidance or point @Hitesh_Ganjoo towards someone else who could?

I think this is possible because Slicer already slices the scalar volume of a CT to show it to the user in the slice view. So you just need to add your torchlight effect (which sounds like a threshold operation to me).
You probably need to create a single-slice scalar volume that is updated whenever a new transform from the tracker arrives. I think that for updating the scalar volume’s image data you would need your original MRI and the incoming transform, in combination with a vtkThreshold filter (for the slicing and the torch effect). Or something like this.

I hope this gives some ideas to test out.

It would be great if you could share a code sample if you get it working!

If you use a convex probe then the ultrasound image is already fan-shaped (0 outside the beam), so you can achieve a “flashlight effect” quite easily by using thresholding and adjusting the blending mode (a short scripted example follows the list):

  • If you want to blank out the MRI completely: you can use the US image as background, MRI as foreground, and adjust the US volume’s display threshold in Volumes module.
  • If you want to add the US as an overlay on the MRI: you can use the MRI as background, US as foreground, and set the slice view’s blending mode to “Add”.
  • If you want to display the fan-shaped US image in 3D views: adjust the display threshold in the Volumes module and choose to show that slice view in 3D.
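
A minimal scripted version of the first two options might look like this (untested sketch; “US” and “MRI” are placeholder node names):

```python
# Untested sketch; "US" and "MRI" are placeholder node names.
import slicer

usNode = slicer.util.getNode("US")
mriNode = slicer.util.getNode("MRI")

# Option 1: US as background, MRI as foreground; threshold away the 0 voxels outside the fan
slicer.util.setSliceViewerLayers(background=usNode, foreground=mriNode, foregroundOpacity=0.5)
usDisplay = usNode.GetDisplayNode()
usDisplay.SetApplyThreshold(1)
usDisplay.SetLowerThreshold(1)  # hide voxels with value 0 (outside the beam)

# Option 2: MRI as background, US as foreground, additive blending in all slice views
slicer.util.setSliceViewerLayers(background=mriNode, foreground=usNode)
for compositeNode in slicer.util.getNodesByClass("vtkMRMLSliceCompositeNode"):
    compositeNode.SetCompositing(slicer.vtkMRMLSliceCompositeNode.Add)
```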

If thresholding does not work for you (e.g., because you have lots of 0 voxels in the image and you want to show a solid beam) then you can apply a mask very quickly using a VTK filter (vtkImageMask or stencil filters) or numpy (using numpy array indexing and lookup). It should not add more than a few milliseconds of delay to the image display. This requires some minimal Python coding (setting up the mask filter and the input image observer is probably 5-10 lines of code; the callback function that updates the mask is probably 3-4 lines of code).
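
As a starting point, a numpy-based version could look roughly like this (untested sketch; the node names are placeholders and it assumes the fan mask has the same geometry as the live image):

```python
# Untested sketch: zero out voxels outside a precomputed fan-shaped mask with numpy.
import numpy as np
import slicer

inputVolume = slicer.util.getNode("LiveUS")     # placeholder: streamed/updated image
maskVolume = slicer.util.getNode("BeamMask")    # placeholder: 0/1 fan mask, same geometry
maskedVolume = slicer.util.getNode("MaskedUS")  # placeholder: volume shown in slice views

beamMask = slicer.util.arrayFromVolume(maskVolume) > 0  # boolean mask, computed once

def updateMask(caller=None, event=None):
    frame = slicer.util.arrayFromVolume(inputVolume)
    masked = np.where(beamMask, frame, 0)  # keep voxels inside the beam, zero the rest
    slicer.util.updateVolumeFromArray(maskedVolume, masked)

# Re-apply the mask whenever a new frame arrives in the input volume
inputVolume.AddObserver(slicer.vtkMRMLVolumeNode.ImageDataModifiedEvent, updateMask)
updateMask()
```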

Thank you for the help and ideas. I was able to achieve this by doing the following:

  • Created a dummy 3D model to mimic the beam/fan-like shape. Thereafter, I created the segmentation node and a binary labelmap from the 3D model.

  • Then I set the labelmap as the background layer, while setting the registered (MR and US) volumes in the foreground layer. So it is pretty much like thresholding, but I was concerned about the 2D slice update speed, since I receive hand poses through an external sensor and update the slices (orientation and position) based on the hand position. I used a little bit of reference from how the reformat widget slices using a slice normal and position, and was able to achieve a reasonably fast update of the 2D slices (a rough sketch of that slice-update step is below).
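
For anyone trying the same thing, the slice-update part can be done roughly like this (a simplified, untested sketch; the transform node name and the probe axis conventions are placeholders):

```python
# Untested sketch: reorient the Red slice from a tracked probe pose.
import slicer
import vtk

probeTransformNode = slicer.util.getNode("ProbeToRAS")  # placeholder transform node name
sliceNode = slicer.app.layoutManager().sliceWidget("Red").mrmlSliceNode()

def onProbeMoved(caller=None, event=None):
    m = vtk.vtkMatrix4x4()
    probeTransformNode.GetMatrixTransformToWorld(m)
    # Assumption: the probe's Z axis is the slice normal, its X axis is the in-plane
    # direction, and its translation is the slice position (adapt to your probe).
    normal = [m.GetElement(0, 2), m.GetElement(1, 2), m.GetElement(2, 2)]
    transverse = [m.GetElement(0, 0), m.GetElement(1, 0), m.GetElement(2, 0)]
    position = [m.GetElement(0, 3), m.GetElement(1, 3), m.GetElement(2, 3)]
    sliceNode.SetSliceToRASByNTP(normal[0], normal[1], normal[2],
                                 transverse[0], transverse[1], transverse[2],
                                 position[0], position[1], position[2], 0)

probeTransformNode.AddObserver(slicer.vtkMRMLTransformNode.TransformModifiedEvent, onProbeMoved)
onProbeMoved()
```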

Thanks…
