VTK multivolume/cinematic volume rendering

Looks like that variable didn’t get set. It probably needs to be exposed in the UI. For now, you can just comment out this line (put a # at the front) in /Users/amaga/Desktop/SlicerPreview.app/Contents/Extensions-32228/Sandbox/lib/Slicer-5.5/qt-scripted-modules/ColorizeVolume.py, line 348, or set the variable manually.

OMG! This is a game changer for us.

We just need a way to adjust the TF on a per-segment basis somehow… And regular shadows/lights work great; we just need the ambient shadows that @lassoan mentioned.

Gorgeous image! Keep 'em coming.

I’m making some changes/fixes. You’ll get even better results! Just need an hour or so.

It’s all ready. You can update manually from github or download the Slicer Preview Release tomorrow to get the updated version.

Example renderings:

Default rendering:

Using gradient opacity:

I am now working from home with a remote connection to the lab server, and while it works with CPU rendering, I am getting a bad memory allocation error with GPU rendering.

This is probably an issue with VirtualGL. However, when I invoke Slicer without vgl, I still get the error with GPU rendering (which should then use the software renderer). Tomorrow, when I am back, I will try it on Windows and report back.

```
Slicer has caught an application error, please save your work and restart.

The application has run out of memory. Increasing swap size in system settings or adding more RAM may fix this issue. If you have a repeatable sequence of steps that causes this message, please report the issue following instructions available at https://slicer.org

The message detail is:

Exception thrown in event: std::bad_alloc
```

You can crop&resample the volume if you run out of memory. Or, as suggested, increase the swap size.
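
For reference, here is a minimal sketch of doing the crop & resample from the Python console with the Crop Volume module. `volumeNode` and `roiNode` are placeholders for your input volume and a Markups ROI placed around the region of interest, and the spacing scaling factor is only an example.

```python
# Sketch: crop + downsample with the Crop Volume module.
# Placeholders: volumeNode = input volume, roiNode = Markups ROI around the region of interest.
cropParams = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLCropVolumeParametersNode")
cropParams.SetInputVolumeNodeID(volumeNode.GetID())
cropParams.SetROINodeID(roiNode.GetID())
cropParams.SetVoxelBased(False)           # interpolated cropping, allows resampling
cropParams.SetSpacingScalingConst(2.0)    # double the spacing -> roughly 8x fewer voxels
slicer.modules.cropvolume.logic().Apply(cropParams)
croppedVolume = slicer.mrmlScene.GetNodeByID(cropParams.GetOutputVolumeNodeID())
```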

We use a couple of temporary buffers during computation that we release when the processing is completed. We can tune the code a bit to delete the buffers immediately when they are no longer needed. Let me know what line fails (you can run the module in a debugger or add logs to get the location).
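
If it helps, a simple way to narrow down the failing allocation is to bracket the processing steps in ColorizeVolume.py with log messages. This is only a sketch; the message text and placement are illustrative.

```python
import logging

# Add lines like these around the suspected allocations in ColorizeVolume.py;
# the messages appear in the application log (e.g. View -> Error Log).
logging.info("ColorizeVolume: allocating RGBA output volume")
# ... existing allocation code ...
logging.info("ColorizeVolume: RGBA output volume allocated")
```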

I don’t think the out-of-memory error is real (it is working off a GPU with 48 GB, all of which is available), and the system has hundreds of gigs available.

GPU RAM should be fine. The "The application has run out of memory…" popup means that a memory allocation failed because there is not enough CPU RAM / swap size configured in the OS. To confirm that you are running out of memory, please crop&resample the input image. You can also send me the image and I can test it for you.

It doesn’t run out of memory with other datasets (that are much bigger). But anyway, if you would like to try it on your own, I am using the files starting with 35_mic

I don’t see any problems with this mouse template on my computer. It is quite small, loads and renders quickly:

Yes, it works fine on my Mac too (that’s why I think it has something to do with the remote connection or VirtualGL). Curious that your exported color volume doesn’t also show the brain endocast. When I redid it a second time, that’s when it became visible…

The endocast is air, and even if you color the region by the segmentation, it will still remain empty and will not show up (the boundary may be somewhat visible due to the edge smoothing). To display the endocast you can use the “Mask volume” effect to virtually fill it with water or some other material.
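
If you prefer to script it instead of using the Segment Editor GUI, here is a numpy-based equivalent of that fill, as a sketch; `volumeNode`, `segmentationNode`, `endocastSegmentId`, and the fill value are placeholders to adapt to your data.

```python
# Clone the input volume and fill the voxels inside the endocast segment with a
# water-like intensity so the colorized rendering is not empty there.
filledVolume = slicer.modules.volumes.logic().CloneVolume(
    slicer.mrmlScene, volumeNode, "Volume with filled endocast")
volumeArray = slicer.util.arrayFromVolume(filledVolume)
segmentArray = slicer.util.arrayFromSegmentBinaryLabelmap(
    segmentationNode, endocastSegmentId, filledVolume)
volumeArray[segmentArray > 0] = 0  # placeholder "water" value; adjust for your data
slicer.util.arrayFromVolumeModified(filledVolume)  # push the modified array back to the node
```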

Unreal… I am having a better/awesome experience on Windows. On Ubuntu, I am getting the alloc error.

Yes, I think the out-of-memory error with VTK GPU raycasting is real. I am getting it even with MRHead, simply when I am using GPU raycasting on vanilla data (no colorization or anything), with or without vgl.

Probably better to post this on the GH issues though.

I opened a bug report on the Slicer repo, because I can reproduce the behavior without installing any extension. See

Any tips on how to iterate over volume and segmentation sequences to create a true colorized volume sequence?

I’m not easily impressed, but being able to do this myself with 100% open source software…

This looks really, really nice!

To make it a sequence, add the colorized volume to the sequence:

  • in Sequences module, click the green + icon to add a new sequence
  • in the table below set the colorized volume as Proxy node
  • enable Save changes

After this, you can generate the colorized volume sequence by iterating through the time points of the sequence (manually, using the sequence toolbar) and clicking Apply for each. Since the colorized volume is assigned to a sequence and you enabled saving, a new colorized volume is stored for each time point.
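
The same setup can also be done from the Python console if you prefer, as a sketch; `browserNode` is your existing sequence browser node and `colorizedVolumeNode` is the output volume created by Colorize Volume.

```python
# Create a sequence for the colorized volumes and record changes of the proxy node.
colorizedSequence = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLSequenceNode", "Colorized volume sequence")
browserNode.AddSynchronizedSequenceNode(colorizedSequence)
browserNode.AddProxyNode(colorizedVolumeNode, colorizedSequence, False)  # reuse the existing node as proxy
browserNode.SetSaveChanges(colorizedSequence, True)  # store the proxy node at each time point
```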

You can create an animation video by using Screen Capture module with “sequence” animation mode.

It would be great if you could post the resulting video here.

I’m sorry, I was not specific enough.

I start with a segmentation sequence and a volume sequence. 20 segmentations and 20 volumes.
Saved as an .mrb. It’s awesome… drag and drop.

When using Colorize Volume, I have Input volume = Volume Sequence and Input segmentation = Segmentation sequence. The result is a single colorized volume selected from the sequence browser. I’m not sure exactly how that happens.

I see two paths:

  1. A separate Python script to iterate over the n=20 segmentations and volumes. But my guess is that this would necessitate creating the Colorize Volume widget and setting all the options… 20 times…
  2. Hacking the "Colorize Volume" extension into a "Colorize Volume Sequence" extension. The options in the UI would be set once and reused. Create a new sequence and fill it.

I think that option #2 is the best… just wanted to confirm… it’s a lot of trial and error. I am now familiar with scripting a shape-rendered sequence, but this is my first volume render…

Really nice extension!!! Thank you!!!

After this, use Colorize volume on one time point and then add the created colorized volume to the sequence as I described in my previous post.

If it is acceptable to do two keypresses per time point (Ctrl + Shift + Right-arrow to go to the next frame and Space key to Apply colorization) then there is no need for any programming. If you do this regularly then you can write a tiny Python script that automates these two actions (3-4 lines of code).
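
For example, something along these lines in the Python console. This is a sketch: the way the Apply button is exposed on the Colorize Volume widget (assumed here to be `ui.applyButton`) should be checked against the module’s source.

```python
colorizeWidget = slicer.modules.colorizevolume.widgetRepresentation().self()
for itemIndex in range(browserNode.GetNumberOfItems()):
    browserNode.SetSelectedItemNumber(itemIndex)  # same as stepping with Ctrl+Shift+Right-arrow
    colorizeWidget.ui.applyButton.click()         # assumed button name; same as pressing Space/Apply
```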

If it is commonly needed by other users, too, then we can add an "Apply to all time points" option to the Colorize Volume module that automates these few steps (adding the colorized volume to a sequence and applying colorization for all time points, not just the current one).