Thanks for the clarification; I now understand that you need SSAO for semi-transparent surface actors, not volumes. This is yet another scenario that can be partially addressed with the information in this topic.
Similar to what is required for volumes, you will indeed need to change the SSAO pass in the Lights.py script (or later in VTK) so that it uses vtkRenderStepsPass instead of the opaque pass. As long as you don’t render volumes, you don’t need the VTK hack and are not impacted by it.
However, a similar hack/commit might be required to fully support SSAO for transparent surface actors: semi-transparent actors do not write to the depth buffer, so this must be changed in a similar way as was done for volumes here.
For now I will focus on implementing the proper fix for volumes in VTK. I’ll try to give transparent actors a go in the meantime.
Thank you so much @LucasGandel for bearing with me and answering corner-case questions. I understand now that it is about the depth buffer and that profound changes would be needed for this fix. OK, let’s put this aside for now. I appreciate you helping me understand the root cause!
I’ve submitted a pull request for the Lights module that takes care of setting up everything (enabling the SSAO rendering pass for transparent rendering, applying shader replacements, and allowing customization of the opacity threshold):
I’ll wait for a proper VTK fix from @LucasGandel that we can merge into Slicer’s VTK, and then I’ll merge the pull request and the feature will be available for everyone.
Until then the feature is only available for developers who apply the VTK fix locally and use this branch of the Sandbox.
This is fantastic @lassoan, thank you for trying and for creating the branch.
So I understand that there is interest in it and that it should be finalized at some point. If so, I will now focus on the fix for volume occlusion; the rest of the change can be integrated via mapper options or volume properties.
I completely agree with your points:
A configurable threshold makes sense, as it can highlight different structures (in addition to preventing weird effects with almost fully transparent pixels when 0.0 is used, as you said). I started working in this direction, and I think g_fragColor.a must be used to take the opacity transfer function into account instead of g_srcColor.a (at least it makes a difference for RGBA volumes, I think).
Right, this line is nonsense. I just added it because the RenderDoc frame debugger needs something to be written to the texture, but those fragments are not used in the final blending, as no depth is associated with them. This needs to be fixed when mixing volumes and surfaces to ensure existing fragments are not overridden.
@LucasGandel needs to do some cleanup of a VTK fix, which may take some time (hopefully not more than a few days?) and then we can make the ambient shading available for everyone. We’ll post updates on this thread.
If you want to access this feature earlier then you need to build Slicer from source code and apply the VTK fix locally.
Yesterday I managed to find the fix to bring back volume depth occlusion. The required change will be done in the coming days. I have a project to handle the integration of the depth mask change in VTK, so I will start with that. For this we will use the vtkOpenGLActor::DepthMaskOverride information key. @lassoan Is it possible to set the information key from Python, like this?
You can set those information keys in Slicer in Python like this:
threeDViewWidget = slicer.app.layoutManager().threeDWidget(0)
vrDisplayableManager = threeDViewWidget.threeDView().displayableManagerByClassName("vtkMRMLVolumeRenderingDisplayableManager")
volumeNode = slicer.mrmlScene.GetFirstNodeByClass("vtkMRMLVolumeNode")
volumeActor = vrDisplayableManager.GetVolumeActor(volumeNode)
info = volumeActor.GetPropertyKeys()
if not info:
    info = vtk.vtkInformation()
    volumeActor.SetPropertyKeys(info)
# Set the desired information key (e.g. the depth mask override key,
# once it is available in VTK) on "info" here.
Would it be possible to enable depth mask override by default to avoid the need for such low-level manipulations? None of the other volume rendering techniques require such calls.
If there are some negative side effects of enabling depth mask override in some cases then making it optional could make sense, but then there should be a more convenient API, for example a method in vtkVolume.
Perfect, thanks. Eventually it will be enabled by default by the SSAO pass, so it won’t be required anymore. The same goes for the shader replacements and the modification of the SSAO pass depth texture format. But for now, all the VTK hacks from my volume-ssao branch can be replaced with:
Thank you for working on this and for clarifying the roadmap. When I reviewed the VTK merge requests I was not sure whether we would need all the low-level API calls to activate this feature, but it’s great to hear that it’ll all be seamlessly integrated into the existing SSAO feature! I’ll test this and let you know if it all works the same way as before.
Thanks a lot for following up on this. To answer your question, this is defined in the shader replacement that you must add on the application side for now. See the following code from previous comments:
However, this won’t be required anymore when everything gets moved to the SSAO pass. I am almost done with that, still need to write a test and open an MR on VTK, but you can access the code on my updated volume-ssao branch.
The branch has been rebased on master, and only VTK!10644 has been cherry-picked because it has not yet made it into VTK (VTK!10642 and VTK!10645 have been merged).
The only thing that is still required on the application side when using my volume-ssao branch is to set the depth format on the SSAO pass: ssao->SetDepthFormat(vtkTextureObject::Fixed32);.
Let me know if something is still missing after that. Setting the vtkRenderStepsPass as a delegate of the SSAO pass is still required on the application side too, as I did not change the vtkRenderer SSAO pass.
All the low-level pending MRs have been merged into VTK, and VTK!10725, which handles shader replacements in the SSAO pass, is ready for review.
I plan to write a post on the Kitware blog to present the approach. If anyone wants to provide data or screenshots, that would be much appreciated. A second blog post presenting the same at a higher level, with the colorized volume approach, may also make sense, if anyone wants to co-author that. I really think we should communicate more on this now that it is more or less ready. @lassoan WDYT?
Great, thank you very much for all your work on this. I’ll test this and provide data and screenshots. I would also be happy to help with a blog post and probably write a short paper that can be cited, because I think this (along with colorizing volumes using segmentations) may be a breakthrough in utilizing the data that all the new “AI” segmentation tools provide.