I want to create my first Slicer extension. At the moment I am doing research on the design of the extension (mostly reading the Slicer API documentation); once that is done, I'll share my plan.
One loadable module that I want to include requires me to perform Projective Texture Mapping. In short, this is the projection of a 2D image onto a 3D surface. Because the texture and its position in the scene change in real time, I decided to implement this algorithm with a GPU-based approach similar to Shadow Mapping. Therefore I need to perform two render passes:
- render the 3D surface with a viewport that is aligned with the 2D image in world space to obtain a depth buffer
- render the 3D surface with the desired viewport, project its fragments/vertices onto the 2D image to look up their color, and draw them only if their distance to the image plane matches the value in the depth buffer
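To make the second pass concrete, the per-point math can be sketched in plain NumPy (a sketch only; `project_to_image_plane` and `passes_depth_test` are my own placeholder names, and I use a plane defined by an origin, two in-plane axes, and a normal):

```python
import numpy as np

def project_to_image_plane(points, origin, u, v, n):
    """Project world-space points onto the 2D image plane.

    origin: a point on the plane; u, v: orthonormal in-plane axes;
    n: unit plane normal. Returns (uv, depth), i.e. in-plane
    coordinates and the signed distance to the plane.
    """
    rel = points - origin
    uv = np.stack([rel @ u, rel @ v], axis=-1)
    depth = rel @ n
    return uv, depth

def passes_depth_test(depth, depth_buffer_value, eps=1e-3):
    """Pass 2: draw a fragment only if its plane distance
    matches the value recorded in the pass-1 depth buffer."""
    return abs(depth - depth_buffer_value) <= eps
```

In the real implementation this comparison would of course happen per fragment in GLSL, but the logic is the same: `uv` indexes into the 2D image, and the depth test rejects occluded fragments.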
Unfortunately, implementing such a custom rendering process isn't as easy as I thought it would be. If I were to implement this in plain VTK, I would try to replace the shaders of
vtkOpenGLPolyDataMapper as described here.
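For reference, the shader-replacement approach in plain VTK would look roughly like this (untested sketch; the uniform/varying names `ProjTexMat`, `projTexture`, and `projDepthMap` are my own placeholders, not VTK names):

```python
# Sketch: customize vtkOpenGLPolyDataMapper via AddShaderReplacement.
# The //VTK::...::Dec and ::Impl markers are VTK's shader template tags.

VERT_DEC = """//VTK::PositionVC::Dec
out vec4 projTexCoord;
uniform mat4 ProjTexMat;  // image-plane projection matrix (placeholder)
"""

VERT_IMPL = """//VTK::PositionVC::Impl
projTexCoord = ProjTexMat * vertexMC;  // vertexMC is VTK's model coordinate attribute
"""

FRAG_DEC = """//VTK::Color::Dec
in vec4 projTexCoord;
uniform sampler2D projTexture;   // the 2D image (placeholder)
uniform sampler2D projDepthMap;  // depth buffer from pass 1 (placeholder)
"""

def setup_projective_mapper(mapper):
    """Apply shader replacements to a vtkOpenGLPolyDataMapper instance."""
    import vtk  # available inside Slicer's embedded Python
    mapper.AddShaderReplacement(
        vtk.vtkShader.Vertex, "//VTK::PositionVC::Dec", True, VERT_DEC, False)
    mapper.AddShaderReplacement(
        vtk.vtkShader.Vertex, "//VTK::PositionVC::Impl", True, VERT_IMPL, False)
    mapper.AddShaderReplacement(
        vtk.vtkShader.Fragment, "//VTK::Color::Dec", True, FRAG_DEC, False)
    return mapper
```

The fragment shader would then sample `projTexture` at the projected coordinate and discard fragments that fail the depth comparison against `projDepthMap`.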
As far as I know, Slicer encapsulates VTK, so there is no direct access to the underlying rendering pipeline.
Does anyone have an idea on how to approach this problem?
What I’ve got so far
- a 2D RGB image represented as a volume node
- a 3D surface represented as a model node
- the orientation and position of the image and the surface, represented as transform nodes
My current plan:
- obtain a renderer from Slicer's 3D view
- add a new actor and mapper for the surface
- use custom shaders for the vtkOpenGLPolyDataMapper
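For obtaining the renderer, I assume it can be reached through the layout manager; a minimal sketch (only meaningful inside Slicer's embedded Python, so I haven't run it standalone):

```python
def get_first_3d_renderer():
    """Return the vtkRenderer of Slicer's first 3D view (sketch)."""
    import slicer  # only available inside Slicer's embedded Python
    layout_manager = slicer.app.layoutManager()
    three_d_view = layout_manager.threeDWidget(0).threeDView()
    # The render window holds a collection of renderers; take the first one.
    return three_d_view.renderWindow().GetRenderers().GetFirstRenderer()
```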
My questions:
- Are there any easier alternatives?
- How do I get the Renderer of the 3D view?
- Do I need to add new Actors and Mappers to the obtained Renderer or can I use existing ones?
- What is the best way to perform the two render passes?
Thanks in advance,