Compute normal vectors from planar contours

Hi,

Is it possible to compute normal vectors for each 3D point of a planar contour in SlicerRT? If yes, how can we do it?

Thanks in advance

Computing surface normals is not a SlicerRT function. I am not an expert, but I think converting to model representation would be the first step. In what format would you want the normal vectors and why do you think you need them?

I want to implement an implicit method to construct a 3D surface from a set of planar contours, so I need to compute a normal vector for each 3D point of the contour to be able to evaluate the implicit function. More precisely, I want to convert planar contours to a 3D surface using the planar-contours-to-closed-surface converter rule. There is an interpolation method in SlicerRT, but computing the 3D surface via that method does not require knowing the normal of each 3D point.

Computing the normal vector correctly is hard (comparable to the difficulty of reconstructing a surface). SlicerRT can compute it after it has already reconstructed the closed surface.

Hi Professor,

Thanks for the response. I just want to know if I can access the points of each contour separately. I want to implement a flood fill algorithm in order to compute an approximate normal vector.

Thanks in advance

You can find the closest point on the reconstructed surface using a VTK locator and get the surface normal from there.
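For example, here is a minimal Python sketch of that approach; it assumes `surface` is the reconstructed closed-surface vtkPolyData (e.g. obtained from the segmentation's closed surface representation):

```python
import vtk

# Assumption: 'surface' is the reconstructed closed-surface vtkPolyData.
# Compute one normal per point on the reconstructed surface.
normals_filter = vtk.vtkPolyDataNormals()
normals_filter.SetInputData(surface)
normals_filter.ComputePointNormalsOn()
normals_filter.SplittingOff()  # avoid duplicating points at sharp edges
normals_filter.Update()
surface_with_normals = normals_filter.GetOutput()

# Build a locator for fast closest-point queries.
locator = vtk.vtkStaticPointLocator()
locator.SetDataSet(surface_with_normals)
locator.BuildLocator()

normals = surface_with_normals.GetPointData().GetNormals()

def closest_surface_normal(point_xyz):
    """Return the surface normal at the surface point closest to point_xyz."""
    point_id = locator.FindClosestPoint(point_xyz)
    return normals.GetTuple3(point_id)
```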

But I need to know the normal vector before reconstructing the 3D surface. I have implemented the implicit method in 3D Slicer, but the results were not very satisfactory, so I want to reimplement it in SlicerRT because the planar contours are the master representation.

I thought about modifying the existing method https://github.com/SlicerRt/SlicerRT/blob/master/DicomRtImportExport/ConversionRules/vtkPlanarContourToClosedSurfaceConversionRule.h

but there is only a vtkPolyData* inputROIPoints, which contains all points of all contours. Is there any way to extract the points of each contour from inputROIPoints?
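For context, here is a minimal sketch of the kind of per-contour extraction I have in mind (in Python for brevity; it assumes each planar contour is stored as its own line/polyline cell in inputROIPoints):

```python
def contour_point_lists(input_roi_points):
    """Split a vtkPolyData containing many planar contours into one point
    list per contour, assuming each contour is a separate (poly)line cell."""
    contours = []
    for cell_id in range(input_roi_points.GetNumberOfCells()):
        point_ids = input_roi_points.GetCell(cell_id).GetPointIds()
        contours.append([input_roi_points.GetPoint(point_ids.GetId(i))
                         for i in range(point_ids.GetNumberOfIds())])
    return contours
```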

You can let SlicerRT reconstruct the surface and provide you the normals. If your reconstructed surface is significantly better than what SlicerRT generates then you can start thinking about how to get the normals (if it’s OK to use SlicerRT as a preprocessor or you want to use an alternative normal estimation method).

If you want to compute the surface normal for a triangulated mesh, you simply compute the cross-product for two sides. The winding-order of the triangles is crucial to disambiguate the front and back face of the triangle.
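For instance, a minimal NumPy sketch; note how swapping two vertices reverses the winding and flips the normal:

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of a triangle, oriented by the winding order of its vertices."""
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
    return n / np.linalg.norm(n)

print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0. 0. 1.]
print(triangle_normal((0, 0, 0), (0, 1, 0), (1, 0, 0)))  # [0. 0. -1.] (reversed winding)
```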

If you want to compute the surface normal for a voxelwise 3D image (using the intensity of each voxel), you can compute the gradient using a Sobel operator. You can do this on the CPU, but for 3D textures a GPU using GLSL is tremendously faster. CPU and GLSL code is provided with MRIcroGL. MRIcroGL does this once for the entire 3D volume to help estimate lighting (specular, diffuse, MatCaps).


In vtkPlanarContourToClosedSurfaceConversionRule, are all contours set to be counter-clockwise? I don’t understand the difference between counter-clockwise and clockwise contours.

Do you mean by this the existence of small contours inside the large contour?

In computer graphics, the direction of a polygon (which way is towards the inside/outside) is usually defined by its winding.
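For a planar polygon, the winding can be read off the sign of the shoelace (signed area) formula. A small sketch:

```python
def winding(points_2d):
    """Classify polygon winding from the signed area (shoelace formula):
    positive area means counter-clockwise, negative means clockwise."""
    area2 = 0.0
    n = len(points_2d)
    for i in range(n):
        x0, y0 = points_2d[i]
        x1, y1 = points_2d[(i + 1) % n]
        area2 += x0 * y1 - x1 * y0
    return "counter-clockwise" if area2 > 0 else "clockwise"

print(winding([(0, 0), (1, 0), (1, 1), (0, 1)]))  # counter-clockwise
print(winding([(0, 0), (0, 1), (1, 1), (1, 0)]))  # clockwise
```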

In planar contours in DICOM RTSTRUCT information objects, winding is not used. Instead, concavities (which may appear as holes in planar contours) and holes are specified using the keyhole technique. See the DICOM standard for details.


SlicerRT building instructions can be found here: https://github.com/SlicerRt/SlicerRT/wiki/SlicerRt-developers-page

If you mention any errors, it is very useful to actually send those errors so that we can give a meaningful answer.

If you build your own Slicer, then you also need to build all the extensions you need. It is very simple to do: just clone the repository on your computer, configure with CMake (set your Slicer-build folder as Slicer_DIR), and build.

In Visual Studio, you need to choose the same build configuration (Debug/Release) that you chose when you built Slicer.

The repository you cloned is a fork of the official repository. It was forked in May 2018 and is therefore two years out of date.

Hi,

Thanks for the reply. I searched the MRIcroGL code but I didn’t find how the normal vectors are computed from a binary volume using filters on the CPU. Can you give me more details, please?

Thanks in advance

The function sobel() estimates the Sobel operator.

This is easiest to describe in 2D, where each pixel has 8 neighbors (in 3D each voxel has 26 neighbors that share a face, edge, or corner). Since the principle is the same, I will describe it in 2D. Consider a 2D image where each pixel has 8 neighbors that are Left/Right (L/R) and Up/Down (U/D) from the center: LU, U, RU, L, R, LD, D, RD.

The left-right gradient x is
x = (LU+2*L+LD) - (RU+2*R+RD)
The up-down gradient y is
y = (LU+2*U+RU) - (LD+2*D+RD)

Extending to 3 dimensions is described here.
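As a concrete illustration, here is a small NumPy/SciPy sketch of exactly these two formulas (using `correlate` rather than `convolve` so the kernel weights line up directly with the neighbor labels above):

```python
import numpy as np
from scipy.ndimage import correlate

# Kernels matching the formulas above (correlate applies them un-flipped):
# sobel_x: left column (LU, L, LD) minus right column (RU, R, RD)
# sobel_y: top row (LU, U, RU) minus bottom row (LD, D, RD)
sobel_x = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
sobel_y = sobel_x.T

def sobel_gradient_2d(image):
    """Return the (x, y) Sobel gradient components of a 2D array."""
    gx = correlate(image, sobel_x, mode="nearest")
    gy = correlate(image, sobel_y, mode="nearest")
    return gx, gy
```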

Thanks again for the reply. What about vtkImageSobel3D?

I have applied a Gaussian filter to blur the binary volume, followed by vtkImageSobel3D, but I get artifacts in some regions of the model. What is the difference between the Sobel operator illustrated in your example and the vtkImageSobel3D filter? Do both give the same results?

You can see the code for vtkImageSobel3D.cxx here. It is conceptually identical to the wiki Sobel 3D formula. The wiki uses weights 1, 2, 4 for voxels that share a corner, edge, or face, while VTK uses the weights 1, 1.71, 3.41. These should provide very similar results.

In general, your approach of applying a Gaussian blur prior to a Sobel is a wise choice. I am not sure what artifacts you are experiencing. Working with binary data might be part of the issue.

If you look at my application, I apply these to continuous data (e.g. different voxels in CT and MRI have slightly different intensities). Specifically, I usually apply it to the alpha channel (transparency) of the image I am working with. Below is an example of a CT scan with a window center of 188 and a window width of 288 (in Hounsfield units). The horizontal axis in the graph shows the selected intensity range (from 114…302) and the vertical axis shows the alpha channel, which in this case is a linear ramp. Thus, values darker than 114 are completely transparent and values greater than 302 are completely opaque. Intermediate values are translucent. Using this approach helps with partial volume issues (e.g. a voxel that is only partially bone). The gradient is shown on the right.

So my sense is you want to think about a segmentation that is continuous rather than binary to capture partial volume effects.
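To make the blur-then-gradient step concrete, here is a minimal VTK/Python sketch; `volume` is an assumed vtkImageData, and the smoothing parameters are illustrative values that need tuning:

```python
import vtk

# Smooth the (binary) labels first so the Sobel gradient varies smoothly
# across the boundary instead of jumping between 0 and 1.
gaussian = vtk.vtkImageGaussianSmooth()
gaussian.SetInputData(volume)                  # assumption: 'volume' is vtkImageData
gaussian.SetStandardDeviations(1.5, 1.5, 1.5)  # in voxels; tune as needed
gaussian.SetRadiusFactors(3.0, 3.0, 3.0)

sobel = vtk.vtkImageSobel3D()
sobel.SetInputConnection(gaussian.GetOutputPort())
sobel.Update()

# The output is a 3-component vector image; normalize each vector
# to obtain unit surface normals.
gradient_image = sobel.GetOutput()
```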


Thank you so much for the clarification.