How to implement CPR (curved planar reconstruction) from centerline?

I want to show the vessel in a planar view. I have a .csv centerline file that contains points of the vessel. How do I reconstruct a planar view of the vessel from the centerline file?

  1. How do I import the csv file as a list of points?
  2. How do I do the reconstruction and show it in the window?

And if the centerline is a mask file, such as an .mhd segmentation file (just some points), how can I implement CPR?


You can create a curve with the ‘CurveMaker’ module from your csv file read with numpy.

You can load a curve from a CSV file that has this structure:

# Markups fiducial file version = 4.11
# CoordinateSystem = 0
# columns = id,x,y,z,ow,ox,oy,oz,vis,sel,lock,label,desc,associatedNodeID
1,-48.966353538964675,-101.7065776156342,74.67105263157896,0,0,0,1,1,1,0,MarkupsCurve-1,,
1,-53.75692521242794,-85.6239441404361,66.4605263157895,0,0,0,1,1,1,0,MarkupsCurve-2,,
1,-60.86814580722918,-72.76476934319273,60.7763157894737,0,0,0,1,1,1,0,MarkupsCurve-3,,
1,-65.87663276996875,-62.425437293974404,57.87936805530591,0,0,0,1,1,1,0,MarkupsCurve-4,,
1,-73.58132198684874,-44.72104505093102,57.37936805530591,0,0,0,1,1,1,0,MarkupsCurve-5,,

To load it from Python, you can use curveNode = slicer.util.loadMarkupsCurve('some/folder/MarkupsCurve.fcsv') (this convenience function is available in Slicer Preview Releases downloaded tomorrow or later).
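If your centerline is a plain CSV with only x,y,z columns rather than the .fcsv format above, a minimal sketch like the following may work in the Slicer Python console (the file path and delimiter are assumptions; adjust them to your file):

import numpy as np
import slicer

# Hypothetical plain CSV with one "x,y,z" centerline point per row
points = np.loadtxt('some/folder/centerline_points.csv', delimiter=',')  # shape (N, 3)

# Create an empty markups curve node and fill it with the points as control points
curveNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLMarkupsCurveNode')
slicer.util.updateMarkupsControlPointsFromArray(curveNode, points)

If slicer.util.updateMarkupsControlPointsFromArray is not available in your Slicer version, you can add the points one by one with curveNode.AddControlPoint(vtk.vtkVector3d(x, y, z)) instead.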

I created a script for CPR earlier, but to make it more accessible, I’ve now added it as a module (“Curved Planar Reformat”) to the SlicerSandbox extension. It is available in Slicer Preview Releases that you download tomorrow or later (in the Examples category).


Thank you for your response.
Can you explain the meaning of CoordinateSystem and columns?

And which version of the Slicer Preview Release do I need? I can’t find the SlicerSandbox extension in revision 28669, built 2019-12-10.

See the description of the file format here. You just need to set the x, y, z values and leave the rest as in the example.
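For example, if your points start out as a plain x,y,z CSV, a rough sketch for converting them to a loadable .fcsv could look like this (the file names are assumptions; only x, y, z change per row, everything else is copied from the example above):

import numpy as np

# Hypothetical plain CSV with one "x,y,z" point per row
points = np.loadtxt('centerline_points.csv', delimiter=',')

with open('MarkupsCurve.fcsv', 'w') as f:
    f.write('# Markups fiducial file version = 4.11\n')
    f.write('# CoordinateSystem = 0\n')
    f.write('# columns = id,x,y,z,ow,ox,oy,oz,vis,sel,lock,label,desc,associatedNodeID\n')
    for i, (x, y, z) in enumerate(points):
        # Only x, y, z vary; the remaining columns are left as in the example
        f.write('1,{0},{1},{2},0,0,0,1,1,1,0,MarkupsCurve-{3},,\n'.format(x, y, z, i + 1))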

The Sandbox is in the Examples category in the extension manager. Use Slicer rev 28672 or later.

Where can I download Slicer rev 28672 or later? The newest Preview Release at this link is rev 28669:
https://download.slicer.org/

Hi Andras,

Sometimes people end up having to scan coiled snakes, or fish that are in little jars. Depending on what they need to do with it, they may have to ‘straighten’ the specimen. Is there a way this CPR can be generalized for such use cases?


Is this micro-CT, and does it have any three-dimensional segmentation?

The module is already completely generic, so it can straighten snakes just as well as vessels. It would be great if you could post a few examples.

Here is one snake from MorphoSource straightened:

If the specimen touches itself then you need to apply masking, and sharp breaks and wrinkles due to squeezing into a container cannot be fully compensated for, but it works quite well.


I cannot find the Curved Planar Reformat module to do this. Where can I find it? I am using the stable version and the nightly build 28669.

Or can you point me to a tutorial where I could do this, please?

Thank you.

Extensions are not available yet for 28669. Probably because today is Wednesday and Windows patches are usually installed Tuesday nights, which can interrupt the nightly build process. I expect the extensions will be available in a few hours.

@Sam_Horvath Could you check the factory machines / restart the extensions build?


After the associated build directory is cleaned, the extension build will be restarted.

Update: Windows extensions build in progress ⏳

Wow, this looks great, thank you Andras!


Hi everyone, I have been trying to figure this module out for a while now. Unfortunately I’m not getting the same results as you. Would someone mind posting a YouTube video on exactly how this works? It seems pretty straightforward, but I think the rotation angle is messing me up. Also, if the CT scan itself is angled, I can’t figure out how to transform it back straight (if that is even necessary). How do you figure out that angle? Any help is much appreciated, as these reconstructions are no longer offered to our in-house surgeons without having to pay.

Also, which series are you using from the data set? Thanks!

Is it possible to get the transformation matrix/formula for this 3D to 2D projection (I mean generation of panoramic image based on the curve/points and initial 3D volume)?

Sorry for the delayed response. We have a new version of the module now. It does not have the rotation angle slider anymore; instead, the output volume is automatically aligned with the input volume. You can rotate the reslicing axes as shown in this video:

Projection is done by simple averaging, which corresponds to X-ray projection (with intensity normalization). If you need minimum or maximum intensity projection then you may just replace mean by min or max.
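As a rough numpy sketch of that idea (the node name and the axis to project along are assumptions, so adjust them to your straightened volume):

import numpy as np
import slicer

# Hypothetical name of the straightened (CPR) output volume
straightenedVolume = slicer.util.getNode('Straightened volume')
volumeArray = slicer.util.arrayFromVolume(straightenedVolume)  # voxels indexed as [K, J, I]

# Averaging along one in-slice axis approximates an X-ray projection;
# replace np.mean with np.max or np.min for a maximum/minimum intensity projection
projectionImage = np.mean(volumeArray, axis=1)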

Thanks for the reply. By projection I mean the straightening transformation of the volume. Having the volume in nrrd format (IJK coordinates) and the points of the curve from Slicer (RAS/world coordinates?), I am trying to do this in Python. Could you please clarify the following points:
1- In the CurvedPlanarReformat script (https://github.com/PerkLab/SlicerSandbox/blob/6e7c5c2c59bd313222a47aad8c620957fd9713b0/CurvedPlanarReformat/CurvedPlanarReformat.py#L359), we get the 4x4 transformation matrix from curve point to world with “GetCurvePointToWorldTransformAtPointIndex”.
If I want to build this matrix from DICOM parameters, is this the right formula: [[spacing_x, 0, 0, origin_x], [0, spacing_y, 0, origin_y], [0, 0, spacing_z, origin_z], [0, 0, 0, 1]]?
Is this the matrix that transforms the curve points (obtained from Slicer markups) from Slicer/world coordinates to image (IJK) coordinates, so that I can carry out the straightening transformation in a consistent coordinate system?
2- In the same script, could you explain how you define the grid dimensions? In “gridDimensions = [2, 2, numberOfSlices]”, what is the 2 for, and why are the dimensions in x and y different from z?
3- In “CurvedPlanarReformating.py” (https://gist.github.com/lassoan/b445c734f118a5fb7643f3fb05f98b07), the panoramic plane spacing is defined based on the FOV and image size; could you explain exactly how?

This is my current code (inspired by the CurvedPlanarReformat script; I tried to make it run as standalone Python):

import os
import nrrd
import csv
import pandas as pd
import numpy as np
from numpy.linalg import svd

def curveLength(x, y, z):
    diffs = np.sqrt(np.diff(x)**2 + np.diff(y)**2 + np.diff(z)**2)
    length = diffs.sum()
    return length

def planeFit(points):
    """
    p, n = planeFit(points)
    Given an array, points, of shape (N, d)
    representing N points in d-dimensional space,
    fit a plane to the points.
    Return a point, p, on the plane (the point-cloud centroid),
    and the normal, n.
    """
    points = points.T  # input is (N, d); transpose to (d, N)
    points = np.reshape(points, (np.shape(points)[0], -1))  # Collapse trailing dimensions
    assert points.shape[0] <= points.shape[1], "There are only {} points in {} dimensions.".format(points.shape[1], points.shape[0])
    ctr = points.mean(axis=1)
    x = points - ctr[:, np.newaxis]
    M = np.dot(x, x.T)  # Could also use np.cov(x) here.
    return ctr, svd(M)[0][:, -1]

def readPoints():
    # Read the control point coordinates (x, y, z) from a Slicer .fcsv file;
    # the three comment lines are skipped and header=None keeps the first data row
    pandas_data = pd.read_csv(r'path to \F.fcsv', sep=',', skiprows=3, header=None)
    numpy_data = np.asarray(pandas_data)
    all_points = np.asarray(numpy_data[:, 1:4], 'float')
    return all_points

def computeStraighteningTransform(curvePoints, outputSpacingMm=0.1):

    # Fixed matrix taken from the DICOM header, used here in place of
    # GetCurvePointToWorldTransformAtPointIndex (which is only available inside Slicer)
    transformation_RAS2ijk = np.array([[-0.287, 0, 0, 73.212],
                                       [0, -0.287, 0, 75.644],
                                       [0, 0, 0.287, -80.779],
                                       [0, 0, 0, 1]])

    # Voxel spacing (slice thickness); can be found in the metadata of the CT
    sliceSizeMm = [0.287, 0.287, 0.287]
    curve_length = curveLength(curvePoints[:, 0], curvePoints[:, 1], curvePoints[:, 2])
    nPoints = curvePoints.shape[0]

    # There is no need to compute a displacement for every slice; computing only every n-th
    # makes the computation faster and the inverse computation more robust
    transformSpacingFactor = 5
    resamplingCurveSpacing = outputSpacingMm * transformSpacingFactor

    # Z axis (from first curve point to last, this will be the straightened curve long axis)
    transformGridAxisZ = (curvePoints[-1, :] - curvePoints[0, :]) / np.linalg.norm(curvePoints[-1, :] - curvePoints[0, :])

    # X axis = average X axis of curve, to minimize torsion (and so have a simple
    # displacement field, which can be robustly inverted)
    sumCurveAxisX_RAS = np.zeros(3)
    for nSlices in range(nPoints):
        curvePointToWorldArray = transformation_RAS2ijk
        curveAxisX_RAS = curvePointToWorldArray[0:3, 0]
        sumCurveAxisX_RAS += curvePoints[nSlices, :]
    meanCurveAxisX_RAS = sumCurveAxisX_RAS / np.linalg.norm(sumCurveAxisX_RAS)
    transformGridAxisX = meanCurveAxisX_RAS

    # Y axis
    transformGridAxisY = np.cross(transformGridAxisZ, transformGridAxisX)
    transformGridAxisY = transformGridAxisY / np.linalg.norm(transformGridAxisY)

    # Make sure that X axis is orthogonal to Y and Z
    transformGridAxisX = np.cross(transformGridAxisY, transformGridAxisZ)
    transformGridAxisX = transformGridAxisX / np.linalg.norm(transformGridAxisX)

    # Origin (makes the grid centered at the curve)
    planeCenter, planeNormal = planeFit(curvePoints)
    transformGridOrigin = planeCenter
    transformGridOrigin -= transformGridAxisX * sliceSizeMm[0] / 2.0
    transformGridOrigin -= transformGridAxisY * sliceSizeMm[1] / 2.0
    transformGridOrigin -= transformGridAxisZ * curve_length / 2.0

    # Create grid transform
    # Each corner of each slice is mapped from the original volume's reformatted slice
    # to the straightened volume slice.
    # The grid transform contains one vector at the corner of each slice.
    # The transform is in the same space and orientation as the straightened volume.
    gridDimensions = [2, 2, nPoints]
    gridSpacing = [sliceSizeMm[0], sliceSizeMm[1], resamplingCurveSpacing]
    gridDirectionMatrixArray = np.eye(4)
    gridDirectionMatrixArray[0:3, 0] = transformGridAxisX
    gridDirectionMatrixArray[0:3, 1] = transformGridAxisY
    gridDirectionMatrixArray[0:3, 2] = transformGridAxisZ

    # One 3-component displacement vector per grid point, stored in [K, J, I, 3] order
    transformDisplacements_RAS = np.zeros((gridDimensions[2], gridDimensions[1], gridDimensions[0], 3))

    for gridK in range(gridDimensions[2]):
        curvePointToWorldArray = transformation_RAS2ijk
        curveAxisX_RAS = curvePointToWorldArray[0:3, 0]
        curveAxisY_RAS = curvePointToWorldArray[0:3, 1]
        curvePoint_RAS = curvePointToWorldArray[0:3, 3]
        for gridJ in range(gridDimensions[1]):
            for gridI in range(gridDimensions[0]):
                straightenedVolume_RAS = (transformGridOrigin
                                          + gridI * gridSpacing[0] * transformGridAxisX
                                          + gridJ * gridSpacing[1] * transformGridAxisY
                                          + gridK * gridSpacing[2] * transformGridAxisZ)
                inputVolume_RAS = (curvePoint_RAS
                                   + (gridI - 0.287) * sliceSizeMm[0] * curveAxisX_RAS
                                   + (gridJ - 0.287) * sliceSizeMm[1] * curveAxisY_RAS)
                transformDisplacements_RAS[gridK, gridJ, gridI] = inputVolume_RAS - straightenedVolume_RAS

    return transformDisplacements_RAS


The internal Slicer coordinate system is RAS, but all files that Slicer writes are in LPS. DICOM is LPS.

If you run the processing in Slicer then everything is in RAS, so you don’t need RAS<->LPS conversion. If you use DICOM and files that Slicer created then everything is LPS, so you don’t need RAS<->LPS conversion either.

If for any other scenario you need LPS/RAS conversion then multiply the homogeneous coordinates by np.diag([-1,-1,1,1]).
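For example (pure numpy, nothing Slicer-specific assumed):

import numpy as np

# Flipping the sign of the first two axes converts between LPS and RAS
lpsToRas = np.diag([-1.0, -1.0, 1.0, 1.0])

pointLps = np.array([12.3, -45.6, 78.9, 1.0])  # homogeneous coordinates
pointRas = lpsToRas @ pointLps

# The same flip converts a 4x4 IJK-to-LPS matrix into an IJK-to-RAS matrix
ijkToLps = np.eye(4)  # placeholder for your matrix built from spacing/origin/direction
ijkToRas = lpsToRas @ ijkToLps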

Since we apply the same transform to all points within a slice, it is enough to specify the transform at the 4 corners of each slice (that is why the first two dimensions are 2, 2). numberOfSlices is up to you to decide: choose the resolution that you want along the curve in your output image, compute the transform for each slice, and store the result in this array.
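In other words, the displacement array only needs one 3-component vector per slice corner. A minimal sketch of the allocation (the numberOfSlices value here is an arbitrary example):

import numpy as np

numberOfSlices = 40  # chosen by you: resolution along the curve in the output image
gridDimensions = [2, 2, numberOfSlices]

# One displacement vector (3 components) at each of the 4 corners of every slice,
# stored in [K, J, I, 3] order so it can be fed into a grid transform
displacements = np.zeros((gridDimensions[2], gridDimensions[1], gridDimensions[0], 3))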

That was a preliminary implementation. The implementation in the Sandbox extension is completely reworked and all the spacings are computed accurately.

The original CurvedPlanerReformat approach proved to be a dead end. Since it did not provide a grid transform, it was impossible to transform meshes, curves, landmark points along with the image (images require inverse transform, points/lines/meshes require forward transform) and performance is suffered, too, because you always had to compute a complete straightened volume (while if you have a transform then you can extract just the single slice that you display).