What's wrong when trying to convert a model's coordinate points into a volume?

  1. First, create a new volume and obtain the model's coordinates relative to it.
  2. Convert the model's coordinates from RAS to IJK (adapted from the script repository at slicer.readthedocs.io):
import numpy as np
import vtk

def ras2IjkFor(ps, reVol):
  # Get the RAS -> IJK transform of the reference volume
  mat = vtk.vtkMatrix4x4()
  reVol.GetRASToIJKMatrix(mat)
  ijks = []
  for p in ps:
    ijk = [0, 0, 0, 1]
    # Multiply the homogeneous RAS point by the matrix
    mat.MultiplyPoint(np.append(p, 1.0), ijk)
    ijk = [int(round(c)) for c in ijk[0:3]]
    ijks.append(ijk)
  return np.asarray(ijks)
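The conversion above is just a homogeneous 4x4 matrix multiply, so it can be sanity-checked outside Slicer with plain NumPy. The matrix below is a made-up example (2 mm spacing, origin at (-10, -10, -10), axis-aligned), not a real volume's geometry:

```python
import numpy as np

def ras_to_ijk(points_ras, ras_to_ijk_mat):
    """Apply a 4x4 RAS->IJK matrix to an (N, 3) array of RAS points."""
    pts = np.hstack([points_ras, np.ones((len(points_ras), 1))])  # homogeneous coords
    ijk = pts @ ras_to_ijk_mat.T                                  # row-vector convention
    return np.rint(ijk[:, :3]).astype(int)                        # round to voxel indices

# Hypothetical geometry: i = (x + 10) / 2, i.e. 2 mm spacing, origin -10
mat = np.array([[0.5, 0, 0, 5],
                [0, 0.5, 0, 5],
                [0, 0, 0.5, 5],
                [0, 0, 0, 1]])
print(ras_to_ijk(np.array([[0.0, 0.0, 0.0], [2.0, -2.0, 4.0]]), mat))
# -> [[5 5 5]
#     [6 4 7]]
```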

  1. Then, paint the reference volume at those IJK coordinates:
ps = slicer.util.array(mod)          # model point coordinates (RAS)
reVol = slicer.util.getNode(reVol)   # reference volume node
reArr = slicer.util.array(reVol)     # reference voxel array, indexed [k, j, i]
ijk = ras2IjkFor(ps, reVol)
mArr = np.zeros_like(reArr)
# numpy volume arrays are indexed [k, j, i], so the IJK columns are reversed
i = ijk[:, 0].astype(int)
j = ijk[:, 1].astype(int)
k = ijk[:, 2].astype(int)
mArr[k, j, i] = lb                   # lb: label value, e.g. 1
slicer.util.addVolumeFromArray(mArr)
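The painting step itself is plain NumPy fancy indexing; a self-contained sketch of the same idea (toy 4x4x4 array, made-up indices), with a bounds clip added so out-of-volume points are dropped instead of raising an IndexError:

```python
import numpy as np

def paint(shape, ijk, label=1):
    """Set voxels at the given (i, j, k) index rows to `label`.
    numpy volume arrays are indexed [k, j, i], so columns are reversed."""
    arr = np.zeros(shape, dtype=np.uint8)
    # Discard points that fall outside the array
    ok = np.all((ijk >= 0) & (ijk < np.array(shape)[::-1]), axis=1)
    i, j, k = ijk[ok, 0], ijk[ok, 1], ijk[ok, 2]
    arr[k, j, i] = label
    return arr

arr = paint((4, 4, 4), np.array([[0, 1, 2], [3, 3, 3], [9, 9, 9]]), label=7)
print(arr[2, 1, 0], arr[3, 3, 3], arr.sum())  # -> 7 7 14 (out-of-range point dropped)
```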

but …

The newly generated volume ends up a huge distance away from the red vertebral model. What's wrong? @lassoan @pieper @Juicy @jamesobutler @jcfr

You have found the correct code snippet in the script repository. It works well, you can use it as is.

I cannot decipher what you intend to do from your code sample, but if you want to paint small spheres into a volume at markup point positions, then I would recommend creating a model that contains a small sphere at each markup position and importing it into a segmentation, as shown here:

https://slicer.readthedocs.io/en/latest/developer_guide/script_repository.html#create-segmentation-from-a-model-node

Another example where several spheres are added (if you have many points, it is more efficient to append all the spheres into a single polydata and add it as one segment):
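The recommended approach builds the spheres as VTK polydata; the voxel-space analogue of "many spheres, one segment" can be sketched directly in NumPy (hypothetical centers and radius, not Slicer API):

```python
import numpy as np

def stamp_spheres(shape, centers_ijk, radius, label=1):
    """Return a labelmap array with a filled sphere at each (i, j, k) center.
    All spheres share one label, i.e. together they form a single segment."""
    arr = np.zeros(shape, dtype=np.uint8)
    kk, jj, ii = np.indices(shape)          # arrays are indexed [k, j, i]
    for ci, cj, ck in centers_ijk:
        mask = (ii - ci) ** 2 + (jj - cj) ** 2 + (kk - ck) ** 2 <= radius ** 2
        arr[mask] = label
    return arr

arr = stamp_spheres((10, 10, 10), [(2, 2, 2), (7, 7, 7)], radius=1)
```

Stamping into one array before creating the node mirrors the "append into a single polydata" advice: one node update instead of one per point.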

@lassoan I have also seen the snippet you provided and it works very well, but I want to convert a coordinate array into a voxel array, because I want to turn the model (or its point array) into a volume. Could you take a look? Or perhaps there is something wrong with the code I wrote?

def mod2Vol(mod, rVol=None, lbl=1, mNam=''):
  mod = slicer.util.getNode(mod)
  if rVol is None:
    rVol = slicer.mrmlScene.GetFirstNodeByClass('vtkMRMLScalarVolumeNode')
  else:
    rVol = slicer.util.getNode(rVol)
  # Import the model into a temporary segmentation using the reference geometry
  seg = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
  seg.SetReferenceImageGeometryParameterFromVolumeNode(rVol)
  slicer.modules.segmentations.logic().ImportModelToSegmentationNode(mod, seg)
  seg.CreateBinaryLabelmapRepresentation()
  # Export the segmentation to a labelmap volume
  vol = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode", mNam)
  slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(seg, vol, rVol)
  segArr = slicer.util.arrayFromVolume(vol)
  slicer.mrmlScene.RemoveNode(seg)  # delete the temporary segmentation
  slicer.util.updateVolumeFromArray(vol, segArr * lbl)
  return vol

# vol = mod2Vol('pjMod')

Um, yes. I used the vtk.vtkProjectPointsToPlane() filter to press (project) the vertebral body model onto a plane along a certain direction, and then ran mod2Vol('pjMod'), which produced an empty volume.
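Projecting points onto a plane, which is what the VTK filter does, is a one-liner in NumPy. A minimal sketch (example origin, normal, and points are hypothetical):

```python
import numpy as np

def project_to_plane(points, origin, normal):
    """Project (N, 3) points onto the plane through `origin` with normal `normal`."""
    normal = np.asarray(normal, dtype=float)
    normal /= np.linalg.norm(normal)                 # ensure unit length
    d = (points - origin) @ normal                   # signed distance to the plane
    return points - np.outer(d, normal)              # subtract the normal component

pts = np.array([[1.0, 2.0, 5.0], [0.0, 0.0, -3.0]])
flat = project_to_plane(pts, origin=[0, 0, 0], normal=[0, 0, 1])
print(flat)  # z components collapse to 0
```

Note that such a flattened surface has zero thickness and encloses no volume, which is consistent with the binary labelmap conversion coming out empty.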

You can export the segmentation to a labelmap volume.

It seems the problem is with this line: slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(seg, vol, rVol). Is the projected model too thin to be converted? How can I convert it with code?
@lassoan

Finally aligned!