I have annotation files that each contain a box surrounding the tumor. In order to do the segmentation in 3D Slicer and extract features, I need to match each annotation file to the corresponding image in 3D Slicer. I wondered if there is a way to match the annotation image (surrounding box) with the coordinates that 3D Slicer gives for the image's position in the sequence.
Yes, sure, it should be no problem. What software did you use to specify the boxes? What is the format of the annotation files? Can you copy the content of an example annotation file here?
Hi Andras, The image annotations are saved as XML files in PASCAL VOC format, and I have written Python code to visualize the annotation boxes on top of the DICOM images. The following is the content of one of the annotation files for the first patient. In total, there are 20 annotation files for the first patient, and I need to find a way to match each annotation image (surrounding box) with the coordinates that 3D Slicer gives for the image's position in the sequence.
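In case it helps, here is a minimal sketch of reading the box coordinates out of a PASCAL VOC annotation with only the Python standard library. The element names (`object`, `bndbox`, `xmin`, …) follow the usual VOC convention, and the sample XML below is synthetic, not taken from the actual annotation files:

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    """Return a list of (xmin, ymin, xmax, ymax) tuples from a PASCAL VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append(tuple(int(bb.find(tag).text)
                           for tag in ("xmin", "ymin", "xmax", "ymax")))
    return boxes

# Tiny synthetic example (placeholder values, not the poster's data):
sample = """<annotation>
  <filename>slice_01.dcm</filename>
  <object>
    <name>tumor</name>
    <bndbox><xmin>30</xmin><ymin>10</ymin><xmax>40</xmax><ymax>20</ymax></bndbox>
  </object>
</annotation>"""

print(read_voc_boxes(sample))  # [(30, 10, 40, 20)]
```

For files on disk, `ET.parse(path).getroot()` replaces `ET.fromstring`.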
Probably the simplest approach is to write a short Python script that creates a numpy array and fills it with these ROI rectangles (you can use numpy indexing to fill a rectangular region with a value), then saves the numpy array to a nrrd file using pynrrd.
Make sure to set the origin, spacing, and axis directions of the nrrd file according to the DICOM file's geometry. If you are not sure how to get this information, you can import the DICOM series into Slicer, save it as nrrd, and use the same header.
You don’t need anything fancy: just create a numpy array using data = np.zeros([512, 512, 512]), fill each rectangle by numpy array indexing, such as data[10:20, 30:40, 45] = 1, and write the resulting array to a nrrd file as shown here.
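The steps above can be sketched roughly like this with numpy and pynrrd. The volume size, spacing, origin, and output filename are all placeholders; the real values should come from the DICOM series geometry (e.g. from the header of a volume saved as nrrd in Slicer, as suggested above):

```python
import numpy as np

# Empty volume matching the DICOM series dimensions (placeholder size).
data = np.zeros([512, 512, 512], dtype=np.uint8)

# Fill one annotation box on slice 45 (rows 10-19, columns 30-39).
data[10:20, 30:40, 45] = 1

# Geometry of the volume (placeholder values -- copy the real origin,
# spacing, and axis directions from your DICOM series).
header = {
    "space": "left-posterior-superior",
    "space directions": np.diag([0.7, 0.7, 2.5]),   # voxel spacing along each axis
    "space origin": np.array([-100.0, -100.0, -50.0]),
}

try:
    import nrrd  # pynrrd; install with `pip install pynrrd`
    nrrd.write("boxes.label.nrrd", data, header)
except ImportError:
    pass  # pynrrd not available; the array/header construction above still applies
```

Repeating the indexing step once per annotation box builds up the full labelmap before the single write at the end.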
If you save the file with a .label.nrrd file extension, Slicer will load it as a labelmap volume by default; if you use .seg.nrrd, Slicer will load it as a segmentation by default.