Markups module lagging when placing anatomical landmarks

Hello! I’m hoping to get some clarification on some issues I’ve been having with markups in Slicer. I am very new to using the program so any help is appreciated.

I have two ages of mouse skulls (E17.5 and P28) for which we have CT scan data, and the volume rendering shows all the external structures of the head. TIFF stacks were imported using the SlicerMorph module for both ages, and the image spacing was added from our voxel sizes as shown in our .pcr files. I landmarked both using the markups module. For the E17.5, I had no issues creating only a volume rendering, adjusting the shift to see the skull, and landmarking directly on the rendering of the skull (possibly because there is less bone at this age). I also had no issues with Slicer lagging/not responding or with the markups module lagging at all.

For the P28 I was having trouble visualizing the skull structures I needed to see in the volume rendering before placing a landmark in the markups module. My solution was to segment out the skull and save it as a model, then landmark directly on the model itself. When I did this though, I ran into a couple of issues.

First, the markups module began lagging and taking 15-20 seconds to place a landmark, and I often had to re-take landmarks that were not placed properly due to the lag. Second, Slicer stopped responding, and I often had to wait until the program finished whatever it was doing before adjusting landmarks or moving the model around.

At one point, the program would switch back and forth between the fiducial tool (where the landmark displays as a green dot) and attempting to place a landmark for a minute or two before it placed the landmark, after which I had to edit the placement because of the lagging. It was almost as if my placement of the landmark didn’t register in Slicer, and I had to be careful not to click anywhere else until the landmark was placed.

For reference, the computer I was using is a high-powered machine that is capable of handling large amounts of data and even runs Avizo without any issues (512 RAM, 64-bit OS, 2.20 GHz, and an additional 1.81 TB of SSD for local data storage). All data is saved locally so that importing large files does not disrupt processing in these programs. Additionally, the voxel sizes of our data are typically between 5 and 25 microns.

Is there possibly an issue with the image spacing in SlicerMorph? For example, I sometimes have to convert the voxel size to mm or to um depending on what program I am using. I believe SlicerMorph asks for mm. Or is this an issue with having multiple nodes in the data tree (model, segmentation, volume rendering, markups) even if the visibility is off for the ones I am not actively using? Lastly, does the coordinate system change at all when landmarking on a model versus a volume rendering? All the data is cropped and reoriented in Dragonfly and Fiji before saving as TIFF files so that file sizes are cut down and easily viewable in Avizo or Slicer.

Hi @meganfveltri, to help others diagnose your issue, can you provide the version of Slicer that you are using? Also, please provide details of when you installed SlicerMorph or last updated the extension.

Most likely the displayed meshes are too complex and therefore 3D point picking takes too long. There are several solutions for this, but to suggest what to do, we would need a little bit more information.

How many points have you placed?
Do you place the points on segmentations or on models?
How many segments or models do you have?
If you export the segmentation to a model, then what is the number of points in the model?

@meganfveltri please provide the information @lassoan and @jamesobutler asked as this will help narrow down the issue. As for your questions below:

Incorrect spacing should not alter display performance. But always enter spacing information into ImageStacks in millimeters, as that’s what we expect (so 5 microns will be entered as 0.005).
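The micron-to-millimeter conversion mentioned above can be sketched as a tiny helper (a hypothetical function, not part of SlicerMorph; it just divides by 1000):

```python
# Hypothetical helper (not part of SlicerMorph): convert a voxel size
# read from a .pcr file (in microns) to the millimeters that the
# ImageStacks module expects. 1 mm = 1000 microns.
def microns_to_mm(voxel_size_um: float) -> float:
    return voxel_size_um / 1000.0

print(microns_to_mm(5.0))   # 0.005
print(microns_to_mm(25.0))  # 0.025
```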

Your computer sounds powerful enough that having multiple objects in the scene shouldn’t be too much of a concern for available resources. But to rule that out, you can export your segmentation as a 3D model (e.g., in OBJ format), close everything, load only that into the scene, and see if that makes a difference.

If you are landmarking in Slicer, landmarking on a model vs. a volume rendering will not affect the coordinate system. I prefer to do landmarking on the volume rendering, mostly because the anatomical detail I obtain from the volume rendering is usually superior to the 3D model representation of its segmentation.

Finally, I would discourage cropping your raw data in Dragonfly or Fiji. You should be able to do that just as easily in the ImageStacks module of SlicerMorph by specifying the ROI you would like to import. In fact, one way to deal with performance issues is to import only the relevant sections of a large dataset into Slicer using ImageStacks. Because the coordinate system is preserved, if you later import another section from the same image sequence, you will be able to continue segmenting (or landmarking) without worrying about whether things continue to line up in the physical world. You will not be able to do this if you use Fiji for cropping your images. You can see our tutorial on ImageStacks, and specifically how to use the ROI option to import data, here:


Hi James,

I am currently running Slicer version 5.2.1 r31317 and SlicerMorph version 5813db0 (2022-12-01). We downloaded both in the middle of December 2022.

It would be great if you could share your dataset as well.

I’ve updated the SlicerMorph extension today.
@lassoan here are the answers to your questions:

The E17.5 skull has 34 landmarks. The P28 skull has 32. For the P28 I placed the landmarks on the model. I segmented out the cranium and saved it as a model, then landmarked the model. Therefore, I have one segment and one model. By the number of points in the model do you mean the number of landmarks? If so, there are 32 landmarks.

@muratmaga
For background, we crop and reorient all of our data as part of our data transformation pipeline since most of these scans are going to another collaborator for additional processing and it helps to cut down processing power and file sizes in dropbox. We also do this so that we can process our bone phantom which is scanned with the specimens so that we can maintain consistency in the bone threshold across all the specimens in a particular scan range. Our specimens are usually scanned three at a time, so we need to crop each individual specimen out from the raw data for processing later. That way we can visualize one specimen at a time versus three at a time and have to figure out which specimens are which. So the data for P28, for example, has already been cropped to the relevant ROI and TIFF slices that I want to analyze. Can you elaborate on what you said here: “Because the coordinate system is preserved, later if you import another section from the same image sequence, you will be able to continue segmenting (or landmarking) without worrying about whether things continue to line up in physical world.” ? Am I correct in thinking that if we crop data in Dragonfly or Fiji prior to landmarking that the coordinate system can be disturbed between specimens since they are imported as separate image stacks?

Thanks again for all of your help!

Thanks. @lassoan was asking about the number of points in the model. For that, right-click on the model object and choose “Edit Properties” (which will take you to the Models module).
Expand the information tab, and report the values under points and cells.

I will post another response to your other inquiry a little later.


Maybe too many points?

What is the best way to share the dataset? Box?
Also, would you prefer just the .tiff files or the entire scene/model/segmentation/markups for the P28 skull?

Yes, that’s a fairly large model. Here are a few things you can do to possibly speed up the interactions.

Go to the Markups module, expand the display tab, navigate to the section that says 3D Display, and change the placement mode to unconstrained. By default this snaps to the visible surface, which ensures that points always land on a vertex of the model (and slows things down).

Also, navigate further down to the Control Points tab and click the lock sign that says interactions (the left one). This will disable any mouse interaction with the points already created, but should speed up mouse movements. If you need to interact with them for any reason, you can turn off the lock.

Please share both the tiff stack and the segmentation. If you do not want to publicly share it on the cloud and provide a link for everyone in the list, you can share it with me here: http://faculty.uw.edu/maga/data_dropbox.

Thanks so much for all your help. I will double-check with the rest of the lab that I can share the data, then let you know. We’re meeting to discuss this tomorrow during our lab meeting.


Hi Murat,

I’ve uploaded the tiff stack and segmentation. Let me know if you need anything else! Thanks again for all your help.

There are a few things going on.

First, I can replicate the extreme lag when placing landmarks on 3D models, and that’s directly related to the size of the model. My suggestions of disabling the interactions and other things didn’t help much, so I would like to hear what @lassoan suggests to improve that. So hold off on that one.

There are other things you should do that will make your life easier. I noticed that the tiff stack contains a very tight intensity range (min = -0.0086, max = 0.0765), and the data is represented as a float32 data type. In my experience this is very uncommon for microCT datasets. Most of the time mCT data are either 8-bit (if it’s a dry skull) with 256 discrete intensities, or 16-bit (usually soft tissue, or contrast enhanced) with 65K discrete values. Float uses much more memory than either of these. So here are the steps to rescale and cast your image with almost no loss of detail.
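To put the memory difference in perspective, here is a back-of-the-envelope sketch (the volume dimensions are made up for illustration; each float32 voxel takes 4 bytes vs. 1 byte for uint8):

```python
# Sketch with an assumed volume size: memory used by one volume stored
# as float32 vs. uint8. Each float32 voxel takes 4 bytes; a uint8 voxel
# takes 1 byte, so casting shrinks the in-memory volume by 4x.
dims = (1000, 1000, 1000)          # hypothetical 1000^3-voxel microCT scan
n_voxels = dims[0] * dims[1] * dims[2]

float32_gb = n_voxels * 4 / 1e9    # 4 bytes per float32 voxel
uint8_gb = n_voxels * 1 / 1e9      # 1 byte per uint8 voxel

print(f"float32: {float32_gb:.1f} GB, uint8: {uint8_gb:.1f} GB")
# float32: 4.0 GB, uint8: 1.0 GB
```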

  1. Import your data with ImageStacks as usual.

  2. Under the Module Finder, search for SimpleFilters.

  3. In the filters search box, type **rescale**.

  4. Enter 0 as the output minimum and 255 as the output maximum, create a new output volume, and hit apply.

  5. After this is completed, type Cast.

  6. As the input volume choose the output volume you specified in the previous step.

  7. Set the output pixel type to uint8_t (unsigned 8-bit, since we chose the 0-255 range in the previous step).

  8. Set the output volume to be the same as the input volume and hit apply.

After these operations you will have a new 8-bit volume. The air background will be around 0, the soft tissue surrounding the skull will be in the 60-90 range, and the bone will be at values of about 100+ (I just poked around, so these may not be entirely accurate). After these operations, notice how much faster the volume rendering becomes. You will have to create a new set of volume rendering properties for this volume. If you find that the 255-intensity range is not sufficient, you can go back to the rescaling step, enter the range 0-65535, and then in the cast step choose uint16_t (unsigned 16-bit). Make sure to save the final volume as an NRRD file so that you don’t have to repeat these steps.
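The rescale-and-cast steps above can be sketched offline with NumPy (a stand-in for the SimpleFilters rescale and cast filters, not the filters themselves; the tiny array is made up, but its intensity range matches the one reported above):

```python
import numpy as np

# Hypothetical float32 volume with a tight intensity range, like the
# reported min = -0.0086, max = 0.0765.
vol = np.array([[-0.0086, 0.01],
                [0.05, 0.0765]], dtype=np.float32)

# Steps 3-4: rescale intensities linearly into the 0-255 output range.
lo, hi = vol.min(), vol.max()
rescaled = (vol - lo) / (hi - lo) * 255.0

# Steps 5-8: cast to unsigned 8-bit, matching the 0-255 range chosen above.
vol_u8 = rescaled.astype(np.uint8)

print(vol_u8.dtype, vol_u8.min(), vol_u8.max())  # uint8 0 255
```

The cast volume spans the full 0-255 range while using a quarter of the memory per voxel.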

These steps will not address the slowness you encounter while landmarking the 3D model created from the segmentation. Unless @lassoan has other suggestions, if you feel you need to do the landmarking on the model, you will need to use a slightly lower-resolution model for faster picking.

(3D model on the left, volume rendering on the right derived from this 8-bit data)

I will comment on the ImageStacks and ROI operations in the next one.


Here is an example of how you can use ImageStacks’ ROI functionality to reduce your memory footprint and partially import datasets. This would make more sense with the original data, where multiple specimens are scanned side by side (which I believe is how you scan), but it still demonstrates the concept.

  1. Proceed with ImageStacks as before, but choose to import at half resolution.
  2. Volume render the half-resolution set and create an ROI in volume rendering that contains only the left mandible.
  3. Go back to ImageStacks and change these settings:
    • Output volume: create a new output volume and name it Left Mandible
    • Set the region of interest to the ROI you created in step #2
    • Change the resolution to full resolution.

These changes reduce the memory footprint to 1/8th of the original dataset. Hit Load files to import this new volume.

Go ahead and create a single landmark on the tip of the left incisor.

Repeat these procedures to import the right mandible and create a single point on the right mandible as well. Then go ahead and import the full dataset (at any resolution) without the ROI and notice that both of these points are indeed in the correct spots on the incisors in the full dataset. That’s because Slicer preserves the correct offset for the partial volumes you created. Of course, save the data as an NRRD file right after the import (so that you don’t repeat the steps).
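The offset preservation described above can be sketched with plain Python (all numbers are made up; Slicer’s actual geometry handling uses full IJK-to-physical transforms, but the idea is the same: the crop carries its physical origin with it):

```python
# Sketch with made-up numbers: cropping a subvolume while carrying the
# origin offset forward, so a point placed in the crop maps to the same
# physical location in the full volume. Integer micron units keep the
# arithmetic exact.
spacing_um = 5                           # hypothetical 5-micron voxels

# The crop starts at voxel index (200, 300, 100) of the full volume, so
# its physical origin shifts by index * spacing along each axis.
crop_start_ijk = (200, 300, 100)
crop_origin_um = tuple(i * spacing_um for i in crop_start_ijk)

# A landmark placed at voxel (10, 20, 30) inside the crop...
point_um = tuple(o + i * spacing_um
                 for o, i in zip(crop_origin_um, (10, 20, 30)))

# ...sits at the same physical position as voxel (210, 320, 130) of the
# full volume, whose origin is (0, 0, 0).
full_um = tuple((c + p) * spacing_um
                for c, p in zip(crop_start_ijk, (10, 20, 30)))
print(point_um == full_um)  # True
```

A crop made in a tool that discards the origin (as described for ImageJ below) would lose `crop_origin_um`, and the two positions would no longer match.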

If you did the cropping of the left and right mandible in ImageJ and then brought those datasets to Slicer, you wouldn’t be able to do this, because ImageJ will not preserve the full coordinate system. That’s why I say there is really no reason to do the cropping in ImageJ. In the future, we will add the option to put the ROI under a transform (so that you can crop in oblique orientations) as well.


This is all very helpful. Thank you so much! I am hoping to get back to this on Thursday to play around with the scans and try landmarking again. I will reply then and let you all know how it goes.

Hi everyone. Just wanted to give an update. @muratmaga 's recommendation to rescale and cast the data with SimpleFilters has worked well. The landmark lagging issue has been resolved. Thanks!
