Volume rendering produces a block of "white noise"

It is true that some non-clinical scanners create invalid images. Other data sets get corrupted during anonymization or other processing that researchers apply to them.

These invalid files do not cause issues if they are only used internally. A typical scenario: someone reports that Slicer cannot load some images; we investigate and explain that mandatory fields were removed by incorrect anonymization or that the scanner created invalid files. They fix their anonymization script or patch the files, and use the corrected files afterwards. They may even report the issue back to the scanner manufacturer. Problem solved.

The damage happens when invalid files are made widely available to many people. Distributors of data sets have a responsibility to stop spreading data that causes harm (through degraded safety and performance, and increased testing and maintenance workload for all kinds of DICOM software).

At a minimum, there should be mechanisms in place for fixing file format issues once they are detected.

Hi all! I am the product manager and lead dev for MorphoSource. I’ve also communicated with some folks here off-thread, but I wanted to chime in on the forum as well. First off, thanks for raising this @Mark_1 and to everyone else for contributing to the discussion here. Doug and I definitely agree with @lassoan that data repositories have a responsibility (and an incentive!) to make sure the data they are sharing is of the highest quality and format compliance. While like everyone else we are dealing with limited time and resources, blindly making malformed data available and doing nothing about it is definitely not something we want to be doing. Also my own research involves CT, so I can personally identify with the pain of problematic data!

To give a bit of background context, right now we are working on rebuilding MorphoSource’s software application from the ground up using Hyrax, an open source digital repository platform initially created for library and information science use cases. As part of this effort, our ability to validate files (and especially DICOMs) coming into the system will be greatly improved. We would be very interested in hearing suggestions for standard-compliant command line tools or workflows that could be used here.
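
To make this concrete, here is a minimal sketch of the kind of automated check we have in mind, using pydicom. The tag list is an illustrative subset of what volume readers typically depend on, not a complete IOD module table; a dedicated validator such as dciodvfy goes much deeper:

```python
# Hedged sketch: flag DICOM files that lack tags commonly required for
# volume reconstruction. REQUIRED_TAGS is illustrative, not exhaustive.
import sys
from pydicom import dcmread

REQUIRED_TAGS = [
    "SOPClassUID",
    "SOPInstanceUID",
    "SeriesInstanceUID",
    "ImagePositionPatient",
    "ImageOrientationPatient",
    "PixelSpacing",
    "Rows",
    "Columns",
]

def missing_tags(path):
    # stop_before_pixels skips the bulk pixel data, keeping validation fast
    ds = dcmread(path, stop_before_pixels=True)
    return [tag for tag in REQUIRED_TAGS if tag not in ds]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        missing = missing_tags(path)
        if missing:
            print(f"{path}: missing {', '.join(missing)}")
```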

For these particular problematic DICOMs, our preference is to fix them and replace the original files in the repository with more usable versions (while maintaining the iffy originals in dark storage). And we would be happy to work with people in this community to get the problematic data fixed and replaced. Additionally, if there are suggestions for things we can check in an automated way, we can start the process of checking all the data in the system and remediate as needed.

As one final comment, I don’t want it to sound like we are trying to put the burden of this work on anyone else. Doug and I are immediately going to start trying to resolve these issues on our end, to improve things for everyone who uses MorphoSource. But I definitely want to indicate how open and interested we are to hearing suggestions from and working with everyone involved in discussions like this. Our interest is in being a community resource, and responding to the needs and ideas of others is a big part of that.

6 Likes

Thank you @Julie_Winchester, these all sound great!

We will set up a meeting to discuss specific steps to take. If anybody wants to participate in this meeting then please let me know in a reply to this post.

I have to say I never imagined my question would spark this discussion, but I’m glad it was stimulating and that it sounds like the outcome of all of this might be more significant than me getting some pretty pictures for my thesis 🙂

Now I understand the difference between volume rendering and segmenting and when to use each - yes, segmenting is what I should have been doing all along. Sorry to be slow on the uptake! Thanks for the link to that tutorial @muratmaga, it was very clear and helpful.

@lassoan I appreciate your concern that by modifying our software to handle DICOM-like images we degrade performance, and that the increased complexity comes with a cost in maintainability and in the probability of unintended consequences. The resulting software can become more brittle. Unfortunately, this is the reality for any tool that wants to handle DICOM data. The standard is so complex that each vendor has developed their own interpretation. Robustly detecting even a simple feature like slice thickness is hard, and each major vendor has at least one proprietary method for reporting DWI information. My software has pages that document the methods used to extract common data from Siemens, GE, and Philips. Any DICOM image from any vendor that was touched by a GEIIS system had a thumbnail introduced that broke DICOM compliance as well as many DICOM tools (this bug existed for over a decade, and many of these PACS systems are still in the wild). If you look at many DICOM readers you will note that they convert tags with a declared length of 13 to a length of 10 as a kludge for old GE CT scans (the saving grace is that DICOM tags MUST have an even length, so this can be done without borking compliant DICOM images). Non-standard features like the mosaic layout must also be handled.
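
To make the kind of kludge I mean concrete, here is a tiny illustrative sketch of that length fix for an implicit-VR little-endian element header (a hypothetical helper, not code from any particular reader):

```python
import struct

def read_element_header(buf, offset):
    # Implicit VR little endian: group (2 bytes), element (2 bytes),
    # value length (4 bytes)
    group, element, length = struct.unpack_from("<HHI", buf, offset)
    # Compliant DICOM value lengths are always even, so coercing the odd
    # declared length 13 (seen on old GE CT scans) to 10 cannot corrupt
    # a valid file.
    if length == 13:
        length = 10
    return group, element, length, offset + 8
```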

I would think @fedorov would be well placed to work with the groups who share these terrific datasets to help them make their data more compliant. I do suspect these complexities are one reason why simpler formats like NRRD/NIfTI have gained a strong niche. From my perspective, enhanced DICOM was a missed opportunity to create a backward compatible but streamlined solution, much as OpenGL Core deprecated many legacy OpenGL features, allowing lean implementations. Unfortunately, this did not happen, and we are in a transition where many enhanced DICOMs break compatibility with existing tools and add complexity to any attempt at robust DICOM reading.

2 Likes

I don’t know whether this is the appropriate place for a follow-up question, but I’ve created a segment from the skull and I am now trying to fill the braincase to create a virtual endocast. Following @muratmaga’s suggestion, I’ve plugged the foramina by painting over them as part of the skull segment. I then thought I could use Flood Filling inside the braincase to create a new endocast segment, but when I try that, the whole background gets filled. I don’t know whether I’m not plugging the foramina correctly (so voxels inside the braincase are connected to ones outside), whether I’m not correctly restricting the operation to inside the skull segment, or whether I’m just going about this in completely the wrong way?

After you threshold for the skull, use the Margin effect to grow the skull segment, perhaps by 5 mm or so. This should fill all the holes except the foramen magnum, which you will have to patch manually. Then you should be able to flood-fill a third segment (Segment_1: Skull, Segment_2: Plug, Segment_3: Endocast). This will result in an undersegmented endocast, which you can then dilate by the same amount you grew the skull to compensate for the difference.

Alternatively, you can build a 3D model from the skull, export it as PLY, take it to endomaker to get your endocast, export that as PLY from R, and then import it into Slicer as a segmentation to see how well it lines up with your original image…
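
If it helps, the final import step can be done from Slicer's Python console; a minimal sketch (the file path is a placeholder):

```python
import slicer

# Load the endocast mesh exported from R (placeholder path)
modelNode = slicer.util.loadModel("/path/to/endocast.ply")

# Import the model into a segmentation node so it can be overlaid on the
# original image in slice and 3D views
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
slicer.modules.segmentations.logic().ImportModelToSegmentationNode(modelNode, segmentationNode)
segmentationNode.CreateBinaryLabelmapRepresentation()
```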

@Mark_1

  1. Downsample with Crop Volume by a factor of 4 and use the resulting volume
  2. Threshold for the skull
  3. Use the Margin effect to grow by 3 mm
  4. Create a blank segment and switch to the Paint tool, with "Modify other segments" set to allow overlap. Choose the 3D brush, set the scale really big (I used about the 30-50% range), and paint over the endocranial space entirely without going out too much (overlap with bone is fine).
  5. While keeping Segment_2 as the segment to edit, go to Logical operators and subtract Segment_1. This should leave just the endocranial space (no more overlap with bone).
  6. Trim out any remaining overflow of the endocranium with the Scissors tool.
  7. Use the Margin effect to dilate Segment_2 by 3 mm (the same amount you grew the skull); a scripted sketch of steps 2, 3, and 5 follows below.
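
If you end up repeating this on many specimens, the non-interactive steps (2, 3, and 5) can be scripted with the standard Segment Editor scripting pattern. A minimal sketch, assuming the cropped volume is already in the scene; the node name and threshold values are placeholders to tune for your scan (recent Slicer versions use setSourceVolumeNode instead of setMasterVolumeNode):

```python
import slicer

volumeNode = slicer.util.getNode("CroppedVolume")  # placeholder node name

# Standard Segment Editor scripting scaffold
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.SetReferenceImageGeometryParameterFromVolumeNode(volumeNode)
skullID = segmentationNode.GetSegmentation().AddEmptySegment("Segment_1")

segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentEditorNode")
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(volumeNode)

# Step 2: threshold for the skull (placeholder bone intensity range)
segmentEditorNode.SetSelectedSegmentID(skullID)
segmentEditorWidget.setActiveEffectByName("Threshold")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MinimumThreshold", "300")
effect.setParameter("MaximumThreshold", "4000")
effect.self().onApply()

# Step 3: grow the skull segment by 3 mm to close small foramina
segmentEditorWidget.setActiveEffectByName("Margin")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MarginSizeMm", "3")
effect.self().onApply()

# Step 5 (after painting Segment_2 interactively): subtract the skull
endocastID = segmentationNode.GetSegmentation().GetSegmentIdBySegmentName("Segment_2")
segmentEditorNode.SetSelectedSegmentID(endocastID)
segmentEditorWidget.setActiveEffectByName("Logical operators")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("Operation", "SUBTRACT")
effect.setParameter("ModifierSegmentID", skullID)
effect.self().onApply()
```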

This dataset is big. The best thing you can do for yourself while learning Slicer is to practice with low-resolution data so that experimentation goes faster; once you have figured out the protocol, you can try it at higher resolution. But in terms of endocranial detail, you will not get much more out of the high-resolution data.

@lassoan @pieper it would be good to have a tool where we can draw a large 3D sphere from a specified center; almost like the Local Threshold tool, where you click and then expand the radius, but in 3D…

This is what the Paint effect does when the “3D brush” option is enabled. You can adjust the sphere size with Shift+mousewheel and fine-tune it by zooming the view in/out.

1 Like

@muratmaga That’s brilliant, thank you!

This should not be necessary. Slicer has so many powerful tools (VTK, ITK, and all the contributed extensions) that it is easy to put together fully automatic segmentation scripts for such simple tasks.

Here is how to create the endocast in Slicer fully automatically:

  1. Downsample with Crop Volume by a factor of 4 and remove the original volume from the scene
  2. Copy-paste this script into the Python console: Automatic endocranium segmentation from dry bone CT scan · GitHub
  3. Wait about 2 minutes

Prerequisite: install the SurfaceWrapSolidify extension.
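
If you want to check the prerequisite from the Python console before running the script, here is a small snippet (the extensions manager method name is as I recall it; treat it as an assumption):

```python
import slicer

# Fail early if the SurfaceWrapSolidify extension is not installed
emm = slicer.app.extensionsManagerModel()
if not emm.isExtensionInstalled("SurfaceWrapSolidify"):
    raise RuntimeError("Please install the SurfaceWrapSolidify extension first")
```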

The script is not optimized for performance. The skull solidify step could probably be made faster by tuning its parameters. It might also be possible to modify the Solidify effect to create internal surfaces instead of external ones (this would let us fill in discontinuities in the bone without smoothing the surface, and could be useful for any cavity segmentation).

If you find it useful, you can create a scripted module from the script and add it to SlicerMorph in a matter of minutes, but of course you would need to spend some more time creating an icon, a module documentation page, and a tutorial.

If you want to do manual segmentation, then there are a few variants of @muratmaga’s manual method that might make things a bit simpler (and might be useful for other segmentation tasks, too):

A. Use Scissors instead of a large paintbrush + the Islands effect instead of subtract

1-3. Same as in @muratmaga’s method
4. Create a blank segment, switch to the Scissors effect, set "Operation" → "Fill inside" and "Editable area" → "Outside all segments", then trace around the endocranial space (stay outside it with a safe margin; the only place where the exact line position matters is at the foramen magnum).


5. Switch to the Islands effect, choose "Keep selected island", then click anywhere in the endocranium in any of the slice views (this removes all the small disconnected regions outside the skull)
6. Switch to the Margin effect, set "Editable area" → "All", and grow by 3 mm (the same amount you grew the skull); a scripted equivalent of steps 5-6 is sketched below
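
For completeness, steps 5-6 also have a scripted equivalent, reusing the Segment Editor scaffold from the sketch earlier in this thread. Here the interactive "Keep selected island" click is replaced by keeping the largest island, which assumes the endocranium is the biggest connected component left in the new segment; verify that on your data:

```python
# Step 5 (scripted stand-in): keep only the largest connected component
segmentEditorWidget.setActiveEffectByName("Islands")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("Operation", "KEEP_LARGEST_ISLAND")
effect.self().onApply()

# Step 6: make the whole volume editable again so the grow can reclaim
# space occupied by the skull segment (the constant's location varies
# slightly across Slicer versions)
segmentEditorNode.SetMaskMode(slicer.vtkMRMLSegmentationNode.EditAllowedEverywhere)
segmentEditorNode.SetOverwriteMode(slicer.vtkMRMLSegmentEditorNode.OverwriteNone)
segmentEditorWidget.setActiveEffectByName("Margin")
effect = segmentEditorWidget.activeEffect()
effect.setParameter("MarginSizeMm", "3")
effect.self().onApply()
```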

B. “Plug” the foramen magnum to close the volume + fill it using “Add selected island”

1-3. Same as in @muratmaga’s method
4. Align one of the views to show the foramen magnum: enable slice intersections (in the toolbar, click the dropdown menu of the crosshair button and click “Slice intersections”), and use Shift+MouseMove to move slice views, Ctrl/Cmd + Alt + Left-click-and-drag to rotate slice views.


5. Fill in the foramen magnum in the view where the whole opening is visible

6. Create a new segment, switch to the Islands effect, choose “Add selected island”, and click inside the endocranial space in any of the slice views.
7. Switch to the Margin effect and grow by 3 mm (the same amount you grew the skull)

3 Likes

This is excellent @lassoan, thank you.

2 Likes

Seconded! Thank you so much @lassoan! The automatic script is great and I think it would make for a useful module for cavity segmentation, though for display purposes it would be ideal if there were a way to keep the solidifying from filling gaps such as those between the teeth or cheekbones.

Thank you for providing the manual options as well - I’ve learnt a lot from them about how to combine different tools. Your post would actually make for a nice blog post or mini tutorial if there were somewhere appropriate on the Slicer website, as I’m sure it would be of interest and value to other biological anthropologists and general users who might not come across it in this discussion topic.

1 Like

The solidified segment is currently approximately a convex hull. There are solidification options that avoid filling in large holes, but we did not enable them because they were not needed here. The solidified segment can be removed if you don’t find it useful.

If you need the complete bone model, then simple thresholding (without solidification) should work.

It would be great if you could write a blog post like that. I can help by providing more technical details and by reviewing and giving feedback on the text.

@smrolfe We will very likely wrap Andras’s example as an additional module for SlicerMorph.
@Mark_1 if you write step-by-step documentation, we can add it to the SlicerMorph project site (some examples are at https://slicermorph.github.io/#two). Some of these tutorials predate SlicerMorph and are hosted elsewhere, but nowadays it is really easy to use GitHub’s markdown and add a tutorial as a new page item. Let me know if this is a route you would like to pursue.

1 Like

I’m guessing that this would involve setting the “Carve out Cavities” parameters? If so, an issue might be that these parameters would likely need to be tweaked for different-sized skulls, whose cavity sizes may differ, which could interfere with keeping things fully automatic. In the end I’m hoping to do about 10 more skulls, including that of the smallest primate.

But as you suggested, thresholding and using one of your manual methods would likely be as quick as anything. Probably the greatest time-saver is actually the automatic thresholding part of your script, and I can just use that bit on its own.

I could certainly give it a go. I’ll have to have a think about how and when I might do it. The SlicerMorph project site looks like a good place to post it though @muratmaga, as it would fit in nicely with the other tutorials there.

Probably the same parameters would work for a wide range of skull sizes, but the extra carving steps might slightly increase the computation time and are not needed at all if the goal is just to extract the brain cavity.

The automatic threshold computation is available in the Threshold effect GUI, too (in the “automatic” section).
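
If you want that piece in your own script, here is a minimal sketch of computing an automatic bone threshold with SimpleITK from the Python console, using Otsu's method (one of the automatic options the Threshold effect offers); the node name is a placeholder:

```python
import SimpleITK as sitk
import sitkUtils
import slicer

volumeNode = slicer.util.getNode("CroppedVolume")  # placeholder node name
image = sitkUtils.PullVolumeFromSlicer(volumeNode)

# Otsu picks the threshold that best separates the two intensity classes
# (air/background vs. bone in a dry-bone CT scan)
otsuFilter = sitk.OtsuThresholdImageFilter()
otsuFilter.SetInsideValue(0)   # voxels below the threshold
otsuFilter.SetOutsideValue(1)  # voxels above the threshold (bone)
otsuFilter.Execute(image)
print("Automatic bone threshold:", otsuFilter.GetThreshold())
```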

@Mark_1 @muratmaga We have now added a fully automatic cavity segmentation option to the "Wrap Solidify" effect. It can extract the largest cavity (you can specify a hole size threshold to prevent leaking into other cavities through small holes), and it also has a manual region initialization option so that you can extract any cavity. It could be added to your excellent endocranium segmentation blog post as an additional option. See more details in this post: Fill or extract cavities in segmentations using the new "Wrap Solidify" effect

1 Like