Volume rendering produces block of "white noise"


I’m very new to using 3D Slicer (this is my first day), so apologies if this is a very basic issue, but I’ve been trying to produce a volume render of a microCT scan of a skull and all that I get is a block of white noise.

I’ve tried all the different presets and played around with the volume properties, but to no avail. I’m able to render volumes from other datasets like MRHead and CTChest with no problem.

The dataset I’m trying to render is this scan of a primate skull from MorphoSource: https://www.morphosource.org/index.php/Detail/MediaDetail/Show/media_file_id/4390. I initially had some problems loading this dataset and had to use the DICOM Patcher to fix the files, so I don’t know whether I’m just mishandling this particular dataset. I’ve looked through other similar posts about volume rendering issues but I’m afraid I’m still at a loss, so any help or advice would be appreciated.

Many thanks.

Operating system: Windows 10 Pro 64-bit
Slicer version: 4.10.2


Thanks for providing the link to the data and the steps to reproduce. The data is super high-res (and quite beautiful) so you’ll need to play around for a bit. This data pushes the limits.

What I did to get the image below:

  • downloaded the data from MorphoSource
  • unzipped it and ran it through the DICOM Patcher, as you suggested
  • imported it into the DICOM database and loaded it
  • went into the Volume Rendering module
  • tried GPU volume rendering, which did nothing (an allocation-failure error in the log means the volume is too big for GPU memory)
  • switched to CPU rendering and waited
  • dragged the Shift slider to get a reasonable image

I suggest avoiding the presets for this data, since it’s not like what they were designed for. Instead, start with the default and use the shift to get something good.

Also, I think this data is such high resolution that it doesn’t work well with the internal volume rendering assumptions and calculations (which makes it a good test case for the volume renderer). Fortunately, if you downsample it, e.g. with the Crop Volume module, it fits in GPU memory and looks really great. Here I downsampled by 1/2 in all directions. Plus it’s much faster to work with on the GPU.
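
For anyone curious what the downsampling amounts to, here is a minimal sketch outside Slicer using NumPy and SciPy (the array is a random stand-in for the real scan; Crop Volume with a spacing scale of 2 does roughly this kind of resampling for you):

```python
import numpy as np
from scipy.ndimage import zoom

# Random stand-in for the microCT volume (the real data would come
# from the patched DICOM series).
volume = np.random.randint(0, 2**16, size=(64, 64, 64)).astype(np.uint16)

# Downsample by 1/2 in all directions, like Crop Volume with a
# spacing scale of 2. order=1 is trilinear interpolation; the
# result needs only 1/8 of the memory.
small = zoom(volume, 0.5, order=1)

print(volume.shape, small.shape)  # (64, 64, 64) (32, 32, 32)
```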

You can also use the presets with this downsampled data, although they are a bit fiddly.


This is actually not a particularly large volume, but it is 16 bit. Depending on what one wants to see, instead of downsampling at the expense of morphological detail, I would suggest reducing the intensity range from 16 bit to 8 bit (first rescale, then cast; there is a short tutorial in the SlicerMorph workshop: https://github.com/SlicerMorph/W_2020/tree/master/Lab11_SlicerPlusPlus#rescalecast)
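
The rescale-then-cast idea can be sketched with NumPy (the sample values and window limits below are hypothetical placeholders; pick them from your actual data):

```python
import numpy as np

def rescale_and_cast(volume, low, high):
    """Linearly map [low, high] to [0, 255], clip, and cast to uint8
    (rescale first, then cast, as in the SlicerMorph tutorial)."""
    v = volume.astype(np.float32)
    v = (v - low) / (high - low) * 255.0
    return np.clip(v, 0, 255).astype(np.uint8)

# Hypothetical 16-bit samples and window limits:
ct16 = np.array([5000, 5700, 7000, 9000, 12000], dtype=np.uint16)
ct8 = rescale_and_cast(ct16, low=5700, high=9000)
print(ct8)  # [  0   0 100 255 255]
```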

However, the main issue for consumers of MorphoSource data and SlicerMorphers is these non-standard DICOMs. With the recent changes, the old shortcut of just dragging and dropping one of the DCM files and bypassing the DICOM browser no longer works (I get an ITK IO error). Patching is so slow that by the time one dataset is patched, I can open tens of them manually in Fiji, save them as NRRD, and load them into Slicer. And for some reason the dcm2niix extension doesn’t like these datasets either.

Would it be possible to bring back the DCM drag-and-drop approach, not in core Slicer, but perhaps as part of the SlicerMorph extension? What do you think, @lassoan @Chris_Rorden @jcfr


First of all, MorphoSource must be fixed. It is a shame to have all that valuable data in an invalid format. It does not just cost users time, it also promotes bad practices. We can help them validate their DICOM files and/or recommend other formats. This fix would solve many issues: failure to import, slow patching, etc.

We haven’t yet removed the ability to load DICOM files using the Add data module, but that is still the plan. If Add data cannot load this dataset, it may mean that the data is just too badly corrupted.

We will improve “time to first image” in the DICOM browser by showing thumbnails. DICOM import and loading speed should already be very good (except for a very recent regression that will be fixed soon).


I agree that fixing MorphoSource would be the ideal solution, but that is unlikely to happen: they only want to act as a repository, and my understanding is that they consider data integrity issues the responsibility of the data donors. I can’t speak for them (I will forward this thread to Doug Boyer), but I don’t think they have the bandwidth or the resources to do the conversion. There is a wide variety of groups donating data from different scanners, so each issue is likely to be unique.

As for adding DCM files via Add Data: drag and drop has been failing like this for some time now (not just with this dataset, though I don’t have another example right now). For whatever it’s worth, Fiji reads the sequence fine.

vtkITKArchetypeImageSeriesReader::ExecuteInformation: Cannot open C:/Users/murat/Downloads/Morphosource_mcz_mamm_23167_M4821-4390/mcz_mamm_23167_M4821-4390/Pan_troglodytes_23167/new_Pan_231671122.dcm. ITK exception info: error in unknown: Could not create IO object for reading file C:/Users/murat/Downloads/Morphosource_mcz_mamm_23167_M4821-4390/mcz_mamm_23167_M4821-4390/Pan_troglodytes_23167/new_Pan_231671122.dcm
Tried to create one of the following:
You probably failed to set a file suffix, or
set the suffix to an unsupported type.

Algorithm vtkITKArchetypeDiffusionTensorImageReaderFile(000001F7F17A96C0) returned failure for request: vtkInformation (000001F791027340)
Debug: Off
Modified Time: 318245
Reference Count: 1
Registered Events: (none)


Thank you all for responding so quickly and for your advice, and thanks @pieper for working out a solution - I really appreciate it. I had been wondering whether the resolution of the data might have been an issue because the dataset took so much longer to load than the example datasets, so thank you for also answering what might have been my next question about how to reduce the resolution! I probably should have said, but my ultimate goal is to segment/highlight the braincase of this skull and the skulls of several other primates to use as illustrations in my thesis - along the lines of the bird skull below - so I don’t need the images to be anywhere near the resolution that might be required for analysis.


Also, it’s interesting to learn about the issues with MorphoSource. I had no idea - at first glance it looks like such a good resource.


@Mark_1 - good luck with your research!

@muratmaga and @lassoan - we face similar challenges with data in other archives such as TCIA and NDA. It seems the people contributing the data don’t have the bandwidth (or guidance) to put the data in better shape, and the archives are glad to take whatever people are willing to share.

What I think we need more of is secondary archives, where people can contribute back ‘crowd curated’ datasets. Here’s one good example from @fedorov where they took some challenging but valuable research data and put it into a form that should facilitate further work.

Perhaps Doug and the MorphoSource community could encourage people to clean up and resubmit data (and give them some tokens of academic credit for their contributions). I think it would add a lot of value to the resource.


On the subject of academic tokens, there are more and more journals these days that publish data descriptors and promote FAIR data. Some examples include Nature Scientific Data and Medical Physics dataset articles.

Here’s a data descriptor from the U. Penn group (GBM) that has been cited extremely widely: https://www.nature.com/articles/sdata2017117.

If someone thinks they have a valuable dataset to share, definitely there are opportunities to publish and build academic credit.


Back to the original topic of how to volume render this data, one difficulty is that the material around the skull has abnormally high signal. There are pixels that are clearly bone with a value of about 6000, but there’s also clearly background with a value of around 5700. I don’t see any details on the MorphoSource site, but maybe these bones are preserved in some kind of dense medium?


These are all osteological specimens from Harvard’s museum. They wouldn’t have been scanned in any type of medium; they are usually just placed on a piece of foam. Most likely it is a poorly calibrated scan, as the actual intensity values come nowhere close to spanning the full 16-bit range. So rescaling and converting to 8 bit makes sense for this data, because the original intensity values are somewhat arbitrary.

@Mark_1 MorphoSource is indeed a very valuable resource. However, they do not generate the bulk of the data (only stuff that says Duke primate collection) and certainly not this particular dataset. In any event, if you only need the endocast, you don’t need such high resolution. You can plug the foramen magnum and the other foramina using Segment Editor and then fill the remaining endocranial space.
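
The plug-and-fill idea can be illustrated on a toy volume with SciPy’s hole filling (a hollow cube stands in for the skull here; in Slicer you would do the plugging and filling interactively in the Segment Editor):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

# Toy "skull": a hollow cube shell standing in for the cranium.
bone = np.zeros((20, 20, 20), dtype=bool)
bone[5:15, 5:15, 5:15] = True
bone[6:14, 6:14, 6:14] = False  # hollow interior = endocranial space

# Once every opening (foramen) is plugged, the enclosed cavity can be
# recovered by filling holes and subtracting the bone itself.
filled = binary_fill_holes(bone)
endocast = filled & ~bone
print(endocast.sum())  # 512 voxels of endocranial space (8**3)
```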

Alternatively, I just saw a small R utility called endomaker that makes a decent endocast from a 3D mesh of a skull (https://twitter.com/ProficoA/status/1233725674315288576). It seems to work, but I didn’t go beyond running the sample data.

Also, why go through all this work only to make illustrations? It would be one click (or one line of code) to measure the 3D volumes of those endocrania.


Converting to 8 bit is not a good idea here, since a lot of the dynamic range in the scan would be lost, but there are probably other filtering steps that could help increase the contrast between thin bone and the surrounding air.


That actually depends on what you want to do. What I am seeing is that the endocranial space has a background value of around 5700, and the enamel, the densest structure in mammalian skulls, is in the 9000s. So the real range is roughly 4000 units. Rescaling using this biological range and converting to 8 bit will give you a 50% reduction in data volume. Achieving an equivalent reduction by lowering the spatial resolution, while keeping 16-bit intensities, would cost you voxel-level detail.
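
The trade-off can be checked with quick arithmetic, using the approximate intensity values quoted above:

```python
# Approximate values from this scan (as discussed in the thread):
background = 5700          # endocranial space / surrounding air
enamel = 9000              # densest structure, values in the 9000s
biological_range = 4000    # the post's ~4000-unit estimate

# After rescaling that window to 8 bit, each gray level spans about:
step = biological_range / 255
print(round(step, 1))  # 15.7 intensity units per gray level

# versus 16 bit, where the same window keeps all ~4000 distinct steps
# at twice the storage per voxel.
```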



True, it depends on what you need to accomplish. But if you go to 8 bits that’s only 256 gray levels and clearly that would be throwing away tons of data here.

8-bit CT images of bone+air+soft tissue are terrible, essentially unusable. However, if you only have bone+air then 8 bits may be enough, especially if partial volume effect is negligible (because you have extremely small voxel size).


Yes, these data are proving very difficult to work with, at least for me. Unfortunately I haven’t been able to reproduce a render as nice as yours @pieper.

Here’s a link to the paper describing the dataset and the collection methods used: https://www.nature.com/articles/sdata20161. Yes, it does say that the specimens were placed on a piece of foam for the scanning.

I tried rescaling and casting following the tutorial you linked to @muratmaga, but it didn’t seem to help. The endomaker R function that you mentioned works really well though - very fast and seemingly accurate. Thanks for the tip! However, as you said, that function requires a 3D mesh of a skull as input, so I assume I still need to have a good volume render of the skull before I can make a model from it to export it as a 3D mesh?


I used a spacing scale of 2 with the Crop Volume module, in case that helps.

If all you need is a mesh, you can skip the whole volume rendering part and go straight to the Segment Editor. You’ll find lots of tutorials for that.


As @pieper said, you don’t need volume rendering to make a 3D model. If all you care about is the endocranial cast, you might even downsample by a factor of 4 using the Crop Volume module and then use the downsampled volume in the Segment Editor.

We have some tutorials, but ultimately all you will need is the Threshold effect (and perhaps the Islands/Scissors tools to remove the mandible).
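
As a toy sketch of what threshold + keep-largest-island does, here is the same logic with SciPy outside Slicer (the threshold value and blob layout are made up; in Slicer you would use the Threshold and Islands effects interactively):

```python
import numpy as np
from scipy.ndimage import label

# Made-up volume: a large "cranium" block and a small detached
# "mandible" fragment, both above a hypothetical bone threshold.
volume = np.zeros((30, 30, 30), dtype=np.uint16)
volume[2:20, 2:20, 2:20] = 8000
volume[22:26, 22:26, 22:26] = 8000

bone = volume > 6000  # Threshold effect

# Islands effect, "keep largest": label connected components and
# keep only the biggest one.
labels, n = label(bone)
sizes = np.bincount(labels.ravel())
sizes[0] = 0  # ignore the background label
largest = labels == sizes.argmax()
print(n, largest.sum())  # 2 components; largest keeps 18**3 = 5832 voxels
```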

Then go to Segmentations module to export your segmentation as a 3D model and then save it as PLY (through the Save As dialog box).


@pieper I agree with your sense that secondary archives could help. In this example the data is DICOM-like but not fully DICOM-compliant. One could also set the image intensities to match Hounsfield units, or even zero out the air voxels, which would dramatically improve compression and file transfer. The challenge is doing this in a way that complies with the license of the primary source. In this example, a secondary source would breach the agreement:

Sharing these files, their derivatives, and/or 3D prints generated from them is only allowed if the user and the third party/parties are engaged in a non-commercial pursuit that cannot be reasonably achieved through each involved individual independently downloading the media.

By the way, I did adapt dcm2niix to handle most of these images as well as possible. Therefore, an alternative for users would be to convert the images using the SlicerDcm2nii plugin.

Oh no, this is exactly the kind of change that I worry about: software developers degrade the safety and/or performance of their software for all data sets, and increase their maintenance and testing workload, just to make it easier to load corrupted DICOM-like data sets.

Database maintainers have a responsibility to stop these harmful trends. A solution could be for them to reserve the right to make equivalent transformations to submitted data sets in order to fix file format errors. I don’t think any data provider group would object to such changes.

One can argue that it is the imaging system vendors’ responsibility to produce compliant files, or that the standard should offer simpler versions (or subsets) that are easy to follow. I don’t think the people generating the images are going out of their way to make them non-compliant. They are not imaging technicians/specialists; they are most likely following the directions of the vendor rep who sold them the system.
