Load arbitrary contiguous DICOM files in a single volume

This is a feature request.

Via File/Add data/Choose Files to Add, if an arbitrary number of DICOM files are selected, each file is loaded as a separate volume. Can they be loaded as a single volume?

Alternatively, could there be a supplementary widget, like a slider or spin boxes, to skip some files at the beginning and at the end of the series while selecting a single file?

The current workflow is to load the whole volume and then use ‘Crop volume’ module.

Thanks for your opinions.


As a workaround, why not use your OS file explorer to copy the files of interest into a temp folder and load from there?

Well, it’s just about saving time when volumes are loaded multiple times a day in short time frames. It would simply remove the need to crop down the volume. Anyway, it’s only a request.

Yes, understand that it’s a feature request - I’m just concerned about complicating the user interface for a special case that might be addressed another way (since the user interface is already complex!).

The trouble with DICOM files is that you need special knowledge of the mapping between file names and volume geometry for loading subsets of files to be useful, so I’m concerned that adding custom GUI elements for this purpose could be error-prone when used on different datasets.

On the other hand, a small custom Python script (or scripted module) that exploits your knowledge of the file naming could help with the particular kind of data you are loading.
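For example, here is a minimal sketch of such a script, under the assumption that file names end in a numeric slice index (e.g. IM0003.dcm); the regex and range bounds are hypothetical and would need to be adapted to your scanner's naming convention:

```python
import os
import re

def select_slice_range(directory, start, end, pattern=r"(\d+)\.dcm$"):
    """Return the files whose numeric suffix falls in [start, end],
    sorted by that number. The naming pattern is an assumption;
    adjust the regex to match your scanner's convention."""
    matches = []
    for name in os.listdir(directory):
        m = re.search(pattern, name)
        if m and start <= int(m.group(1)) <= end:
            matches.append((int(m.group(1)), os.path.join(directory, name)))
    return [path for _, path in sorted(matches)]
```

In Slicer, the returned file list could then be handed to the DICOM loading machinery instead of loading the whole directory; the point is only that the subset selection lives in a few lines of dataset-specific code rather than in the general-purpose GUI.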

Ah! The file naming scheme does indeed vary between CT machines. Sometimes DICOM and JPEG files are mixed in the same directory, and multiple series may exist in the same directory as well.

So it’s a bad idea, let’s drop it. Sorry to have bothered you.

The Crop volume module is better than simply skipping some frames, as you often want to resample the volume as well. You can save the cropped & resampled volume to a research file format (MetaImage, NRRD, …) or export it back to DICOM.
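To illustrate why cropping and resampling go together, here is a rough numpy sketch of the two steps combined. The decimation here is nearest-neighbor only (keep every Nth voxel); Slicer's Crop volume module does proper interpolated resampling, so this is a conceptual illustration, not a substitute:

```python
import numpy as np

def crop_and_resample(volume, zmin, zmax, ymin, ymax, xmin, xmax, factor=2):
    """Crop a (z, y, x) array to a region of interest, then coarsely
    downsample it by keeping every `factor`-th voxel along each axis.
    Nearest-neighbor decimation only; real resampling would interpolate."""
    roi = volume[zmin:zmax, ymin:ymax, xmin:xmax]
    return roi[::factor, ::factor, ::factor]
```

Skipping input frames only reduces the z extent; cropping plus resampling reduces all three axes at once, which is where most of the memory savings come from.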

While I agree that Crop volume is a very nice tool, I think an import tool has utility for datasets from high-res microCT, where 2000x2000x2000 is not an uncommon dataset size. Importing the entire dataset only to crop a portion of it is usually a very time-consuming process.
Obviously this can be done outside of Slicer, but it would enhance the user experience.

2000^3 images are 50-100x larger than typical clinical images, so I agree that they would deserve special treatment. What is the typical image size after you import, crop, and resample the volume? What operations are you performing most commonly (importing, various visualizations, segmentation, registration, filtering, etc.)? Which ones are the most time-consuming?

In our typical uses cases, these large volumes usually contain multiple specimens (e.g., 4-5 mouse fetuses) scanned simultaneously to cut the costs associated with imaging.

I never import them through Slicer, because our slice sequences are output in PNG format, which Slicer reads as a vector volume, increasing memory consumption threefold for something that is already 8 GB to begin with.

An option to read bmp/png/jpg as a scalar volume rather than a vector volume would be a useful convenience feature here.
A stack-limiting feature (to constrain the top and bottom) as well as an ROI to constrain within the slice would be good features to help import these large sequences (I am assuming the same feature could be provided for DICOM sequences as well).
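A numpy sketch of that import path (the function name, ROI convention, and slice list are hypothetical; the key assumption is that a grayscale PNG saved as RGB has three identical channels, so keeping a single channel loses nothing and avoids the 3x memory cost):

```python
import numpy as np

def import_png_stack(slices_rgb, first, last, roi):
    """slices_rgb: list of (h, w, 3) uint8 arrays, one per PNG slice.
    Keep only slices [first:last], crop each to roi = (y0, y1, x0, x1),
    and collapse RGB to a single scalar channel."""
    y0, y1, x0, x1 = roi
    kept = slices_rgb[first:last]
    # Channel 0 suffices when the source is grayscale stored as RGB.
    return np.stack([s[y0:y1, x0:x1, 0] for s in kept])
```

Combining the stack limit, in-plane ROI, and scalar conversion at import time means the full 8 GB vector volume never has to exist in memory at all.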

We currently do all this outside of Slicer (e.g., in Fiji). After cutting/cropping, our volumes are typically smaller than the 1000x1000x1000 range (more around 600-800 cubic voxels), for which most of the functions in Slicer work fairly well. Most of our Slicer work is either 3D visualization or manual segmentation. If I need a 3D representation in the Segment Editor to do my task, I usually downsample further, because model creation at those sizes takes quite a while.

I can provide a sample large slice sequence, if that would help you see the challenges.

Again, none of these are real deal-breakers for someone who is interested in using the extensive imaging tools Slicer provides, but they would help novice users (especially those outside the medical imaging domain) get their data in faster.

It’s good that only importing is challenging. It would be quite easy to write a Python script (it would take 10-20 hours for an experienced Slicer module developer) that splits the large volume into smaller ones and saves each as a separate grayscale volume.
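The splitting step itself is short; here is a pure-numpy sketch (in Slicer the array could come from slicer.util.arrayFromVolume, and each piece, e.g. one mouse fetus, would then be saved as its own grayscale volume; the block counts are placeholders):

```python
import numpy as np

def split_volume(volume, blocks=(2, 2, 2)):
    """Split a (z, y, x) array into equal-sized sub-volumes, e.g. to
    separate specimens that were scanned together in one acquisition.
    Returns the pieces in z-major order."""
    bz, by, bx = blocks
    pieces = []
    for zpart in np.array_split(volume, bz, axis=0):
        for ypart in np.array_split(zpart, by, axis=1):
            pieces.extend(np.array_split(ypart, bx, axis=2))
    return pieces
```

Most of the 10-20 hour estimate would go into the module GUI, file I/O, and testing rather than this core logic.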

After discovering the Extension Wizard module, I glued together Python code from different sources to come up with a module that suits my workflow.

I am posting it here in the hope that experienced developers will comment on its technical aspects, if anyone can find the time.

Please note that I am not claiming that this module would be useful to others.


I took a quick look - you seem to be getting the idea and I hope it’s proving productive. Did you run into any specific questions? As a hint, if you put the code in a GitHub repository, it’s possible to have discussion threads about specific lines or blocks of code.

I’ll give this a try over the weekend, thanks.


Here is an updated GitHub repository for this project. I would like to discuss two subjects, if you don’t mind.


Thanks for putting the code in the repo :+1:

What were the two subjects you wanted to discuss?

  1. Every time an ROI is loaded using slicer.util.loadAnnotationROI, an entry is added to the ‘Recently loaded’ menu, even if it already exists there. Can this be prevented through scripting? Or can we delete the last inserted entry? Or should Slicer itself prevent duplicates in the ‘Recently loaded’ menu?

  2. At www.commontk.org/docs/html/index.html and apidocs.slicer.org/master/index.html, we have a complete, regular C++ API reference for Slicer itself and the CTK widgets. Does a similarly structured API reference exist for Python scripting? I could not find one. Example code is very useful, and that’s what I have used. (But perhaps serious scripting requires good knowledge of Slicer’s internals, which is well over my head… I should not go any further.)

Thanks for your interest.

Regarding the first topic, it makes sense not to have duplicates in the Recently Loaded menu.

Probably we want to avoid adding duplicates here:

Regarding the Python API, it’s true that we don’t have good documentation. @lassoan and @jcfr have some ideas; we definitely want to start keeping the programming documentation in a repository that is versioned with the code, so things stay in sync.