What is the minimum number of multi-frame series frames or single-frame series images reasonable for performing Slicer 3D Multi-Planar Reconstruction?
Which modalities will Slicer MPR process other than CT, MR, PT, or NM?
I’m launching Slicer MPR from another app, specifying the DICOM series folder in the Python command. How do I filter the types of DICOM series sent to Slicer? Will modality and number of frames be enough?
if (number of frames > 3) && (modality == CT, MR, PT, or NM)
then enable the Slicer MPR extension menu item in the image right-click menu
Is this correct?
Slicer treats even a single frame as a volume, so there's no specific cutoff, but running MPR on fewer than 3 frames probably doesn't make sense.
Slicer can also handle other volumetric modalities, such as US or OCT. But for all modalities it's more complex than just checking the modality: a CT may be a cine acquired without moving the table, so the frames are like frames of a movie rather than slices of a volume. Slicer tries to sort these things out when loading, and that usually works for a lot of scans, but sometimes manual intervention is required.
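To make this concrete, here is a minimal sketch of the kind of coarse pre-filter you described. The function name, modality set, and threshold are all illustrative assumptions, not a Slicer API, and as noted above such a filter cannot distinguish a cine CT from a true volume:

```python
# Coarse pre-filter based only on Modality (0008,0060) and frame/instance
# count. This is a first-pass guess: a series that passes may still be a
# cine (time series) rather than a spatial volume.
VOLUMETRIC_MODALITIES = {"CT", "MR", "PT", "NM", "US", "OCT"}

def mpr_candidate(modality: str, frame_count: int, min_frames: int = 3) -> bool:
    """Return True if the series is a plausible MPR candidate."""
    return modality in VOLUMETRIC_MODALITIES and frame_count >= min_frames
```

For example, a 120-slice CT series would pass, while a 2-frame MR or a 100-frame XA series would be filtered out.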
It may also be worth reviewing the support for specific modalities provided by extensions: Modules/Scripted/DICOM/DICOMExtensions.json
@jcfr @pieper @lassoan
OK, I'm still a little confused. Are you saying that I should put no filters on the types of images the users are allowed to process with MPR? I don't want MPR to crash or malfunction. Remember, I've got 30K-plus users and I don't want a lot of confusion and support tickets. I would imagine that if it's not slice-based, MPR viewing shouldn't be allowed. But how can I tell?
In short, you cannot tell. I would consider this one of the big failures of DICOM. It is not really the fault of the standard but you can blame the imaging vendors, who failed to adopt enhanced DICOM information objects.
The best you can do is guess when interpreting the series in a DICOM study. 2D, 2D+t, 3D, and 3D+t series are stored exactly the same way, and you need to inspect dozens of DICOM fields (many of them private fields) to decide what the most likely interpretation is. Most of the time there is a quite clear winner among the potential interpretations, but not always.

For example, if your slices are acquired at a few different time points and the number of slices at each time point is not the same, how would you know whether to group all slices into one 3D volume or present them as a 3D+t time series? If a slice is missing, how would you know whether that slice was never acquired (to reduce patient dose) or was just lost somewhere along a network transfer? If a single slice is missing, it is more likely that the slice was lost, and you can replace it by interpolating the two neighboring slices. However, if 10 are missing, it could just as well mean that the image was acquired with varying spacing between slices, or that the series is severely corrupted and should be discarded.
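The 3D-vs-3D+t ambiguity above can be illustrated with a toy heuristic. This is only a sketch: the field names mimic the z component of ImagePositionPatient and an acquisition time, but real loaders (including Slicer's) inspect many more tags than this:

```python
# Toy heuristic: guess whether a set of slices is one 3D volume or a
# 3D+t series by checking whether slice positions repeat across time.
from collections import Counter

def guess_interpretation(slices):
    """slices: list of (z_position, acquisition_time) tuples."""
    position_counts = Counter(z for z, _t in slices)
    repeats = [n for n in position_counts.values() if n > 1]
    if not repeats:
        return "3D"        # every position occurs once -> single volume
    if len(set(repeats)) == 1 and len(repeats) == len(position_counts):
        return "3D+t"      # every position repeated the same number of times
    return "ambiguous"     # uneven repetition: let the user decide
```

Note how quickly the guess degrades: drop one slice from one time point and the series falls into the "ambiguous" bucket, which is exactly the situation described above.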
You can make better guesses if you develop an application for a specific purpose. For example, if you develop structural heart intervention planning software, then you may decide to interpret all input images as 4D cardiac CTs and ignore/reject anything else, and you can implement specific hacks for each known mistake made by imaging system or software vendors. But there is no definitive solution for general-purpose viewers, other than making guesses and allowing users to choose an alternative interpretation if the default does not look good.
In Slicer, we run each DICOM study through a set of plugins; each plugin tries to fit a number of different interpretation rules and returns all the possible interpretations with an associated confidence value. By default, we load the interpretation with the highest confidence value, but the user can enable the advanced option and load the data using another interpretation. You could duplicate this logic in the QREADS application, but it is quite complex and constantly evolving (as imaging vendors come up with new software versions, imaging protocols, etc., and users discover issues in them).
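The examine-and-rank pattern described above can be sketched roughly as follows. The class and method names loosely follow Slicer's DICOM plugin convention (an `examine()` call returning "loadables" with a confidence), but this is a simplified illustration, not the actual Slicer code:

```python
# Minimal sketch of the plugin/confidence pattern: every plugin proposes
# interpretations ("loadables"), and the highest-confidence one wins by
# default while the others remain available for the user to pick.
from dataclasses import dataclass

@dataclass
class Loadable:
    name: str
    confidence: float  # 0..1, how likely this interpretation is correct

def best_interpretation(plugins, series_files):
    """Run every plugin over the series; return the most confident loadable."""
    loadables = []
    for plugin in plugins:
        loadables.extend(plugin.examine(series_files))
    return max(loadables, key=lambda l: l.confidence, default=None)
```

The key design point is that no interpretation is ever discarded: the ranking only decides the default, and the user can still override it when the guess is wrong.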