Working with large datasets

Hi all, I'm lucky enough to have some really nice high-res microCT scans to work with. They are around the 1-2GB mark as the original .dcms, and if loaded into Fiji/ImageJ they show up as individual 16-bit images. I can get the sizes down of course by

  • cropping the area of interest
  • then resizing each image's x/y to 256x256
  • saving as an 8bit JPG

But this still often leaves me with a 70MB file (if saved as NIfTI) or a folder of images.
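
As a rough illustration, the same steps could also be scripted outside Fiji, e.g. with SimpleITK (a sketch only; the folder paths and crop region below are hypothetical placeholders):

```python
# Sketch: crop, downsample x/y to 256x256, convert to 8-bit, save as compressed NIfTI.
# Assumes SimpleITK is installed; paths and the ROI values are placeholders.
import SimpleITK as sitk

# Read the original 16-bit DICOM series
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("path/to/dicom_folder"))
image = reader.Execute()

# Crop to the area of interest (size and index are made-up values)
image = sitk.RegionOfInterest(image, [512, 512, image.GetSize()[2]], [100, 100, 0])

# Resample x/y to 256x256, keeping the slice count and physical extent
new_size = [256, 256, image.GetSize()[2]]
new_spacing = [sz * sp / ns for sz, sp, ns in zip(image.GetSize(), image.GetSpacing(), new_size)]
image = sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                      image.GetOrigin(), new_spacing, image.GetDirection(),
                      0, image.GetPixelID())

# Rescale intensities to 8-bit and save (the .nii.gz extension gives gzip compression)
image = sitk.Cast(sitk.RescaleIntensity(image, 0, 255), sitk.sitkUInt8)
sitk.WriteImage(image, "downsized.nii.gz")
```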

I’m loading these across the web so it would be good to get them down even further, if possible.

Do any of you have tips for getting small file sizes whilst maintaining as high an image quality as possible? Happy to explore Slicer's abilities, as well as other methods you may have heard of (I'm familiar with Fiji/ImageJ).

Thanks!
T
PS love the new forums =)

You can crop and downsample further using the Crop Volume module. If file size matters, use compression (NRRD and MHA use lossless compression, though the compression ratio is not very high).
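
For example, a quick way to test this is to re-save a volume with lossless compression, either from Slicer's Save Data dialog or in a script (a minimal sketch assuming SimpleITK; the file names are placeholders):

```python
# Minimal sketch: re-save an existing volume with lossless compression.
# Assumes SimpleITK; "input.nii.gz" is a placeholder for your file.
import SimpleITK as sitk

image = sitk.ReadImage("input.nii.gz")

# The NRRD and MHA writers both support lossless (gzip-based) compression
sitk.WriteImage(image, "output.nrrd", useCompression=True)
sitk.WriteImage(image, "output.mha", useCompression=True)
```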

What is the reason you need to reduce the data size? What specific constraints do you have? What information do you need to preserve?


Hi Andras thanks.

In essence

  1. we have built a browser-based medical imaging viewer
  2. with the aim of building a reference library of studies, covering both normal and pathological conditions, for our users

Our desire:
So ideally we’d like

  • these studies to load across the internet as quickly as possible,
  • whilst maintaining as much of the original detail as possible

We can currently load .dcm and .nifti files into our imaging viewer. If you think NRRD and MHA might offer some benefits related to the above, we can look at adding this ability too.

The approach in most digital pathology software is to generate several tiled layers at multiple resolutions – typically 3-7 depending on the raw resolution (some microscopy modalities can generate 100GB+ images). The client gets the lowest-resolution version first as a preview, and then subsets of each layer (only the tiles in view) are served on demand and stitched on the client as the user zooms in and moves around. That way you are transferring at most a few screens' worth of data at a time. Over a LAN the latency is not noticeable, and over a decent internet connection on the same continent the approach should be very usable. If you are trying to serve global users you should make sure that requesting new tiles takes exactly one round trip, because latency over satellite (or e.g. BOS-SYD) can be several hundred ms, and that adds up quickly if your client is chatty.
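
To make the idea concrete, here is a toy sketch of building such a pyramid for a single slice (assuming Pillow; the file names, tile size, and on-disk layout are arbitrary choices, and in practice existing tooling for tiled formats handles this for you):

```python
# Toy sketch of a tiled multi-resolution pyramid for one slice.
# Assumes Pillow; paths, tile size, and the directory layout are made up.
import os
from PIL import Image

TILE = 256
level_img = Image.open("slice_full_res.png").convert("L")  # 8-bit grayscale

level = 0  # level 0 = full resolution here; higher levels are coarser
while True:
    os.makedirs(f"tiles/{level}", exist_ok=True)
    # Cut the current level into TILE x TILE tiles, saved individually
    for y in range(0, level_img.height, TILE):
        for x in range(0, level_img.width, TILE):
            box = (x, y, min(x + TILE, level_img.width), min(y + TILE, level_img.height))
            level_img.crop(box).save(f"tiles/{level}/{x // TILE}_{y // TILE}.jpg", quality=85)
    if max(level_img.size) <= TILE:
        break  # coarsest level fits in a single tile and serves as the preview
    # Halve the resolution for the next, coarser level
    level_img = level_img.resize((max(1, level_img.width // 2),
                                  max(1, level_img.height // 2)), Image.LANCZOS)
    level += 1
```

The viewer would then request the coarsest level first and afterwards fetch only the tiles covering the current viewport at the zoom level in use.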

For research use the above approach should be fine. For clinical diagnostics there may be stricter requirements, because you can't have a radiologist accidentally look at a lossy image. But I have seen a similar approach from a well-known vendor where the parts of the image that had not yet loaded were rendered immediately, but unusably blurry, in order to make the client feel more responsive.

It’s been a while since I worked in this area, but there are a few open source projects you could look at – the one I remember is OpenSeaDragon, which some microscopy groups have used. Kitware had a viewer built from scratch too, but I don’t remember the name.


Are there any suggestions or existing projects for efficiently serving multiresolution high-res 3D data, like the microCT that @arumiat wants to work with?

Hi, dear arumiat,
I am a doctor, and recently I was looking for a micro-CT (DICOM) of the knee joint of mice for morphological education, but I could not find anything on the internet. This post seems to involve micro-CT. Could you share a copy with me and my students? My email: timeandoctor3@163.com.
I would also appreciate it if anyone could point me to a free download link for a micro-CT of the mouse knee joint.
Thanks.
Li Zhenzhu

Thanks all =).

Isaiah & Andrey, I will take a look at that. It seems perhaps a little complex for our needs, but I will investigate it some more and see if it might help to some degree.

Yes Steve, it would be great if anyone knows of projects or workflows that already manage to do what we're trying to do.

I can do some tests to see what happens to our current sequences when saved as NRRD / MHA as well.

Currently the studies are around 700x700 px with 1800 slices in Z. I can also resample the data in the z-direction in Slicer, right? I would be happy to double or triple the slice thickness on these studies to reduce the number of images along Z, as that should reduce the volume size significantly. Is the resample module in Slicer the right tool for this?

Probably the Crop Volume module is the most suitable for reducing the extent and spacing of a volume. I recommend resampling to isotropic spacing if you plan to visualize your image in 3D.
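
If you prefer scripting it rather than using the module GUI, resampling to isotropic spacing could look roughly like this (a sketch assuming SimpleITK; the file names are placeholders, and matching the coarsest existing spacing is only one possible choice of target):

```python
# Sketch: resample a volume to isotropic spacing, preserving its physical extent.
# Assumes SimpleITK; file names and the target-spacing choice are placeholders.
import SimpleITK as sitk

image = sitk.ReadImage("study.nrrd")

# Target spacing: match the coarsest existing spacing so nothing is upsampled
iso = max(image.GetSpacing())
new_spacing = [iso, iso, iso]

# Output size that keeps the same physical extent as the input
new_size = [int(round(sz * sp / iso)) for sz, sp in zip(image.GetSize(), image.GetSpacing())]

resampled = sitk.Resample(image, new_size, sitk.Transform(), sitk.sitkLinear,
                          image.GetOrigin(), new_spacing, image.GetDirection(),
                          0, image.GetPixelID())

sitk.WriteImage(resampled, "study_iso.nrrd", useCompression=True)
```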


Thanks I will try Andras.

Li Zhenzhu, we don't have any mouse microCT unfortunately. You could perhaps contact a laboratory and ask them to donate a specimen post-mortem, which you could then send to a local commercial microCT scanning service.

Thank you for your reply.
I am going to use it for medical teaching in my hospital after reconstructing it in 3D Slicer.
I will also try to get help from our medical university laboratory, which has a microCT scanner machine.