Hey @lassoan, does Slicer automatically detect whether a GPU, OpenGL2, or CPU-only rendering is available and choose accordingly? Will it fall back to the software renderer only if no other graphics rendering capability is available? This is probably a great incentive for merging back into the open-source community.
Will this thread result in a downsized Slicer installation that could be used in a hospital network with approximately 3000 Windows clients? Most of them would probably need software rendering, as they have no GPU. We would be interested in that. Does software rendering really work with volume rendering, which seems to be hardware-intensive whenever I use it?
QREADS is the imaging app that launches SlicerQREADS via a DOS shell command, passing the folder path as the single argument. When the user selects SlicerQREADS in the series right-mouse menu, QREADS first checks whether the SlicerQREADS installation folder exists and contains SlicerQREADS.exe. If not, the menu item in the right-mouse menu reads “Install 3D MPR?” If the user answers yes to the “Are you sure” dialog, a script is executed that creates the module’s installation folder, downloads the SlicerQREADS.zip file from the file share, and unzips it, creating the expected SlicerQREADS folder structure. In the zip file, the app name SlicerQREADS.exe is preceded by an underscore, and the very last thing the installation script does is rename _SlicerQREADS.exe to SlicerQREADS.exe, without the underscore. That way QREADS knows that the module is fully installed; the menu item then reads simply “3D Multi-Planar Reconstruction,” and the module launches successfully.
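For illustration, here is a minimal Python sketch of that install sequence. The share path and install folder are assumptions for the example (the actual deployment script and locations differ); the key point is that the underscore rename happens last, so the presence of SlicerQREADS.exe acts as the “fully installed” marker:

```python
import shutil
import zipfile
from pathlib import Path

# Hypothetical locations; the real file share and install folder differ.
SHARE_ZIP = Path(r"\\fileshare\deploy\SlicerQREADS.zip")
INSTALL_DIR = Path(r"C:\SlicerQREADS")

def install_slicer_qreads():
    # Create the module's installation folder.
    INSTALL_DIR.mkdir(parents=True, exist_ok=True)

    # Copy the zip from the file share and unzip it, producing the
    # expected SlicerQREADS folder structure. The executable inside
    # ships as _SlicerQREADS.exe (with the leading underscore).
    local_zip = INSTALL_DIR / "SlicerQREADS.zip"
    shutil.copyfile(SHARE_ZIP, local_zip)
    with zipfile.ZipFile(local_zip) as zf:
        zf.extractall(INSTALL_DIR)

    # Deliberately the very last step: the presence of SlicerQREADS.exe
    # (no underscore) signals to QREADS that the install is complete.
    (INSTALL_DIR / "_SlicerQREADS.exe").rename(INSTALL_DIR / "SlicerQREADS.exe")
```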
We have about 35,000 potential users of SlicerQREADS. When we first deployed the QREADS version that included the launching of the SlicerQREADS module, we had the Windows Workstation team pre-deploy the SlicerQREADS app to all systems that had QREADS installed. That way, the scenario described above is only performed for new systems and users that come online without SlicerQREADS installed…
Please let me know if I can help any further. Thank you!
The SlicerQREADS module is a smaller package, separate from the build sources, although still about 400 MB in our case. It zips down to about 150 MB and downloads and unzips quickly.
I think the best way to show you the performance is through the following demo, which is recorded in real time. As you can see, it’s tolerable. Performance will vary depending on internet download speeds; in this video, the downloads were pretty fast, with about 200 DCM files. We maximize the download rate by having the DCMs stacked into one file on the QREADS server, downloaded to the client via HTTP, and then de-stacked for the SlicerQREADS module. Anyway, the delays are not bad once the DCMs are present, and that’s the real bottleneck. You might use a DICOM receiver in your Slicer app, however; for us, that takes longer, since we have our stacked DCMs on a file server ready to be downloaded.
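A minimal sketch of that download-and-de-stack step, for illustration only: the URL and destination folder are hypothetical, and since the stacked-file format on the QREADS server isn’t described here, a tar archive stands in for it:

```python
import tarfile
import urllib.request
from pathlib import Path

# Hypothetical URL and folder; the real stacked-file container format
# on the QREADS server is not public, so tar is used as a stand-in.
STACK_URL = "http://qreads-server.example/stacks/series123.tar"
DEST_DIR = Path(r"C:\SlicerQREADS\dicom\series123")

def fetch_and_destack():
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    local_stack = DEST_DIR / "stack.tar"

    # One HTTP GET for the whole series is faster than ~200 individual
    # DCM transfers.
    urllib.request.urlretrieve(STACK_URL, str(local_stack))

    # De-stack into individual files for the SlicerQREADS module.
    with tarfile.open(local_stack) as tf:
        tf.extractall(DEST_DIR)
```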
The only thing that I think could be improved is how I adjust the color and opacity transfer functions for the 3D renderings. Could you view this video and let me know if this is what’s expected? I really don’t know what I’m doing here. I expect that when I select CT BONE, I only see bone; when I select pulmonary arteries, I see only the arteries; and so on. And then I think my slider adjustments are all wrong. Do you agree?
What you are doing is the simplest way of using volume rendering, so it makes sense. The preset names describe what each preset’s default threshold and color scheme are designed for. Since opacified vessels and bones have the same intensity on CT, they cannot be separated simply by adjusting transfer functions. Therefore, if you want to separate them and show only the vessels, you need at least an approximate segmentation.
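For reference, applying one of these presets from Slicer’s Python console looks roughly like this (a minimal sketch following the pattern in the Slicer script repository; the volume node name is hypothetical):

```python
import slicer

# Hypothetical volume node name; use your loaded CT volume here.
volumeNode = slicer.util.getNode("CTChest")

vrLogic = slicer.modules.volumerendering.logic()
displayNode = vrLogic.CreateDefaultVolumeRenderingNodes(volumeNode)

# Apply a built-in preset. 'CT-Bone' uses a threshold/color ramp tuned
# for bone, but it cannot exclude opacified vessels of similar intensity.
preset = vrLogic.GetPresetByName("CT-Bone")
displayNode.GetVolumePropertyNode().Copy(preset)
displayNode.SetVisibility(True)
```

The sliders in the Volume Rendering module shift and scale these same transfer functions, which is why they can brighten or hide intensity ranges but cannot distinguish two tissues that share the same range.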
Neural networks that can perform an approximate segmentation of major bones, organs, vasculature, etc. are already partly available (e.g., in the NVIDIA AI-Assisted Annotation and MONAILabel extensions). However, it would take some effort to extend them to all structures of interest and to put together a processing workflow that performs the automatic segmentations and sets up the desired visualization.
The processing workflow would be specific to a clinical application (anatomy, disease, treatment approach), and it might not always be fully automatic, so it may be out of scope for a general-purpose image viewer. Exposing more manual segmentation and quantification tools (Segment Editor, Markups, …) could be a more realistic goal, as these manual or semi-automatic tools are applicable to a wide range of clinical applications, but of course they would be more challenging for users to learn.
No word yet on contributing back to open source. I’ll keep pushing from my position. But, even if it does not happen, I know you guys can whip this thing up fast if necessary.