Hi Steve, can you provide the address of “HIF exhibit in the Learning Center (IN019-EC-TUA)”?
Hi Tashrif - that refers to a booth at RSNA in Chicago next week, in case anyone is in town for that and wants to meet face to face.
Thanks Steve, this looks very useful. Say I wanted to build a custom 3D reconstruction web app using medical images for 3D printing etc., would OHIF and VTK.js be the best options for the front end or are there other good contenders? What about good options for back end processing?
There are some other very good options to consider, and there are always new developments to keep an eye on, but at least for the projects I’m working on we have settled on the OHIF and VTK.js approach described in the tutorials. OHIF itself is modular, so the viewports can be rendered by other frameworks, like Cornerstone and OpenLayers. We may also end up using threejs/aframe. There’s also XTK and AMI, and no doubt others I’m not thinking of right now.
For the medical imaging back end there are several good DICOMweb servers in various languages, and Google offers DICOMweb as a service, so for the data side of things that’s a logical choice.
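To make the DICOMweb side concrete, a QIDO-RS study search is just an HTTP GET with query parameters, so any of those servers (or Google’s hosted service) can be queried the same way. Here is a minimal sketch in plain Python, using only the standard library; the service root URL is a placeholder, not a real endpoint:

```python
from urllib.parse import urlencode

# Placeholder DICOMweb service root - substitute your own server's URL.
service_root = "https://example.com/dicomweb"

# QIDO-RS study search: find CT studies for a given PatientID and ask the
# server to include the StudyDescription tag (0008,1030) in the response.
params = {
    "PatientID": "12345",
    "ModalitiesInStudy": "CT",
    "includefield": "00081030",
}
qido_url = f"{service_root}/studies?{urlencode(params)}"
print(qido_url)
```

The same pattern (WADO-RS for retrieval, STOW-RS for upload) means the front end and back end only need to agree on the standard, not on a particular server implementation.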
For server-side processing I personally use Slicer on cloud VMs so that we can leverage the same codebase for desktop and cloud/web scenarios.
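As a sketch of what running Slicer headlessly on a VM can look like: Slicer accepts `--no-main-window` and `--python-script` on the command line, so a cloud job can just launch the same executable used on the desktop. The paths below are hypothetical, and the actual subprocess call is commented out so the snippet stays illustrative:

```python
import subprocess  # noqa: F401  (used if you uncomment the call below)

# Hypothetical paths - adjust for your own VM image and job layout.
slicer_executable = "/opt/slicer/Slicer"
processing_script = "/srv/jobs/segment_case.py"
input_volume = "/srv/data/case001.nrrd"

# Build a headless Slicer invocation; extra args after the script name
# are passed through to the Python script itself.
cmd = [
    slicer_executable,
    "--no-main-window",
    "--python-script", processing_script,
    input_volume,
]
# subprocess.run(cmd, check=True)  # uncomment on a machine with Slicer installed
print(" ".join(cmd))
```

The appeal of this setup is exactly what the post says: the processing script is ordinary Slicer Python, so it can be developed interactively on the desktop and then run unchanged on the cloud VM.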
These are all works in progress, of course. I hope to have some more concrete, worked-out examples soon.
We are also exploring using Slicer as a front end (launched from the OHIF viewer) for more sophisticated visualization and processing, such as segmentation, and for processing large data sets without transferring them to the user’s computer. See this topic for more information and updates on the progress: Can 3D Slicer be hosted on a rendering server?
Great, thanks for the pointers. Where do you see the future of 3D medical imaging going - more toward the cloud and software as a service, or will there always be a place for desktop software like 3D Slicer? This is specifically for automated medical image processing.
Looks interesting and I will follow it closely.
Most resource-intensive professional software runs as native applications (desktop and mobile), and this is not likely to go away.
Most simple, non-resource-intensive applications can be implemented as web apps, with the front end running completely in the browser. This does not seem to be going away either.
These approaches are already mixing with each other, and various interfaces are being built between them: desktop applications often embed web browsers, web applications rely on remote rendering by native applications (even for latency-critical applications, such as games - see Google Stadia), and app stores make installation easier and may allow running native applications downloaded directly from a website (without the user permanently installing them).
If an application wants to remain relevant, it has to make sure it can be used in various environments and connected to a wide range of services and applications.
There are a few more tricks that Slicer needs to learn, but its current state is already not too bad: it runs in Docker, can be streamed to a browser using VNC, runs as a Jupyter kernel, has an embedded web browser, there are experimental projects that make Slicer provide web services, there is coordination with web GUI (OHIF/Cornerstone) and server (Girder, XNAT) folks, etc. - and it just keeps getting better.
I do think there will be a lot of cases where cloud processing and rendering will make the most sense. Like in this video, where the data volume is 16 GB. I could barely render that data on a state-of-the-art laptop after a long download, but it works nicely on a big cloud-hosted GPU.
Great. Slicer as a platform for research is ideal if it enables prototyping locally and deployment to both native and web services.
Couldn’t agree more on standards and interoperability. One area I’ve been looking at is the use of 3D reconstruction and printing via implicit functions. For example, nTopology is a field-based CAD tool (https://ntopology.com/), and creating .stl files could be skipped entirely because the implicit models could be sliced directly for 3D printing, shortening the workflow while reducing the data requirements. Seems like an interesting direction to go in.
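To make the idea concrete, here is a toy sketch (my own illustration, not nTopology’s API) of slicing an implicit model directly: evaluate a sphere’s signed distance function on one print plane and get the inside/outside mask a printer layer needs, with no mesh or .stl file in between:

```python
import math

def sphere_sdf(x, y, z, radius=10.0):
    """Signed distance to a sphere centered at the origin (negative = inside)."""
    return math.sqrt(x * x + y * y + z * z) - radius

def slice_layer(z, extent=12.0, step=1.0, radius=10.0):
    """Rasterize one z-plane of the implicit model: True = material present."""
    n = int(2 * extent / step)
    return [
        [sphere_sdf(-extent + i * step, -extent + j * step, z, radius) <= 0.0
         for i in range(n)]
        for j in range(n)
    ]

# The equator layer (z=0) contains material; a layer above the sphere does not.
mid = slice_layer(0.0)
top = slice_layer(11.0)
print(any(any(row) for row in mid), any(any(row) for row in top))
```

A real slicer would of course extract contours rather than a boolean raster, but the point stands: each layer is just the implicit function evaluated at one z, so resolution is chosen at print time instead of being baked into a tessellated mesh.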