Use 3D Slicer with VR for teaching

Dear all,

Have you ever used 3D Slicer with virtual reality to teach?
Is there a way to build pedagogical content with 3D Slicer, such as asking students to locate an anatomical part or annotate the correct structure?

I’d be interested in your feedback.

Many thanks in advance.
Best regards

What do you have in mind? Who has the headset on? What does she/he do?

There are lots of anatomical atlases, but basic interactions such as annotation are so far missing from SlicerVR. They will surely be added in the future, but we don’t have a concrete roadmap for that yet.

Thank you for the quick feedback.

The idea is to have a group of up to six students wearing VR or AR headsets.
They visualize the data, and a teacher can select a part and ask a student to annotate it.

The idea is a pedagogical tool.

Select how?

Is this all happening remotely?
Is everybody sharing the same live scene, or do they each load a Slicer scene individually?

SlicerVR is currently quite limited (it can show any scene and you can navigate conveniently, but basically that’s it). However, some VR features are hopefully coming soon: for example, collaboration within the same scene, as well as annotation and segmentation in VR.

I imagine the following setup: a teacher at a desktop, selecting a part in the normal Slicer app (or he could also wear a VR headset and point at a part with the controller).
A group of six students with headsets, in the same room.

This is an idea for a long term project.
Would you need support to develop a scenario-based (“scenaristic”) teaching mode in 3D Slicer VR?

So as I understand it, the instructor has a desktop computer and loads a Slicer scene with some CT or MRI, then instructs the students to segment an organ (I still don’t understand what you mean by “select”, and frankly I’m not sure about “annotate” either, so I assume you mean segment?). They do it in VR in the same scene, but not in a synchronized way, i.e. they don’t share the live scene; they only use the same one as a starting point, then each does their own thing.

Does the instructor need to monitor the students’ progress in real time? If so, how do you imagine that?

@nagy.attila You were thinking about something like this as well. Do you have anything to add?

Thank you, we can of course use any help. First, however, we need to develop some core features to facilitate setting up such scenarios, for example an actual in-VR UI, which is still missing.

To sum up, the first draft is the following:

The instructor is at a desktop or in VR (to be defined) and imports a DICOM file, for instance.

  • Six students are in VR, in the same room as the instructor.
  • Everyone is logged into the same virtual scene and sees the DICOM volume from their own angle.
  • The instructor asks student 1 to find a given anatomical part.
  • Student 1 manipulates the volume with the VR controllers, zooms in, finds the appropriate part, and uses a “select” or “annotate” tool to highlight it.

By “annotate” I mean the student should be able to write text, but that is probably not the best user experience in VR, so it could be a “pin” tool instead.
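None of this exists in SlicerVR today, but the pin-based exercise could be scored quite simply. As a minimal sketch (all names and coordinates are hypothetical, not part of any Slicer API), the instructor’s target landmark and the student’s pin are both 3D points in RAS coordinates, and the answer is accepted when the pin lands within a tolerance:

```python
import math

# Hypothetical sketch (not part of SlicerVR): check a student's "pin"
# answer against the instructor's target landmark by Euclidean distance
# in RAS coordinates (millimeters).

def distance_mm(a, b):
    """Euclidean distance between two RAS points, in mm."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def check_pin(student_pin, target, tolerance_mm=10.0):
    """Accept the answer if the pin is within tolerance of the target."""
    return distance_mm(student_pin, target) <= tolerance_mm

# Made-up example: target landmark and two student pins.
target = (12.0, -34.5, 90.0)
print(check_pin((14.0, -33.0, 92.0), target))  # ~3.2 mm away -> True
print(check_pin((50.0, -33.0, 92.0), target))  # ~38 mm away -> False
```

The tolerance would naturally depend on the size of the structure being asked about; a per-question tolerance (or a segment-membership test instead of a distance) would be the obvious refinement.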

Another idea:

  • The instructor asks student 1 to find a given pathology.
  • Student 1 manipulates the volume with the VR controllers, zooming in and out…
  • The instructor loads another DICOM series and asks student 2 what the pathology is.

Thanks, much clearer now!

Yes, these features can be expected soon. This comes down to two main core functions that need to be added:

  • Collaborative VR: sharing a live scene between Slicer instances. We are fairly confident that we have won a grant for exactly this (the scores are out, but the final decision is not), which starts in a few months.
  • In-VR interactive widgets: a very important core feature that could not be added while Slicer was stuck on VTK 8. Now that Slicer has been updated to the new VTK, we can start working on this too (showing any Slicer widget in VR, using the controller as a laser pointer, being able to click, etc.), hopefully as part of the aforementioned grant, or another one. Once this is done, adding a widget for anything that you can do in regular Slicer would be quite easy.
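To illustrate what the collaborative part could involve at the wire level, here is a minimal sketch of an annotation event shared between instances. This is purely hypothetical (the event schema, field names, and node ID are made up; the real implementation might well build on something like OpenIGTLink, which Slicer already uses for live data exchange): each action is serialized as a small JSON message that every connected instance applies to its local copy of the scene.

```python
import json

# Hypothetical collaboration message, NOT the planned Slicer design:
# a "pin placed" action serialized as JSON for broadcast to peers.

def make_pin_event(user, node_id, position_ras):
    """Build an event describing a pin placed by a user."""
    return {
        "type": "pin_placed",
        "user": user,
        "node": node_id,                 # e.g. a markups node ID (made up)
        "position": list(position_ras),  # RAS coordinates in mm
    }

def encode(event):
    """Serialize an event for sending over the network."""
    return json.dumps(event).encode("utf-8")

def decode(payload):
    """Deserialize a received event."""
    return json.loads(payload.decode("utf-8"))

event = make_pin_event("student1", "MarkupsFiducial_1", (12.0, -34.5, 90.0))
assert decode(encode(event)) == event  # lossless round trip
```

Broadcasting small state-change events like this (rather than resending the whole scene) is what would keep six headsets in sync without much bandwidth, though conflict handling and late joiners make the real problem harder than this sketch.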

Thank you very much for the complete answer and sorry for the lack of information in my first message.

Fingers crossed for your team to get the grant!


Hi all,

I think there could be two slightly different use cases.
One is similar to what Sébastien described, though that is really “radiological anatomy”: selecting anatomical features on scans (be it CT or MR) takes some practice, both with the software and with actually knowing what you are looking at or looking for. I can’t imagine using any software this way to teach first- or maybe even second-year medical students.
Placing a pin as an answer might be okay.
Doing segmentations would involve a lot of manual work (in most cases), so unless that is what you/we want to teach, it is not something we should hurry to implement.
Of course just segmenting out a small anatomical feature (as an answer, for example) can be okay.

The other approach that might be feasible is to load a scene with already segmented structures (be they VTK models or segments) and then teach details on those. It could be used instead of 3D-printed models.

Probably the most important thing would be a convenient way to annotate features on the fly. I have no idea how that could be done; inputting text in VR is maybe not that fast or convenient.

Crossing my fingers for that grant too, and let’s get back to it then!


Thanks for your insight, Attila!

I think the other scenario you describe is easier, because we can already share a scene like that (~broadcasting, where one person controls a scene that the others can also see in VR). Placing and moving fiducials may be enough in that case, while the instructor speaks. Unless I misunderstand something.