Slicer with the SlicerIGT extension and the Plus toolkit is very well suited for this job. Together they provide not just offline volume reconstruction but also real-time tracked ultrasound visualization and live volume reconstruction. The SlicerIGT tutorial page is probably a good starting point.
The specification of the sequence metafiles that the Plus toolkit's ultrasound volume reconstructor can read is available here. There are also many tracked ultrasound sequence files that you can use as examples.
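To give a feel for the format: a Plus sequence metafile is a standard MetaIO image header with extra per-frame fields for the tracking data. The sketch below is illustrative only (identity transforms, made-up sizes); the transform name (`ProbeToTracker` here) depends on how your devices are configured, so check the specification linked above for the exact details.

```
ObjectType = Image
NDims = 3
DimSize = 820 616 200
ElementType = MET_UCHAR
UltrasoundImageOrientation = MF
Seq_Frame0000_FrameNumber = 0
Seq_Frame0000_ProbeToTrackerTransform = 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1
Seq_Frame0000_ProbeToTrackerTransformStatus = OK
Seq_Frame0000_ImageStatus = OK
Seq_Frame0000_Timestamp = 12.345
ElementDataFile = LOCAL
```

Each frame of the image stack gets its own `Seq_FrameNNNN_...` group, so one file holds the whole synchronized image + pose recording.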
If you want to take full advantage of the platform's real-time visualization and processing capabilities, the first step is to get the end-effector positions from the DaVinci system as a data stream synchronized with the ultrasound images. We are helping the VISE team at Vanderbilt build a robust, high-performance solution for this. We will probably revive the DaVinci interface in the Plus toolkit, which will allow acquisition of tracked ultrasound data, recording to file, live volume reconstruction, and streaming to 3D Slicer for interactive visualization. I think the plan is to make all of this openly available.
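Synchronization matters because the robot poses and the ultrasound frames arrive at different rates and with different timestamps, so each image frame needs a pose interpolated to its acquisition time (this is what Plus does internally for temporally calibrated streams). A minimal sketch of that idea, with made-up timestamps and poses, using NumPy for the translation and SciPy's `Slerp` for the rotation:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical pose stream from the robot: two samples 100 ms apart.
pose_times = np.array([0.00, 0.10])                       # seconds
positions = np.array([[0.0, 0.0, 0.0],
                      [10.0, 0.0, 0.0]])                  # mm
rotations = Rotation.from_euler("z", [0, 90], degrees=True)

# Timestamp of one ultrasound frame, falling between the two pose samples.
image_time = 0.05

# Linear interpolation for the translation component...
pos = np.array([np.interp(image_time, pose_times, positions[:, i])
                for i in range(3)])

# ...and spherical linear interpolation (slerp) for the rotation component.
rot = Slerp(pose_times, rotations)(image_time)
```

Here the interpolated pose lands halfway between the samples: a 5 mm translation and a 45-degree rotation.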
The next step is spatial calibration of the tracked ultrasound (determining the transformation between the image coordinate system and the robot end effector's coordinate system). This can be done using the Plus toolkit's fCal calibration application.
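Once you have the calibration, mapping an image point into a common reference frame is just a chain of 4x4 homogeneous transforms: ImageToReference = ProbeToReference * ImageToProbe. A small sketch with hypothetical matrices (a real ImageToProbe comes out of fCal, and ProbeToReference comes from the robot kinematics or a tracker):

```python
import numpy as np

# ImageToProbe: maps image coordinates (already scaled to mm) into the
# end-effector frame. Hypothetical value: a pure translation.
image_to_probe = np.array([
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
], dtype=float)

# ProbeToReference: end-effector pose. Hypothetical value: 90-degree
# rotation about z plus a 10 mm translation along x.
probe_to_reference = np.array([
    [0, -1, 0, 10],
    [1,  0, 0,  0],
    [0,  0, 1,  0],
    [0,  0, 0,  1],
], dtype=float)

# Chain the transforms, then map one image point into the reference frame.
image_to_reference = probe_to_reference @ image_to_probe
point_image = np.array([5.0, 0.0, 0.0, 1.0])
point_reference = image_to_reference @ point_image
```

The volume reconstructor applies exactly this kind of chain to every pixel of every frame, which is why an accurate ImageToProbe calibration is essential.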
The inaccuracy of tool position estimation from the robot kinematics may be significant, especially when the ultrasound probe shaft bends as it comes into contact with tissue. To compensate for this error, you might use an external electromagnetic or optical tracker (the Plus toolkit already supports these), or endoscopic camera-based tracking (the Plus toolkit already provides 2D barcode-based tracking using the ArUco library, which might be applicable).
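A quick back-of-the-envelope check (with assumed, hypothetical numbers) shows why even a small shaft bend is worth worrying about: if the shaft pivots slightly at the wrist, the probe tip sweeps along a chord of length 2L sin(theta/2).

```python
import math

# Assumed values for illustration only:
shaft_length_mm = 300.0   # rigid-shaft length from wrist to probe tip
bend_deg = 2.0            # bending angle caused by tissue contact

# Tip displacement of a rigid link rotated by the bend angle (chord length).
tip_error_mm = 2.0 * shaft_length_mm * math.sin(math.radians(bend_deg) / 2.0)
```

With these numbers the tip error is already about 10 mm, i.e. far larger than typical sub-millimeter tracker accuracy, which is the motivation for measuring the probe pose directly instead of relying on kinematics alone.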
If you could arrange a visit to Vanderbilt or attend the project week in Boston, you could probably learn quite quickly what is already available and how to use it.