Yes, I mean that you cannot use simple interpolation to recover details that are missing between slices. Interpolation can only reconstruct the original signal if the Nyquist criterion is fulfilled, i.e., your sampling is sufficiently dense compared to the spatial frequency of the signal. If you apply an anti-aliasing filter to make sure the criterion is not violated, then that filtering removes the high-frequency content, so the high-resolution details in the ultrasound slices essentially cannot contribute to the reconstructed 3D image.
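Just to illustrate the point with made-up numbers (the spacings below are assumptions, not your actual values): the finest detail you can recover along the slice-stacking axis is limited by the slice spacing, no matter how fine the in-plane pixels are.

```python
# Hypothetical numbers purely for illustration of the Nyquist limit.
slice_spacing_mm = 2.0    # assumed distance between ultrasound slices
in_plane_pixel_mm = 0.2   # assumed in-plane pixel size

# Highest spatial frequency the slice sampling can represent (Nyquist limit)
nyquist_cycles_per_mm = 1.0 / (2.0 * slice_spacing_mm)

# Smallest feature period that survives reconstruction along the stacking axis
smallest_period_across_slices_mm = 2.0 * slice_spacing_mm
smallest_period_in_plane_mm = 2.0 * in_plane_pixel_mm

print(f"Nyquist limit across slices: {nyquist_cycles_per_mm:.3f} cycles/mm")
print(f"Finest recoverable detail: {smallest_period_across_slices_mm:.1f} mm across slices "
      f"vs. {smallest_period_in_plane_mm:.1f} mm in-plane")
```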
A common, very visible result of Nyquist criterion violation is staircase artifacts on the reconstructed surface. See this and related posts for details.
You may be able to exploit the high in-plane resolution of the ultrasound images by using superresolution techniques, but not by simple image interpolation. If you don't have time to explore such methods, then I would recommend removing the information content that you cannot use anyway early in the data processing pipeline (downsample the ultrasound images), to avoid pushing a huge amount of data through the pipeline that cannot actually contribute to the end result.
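A minimal sketch of what I mean by downsampling early, assuming your frames are available as numpy arrays (the function name, spacings, and the Gaussian sigma rule of thumb are my assumptions, not something specific to your pipeline):

```python
# Blur each frame with a Gaussian anti-aliasing filter, then resample it so the
# in-plane pixel size roughly matches the slice spacing.
import numpy as np
from scipy import ndimage

def downsample_frame(frame, in_plane_spacing_mm, target_spacing_mm):
    factor = in_plane_spacing_mm / target_spacing_mm   # < 1 means shrinking
    # Rule-of-thumb anti-aliasing sigma for the given shrink factor
    sigma = max(0.0, (1.0 / factor - 1.0) / 2.0)
    smoothed = ndimage.gaussian_filter(frame.astype(np.float32), sigma=sigma)
    return ndimage.zoom(smoothed, zoom=factor, order=1)

# Example: 0.2 mm pixels downsampled to match a 2 mm slice spacing
frame = np.random.rand(512, 512).astype(np.float32)
small = downsample_frame(frame, in_plane_spacing_mm=0.2, target_spacing_mm=2.0)
print(frame.shape, "->", small.shape)
```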
It is hard to guess what could go wrong. Maybe the voxels in your MRI image are floating-point? If you can share a scene that demonstrates the unexpected performance issue, or if you can reproduce the problem with any of the public sample data sets (maybe resampling them using the Crop volume module), then we can investigate this further.
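To quickly check the voxel type, you can run something like this in the Python console (the node name "MRHead" is just an example, use your own volume's name):

```python
# Print the scalar type of a volume node to see if it is floating-point.
import slicer
volumeNode = slicer.util.getNode("MRHead")
print(volumeNode.GetImageData().GetScalarTypeAsString())  # e.g. "float" or "short"
print(slicer.util.arrayFromVolume(volumeNode).dtype)      # same info via numpy
```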