DICOM file format
It is highly structured and standardized, so it is great for data archival or sharing. However, we usually choose more generic representations (tables in csv files, images as nrrd files, etc.) as internal representations, as they are more flexible, more efficient, simpler, and compatible with much more software. @fedorov has made nice progress in making DICOM directly usable for certain data analysis tasks; maybe he can add some more comments and give pointers to examples.
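For example, here is a minimal sketch of converting a DICOM series to a single nrrd file with SimpleITK; the input directory and output file name are placeholders:

```python
# Minimal sketch: read a DICOM series and write it out as a nrrd file with SimpleITK.
# The input directory is a placeholder; point it at a folder containing one series.
import SimpleITK as sitk

dicom_dir = "/path/to/dicom/series"  # hypothetical path

# Collect the files of the series in correct slice order
file_names = sitk.ImageSeriesReader.GetGDCMSeriesFileNames(dicom_dir)

reader = sitk.ImageSeriesReader()
reader.SetFileNames(file_names)
image = reader.Execute()

# Write the reconstructed 3D volume in the more generic nrrd format
sitk.WriteImage(image, "volume.nrrd")
```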
XNAT
It is still actively maintained, mostly keeps up with state-of-the-art technologies (web APIs, Docker, etc.), and several groups use it. It is customizable via plugins, but from what I’ve heard it is not easy to create them (this seems to be confirmed by the low number of plugins: fewer than 30, although the project has been around for more than a decade).
Girder
Girder is Kitware’s fresh take on research data management and analysis (based on their experience with their previous-generation MIDAS data management system). Compared to XNAT, Girder is built on a more modern basis and there seem to be many more developers on the project. It has nice integration with data analysis and visualization features (the Resonant platform).
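As a rough illustration (not something we have used in production), a Girder server can be scripted through its REST API using the girder-client Python package; the server URL, API key, and folder ID below are made-up placeholders:

```python
# Sketch of uploading a file to a Girder server with the girder-client package.
# All identifiers (URL, API key, folder ID, file path) are placeholders.
import girder_client

gc = girder_client.GirderClient(apiUrl="https://girder.example.org/api/v1")
gc.authenticate(apiKey="REPLACE_WITH_API_KEY")

folder_id = "REPLACE_WITH_FOLDER_ID"  # ID of an existing Girder folder
gc.uploadFileToFolder(folder_id, "results/summary.csv")

# List the items already stored in that folder
for item in gc.listItem(folder_id):
    print(item["name"])
```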
Our usual workflow
In the past, we tried to set up XNAT (when it was not yet dockerized), but we did not succeed. We have tried using CouchDB, which worked well for storing small data sets (descriptive data), but was not usable for large files such as 3D images (synchronization was very unreliable). We now have a Girder instance set up, but so far we have only used it for simply sharing files with external collaborators.
What we usually do is store Slicer scenes (mrb files) in folders on a shared drive. The mapping from internal IDs to patient information is kept in a password-protected spreadsheet or REDCap database. The saved scene also contains additional annotations (landmarks, manual segmentations, etc.) and may contain computation/analysis results.
For batch processing and analysis we put the selected group of mrb files in a folder and iterate through them using a Python script (a minimal sketch is shown below). In some projects, the data processing is split into a generic data extraction step (done only once; it generates summary csv files, series of aligned/normalized images, meshes, etc.) and a processing/analysis step (which is very specific to the data set and research question).
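Here is a minimal sketch of such a batch loop, run inside Slicer’s Python environment (for example from the Python console or via Slicer’s --python-script option); the folder path and the extraction step are placeholders:

```python
# Sketch of batch-processing a folder of mrb files from Slicer's Python environment.
# The data folder path is hypothetical.
import os
import slicer

data_dir = "/path/to/mrb/folder"  # hypothetical path

for filename in sorted(os.listdir(data_dir)):
    if not filename.endswith(".mrb"):
        continue
    slicer.mrmlScene.Clear(0)  # start each case from an empty scene
    slicer.util.loadScene(os.path.join(data_dir, filename))

    # Generic extraction step: here we just list the volumes in the scene;
    # a real script would export csv summaries, aligned images, meshes, etc.
    volumes = slicer.util.getNodesByClass("vtkMRMLScalarVolumeNode")
    print(filename, [v.GetName() for v in volumes])
```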
Nowadays we mostly run Python scripts from Jupyter notebooks, as they are easier to run/modify/verify than a plain command-line interface. Notebooks also work the same way regardless of operating system, and they can run on high-performance computing clusters (using JupyterHub).