OpenAnatomy's reference DICOM data. Visualisation of the whole project

Good afternoon,

I would like to know whether the DICOM data used to create the OpenAnatomy meshes are available to the public, and how the collaborative work functions.
Is there a DICOM file containing hundreds of segmentations?
Is there a way to visualise/hide all of the structures and their names?

I read that a common file format is still under development; I was wondering whether Blender had been considered as an option. Its new collection system would fit the visualisation needs, but as far as I know, DICOM viewing in Blender was only experimented with in an older version and has not been developed much since.

Thank you

Source images should be available for most atlases. Which atlas are you interested in?

Segmentation storage in DICOM was suboptimal until a few years ago, and the modern information object types are still not widely supported. Segmentations are most commonly stored as labelmap volumes.
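As a toy illustration of the labelmap idea (plain Python, with made-up labels and a hand-written miniature array rather than real image data): a single scalar volume in which each voxel value is the label of the structure it belongs to.

```python
from collections import Counter

# Hypothetical label values -> structure names (made up, not a real atlas)
labels = {0: "background", 1: "skin", 2: "bone"}

# A tiny 2x3x3 "volume": every voxel stores the label of its structure.
# Real labelmaps are full 3D image volumes (e.g. stored as NRRD).
volume = [
    [[0, 0, 0], [0, 1, 1], [0, 1, 2]],
    [[0, 0, 0], [0, 1, 2], [0, 2, 2]],
]

# Per-structure voxel counts (the basis of e.g. volume measurements)
counts = Counter(v for slab in volume for row in slab for v in row)
for label, n in sorted(counts.items()):
    print(labels[label], n)
```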

You can show/hide groups of structures using the Data module.

A glTF-based atlas file format is still under development. Mesh export is already implemented in SlicerOpenAnatomy; the exported meshes can be loaded into and edited in Blender.

Blender’s (or any particular editing software’s) internal storage format is not considered an archival format for atlases: such formats are optimized for storing the application state, and are therefore more complicated than necessary, change frequently, and are controlled by the application’s developers.

Hi Gauthier,

All of the current Open Anatomy datasets have NRRD files as their underlying image format. In almost all cases, the original DICOM files weren’t provided to us. Our goal is to provide DICOM data sets in the future; the Open Anatomy file format is agnostic to the underlying image format (much like HTML is agnostic to the formats of the images on a page).

We don’t have large populations in Open Anatomy. I did convert a few of the MindBoggle brain data sets to the Open Anatomy format. Let me know if a full conversion would be useful (those data files are NIfTI).

The current interface is limited and growing long in the tooth. We are looking at a Cornerstone/OHIF-based image viewer with a new 3D viewer, but there is no time frame at this point.

We are looking to glTF with extensions as an interchange format for annotated geometric models. I’m not familiar with Blender’s file format, but I don’t believe it is likely to represent the association of images, region descriptions, annotations, and graphical styles for potentially multiple data sets. For that reason, Open Anatomy has a container file format with links to other data files. We have working prototypes of this concept, but it can always use more eyes, especially for use cases we haven’t yet really explored (like associating multiple atlases together).
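A purely illustrative sketch of the "container with links" idea described above. The field names here are made up and are not the actual Open Anatomy schema; the point is only that the container stays small and references external files, much like an HTML page references its images.

```python
import json

# Hypothetical container document: links out to geometry and image files
# instead of embedding them (field names are illustrative only)
atlas_container = {
    "title": "Example atlas",
    "sourceImage": "images/head.nrrd",  # link to image data, not embedded
    "structures": [
        {"name": "skin",  "mesh": "models/skin.gltf",  "labelValue": 1},
        {"name": "brain", "mesh": "models/brain.gltf", "labelValue": 2},
    ],
}

text = json.dumps(atlas_container, indent=2)
# Links can be resolved lazily; the container is valid before any download
roundtrip = json.loads(text)
print(roundtrip["structures"][0]["mesh"])
```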

I hope this helps a little to understand what the current state of things is. Let me know if you have additional questions or comments!

–Mike Halle

Thank you for the answer.
I beg your pardon because I made a terrible mistake: several years ago I downloaded a huge series of anatomical .obj files from this website:
https://dbarchive.biosciencedbc.jp/en/bodyparts3d/download.html

Then I forgot about it and assumed that it had been done through the OpenAnatomy project.
Sorry for the confusion.

Its viewer also lacks a collection system and a simple way to connect each mesh to the name of its structure. It would be great to get the DICOM data used for this modelling, and a community working to make it more precise.

Although the BodyParts3D modelling started from a whole-body MRI scan, ‘missing details were supplemented and blurred contours were clarified using a 3D editing program by referring to textbooks, atlases and mock-up models by medical illustrators’ (Mitsuhashi, 2009, doi:10.1093/nar/gkn613). It might therefore not be useful to revert to the original image data at this point. The latest version of the data is dated December 2018, so it seems that development is ongoing. It now contains thousands of anatomical structures.

The BodyParts3D project makes use of the Foundational Model of Anatomy
ontology. I don’t know if the FMA is used in the Open Anatomy project.

– Robert

I wrote to the Life Science Database Archive (LSDB) institution to ask whether they had meshes in higher definition, because the downloadable folder mentions ‘polygon reduction=99%’.
The post-processing work is very much appreciated, since the final meshes are so much cleaner than most segmentation models.

By the way, I just discovered the ‘MorphoSource’ website today: https://www.morphosource.org
Many DICOM data sets of animals are available directly or on demand.

If you are interested in biological specimens you may also want to have a look at SlicerMorph: https://slicermorph.github.io/


You can get nrrd files and models from Embodi3d.com too. There are a few whole body CTs.


If the input is a high-resolution image then 99% mesh reduction is not necessarily too much. Even with a basic method like VTK’s decimation, 90% reduction often does not result in a visible difference in the mesh. If a more sophisticated method is used, 99% reduction may be achievable without any significant loss of detail.

This seems like a very nice database. I’ve checked out the “partof” model, which contains 1258 structures, and wrote a short script to create subject hierarchy folders from the relation list text file.

Script for creating subject hierarchy from relation list
shNode = slicer.mrmlScene.GetSubjectHierarchyNode()
sceneItemID = shNode.GetSceneItemID()

def getItemParentsFmaIds(shNode, itemShItemId):
    existingParentShItemId = shNode.GetItemParent(itemShItemId)
    existingParentFmaIds = []
    while existingParentShItemId != sceneItemID:
        existingParentFmaIds.append(shNode.GetItemUID(existingParentShItemId, "FMA"))
        existingParentShItemId = shNode.GetItemParent(existingParentShItemId)
    return existingParentFmaIds

# Create partof hierarchy
inclusionListTable = getNode('partof_inclusion_relation_list').GetTable()
parentIdArray = inclusionListTable.GetColumnByName('parent id')
parentNameArray = inclusionListTable.GetColumnByName('parent name')
childIdArray = inclusionListTable.GetColumnByName('child id')
childNameArray = inclusionListTable.GetColumnByName('child name')
for i in range(inclusionListTable.GetNumberOfRows()):
    parentFmaId = parentIdArray.GetValue(i)
    parentShItemId = shNode.GetItemByUID("FMA", parentFmaId)
    if not parentShItemId:
        parentShItemId = shNode.CreateFolderItem(sceneItemID, parentNameArray.GetValue(i))
        shNode.SetItemUID(parentShItemId, "FMA", parentFmaId)
    childFmaId = childIdArray.GetValue(i)
    childShItemId = shNode.GetItemByUID("FMA", childFmaId)
    if not childShItemId:
        childShItemId = shNode.CreateFolderItem(sceneItemID, childNameArray.GetValue(i))
        shNode.SetItemUID(childShItemId, "FMA", childFmaId)
    existingParentFmaIds = getItemParentsFmaIds(shNode, childShItemId)
    if parentFmaId in existingParentFmaIds:
        # this parent is already a parent of the current item
        continue
    shNode.SetItemParent(childShItemId, parentShItemId)

# Update part list with names and FMA IDs
partsListTable = getNode('partof_element_parts').GetTable()
fmaIdArray = partsListTable.GetColumnByName('concept id')
filenameArray = partsListTable.GetColumnByName('element file id')
for i in range(partsListTable.GetNumberOfRows()):
    partNode = slicer.mrmlScene.GetFirstNodeByName(filenameArray.GetValue(i))
    if not partNode:
        continue
    parentFmaId = fmaIdArray.GetValue(i)
    parentShItemId = shNode.GetItemByUID("FMA", parentFmaId)
    if not parentShItemId:
        # this hierarchy is not found in the SH tree
        continue
    itemShItemId = shNode.GetItemByDataNode(partNode)
    existingParentFmaIds = getItemParentsFmaIds(shNode, itemShItemId)
    newParentFmaIds = getItemParentsFmaIds(shNode, parentShItemId)
    if len(newParentFmaIds)>len(existingParentFmaIds):
        # New parent is more specific than the current (parent has more nesting levels)
        shNode.SetItemParent(itemShItemId, parentShItemId)

The resulting scene can be downloaded from here (compatible with Slicer-4.11.x, revision r28625 or later).

Unfortunately, the same model is usually listed in multiple branches of the hierarchy, and in the subject hierarchy tree a node can only be listed once, so structures are often “missing” from where you would expect to find them. For example, skin is listed under both human body/integumentary system and human body/integument/skin, but in the subject hierarchy tree it can only be listed in one branch, so it appears only under human body/integument/skin. Because of this, the scene is probably not very useful as is, but it may serve as an example of how to import hierarchical anatomical atlases into Slicer.


Andras Lasso:
If the input is a high-resolution image then 99% mesh reduction is not necessarily too much. Even with a basic method like VTK’s decimation, 90% reduction often does not result in a visible difference in the mesh. If a more sophisticated method is used, 99% reduction may be achievable without any significant loss of detail.

For these models, 99% is generally too much. Many of the brain structures are effectively useless nondescript lumps. To their credit, the lumps are nice and smooth looking, not strangely decimated.

Gauthier, if you get a response about the models or want us to help make a case for getting them, please let us know. We might be able to help because we clearly are not out to sell the results.

Andras Lasso:
Unfortunately, the same model is usually listed in multiple branches of the hierarchy, and in the subject hierarchy tree a node can only be listed once, so structures are often “missing” from where you would expect to find them. For example, skin is listed under both human body/integumentary system and human body/integument/skin, but in the subject hierarchy tree it can only be listed in one branch, so it appears only under human body/integument/skin. Because of this, the scene is probably not very useful as is, but it may serve as an example of how to import hierarchical anatomical atlases into Slicer.

We should talk more about how to organize these structures. I’m thinking more and more about faceted search rather than hierarchical trees. Overlapping groups of structures make sense. Imprecise ontologies such as FMA make handling cases like this a necessity, with Andras’s example being a prime case. One model should be able to be in two groups but only rendered once.
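A minimal sketch of that idea in plain Python (the structure names and group tags are hypothetical, not from the actual Open Anatomy data model): each structure carries a set of group tags, faceted browsing inverts the tags, and a selection across facets is a set union, so each model appears, and would be rendered, only once.

```python
from collections import defaultdict

# Hypothetical structures, each tagged with every group it belongs to
structures = {
    "skin": {"integumentary system", "integument"},
    "dermis": {"integument"},
    "heart": {"cardiovascular system"},
}

# Invert to facet -> members for faceted search/browsing
facets = defaultdict(set)
for name, tags in structures.items():
    for tag in tags:
        facets[tag].add(name)

# "skin" is listed under two facets...
assert "skin" in facets["integumentary system"]
assert "skin" in facets["integument"]

# ...but a selection spanning both facets is a set union,
# so each model appears in the render list exactly once
selection = facets["integumentary system"] | facets["integument"]
print(sorted(selection))  # → ['dermis', 'skin']
```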


Wow, I am impressed.
Unfortunately I have not managed to visualise the models yet. I will look into it more closely later.

The links and references are great; I hope I can dive into all these anatomical treasures soon.

Thank you so much.
I will keep you informed if I get an answer from the Japanese ‘Life Science Database Archive’.

Good morning.

I did not receive any answer from the ‘Life Science Database’.
It might be worth asking them yourselves whether they can provide the high-resolution models, if you have not done so already.