Assembling a whole human body

You can also find complete segmentations of certain body areas (abdomen, brain, knee, …) in the DataStore module => Atlas collection.

For example, SPL Abdominal Atlas:

[image: SPL Abdominal Atlas]

Dear Mr. Greg and Mr. Andras
Thank you so much for your help,
Have a nice day. I wish you the best,
Best,
Van,

Can anyone please help with access to better source data than “The Visible Human Project”? I remember coming across this before, and now I’ve downloaded both the Male and Female data sets: they both have errors in the data, are not complete, and because they look like cadavers in body bags, the bag itself is interfering with the scans.

Are you aware of the Visible Korean Human project?

  • Robert

Do you need just model (surface mesh) or corresponding CT, MRI, optical cross-sectional images as well?


Hi Robert,

I looked up the site, and the data use periods look like they might be an issue: Data Sharing Policy | Visible Korean

Thanks though,
Sam

Hi Andras,

Yup, I only really need accurate, high-resolution models (meshes), and was looking at this route to be able to pull the meshes (or point clouds) from the data and ideally start to build a database covering different ages, races, genders, builds etc., for variation and representation. That includes people with, let’s call it, physiological difference rather than disability, like dwarfism etc., considering the project I am working on is about using technology combined with old school, a bit of the novel and a dash of common sense to improve Digital Arts and STEM accessibility.

Another post on here-

The other is obviously the same principle for animals, and then trying to add some automation and machine learning, which is why starting with the actual source scan data itself might be better in the long term. Possibly a bit of evolution to then create the physiology of novel creatures and output a 3D-printed armature that can be sculpted on top of (by even a blind person, say) and, hey presto, scanned and easily ported to visual effects, game design, special effects and animatronics. That also leads into STEM and robotics for therapeutic and accessibility use, as well as teaching accessibility for STEM and Digital Arts at the same time.

The aim is to keep it all open source, but due to limited initial funding at the moment I have very limited technology myself, so it’s about doing enough to start with as a proof of concept, to better present for further funding and the help/resources and additional expertise needed. At the moment it’s just little me, and with both my own ADHD and Asperger’s I’m better at doing than explaining, so any additional help is very welcome.

So it would be bones, muscles/tendons, cartilage, veins/arteries (the most visible, and later novel routing or SFX use, like illuminated) and possibly fat, but I’m not sure how that would work best as yet. Soft tissue I can print in a 3D-printed rubber and later cast in various materials, even encapsulated (gel-filled silicone), by either making 3D-printed contra-moulds or conventional moulds from printed positives, having added additional engineering for movable joints, planning in possibly nano sonic motors or even just wire lines to external servos for robotics later. Obviously, once I get all this down, do the R&D, test the principles and show all of this and how it can be upscaled, the other part of the project will be a plug-and-play system to teach the skills etc., and the ultimate goal a makerspace-like environment / non-profit social enterprise for Disability Employment Accessibility in Digital Arts etc. There is more to it in terms of equally creating more instantly dynamic interaction between physical and digital as a future goal too.

I’ve likely waffled on way too much, such are my own quirks (I have a whole network in my head of how the core idea branches). I hope the waffling helps, and thanks loads for your reply, Andras, it’s truly lovely. :grinning::heart_eyes:

All my best,
Sam

If you only need a mesh, then you can choose from tens of thousands of models available on 3D modeling sites, lots of them for free. You can also generate human models with tools such as http://www.makehuman.org/ and further refine them by manual editing.

3D Slicer’s main strength is that it can combine medical imaging data with meshes (import/export, simultaneously edit, …). If you only need 3D mesh modeling then Blender or similar tools are probably better suited for the task.


Hi Andras,

Ah, I think I might not have explained myself well enough, and it might be good to provide a couple of links on my background for context. I started out studying engineering (mechatronics) and then went on to work on the very technical side of visual effects, animation etc., initially based at Pinewood Studios, so this is inspired by that, a love of science, and sculpture. And it’s about wanting to pay it forward using some smarts, after many years out as a result of my own disabilities and some life experiences, such as hate crimes due to being trans.

https://www.linkedin.com/in/sammihamer/
https://issuu.com/sammihamer/stacks/13cf72968f5a490f813f9afa3c79a8c2

My initial small pot of disability funding is Masters funding, currently on pause so I can have more time to put everything together, again due to my own disabilities. So I have experience of the faking-it and artistic-interpretation side, but increasingly real physiology data is being used in the digital arts sphere, such as on War Horse. The basis for any of it is, however, solid anatomy and zoology. This then in itself translates into educational STEM accessibility also.

In fact, I have been doing early tests with mesh data of the surfaces of bones, muscles etc. from an academic project in Japan, but the source is terrible: very low poly, lots of non-manifolds and lots of errors.

This might give you an idea of the conventional armature/sculpture approach; more technical testing pictures are on LinkedIn, or by digging further down on the original Facebook page I posted.

I hope this helps tie in better with what I was trying to convey. So pulling accurate, high-detail meshes of bones, muscles etc. from real-world data, plus the tad more I expanded on previously, is very much what I’d really appreciate help sourcing.

Another academic project, in this case just 3D scanning bones, based in the Netherlands:
http://bonify.archaeolabs.nl/app/species/goat/humerus.php

All my best,
Sam :wink:

Hi Sam,

You mentioned a wide range of fields and applications, but in order to get from one to two, and have actionables, we need to narrow down the scope to a specific application and use case first. Can you very briefly describe your main use case? Then we could start figuring out what exactly you need.

csaba


Hi Csaba,

Thanks loads, I will create a simple list; hopefully this will make things much clearer.

1.) Access to a wide selection of whole-body data, both human and non-human if possible, as accurate and detailed as possible. Anonymised, NRRD and/or DICOM, as I will not be using the skin data that might identify a patient (possibly pulling small-scale texture information if possible; texture in the voxel-sculpture sense rather than displacement or normal maps, which are rendering-efficiency tools for staying within the virtual rather than outputting to the physical in this case). Happy to sign any use agreements to protect patient confidentiality, for obvious reasons.

2.) To initially create posable, detailed and accurate armatures and a character/creature sculpting system mirroring medical, veterinary, anthropology and even palaeontology physical teaching models (as you would use training to be a doctor et al.). The detail and accuracy, although scaled down, are very important, say for both blind and autistic traits and the memory of detail.
i) Bone, muscle, cartilage, veins/arteries and nerves (forgot them last time, for possible novel routing), and fat if possible.
ii) A selection of different ages, races and genders, including a few congenital conditions like dwarfism et al., to be diverse and representative; as the project aims are about accessibility, the subject matter should reflect this.
iii) Actually acquiring fresh scans of volunteers for this use, but I’m very conscious of time and cost where much more important work needs to be done than a project such as this to justify it. Unless anyone can ask permission at the same time as doing scans for a primary purpose.
iv) Planning in and investigating methods to then use said armature as a robotic computer interface also, and the system itself as its own STEM accessibility and educational tool.

3.) Where my explanation might have got us all in a muddle is testing and investigating methods and planning in future plans.
i) Investigating AI/machine learning and automation via Python etc. and cloud-distributed training, both for pulling the meshes and engineering the meshes.
ii) Investigating genetic and evolutionary models to create future or fictional physiology from such a simulation, drawn from a database of such data.
iii) This in itself lends itself to much wider applications within science and research also.

4.) Likely the best source data is the best scan source, given the methods being investigated above, but I was also looking for anyone that might already have been doing anything similar; a catalogue of data turned into meshes for an entire person or animal might already exist, rather than reinventing wheels in the very short term, so as to have 3D-printed prototypes and something to show more quickly.

I hope this helps, and thanks loads, all, for sticking with me.
All my best,
Samantha

Hi Sam,

Thanks for the list! This is still a very wide net, and I would think that a relatively large lab could take this on as a general theme, but as you’re alone (as far as I understood), you need to start small. I’d pick a very specific use case and start focusing on that 100%, while keeping the others in mind, to make whatever you do future-proof.

1.) This would be quite useful. I called it “generation of a patient population”, and the plan was to start from atlases, computational phantoms, and patient population metadata from hospitals/census data, and generate deformation fields that can transform one of the available traits into one that is missing. However, we never got around to starting this project. A possible starting point could be Bender: https://public.kitware.com/Wiki/Bender
Please note that DICOM/NRRD is volumetric data, and if you want to use textures, then surface models are needed, which implies segmentation, which is a very hard task, especially with fine details and on a large number of datasets.

2.) I mentioned Bender above, it sounds like a good application for this. Again, segmentation will be necessary.

3/4.) Yes, the range of applications is very wide. You need to pick one, which will naturally come from your first collaborative partner who can share data with you.
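To make the “DICOM/NRRD is volumetric data” point concrete: an NRRD file begins with a plain-text header describing the voxel grid, terminated by a blank line, with the raw voxel data following. Here is a minimal, illustrative sketch of reading that header with only the Python standard library (field handling simplified; a real reader should use a library such as pynrrd):

```python
def read_nrrd_header(path):
    """Parse the plain-text header of an NRRD file into a dict.

    NRRD headers are lines of 'field: value' pairs after a magic line
    (e.g. 'NRRD0004'); a blank line ends the header and the raw voxel
    data follows it.
    """
    header = {}
    with open(path, "rb") as f:
        magic = f.readline().decode("ascii").strip()
        if not magic.startswith("NRRD"):
            raise ValueError("not an NRRD file")
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if not line:              # blank line ends the header
                break
            if line.startswith("#"):  # comment line
                continue
            if ": " in line:
                key, value = line.split(": ", 1)
                header[key] = value
    return header
```

The `sizes`, `type` and `encoding` fields are what tell you the shape and layout of the voxel array, i.e. why this is a volume and not a surface.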


Hi Csaba,

Yup, I am familiar with volumetric data, voxels et al. from my previous work in visual effects. Most of the high-end commercial software I have been involved with also uses such in some capacity: Houdini, Nuke, Maya, ZBrush (+ derivatives) and of course Blender.

I am also aware of MeshLab and Instant Meshes, part of a SIGGRAPH research paper (http://igl.ethz.ch/projects/instant-meshes/), which can pull quads from a point cloud (even more useful when you wish to port into a parametric modeller). Python and other automation are common, as is now the introduction of machine learning, and that is where Slicer and similar applications fit in also: segmentation pulling a point cloud from a density range, then converting to a mesh (3D Slicer does this, and it can be refined) and cleaning up.
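The “density range to point cloud” step described above can be sketched in a few lines of NumPy: threshold the voxel array, then map the surviving voxel indices into physical coordinates. The intensity range, spacing and origin below are made-up illustration values (in CT you would use a Hounsfield range for, say, bone); turning the mask into an actual surface mesh would be a further step (e.g. marching cubes, as Slicer’s model generation does):

```python
import numpy as np

def density_range_to_points(volume, lo, hi,
                            spacing=(1.0, 1.0, 1.0),
                            origin=(0.0, 0.0, 0.0)):
    """Return an (N, 3) point cloud of voxel centres whose intensity
    falls inside [lo, hi]."""
    mask = (volume >= lo) & (volume <= hi)           # boolean segmentation
    idx = np.argwhere(mask).astype(float)            # (N, 3) voxel indices
    return idx * np.asarray(spacing) + np.asarray(origin)  # physical coords

# toy example: an 8x8x8 volume with a dense 2x2x2 core
vol = np.zeros((8, 8, 8))
vol[3:5, 3:5, 3:5] = 1000.0
pts = density_range_to_points(vol, 500, 1500, spacing=(0.5, 0.5, 0.5))
print(pts.shape)  # (8, 3): one point per voxel in the dense core
```

This is only the thresholding half of the pipeline; the cleanup half (non-manifolds, holes) is where most of the manual effort goes.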

Item 2.) is the core aim, but the method by which one arrives at it is equally crucial; that is what the rest is about, having come from having to plan pipelines and workflows and plan ahead for automation etc. Call the wider applications, in effect, a welcome side effect of the core aim. However, I need decent data to start with to do anything, so this is the first port of call and biggest dependency; suggestions and help are welcome on the collaborative-partner element you raise.

My likely issue is that I see what others might see as complex as simple, and sometimes simplification or oversimplification can actually get in the way of scaling up pipelines and workflows if you don’t plan in and investigate the road ahead. It’s the nature of my gift that it can also be a curse when trying to explain things rather than simply being in an environment doing.

One other element for now, while struggling with having to appeal UK disability benefit cuts and barely surviving as I don’t have a maintenance grant or any other income, is a job helping on another project that can also help with putting this together as a proposal in my spare time.

Is this any better?

Cheers,
Sam

P.S. Love this; a shame it never happened, as obviously the logical progression is using big data methods and doing a massive-scale simulation to do more with all this truly useful data, using tools similar to this that now make that all the more possible: https://improbable.io/enterprise

Anyway, I digress and slide off topic a bit, but I love any of this sort of stuff, and you and your colleagues clearly have amazing minds.

Fascinating conversation! There have been some experiments building bridges between Slicer and things like Maya, Blender etc., but there is so much more potential and the systems are complex. It’s great to see experts come in and get things going. Can’t wait to see how it evolves.


Thanks Steve, Blender is indeed an option I will explore, equally for Cícero Moraes’s work; however, I still need help sourcing suitable scan data in my case, hence posting in this particular thread. The armature system I am working on is for physical models, engineering joints etc. in the real world rather than rigging inside a virtual world (in the first instance). Per Csaba’s link, and my bad, his link is Bender rather than Blender (it might be good to create a T-pose to better pull bones etc., so big thanks for that).

This is why I have a Form 2 3D printer waiting to be used. The rest I discuss is to create a proposal to demonstrate and illustrate the challenges, expertise and resources needed to implement the bigger goals. Scoping is dependent on sourcing suitable data, the crucial bit I need help with in the first instance. Then I can start throwing more questions and challenges at your collective expertise and document them, so it’s all the more properly informed.

Another list of the initial elements:
1.) R&D conventional manual methods for processing whole-body scan data into suitable high-resolution meshes and engineering them for movement and accurate touch-based detail, to be printed in 3D resin and rubber.

2.) Investigate automation and machine learning in relation to this pipeline/workflow, and illustrate it by having whole-body sets of a diverse selection of subjects to hand, to demonstrate what’s possible, what’s not possible and what would likely be possible with the right resources and expertise, along with what those resources and expertise would be.

3.) Translate that into how such a system could be scaled up to create a character/creature-based physical-to-digital accessibility system, then a robotic development system and an actual physical-to-digital interface and vice versa.

i) Implementing low-tech sculpture on top of engineered 3D printing of physiology datasets, and 3D scanning the result back into a computer, to provide accessibility in the first instance.
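The mesh-cleanup part of item 1.) can be illustrated with one concrete check for the non-manifold geometry mentioned earlier in the thread: in a clean, watertight surface mesh every edge is shared by exactly two triangles, so any edge used by three or more triangles is non-manifold. A minimal sketch over a triangle list (the example faces are made up for illustration):

```python
from collections import Counter

def non_manifold_edges(faces):
    """Return edges shared by more than two triangles.

    faces: iterable of (i, j, k) vertex-index triples.
    An edge used three or more times is non-manifold and tends to
    break slicing and remeshing tools downstream.
    """
    counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[(min(u, v), max(u, v))] += 1  # undirected edge key
    return [edge for edge, n in counts.items() if n > 2]

# two triangles sharing edge (1, 2), plus a third triangle glued
# onto that same edge -> (1, 2) becomes non-manifold
faces = [(0, 1, 2), (1, 3, 2), (1, 2, 4)]
print(non_manifold_edges(faces))  # [(1, 2)]
```

Tools like MeshLab run many such checks at once; having them scriptable is what makes the batch automation in item 2.) feasible.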

There are lots of scans available on TCIA. They tend to be from an older, sicker group (cancer patients), but there is some good anthropometric variability there.

Also, I second the idea of looking at Bender (no ‘L’), and also MakeHuman if you haven’t already.


Hi Steve,

Amazing work! (‘L’, non-linear thinking and a dash of dyslexia, having ADHD and Asperger’s myself, hence all my edits, he…he :joy:). However, I need help sourcing whole-body datasets, like I say, which is why I posted on this thread. MakeHuman is no use, as I have also previously mentioned. The bigger idea is, however, like MakeHuman, but with key underlying anatomy using real data.

This is because it’s about printing engineered physiology to create an accurate and detailed armature (écorché) system that physically exists in the real world and that is posable/moves, in the first instance. So even a blind person has a highly detailed reference to sculpt on top of with conventional sculpting materials (Chavant), so both they and people with other disabilities can use it to access and contribute to digital work, or have it stand on its own feet. The other element is translating that back again as a physical character-based interface to the digital, which many with certain disabilities are locked out of.

Equally, it is about enabling rather than dumbing down in terms of being a STEM educational tool at the same time. So even, say, a person who has autism and may be non-verbal has an opportunity to apply both their mind and an often-present gift for detail and spatial awareness (equally synaesthesia and hypersensitivity of the senses), hence a less overwhelming approach than directly interfacing with a computer in the first instance, providing novel ways in which to demonstrate hidden abilities that could then be applied in many spheres.

So full data sets of whole humans and/or animals truly are a necessity to do anything; I was just hoping to make them a diverse selection to reflect a project which is, at its core, about disability accessibility and enablement.

Cheers,
Sam

Oooo, I did just notice only one entry for MyelomaTT3PET, which is a whole-body PET scan coming soon, so that’s a definite possibility if the quality and resolution of a PET scan, or that particular PET scan, is good enough to pull the needed detail, once it’s up. And the archive has wonderful licence conditions too.

We have an ageing population, so it would be a wholly fair and representative data set if so. My nan lived with us when I was growing up, so she was like a second mum, and I equally helped care for her when she started having strokes and senile dementia kicked in. Not to mention my own parents are now inching into that bracket themselves, and I don’t even see them as old; my dad has already had colon cancer himself, luckily caught quickly enough and stayed in remission, and he still works and both are still very active.

