Many-core experiment

Just as a test, I created an N2D-type virtual machine on Google Cloud with 224 cores running Ubuntu 20.04.

I did a standard Linux build with the default settings, so SimpleITK and testing were enabled.

It took just under 20 minutes to build everything with make -j 500.

Watching top, there were a few places where a single perl or cmake process was a chokepoint, but it was amazing to see how fast the C++ compilation went.
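For anyone who wants to reproduce it, here is a rough sketch of what that looks like; the instance name, zone, disk size, and Qt path are placeholders rather than the exact values I used, and the usual build prerequisites (Qt, compiler toolchain) still have to be installed first.

```bash
# Sketch only: create a 224-vCPU N2D instance running Ubuntu 20.04
# (instance name, zone, and disk size are placeholders).
gcloud compute instances create many-core-test \
    --zone=us-central1-a \
    --machine-type=n2d-standard-224 \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=200GB

# On the VM: a default Slicer superbuild (SimpleITK and testing stay
# enabled by default), assuming Qt and the toolchain are already installed.
git clone https://github.com/Slicer/Slicer.git
mkdir Slicer-SuperBuild && cd Slicer-SuperBuild
cmake -DCMAKE_BUILD_TYPE:STRING=Release \
      -DQt5_DIR:PATH=/path/to/Qt5/lib/cmake/Qt5 \
      ../Slicer
make -j 500
```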


Oh, and I should mention the machine costs under $6 per hour.

Wow. On some computers, downloading the binary takes longer than this build time.

It could be interesting to build with TBB support enabled.
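If TBB does get enabled, ITK 5 also lets the threading backend be chosen at run time through environment variables. A small sketch, assuming the build above was configured against a TBB-enabled ITK:

```bash
# Only takes effect if ITK was compiled with TBB support.
export ITK_GLOBAL_DEFAULT_THREADER=TBB          # select ITK 5's TBB threader
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=224 # match the VM's core count
./Slicer-SuperBuild/Slicer-build/Slicer
```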


I wasn’t able to try running it because I couldn’t attach a GPU to the instance. I think the limit is 16 cores if you need a GPU. But we could probably try some CLIs, or tunnel X through ssh.
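For the X-over-ssh idea, the usual pattern would be something like the following; the username and address are placeholders, and the launcher path assumes the superbuild layout from the earlier post.

```bash
# X forwarding over ssh; -C compresses the stream, which matters over a WAN.
ssh -X -C ubuntu@EXTERNAL_IP_OF_VM
# then, on the VM:
./Slicer-SuperBuild/Slicer-build/Slicer
```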

In theory, it should be possible to use a software renderer (Mesa), but I guess it may not be trivial to set up.

The problem I ran into is that I couldn’t start the X server without a display card on the system. It might be possible to do that with Xdummy, as is done in the containerized Slicer.
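One possible recipe, untested here, is a virtual framebuffer X server plus Mesa’s software OpenGL; package names are for Ubuntu 20.04 and the launcher path again assumes the build layout above.

```bash
# Xvfb provides a display without any graphics hardware;
# LIBGL_ALWAYS_SOFTWARE forces Mesa's software renderer for the 3D views.
sudo apt-get install -y xvfb libgl1-mesa-dri
export LIBGL_ALWAYS_SOFTWARE=1
xvfb-run --server-args="-screen 0 1920x1080x24" \
    ./Slicer-SuperBuild/Slicer-build/Slicer
```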

Or actually, it could be interesting to just run the Docker Slicer version and see how it performs. It should be possible to build and run a TBB-optimized Slicer inside the container environment, and it should be able to access the same cores.
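A rough sketch of that; the image name is a placeholder since I haven’t checked which tag the containerized Slicer currently uses.

```bash
# SLICER_IMAGE is a placeholder for the containerized Slicer image.
# Docker exposes all host CPUs to the container by default, so no special
# flags are needed to reach the 224 cores.
docker run --rm -ti SLICER_IMAGE

# Quick check that the container really sees every core
# (assumes a coreutils-based image):
docker run --rm SLICER_IMAGE nproc
```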

Is there any particular multithreaded workload that would be worth trying?

Loading a very large volume, interactively segmenting it (with 3D display), and visualizing it could be interesting. Also, sequence registration with the sample cardiac sequence data sets (using SlicerElastix).

But I guess this massively parallel system could be best leveraged with your parallel CLI execution module.
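Until then, a simple way to fan CLI work out over the cores from the shell is sketched below; the module (N4ITKBiasFieldCorrection), file layout, and job count are just example placeholders.

```bash
# Hypothetical sketch: run one CLI module process per input volume,
# up to 224 at a time. GNU parallel's {/} is the input file's basename.
# Needs GNU parallel: sudo apt-get install -y parallel
# Cap per-process ITK threads so simultaneous jobs don't oversubscribe.
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=1
ls input/*.nrrd | parallel -j 224 \
    ./Slicer-SuperBuild/Slicer-build/Slicer --launch N4ITKBiasFieldCorrection \
        {} output/{/}
```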