Speeding up Slicer builds with CCache

Building 3D Slicer is a very slow process. I’ve been experimenting with CCache to speed the process up and came up with PR #5998. Here is some background information on the idea.

What is CCache?

Ccache is a compiler cache. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again. The first compilation sees no performance benefit, but subsequent compilations of the same code are served from the cache and run much faster.

CCache is well tested on Linux and macOS. On Windows it is not fully supported, though it may work (see CCache support). The proposed Slicer integration only targets Unix systems (Linux/macOS).

First results

[Figure: ccache results]

These results have been obtained with:

  • Intel(R) Xeon(R) CPU E3-1535M v6 @ 3.10GHz
  • M.2 SSD storage for the cache

How to use it? PR #5998

  • Install ccache on your system (e.g., apt install ccache on Ubuntu).
  • Configure Slicer with -DSlicer_USE_CCACHE=ON.
  • Build Slicer once so that ccache can populate the cache.
  • Subsequent Slicer builds (even from fresh repository clones) will use the cache, as long as -DSlicer_USE_CCACHE=ON is set; see the sketch below.
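
As a concrete sketch (assuming Ubuntu, a sibling Slicer-build directory, and the -DSlicer_USE_CCACHE=ON option from PR #5998; all paths are placeholders):

  # Install ccache from the distribution packages
  sudo apt install ccache

  # Configure a fresh build with the option proposed in PR #5998
  cd Slicer-build
  cmake -DSlicer_USE_CCACHE:BOOL=ON -DQt5_DIR:PATH=/path/to/Qt5 ../Slicer

  # The first build populates the cache; subsequent builds reuse it
  make -j"$(nproc)"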

Discussion

Considering the initial results, I think this can be useful for developers who deal with multiple builds of Slicer (e.g., spawning multiple Slicer worktrees from the same repository to work on different issues). If used for the continuous integration process (e.g., a nightly build of pre-cached containers that are then used for CI), this could potentially reduce the energy footprint of the project and lead to more agile CI. I think it would also be interesting to publish the pre-cached containers so they can be used as a build environment with all the speedup benefits.

Please feel free to give feedback on this idea.


Thank you for working on this.

Based on what you have described above and what is in the “Why bother?” section, it is not clear how ccache would be useful for me. In general I try to avoid working on multiple branches at once. While I have a few build trees on some of my computers that I use for parallel development, these have to be built only once and re-compiled hundreds of times, so speeding up the first compilation does not matter much.

Using ccache has several limitations/disadvantages:

  • I don’t see how this could work for debug symbol (pdb) files. If there are no debug symbols then this is completely unusable for debug builds.
  • There is a slight slowdown of every build for the files that have actually changed, due to the cache check and update. This happens most of the time during development (when I modify a file, build, and test). I’ve seen a 10% slowdown reported for a similar build cache project.
  • The cache takes disk space, which may matter on the very fast but relatively small SSD drives, especially on laptops.
  • It complicates the build process and may introduce errors. I very rarely do a clean build, and when I do, I want to be sure that it is completely clean and that no previously cached content plays any role. I don’t think it is possible to make it robust against a changing environment and potential CMake errors (e.g., if dependencies are not handled correctly, ccache may make things worse or expose the problem more often).

Caching could be useful for continuous integration, though. In that use case the above limitations are not relevant, and we do rebuild the same files in the same build environment potentially many times. While we cache all the dependencies in the Docker image, there is no caching of the Slicer build files. Caching could speed up these builds significantly.
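
For instance (a hypothetical CI sketch, not the project’s actual setup), the cache directory could live on a volume that survives between runs, using ccache’s standard environment variables:

  # Hypothetical CI step: keep the compiler cache on a persistent volume
  export CCACHE_DIR=/cache/ccache    # survives between CI runs
  export CCACHE_MAXSIZE=10G          # bound the cache size

  ccache -z                          # reset statistics for this run
  make -j"$(nproc)" -C Slicer-build  # normal Slicer build, now cached
  ccache -s                          # report hits/misses at the end of the job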

The implementation of patching numerous CMakeLists.txt files seems a bit too invasive. In ParaView they enable ccache by using symbolic links. We could easily do this in Slicer’s Docker image (a complication could be sharing these cache files, but maybe we could store them in the image itself). Alternatively, since in Slicer’s build system we already pass the compiler executable (CMAKE_CXX_COMPILER) to all external projects, could we just change the value of this variable to the ccache executable?

Thank you @lassoan for your insights.

I agree with you that the benefits for continuous integration are more evident than for developers. Different developers have different use cases and workflows, and the only way to know for sure whether ccache works for you is to use it for a while and look at the cache statistics (ccache -s). There are some other points I’m not completely aligned with:

That is true, caching has an overhead. Arguably, more often than not, your code changes are local, and a 10% increase on quick compilations will not be noticed by the developer. A 5x speedup on large compilations, however, will be noticeable every time. There are also situations where infrastructure code (CMake, etc.) can trigger unnecessary compilations of large portions of the project. An example of this is building a fresh Slicer (make) and hitting make again: even though the project was just built, there will be a substantial recompilation. For these cases, ccache has you covered.

In my tests, I have a cache smaller than 4GB for Slicer. I think it is pretty affordable, but I guess this is a use-case consideration.
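For reference, the cache size and hit rate can be inspected, and the size bounded, with standard ccache options (part of ccache itself, not of the Slicer integration):

  ccache -s     # show current cache size and hit/miss statistics
  ccache -M 4G  # cap the maximum cache size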

ccache is very robust. The caching is based on 1) the compiler binary and its timestamp, 2) the compiler options used, 3) the contents of the source file, and 4) the contents of all the included files. All of these together give a pretty good fingerprint. I have been using ccache to build Gentoo for years; Gentoo compiles every package in the operating system in a rolling-release fashion, and there the environment changes very frequently. I would trust a fresh ccache-assisted build more than a build directory reused across different branches.
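
As an illustration of the knobs involved (a sketch based on the ccache manual; the configuration file location and option names may vary between ccache versions):

  # ~/.config/ccache/ccache.conf (or ~/.ccache/ccache.conf on older versions)
  max_size = 10G
  # Hash the compiler content instead of its mtime, so equivalent compilers
  # installed at different times can still share cache entries.
  compiler_check = content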

I agree that patching CMakeLists.txt files is not my preferred way of doing things. It is also true that the implementation is very succinct (to me it looks only minimally invasive) and it does not alter the behavior of the underlying libraries, just the way they are built. I think the ideal scenario would be to convince the underlying projects to introduce a flag for ccache (SimpleITK, for instance, already has one). I don’t like the symlinking approach because you have to modify your OS globally for just one of its projects; it is easy to forget that your compiler is now something else, and you may spend cache space on projects you are not interested in. It could be a valid approach for continuous integration, though.

I haven’t had a look at changing CMAKE_CXX_COMPILER; maybe that does the trick. I’ll give it a try and let you know.

Maybe using CCache for continuous integration could be a good ProjectWeek project?

@RafaelPalomar

Thanks for the thorough description and pull request :pray:

Initiatives to improve the developer experience are great :fire:

Also, thanks again for your patience and time working through this :100:

Alternative approach

Did you consider this approach to configure Slicer?

cd Slicer-build

CC=/path/to/ccache  \
CXX=/path/to/ccache \
cmake -DQt5_DIR:PATH=/path/to/Qt5 ../Slicer

CTestUseLaunchers

Setting the RULE_LAUNCH_COMPILE property in projects (as currently proposed in PR-5998) may conflict with the CTestUseLaunchers module.
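
For context, a RULE_LAUNCH_COMPILE-based integration typically looks like the sketch below (not the exact code from PR-5998). The CTestUseLaunchers module sets the same property, which is where the two can collide:

  # CMake sketch: prefix every compile rule with ccache
  find_program(CCACHE_EXECUTABLE ccache)
  if(CCACHE_EXECUTABLE)
    # CTestUseLaunchers also manipulates RULE_LAUNCH_COMPILE, hence the potential conflict
    set_property(GLOBAL PROPERTY RULE_LAUNCH_COMPILE "${CCACHE_EXECUTABLE}")
  endif()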

@jcfr thanks a lot for checking this out. Any attempt to change CMAKE_CXX_COMPILER/CMAKE_C_COMPILER or to set the CC/CXX environment variables leads to:

CMake Error at /usr/share/cmake-3.18/Modules/CMakeTestCCompiler.cmake:66 (message):
  The C compiler

    "/usr/bin/ccache"

  is not able to compile a simple test program.

  It fails with the following output:

    Change Dir: /home/rafael/src/Slicer/slicer-ccache/build/CMakeFiles/CMakeTmp
    
    Run Build Command(s):/usr/bin/ninja cmTC_4dcff && [1/2] Building C object CMakeFiles/cmTC_4dcff.dir/testCCompiler.c.o
    FAILED: CMakeFiles/cmTC_4dcff.dir/testCCompiler.c.o 
    /usr/bin/ccache    -o CMakeFiles/cmTC_4dcff.dir/testCCompiler.c.o -c testCCompiler.c
    ccache: error: missing equal sign in "CMakeFiles/cmTC_4dcff.dir/testCCompiler.c.o"
    ninja: build stopped: subcommand failed. 

This may be useful: Speeding up C++ GitHub Actions using ccache - Cristian Adam. It seems you need to pass the CMAKE_C_COMPILER_LAUNCHER and CMAKE_CXX_COMPILER_LAUNCHER variables to the external projects.
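
In other words, something along these lines at configure time (a sketch; in a superbuild these launcher variables would also have to be forwarded to each external project):

  cmake -DCMAKE_C_COMPILER_LAUNCHER=ccache \
        -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
        -DQt5_DIR:PATH=/path/to/Qt5 \
        ../Slicer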

Based on comments on that page, ccache does not support debug symbols (pdb files), so ccache is not suitable for debug builds (at least on Windows).

Thanks for the report.

Following the ccache run-modes instructions may be helpful: https://ccache.dev/manual/4.4.2.html#_run_modes

After setting up the symlinks in a directory of your choice, you would then set CC and CXX to the relevant “compiler” symlinks.
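
For example, a sketch of that masquerading setup assuming GCC (adapt the compiler names and the directory as needed):

  mkdir -p "$HOME/ccache-bin"
  ln -s "$(command -v ccache)" "$HOME/ccache-bin/gcc"
  ln -s "$(command -v ccache)" "$HOME/ccache-bin/g++"

  # Invoked through these symlinks, ccache finds the real gcc/g++ on PATH
  CC="$HOME/ccache-bin/gcc" CXX="$HOME/ccache-bin/g++" \
    cmake -DQt5_DIR:PATH=/path/to/Qt5 ../Slicer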

@jcfr thanks for the info. Symlinking and setting CMAKE_CXX_COMPILER and CMAKE_C_COMPILER to the symlinks seems to do the trick. This approach has the advantage that everything will be “ccached”; in PR-5998 not everything is “ccached” (e.g., subprojects of external dependencies such as PythonQt).

@jcfr I have run CTest after a fresh build with ccache and there do not seem to be any problems, if that helps on this point.

@lassoan for now, ccache can only be enabled on UNIX systems; it seems ccache does not have the same level of support for Windows users. On Linux I could debug a ccached build without problems.

@jcfr, @lassoan: maybe we can discuss this in the next developers meeting. There’s no need to rush this in if there are concerns and available alternatives.

Thank you both for the nice discussion.