Slicer extension index update procedure

Different approaches are used for updating the extension index. One is to pin specific git hashes, another is to reference a branch name. An interesting discussion started around a pull request; I am copying it here so that more developers can join in.

From @lassoan:

Manually updating git hashes takes time and is error-prone: you can make copy-paste errors, update the wrong s4ext file or the wrong branch of the ExtensionIndex, etc. It should be avoided.
If for some reason you feel safer with having specific git hashes in the s4ext file then the update should be fully automatic. You can use @jcfr’s extension wizard scripts.
However, a much cleaner and more controllable workflow is to use a protected release branch and set the branch name in the s4ext file instead of a specific hash. The release branch can be master or any other name. Protected branches can be configured to require reviews and to be writable only by certain users. Therefore, instead of asking Slicer core developers to blindly merge a new git hash, you control what gets into the release in your own repository, through the rules you set for the protected branch.
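To make the difference concrete, here is a hypothetical s4ext entry (extension name and URL are made up for illustration); only the scmrevision line differs between the two approaches:

    # MyExtension.s4ext (hypothetical example)
    scm git
    scmurl https://github.com/example-lab/SlicerMyExtension.git
    #
    # Option A: pin a specific commit; the index must be updated for every release.
    # scmrevision 7f3c2a9d5e...
    #
    # Option B: track a protected release branch; releases are then controlled
    # entirely by the rules set on that branch in the extension repository.
    scmrevision release
    depends NA
    build_subdirectory .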

From @ihnorton:

The protected branch is nice, simple, and almost zero-effort, so I think that would be great to encourage for all extensions where people don’t want to build off master in the short-term. :+1:
Another potential approach is a release bot. There are a few around, but the one I’m most familiar with is “attobot” for Julia packages. It is a travis hook that runs on AWS Lambda; essentially, it watches for release tags on a repository and automatically makes a validated PR to the central index when a new release is created. Automated updates were imperative for Julia, with close to 2000 packages, because full manual review of every update PR by only 3-4 people was becoming a big time sink and a blocker. It’s not so much of an issue for ~100 extensions, but clean, validated automation is always nice in any case. The code is quite short and MIT-licensed, so it wouldn’t be too crazy to adapt it (if/when we have full continuous integration for extensions).
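To make the idea concrete, here is a rough Python sketch of the pattern such a bot follows: react to a GitHub release event, branch the index repository, point the extension's s4ext entry at the new release, and open a pull request for review. The repository names, token handling, and file layout are assumptions for illustration only, and this is not attobot's code; only the GitHub REST endpoints used are real.

    # Hypothetical sketch of an attobot-style release bot (an illustration of the
    # pattern, not attobot's actual implementation). Assumes a GitHub "release"
    # webhook payload and a token allowed to create branches and PRs on the index.
    import base64
    import requests

    GITHUB_API = "https://api.github.com"
    INDEX_REPO = "Slicer/ExtensionsIndex"   # assumed target index repository
    TOKEN = "<bot token>"                   # e.g. read from the environment

    def handle_release_event(payload):
        """Open a pull request against the index when an extension publishes a release."""
        if payload.get("action") != "published":
            return
        repo = payload["repository"]["full_name"]   # e.g. "somelab/SlicerMyExtension"
        name = payload["repository"]["name"]
        tag = payload["release"]["tag_name"]
        headers = {"Authorization": f"token {TOKEN}"}

        # Resolve the release tag to a commit sha (annotated tags would need one
        # extra dereference; omitted for brevity).
        sha = requests.get(f"{GITHUB_API}/repos/{repo}/git/ref/tags/{tag}",
                           headers=headers).json()["object"]["sha"]

        # 1. Create a working branch off the index repository's default branch.
        base_sha = requests.get(f"{GITHUB_API}/repos/{INDEX_REPO}/git/ref/heads/master",
                                headers=headers).json()["object"]["sha"]
        branch = f"update-{name}-{tag}"
        requests.post(f"{GITHUB_API}/repos/{INDEX_REPO}/git/refs", headers=headers,
                      json={"ref": f"refs/heads/{branch}", "sha": base_sha})

        # 2. Rewrite the scmrevision line of the extension's s4ext entry
        #    (assumes the entry file is named after the repository).
        path = f"{name}.s4ext"
        current = requests.get(f"{GITHUB_API}/repos/{INDEX_REPO}/contents/{path}",
                               headers=headers).json()
        text = base64.b64decode(current["content"]).decode()
        text = "\n".join(f"scmrevision {sha}" if line.startswith("scmrevision") else line
                         for line in text.splitlines())
        requests.put(f"{GITHUB_API}/repos/{INDEX_REPO}/contents/{path}", headers=headers,
                     json={"message": f"Update {path} to {tag}", "branch": branch,
                           "sha": current["sha"],
                           "content": base64.b64encode(text.encode()).decode()})

        # 3. Open the pull request so the usual review and CI rules still apply.
        requests.post(f"{GITHUB_API}/repos/{INDEX_REPO}/pulls", headers=headers,
                      json={"title": f"Update {name} to {tag}", "head": branch,
                            "base": "master",
                            "body": f"Automated update for release {tag} of {repo}."})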


@ihnorton, this is very interesting. Do you know how Julia packages are managed?

  • What are the acceptance criteria for new revisions of a package? All tests defined in the package should pass?
  • What happens if there are test failures? What happens if tests fail due to Julia core changes?
  • Is there a dashboard where results can be monitored? Who keeps an eye on the dashboard? What happens if problems are found with some packages?

Current requirements to register a package are listed here. Essentially: an OSI-approved license, and the package must list supported Julia and dependency versions. For new revisions, attobot/testing mostly ensures that the package and all of its dependencies can be installed and loaded.

All tests defined in the package should pass?

As far as I know, there is no official requirement for tests right now, but when a registration PR is made, I think the author may be kindly asked to add some. Most do; very nice testing tools are possible in a language with full AST macros. The compiler also supports tracking coverage by line, and a package provides integration with a coverage reporting service (it notifies if a PR changes coverage by more than X%).

This is the responsibility of the authors. Package development is currently more bazaar than cathedral. (At first, of course, everyone was just happy to have people contributing packages, but the criteria have become a little more stringent over time. Some scaling and curation needs were identified [1], so a transition to a new package manager for Julia 1.0 is currently in progress.)

Sort of; see https://pkg.julialang.org/pulse.html, which shows packages that pass their tests for each core release version, and also passing->failing status changes (at the bottom).

Everyone uses Travis and Appveyor to run tests, so most mature packages run tests on each PR. Like most open source, there is a range of maintenance commitment/availability by package developers, and varying contributor bases. There are companies maintaining various packages, which can indicate higher maintenance commitment in some cases.

If the original author/maintainer is unavailable to merge PRs and no one else has access, then the index can relatively easily be switched to point to a more maintained fork.

The language is stabilizing for 1.0 release this year, but of course there have been a number of core changes over time as the language was developed. Several strategies and tools have emerged:

  • the language has a deprecation system that keeps old syntax and many deprecated non-syntax features working during the deprecation period, and users get (opt-out) warnings when loading code that uses them.
  • there is a compatibility package, which provides backward (and sometimes forward) compatible macros across versions (often as simple as putting @compat in front of some deprecated construct, and the macro figures out what code should be generated for the running version).
  • a bot, FemtoCleaner.jl (https://github.com/JuliaComputing/FemtoCleaner.jl), is available which can automatically upgrade most syntax changes; it makes automatic pull requests against registered packages
  • there is a tool, PackageEvaluator.jl (https://github.com/JuliaCI/PackageEvaluator.jl), that automatically runs all tests in all registered packages against a commit or PR. This is run for most major core changes, and many regressions are caught ahead of time with this system. It can also help identify which constructs are heavily used, to give a sense of what new rules need to be added to FemtoCleaner prior to a release.

—

[1] The main issues with the current system are: the need for namespaced packages; the need to support multiple, independent, federated indexes so that different groups can curate for their own needs (including companies with proprietary internal code); and the need to stop using git to manage the package index: the number of files and revisions in METADATA.jl scales very poorly at the current size, especially on Windows and NFS fileshares.


federated indexes so that different groups can curate for their own needs (including companies with proprietary internal code)

While the new implementation of the Slicer extension manager already supports different applications (Slicer, CustomSlicer1, …), we are currently discussing the concept of a channel that would allow Slicer users to select the “official extensions” channel, a channel from a specific lab, …


What we found to be a practical approach to work around the limitations of the current Slicer testing infrastructure and other Slicer-related issues is to design new functionality in a modular fashion and to develop CLI modules so that they can be built both as part of Slicer and as standalone projects using a CMake superbuild. The advantages of this approach are numerous:

  • we can use standard continuous integration tools for testing (a much more intuitive interface, notifications, integration with GitHub, orders of magnitude faster test execution, zero maintenance of the server, and the ability to ssh into the server VM using GitHub credentials to troubleshoot failures if needed)
  • we can use CI for generating standalone packages of the tools that can then be used independently of Slicer
  • developers can build extensions without having to build the whole Slicer app
  • CLI functionality is accessible without the baggage of Slicer dependencies, if users want to use just the CLI tools
  • the standalone packages need no launcher, which makes them a lot more intuitive to use from the command line on Windows compared to the Slicer-packaged CLIs
  • size of a docker image containerizing the tool is a lot smaller

We applied this approach to dcmqi, and more recently to PkModeling, and I really like it. The initial effort was significant, with @jcfr doing all the heavy lifting for dcmqi, but we were able to do the same thing for PkModeling without much trouble.

Going forward, I think it would be quite helpful to have a CLI extension skeleton generator that incorporates a CMake superbuild to allow Slicer-independent extension builds.
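For reference, a minimal sketch of what the top level of such a superbuild-enabled project could look like (project and dependency names are placeholders; a real extension would add its actual dependencies and the usual Slicer extension boilerplate):

    # Hypothetical top-level CMakeLists.txt for a CLI module that can be built
    # standalone (superbuild) or as part of Slicer (inner build only).
    cmake_minimum_required(VERSION 3.13)
    project(MyCLITool)

    option(MyCLITool_SUPERBUILD "Build dependencies via ExternalProject" ON)

    if(MyCLITool_SUPERBUILD)
      include(ExternalProject)

      # Build a dependency (ITK used here as a placeholder) from source.
      ExternalProject_Add(ITK
        GIT_REPOSITORY https://github.com/InsightSoftwareConsortium/ITK.git
        GIT_TAG v5.2.1
        CMAKE_ARGS -DBUILD_TESTING:BOOL=OFF -DBUILD_EXAMPLES:BOOL=OFF
        INSTALL_COMMAND ""
        )
      ExternalProject_Get_Property(ITK binary_dir)

      # Re-enter this project as an "inner" build that uses the dependency above.
      ExternalProject_Add(MyCLITool-inner
        SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}
        CMAKE_ARGS
          -DMyCLITool_SUPERBUILD:BOOL=OFF
          -DITK_DIR:PATH=${binary_dir}
        DEPENDS ITK
        INSTALL_COMMAND ""
        )
    else()
      # Inner build: dependencies are provided either by the superbuild above
      # or by Slicer when the module is built as part of a Slicer extension.
      find_package(ITK REQUIRED)
      include(${ITK_USE_FILE})
      add_subdirectory(MyCLIModule)
    endif()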


I like this approach more because of its simplicity. Using buildbots and more complicated configurations increases the “barrier to entry” for the Slicer ecosystem.

@ihnorton Thank you for the answers. It is interesting to see how an extension system can scale. I’ve learned a lot.
