New slicer-skill AI tool

After learning from previous experiments and from several productive discussions at project week and at the last developer meeting, I put together this slicer-skill that can be used with coding agents like Claude Code.

It involves pulling down all the source code for Slicer and libraries like VTK, ITK, etc., together with the Discourse archives and other material such as the source code for all the extensions. It also includes a new MCP server so that the agent can iterate on programming tasks.

I think this crosses a real threshold of utility, where I could give it a pretty high-level description of a task and it would accomplish it with no further intervention from me.

In this case I gave it two prompts. First: “use the slicer mcp server connection to download the MRHead sample data and set it up to volume render in the 3d viewer. Use the slicer-skill if you need to figure out how to do it.”

And then: “Can you use the screencapture module to generate a video that shows the MRHead volume rendering going through the range of settings of the shift slider in the volume rendering module?”

And it generated this:

(video: shift_sweep)

The AI, Claude in this case, examined the source code and figured out how to implement this, and even pulled back screenshots of the app to check whether the volume rendering looked correct.

This is significantly advanced over similar experiments I tried a year ago, or even last week. This is all new, so there are surely ways to improve it, but everything needed to experiment with this is in the slicer-skill repo. I’m really excited to see what we can build with this.


Steve, that is great, or to paraphrase the sergeant in Lawrence of Arabia meeting with Allenby: bloody marvelous! That would be very helpful for Alissa and musculoskeletal model integration. R

Ron N Alkalay
Associate Professor, Dept of Orthopedic Surgery, Harvard Medical School
Center for Advanced Orthopedic Studies, Beth Israel Deaconess Medical Center
1 Overland Street, Boston, MA 02215
Tel. 617-667-5185
Fax. 617-667-7175
Email: rn_alkalay@bidmc.harvard.edu

I have been waiting for this for the past two weeks. I feel a bit sad about my selfishness, but I'm happy that I waited. Thanks for sharing this.

This sounds very useful. Could it be used to fix bugs in extensions? As an example task, I am finding the ANTs extension can no longer apply fixed masks (for hiding parts of the volume). It would be interesting to see if Claude could nail the problem and provide a fix for the next version.

No skill required (though I suspect it could automate the solution quite effectively).

Given a git clone of SlicerANTs and Slicer, Claude Code found the bug in about a minute, and OpenAI Codex concurs.

I had Claude Code file a pull request on SlicerANTs:

–Mike


@mhalle Thank you very much for the pull request!

@sulli419 please apply the code change in the pull request by @mhalle and let us know if it fixes the masking problem for you. If it does then we’ll integrate those changes. Thanks!


This is super helpful, thanks Steve. I believe 3D Slicer will enter a new era of development and updates due to the integration of tools and greater democratization. Super exciting to see everything working together, and this is just the beginning.

I made a simplified version: slicerSkill. This skill doesn't require you to download any source code or configure any environment variables. Its core is simply plain prompt instructions.



Great to see all the excitement here!

@sulli419 Yes, just in the last few days this approach has helped me with development and debugging, such as a tricky VTK pipeline manipulation.

Thanks for jumping in here @mhalle and thanks for getting the ball rolling on all this Clauding!

@jumbojing I like the no-install option and we'll see how it compares in practice. To me, having all the git repositories cached locally means that certain operations will be more efficient, but doing everything with web searches is also appealing. Let's all experiment with different options and we should be able to come up with a skill that we'll want to put in the Slicer organization on GitHub.


@pieper Based on the prompts you provided, this MCP has indeed started manipulating Slicer, without downloading any source code or configuring any environment variables. The simplified version: slicerSkill

First: “use the slicer mcp server connection to download the MRHead sample data and set it up to volume render in the 3d viewer. Use the slicer-skill if you need to figure out how to do it.”

And then: “Can you use the screencapture module to generate a video that shows the MRHead volume rendering going through the range of settings of the shift slider in the volume rendering module?”

The following is the answer from opencode running Kimi 2.5:

Perfect! I've successfully created an animation showing the MRHead volume rendering with the shift slider effect. Here's what was accomplished:
✅ Animation Created Successfully!
What was done:
1. Loaded MRHead sample data in Slicer
2. Enabled volume rendering with CT-Cardiac3 preset
3. Created 20 animation frames showing the shift slider effect
4. Saved frames to: /tmp/vr_anim/
The Animation Effect:
The script animates the shift of the transfer function (similar to the shift slider in the Volume Rendering module) using a sine wave pattern:
- Shifts the color and opacity transfer functions by ±150 HU units
- Creates a smooth back-and-forth animation showing different tissue densities
- Each frame shows a different "window" of the volume data
Files Created:
- 20 PNG frames in /tmp/vr_anim/:
  - frame_0000.png through frame_0019.png
  - Each frame is ~500KB - 1.3MB
  - Total: ~13MB of frames
To view the animation:
Option 1: Open frames in Slicer
open /tmp/vr_anim/frame_0000.png
# Or browse all frames in an image viewer
Option 2: Create a video (if you have working ffmpeg)
cd /tmp/vr_anim
ffmpeg -framerate 10 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p output.mp4
Option 3: Use Python to create GIF
from PIL import Image
import glob
frames = [Image.open(f) for f in sorted(glob.glob('/tmp/vr_anim/frame_*.png'))]
frames[0].save('/tmp/animation.gif', save_all=True, append_images=frames[1:], duration=100, loop=0)
The Python Script Used:
The script is saved at /tmp/animate_volume_rendering.py and can be reused. It:
- Gets the volume rendering display node and transfer functions
- Animates by shifting the transfer function range
- Uses ScreenCapture module to capture each frame
- Shows/hides view controllers for clean captures
Would you like me to help you convert these frames to a video format, or would you like to adjust the animation parameters (more frames, different shift range, etc.)?
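The sine-wave shift schedule described in the transcript above can be sketched in plain Python. The ±150 HU range and 20-frame count come from the transcript; the function name below is my own invention, not what the generated script actually used:

```python
import math

def shift_schedule(n_frames: int = 20, amplitude: float = 150.0) -> list[float]:
    """Return one transfer-function shift value (in HU) per frame,
    following a sine wave so the sweep eases back and forth smoothly."""
    return [amplitude * math.sin(2 * math.pi * i / n_frames)
            for i in range(n_frames)]

# The sweep starts at 0, peaks at +150 HU a quarter of the way
# through, and dips to -150 HU at three quarters.
shifts = shift_schedule()
```

Each value would then be applied to the color and opacity transfer functions before capturing that frame.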


@mhalle @pieper
Awesome. Thanks for your help! GPT 5.2 Thinking guessed a similar fix, without seeing all the files. This saves me tons of time, as I'm fairly new to this.

I will give it a try. There was also some odd moving-mask behavior when I used both masks, so I'll test this as well. Forgive me, but I'm not familiar with the Git workflow and how far along we are. "I see simonoxen (the developer) merged commit 9dedda4 into netstim:master". Does this imply that if I uninstall and reinstall the extension it will be fixed, or should I replace one of the .py files manually?

Cheers

I believe that tomorrow's nightly build will have the fix. If you have a relatively recent Slicer nightly build, it is quicker and easier to just replace the changed Python file.

I've also modified Steve's slicer skill and produced a new version. Here's my version:

You can download the skill itself here:
https://github.com/mhalle/slicer-skill/releases/latest/download/slicer-skill.skill

Note that this zip file can be installed into Claude Code or the Claude.ai web interface, so you can debug and explore Slicer through a web browser, even on your phone! (No, that doesn't mean actually running Slicer on your phone.)

Here’s a good test to see if it’s working:

  • Download and install the skill on Claude.ai. (You need to be on at least the $20/month Pro plan.)
  • Start a new chat.
  • Ask Claude to load the slicer skill.
  • Ask it to find bugs in .

Also, try asking about your favorite topic on Slicer Discourse.

This skill is compliant with the evolving https://agentskills.io spec. It splits the difference between Steve's version (which downloads an enormous amount of source code and the entire Slicer Discourse corpus) and @jumbojing's version, which doesn't download anything.

My skill downloads a shallow clone of Slicer and the extension repo, but it uses the Discourse search mechanism rather than downloading all of the forum. It also downloads extensions only on demand.
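For reference, Discourse forums expose a JSON search endpoint, so a skill can query the forum without mirroring it. A minimal sketch of building such a query (the function name is mine; discourse.slicer.org is the Slicer forum, but I haven't confirmed this is how the skill constructs its requests):

```python
from urllib.parse import urlencode

DISCOURSE = "https://discourse.slicer.org"  # Slicer's Discourse forum

def search_url(query: str, page: int = 1) -> str:
    """Build a URL for Discourse's /search.json endpoint."""
    return f"{DISCOURSE}/search.json?{urlencode({'q': query, 'page': page})}"

url = search_url("volume rendering shift slider")
```

Fetching that URL returns JSON with matching topics and posts, which an agent can read directly instead of downloading the whole forum.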

All that makes the skill much more straightforward and much faster.

It downloads the Slicer source and extensions into a shared cache rather than the current directory. The AI agent does the downloading on demand, so you don't have to do anything like running setup.sh.
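As a rough sketch of the on-demand, shared-cache idea (the cache location and function names here are my own invention, not necessarily what the skill actually does):

```python
import hashlib
import os
import subprocess

# Assumed cache location; the real skill may use a different path.
CACHE_ROOT = os.path.expanduser("~/.cache/slicer-skill")

def repo_cache_path(url: str) -> str:
    """Map a repository URL to a stable directory inside the shared cache."""
    name = url.rstrip("/").rsplit("/", 1)[-1].removesuffix(".git")
    digest = hashlib.sha256(url.encode()).hexdigest()[:8]  # disambiguate forks
    return os.path.join(CACHE_ROOT, f"{name}-{digest}")

def ensure_repo(url: str) -> str:
    """Shallow-clone the repo on first use; later calls reuse the cache."""
    path = repo_cache_path(url)
    if not os.path.isdir(path):
        subprocess.run(["git", "clone", "--depth", "1", url, path], check=True)
    return path

path = repo_cache_path("https://github.com/Slicer/Slicer")
```

A shallow clone (`--depth 1`) keeps the download small while still giving the agent the full current source tree to search.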

If you ask the skill if there is an updated version, it should be able to get it for you.

I specifically disabled the MCP server from starting in the skill. MCPs are potential security risks and should be explicitly activated by users. There’s a sample .mcp.json.sample file if you want to activate it.
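For illustration, a Claude Code `.mcp.json` generally looks like the fragment below; the server name and command here are placeholders, not the actual contents of the skill's .mcp.json.sample:

```json
{
  "mcpServers": {
    "slicer": {
      "command": "python",
      "args": ["path/to/slicer_mcp_server.py"]
    }
  }
}
```

Renaming the sample file to `.mcp.json` is what opts you in to starting the server, which keeps the security decision in the user's hands.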

Other changes here:

Enjoy!
–Mike