Confidently incorrect posts appearing on forum

I have noticed that some confidently worded but incorrect responses have been appearing on the forum recently, often with language suggestive of LLM generation. How should we handle these as a community? When I know the answer, I try to weigh in with a different response. Sometimes, though, I don't know the answer but am quite skeptical that the response given is correct, despite confident wording that I would (in the past) have assumed came with some expertise. In some ways this is just the world we live in now, but I'm curious how people think this sort of situation should be handled here. I definitely support inviting posters to label the level of AI involvement in their posts. Sometimes I find clearly labeled AI-generated posts genuinely very useful; other times they are simply wrong, but phrased with language that suggests expertise, and those posts are actively unhelpful.

Good points @mikebind. It’s such an interesting time where we can get a lot of help from these agents but we can also be misled and lose time chasing around after the wrong ideas.

@ebrahim has some great ideas to make it easier to flag AI content that he’s probably ready to deploy. If we have some standard ways to flag AI posts it’ll help us all judge what’s helpful and what might not be.


I agree, we need to make sure that the confident tone of LLMs doesn't confuse users (and that forum content remains suitable for future LLM training).

We have been discussing this issue here:
https://discourse.slicer.org/t/new-slicer-skill-ai-tool/46243/19?u=lassoan

I will move those posts here, to have the discussion at one place (and because that thread is already very long).

Thank you, @jumbojing, and thank you also for your experiment with answering a user question solely by copying an LLM-generated answer in Heavy lag when displaying segmentations via Trame + TotalSegmentator - #2 by jumbojing.

Using LLMs to answer forum questions could be very useful: it could reduce the workload of community members answering simple questions so that they can focus on more difficult problems. However, the answers may not always be useful or accurate. In this particular case the answers were not helpful at all (and therefore confusing/misleading). In the long term, having irrelevant or unreliable answers posted on the forum by overconfident chatbots masquerading as humans would also undermine further training of LLMs on forum content.

To avoid this, when a human user posts LLM-generated content, I would suggest always starting with a clear disclaimer, such as: "Automatically-generated content using jumbojing/slicerClaw. The accuracy of the answer has not been verified and the code has not been tested." If you review the content, you can update the disclaimer accordingly; and if you fully stand behind the answer (you verified and fully agree with the content, or fixed it up, and you tested the code snippets and confirmed they work), then you may not need the disclaimer at all.

What do you all think? What rules should we introduce for using LLM-generated answers on the forum?


You're absolutely right: a disclaimer should be provided for unreviewed content!

Sorry @lassoan, I just copied the answer from slicerClaw on a whim without reviewing it. Sorry again!


- **One-Click MCP Bridge Generator:** Easily generate a `slicer_mcp_bridge.py` script from the UI to seamlessly connect stdio-based AI clients (like Claude/Cursor).
- **Built-in Knowledge Base Downloader:** Directly download and extract Slicer AI Skills (e.g., `jumbojing/slicerSkill`, Slicer Source Code, Discourse Archives) from the UI to empower your models with 3D Slicer's specific coding context.
- **Auto Skill Discovery:** The internal MCP tools will automatically search your downloaded skills so external AIs don't have to manually mount the folders.
- **Evolution Memory (Long-term):** SlicerClaw now features a permanent memory bank! AI assistants can actively dump lessons learned, preferred workflows, and Slicer API corrections into global and project-level `.md` diaries. They will "read" this evolution memory in future sessions to avoid making the same mistakes twice!
- **Dr. Verboser Mode:** Enable the `Dr. Verboser` checkbox to force the AI into a cautious, hyper-analytical persona. Before any Slicer scene modification, the AI MUST output a detailed `🩺 Dr. Verboser Analysis`, explaining its reasoning, citing specific Slicer API docs from the Knowledge Base, and evaluating potential risks.
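The "Evolution Memory" feature above could work along these lines: lessons are appended to a markdown diary file and read back into the assistant's context at the start of the next session. This is a minimal illustrative sketch, not slicerClaw's actual implementation; the file name, heading format, and function names are my own assumptions.

```python
# Sketch of an "evolution memory" pattern: append lessons learned to a
# markdown diary, then reload the diary in later sessions.
# MEMORY_FILE and the heading layout are hypothetical.
from pathlib import Path

MEMORY_FILE = Path("evolution_memory.md")  # hypothetical project-level diary

def record_lesson(topic: str, lesson: str) -> None:
    """Append a lesson learned under a markdown heading."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"## {topic}\n{lesson}\n\n")

def load_memory() -> str:
    """Return the accumulated diary text, to be prepended to a new
    session's context so past mistakes are not repeated."""
    if MEMORY_FILE.exists():
        return MEMORY_FILE.read_text(encoding="utf-8")
    return ""
```

A global memory could use the same pattern with a second file in the user's home directory, with the project-level diary taking precedence on conflicts.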

@pieper @lassoan I’ve updated GitHub - jumbojing/slicerClaw: A revolutionary, lightning-fast AI assistant natively integrated into 3D Slicer again! Please take a look and offer your comments and corrections; I’d like to apply to add it to the official extension library. To all friends using Slicer, I hope you enjoy it!


@lassoan Sorry about that. I overstepped again by copying AI answers in Grow from seed error — exceeded max number of voxels - #2 by jumbojing. I have an idea: could we use @pieper’s slicer-skill for answering questions on the forum? Wouldn’t it be great to add an AI to help answer questions?
