Slash command invocation and auto-trigger are two completely different systems. When one works and the other doesn't, the problem is almost always in the description — not the skill body. AEM (Agent Engineer Master) skills, like all Claude Code skills, rely on the description field to drive automatic activation.

TL;DR: Running /skill-name bypasses the description classifier entirely. Auto-trigger reads your description and decides whether to activate. A skill that works via slash command but not auto-trigger has a description that doesn't match the prompts you're typing. Fix the description's trigger phrases to match your actual vocabulary, and auto-trigger will work.

Why does /skill-name work when auto-trigger doesn't?

Explicit invocation with /skill-name is a direct command: Claude Code maps the slash command to the skill's folder name and loads the skill unconditionally, with no description check and no classification. Auto-trigger works the opposite way: Claude reads all loaded skill descriptions and decides whether any match your prompt. They're different code paths. Fixing one doesn't affect the other.

When you type a natural-language prompt without a slash command, Claude asks whether the prompt matches any skill's trigger condition. If your description's trigger vocabulary doesn't match the words you used, the answer is "no match" and the skill stays dormant.
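The two code paths can be sketched conceptually. This is not Claude Code's implementation — `dispatch`, `classify`, and the word-overlap heuristic are illustrative stand-ins for the real LLM-based classifier — but it shows why fixing one path never touches the other:

```python
def classify(prompt, skills):
    """Naive stand-in for the description classifier: pick the skill
    whose description shares the most words with the prompt."""
    words = set(prompt.lower().split())
    best, best_overlap = None, 0
    for name, description in skills.items():
        overlap = len(words & set(description.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best  # None when no description matches at all


def dispatch(prompt, skills):
    if prompt.startswith("/"):
        # Explicit invocation: map the slash command to the folder name
        # and load unconditionally. The description is never consulted.
        return prompt[1:].split()[0]
    # Auto-trigger: only the descriptions decide.
    return classify(prompt, skills)
```

With `skills = {"content-planner": "Invoke for content planning"}`, `dispatch("/content-planner draft", skills)` loads the skill regardless of the description, while `dispatch("help me brainstorm ideas", skills)` returns `None` because no description word overlaps the prompt.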

"Developers don't adopt AI tools because they're impressive — they adopt them because they reduce friction on tasks they repeat every day." — Marc Bara, AI product consultant (2024)

A skill that requires explicit invocation every time has half the value of one that activates automatically. The friction is exactly the wrong kind.

What specifically in the description causes auto-trigger to fail?

Three description properties reliably cause auto-trigger failure. Each one reduces the classifier's ability to match your natural-language prompts to the right skill. The root issue is always a gap between the vocabulary your description uses and the vocabulary you actually type when you want the skill to activate:

  • Trigger phrases that don't match your actual vocabulary. If your description says "Invoke for content planning" and you type "help me brainstorm ideas for next month's posts," the classifier sees no overlap. "Brainstorm" and "ideas" aren't in the description. "Content planning" isn't in the prompt. The skill stays dormant.
  • Passive rather than directive phrasing. In our testing across 650 activation trials, descriptions with directive phrasing ("INVOKE for...", "Use when the user asks to...") activated at 94% on relevant prompts. Descriptions with passive phrasing ("This skill handles...", "For use when...") activated at 77% on the same prompts (AEM activation research, 2025). The gap is 17 percentage points. On auto-trigger, that gap matters.
  • Missing example prompts. Descriptions that include examples of actual user phrases activate more reliably because the classifier can pattern-match example phrases, not just infer intent from abstract descriptions. "Invoke when users say things like 'review my draft,' 'check this article,' or 'give me feedback on this post'" activates on all three phrase variations and semantically adjacent ones.
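The vocabulary-gap failure in the first bullet can be made concrete with a naive word-overlap check. The real classifier is an LLM, so this is only an intuition pump, and the function name is ours:

```python
def shares_vocabulary(description, prompt):
    """True when any content word in the prompt also appears in the description."""
    stopwords = {"for", "the", "to", "me", "my", "a", "or", "when", "invoke", "help"}
    desc_words = set(description.lower().split()) - stopwords
    prompt_words = set(prompt.lower().split()) - stopwords
    return bool(desc_words & prompt_words)


prompt = "help me brainstorm ideas for next month's posts"

# Original description: zero shared content words, so no match.
shares_vocabulary("Invoke for content planning", prompt)  # False

# Rewritten with the user's own vocabulary: match.
shares_vocabulary(
    "Invoke when the user asks to brainstorm post ideas or plan content",
    prompt,
)  # True
```

The fix is mechanical: the description did not change what the skill does, only which words it advertises.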

How do I fix the description for auto-trigger?

Three targeted changes fix auto-trigger in most cases. All three operate on the SKILL.md description field and take effect in the next session without restarting Claude Code. None of these changes touch the skill body, so the slash command behavior stays identical while you iterate on trigger coverage:

  1. Match the description vocabulary to how you actually phrase requests: For one week, note how you phrase requests that should trigger this skill. Write down the exact words you use. Add those words and phrases to the description. In Seleznov's 650-trial study, descriptions vocabulary-matched to real user phrasing activated at 85–94%, versus 37–77% for descriptions written by guessing at likely trigger terms (Ivan Seleznov, Medium, Feb 2026).

  2. Add at least 2 example trigger phrases: Format: "Invoke when the user asks to [action1], [action2], or [action3]." Examples make the classifier's job concrete rather than inferential. They also expand the surface area of trigger coverage without making the description vague.

  3. Switch to directive phrasing: Replace passive constructions with imperative ones:

    • "For use when..." → "INVOKE when..."
    • "This skill handles..." → "INVOKE for..."
    • "Useful when..." → "Use this skill when..."

Capital INVOKE is not required but signals directive intent clearly.
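Putting all three fixes together, a SKILL.md frontmatter rewrite might look like this. The skill name and phrasing are illustrative, not from any real library:

```yaml
# Before: passive, abstract, no example phrases -- weak auto-trigger.
---
name: content-planner
description: This skill handles content planning for social media.
---

# After: directive verb, user vocabulary, concrete example phrases.
---
name: content-planner
description: >-
  INVOKE when the user asks to brainstorm post ideas, plan content,
  or build a posting schedule. Invoke when users say things like
  "brainstorm ideas for next month's posts", "help me plan my
  content calendar", or "what should I post this week?"
---
```

Note that the skill body below the frontmatter is untouched, so `/content-planner` behaves identically before and after the rewrite.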

For a complete treatment of trigger phrase design, see How Do I Write Trigger Phrases That Make My Skill Activate Reliably.

How do I test auto-trigger without using the slash command?

Start a fresh Claude Code session with no prior context about the skill. Do not type the skill name or use a slash command. Type a natural-language version of the request the skill is designed to handle. If the skill activates, auto-trigger is working. If it doesn't activate, the description needs adjustment.

The test is only valid in a fresh session with no prior skill-related context. In a session where you've already been discussing the skill, Claude has contextual clues that help it find the skill even with a weak description. A fresh session has no such hints — it's the true test of whether the description alone is sufficient.

Two-pass testing method: start a fresh session and type five natural-language prompts that should trigger the skill (no slash commands). Then open a second fresh session and type five prompts that should NOT trigger it. Passing both passes is the bar for production-ready auto-trigger. Claude Code's official troubleshooting guidance confirms that rephrasing requests to match the description more closely is the primary recovery step when auto-activation fails (Claude Code Docs, code.claude.com/docs/en/skills). The baseline matters: across 50 structured fresh-session tests, skills with unoptimized descriptions activated only 20% of the time (10/50); optimized descriptions alone raised that to 50% before any eval hook was added (Scott Spence, scottspence.com, December 2025).
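The two-pass method can be wrapped in a small harness. How you detect activation is up to you (for example, by inspecting the session transcript), so the harness takes a `did_activate(prompt) -> bool` callable you supply; everything here is a sketch, not a Claude Code API:

```python
def two_pass(did_activate, should_trigger, should_not_trigger):
    """Run both passes and report failures.

    did_activate: callable(prompt) -> bool; run each prompt in its own
    fresh session so prior context cannot help a weak description.
    Returns (missed_activations, false_activations); both empty == pass.
    """
    missed = [p for p in should_trigger if not did_activate(p)]
    false_fires = [p for p in should_not_trigger if did_activate(p)]
    return missed, false_fires
```

For example, with a stand-in detector `lambda p: "draft" in p`, the positives `["review my draft", "check this draft"]` and negative `["what time is it"]` both pass; swap in real session checks to audit an actual skill.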

See How Do I Troubleshoot Skill Description Activation Issues Systematically for the full methodology.

What if I fix the description and still can't get auto-trigger to work?

Three causes remain after fixing the description: competing skills with overlapping vocabulary, trigger phrases too ambiguous to resolve uniquely, and descriptions that are silently truncated in the skill listing. These are structural problems in the skill library, not description-wording problems, and each requires a different diagnostic step.

  • Competing skills with overlapping descriptions: If another skill has a description that matches the same prompts as yours, Claude may activate the other skill instead. Run /skills and read every description. Look for vocabulary overlap. Community reports confirm that overlapping trigger vocabulary is the leading cause of intermittent activation failures after description fixes have been applied (GitHub community discussion #182117, 2026). A 25-skill library audit found that skills where descriptions "summarize behavior instead of listing conditions" scored 0% on positive activation cases; the same library's well-specified skills passed 50 of 64 eval tests (78%) (Lathesh Karkera, Medium, March 2026).
  • Trigger phrases that are inherently ambiguous: Some tasks share vocabulary with too many other requests to be reliably auto-triggered. "Help me write something" could be a dozen different skills. The solution is to add enough context to the description that the trigger becomes unambiguous: "Invoke when the user asks to write, draft, or create specifically a LinkedIn post, blog post, or email newsletter."
  • The skill loaded but description was truncated: Verify via /skills that your description shows the full text you wrote. If the description is cut off, trigger phrases at the end of the description are invisible to the classifier. Per the SKILL.md format specification, the description field has a 1,024-character maximum; the combined description and when_to_use fields are capped at 1,536 characters in the skill listing (Anthropic Skills spec, agentskills.io standard).
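The truncation check in the last bullet can be automated with a quick length audit. The parsing here is deliberately naive (single-line frontmatter values only; multi-line `>-` blocks need a real YAML parser), and the 1,024 and 1,536 limits come from the spec cited above:

```python
import re

DESC_LIMIT = 1024      # description field maximum, per the spec above
COMBINED_LIMIT = 1536  # description + when_to_use combined maximum


def audit_lengths(skill_md_text):
    """Return (ok, message) for a SKILL.md's description length budget."""
    match = re.search(r"^---\n(.*?)\n---", skill_md_text, re.DOTALL)
    if not match:
        return False, "no YAML frontmatter found"
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    desc = fields.get("description", "")
    combined = len(desc) + len(fields.get("when_to_use", ""))
    if len(desc) > DESC_LIMIT:
        return False, f"description is {len(desc)} chars (limit {DESC_LIMIT})"
    if combined > COMBINED_LIMIT:
        return False, f"combined fields are {combined} chars (limit {COMBINED_LIMIT})"
    return True, "within limits"
```

Running this over every SKILL.md in your skill library catches over-limit descriptions before the skill listing silently truncates them.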

FAQ

After fixing the description, the most common residual cause of missed activations is competing skill vocabulary. If two skills share overlapping trigger terms, the classifier may not reliably resolve which to activate. The questions below address timing regressions, disambiguation strategies, and when to stop relying on auto-trigger entirely and invoke explicitly.

Q: My skill worked on auto-trigger last week and now I have to use the slash command. What changed? Most likely a code formatter rewrote your SKILL.md frontmatter and broke the description (multi-line wrapping is the typical culprit), or a new skill was installed that competes for the same trigger vocabulary. Check the description in /skills against your SKILL.md file. Then check whether any new skills were added since the skill stopped auto-triggering.

Q: I use the slash command constantly. Does it actually matter whether auto-trigger works? If you're always typing /skill-name anyway, you're using the skill as an explicit tool, not an intelligent assistant. That's a valid choice, but it means you're doing cognitive work that the description was supposed to do for you. Auto-trigger is only worth investing in if you want the skill to activate when you phrase things naturally, without remembering to type the command.

Q: Can I disable auto-trigger for a skill so it only runs via slash command? Not via a SKILL.md configuration flag. The closest approach is writing the description so narrowly that it never matches natural-language prompts. Include a note in the description like "Only invoke via /skill-name, not automatically." This signals intent to the classifier but doesn't guarantee zero auto-activation.

Q: How do I tell which slash command to use if I have a long skill name? The slash command maps to the skill folder name, not the name frontmatter field. If your skill folder is reviewing-marketing-copy, the slash command is /reviewing-marketing-copy. The folder name is also what appears in the /skills output alongside the description.

Q: Should I rely on auto-trigger or slash commands for production workflows? For any workflow where skipping the skill would cause a quality problem, explicit slash command invocation is safer. In our AEM production skill library, well-optimized descriptions still miss 5–15% of intended activations across real developer sessions (AEM production testing, 2025-2026). For casual productivity tasks, auto-trigger's convenience outweighs that miss rate. For high-stakes workflows — publishing, code deployment, client-facing output — invoke explicitly.

Last updated: 2026-04-21