---
title: "Why Isn't My Claude Code Skill Working?"
description: "Claude Code skills fail for five reasons: wrong location, passive description, malformed YAML, reference file errors, or vague instructions. Fix in order."
pubDate: "2026-04-14"
category: skills
tags: ["claude-code-skills", "troubleshooting", "skill-engineering"]
cluster: 23
cluster_name: "Troubleshooting & Debugging"
difficulty: beginner
source_question: "Why isn't my Claude Code skill working?"
source_ref: "23.Beginner.1"
word_count: 1450
status: draft
reviewed: false
schema_types: ["Article", "FAQPage"]
---

# Why Isn't My Claude Code Skill Working?

Quick answer: Claude Code skills stop working for five specific reasons: the file is in the wrong location, the description is passive instead of imperative, the YAML frontmatter is malformed, reference files have path errors or circular dependencies, or the instructions are too vague. Work through these in order; the first one that's wrong is the actual problem.


A skill that isn't working is either not triggering, triggering on the wrong prompts, or triggering correctly but producing wrong output. These are different problems with different fixes. Diagnosing the wrong one wastes time.

Start with the simplest check and work down. The problem is almost always at the first layer that fails, not at all of them simultaneously.

## What Are the Most Likely Reasons My Skill Isn't Working?

Five causes account for 95% of skill failures: passive description, wrong file location, malformed YAML frontmatter, reference file path errors, and vague instructions (AEM diagnostic audits, 2026). Work through them top to bottom; the first one that matches your symptom is the problem. In order of how often they appear in AEM audits:

  1. Passive description. The description summarizes capability instead of stating a trigger condition. Claude's classifier passes over it. The skill doesn't auto-activate. In practice, passive descriptions are the most frequent structural error AEM encounters in submitted skills.
  2. Wrong file location. The SKILL.md file is in a folder Claude Code doesn't scan for skills, or in a nested subfolder that breaks the expected path structure.
  3. Malformed YAML frontmatter. A missing closing ---, an unquoted string value, or a multi-line description field. YAML errors prevent the file from loading correctly. Most load failures AEM has diagnosed trace to one of these three YAML formatting issues.
  4. Reference file errors. Incorrect paths, files that are too large, or reference files pointing to other reference files (circular dependency). The skill activates but runs with incomplete context.
  5. Vague or incomplete instructions. The skill loads and triggers correctly, but Claude interprets ambiguous steps inconsistently. Output varies across sessions. OpenAI structured output research shows compliance improves from approximately 35% with prompt-only instructions to near-100% with explicit output contracts (Source: OpenAI, leewayhertz.com/structured-outputs-in-llms).
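The top-to-bottom order above can be sketched as a simple triage routine. This is a hypothetical helper, not a real tool: the dictionary keys stand in for the manual checks each cause requires, and the "starts with Use this skill when" test is a rough proxy for an imperative description.

```python
def triage(skill):
    """Return the first failing check, in the article's order of frequency.

    `skill` is a plain dict standing in for the results of the manual
    checks; every key here is a hypothetical name, not a Claude Code API.
    """
    checks = [
        ("passive description",
         skill.get("description", "").startswith("Use this skill when")),
        ("wrong file location", skill.get("in_skills_dir", False)),
        ("malformed YAML frontmatter", skill.get("yaml_valid", False)),
        ("reference file errors", skill.get("references_resolve", False)),
        ("vague instructions", skill.get("has_output_contract", False)),
    ]
    for cause, passed in checks:
        if not passed:
            return cause  # first failing layer is the likely culprit
    return "all five checks pass"
```

The point of the sketch is the early return: stop at the first failure instead of fixing all five at once.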

## How Do I Check If Claude Can See My Skill?

Run /skills in your Claude Code session. This command lists every skill Claude can currently see, along with its description. If your skill isn't in the output, the problem is file location or YAML parsing — Claude never loaded it. If it appears but the description looks wrong or truncated, the problem is YAML formatting inside the file. Both are fixable in under two minutes.

If your skill is not in the /skills output:

- Check the file path: the skill must be in the directory Claude Code is configured to scan. Default location is .claude/skills/ relative to the project root.
- Check the filename: it must end in .md and not be inside a nested subfolder deeper than the first level of the skills directory.
- Check the YAML frontmatter: open the file and verify the frontmatter block opens and closes correctly with --- on its own line.

If your skill appears in the /skills output but the description text looks wrong or truncated, the YAML has a formatting issue. Common causes: multi-line description that Prettier reformatted, missing closing quote on the description value, or the description field name is misspelled.
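These frontmatter checks are mechanical enough to script. A minimal sketch, not an official Claude Code tool, that flags the failure modes described above (missing fences, missing or misspelled description field, missing closing quote, wrapped description):

```python
def check_frontmatter(text):
    """Return a list of problems found in a SKILL.md frontmatter block."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        problems.append("missing opening --- on line 1")
        return problems
    try:
        close = lines[1:].index("---") + 1  # closing fence
    except ValueError:
        problems.append("missing closing ---")
        return problems
    block = lines[1:close]
    desc = [l for l in block if l.startswith("description:")]
    if not desc:
        problems.append("no description field (check spelling)")
    else:
        value = desc[0].split(":", 1)[1].strip()
        if value.startswith('"') and not value.endswith('"'):
            problems.append("description missing closing quote")
        # a reformatted description shows up as indented continuation lines
        idx = block.index(desc[0])
        if idx + 1 < len(block) and block[idx + 1].startswith(("  ", "\t")):
            problems.append("description wraps onto multiple lines")
    return problems
```

Run it over the raw file contents before reaching for /skills again; it catches the Prettier-reflow case that is hard to spot by eye.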

## My Skill Is Visible but Won't Activate: What Do I Check First?

Check the description format first: it is the cause in the majority of non-activating skills. Claude's skill activation relies on a meta-tool classifier that matches incoming prompts against skill descriptions, and that classifier is calibrated for imperative trigger conditions. If a description summarizes capability rather than stating a trigger condition, the classifier consistently passes over it. A single phrasing change from "A skill for" to "Use this skill when" can move a skill from 77% to 100% activation.

Two description types produce very different results:

```yaml
# Passive — low activation rate
description: "A skill for writing technical blog posts with SEO optimization and developer focus."

# Imperative — reliable activation
description: "Use this skill when the user asks you to write, draft, or outline a technical blog post. Invoke automatically for developer-facing article content."
```

AEM's activation testing found that imperative descriptions achieve 100% activation on matched prompts. Passive descriptions achieve 77% (AEM activation testing, 650 trials, 2026). Changing "A skill for" to "Use this skill when" makes a measurable difference.
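The difference is mechanical enough to lint for before you ever run an activation test. A rough sketch: the opener lists below are assumptions drawn from the examples above, not Anthropic's actual classifier logic.

```python
PASSIVE_OPENERS = ("a skill for", "this skill", "helps with", "a tool for")
IMPERATIVE_OPENERS = ("use this skill when", "invoke when", "use when")

def description_style(desc):
    """Classify a skill description's opening as imperative, passive, or unclear."""
    d = desc.strip().strip('"').lower()
    if d.startswith(IMPERATIVE_OPENERS):
        return "imperative"
    if d.startswith(PASSIVE_OPENERS):
        return "passive"
    return "unclear"
```

Anything classified "passive" or "unclear" is worth rewriting to start with an explicit trigger condition before debugging further.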

If the description is already imperative and the skill still doesn't activate, check two more things:

- Is the description on a single line? Multi-line descriptions in YAML are parsed incorrectly. The classifier sees a broken trigger condition.
- Is the total character count of all your skill descriptions under 15,000? Exceeding this budget causes descriptions to get silently truncated.

> "Probably the most important thing to get great results out of Claude Code: give Claude a way to verify its work. If Claude has that feedback loop, it will 2-3x the quality of the final result." — Boris Cherny, Creator of Claude Code, Anthropic (January 2026, https://x.com/bcherny/status/2007179861115511237)

For the full activation diagnostic, see Why Your Claude Code Skill Isn't Triggering (and How to Fix It).

## My Skill Activates but Produces Wrong Output: What Now?

This is an instruction problem, not a loading problem. The skill is being found and loaded correctly — activation is working. The steps or rules inside the skill are not constraining the output precisely enough, so Claude fills in the gaps with whatever seems appropriate. The fix is adding specificity to the instructions, not touching the description or file location.

Three common instruction failures produce wrong output:

Steps are too vague. "Write the content based on what the user requested" gives Claude complete latitude. It fills in gaps with whatever seems appropriate. The output varies because the gaps vary. Replace vague steps with specific constraints:

- Format requirements (e.g., "output as markdown with H2 sections")
- Field names (e.g., "always include a slug field in the frontmatter")
- Length targets (e.g., "each section must be 100–150 words")
- Explicit defaults (e.g., "if no tone is specified, use neutral-professional")

Missing output contract. Without a defined output contract, Claude decides what the output should look like each time. Add a contract section to your SKILL.md that specifies exactly what the skill produces and explicitly what it does not produce:

```markdown
## Output Contract
**Produces:** A markdown blog post with H1 title, 3-5 H2 sections, and a FAQ block.
**Does NOT produce:** Published files (always draft status), social media copy, or email versions.
```
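A contract like this can be checked mechanically, which is exactly the verification feedback loop the quote above recommends. A sketch matched to the example contract; the specific H1/H2/FAQ rules are assumptions tied to that example, not a general standard:

```python
import re

def check_contract(post):
    """Check a generated markdown post against the example contract above."""
    problems = []
    if not re.search(r"^# .+", post, flags=re.MULTILINE):
        problems.append("missing H1 title")
    h2_count = len(re.findall(r"^## .+", post, flags=re.MULTILINE))
    if not 3 <= h2_count <= 5:
        problems.append(f"expected 3-5 H2 sections, found {h2_count}")
    if not re.search(r"^## (FAQ|Frequently Asked Questions)", post,
                     flags=re.MULTILINE):
        problems.append("missing FAQ block")
    return problems
```

Each deviation the checker reports points at the specific instruction that needs a tighter constraint.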

Testing context contamination. If you built and tested the skill yourself, your session filled in gaps the instructions don't cover. A teammate using the skill cold, without your context, experiences those gaps as wrong output. Test the skill in a fresh session with a prompt you didn't craft specifically for it.

## How Do I Verify That My Fix Worked?

Use the three-check protocol after any change: confirm the skill is visible, confirm it auto-activates in a fresh session, and confirm the output matches the output contract without corrective prompting. Run all three checks in order after every fix. Skipping to Check 3 without passing Check 1 wastes time — a loading failure will show up as wrong output and misdirect you.

Check 1: File is visible. Run /skills and confirm your skill appears with the correct, full description. If the description looks different from what's in the file, there's a YAML parsing issue.

Check 2: Skill auto-activates. Open a fresh Claude Code session. Type a natural-language description of the task that should trigger the skill, not the slash command. If the skill activates without you invoking it manually, the description is working.

Check 3: Output is correct. In the same fresh session, let the skill complete without any corrective prompting from you. Review the output against the output contract. If something deviates, identify the specific step that produced the deviation and add a constraint.

Make one change at a time. If you update the description and the reference file paths simultaneously and the skill starts working, you won't know which fix worked. If the skill breaks again later, you won't know what to change.

What this guide does not cover: This diagnostic addresses the five most common skill failure modes. It does not cover failures caused by Claude Code version upgrades or breaking API changes, tool-tier permission errors requiring harness configuration changes, or multi-agent orchestration conflicts where a parent agent suppresses skill activation. Those failure patterns require separate investigation outside the scope of this guide.

## Frequently Asked Questions

### Why does Claude say "No skills found" when I run /skills?

Either no SKILL.md files exist in the skills directory Claude is scanning, or the files exist but have malformed YAML that prevents loading. Check that your skills directory is at .claude/skills/ (or wherever your project's Claude Code configuration points), and that each SKILL.md file has valid frontmatter: a --- opening line, all string values double-quoted, and a --- closing line.

### My skill worked yesterday but doesn't work today: what changed?

Three changes break a working skill. Check all three before assuming the skill itself changed:

- A code formatter ran and introduced line breaks in your description field
- You added a new skill whose description overlaps yours and is winning the classifier competition
- You exceeded the total system prompt budget for skill descriptions and your skill's description is being silently truncated

### Why does my skill work when I invoke it with /skill-name but not automatically?

Manual invocation bypasses the classifier. Auto-triggering requires the classifier to match your prompt against your description. A skill that works via slash command but not via auto-trigger has a description problem. Fix the description before looking at anything else. See How Do I Write a Good Skill Description? for the correct format.

### Claude seems to ignore half the rules in my SKILL.md: why?

Rules stated late in long SKILL.md files receive lower effective weight than rules stated early. In a 600-line SKILL.md, rules at line 450 are violated more often than rules at line 50. Liu et al. found a 30%+ accuracy drop for information at mid-to-late context positions in long inputs (Source: Liu et al., "Lost in the Middle: How Language Models Use Long Contexts," Stanford University, 2023, arxiv.org/abs/2307.03172). Move the most critical rules to the first 100 lines of the file, and move domain knowledge and reference data to separate reference files. For the full list of structural mistakes that cause this, see What Are the Most Common Mistakes When Building Claude Code Skills?.

### What's the fastest way to debug a skill that's producing inconsistent output?

Add one diagnostic step to the skill: instruct Claude to state which reference files it loaded and which step it's currently on before producing output. Run the skill 3 times on the same prompt. If the declared steps or loaded files vary across runs, the inconsistency is in loading. If they're consistent but the output still varies, the inconsistency is in instruction interpretation, a vagueness problem.
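The comparison across runs can itself be scripted over saved transcripts. A sketch that assumes each run declares lines like `Loaded: refs/style.md` and `Step: outline`; those line formats follow the diagnostic convention suggested above and are not built-in Claude Code output:

```python
def compare_runs(transcripts):
    """Return 'loading' if declared files/steps differ across runs,
    otherwise 'interpretation' (a vagueness problem in the instructions)."""
    declared = []
    for t in transcripts:
        declared.append(frozenset(
            line for line in t.splitlines()
            if line.startswith(("Loaded:", "Step:"))
        ))
    return "loading" if len(set(declared)) > 1 else "interpretation"
```

"loading" sends you back to reference file paths; "interpretation" sends you to the vague-steps fixes above.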

### Can having too many skills in one project cause a single skill to stop working?

Yes. The total system prompt budget for skill descriptions is approximately 15,000 characters. Exceeding it truncates descriptions in load order. Skills installed early in the session keep their full descriptions. Skills loaded later get truncated, receive incomplete trigger conditions, and activate unreliably. If your skill library has grown and a previously-working skill has become inconsistent, check total description character count first.


Last updated: 2026-04-14