---
title: "What Are the Most Common Mistakes When Building Claude Code Skills?"
description: "The most common Claude Code skill mistakes: passive descriptions, no output contract, and building before designing. Each has a specific fix under 10 minutes."
pubDate: "2026-04-14"
category: skills
tags: ["claude-code-skills", "anti-patterns", "skill-engineering"]
cluster: 22
cluster_name: "Anti-Patterns & Failure Modes"
difficulty: beginner
source_question: "What are the most common mistakes when building Claude Code skills?"
source_ref: "22.Beginner.1"
word_count: 1560
status: draft
reviewed: false
schema_types: ["Article", "FAQPage"]
---
# What Are the Most Common Mistakes When Building Claude Code Skills?

**Quick answer:** The most common skill mistakes are:
- A passive description that prevents auto-triggering
- Domain knowledge embedded in SKILL.md instead of reference files
- No output contract
- Building before writing the description
- Including README or CHANGELOG files in the skill folder
Each one has a specific fix and most take under 10 minutes to resolve.
Building a skill that doesn't work is mostly a process failure, not a design failure. The same mistakes appear across most skills AEM has audited. None of them are subtle. All of them are fixable in one sitting.
Here is the ranked list, starting with the mistake that causes the most damage.
## What Is the Most Damaging Mistake?
A passive description. This is the mistake that makes a skill functionally invisible: it tells Claude what the skill does instead of when to use it.
Claude Code's skill activation system works through a meta-tool classifier that compares incoming prompts against skill descriptions. The classifier is calibrated for trigger conditions, instructions that tell Claude when to use a skill, and it responds poorly to capability descriptions.
The measured performance gap: imperative descriptions achieve 100% activation on matched prompts; passive descriptions achieve 77% on the same prompts (AEM activation testing, 650 trials, 2026). That 23-percentage-point gap means roughly 1 in 4 relevant prompts ignores the skill entirely.
The difference between passive and imperative:

```yaml
# Passive — 77% activation
description: "A skill for writing technical blog posts with SEO optimization."

# Imperative — 100% activation
description: "Use this skill when the user asks you to write, draft, or outline a technical blog post. Invoke automatically for developer-facing article content."
```
If your skill isn't triggering reliably, check the description first. Every time. Adding a negative trigger ("Do NOT use for...") to an imperative description measurably reduces multi-skill disambiguation errors in practice.
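As a sketch, a hypothetical description that combines an imperative trigger with a negative trigger might look like this (the skill and wording are illustrative, not a prescribed template):

```yaml
# Imperative trigger plus a negative trigger: hypothetical example
description: "Use this skill when the user asks you to write, draft, or outline a technical blog post. Do NOT use for social media copy or release notes."
```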
For the full diagnosis and fix process, see Why Your Claude Code Skill Isn't Triggering (and How to Fix It).
## What Mistakes Break a Skill Before It's Used?
Four structural mistakes prevent a skill from working at all, regardless of instruction quality: no description means no auto-trigger, wrong build order means wrong scope, no output contract means unpredictable output, and circular reference files cause silent context loss mid-execution. Each is a structural fault that fails before a single instruction is read.
### What Happens If a Skill Has No Description Field?
A skill with no description field cannot auto-trigger. The meta-tool classifier needs a description to match against incoming prompts. Without one, the classifier has nothing to score. The skill only runs when invoked as an explicit slash command — it never activates automatically. For skills intended to trigger passively on relevant prompts, the description field is not optional.
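A minimal sketch of the frontmatter that makes auto-triggering possible (the name value is hypothetical; the field layout follows the common SKILL.md frontmatter convention):

```yaml
---
name: technical-docs-writer
description: "Use this skill when the user asks you to write or update developer-facing documentation."
---
```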
### Why Should You Write the Description Before the Steps?
Writing the description first forces you to define scope before you build. When engineers write process steps first, they produce steps that cover what they built and a description that describes what they built; neither is calibrated to the trigger condition that should actually activate the skill. The result is a skill whose description and behavior are misaligned, and that misalignment shows up as low activation precision or incorrect scope on matched prompts.
Write the description first. The description defines scope. If you can't write a clear trigger condition in under 200 characters, the skill's boundaries aren't defined yet. Clarify the scope, then build the steps. Every AEM commission starts with a description draft, before any other part of the SKILL.md file.
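For example, a description draft written before any steps exist might read like this (a hypothetical skill, shown only to illustrate a sub-200-character trigger condition):

```yaml
description: "Use this skill when the user asks to review a pull request for security issues. Do NOT use for style-only reviews."
```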
### What Does a Missing Output Contract Break?
A missing output contract breaks reproducibility. An output contract defines what the skill produces and, critically, what it does not produce. Without one, Claude improvises the output format on every execution, and two identical prompts produce structurally different outputs. This is not a model inconsistency problem; it is a missing specification problem. An output contract that states exactly what fields, formats, or structures the skill produces makes the output reproducible. OpenAI structured output research shows compliance improves from approximately 35% with prompt-only instructions to near-100% with explicit output contracts (Source: OpenAI, leewayhertz.com/structured-outputs-in-llms).
A minimal output contract:
```markdown
## Output Contract

**Produces:**
- A markdown blog post with an H1 title, 3-5 H2 sections, and a summary paragraph
- Frontmatter block with title, description, and tags fields

**Does NOT produce:**
- Published files (always draft status)
- Social media copy based on the post content — that requires the social skill
```

Two short sections. The skill now has a defined scope boundary.
### What Do Circular Reference Files Break?
Circular reference files cause silent context loss: the skill loads and runs, but with incomplete instructions. Reference files in a skill folder may only be referenced from SKILL.md; they cannot reference each other. When a chain exists (SKILL.md → ref-a.md → ref-b.md), Claude follows the chain, encounters the cycle, and stops loading. Behavior appears normal; the missing context is invisible until specific rules go unenforced. The one-level-deep rule is absolute: SKILL.md points to reference files; reference files point to nothing. In practice, most circular reference errors AEM has encountered appear in skills with three or more reference files.
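The allowed and forbidden reference shapes, sketched with hypothetical file names:

```text
# Forbidden: chained references, context loss
SKILL.md → references/style-guide.md → references/glossary.md

# Allowed: one level deep, every file referenced from SKILL.md
SKILL.md → references/style-guide.md
SKILL.md → references/glossary.md
```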
## What Mistakes Degrade Quality Without Breaking the Skill Entirely?
Five mistakes produce a skill that runs but delivers inconsistent or low-quality output. These are harder to diagnose than structural failures because the skill appears to work. The degradation shows up as format variation, ignored rules, inconsistent naming, or version-dependent behavior — all symptoms of specification gaps rather than instruction errors. Each has a targeted fix that does not require rebuilding the skill from scratch.
**Embedding domain knowledge in SKILL.md.** Domain knowledge belongs in reference files. SKILL.md is a process file: it contains steps, rules, and output contracts. When domain knowledge gets embedded directly in SKILL.md, including:
- Style guides
- Technical specifications
- Data dictionaries
- Approved examples

the file grows past 500 lines. In long SKILL.md files, Claude's attention distributes unevenly: instructions in the final third of the file receive lower effective weight than instructions in the first third, so rules stated at line 400 get violated more often than rules stated at line 40. Research confirms this is a structural attention effect: Liu et al. found a 30%+ accuracy drop for information at mid-to-late context positions in long inputs (Source: Liu et al., "Lost in the Middle: How Language Models Use Long Contexts," Stanford University, 2023, arxiv.org/abs/2307.03172).
**Vague skill names.** Filenames like `helper.md`, `utils.md`, or `tools.md` hurt both discoverability and slash-command usability. If a teammate opens your skills library and sees `/helper`, they don't know when to use it. Names should describe the specific task: `technical-docs-writer.md`, `linkedin-post-generator.md`, `code-reviewer.md`. The name won't fix a bad description, but a bad name makes the skill harder to use correctly.
**Including README or CHANGELOG files in the skill folder.** In some Claude Code configurations, all markdown files in the skill directory load into the skill's context. A README.md explaining how to set up the skill, or a CHANGELOG.md documenting version history, gets included in the system prompt as if it were part of the skill. This adds tokens, dilutes focus, and occasionally introduces conflicting instructions. Skill folders should contain: SKILL.md, a references/ subfolder, and an assets/ subfolder if needed.
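A correctly scoped skill folder, as a sketch (the skill and file names are hypothetical):

```text
technical-docs-writer/
├── SKILL.md            # process steps, rules, output contract
├── references/
│   ├── style-guide.md  # domain knowledge, loaded on demand
│   └── glossary.md
└── assets/
    └── template.md     # only if needed
```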
**Offering too many output options without a default.** A skill step that says "output in JSON, YAML, or markdown depending on user preference" without a stated default produces different formats across sessions. When no preference is stated, Claude picks one. It doesn't always pick the same one. State a default: "Output in JSON. If the user explicitly requests markdown, use markdown instead."
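A sketch of a skill step with a stated default (the step number and heading are hypothetical):

```markdown
## Step 4: Format the output
Output in JSON. If the user explicitly requests markdown, use markdown instead.
```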
**Time-sensitive conditionals.** Instructions like "before Claude 4.5, use method A; after Claude 4.5, use method B" age poorly. Claude does not have reliable knowledge of its own version within a session. These conditionals get interpreted inconsistently and sometimes produce the wrong behavior for the current model. Remove version gates and write instructions that work for the current model.
## How Do I Check an Existing Skill for These Mistakes?
Run this five-point audit on any skill that's performing inconsistently. The audit covers description format, file structure, output contract presence, reference file depth, and SKILL.md length. Each check is binary: it either passes or flags a specific fix.
- Description check: Is the description an imperative trigger condition starting with "Use this skill when"? If not, rewrite it.
- File structure check: Does the skill folder contain only SKILL.md and the references/ and assets/ subfolders? If README.md or CHANGELOG.md are present, move them out.
- Output contract check: Does SKILL.md have an explicit "Output Contract" or "What This Skill Produces" section? If not, add one.
- Reference file depth check: Do any reference files contain links to other reference files? If yes, flatten the structure.
- SKILL.md length check: Is SKILL.md over 500 lines? If yes, identify domain knowledge sections and move them to reference files.
A skill that passes all five checks is structurally sound. Remaining performance issues are instruction quality problems, solvable with targeted step refinement.
> "The best skills fit cleanly into one of these. Skills that blur multiple categories tend to confuse both the agent and the user." — Tort Mario, Engineer, Anthropic (April 2026, https://medium.com/@tort_mario/skills-for-claude-code-the-ultimate-guide-from-an-anthropic-engineer-bcd66faaa2d6)
## Frequently Asked Questions
### What's the most common mistake that even experienced skill engineers make?
Missing negative triggers in descriptions. Experienced engineers write imperative descriptions that achieve 100% activation in isolation. When they add competing skills to the same project, activation drops, because the description doesn't tell the classifier what the skill is NOT for. Adding a "Do NOT use for..." line to every description is a habit that takes a week to form and prevents a whole category of disambiguation failures.
### Is it worse to have no output contract or a vague output contract?
A vague output contract is worse. No output contract is an obvious gap: Claude improvises freely and the inconsistency is visible. A vague output contract gives Claude the appearance of constraints without enforcing them. "Output clear, well-structured content" sounds like a contract. It isn't. Claude interprets "clear" and "well-structured" differently across sessions. Specific output contracts name formats, fields, and structures.
### How do I know if my SKILL.md is too long?
The 500-line threshold is a guideline, not a hard limit. The real signal is failure mode: if specific rules and constraints stated in the file are being ignored during execution, and those rules appear late in the file, the file is too long. Move domain knowledge sections to reference files until the core SKILL.md stays under 300 lines. Test whether compliance with the moved rules improves.
### My skill folder has a README: should I always remove it?
Remove it from the skill folder. If you need documentation about how to install or use the skill, put it in the project's main README or in a separate documentation directory outside the skill folder. The skill folder's contents affect what Claude loads into context. Documentation that's useful to humans but not to Claude doesn't belong there.
### Can I have both SKILL.md and AGENTS.md in the same skill folder?
AGENTS.md is a different type of file: it configures agent behavior and is processed differently from SKILL.md. If your project uses both, keep them in the appropriate locations. An AGENTS.md in a skill folder may be processed as skill context, which is not its intended role. Check your project's Claude Code configuration for how each file type is processed before combining them in a single directory.
For the full diagnostic framework when a skill stops working, see Why Your Claude Code Skill Isn't Triggering (and How to Fix It). For the correct structure of a SKILL.md file, see What Goes in a SKILL.md File?.
Last updated: 2026-04-14