title: "How Do I Write a Good Skill Description?" description: "A good Claude Code skill description is an imperative trigger condition under 1,024 characters on one line. Exact format, examples, and common mistakes." pubDate: "2026-04-14" category: skills tags: ["claude-code-skills", "description-field", "skill-engineering"] cluster: 6 cluster_name: "The Description Field" difficulty: beginner source_question: "How do I write a good skill description?" source_ref: "6.Beginner.2" word_count: 1480 status: draft reviewed: false schema_types: ["Article", "FAQPage"]
How Do I Write a Good Skill Description?
Quick answer: A good Claude Code skill description is an imperative trigger condition, not a capability summary. It starts with "Use this skill when..." and specifies exactly which prompts should activate it. Keep it under 1,024 characters, on a single line in the YAML frontmatter. Include negative triggers if similar skills coexist in the same project.
The description field is where most skill engineering goes wrong. It looks like a minor detail, one line of frontmatter. It is the single highest-leverage element in your entire SKILL.md file. This guide is based on AEM's production skill library and activation testing across 650 trials.
Get it wrong and the skill sits idle while Claude improvises. Get it right and the skill activates correctly every time, without a slash command.
What Is a Skill Description Actually For?
It is a trigger condition for Claude's meta-tool classifier. When you send a prompt to Claude Code, the system runs a classifier that compares your prompt against all available skill descriptions. If a description matches the prompt's intent, that skill runs.
The description is not documentation. It is not a summary of the skill's capabilities. It is a machine-readable trigger condition that the classifier matches against incoming prompts.
This distinction matters because the classifier is tuned for one specific pattern: instruction-like text that tells Claude when to act. "When the user asks you to X" is an instruction. "This skill handles X" is documentation. The classifier responds to the first pattern and weights the second one lower.
What Format Makes a Description Work?
Lead with "Use this skill when" and name the exact trigger scenarios. That is the core pattern, and it is the only format the classifier reliably responds to. All other formats — capability summaries, passive descriptions, use-case lists — score 15–23% lower in activation tests on matched prompts (AEM activation testing, 2026). Start every description with this phrase.
Here is what it looks like applied:
# Bad — capability description
description: "A blog writing skill for technical content. Covers structure, SEO, and developer-facing articles."
# Good — trigger condition
description: "Use this skill when the user asks you to write, draft, or outline a technical blog post. Invoke automatically for any request involving developer-facing article content."
The good version contains two required elements:
- An explicit trigger phrase ("Use this skill when the user asks you to..."), this is the pattern the classifier recognizes
- Specific trigger scenarios (write, draft, outline / technical blog post / developer-facing article), specificity is what separates your skill from a competing skill with broader coverage
AEM's activation testing across 650 trials found that descriptions using imperative trigger phrases achieved 100% activation on matched prompts. Descriptions summarizing capabilities achieved 77% activation on the same prompts (AEM activation testing, 2026). The 23% gap is not noise. It is a systematic failure caused by the wrong description format.
"The description is not a summary — it's a set of conditions for when the skill should be triggered." — Tort Mario, Engineer, Anthropic (April 2026, https://medium.com/@tort_mario/skills-for-claude-code-the-ultimate-guide-from-an-anthropic-engineer-bcd66faaa2d6)
How Do I Write Specific Trigger Conditions?
Name the exact action, the exact content type, and the exact context. Three specificity levels, all three needed. A description that names only the action ("write") competes with every writing skill in the library. Adding content type narrows the field. Adding context — audience, format, or domain — gives the classifier enough signal to pick your skill over a competing one.
Compare these descriptions for the same skill:
# Too generic — competes with everything
description: "Use this skill for writing tasks."
# Better — names the action and content type
description: "Use this skill when the user asks you to write technical documentation."
# Best — names action, content type, and context
description: "Use this skill when the user asks you to write, draft, or revise technical documentation, API references, or developer tutorials for a software product. Invoke automatically when the request targets a developer audience."
The third version wins the classifier competition because it gives the system enough specificity to distinguish "write documentation for our REST API" (should trigger this skill) from "write a blog post about our API" (should trigger the blog writing skill).
Vague descriptions are a shared problem. Roughly 400,000+ skills exist in community skill libraries (Source: SkillKit community repository, github.com/rohitg00/awesome-claude-code-toolkit), and most of them have descriptions that say "helps with" or "assists in" or "handles." Those skills activate unreliably. Specificity is what separates a production skill from a prompt saved in a file.
Should I Include Negative Triggers?
Yes, whenever your project has more than one skill covering related territory. Negative triggers tell the classifier what the skill is not for, which helps it win the disambiguation competition. Without them, the classifier may route a prompt to the wrong skill in a multi-skill library — and the failure is silent. The right skill never fires and the user never knows why.
Without negative triggers:
description: "Use this skill when the user asks you to write technical content."
This description competes with every writing skill in your library:
- blog writing skills
- documentation skills
- email drafting skills
- any other writing skill
The classifier picks the closest match and sometimes picks wrong.
With negative triggers:
description: "Use this skill when the user asks you to write technical documentation, API references, or inline code comments. Do NOT use for blog posts, marketing copy, or social media — those have separate skills."
The negative trigger narrows the match. The classifier knows this skill wins on documentation prompts and should not win on blog post prompts. Activation becomes more reliable for this skill and also for the blog writing skill it named.
Keep negative triggers concise. One "Do NOT use for..." line covering the most adjacent use cases is enough. You don't need to enumerate every scenario the skill doesn't handle.
How Long Should My Description Be?
Under 1,024 characters, on a single line. Both constraints are non-negotiable. In practice, well-written descriptions land between 150 and 300 characters — long enough to name action, content type, context, and one negative trigger, short enough that the classifier reads the whole thing without truncation. The limit is a ceiling, not a target.
The 1,024-character limit is enforced by Claude Code's frontmatter parser. Descriptions that exceed it are silently truncated. The classifier sees an incomplete trigger condition and activation becomes inconsistent.
The single-line constraint is enforced by YAML parsing behavior. A multi-line string in YAML frontmatter is not equivalent to a single-line string. Claude Code's parser expects a single quoted or double-quoted string value. Line continuations in YAML produce unexpected whitespace or get truncated after the first line depending on the parser configuration.
This means: do not let code formatters touch your skill files. Prettier and most YAML linters will reformat a long single-line description onto multiple lines automatically. Add your skills directory to .prettierignore and any other formatter's ignore configuration.
A well-written description fits in 150-300 characters. If you find yourself approaching 800+ characters, the description is overloaded, move some of that content to the process steps section where it belongs.
What Does a Complete, Production-Ready Description Look Like?
Here are three verified examples from AEM production skills, with annotations. Each one uses the imperative trigger format, names at least three specific scenarios, and includes at least one negative trigger. All three are under 300 characters. They demonstrate that a production-ready description is not long — it is precise.
Content generation skill:
description: "Use this skill when the user asks you to draft, write, or create a LinkedIn post, X thread, or social media caption. Invoke automatically for any social content request. Do NOT use for blog articles, documentation, or email — those have separate skills."
This description uses:
- imperative format
- three specific content types named
- two negative triggers
Character count: 272.
Code review skill:
description: "Use this skill when the user asks you to review, audit, or check code for quality issues, bugs, or style violations. Invoke automatically on any request mentioning 'review this code,' 'check my code,' or 'what's wrong with.' Do NOT use for writing new code from scratch."
This description uses:
- imperative format
- three activation verbs
- example prompt patterns included
- one negative trigger
Character count: 281.
Data analysis skill:
description: "Use this skill when the user provides a CSV, spreadsheet, or dataset and asks for analysis, patterns, or insights. Invoke automatically when a data file is attached and a question about it is asked. Do NOT use for data transformation or cleaning without analysis."
This description uses:
- imperative format
- specific input signal (CSV, spreadsheet, dataset)
- behavioral cue (file attached + question asked)
- negative trigger for adjacent task
Character count: 265.
All three are under 300 characters. None of them approach the 1,024-character limit. Specificity does not require length.
Frequently Asked Questions
Can I use first person in my skill description?
Use "Use this skill when the user asks you to" rather than "Use this skill when I ask you to" or "I will use this skill for." The description is read by the classifier, not by you, the "you" in the description refers to Claude. First-person from Claude's perspective is appropriate. First-person from the user's perspective creates ambiguity.
Does the skill name affect activation, or is it just the description?
The skill name (the filename and the frontmatter title) has minimal effect on activation. The classifier primarily uses the description field. A skill called writing-helper.md with a strong imperative description will outperform a skill called technical-documentation-writer.md with a passive description every time.
How often should I update my skill description?
Update the description whenever:
- you add a competing skill to the project (add negative triggers)
- the skill's scope changes (revise trigger scenarios)
- activation rate drops after adding new skills to the library (tighten specificity)
For skills that have been working correctly for months, the description doesn't need routine updates.
Should I include version information or model preferences in the description?
No. Version information and model preferences belong in the process steps or the frontmatter's other fields. The description is exclusively a trigger condition. Adding version strings or model-specific notes dilutes the classifier signal and wastes characters from the 1,024-character budget.
What if I want my skill to activate only on very specific prompts and not on general requests?
Use narrow, specific trigger scenarios and add a clear negative trigger for the general case. For example: "Use this skill when the user asks you to generate a JSON configuration file for a Webpack or Vite build. Do NOT use for general JSON writing, configuration documentation, or any other config format." The more specific the positive trigger and the more explicit the negative trigger, the narrower the activation pattern.
Can I include examples of prompts that should activate my skill in the description field?
Yes, and it is effective. Listing 2-3 example prompts gives the classifier concrete patterns to match: "Invoke when the user types prompts like 'write me a PR description,' 'create a pull request summary,' or 'draft release notes.'" Keep the total description under 1,024 characters. Example prompts should supplement the trigger condition, not replace it.
For more on what the description field does mechanically, see What Does the Description Field Do in a Claude Code Skill?. For the full picture of why description quality matters more than instruction quality, see Why Is the Description the Highest-Leverage Element of Skill Design?.
Last updated: 2026-04-14