In AEM-built Claude Code skill libraries, vague skill names cause two separate problems. The first is an activation problem: the name field in SKILL.md frontmatter contributes to Claude's trigger classifier, and a name with no semantic content provides zero signal. The second is a maintenance problem: when your library reaches 10 or more entries, names like "helper" or "utils" are indistinguishable from each other, and curation becomes guesswork. A skill named "helper" tells Claude nothing about when to use it and tells you nothing about what it does six months later.

TL;DR: Vague names fail at both the activation layer and the curation layer. Names should be gerund-form action verbs that describe a specific operation ("analyzing-contracts", "drafting-rfq-messages"), not a category of tool. The name is part of the trigger spec, not just a label. Getting it wrong is invisible in week one and expensive in month six.

Why does the skill name matter to Claude's activation mechanism?

Claude's skill discovery process matches user intent against skill metadata, primarily the description field, but the name contributes context as well. A name like "reviewing-pull-requests" carries semantic content: it contains the action (reviewing) and the target (pull requests). A name like "helper" contributes nothing to the classifier and forces the description to carry all the disambiguation work alone.

This creates a fragile skill. Claude's classifier treats a vague name as a null signal and relies entirely on the description to compensate. When the description does the naming work AND the trigger work AND the exclusion work, it becomes overloaded. In our builds, skills with generic names consistently have longer descriptions than skills with precise names, because the builder compensates for the name's semantic emptiness with extra description text. Each skill uses approximately 100 tokens during the metadata scanning phase (community documentation, December 2025); a bloated description consumes a larger share of that budget, leaving less room for actionable trigger logic. That extra text causes its own problem: it pushes past the natural length for description clarity and starts creating trigger conflicts with adjacent skills.

"The single biggest predictor of whether an agent works reliably is whether the instructions are written as a closed spec, not an open suggestion." — Boris Cherny, creator of Claude Code, Anthropic (2024)

The name is part of the spec. An open-ended name produces an open-ended skill. The official documentation states that the combined description and when_to_use text for each skill is truncated at 1,536 characters in the skill listing to reduce context usage; every character wasted on a compensating description is a character that cannot carry trigger or exclusion logic (Claude Code docs, code.claude.com, 2025).
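
The budget arithmetic above can be sketched as a quick pre-flight check. This is a minimal illustration, assuming the 1,536-character figure cited above; the `trigger_budget` function and `when_to_use` parameter are hypothetical helpers for this article, not part of any official API.

```python
# Pre-flight check: how much of the skill-listing budget does one skill's
# trigger text consume? The 1,536-character limit is the figure cited
# above from the Claude Code docs; treat it as an assumption.
TRUNCATION_LIMIT = 1536

def trigger_budget(description: str, when_to_use: str = "") -> dict:
    """Report listing-budget usage for one skill's metadata."""
    used = len(description) + len(when_to_use)
    return {"chars_used": used,
            "chars_left": TRUNCATION_LIMIT - used,
            "truncated": used > TRUNCATION_LIMIT}

# A description bloated to compensate for a vague name eats the budget:
bloated = "Use this helper when the user wants help with tasks. " * 40
print(trigger_budget(bloated)["truncated"])  # True: trailing exclusion logic is cut off
```

Running this against your own library shows which descriptions are compensating for empty names: the skills closest to the truncation limit are usually the ones with the vaguest names.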

What specific names consistently fail?

Names that fail share one trait: they describe a category or role, not a specific operation. The three patterns below account for the majority of mis-named skills we encounter in AEM production library audits. Each pattern contributes zero classifier signal and forces Claude to guess intent from the description field alone.

  • Category-level names: Names that describe a type of tool rather than a specific operation: "helper", "utils", "tools", "assistant", "agent", "skill". These contain no action and no target. They are noun categories, not skill names.
  • Project-scoped internal labels: Names that make sense only to the builder at installation time: "myskill", "todo", "temp", "new", "v2". These work in week one and become unrecognizable in month three.
  • Platform-reserved terms: Skill names should not contain "claude" or "anthropic", as these are reserved by the platform and can cause discovery issues. A name like "claude-assistant" also contributes no semantic content.
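
A lint for these three patterns might look like the sketch below. The word lists are assumptions drawn from the bullets above, not an official blocklist; extend them for your own library.

```python
# Illustrative lint for the three failure patterns above.
CATEGORY_NAMES = {"helper", "utils", "tools", "assistant", "agent", "skill"}
INTERNAL_LABELS = {"myskill", "todo", "temp", "new", "v2"}
RESERVED_TERMS = ("claude", "anthropic")

def naming_problems(name: str) -> list[str]:
    """Return the failure patterns a proposed skill name matches."""
    problems = []
    if name.lower() in CATEGORY_NAMES:
        problems.append("category-level name: no action, no target")
    if name.lower() in INTERNAL_LABELS:
        problems.append("project-scoped internal label")
    problems += [f"contains reserved term '{t}'"
                 for t in RESERVED_TERMS if t in name.lower()]
    return problems

print(naming_problems("helper"))
print(naming_problems("claude-assistant"))
print(naming_problems("reviewing-pull-requests"))  # [] — no problems found
```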

Research on identifier naming in software projects found that approximately 70% of source code consists of names (Makimo, scientific literature review, 2023). Generic names like "helper" carry the lowest possible information density, a finding consistent across both human code review and LLM tool selection contexts. Separate function-calling research found that LLMs experience up to a 20% reduction in tool selection accuracy when required to handle ambiguous tool schemas alongside format constraints simultaneously (ToolACE, ICLR 2025).

What works instead: gerund-form action names that specify the operation and the target. "analyzing-contracts", "drafting-press-releases", "reviewing-pull-requests", "generating-test-cases". The name answers two questions: what verb does this skill perform, and on what object?
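
For example, a hypothetical SKILL.md frontmatter following this pattern — the description text here is illustrative, not from a real skill:

```yaml
---
name: analyzing-contracts
description: >
  Analyzes legal contracts for risk clauses, renewal terms, and missing
  boilerplate. Use when the user pastes or references a contract document.
  Do not use for general legal questions with no document attached.
---
```

Because the name already carries the action and the target, the description is free to spend its characters on trigger and exclusion logic instead of re-explaining what the skill is.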

How does naming affect long-term library maintainability?

At the 30-skill threshold, libraries hit a curation problem. Names that seemed clear at installation become ambiguous over time. "helper" from three months ago — was that the content helper or the code helper? "utils" — was that the string utilities or the date utilities?

In our work maintaining skill libraries for clients, skills with generic names get disabled rather than updated when they stop working. The person doing the audit cannot identify what the skill was supposed to do. Precise names make maintenance possible: "analyzing-seo-metadata" can be audited, updated, and improved by anyone who reads the name. "helper" cannot.

This is the same reason software engineers use specific function names over generic ones. processData() is technically correct and practically useless for code review. normalizeUserEmailToLowercase() is specific and maintainable. Skill naming follows the same principle. Scientific literature on programming identifier naming found that two developers working independently will assign the same name to a given identifier only 7% to 18% of the time, depending on specificity of the concept (Makimo, scientific naming in programming review, 2023). Generic skill names fall into the low end of that range and make shared library maintenance a coordination problem.

For more on how library organization affects skill discovery at scale, see What Are the Most Common Mistakes When Building Claude Code Skills?.

What naming conventions produce reliable skills?

Four rules consistently produce skill names that trigger correctly and survive library curation. Each rule addresses a specific failure mode: action-less names, target-less names, format mismatches, and reserved-term conflicts. Applied together, they produce names that give Claude's classifier a usable signal and give you a maintainable library.

  1. Gerund form: The name starts with a present participle: "analyzing", "drafting", "reviewing", "generating", "converting". Not "analyzer", "drafter", or "analysis". Gerund form signals action, not category.
  2. Target included: The name specifies what the skill operates on: "analyzing-contracts" not "analyzing"; "drafting-rfq-messages" not "drafting". The name answers "doing what, to what?"
  3. Lowercase with hyphens: No underscores, no camelCase, no spaces. The folder name and the name field in SKILL.md frontmatter must match exactly.
  4. No reserved terms: No "claude", "anthropic", "skill", or other platform-reserved words. They provide no discriminating signal and risk naming conflicts.
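
The four rules can be expressed as a small pre-install check. This is a sketch under the assumptions above: the regexes and reserved-term list are illustrative, and a real gerund check would need a verb list, since an "-ing" suffix test also matches nouns like "string".

```python
import re

# Sketch of a pre-install name check for the four rules above.
RESERVED = {"claude", "anthropic", "skill"}

def validate_skill_name(name: str) -> list[str]:
    """Return rule violations for a proposed skill name (empty = OK)."""
    errors = []
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        errors.append("use lowercase words separated by hyphens")
    words = name.split("-")
    if not re.fullmatch(r"[a-z]+ing", words[0]):
        errors.append("start with a gerund: 'analyzing', 'drafting', ...")
    if len(words) < 2:
        errors.append("include a target: 'analyzing-contracts', not 'analyzing'")
    if RESERVED & set(words):
        errors.append("drop platform-reserved terms")
    if len(words) > 4:
        errors.append("keep it to 2-4 words")
    return errors

print(validate_skill_name("analyzing-contracts"))  # []
print(validate_skill_name("Contract_Analyzer"))
```

Run the same check against the folder name and the frontmatter name field; both must pass, and both must be identical.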

A 2-4 word name following these rules is sufficient. "analyzing-customer-support-tickets-for-sentiment-and-escalation-flags" is too long. "analyzing-support-sentiment" is specific and usable. Community testing across 200+ prompts found that well-optimized skill names and descriptions improve activation rates from 20% to over 50% in libraries with more than 15 skills (community research, December 2025 documentation).

If your naming is correct but the skill still fails to trigger reliably, the description field is the more likely cause. See What Does the Description Field Do in a Claude Code Skill? to diagnose that separately.


FAQ

Skill naming in Claude Code follows strict formatting rules that differ from most coding conventions: hyphens not underscores, gerund form not noun form, and no platform-reserved terms. Getting any of these wrong produces silent failures in AEM production libraries. The skill installs without error but triggers inconsistently or becomes unrecognizable during curation within months.

Can I use underscores in my Claude Code skill name?

No. Skill names use hyphens between words, not underscores. The folder name and the name field in SKILL.md frontmatter must both use hyphens and must match each other exactly. Underscores can cause the skill discovery mechanism to misclassify or ignore the skill.

Why can't my skill name contain "claude" or "anthropic"?

These terms are reserved by the platform. Including them in skill names risks naming conflicts with built-in functionality and provides no additional semantic signal. Name the skill by what it does ("analyzing-emails") rather than what tool powers it.

How do I rename a skill without breaking existing references?

Update the folder name and the name field in SKILL.md frontmatter together — they must match. Then search your CLAUDE.md and other skill files for any references to the old name and update them. If the skill is installed at user level, reinstall it from the updated folder. There is no automatic propagation.
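
The rename steps above can be sketched as a script. Assumptions for illustration: all skills live under one library folder, the frontmatter uses a plain name: line, and references live in .md files; the rename_skill function is hypothetical, not a real CLI command.

```python
import re
from pathlib import Path

def rename_skill(library: Path, old: str, new: str) -> None:
    """Rename a skill folder and its frontmatter name field together."""
    (library / old).rename(library / new)              # 1. folder name
    skill_md = library / new / "SKILL.md"
    text = skill_md.read_text()
    skill_md.write_text(re.sub(                        # 2. frontmatter field
        rf"^name:\s*{re.escape(old)}\s*$", f"name: {new}",
        text, flags=re.MULTILINE))
    for path in library.rglob("*.md"):                 # 3. flag stale refs
        if old in path.read_text():
            print(f"stale reference to '{old}' in {path}")
```

A user-level install still needs a manual reinstall afterward, as noted above; nothing propagates automatically.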

Do vague names actually prevent a skill from triggering?

A vague name does not prevent triggering by itself — the description field does most of the trigger work. But a vague name provides no classifier signal that supplements the description, which means the description carries the full disambiguation burden. This produces over-long descriptions that cause trigger conflicts with adjacent skills.

What's the best folder structure for a team using 50+ skills?

Organize by domain of operation, not by tool type. A structure like /skills/content/, /skills/engineering/, /skills/finance/ groups skills by the work they support. Generic categories like /skills/utils/ or /skills/helpers/ replicate the same naming problem at folder level and defeat curation.

Last updated: 2026-04-19