A distribution-ready Claude Code skill needs 4 additions beyond a working SKILL.md: a README with installation steps, a LICENSE file, a CHANGELOG (kept outside the skill folder), and test coverage in evals.json. The CHANGELOG and LICENSE go in the repository root so Claude never loads them as context during skill execution. This is the AEM packaging standard: the same structure we require before any skill ships to a client or goes public.
TL;DR: Packaging is about making your skill usable by someone who has never seen it before. That means: a README that explains what it does and how to install it in under 60 seconds, a LICENSE that grants legal permission to use it, test coverage that proves it works, and a folder structure that Claude can navigate correctly.
What Does a Distribution-Ready Skill Look Like?
A distribution-ready Claude Code skill has four required elements: a README with installation steps, a LICENSE granting legal permission, a CHANGELOG tracking version history, and an evals.json inside the skill folder. README, LICENSE, and CHANGELOG live at the repository root so Claude never loads them as context; evals.json lives inside the skill folder so Claude knows the skill was tested.
skill-repo/
├── README.md (required: installation and usage)
├── LICENSE (required: legal permission to use)
├── CHANGELOG.md (recommended: version history)
└── your-skill-name/ (the skill folder Claude reads)
├── SKILL.md
├── evals.json
├── references/
│ └── domain-knowledge.md
└── assets/
└── output-template.md
The 3 files at the repository root (README, LICENSE, CHANGELOG) serve human readers. The skill folder serves Claude. Keep them separate. README, LICENSE, and CHANGELOG placed inside the skill folder get loaded into Claude's context every time the skill runs, wasting tokens on content the model doesn't need.
This separation is the first thing we check when auditing community skills for client projects. A skill with a README inside the skill folder is a fair-weather skill: it works, but it's wasting 300-500 tokens per run on human-readable documentation the model doesn't use. According to Claude Code's skill architecture research, each skill's metadata (name and description) consumes approximately 109 tokens at startup; placing a full README inside the skill folder multiplies that token cost on every invocation (Anthropic, Claude Code Docs, 2024).
How Do I Write a README That Drives Actual Installs?
Write four sections in order: what the skill produces (2-3 sentences naming the actual output), when to invoke it (trigger condition stated explicitly), exact installation commands (copy-paste ready, not paraphrased), and configuration notes. That covers everything a stranger needs to go from discovery to first run. Nothing in a README matters if the installation block fails.
- What this skill does (2-3 sentences): Write the output, not the feature. "Produces a structured code review in 3 sections: critical issues, warnings, and style feedback." Not "helps you review code." The person reading this wants to know if the skill solves their specific problem. Give them the answer in 10 words.
- When to use it (1-2 sentences): State the trigger explicitly. "Invoke when you want a structured review of a TypeScript file before submitting a pull request. Works via slash command (
/your-skill-name) or auto-triggers when you describe a code review task naturally." - Installation (numbered steps): Show exact commands. Don't paraphrase. A developer who installs skills frequently has zero patience for "copy the skill folder to your skills directory." Show the path:
1. git clone https://github.com/your-username/your-skill-repo.git 2. cp -r your-skill-repo/your-skill-name .claude/skills/ 3. Start a new Claude Code session and run /skills to verify installation. - Configuration (if applicable): List any reference files the user needs to customize. If the skill works without configuration, say so: "No configuration required. Install and run."
We've released public skills on GitHub and the metric that predicts install count is README clarity, not skill quality. A skill with a 3-step installation block gets installed. A skill with "refer to the SKILL.md for details" gets skipped. The skill's quality becomes irrelevant if no one installs it. Research analyzing 5,000 GitHub repositories found that the number of lists, links, and frequency of README updates are statistically significant factors distinguishing popular from non-popular repositories (Wang et al., Journal of Systems and Software, 2023).
"The failure mode isn't that the model is bad at the task. It's that the task wasn't specified tightly enough. Almost every production failure traces back to an ambiguous instruction." — Simon Willison, creator of Datasette and llm CLI (2024)
Write your README like a spec. Every ambiguous instruction is a potential failed install.
What License Should I Use?
MIT is the standard for community Claude Code skills. It allows anyone to use, copy, modify, and distribute your skill, including for commercial purposes, with no restrictions beyond retaining the copyright notice. That covers personal projects, team tools, and commercial redistribution without any additional agreement. MIT accounts for approximately one-third of all licensed repositories on GitHub, making it the most widely used open source license on the platform (GitHub Innovation Graph, 2025). The Open Source Initiative reported its MIT License page drew over 1 million unique visitors in 2024, more than 4 times the traffic of the next most-viewed OSI-approved license (Open Source Initiative, 2025).
Place the LICENSE file at the repository root. The content for MIT license:
MIT License
Copyright (c) [YEAR] [YOUR NAME]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
No license file means "all rights reserved" under copyright law, which technically prohibits use even for skills published publicly. Without a license, nobody can legally use, modify, or share your code regardless of whether it is publicly visible (choosealicense.com, GitHub). Every community skill needs a LICENSE file.
How Do I Add Test Coverage Before Releasing?
Create an evals.json file inside the skill folder. This is the minimum test coverage that SkillHub requires, that awesome-claude-skills recommends, and that tells any developer the skill was tested before release. Each file needs at least two test cases: one that confirms the skill triggers correctly, and one that confirms it does not trigger on unrelated input.
The format:
{
"evals": [
{
"id": "trigger-test-01",
"description": "Skill triggers on a natural language code review request",
"input": "Can you review this TypeScript function for issues?",
"expected_behavior": [
"Skill auto-triggers without slash command",
"Output contains 3 sections: critical issues, warnings, style feedback",
"Output does NOT contain implementation fixes or rewritten code"
]
},
{
"id": "trigger-test-02",
"description": "Skill does NOT trigger on unrelated requests",
"input": "What's the best way to structure a React component?",
"expected_behavior": [
"Skill does NOT trigger",
"Claude responds to the question directly without invoking the review skill"
]
}
]
}
A minimum of 2 test cases: one that should trigger and one that should not. In our AEM production skill reviews, skills released without evals generate 3-5x more support requests from other developers, because there is no way to tell whether a failure is a broken skill or a mismatch with the user's context. A study of machine-learning library issues on GitHub found that documentation and support requests consistently rank among the top issue categories, second only to bugs (arXiv:2312.06005, 2023).
For more on writing evals, see What is an evals.json file?.
How Do I Know If My Skill Is Ready for Public Release?
Run the 5-point pre-release checklist before publishing: verify the description field triggers correctly, confirm the output contract names what the skill does NOT produce, check evals.json exists with at least one positive and one negative test case, pass a fresh-session install test, and pass a cross-machine install where someone follows your README without asking a clarifying question.
Description field: Single line, under 1,024 characters, imperative trigger statement. The 1,024-character limit is the official maximum for the SKILL.md
descriptionfield (Anthropic, Claude Code Skill Authoring Docs, 2024). Open the SKILL.md and read thedescriptionfrontmatter. Multi-line descriptions mean the skill requires a slash command, which is functional but not auto-triggering.Output contract: The skill defines both what it produces and what it does NOT produce. A skill with no "does not produce" clause produces whatever seems relevant in context, which creates scope creep.
Evals present: At minimum 1 trigger test case and 1 negative test case in evals.json.
Fresh session test: Install the skill in a clean project with no prior context. Describe the task naturally without using the slash command. If the skill auto-triggers correctly, it's ready. If it doesn't, the description field needs adjustment.
Cross-machine test: Install the skill on a different machine or have someone else install it from your README instructions. If they succeed without asking you a clarifying question, the README is adequate.
This applies specifically to skills designed for your own context: they work because you know the skill's trigger patterns, your reference files contain your domain knowledge, and your test cases come from tasks you've run before. A stranger installing your skill has none of that context. The 5-point checklist forces you to verify the skill works without it.
For guidance on where to publish after passing this checklist, see Can I publish my skill on GitHub? and What is SkillHub?.
FAQ
Should I include a version number in my skill?
Yes. Add a version field to the SKILL.md frontmatter or the README header. Semantic versioning (1.0.0, 1.1.0) lets users know when a change is breaking vs additive. Semantic versioning (SemVer) is the most widely adopted version numbering scheme across the software industry, originally specified by Tom Preston-Werner, co-founder of GitHub, and used by default in npm, PyPI, and most modern package registries (semver.org). Tag GitHub releases to match.
Can I distribute a skill without a GitHub repository?
Yes, via SkillsMP's direct zip upload. You don't need a GitHub repository to publish on SkillsMP. For SkillHub, GitHub is required. For the awesome-claude-skills list, GitHub is required. GitHub is optional for SkillsMP-only distribution.
How do I update a distributed skill without breaking users' installs?
Users who installed via git submodule get your updates automatically when they run git submodule update --remote. Users who installed via direct copy don't. Add a "Last updated: YYYY-MM-DD" line to the README. If a major update changes skill behavior in a breaking way, increment the major version and document the breaking change in CHANGELOG.md.
What should I do if someone reports a bug in my public skill?
Fix the issue in your repository, update the CHANGELOG, increment the version, and post a note in the original bug report or issue. Users who installed via submodule get the fix automatically. Users who installed via copy need to re-copy the updated skill folder.
Do I need to include all reference files if my skill uses private data?
No. For skills that use reference files containing private or proprietary data (API schemas, internal style guides, confidential domain knowledge), distribute the SKILL.md and a README that explains what reference files the user needs to create, with a template showing the required structure. The user populates the reference files from their own data.
Last updated: 2026-04-25