---
title: "How Do I Create My First Claude Code Skill?"
description: "Step-by-step guide to creating your first Claude Code skill: folder setup, SKILL.md frontmatter, process steps, and testing before you ship."
pubDate: "2026-04-13"
category: skills
tags: ["claude-code-skills", "foundations", "beginner", "tutorial"]
cluster: 1
cluster_name: "Foundations & Getting Started"
difficulty: beginner
source_question: "How do I create my first Claude Code skill?"
source_ref: "1.Beginner.4"
word_count: 1580
status: draft
reviewed: false
schema_types: ["Article", "FAQPage"]
---
How Do I Create My First Claude Code Skill?
Quick answer: Create a folder at .claude/skills/your-skill-name/, add a SKILL.md file with YAML frontmatter containing name and description fields, write numbered process steps in the body, and test with 5 distinct inputs including at least one edge case. The whole process takes under 30 minutes for a well-scoped first skill.
What should I build my first skill around?
Build your first skill around a task you have typed out for Claude at least 3 times this week — not a task you plan to do eventually, but one you are already doing repeatedly right now. Three repetitions is the minimum threshold where your instructions have stabilized enough to be worth encoding into a skill.
The 3-repetition threshold is specific for a reason (AEM engineering practice, 2026). Below 3 repetitions, the task is still new enough that your instructions are evolving. At 3 or more, the pattern is stable: you know what you want, you know what good output looks like, and you have enough experience with the task to write reliable instructions.
Good candidates for a first skill:
- Code review comments in a specific style
- Git commit messages following a project convention
- PR description templates
- Test case generation from requirements
- Daily standup summary formatting
Bad candidates for a first skill:
- Tasks you do once a month (not enough repetition to justify the maintenance)
- Tasks where the requirements change every time (the instructions will be stale before you finish writing them)
- Tasks that span multiple unrelated domains (break these into multiple smaller skills first)
Once you have a specific, repeated task, you are ready to build.
How do I set up the file structure?
Set up a named subdirectory inside .claude/skills/ in your project root, then create a single SKILL.md file inside it. That is the complete required structure — one folder, one file. Everything else (a references/ subfolder, an assets/ folder) is optional and only needed if your skill uses external knowledge files.
```
.claude/
  skills/
    your-skill-name/
      SKILL.md
      references/    (optional — add if you have domain knowledge to offload)
```
Create the folder at .claude/skills/your-skill-name/ in your project root. The folder name becomes the skill's identifier. Use lowercase with hyphens. No spaces.
The SKILL.md file goes directly in that folder. Reference files, if you need them, go in references/. The one-level-deep rule applies: reference files can be read but cannot point to other reference files (Claude Code specification, 2026).
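The setup above can be scripted; a minimal sketch in Python, using `pr-summary` as a hypothetical skill name:

```python
from pathlib import Path

# Hypothetical skill name; replace with your own (lowercase, hyphens, no spaces).
skill_dir = Path(".claude/skills/pr-summary")

# One folder, one file: the complete required structure.
skill_dir.mkdir(parents=True, exist_ok=True)
(skill_dir / "SKILL.md").touch()

# Optional: only create references/ if you have domain knowledge to offload.
(skill_dir / "references").mkdir(exist_ok=True)
```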
Project-level skills in .claude/skills/ are visible to everyone on the team who uses Claude Code with this project. User-level skills in ~/.claude/skills/ are visible only to you across all your projects. For your first skill, use the project level. Share the benefit with the team.
How do I write the SKILL.md frontmatter?
Write two fields in the YAML block at the top of SKILL.md: name and description. These are the only required fields. The name field identifies the skill; the description field controls when Claude activates it. Getting the description right is the single most important step in building a reliable skill — everything else depends on it.
Minimum required fields:
```yaml
---
name: your-skill-name
description: "Use this skill when [trigger condition]. Do NOT use for [anti-trigger]. [Core action]."
---
```
The name field is the skill's identifier. It matches the folder name.
The description field controls activation. It is the most important field in the entire file. It must stay on a single line. Multi-line descriptions break YAML parsing and the skill fails silently. The character limit is 1,024 (Claude Code specification, 2026).
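Both rules (single line, 1,024-character limit) are easy to check mechanically before the skill fails silently. A rough sketch, assuming the frontmatter follows the minimal format shown above:

```python
import re

MAX_DESCRIPTION_CHARS = 1024  # per the Claude Code specification cited above

def check_description(skill_md: str) -> list[str]:
    """Return a list of problems with the description field; empty means OK."""
    problems = []
    match = re.search(r"^description:[ \t]*(.*)$", skill_md, re.MULTILINE)
    if match is None:
        problems.append("no description field in frontmatter")
        return problems
    value = match.group(1).strip().strip('"')
    # A YAML literal (|) or folded (>) indicator means a multi-line scalar.
    if not value or value in ("|", ">"):
        problems.append("multi-line description: it must stay on a single line")
    if len(value) > MAX_DESCRIPTION_CHARS:
        problems.append(f"description is {len(value)} chars (limit {MAX_DESCRIPTION_CHARS})")
    return problems
```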
Most people build their first skill, watch it trigger once, and ship it to the team. That is checkpoint 1 of 4. The description needs to survive checkpoints 2 through 4 as well.
An imperative description achieves a 100% activation rate in testing. A passive description sits at 77% (AEM internal testing, 2026, 650 activation trials). Write your description as an imperative:
Weak (passive): "A skill for summarizing pull requests."
Strong (imperative): "Use this skill when asked to summarize a pull request. Do NOT use for code review comments, commit messages, or issue descriptions. Reads the PR diff and outputs a structured 3-section summary."
The strong version triggers on the right input, stays silent on the wrong input, and tells Claude exactly what it is about to do.
"The description is not a summary — it's a set of conditions for when the skill should be triggered." — Tort Mario, Engineer, Anthropic (April 2026, https://medium.com/@tort_mario/skills-for-claude-code-the-ultimate-guide-from-an-anthropic-engineer-bcd66faaa2d6)
How do I write the process steps?
Write the process steps as a numbered list in the body of SKILL.md, below the frontmatter. Each step should describe one action, name the specific tool Claude should use, and appear in execution order. Numbered steps in execution order are the minimum required; everything else is optional structure built on top of that foundation.
A minimal process body:
```markdown
## What this skill does

Generates a structured pull request summary from the diff and description.

## Process

Step 1: Ask the user for the pull request URL or diff if not already provided.
Step 2: Use the Read tool to load the diff content.
Step 3: Identify the three most significant changes in the diff.
Step 4: Output the summary using this format:

- **What changed:** 1-2 sentences describing the core change
- **Why it matters:** 1-2 sentences on the business or technical impact
- **What to review carefully:** 2-3 specific things the reviewer should focus on

## Output contract

Produces: a structured 3-section PR summary.
Does NOT produce: line-by-line code review, approval recommendations, or merge decisions.
```
Key rules for steps that work consistently:
Name the tool. "Read the file" leaves tool selection to Claude. "Use the Read tool" removes ambiguity. Named tools produce stable output (AEM production data, 2026).
Handle empty inputs. Step 1 asks for the PR URL if not provided. This covers the common case where the user invokes the skill without giving Claude what it needs.
Specify the output format. Step 4 gives the exact structure. Without it, Claude decides the format. The format varies across sessions.
Keep steps atomic. One action per step. "Analyze the diff and identify gaps and suggest fixes" is three actions. Write it as three steps. Structured-output research shows compliance improves from approximately 35% with prompt-only instructions to near-100% with explicit output contracts (OpenAI structured output research, summarized at leewayhertz.com/structured-outputs-in-llms).
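An explicit output contract also lends itself to a mechanical check. A minimal sketch, assuming the 3-section PR summary format from the example process body:

```python
# Section headers from the example output contract; adjust to your own skill.
REQUIRED_SECTIONS = (
    "**What changed:**",
    "**Why it matters:**",
    "**What to review carefully:**",
)

def missing_sections(summary: str) -> list[str]:
    """Return the required section headers absent from a generated summary."""
    return [s for s in REQUIRED_SECTIONS if s not in summary]
```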
How do I test my first skill before the team uses it?
Run four checkpoints before sharing the skill: trigger test, process adherence test, output contract test, and edge case test. All four are required. A skill that passes only checkpoint 1 has been demonstrated, not tested. The remaining three checkpoints are where most first skills break (AEM internal testing, 2026).
Checkpoint 1: Trigger test. Type something that matches your description and confirm the skill activates. Then type something that matches your anti-trigger and confirm it stays silent. If it triggers on the anti-trigger input, tighten the description.
Checkpoint 2: Process adherence test. Walk through a full run and verify Claude executes each step in order. If a step gets skipped, add "Always complete this step" or make the step's output a named prerequisite for the next step.
Checkpoint 3: Output contract test. Check that the output exactly matches the format you specified in the output contract. Missing sections, extra unrequested content, or format variations are contract failures. Adjust the instructions until the output is consistent.
Checkpoint 4: Edge case test. Run the skill on inputs you did NOT specifically design it for. Use an empty PR diff. Use a PR with binary file changes. Use a PR from a different repository format. A skill that fails here is a fair-weather skill. Note every failure and add a failure-handling step for each one.
The production bar for a first skill is 5 distinct inputs passing all 4 checkpoints without instruction edits between runs. That is the minimum standard before sharing with the team (AEM quality protocol, 2026). Instructions buried mid-file are especially vulnerable at this stage — Liu et al., "Lost in the Middle: How Language Models Use Long Contexts," Stanford University, 2023 (arxiv.org/abs/2307.03172) found a 30%+ accuracy drop for information at mid-context positions, which is why process steps belong near the top of the skill body, not the bottom.
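The production bar (5 distinct inputs, all 4 checkpoints) can be tracked as a simple pass/fail grid. A sketch, assuming you record each run's checkpoint results by hand:

```python
CHECKPOINTS = ("trigger", "process", "output_contract", "edge_case")
REQUIRED_INPUTS = 5  # AEM quality protocol: 5 distinct inputs, all 4 checkpoints

def production_ready(results: dict[str, dict[str, bool]]) -> bool:
    """results maps input name -> {checkpoint: passed}. True if the bar is met."""
    if len(results) < REQUIRED_INPUTS:
        return False
    # Every recorded input must pass every checkpoint; a missing entry counts as a fail.
    return all(
        all(run.get(cp, False) for cp in CHECKPOINTS)
        for run in results.values()
    )
```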
What's the typical workflow after the first skill works?
After the first skill passes all four checkpoints, the ongoing workflow is three recurring tasks: monitoring for trigger failures, adding edge case handling as failures surface, and archiving the skill when the underlying task changes. A skill is not a one-time build; it is a living document that gets more reliable over time as failure modes are discovered and addressed.
Monitor for trigger failures. If a team member reports the skill not activating when expected, read the description. Add a trigger phrase that covers their use case.
Add edge case handling as you find it. Every new failure mode gets a failure-handling step added to SKILL.md. The skill gets more reliable over time, not just at initial launch.
Archive when the task stops repeating. If the underlying task changes enough that the skill's instructions are no longer accurate, update or archive it. A stale skill that produces incorrect output is worse than no skill at all.
For the complete overview of all four skill components, the production bar, and how to scale from one skill to a library, see The Complete Guide to Building Claude Code Skills in 2026.
For a field-by-field explanation of every SKILL.md section, see What goes in a SKILL.md file?.
For the detailed guide to what separates a Claude Code skill from a prompt or agent, see What is a Claude Code skill?.
Frequently Asked Questions
Do I need to know how to code to create a Claude Code skill?
No. A SKILL.md file is a markdown file. You write instructions in plain English, structured with numbered steps and a YAML frontmatter block. No programming language is required. If you can write a clear set of instructions for a colleague to follow, you can write a skill.
What can I automate with Claude Code skills?
Any task Claude can perform repeatedly on similar inputs. The constraint is repeatability — the task needs the same core process each time. Common examples:
- Writing commit messages
- Summarizing code reviews
- Generating test cases
- Drafting documentation
- Converting data formats
- Running analysis workflows
- Creating standup summaries
How do I invoke a skill once I've created it?
Two ways. Type /skill-name to invoke it directly by name. Or describe the task in natural language that matches the description field. If the description says "Use this skill when asked to summarize a pull request," typing "can you summarize this PR?" triggers it. The description controls both paths.
What does a simple Claude Code skill look like?
The minimum viable skill is a SKILL.md file with a name field, a description field, and 2-3 numbered process steps. No reference files, no assets folder, no self-improvement section. Start with the minimum, test it, and add complexity only where testing reveals a gap.
What's the difference between invoking a skill with /skill-name and having Claude trigger it automatically?
Both paths read the same SKILL.md file and follow the same process. The /skill-name path is explicit: you name the skill directly. The automatic path works when your natural language matches the description field closely enough that Claude selects the skill from the library. Both paths produce identical output for identical inputs.
Last updated: 2026-04-13