Yes. Copying instructions across projects is the clearest signal in Claude Code development that something should become a reusable skill. The moment you paste the same block of instructions into a second project, you have created a maintenance problem: two copies that will drift apart the first time you improve one and forget to update the other. Code duplication affects between 5% and 20% of software systems and is the leading driver of divergent behavior across codebases (arXiv 2502.04073, 2025). The same principle applies to AI instruction sets.
TL;DR: If you have copied the same instructions into more than one project, extract them into a SKILL.md file. Place the skill in a shared location, write a description that triggers it reliably, and delete the copies. The cost is 30 minutes of setup. The benefit is a single source of truth that improves across every project simultaneously.
What is the two-copy rule?
The two-copy rule is simple: if identical or near-identical instructions exist in two places, they belong in one place. Skills are that one place. One canonical SKILL.md file, stored at ~/.claude/skills/, replaces every copy and makes every future improvement apply universally. Each duplicated instruction block in CLAUDE.md loads on every turn for the entire session, so two copies means paying the token cost twice, on every message, for the rest of the session (Anthropic, Claude Code docs, 2025).
At AEM, we've audited more than 40 developer Claude Code configurations over the past year. The most common finding: three to five blocks of instructions duplicated across CLAUDE.md files in different projects, each slightly out of sync with the others. The developer improved the instructions in the project where they noticed a problem and forgot to propagate the update. By the time we examined the configurations, each copy had drifted enough to produce noticeably different behavior.
This is not a discipline problem. It is an architecture problem. Copying instructions is the path of least resistance when you are working inside a project and need a capability that has worked before. The fix is not to be more careful about copying. The fix is to make the skill accessible in two commands: create the directory and write the SKILL.md file. Once it exists in ~/.claude/skills/, it is available in every project with no further setup.
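The two commands might look like the following sketch. The skill name (code-review) and its body are hypothetical; the frontmatter fields (name, description) follow the standard SKILL.md layout.

```shell
# Hypothetical skill: create the directory, then write the SKILL.md file.
mkdir -p ~/.claude/skills/code-review
cat > ~/.claude/skills/code-review/SKILL.md <<'EOF'
---
name: code-review
description: Performs a structured code review checking security, performance, and style compliance.
---
Review the diff in three passes: security issues first, then performance, then style.
EOF
```

Once this file exists, every Claude Code session under your account can see the skill's description and invoke it.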
"The single biggest predictor of whether an agent works reliably is whether the instructions are written as a closed spec, not an open suggestion." — Boris Cherny, TypeScript compiler team, Anthropic (2024)
A skill is that closed spec. Maintaining it in one place means improvements are applied universally, not project by project.
How do you know if something is ready to become a skill?
Three signals indicate that a block of instructions is ready for extraction: portability, stable domain knowledge, and observed drift. Any one of the three is sufficient. All three together means the block has already been maintained manually in at least one project, which is the clearest possible evidence that centralization has value.
- It works the same way regardless of which project you are in. If the instructions describe a methodology (a code review process, a writing workflow, a debugging sequence) rather than a project-specific behavior, the methodology is portable. Portable instructions belong in a skill.
- It contains domain knowledge that does not change between projects. Brand voice guidelines, a technical review rubric, an output format specification. This knowledge does not depend on the project. It belongs in a reference file attached to the skill, not in a per-project CLAUDE.md.
- You have edited it in one project and not others. If you have improved the instructions in one place and not propagated the change, you already know it should be centralized. The improvement you made is evidence that you are actively maintaining this workflow — it belongs in a skill where maintenance has effect everywhere.
Models placed in longer contexts lose track of instructions that appear in the middle of their context window, making position and centralization both matter for reliable behavior (Nelson Liu et al., Stanford NLP Group, "Lost in the Middle," arXiv 2307.03172, 2023). A single well-placed skill description produces more reliable triggering than duplicate instructions scattered across multiple CLAUDE.md files.
For context on what kinds of instructions belong in a skill versus CLAUDE.md, see Should I Put My Instructions in CLAUDE.md or Create a Separate Skill?.
How do you extract an instruction block into a skill?
The extraction process has four steps: create the SKILL.md file in a shared directory, write a description that triggers it correctly, strip out project-specific details, and delete the copies you're replacing. The critical one is step two: writing a description that triggers the skill correctly. Claude uses the description field to decide when to invoke the skill automatically, and the combined description and when_to_use text is capped at 1,536 characters in Claude's skill listing (Anthropic, Claude Code docs, 2025). Front-load the key use case.
- Create the SKILL.md file. Copy the instructions into a new SKILL.md file in a shared skills directory: either ~/.claude/skills/ for user-level skills available across all projects, or .claude/skills/ in a shared repository that each project references.
- Write a description that triggers it. The description field in the SKILL.md frontmatter is what Claude reads to decide when to invoke the skill. Make it specific to the workflow you are extracting. "Performs a structured code review checking security, performance, and style compliance" is a description. "Helps with code" is not (source: AEM skill-engineering methodology, 2026).
- Move project-specific details out. Instructions that reference project-specific databases, file paths, or team conventions do not belong in the shared skill. Those details stay in the project's CLAUDE.md. The skill should contain only the methodology that applies universally.
- Delete the copies. Remove the duplicated instruction blocks from each project's CLAUDE.md. Test that the skill triggers correctly in each project. If it does, the copies are gone and you have one maintained source.
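Assembled, an extracted skill might look like this sketch. The review workflow and wording are illustrative; only the frontmatter fields reflect the standard format.

```
---
name: structured-code-review
description: Performs a structured code review checking security, performance, and style compliance. Use when asked to review a diff, PR, or changed files.
---

# Structured code review

1. Security pass: injection risks, auth checks, secrets in code.
2. Performance pass: N+1 queries, unnecessary allocations.
3. Style pass: naming, dead code, comment accuracy.

Report findings grouped by pass, highest severity first.
```

Note that the description leads with what the skill does and ends with when to use it, which is what makes automatic triggering reliable.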
Where should you store a shared skill?
Two options exist. The choice is determined by who needs access: just you, or your whole team. Personal skills stored at ~/.claude/skills/ are available across every Claude Code session for your account. Team skills live in a shared repository and install from a single source. Both use the same SKILL.md format; only the storage path and access scope differ.
User-level skills (~/.claude/skills/): Skills stored here are available in every Claude Code session for your user account, across all projects. This is the right location for personal productivity workflows: writing styles, debugging protocols, research methodologies. Any workflow you use across projects but do not need to share with teammates.
Shared repository skills: Store the skill in a shared internal repository and reference it from each project using Claude Code's configuration. This is the right approach for team-wide workflows: code review standards, deployment checklists, documentation templates. Every team member installs from the same source, and updates propagate when they pull the repository. Development teams using version control systems for shared assets report a 20% increase in productivity compared to teams without centralized source control (industry analysis, hutte.io, 2024). The mechanism is the same for skills: one canonical source eliminates the coordination tax.
In both cases, user-level and shared repository, the skill file itself is identical. Only the storage location differs, and that determines who has access to it. Unlike CLAUDE.md content, a skill's body loads only when it is invoked, not at session start: skill descriptions always appear in context, but the full content enters the conversation only when Claude activates the skill (Anthropic, Claude Code docs, 2025). For large reference material, this means near-zero token cost until the skill is actually used.
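One plausible install step for a team repository is sketched below: check out the shared repository once, then symlink the skill directory into place. The repository path and skill name are hypothetical, and symlinking is one approach among several, not the only mechanism.

```shell
# Simulated team checkout; in practice this directory would come from
# `git clone <team-skills-repo> ~/team-skills`.
mkdir -p ~/team-skills/deploy-checklist
printf -- '---\nname: deploy-checklist\ndescription: Runs the team deployment checklist.\n---\n' \
  > ~/team-skills/deploy-checklist/SKILL.md

# Link the shared skill into the user-level skills directory.
mkdir -p ~/.claude/skills
ln -sfn ~/team-skills/deploy-checklist ~/.claude/skills/deploy-checklist
```

With a symlink, a `git pull` in the team repository updates the skill for every project with no reinstall step.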
"Developers don't adopt AI tools because they're impressive — they adopt them because they reduce friction on tasks they repeat every day." — Marc Bara, AI product consultant (2024)
What cannot be extracted into a shared skill?
Not every instruction block is portable. Three categories stay in CLAUDE.md: project-specific paths, behaviors that must differ between projects, and genuinely one-off instructions. If an instruction block references a specific directory or team convention, extract the methodology but leave the specifics behind. The skill holds the pattern; the project holds the parameters. Duplication rates matter here: copy/pasted code lines rose from 8.3% to 12.3% of all changed lines between 2021 and 2024, driven partly by AI tools taking the path of least resistance (GitClear, analysis of 211M changed lines, 2025). The same dynamic applies to AI instructions: copying is fast and centralizing takes a few minutes, so most developers copy until drift becomes painful.
- The instructions reference project-specific paths or names. If the instructions say "check the src/api/ directory" or "follow the naming conventions in team-standards.md," those are project-local details. Extract the methodology, leave the specifics in CLAUDE.md.
- The behavior needs to differ between projects. A code review process that checks different security rules for a financial application versus an internal tool needs per-project customization. You can extract the structure into a shared skill and override specific rules in each project's CLAUDE.md, but a single unified skill will produce the wrong behavior in at least one of the projects.
- The instructions are genuinely one-off. A block of instructions written for a single specific migration, deadline, or context does not repeat. Extract it only if you expect to reuse the pattern.
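The pattern-versus-parameters split might look like this sketch; both file bodies are hypothetical.

```
<!-- Shared skill body: ~/.claude/skills/code-review/SKILL.md (the pattern) -->
Check every changed file against the project's security rules listed in
the project CLAUDE.md under "Review rules".

<!-- One project's CLAUDE.md (the parameters) -->
## Review rules
- Flag any query built by string concatenation (financial data, strict mode).
- Ignore logging verbosity; this is an internal tool.
```

The skill never names a rule directly; each project supplies its own list, so the shared methodology stays identical everywhere.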
This applies to single-domain skill extraction. For workflows that span multiple domains or require different behaviors across projects, a shared skill with per-project reference files is a more appropriate architecture than a single monolithic skill.
For a guide to deciding when a workflow needs a skill versus remaining as project-level instructions, see When Should I Use a Skill Instead of Writing Instructions in CLAUDE.md?.
Frequently Asked Questions
Two copies is the extraction threshold. A skill body loads only when invoked, so setup cost is a one-time 30-minute investment that pays back immediately: one canonical version, zero propagation work, and every improvement reaching every project on its next invocation. The questions below address the specific edge cases developers encounter most.
How many projects does it take before something should become a skill? Two is the threshold. The moment you have copied the same instructions into a second project, you have a maintenance problem. Some developers wait for three or four copies before extracting, which is reasonable, but it means living with drift in the interim. Two copies is the earliest point where extraction pays off.
Can I extract just part of a CLAUDE.md into a skill? Yes. Skills work at the instruction block level, not the whole-file level. If one section of your CLAUDE.md describes a portable methodology and the rest is project-specific, extract only the portable section into a skill and leave everything else in CLAUDE.md. The two can coexist in the same project without conflict.
What happens to the original copies after I extract a skill? Delete them. The copies in each project's CLAUDE.md become dead weight: they consume tokens in Claude's context window at startup, they drift from the canonical version in the skill, and they can produce behavior that conflicts with the skill. One refactoring case study found that removing duplicated code reduced maintenance time by 89% in subsequent updates to the same codebase (ScienceDirect, Journal of Systems and Software, 2025). The extraction is only complete when the copies are gone.
Will a shared skill apply its instructions to every project, even ones where it is not relevant? Only if the skill's description is too broad. The description determines when Claude activates the skill. A well-written description includes trigger conditions that make the skill activate for appropriate requests and stay dormant for unrelated work. If your skill is triggering where it should not, the description needs tightening, not the skill's storage location.
Can I version-control shared skills? Yes, and you should. Storing shared skills in a git repository gives you a change history, the ability to roll back to previous versions, and a review process for changes that will affect every project using the skill. Git's remote repository maintains a single source of truth even when multiple developers are working from the same skill base (Atlassian, "What is version control?", 2024). This is the same discipline you would apply to any shared library or configuration. Git is not exotic tooling: 93% of developers already use it as their primary version control system (Stack Overflow Developer Survey, 2024). Skills versioned in git fit directly into existing workflows with no new tooling required.
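Putting the user-level skills directory under version control takes a few commands, sketched below. The skill name and the inline commit identity are placeholders; normally git config supplies user.name and user.email.

```shell
# Hypothetical skill to commit.
mkdir -p ~/.claude/skills/writing-style
cd ~/.claude/skills
printf -- '---\nname: writing-style\ndescription: Applies the house writing style to drafts.\n---\n' \
  > writing-style/SKILL.md

git init -q                      # no-op if the directory is already a repo
git add writing-style/SKILL.md
# Placeholder identity for illustration only.
git -c user.name=dev -c user.email=dev@example.com \
  commit -qm "Add writing-style skill"
```

From here, every edit to a skill gets a commit message, a diff, and a rollback path, exactly as with any shared library.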
Is it better to have one shared skill with many instructions, or several smaller skills? Several smaller skills, each with a clear and narrow scope. A skill with 200 lines of instructions covering 10 different workflows is harder to maintain, harder to trigger precisely, and more likely to bleed into contexts where only one workflow is relevant. Extract each distinct workflow as its own skill. See Is It Better to Have One Complex Skill or Several Simple Ones? for the full tradeoff analysis.
Last updated: 2026-05-06