A skill that only works on your machine is not a skill. It is an alias. At Agent Engineer Master (AEM), this is the first portability test every skill must pass before it leaves a commission: does it work on any machine in the team's repo, or just the machine that built it?

TL;DR: The simplest way to share a Claude Code skill with your team is to commit it to the project repo as a project-level skill. Anyone who clones the repo gets the skill automatically. Before sharing, verify portability: no hardcoded paths, no machine-specific secrets. Skills travel with the code and update through normal PR review.


What's the simplest way to share a skill with teammates?

Commit it to the project repo. Add .claude/skills/ to git, push it, and every teammate who clones or pulls gets the skill at session start. No Slack messages, no manual installs. The 93% of development teams already on Git (Stack Overflow Developer Survey, 2024) need zero new tooling to make this work.

Project-level skills live in .claude/skills/ inside the project directory. Everything in that folder is part of the project's version-controlled state, so the skill travels with the code rather than with any one machine.

This is the design intent. Claude Code treats .claude/skills/ as part of the project. Sharing via git is not a workaround; it is the expected path.
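The whole workflow is a handful of commands. A minimal sketch, assuming a git checkout of the project; the skill name, SKILL.md contents, and commit message are illustrative, not a prescribed format:

```shell
#!/bin/sh
set -eu

# Illustrative repo; in practice this is your existing project checkout.
REPO="$(mktemp -d)"
git init -q "$REPO"

# A project-level skill is a folder under .claude/skills/ containing a SKILL.md.
SKILL_DIR="$REPO/.claude/skills/reviewing-pull-requests"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: reviewing-pull-requests
description: Review pull requests against the team checklist in ./docs/pr-checklist.md.
---

1. Read the diff.
2. Check it against ./docs/pr-checklist.md (relative path, so it works in any clone).
EOF

# Commit it like any other project file. After `git push`,
# teammates get the skill on their next pull.
git -C "$REPO" add .claude/skills/
git -C "$REPO" -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Add reviewing-pull-requests skill"
```

The PR that carries this commit is where the design rationale goes, as described above.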

The PR that adds the skill is also the place to explain what the skill does, why the description is written the way it is, and any trigger conditions teammates should know about. That context lives in the PR description, not in the SKILL.md. Six months later, when someone wants to change the skill, that PR is the institutional memory.

This approach doesn't solve every sharing scenario. It doesn't work for teams without a shared repository, for skills that embed sensitive business logic that can't be committed to version control, or for cross-org distribution where teammates don't have repo access. For those cases, see How Do I Package a Skill for Distribution to Others?.

What makes a skill portable vs machine-specific?

Three failure modes turn a skill that works for you into one that breaks for everyone else: hardcoded absolute paths, machine-specific environment variables, and personal context assumptions baked into the description. A skill that hits any one of these will work for its author and fail silently for every teammate on day one.

  1. Hardcoded absolute paths. A step that reads "Load the file at /Users/yourname/project/docs/api-spec.md" works for you and no one else. Use relative paths from the project root: ./docs/api-spec.md. Test in a clean clone before sharing.
  2. Machine-specific environment variables. A skill that relies on $HOME/custom-config.yaml fails silently on machines without that file. Document the required setup explicitly, or better, make the skill fail visibly with a clear error when the file is missing.
  3. Personal context assumptions. A description written to trigger on "my usual PR thing" works for you because you know what it means. A teammate does not. Write for the natural language the whole team uses, not your personal shorthand.
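The first two failure modes can be caught mechanically before sharing. This is a hypothetical pre-share lint script, not a Claude Code feature; it just greps skill files for common machine-specific patterns (home-directory paths and $HOME expansions):

```shell
#!/bin/sh
# Flag likely machine-specific references in skill files.
# The patterns are illustrative, not exhaustive.
check_portability() {
    dir="$1"
    grep -rnE '(/Users/|/home/|\$HOME)' "$dir" && return 1 || return 0
}

# Demo: one portable skill, one with a hardcoded absolute path.
tmp="$(mktemp -d)"
mkdir -p "$tmp/good" "$tmp/bad"
echo "Load ./docs/api-spec.md" > "$tmp/good/SKILL.md"
echo "Load /Users/yourname/project/docs/api-spec.md" > "$tmp/bad/SKILL.md"

check_portability "$tmp/good" && good=portable || good=machine-specific
check_portability "$tmp/bad"  && bad=portable  || bad=machine-specific
echo "good: $good, bad: $bad"   # good: portable, bad: machine-specific
```

The third failure mode, personal shorthand in the description, has no grep pattern; the only check is a teammate reading it cold.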

"When you give a model an explicit output format with examples, consistency goes from ~60% to over 95% in our benchmarks." -- Addy Osmani, Engineering Director, Google Chrome (2024)

The same principle applies to sharing skills. A skill with explicit output contracts and example outputs produces consistent results across team members. A skill with vague instructions produces different results for each person who invokes it.

The most common portability failure we see in skill engineering commissions at Agent Engineer Master: an absolute file path embedded in a step instruction. The skill works for the original developer and breaks for everyone else on day one. Studies of shared AI tooling found a 45% reduction in hours per developer contribution when configuration was version-controlled and shared across the team (UCSD / GitHub Copilot study, 2024). Portability is the prerequisite for that gain.

How do naming conventions affect shared skill usability?

Skill naming matters more for shared skills than for personal ones. An ambiguous name costs one person a moment of confusion. The same name costs a ten-person team ten moments of confusion, every time it is invoked. Use the gerund form with a domain qualifier: reviewing-pull-requests not review, generating-commit-messages not commits.

A skill named review could mean code review, PR review, document review, or any other kind of review. When you are the only person using it, you know exactly which one. When the whole team uses it, the name needs to be unambiguous.

Names like reviewing-pull-requests, generating-commit-messages, and documenting-api-endpoints follow this convention and are immediately clear to a new team member who has never used the skill before.

  • Avoid names that conflict with common words (test, build, run).
  • Avoid names that describe the technology rather than the action (typescript, python, react).
  • Avoid generic names that could describe anything (helper, utils, assistant).
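The format side of this is checkable: skill names are lowercase words joined by hyphens, and the gerund-plus-qualifier rule implies at least two words. A sketch of that check, where the two-word heuristic is this article's convention rather than a tool requirement:

```shell
#!/bin/sh
# Heuristic name check: lowercase-hyphenated, at least two words,
# so "review" and "helper" fail but "reviewing-pull-requests" passes.
is_good_skill_name() {
    printf '%s\n' "$1" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)+$'
}

for name in reviewing-pull-requests generating-commit-messages review helper; do
    if is_good_skill_name "$name"; then
        echo "ok:     $name"
    else
        echo "reject: $name"
    fi
done
```

A regex cannot tell you whether the qualifier is the one your team would actually say; that part is still a human review question.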

According to DX research (Q4 2025), developers using AI coding tools daily save an average of 3.6 hours per week, but only when they can locate and trigger the right tool immediately. A skill named helper costs discovery time on every invocation. A skill named reviewing-pull-requests costs nothing.

For naming rules and the discovery impact of skill names, see How Do I Organize Multiple Skills in a Project?.

How does team skill maintenance work over time?

Through the same PR process as the rest of your code. A team member opens a branch, edits the SKILL.md, and opens a PR. Teammates review and merge. Code reviews catch up to 65% of defects before they reach production (IBM research, via Coding Horror); the same discipline applied to skill files catches instruction drift before it reaches every session.

The review question is specific: does this change make the skill more reliable, or does it change the skill's behavior in a way that will break existing users?

This review discipline prevents the drift that happens when skills are distributed informally. If your skill lives in the repo and updates through PRs, every change is visible, reversible, and documented. An AT&T study of code review programs found a 90% decrease in defects after introducing mandatory reviews. That same discipline applies to skill files: a SKILL.md modified without review is the starting point for a skill that works differently in every teammate's session.

For teams with many skills, designate an owner for each skill. The owner is the person who understands the original design intent, reviews proposed changes, and keeps the skill's behavior consistent with its brief. This is the same model as code ownership, applied to skill files. Organizations using unified developer platforms with version-controlled shared tooling report 60% shorter onboarding time (GitKraken / industry research, 2024). Skill ownership and PR review are the mechanism behind that number.

Skills can also be tested in the PR process. A reviewer can clone the branch, load the modified skill, and verify it triggers correctly and produces correct output before merging.
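That reviewer step can be sketched end to end. Here a local directory stands in for the team remote, and the branch name is a hypothetical PR branch; in practice you would clone the real remote URL:

```shell
#!/bin/sh
set -eu

# Stand-in for the team repo; in practice you'd clone the real remote.
origin="$(mktemp -d)"
git init -q "$origin"
mkdir -p "$origin/.claude/skills/reviewing-pull-requests"
echo "v1" > "$origin/.claude/skills/reviewing-pull-requests/SKILL.md"
git -C "$origin" add .
git -C "$origin" -c user.name=dev -c user.email=dev@example.com commit -q -m "v1"

# PR branch with a modified skill.
git -C "$origin" checkout -q -b tighten-skill-instructions
echo "v2" > "$origin/.claude/skills/reviewing-pull-requests/SKILL.md"
git -C "$origin" -c user.name=dev -c user.email=dev@example.com commit -qam "Tighten instructions"

# Reviewer: clone just that branch, then open Claude Code inside the
# checkout to verify the skill triggers and behaves before merging.
review="$(mktemp -d)/review"
git clone -q --branch tighten-skill-instructions "$origin" "$review"
cat "$review/.claude/skills/reviewing-pull-requests/SKILL.md"   # v2
```

The verification itself, loading the skill and checking its trigger and output, happens interactively in that checkout; the script only gets the reviewer a clean copy of the proposed version.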

What about distributing skills outside your team?

For open-source or community distribution, strip company-specific conventions, document all prerequisites, and write a SKILL.md description specific enough to work across project structures a stranger has never seen. Without documented prerequisites, community installs fail silently. A skill dropped into an unfamiliar project without a setup section is a prompt in a trenchcoat.

The SKILL.md description is doing more work in a community context. It has to be specific enough that someone who has never seen your project structure can still trigger the skill correctly. Generic descriptions that worked for your team because everyone shared the same context will fail in the wild.

Prerequisites are the most commonly skipped item. A community user who installs a skill that requires a specific MCP server or reference file gets a silent failure and a broken first impression. List every dependency in a setup section before publishing. 84% of developers are already using or planning to use AI tools (Stack Overflow Developer Survey, 2024); a well-documented skill finds an audience. A skill with no setup docs gets abandoned silently.
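One way to make prerequisites unskippable is a dedicated section in the SKILL.md body itself. A sketch of that layout; the skill name, dependency list, and section headings are illustrative, not a required schema:

```shell
#!/bin/sh
set -eu

# Hypothetical community-distributed skill with an explicit setup section.
skill="$(mktemp -d)/documenting-api-endpoints"
mkdir -p "$skill"
cat > "$skill/SKILL.md" <<'EOF'
---
name: documenting-api-endpoints
description: Generate endpoint docs from the OpenAPI spec under ./api/.
---

## Prerequisites

- An OpenAPI 3.x spec at ./api/openapi.yaml (relative to the project root).
- No MCP servers or private reference files required.

## Steps

1. If ./api/openapi.yaml is missing, stop and report the missing file.
2. Otherwise, generate one markdown page per endpoint.
EOF

grep -c '^## ' "$skill/SKILL.md"   # 2 sections: Prerequisites, Steps
```

Note that step 1 doubles as the visible-failure behavior recommended earlier: a missing dependency produces a clear report, not a silent no-op.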

For the full distribution workflow, including platform options and packaging approaches, see How Do I Package a Skill for Distribution to Others?. For the install-level decision that determines whether a skill is project-scoped or personal, see What's the Difference Between Project-Level and User-Level Skills?.


FAQ: Sharing skills with your team

Git-based project-level sharing handles the common cases: the whole team gets the skill on clone or pull, updates travel through PRs, and no install step is needed. The questions below address edge cases, model-tier differences, documentation patterns, and the manual-file-send workaround.

Can I share a skill by just sending the SKILL.md file to a teammate? Yes. They can add it to their .claude/skills/ directory manually. But this approach creates version drift immediately: if you update the skill, they need to be told and need to update manually. Git-based sharing eliminates this problem.

Do teammates need to do anything after pulling the updated repo to get a new skill? No. Claude Code reads the .claude/skills/ directory at session start. As long as they have pulled the latest commit, the skill is there.

What if teammates are using different Claude models (some on Opus, some on Haiku)? Shared skills should work reliably across model tiers. If a skill only works on Opus, its instructions are too loosely written for lower tiers. The right fix is to tighten the instructions, not to restrict the skill to one model.

How do I document what a skill does for new team members? Three places work well: the PR description when the skill was added, a comment in the SKILL.md frontmatter (the description field), and a brief entry in your team's developer onboarding docs. The SKILL.md description is the most important because it is always co-located with the skill.

Should team skills go in a separate folder from personal project files? No. .claude/skills/ is the standard location. Creating a separate folder or nesting structure does not match the convention Claude Code expects. Project skills go in .claude/skills/. That is the folder.

Can I have both team skills (project-level) and personal skills (user-level) active at the same time? Yes. Both levels load in every session. A developer's personal user-level skills and the team's project-level skills all work simultaneously. If there is a name conflict, the project-level version wins.

Last updated: 2026-05-02