Adding a Claude Code skill is supposed to add capability. If it breaks a skill you already depend on, you didn't add capability: you added a conflict. The two causes are distinct and need different fixes: description conflicts between skills, and system prompt budget pressure from total skill count.
TL;DR: Skill conflicts in Claude Code come from two sources: overlapping trigger descriptions (the classifier picks one skill based on linguistic signals, not a deterministic rule) and system prompt budget pressure (descriptions get truncated past roughly 15,000 characters). Diagnose with /skills, compare descriptions side by side, and add scope exclusions.
What are the two ways a new skill breaks an existing one?
Description conflicts account for roughly 70% of inter-skill interference reports in AEM commissions. Two skills with overlapping trigger conditions create an ambiguous signal for Claude's classifier. The classifier picks one skill based on subtle language patterns in the prompt, not a deterministic rule.
The result looks like: the old skill starts triggering on some prompts and failing to trigger on others, with no apparent pattern. Or the new skill triggers when the old one should. Or both try to co-trigger on the same prompt, creating blended output that doesn't match either skill's intended behavior.
System prompt budget pressure accounts for the other 30%. Every skill description occupies space in Claude's system prompt. The total budget for all skill descriptions is approximately 15,000 characters. A project with 10 skills at 150 characters each uses 1,500 characters, well within budget. A project with 20 skills averaging 700 characters each uses 14,000 characters. Adding one more skill at 600 characters brings the total to 14,600, close enough to the ceiling that the longest descriptions start getting truncated.
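The arithmetic above can be sketched as a quick check. A minimal sketch: the ~15,000-character ceiling and the 80% "risk zone" cutoff are the working figures from this article, not officially documented constants.

```python
# Rough skill-description budget check. BUDGET is the ~15,000-character
# figure discussed above; treat it as an estimate, not a documented limit.
BUDGET = 15_000

def budget_status(description_lengths, budget=BUDGET):
    """Return total character usage and a rough truncation-risk verdict."""
    total = sum(description_lengths)
    if total > budget:
        return total, "over budget: truncation likely"
    if total > 0.8 * budget:
        return total, "risk zone: trim or split skills"
    return total, "ok"

# 10 skills at 150 chars each: well within budget.
print(budget_status([150] * 10))          # (1500, 'ok')

# 20 skills at 700 chars, plus one new 600-char skill: at the edge.
print(budget_status([700] * 20 + [600]))  # (14600, 'risk zone: ...')
```

The point of the 80% cutoff is to warn before adding one more skill tips the total over the line, since truncation hits the longest descriptions first.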
When a description gets truncated, the skill it describes loses its trigger conditions mid-sentence. The classifier reads a description that says "Use this skill when the user needs to... [truncated]." The trigger condition is gone. The skill either never activates or activates on everything. Research published via the MLOps Community found that LLM reasoning performance degrades at around 3,000 tokens of context, well below the technical context window limit (Goldberg et al., via MLOps Community, 2024). Budget truncation is the extreme version: the instruction is cut off entirely, not just deprioritized.
"Models placed in the middle of long contexts lose track of instructions at a rate that makes mid-context policy placement unreliable for production systems." — Nelson Liu et al., Stanford NLP Group, "Lost in the Middle" (2023, ArXiv 2307.03172)
An Altexsoft audit of enterprise LLM deployments found that 68% of implementations used bloated system prompts that measurably hurt model performance (Altexsoft, 2024). Skill libraries are one of the fastest ways to hit that threshold without noticing.
How do you diagnose a description conflict?
Run /skills in Claude Code. This shows all skill names and their active descriptions exactly as Claude's classifier reads them in the system prompt. It is the right starting point because conflicts live in the description text, not the skill files themselves. Read them side by side.
Look for three patterns:
- Trigger overlap. Two descriptions that both say "Use this skill when the user wants to write [X]" or "Invoke when the user asks about [Y]." If both descriptions would fire on the same prompt, there's a conflict.
- Scope ambiguity. One description says "Use for content creation" and another says "Use for writing tasks." These are semantically the same trigger condition written differently. Claude will pick based on random linguistic variation in the user's prompt.
- Missing scope boundaries. A description that doesn't say what it doesn't cover. "Use this for LinkedIn posts" might conflict with a general writing skill that covers LinkedIn as one of many platforms. The first skill needs a scope boundary: "Use for LinkedIn posts only, not other social platforms or formats."
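The first two patterns can be surfaced mechanically with a crude word-overlap check. A minimal sketch: the regex and stop-word list are illustrative, and this is not how Claude's classifier actually compares triggers.

```python
import re

def shared_trigger_terms(desc_a: str, desc_b: str) -> set[str]:
    """Crude overlap check: content words both descriptions share after
    removing common trigger boilerplate. A non-empty result suggests the
    two descriptions may compete for the same prompts."""
    stop = {"use", "this", "skill", "when", "the", "user", "wants",
            "needs", "to", "for", "a", "an", "or", "and", "invoke",
            "asks", "about"}
    words = lambda s: set(re.findall(r"[a-z]+", s.lower())) - stop
    return words(desc_a) & words(desc_b)

a = "Use this skill when the user wants to write a LinkedIn post"
b = "Use for writing tasks: blog posts, LinkedIn content, articles"
print(shared_trigger_terms(a, b))  # {'linkedin'}
```

A shared term like "linkedin" here is exactly the kind of overlap to resolve with a scope exclusion in one of the two descriptions.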
The test: write a sample prompt that should trigger your old skill. Ask Claude what skill it would use. If it names the new skill, you have a conflict. Simon Willison, creator of Datasette and the llm CLI, identified this pattern: almost every production AI failure traces back to an ambiguous instruction, not a model capability gap (Willison, 2024). A vague or overlapping description is an ambiguous instruction.
How do you fix a description conflict?
Rewrite one description to add explicit scope exclusions that the classifier can evaluate. The primary fix is negative trigger language: a "SKIP for" list that names what the skill does not cover. Supporting fixes are structural differentiation of trigger language and renaming skills to reflect distinct function.
Rewrite one description to explicitly exclude the other's territory. This is the direct fix. If skill-a covers "blog posts" and skill-b covers "long-form writing," rewrite skill-a's description to say "blog posts only, not long-form articles or other writing types." The exclusion creates a hard boundary that the classifier can evaluate. The pattern for negative trigger language:
```yaml
description: "Use this skill when the user needs to write a LinkedIn
  post (short-form, under 300 words, for professional audiences).
  SKIP for: long-form articles, Twitter/X posts, email, or other
  formats."
```
The "SKIP for" list is the scope boundary. It tells the classifier what this skill explicitly doesn't cover. Any prompt that matches the skip conditions goes to the other skill.
Differentiate the trigger language structurally. If both skills use the same imperative pattern ("Use this skill when..."), the classifier defaults to vocabulary matching. Differentiate the structure: make one skill a directive trigger ("When the user types /linkedin or asks for a LinkedIn post...") and one a semantic trigger ("When the user describes a professional audience and needs short-form content..."). Different structures reduce accidental overlap. Addy Osmani, Engineering Director at Google Chrome, found that giving a model an explicit output format with examples raises consistency from roughly 60% to over 95% in benchmark testing (Osmani, 2024). The same principle applies to trigger language: structure specificity is not optional.
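As a sketch of the two structures side by side (skill names and wording are hypothetical, not from Anthropic's docs):

```yaml
# Directive trigger: fires on an explicit command or named request.
# linkedin-post-writer/SKILL.md frontmatter
description: "When the user types /linkedin or explicitly asks for a
  LinkedIn post, write short-form professional content. SKIP for:
  long-form articles or other platforms."

# Semantic trigger: fires on described intent, not a named format.
# longform-article-writer/SKILL.md frontmatter
description: "When the user describes a topic that needs in-depth,
  long-form treatment (1,000+ words), draft an article. SKIP for:
  short social posts."
```

Because the two descriptions match on different signals (an explicit command versus a described intent), a prompt rarely satisfies both at once.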
Rename one skill. Skill names contribute to discovery alongside descriptions. Two skills with similar names (writing-assistant and content-writer) create classifier confusion before the descriptions are even evaluated. Rename to reflect distinct function: linkedin-post-writer and longform-article-writer. Unambiguous names reduce conflicts at the first layer of selection.
Negative trigger language cannot fully eliminate classifier ambiguity when two skills have genuinely overlapping domains. If skill-a covers "technical writing" and skill-b covers "documentation," no amount of scope exclusion language will create a clean boundary, because the domains themselves overlap. In that case, consolidate the two skills into one.
For how descriptions control skill activation, see What Does the Description Field Do in a Claude Code Skill? and My Skill Produces Great Output but Never Activates Automatically. For the full diagnostic framework when a skill isn't working, see Why Isn't My Claude Code Skill Working?.
When is system prompt budget the issue?
If /skills shows descriptions ending mid-sentence or missing their trigger conditions entirely, budget pressure is the cause. Claude Code's skill metadata budget is finite: descriptions are shortened when total usage exceeds the limit, and the skills with the longest descriptions are cut first.
Note that the documented defaults are tighter than the ~15,000-character working figure above: Claude Code's overall skill metadata budget defaults to 8,000 characters across all skills, and each entry's combined description text is capped at 1,536 characters in the listing (Anthropic, Claude Code Docs, 2025). If your configuration uses those defaults, truncation starts correspondingly earlier.
Check total description character count: add up the character counts of all your skill descriptions. If the total exceeds 12,000 characters, you're in the risk zone. At 15,000 characters, truncation is guaranteed for the longer descriptions.
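That count can be automated. A sketch, assuming skills live under `.claude/skills/<name>/SKILL.md` with a single-line `description:` field in their YAML frontmatter; the regex is a rough shortcut, and multi-line descriptions would need a real YAML parser.

```python
import re
from pathlib import Path

def description_lengths(skills_dir=".claude/skills"):
    """Rough audit: pull each SKILL.md description out of its
    frontmatter with a regex and report character counts per skill."""
    lengths = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        text = skill_md.read_text(encoding="utf-8")
        m = re.search(r'^description:\s*["\']?(.+?)["\']?\s*$',
                      text, re.MULTILINE)
        if m:
            lengths[skill_md.parent.name] = len(m.group(1))
    return lengths

lengths = description_lengths()
total = sum(lengths.values())
for name, n in sorted(lengths.items(), key=lambda kv: -kv[1]):
    print(f"{n:>6}  {name}")
print(f"total: {total} characters"
      + ("  (risk zone)" if total > 12_000 else ""))
```

Sorting longest-first matters because the longest descriptions are the first ones truncated, so they are the first candidates for trimming.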
Three fixes:
- Trim descriptions. The 1,024-character limit per description is a ceiling, not a target (Anthropic, Agent Skills Docs, 2025). A focused 150-character description often works better than a 900-character one. Cut everything from descriptions that isn't a trigger condition or a scope boundary.
- Split skill libraries by context. Project-level skills load only in that project. User-level skills load in every session. Move skills that are only relevant to one project out of user-level storage and into the project's .claude/skills/ directory.
- Retire unused skills. Skills that you added during exploration but no longer use for real work occupy budget without providing capability. Remove them from the active skill directory.
Frequently asked questions
The /skills command is the starting point for all skill conflict diagnosis in Claude Code: it shows active descriptions exactly as the classifier reads them, making trigger overlaps and scope gaps visible in one command. The questions below address specific diagnostic and resolution scenarios.
How do I find out if my skills are competing with each other?
Run /skills to see all active descriptions. Then run a few representative prompts and ask Claude which skill it invoked: "Which skill did you use for that?" If the answer is wrong, compare the description of the skill that fired against the one that should have fired. Look for trigger language overlap.
Can I have two skills with the same trigger condition if they do different things?
No. Claude can only invoke one skill per prompt. If two descriptions both match the same trigger, Claude picks one. Not both. If you genuinely need two workflows triggered by the same user request, use a single skill that presents options or delegates to sub-steps.
I renamed the new skill but the old skill is still misfiring. What now?
The name is one factor; the description is the primary one. Read both descriptions looking for overlapping imperative phrases. "Use when..." statements with the same completion are the most common conflict source. Rewrite one to add explicit scope exclusions covering what the skill does not handle.
Does installing a skill from the community cause this problem?
Yes. Community skills often have broad descriptions designed to trigger on many inputs. A community skill with "Use for any writing task" will conflict with every specific writing skill you have. Before installing community skills, read their descriptions and check for overlap with your existing library.
My project has 40+ skills and many are conflicting. How do I untangle this?
Start with an audit: list all skill names and descriptions in a document. Group them by topic. Within each group, identify which skills have overlapping trigger language. Rewrite or retire down to one skill per distinct use case. Then test each remaining skill in isolation before re-adding others.
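The overlap step of that audit can be roughed out in code. A minimal sketch: Jaccard word overlap is a crude stand-in for how the classifier compares triggers, and the skill names, descriptions, and threshold here are illustrative.

```python
import itertools
import re

def flag_overlaps(descriptions: dict[str, str], threshold=0.5):
    """Flag skill pairs whose descriptions share a high fraction of
    words (Jaccard similarity). A crude proxy for trigger overlap,
    meant to prioritize which pairs to read side by side."""
    def words(s):
        return set(re.findall(r"[a-z]+", s.lower()))
    sets = {name: words(d) for name, d in descriptions.items()}
    flagged = []
    for (a, wa), (b, wb) in itertools.combinations(sets.items(), 2):
        sim = len(wa & wb) / len(wa | wb) if wa | wb else 0.0
        if sim >= threshold:
            flagged.append((a, b, round(sim, 2)))
    return sorted(flagged, key=lambda t: -t[2])

skills = {
    "content-writer": "Use this skill for content creation and writing tasks",
    "writing-assistant": "Use this skill for writing tasks and content",
    "sql-helper": "Use this skill to write and optimize SQL queries",
}
for a, b, sim in flag_overlaps(skills):
    print(f"{sim:.2f}  {a} <-> {b}")  # 0.89  content-writer <-> writing-assistant
```

Flagged pairs are candidates for rewriting with scope exclusions, renaming, or consolidation; unflagged pairs usually just need a quick read to confirm their triggers are distinct.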
Last updated: 2026-04-18