Skill interference in Claude Code comes in three forms. Each has a different cause and a different fix, and mixing them up is what turns a half-hour debugging session into a three-hour spiral. At AEM, we diagnose these failures across every commission that involves a skill library of more than five skills.
TL;DR: Three types of interference account for almost all multi-skill conflicts. Activation collision: two skills compete for the same prompts and Claude picks one unpredictably. Character budget displacement: the total description length exceeds 15,000 characters and later skills get truncated. Instruction contradiction: two skills give conflicting rules for the same situation. Identify the type first, then apply the specific fix.
What are the three types of skill interference?
Three types account for almost every multi-skill failure: activation collision, where overlapping trigger vocabulary causes Claude to pick the wrong skill; character budget displacement, where the combined description length exceeds the 15,000-character system prompt budget and later skills get truncated; and instruction contradiction, where two skills apply conflicting rules to the same task.
- Activation collision: Two or more skill descriptions overlap in their trigger vocabulary. When a prompt matches both, Claude picks one based on factors you don't control. The winning skill is not always the one you want. Result: your skill activates sometimes, not others, even on identical prompts.
- Character budget displacement: Claude Code allocates roughly 100 tokens per skill description at startup (Anthropic documentation, 2025). The total system prompt budget for all skill descriptions is approximately 15,000 characters. When your combined skill descriptions exceed this limit, the descriptions that load last (alphabetically, by default) get truncated or excluded. A skill that appears in /skills with a truncated description has lost its trigger phrases.
- Instruction contradiction: Two skill bodies contain rules that directly conflict. When both skills load for the same task, or when context from a prior skill invocation is still active, Claude encounters contradictory instructions and resolves the conflict unpredictably. Result: inconsistent output quality that's hard to trace to a specific skill.
"Models placed in the middle of long contexts lose track of instructions at a rate that makes mid-context policy placement unreliable for production systems." — Nelson Liu et al., Stanford NLP Group, "Lost in the Middle" (2023, ArXiv 2307.03172)
Multiple skills in the same session multiply this effect. More instruction layers mean more opportunities for context conflicts.
How do I identify which type of interference I have?
Each interference type has a distinct symptom. Activation collision shows as the wrong skill running on a prompt that should trigger a different one. Budget displacement shows as truncated skill descriptions in /skills. Instruction contradiction shows as correct output on some inputs and wrong output on others, with no obvious pattern. Run these diagnostic checks in order:
- For activation collision: Ask Claude: "List all skills you have loaded." Then ask the same prompt that should trigger your skill in a fresh session. If the wrong skill activates, read both skill descriptions and identify the shared vocabulary. The shared vocabulary is the collision point.
- For character budget displacement: Run /skills and read the full description for each skill. If any description is shorter than you wrote it, it has been truncated. Count the total characters across all skill descriptions. Above 14,000 characters, you're at risk. Above 15,000, displacement is happening.
- For instruction contradiction: Look for the pattern in output: the skill produces correct output on some inputs and subtly wrong output on others, with no obvious trigger for the difference. If this happens after adding a second skill to the project, compare the rules sections of both SKILL.md files. Look for rules that could apply to the same situation with different results.
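The character count in the budget check can be scripted rather than done by hand. A minimal sketch, assuming the common layout of `.claude/skills/<name>/SKILL.md` with a `description:` field in YAML frontmatter (paths, thresholds, and field names are assumptions to adjust for your setup):

```python
# Sketch: estimate total skill-description budget usage.
# Assumes skills live in .claude/skills/<name>/SKILL.md with YAML
# frontmatter containing a `description:` field.
import re
from pathlib import Path

FRONTMATTER = re.compile(r"\A---\n(.*?)\n---", re.DOTALL)
DESCRIPTION = re.compile(r"^description:\s*(.+)$", re.MULTILINE)

def extract_description(skill_md: str) -> str:
    """Pull the description field out of SKILL.md frontmatter."""
    fm = FRONTMATTER.search(skill_md)
    if not fm:
        return ""
    desc = DESCRIPTION.search(fm.group(1))
    return desc.group(1).strip() if desc else ""

def budget_report(descriptions: dict[str, str], limit: int = 15_000) -> dict:
    """Total description characters vs. the ~15k budget."""
    total = sum(len(d) for d in descriptions.values())
    return {
        "total_chars": total,
        "at_risk": total > 14_000,    # the warning threshold above
        "displacing": total > limit,  # truncation likely
        # the 5 longest descriptions are the best compression targets
        "longest": sorted(descriptions, key=lambda k: -len(descriptions[k]))[:5],
    }

if __name__ == "__main__":
    skills = {
        p.parent.name: extract_description(p.read_text())
        for p in Path(".claude/skills").glob("*/SKILL.md")
    }
    print(budget_report(skills))
```

Run it from the project root; it reports the total, both thresholds, and the best compression targets in one pass.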
How do I fix an activation collision?
The fix is mutual exclusivity in the descriptions: each conflicting skill needs negative triggers that explicitly exclude the other's domain. This is a one-time edit to both skill descriptions that tells Claude exactly which requests each skill should reject, eliminating the overlap that causes unpredictable activation.
Process:
- Read both descriptions and identify the overlapping trigger vocabulary.
- Decide which skill owns each domain.
- Add a negative trigger to the skill that should NOT activate: "Do NOT invoke for [the other skill's domain]."
- If both skills legitimately apply to the same prompt, consider whether they should be merged into one skill with conditional logic, or whether one should be invoked explicitly via /skill-name instead of auto-triggered.
Example: A "review" skill for content and a "check" skill for code both activate on "review this." Fix: content review skill adds "Do NOT invoke for code or technical files." Code check skill adds "Do NOT invoke for written prose, blog posts, or non-code text."
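The first step, finding the collision point, can be mechanized with a crude word-overlap check between the two descriptions. A sketch using hypothetical descriptions modeled on the review/check example (the stopword list and descriptions are assumptions, not part of any Claude Code API):

```python
# Sketch: surface shared trigger vocabulary between two skill
# descriptions. Crude word overlap, on the assumption that the
# collision shows up as a handful of literal shared words.
import re

STOPWORDS = {"a", "an", "the", "for", "and", "or", "to", "of", "in",
             "this", "that", "do", "not", "invoke", "use", "when"}

def shared_vocabulary(desc_a: str, desc_b: str) -> set[str]:
    """Words appearing in both descriptions, minus filler words."""
    words_a = set(re.findall(r"[a-z]+", desc_a.lower()))
    words_b = set(re.findall(r"[a-z]+", desc_b.lower()))
    return (words_a & words_b) - STOPWORDS

review = "Review written content for tone, clarity, and structure."
check = "Check code for bugs. Review diffs and flag structure issues."
print(shared_vocabulary(review, check))  # {'review', 'structure'}
```

Whatever this returns is the candidate collision vocabulary; those are the words to disambiguate with negative triggers.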
In AEM commissions, activation collision is the most common interference type, present in roughly 60% of skill conflict reports we diagnose. In nearly every case, the collision traces to 3-5 shared words in the trigger vocabulary that neither skill author noticed (AEM commission analysis, 2025). After adding negative triggers, test both skills against a shared test prompt set. See How Do I Debug a Skill That Triggers on the Wrong Prompts for the test methodology.
How do I fix character budget displacement?
Fix character budget displacement by reducing total description length below the 15,000-character system prompt limit. Libraries under 20 skills should compress their longest descriptions. Libraries at or above 30 skills need a curation audit first, because unused skills are the fastest path to reclaiming budget. Two approaches, by library size:
- If you're under 20 skills: Shorten the descriptions of your longest skills. Count each description's characters. The 5 longest descriptions are the best targets for compression. Remove redundant phrasing and collapse multi-sentence descriptions into single compound sentences. Getting every skill description under 200 characters buys significant headroom.
- If you're at or above the 30-skill curation threshold: Audit every skill and ask: Is this still used? Is this duplicating something another skill already does? In our work with teams at the 30+ skill mark, 20-30% of skills are unused or superseded, and removing them resolves budget problems without any description rewriting (AEM skill library audits, 2025).
The curation threshold is real. Above 30 skills, the cognitive cost of maintaining a library outpaces the productivity gain of adding more skills. More skills do not mean more capability — past 30, they mean more interference.
For more on how context overload hurts skill performance, see What Is Context Bloat and How Does It Hurt Skill Performance.
How do I fix instruction contradiction?
Fix instruction contradiction by reading both skill bodies in the same session and finding the specific rule that conflicts. The contradiction is almost always a universal rule in one skill colliding with a universal rule in another. Once identified, the fix is to make the rule context-specific so it no longer applies to both skills at once. The most common contradiction categories:
- Rules about output format: Skill A says "always output JSON," Skill B says "always output plain text"
- Rules about tool use: Skill A says "always read the file before responding," Skill B says "never read files without asking"
- Rules about verbosity: Skill A says "be concise," Skill B says "always explain your reasoning in detail"
Output format contradictions are the most damaging category. When a model receives conflicting format instructions, output consistency drops from roughly 95% to below 60% (Addy Osmani, Google Chrome Engineering, 2024). Once you find the contradiction, fix it by:
- Making the rules context-specific rather than universal. Instead of "always output JSON," write "output JSON when the requester is a developer tool or API, plain text when responding to a human."
- Removing the conflicting rule from the skill where it's less important.
- Separating the skills so they never activate in the same session (use explicit invocation for one of them instead of auto-trigger).
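Universal-rule collisions can also be caught heuristically before they reach production: pull every "always …"/"never …" clause from each skill body and flag cross-skill pairs that mention the same content word. A rough sketch (the rule texts and filler list are hypothetical, and every hit still needs a manual read):

```python
# Sketch: flag potential instruction contradictions across skills.
# Heuristic only: pairs of universal rules ("always ..."/"never ...")
# from different skills that share a content word are worth reviewing.
import re
from itertools import combinations

RULE = re.compile(r"\b(always|never)\b[^.\n]*", re.IGNORECASE)
FILLER = {"always", "never", "the", "a", "an", "to", "your", "be", "for"}

def universal_rules(body: str) -> list[str]:
    """Extract 'always/never ...' clauses from a SKILL.md body."""
    return [m.group(0).strip().lower() for m in RULE.finditer(body)]

def possible_contradictions(skills: dict[str, str]) -> list[tuple]:
    """Cross-skill rule pairs that share a non-filler word."""
    hits = []
    for (name_a, body_a), (name_b, body_b) in combinations(skills.items(), 2):
        for rule_a in universal_rules(body_a):
            for rule_b in universal_rules(body_b):
                if (set(rule_a.split()) & set(rule_b.split())) - FILLER:
                    hits.append((name_a, rule_a, name_b, rule_b))
    return hits
```

For example, "Always output JSON" in one skill and "never output JSON; always write plain text" in another share the words "output" and "json", so the pair gets flagged for review.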
Does skill order matter?
Yes, in two ways. First, descriptions load in alphabetical order by skill folder name. Skills whose names start with letters later in the alphabet are more likely to be truncated when the budget is exceeded. If a critical skill is being displaced, rename its folder to start with a- or 0- to load it first.
Second, within a single session, the instructions from skills loaded earlier are further from the end of the context than instructions loaded later. Per the "Lost in the Middle" research, accuracy drops over 20 percentage points when relevant instructions are placed mid-context versus at the start or end (Liu et al., Stanford NLP Group, 2023). For skills where strict rule adherence matters, this means the first and last skills loaded have an advantage.
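Alphabetical load order plus a flat character budget means you can predict which skills fall past the cutoff before renaming anything. A simplified sketch of that model (treating truncation as all-or-nothing is an assumption; in practice a skill may load partially):

```python
# Sketch: which skills survive the description budget? Assumes the
# load order and cutoff described above: alphabetical by folder name,
# ~15,000 characters total, truncation modeled as all-or-nothing.
def load_order_cutoff(descriptions: dict[str, str], budget: int = 15_000):
    """Walk skills alphabetically; report which load fully, which don't."""
    loaded, truncated, remaining = [], [], budget
    for name in sorted(descriptions):
        length = len(descriptions[name])
        if length <= remaining:
            loaded.append(name)
            remaining -= length
        else:
            truncated.append(name)  # truncated or dropped entirely
    return loaded, truncated

descs = {"zeta-deploy": "z" * 4_000, "alpha-review": "a" * 8_000,
         "mid-check": "m" * 6_000}
loaded, cut = load_order_cutoff(descs)
print(loaded, cut)  # alpha-review and mid-check fit; zeta-deploy is cut
```

This also shows why the `a-` or `0-` rename works: moving a folder name earlier in sort order moves its description ahead of the cutoff.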
What is a safe maximum skill library size?
There is no universal maximum, but the patterns we see across commissions are consistent: libraries under 15 skills rarely experience interference. Libraries of 15-30 skills sometimes experience activation collisions. Libraries above 30 skills almost always require active curation and description management to stay healthy. In one published analysis of 858 Claude Code sessions with 42 skills loaded, 19 of those skills had 2 or fewer invocations across the entire dataset, confirming that large libraries routinely carry dead weight (Gupta, 2026).
The practical ceiling is determined by your total description character count, not your skill count. Claude Code's base system prompt and tool definitions consume approximately 14.8k tokens before any skill descriptions load, leaving the remaining budget strictly for skill content (eesel.ai Claude Code context analysis, 2025). Keep total skill descriptions under 12,000 characters and you have reliable headroom. The discipline of writing concise, specific descriptions pays double dividends: better activation precision AND more budget capacity.
FAQ
Most skill interference traces to one of three root causes: overlapping trigger vocabulary, combined description length above 15,000 characters, or conflicting universal rules across skill bodies. The questions below address the specific diagnostic and repair scenarios that come up most often in AEM commissions involving skill libraries above 10 skills.
Q: Two of my skills keep conflicting and I can't figure out which description to change.
Read both descriptions aloud and ask: "Which skill should handle this specific type of request?" The answer tells you which skill owns the territory. The other skill gets the negative trigger that excludes that territory.
Q: My skill library worked fine until I hit 25 skills, then things got unpredictable. Is this the budget problem?
Run /skills and look for truncated descriptions. If 4-5 skills have shorter descriptions than you wrote, budget displacement is happening. Count total description characters. If you're above 12,000, start shortening descriptions. If you're above 15,000, also consider removing unused skills.
Q: I removed a skill and now a different skill started activating incorrectly. Why?
When you remove a skill, its character budget gets reallocated. Skills that were previously truncated now load fully, including their trigger phrases. If one of those newly restored trigger phrases conflicts with another skill, you've surfaced a collision that was hidden by the truncation. Diagnose it as an activation collision and add negative triggers.
Q: Can two skills in different projects interfere with each other?
If one skill is installed at user level (~/.claude/skills/) and another at project level (.claude/skills/), both load in the same session. User-level and project-level skills share the same budget and the same activation space. They interfere in the same ways as two project-level skills.
Q: I have a skill that's supposed to run every time, but another skill keeps preempting it. How do I make my skill take priority?
Skill priority isn't configurable directly. The workaround: make the high-priority skill's description more specific and add explicit trigger phrases that the other skill doesn't match. The more specific description wins more confidently. If the skill must always run for certain prompts, consider requiring explicit /skill-name invocation rather than relying on auto-trigger.
Q: How do I detect instruction contradictions before they cause problems in production?
After adding a new skill to a library, test the skill against the prompts that your other active skills handle. If the new skill activates on those prompts, read its body for rules that conflict with the existing skills. Contradictions are easiest to catch at install time, before they've caused observable problems in real work.
Last updated: 2026-04-21