---
title: "What Are Negative Triggers and Why Should I Include Them in the Description?"
description: "Negative triggers are 'Does NOT apply to' clauses that stop false positives in Claude Code skill descriptions. Here's when to add them and how to write them."
pubDate: "2026-04-14"
category: skills
tags: ["claude-code-skills", "negative-triggers", "skill-description", "false-positives"]
cluster: 6
cluster_name: "The Description Field"
difficulty: intermediate
source_question: "What are negative triggers and why should I include them in the description?"
source_ref: "6.Intermediate.3"
word_count: 1480
status: draft
reviewed: false
schema_types: ["Article", "FAQPage"]
---
TL;DR: Negative triggers are "Does NOT apply to" clauses at the end of a SKILL.md description. They tell Claude which near-miss requests to skip. Without them, broad skills fire on adjacent requests they weren't built to handle. Adding two or three explicit exclusions prevents the most common false positive categories and costs fewer than 100 characters of description budget.
At AEM, we build Claude Code skills as a service — and this problem surfaces in nearly every commission we review.
A Claude Code skill without negative triggers is a trigger condition with no off-switch. The description says what the skill handles. It says nothing about what it doesn't handle. Claude fills the gap using its best judgment, and its best judgment is optimistic. If the request is close enough to matching, the skill fires.
This is the false positive problem. A content writing skill without exclusions fires on "summarize this article." A code review skill without exclusions fires on "explain this code." A documentation skill without exclusions fires on "what does this function do?" All near-miss requests. All in the same category. All wrong outputs.
Negative triggers are the fix. One sentence, well under a hundred characters. They tell Claude exactly where the skill's scope ends. In commission reviews, every broad content or communication skill built without a "Does NOT apply to" clause produced false positives on near-miss requests — summarization, editing, and analysis requests all reaching skills built for original writing (AEM commission reviews, Q1 2026).
What exactly is a negative trigger in a skill description?
A negative trigger is a "Does NOT apply to" clause added to the end of the description field that names specific request types, output types, or use cases the skill does not handle, so Claude can read those exclusions during routing and reject near-miss requests before activating the skill on the wrong input.
The format is fixed:
description: "Use this skill when [trigger conditions]. Does NOT apply to [exclusion 1], [exclusion 2], or [exclusion 3]."
A content skill with negative triggers:
description: "Use this skill when the user asks to write, draft, or create a blog post, article, or email. Does NOT apply to summarizing existing content, editing for grammar, or generating code."
The exclusions name three specific near-miss categories: summarization, grammar editing, and code generation. All three are adjacent to "writing" in the general sense. All three are requests the content writing skill would get wrong because it wasn't built for them.
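In SKILL.md, the description lives in the file's YAML frontmatter. A sketch of how the clause above might sit in a full file (the skill name here is hypothetical, and only the `name` and `description` fields are shown):

```yaml
---
name: content-writer  # hypothetical skill name
description: "Use this skill when the user asks to write, draft, or create a blog post, article, or email. Does NOT apply to summarizing existing content, editing for grammar, or generating code."
---
```

Everything below the closing `---` is the skill body; only the frontmatter above it is read during routing.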
"The single biggest predictor of whether an agent works reliably is whether the instructions are written as a closed spec, not an open suggestion." — Boris Cherny, Anthropic (2024)
A description without negative triggers is an open suggestion about scope. A description with them is a closed spec.
Why do skills fire on requests they shouldn't handle?
Skills fire on requests they shouldn't handle because Claude's skill classification evaluates semantic alignment, not exact-match intent, meaning a broad skill description creates an overlap zone where adjacent request types score above the activation threshold and the skill fires even when the request falls outside what the skill was built to do. When a request arrives, Claude scores each loaded skill's description against the request and activates the best match above a relevance threshold.
The problem: broad categories have overlapping boundaries. "Writing content" is semantically adjacent to "summarizing content," "editing content," and "analyzing content." Without exclusion clauses, Claude uses its general knowledge to decide whether the request is close enough to the trigger conditions. That decision is frequently optimistic for broad skills.
Three near-miss categories account for the majority of false positives in content and communication skills:
- Summarization requests. "Can you summarize this article?" fires on writing skills because summarization involves text.
- Editing and revision requests. "Fix the grammar in this paragraph" fires on writing skills because editing involves text modification.
- Analysis requests. "What's the tone of this email?" fires on writing skills because it involves evaluating written content.
In our commissions, these three categories generate nearly all false positives for content skills — we have not reviewed a broad writing skill without exclusion clauses that didn't fire on at least one of these three request types (AEM commission reviews, Q1 2026). Adding one exclusion clause that names all three cuts false activations to near zero. In testing, skills with a "Does NOT apply to" clause covering summarization, editing, and analysis produced no false positives against those three request types in subsequent activation trials (AEM activation testing, Q1 2026).
When do you need negative triggers?
You need negative triggers when your skill covers a broad category with adjacent request types, when two or more related skills are loaded in the same session and risk competing for the same requests, or when you have already observed the skill firing on requests it clearly shouldn't handle in production.
Three signals:
- The skill covers a broad category. Content, code, communication, and analysis are broad categories with many adjacent request types. Any skill in these categories needs negative triggers. A narrow skill ("Use when the user asks to convert CSV to JSON") has specific enough trigger conditions that near-miss requests are rare.
- You have two or more skills in the same session that cover related areas. A content writing skill and a content editing skill need to exclude each other's use case explicitly. Without exclusions, Claude picks between them inconsistently — in our observation, related skills without mutual exclusions produce routing conflicts on requests that live in the overlap zone between their trigger conditions, and the conflict is not deterministic: the same request can route to different skills in different sessions (AEM commission reviews, Q1 2026).
- The skill has fired on requests it clearly shouldn't handle in production. If you've observed a false positive, that request type belongs in the exclusion list.
You don't need negative triggers for every skill. A skill with a very specific trigger ("Use when the user asks to create a diagram in Mermaid syntax") is specific enough that false positives are unlikely. Adding exclusions to a skill that doesn't have false positive risk wastes character budget and adds noise.
How do you identify what to exclude?
You identify what to exclude by running category analysis first, then near-miss testing: list the common adjacent tasks in your skill's category that the skill doesn't handle, then write five representative near-miss requests and check how many would incorrectly activate the skill without an exclusion clause in place.
Two methods:
- Category analysis. Identify the broad category your skill covers. Then list the four or five most common things people do with content in that category that your skill doesn't handle. For a writing skill: summarize, edit, analyze, translate, format. Add the top two or three to the exclusion clause.
- Near-miss testing. Write five requests that live in the same category as the skill but are clearly not intended for it. Test each one against the description without negative triggers. If Claude would fire the skill on more than one of the five, add exclusions for the request types that match.
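The near-miss pass is easy to keep as a small script. A minimal sketch, assuming you record each trial by hand: the activation values below are placeholders rather than measured data, and the one-in-five threshold is the rule of thumb from the text.

```python
# Toy near-miss log for a broad writing skill. Each entry pairs a request
# with whether the skill activated in a manual trial WITHOUT exclusions.
# These True/False values are placeholders, not measured results.
NEAR_MISSES = [
    ("Summarize this article for me", True),
    ("Fix the grammar in this paragraph", True),
    ("What's the tone of this email?", True),
    ("Translate this post into French", False),
    ("Reformat this draft as bullet points", False),
]

def false_positive_rate(trials):
    """Fraction of near-miss requests that incorrectly activated the skill."""
    fired = sum(1 for _, activated in trials if activated)
    return fired / len(trials)

rate = false_positive_rate(NEAR_MISSES)

# Rule of thumb from the text: more than 1 in 5 near-misses firing means
# the description needs a "Does NOT apply to" clause.
needs_exclusions = rate > 1 / 5
```

With the placeholder values, three of five near-misses fire, so the script flags the description for an exclusion clause covering summarization, editing, and analysis.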
The goal isn't an exhaustive exclusion list. Two or three exclusions covering the most common near-miss categories are enough. In AEM production builds, descriptions in the 200-400 character range consistently include one to three exclusion clauses, and they hit the best balance between trigger specificity and false positive prevention without padding (AEM production builds, Q1 2026). A 12-item exclusion list suggests the skill's positive trigger conditions aren't specific enough.
How do you write a negative trigger clause?
Write a "Does NOT apply to" sentence at the end of the description field, before the character limit, using short noun phrases for each exclusion rather than full sentences, because noun phrases pack two or three exclusion categories into under 100 characters while full-sentence equivalents consume more than 200 characters for the same semantic content.
The format is a "Does NOT apply to" sentence at the end of the description, before the character limit:
Does NOT apply to [noun phrase 1], [noun phrase 2], or [noun phrase 3].
Use noun phrases, not full sentences. Each exclusion is an output type or request type, not a description of a scenario:
# Correct: noun phrases
Does NOT apply to summarizing existing content, editing for grammar only, or generating code.
# Incorrect: full sentences (too long, uses character budget inefficiently)
Does NOT apply to cases where the user wants to summarize a document they already have, or when the user only wants grammar corrections, or when the user is asking for help with code generation tasks.
The noun phrase version is 93 characters. The full sentence version is 200 characters for the same semantic content. Use noun phrases. Claude Code's SKILL.md description field has a 1,024-character hard limit — the YAML value is silently truncated at runtime with no error, which means exclusions placed in the second half of a long description may never reach Claude (AEM engineering, confirmed in production).
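The character arithmetic is easy to verify mechanically. A minimal sketch, assuming the 1,024-character hard limit cited in the text; the first-half placement check is a heuristic added here, not a documented cutoff:

```python
NOUN_PHRASE = (
    "Does NOT apply to summarizing existing content, "
    "editing for grammar only, or generating code."
)
FULL_SENTENCE = (
    "Does NOT apply to cases where the user wants to summarize a document "
    "they already have, or when the user only wants grammar corrections, "
    "or when the user is asking for help with code generation tasks."
)

LIMIT = 1024  # SKILL.md description hard limit cited above

def exclusion_well_placed(description: str, limit: int = LIMIT) -> bool:
    """True if the description fits the limit and the exclusion clause
    starts in the first half, where truncation cannot silently drop it.
    (The first-half threshold is a heuristic, not a documented rule.)"""
    pos = description.find("Does NOT apply to")
    return len(description) <= limit and 0 <= pos <= limit // 2
```

Measuring both strings confirms the gap: the noun-phrase clause is 93 characters, the full-sentence version 200, for the same three exclusions.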
For skills with multiple competing alternatives, be explicit about which other skill handles the excluded cases:
description: "Use this skill when the user asks to write, draft, or create original blog posts or articles. Does NOT apply to editing existing posts (use the editing skill), summarizing content, or generating code."
The parenthetical "(use the editing skill)" is optional but helpful when two related skills are both loaded. It tells Claude which alternative to prefer for excluded requests.
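For the two-skill case, the exclusions can be made symmetric so each description routes its excluded cases to the other. A hypothetical pair (the skill names and the editing skill's wording are illustrative):

```yaml
# writing skill's SKILL.md frontmatter
name: content-writer
description: "Use this skill when the user asks to write, draft, or create original blog posts or articles. Does NOT apply to editing existing posts (use the editing skill), summarizing content, or generating code."

# editing skill's SKILL.md frontmatter
name: content-editor
description: "Use this skill when the user asks to edit, revise, or proofread an existing draft. Does NOT apply to writing new content from scratch (use the writing skill) or summarizing content."
```

Each skill names the other as the preferred route, so requests in the overlap zone have a deterministic destination instead of leaving Claude to arbitrate.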
What's the cost of not including negative triggers?
Skipping negative triggers creates two costs: a direct cost where the skill produces wrong output on near-miss requests, and a hidden cost where users lose confidence and stop invoking the skill entirely, even for the requests it handles correctly, because a skill that fires unpredictably reads as broken regardless of its actual accuracy rate.
Two costs, and one of them is hidden:
Direct cost: The skill fires on wrong request types and produces incorrect output. The user gets a blog post draft when they asked for a summary. The skill functionally fails for those requests.
Hidden cost: The user loses confidence in the skill. A skill that fires unpredictably — even if it fires correctly 70% of the time — reads as unreliable. Users stop invoking it. The skill's invocation rate drops to near zero even though it works correctly most of the time.
The hidden cost is the bigger one. It's why false positive rate matters as much as true positive rate in skill design. In our commission work, clients with skills that produced false positives reported the skill as "not working" even when the true positive rate was above 70% — the false positives anchored their confidence, not the correct activations, and they stopped relying on the skill for the requests it handled well (AEM commission reviews, Q1 2026). A skill that users don't trust is a prompt in a trenchcoat.
For a complete framework on description design including negative triggers, see The SKILL.md Description Field: The One Line That Makes or Breaks Your Skill.
For the related decision on trigger phrasing, see How do I write trigger phrases that make my skill activate reliably?.
FAQ
How many exclusions should I include in the "Does NOT apply to" clause? Two to three for most skills. One exclusion often isn't enough to cover the near-miss category fully. Four or more exclusions suggest the positive trigger conditions aren't specific enough. If you need five exclusions to prevent false positives, the skill probably covers too much.
Do I need negative triggers for a narrow, specific skill? Not usually. A skill with a very specific trigger ("Use when the user asks to convert a CSV to JSON") has low false positive risk because the intent is precise. Add negative triggers only when there are adjacent request types in the same category.
What if the excluded request types are handled by another loaded skill? Name the other skill in the exclusion clause: "Does NOT apply to editing existing content (use the editing skill for that)." This helps Claude route correctly when both skills are loaded.
Can negative triggers interact with each other across skills? Yes. If Skill A excludes "editing existing content" and Skill B excludes "creating new content," Claude reads both descriptions and routes correctly between them for most requests. The exclusions help differentiate the two skills' scopes.
What if my exclusion clause is too long for the character limit? Use shorter noun phrases. "Summarizing existing documents, editing for grammar, generating code" covers three exclusion categories in 68 characters. Full sentence exclusions are a sign of over-specification. Trim to noun phrases.
Is a "Does NOT apply to" clause the only format for negative triggers? It's the most consistent format. Some developers use parenthetical notes like "(not for code generation)" at the end of the trigger clause. Both work. The "Does NOT apply to" format is more explicit and easier for Claude to parse as an exclusion rather than a qualification.
Last updated: 2026-04-14