A skill that says "before v2, do X; after v2, do Y" has two problems: Claude doesn't know which version is current, and whoever maintains the skill in 8 months won't know what "v2" meant. The obsolete branch goes stale, yet the conditional still misfires on ambiguous inputs. The skill degrades quietly.
TL;DR: Time-sensitive conditionals are version-gated instructions that become permanently stale. Once the old state no longer exists, the conditional branch becomes dead code, but Claude still reads it. The fix is to delete old branches when you ship new behavior and write structural conditionals based on input properties rather than historical versions.
AEM builds Claude Code skills to production standards, where SKILL.md is the skill's sole instruction source and each skill targets a single, tightly-scoped behavior. Time-sensitive conditionals violate both of those properties by encoding deployment history into the instruction layer. As Marc Bara, AI product consultant, put it in 2024: developers adopt AI tools because they reduce friction on tasks they repeat every day. A skill with dead conditional branches does the opposite.
What is a time-sensitive conditional?
A time-sensitive conditional is an instruction whose validity depends on when the skill is being executed or what version of the workflow it supports. Unlike structural conditionals that read from input properties, time-sensitive ones encode deployment history directly into the skill instructions. The condition is not observable from the input itself. They appear in several forms:
- Version-gated: "Before v2 of the approval workflow, check for the legacy_format field first. After v2, use the new schema directly."
- Date-gated: "Until the API migration completes (estimated March 2026), use the v1 endpoint. After migration, switch to /api/v2/..."
- Phase-gated: "During the beta period, skip the quality check step. Once out of beta, always run it."
All three share the same structure: a condition that will eventually be permanently true on one side, leaving the other branch as dead code embedded in the skill's instructions.
A skill that says "before v2, do X" is a skill that remembers the past. Claude doesn't share that memory.
"The single biggest predictor of whether an agent works reliably is whether the instructions are written as a closed spec, not an open suggestion." — Boris Cherny, TypeScript compiler team, Anthropic (2024)
Why do these conditionals go stale?
When v2 ships, three things happen to a time-sensitive conditional. The branch that evaluated the old state becomes permanently unreachable. The context that gave those conditions meaning disappears from institutional memory. The conditional itself becomes a structural liability: Claude evaluates both paths on every invocation, even though only one path is now reachable.
- The "before" branch becomes permanently false. The skill is now always in the "after v2" state. But the conditional still occupies space in SKILL.md. Claude reads it. It evaluates the condition. On most inputs, it correctly applies the v2 path. On certain inputs, particularly edge cases where the language of the input is ambiguous, it's less certain about which branch applies.
- The historical context disappears. The person who wrote "before v2" knew exactly what they meant. The person who inherits the skill 8 months later does not. In our skill audits, we find time-sensitive conditionals in roughly 60% of skills that have been actively maintained for more than 6 months. In every case we've reviewed, the skill maintainer couldn't explain which branch was current without reading the git history. Technical debt of this kind compounds: McKinsey research from 2023 found that technical debt accounts for approximately 40% of IT balance sheets, a burden that applies directly to unmaintained skill instructions. The 2024 Stack Overflow Developer Survey found that 62% of developers cite technical debt as their greatest source of work frustration, which tracks directly with stale conditional logic that nobody owns.
- The conditional becomes a source of inconsistency. Consistency depends on unambiguous instructions. A conditional with a dead branch introduces structural ambiguity: the model evaluates both paths on every invocation, even when the condition is deterministically one-sided.
"When you give a model an explicit output format with examples, consistency goes from ~60% to over 95% in our benchmarks." — Addy Osmani, Engineering Director, Google Chrome (2024)
"The failure mode isn't that the model is bad at the task — it's that the task wasn't specified tightly enough. Almost every production failure traces back to an ambiguous instruction." — Simon Willison, creator of Datasette and llm CLI (2024)
How does Claude handle dead conditional branches?
Claude reads instructions sequentially and evaluates conditional logic relative to the context it receives. When a skill says "if legacy_format is present, use old schema; otherwise use new schema," Claude evaluates the input for the presence of legacy_format. That's a structural conditional — it works correctly.
When a skill says "if we're still on v1, use old schema; if v2, use new schema," Claude has no way to determine the current version from context alone. It falls back to probabilistic interpretation. Which branch looks more likely given the input? The result is correct more often than it's wrong, but it fails in exactly the edge cases where precision matters most.
The deeper problem: dead branches don't fail loudly. They produce subtly wrong outputs, outputs that look plausible but apply behavior from a prior state of the world. You won't catch these without test cases that explicitly verify the "new" behavior, run against inputs where the "old" behavior was also plausible.
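A minimal sketch of such a test, assuming a hypothetical `process` function that represents the skill's post-v2 behavior; the inputs are chosen so that the retired v1 behavior was also plausible:

```python
def process(payload: dict) -> dict:
    """Hypothetical post-v2 behavior: always apply the new schema."""
    return {"schema": "v2", "data": payload.get("data")}

# Inputs where the *old* (v1) behavior was also plausible:
ambiguous_inputs = [
    {"data": [1, 2, 3]},                  # no version hints at all
    {"data": [], "legacy_format": True},  # carries a stale v1 marker
]

for payload in ambiguous_inputs:
    result = process(payload)
    # The test asserts the new behavior explicitly, rather than
    # accepting whichever branch "looks right" for the input.
    assert result["schema"] == "v2", f"stale branch fired for {payload}"
```

A test like this fails loudly the moment a dead branch fires, which is exactly the failure the prose above says you won't otherwise catch.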
Research on instruction position in long contexts confirms the mechanism: models reliably lose track of information placed in the middle of long contexts, which makes mid-context policy placement unreliable for production systems (Nelson Liu et al., Stanford NLP Group, "Lost in the Middle," 2023, ArXiv 2307.03172). Dead branches extend the effective instruction length, pushing active instructions further into the contexts where recall degrades.
What are better alternatives?
Three patterns replace time-sensitive conditionals without the staleness problem. Each works by eliminating the temporal reference entirely: either by deleting the obsolete branch, rewriting the condition to read from observable input properties, or moving date-gated changes into the eval and versioning layer where they belong. None requires special tooling.
Specification ambiguity accounts for 41.77% of multi-agent LLM system failures according to the MAST taxonomy, validated across 1,600+ execution traces (Cemri et al., UC Berkeley Sky Lab, 2025, ArXiv 2503.13657). Time-sensitive conditionals are a direct instance of this pattern: the model cannot resolve the ambiguity from context, so it approximates.
- Delete the old branch. This is the right answer in 90% of cases. When v2 ships, the v1 behavior is gone. Delete the conditional entirely. The skill now contains only the current behavior. Git history preserves the old behavior for anyone who needs to read it.
# Before
If using v1 format, apply legacy_mapping before processing.
If using v2 format, process directly.
# After v2 ships
Process the input directly using the v2 schema.
One instruction. Zero ambiguity.
- Write structural conditionals. Replace version references with properties Claude can detect from the input itself. "If the input contains a legacy_format field, apply legacy_mapping before processing" is a conditional Claude can evaluate from context. It doesn't require knowing what version is current. It reads the actual input.
# Structural conditional (no version reference)
If the input contains a legacy_format field (identified by the presence of
the "lf_schema_v" key), apply legacy_mapping before processing.
Otherwise, process directly.
This conditional stays valid indefinitely. It describes input structure, not deployment history.
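The same structural check can be sketched in code; the `lf_schema_v` marker is taken from the example above, and `apply_legacy_mapping` is an illustrative stand-in for whatever the real mapping does:

```python
def apply_legacy_mapping(payload: dict) -> dict:
    # Illustrative stand-in for the real legacy mapping step.
    payload = dict(payload)
    payload["data"] = payload.pop("legacy_data", payload.get("data"))
    return payload

def normalize(payload: dict) -> dict:
    # Structural conditional: decidable from the input alone,
    # with no knowledge of which version is "current".
    if "lf_schema_v" in payload:
        payload = apply_legacy_mapping(payload)
    return payload
```

Note that nothing in `normalize` references a version or a date; it reads only a property of the payload it was handed.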
- Put date constraints in evals, not instructions. If a skill behavior genuinely needs to change on a specific date, that's a trigger for a skill update, not a conditional inside the skill. Write an eval test case that verifies the new behavior. When the date arrives, update the skill and verify against the test. The skill always reflects current expected behavior.
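One way to express the date in the eval layer rather than in SKILL.md, sketched with a hypothetical cutover date and a pytest-style test; in practice `skill_endpoint` would be parsed from the skill file:

```python
import datetime

CUTOVER = datetime.date(2026, 3, 1)  # hypothetical migration date

def current_endpoint() -> str:
    """What the skill is expected to use today. Lives in the eval, not SKILL.md."""
    return "/api/v2/" if datetime.date.today() >= CUTOVER else "/api/v1/"

def test_skill_uses_current_endpoint():
    # The skill states exactly one endpoint with no conditional. This eval
    # fails the day that statement stops matching reality, which is the
    # signal to update the skill and rerun the test.
    skill_endpoint = "/api/v2/"  # hypothetical: read from SKILL.md in practice
    assert skill_endpoint == current_endpoint()
```

The skill itself stays unconditional; the date lives in one place, and its expiry produces a loud test failure instead of a silent behavior flip.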
For the full list of structural problems that cause skill degradation over time, see What Are the Most Common Mistakes When Building Claude Code Skills?. For why CHANGELOG.md in the skill folder amplifies this pattern, see Why Shouldn't I Include README.md or CHANGELOG.md in My Skill Folder?. For how context bloat from accumulated conditional branches degrades overall skill performance, see What Is Context Bloat and How Does It Hurt Skill Performance?.
Frequently asked questions
What if the two behaviors really do need to coexist for a transition period?
Handle the transition period explicitly: "During the transition period (until [specific property] is removed from all inputs), check for [property] and apply legacy mapping. After transition, delete this step." Then put a calendar reminder to delete the step. The transition window should have a defined end state, not an indefinite "after migration completes."
Is it safe to leave an old conditional branch in place if it "probably won't fire"?
No. "Probably won't fire" is the definition of a fair-weather skill. Production skills must handle every input they receive correctly, including the edge cases where the old conditional's language is ambiguous. The cost of deleting the branch is zero. The cost of leaving it is a class of silent failures.
How do I handle genuinely different behavior for different clients or projects?
Use separate skills. A skill that says "for Client A, do X; for Client B, do Y" is trying to be two skills at once. Install skill-client-a and skill-client-b with their own SKILL.md files. Description specificity ensures Claude triggers the right one.
My skill has 5 time-sensitive conditionals. Do I need to fix all of them at once?
Audit which branches are still reachable from current inputs. For each conditional: if the "old" branch can still be reached by real inputs, rewrite it as a structural conditional. If it cannot be reached, delete the branch. Then test each change in a fresh session.
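A rough audit can be mechanized with a keyword scan; the phrase list below is a starting assumption drawn from the patterns in this article, not an exhaustive detector:

```python
import re

# Phrases that signal time-sensitive conditionals; extend for your own skills.
TIME_SENSITIVE = re.compile(
    r"\b(before v\d|after v\d|until .{0,40}\d{4}|during the beta|once out of beta)\b",
    re.IGNORECASE,
)

def audit(skill_md: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look time-sensitive."""
    return [
        (n, line.strip())
        for n, line in enumerate(skill_md.splitlines(), start=1)
        if TIME_SENSITIVE.search(line)
    ]

flagged = audit("Before v2, check legacy_format.\nProcess directly.")
# flagged → [(1, "Before v2, check legacy_format.")]
```

Each flagged line still needs the human judgment above: rewrite reachable branches structurally, delete unreachable ones, then retest in a fresh session.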
What about conditionals based on the current date?
Date-based conditionals are the most dangerous time-sensitive variant because the condition flips silently. "Until April 2026, use the old format" means the skill became wrong the day that date passed, and nobody noticed. Date-based behavior changes belong in version updates, not in skill instructions.
Last updated: 2026-04-18