14+ AI coding tools support SKILL.md files in some form. "Support" covers a wide range, from full native compatibility with frontmatter processing to reading the file as a plain markdown context document. Understanding the difference prevents you from building a skill that works perfectly in Claude Code and silently fails everywhere else. The question matters more than ever: 84% of developers now use or plan to use AI tools in their workflow (Stack Overflow Developer Survey, 2025), and the AI coding tools market reached $7.93 billion in 2025 (Precedence Research, 2025), so the list of platforms a skill might need to work across keeps growing.
TL;DR: Claude Code is the only platform with native SKILL.md support, which means it processes frontmatter, reads the description field for trigger routing, and loads reference files progressively. Every other platform that supports SKILL.md does so in compatibility mode: the instruction body works, the frontmatter is ignored, and triggering is manual. Your skill quality transfers. Your trigger infrastructure doesn't.
AEM tests and maintains this compatibility list across all 14+ platforms as part of its production skill engineering work.
Which AI coding tools support SKILL.md natively?
As of 2026, Claude Code is the only production AI coding tool with full native support across all four features of the SKILL.md specification: YAML frontmatter parsing, description-field trigger routing, progressive reference file loading, and multi-skill library management with selective activation. No other platform combines all four, because they depend on Claude Code's internal skill classifier and context management system.
The four native features in full:
- YAML frontmatter parsing
- Description field processing for automatic trigger routing
- Progressive reference file loading on demand
- Multi-skill library management with selective activation
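In file form, those four features hang off a frontmatter block like the one below. This is a minimal sketch: the `name` and `description` fields follow the published skill format, while the skill content itself is illustrative.

```markdown
---
name: changelog-writer
description: Use this skill when the user asks to write a changelog entry.
---

## Steps
1. Summarize the change in one imperative sentence.
2. If the change touches authentication, load references/auth-patterns.md.

## Output contract
Return a single markdown bullet under the matching heading, no surrounding prose.
```

Native mode parses the block between the `---` markers, routes on `description`, and defers the reference file until step 2 actually needs it. Compatibility mode treats every line of this file as plain context text.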
Which AI coding tools support SKILL.md in compatibility mode?
Fourteen tools support SKILL.md in compatibility mode, including Cursor, Windsurf, Cline, Roo Code, JetBrains AI Assistant, Continue, and Aider. Each reads the SKILL.md instruction body as a plain markdown context file: the steps, output contracts, and constraints work correctly, but frontmatter is ignored and triggering is always manual rather than description-field-driven.
Tools with confirmed compatibility mode support:
| Tool | Installation path | Invoke method |
|---|---|---|
| Cursor | .cursor/rules/ | @-mention or glob pattern |
| Windsurf | Project rules directory | Manual or pattern-based |
| Bolt | Project context | Pasted or attached |
| Zed | Assistant rules | Manual include |
| Cline | VS Code extension config | Manual or rule-based |
| Roo Code | VS Code extension config | Manual or rule-based |
| Aide | Project context | Manual |
| Void | Context files | Manual |
| PearAI | Project rules | Manual |
| JetBrains AI Assistant | Editor instructions | Manual |
| Continue | VS Code extension config | Manual |
| Kilo Code | Rules directory | Manual |
| Cody | Context files | Manual |
| Aider | Config files | Manual |
The table shows the scope of the compatibility layer: every platform except Claude Code requires manual invocation. None of them read the description field for automatic routing. Cursor alone has over 1 million daily active users (Cursor.com, 2025), and Cline passed 2 million VS Code Marketplace installs by July 2025 (VS Code Marketplace, 2025). A large share of developers building with SKILL.md-compatible tools are doing so through manual invocation.
What is the real difference between native and compatibility support?
Two things separate native from compatibility support: trigger automation and progressive reference file loading. Native mode fires the skill automatically when a task matches the description field; compatibility mode requires manual invocation every time and silently ignores any reference-loading directives in the skill body. In practice, these two gaps determine how much of your skill infrastructure transfers to another platform.
Trigger automation. In Claude Code, the description field makes a skill fire when the task matches its trigger condition. "Use this skill when the user asks to write a changelog entry" fires the skill every time you ask for a changelog, without any @-mention or manual include. In compatibility mode, no platform does this. You invoke the skill every time.
Progressive reference loading. In Claude Code, a skill can load a 200-line reference file only when needed. "If the task involves authentication patterns, load references/auth-patterns.md" is an execution directive. In compatibility mode, the tool either ignores this instruction or requires you to manually attach the reference file.
The practical gap: in Claude Code, a skill with 5 reference files costs almost no tokens at startup because the references load on demand. In compatibility mode, you either inline all the reference content (paying the full token cost every time) or accept that the reference loading step fails silently.
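The split can be sketched in a few lines of code. The helper names below are hypothetical; real tools implement this parsing internally.

```python
# Sketch of how native mode vs. compatibility mode handles the same
# SKILL.md file. parse_native/parse_compat are illustrative names only.

SKILL = """---
name: changelog-writer
description: Use this skill when the user asks to write a changelog entry.
---
## Steps
1. Summarize the change in one imperative sentence.
2. Output a markdown bullet under the correct heading.
"""

def parse_native(text):
    """Native mode: split off the YAML frontmatter and extract the
    description field that drives automatic trigger routing."""
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

def parse_compat(text):
    """Compatibility mode: the whole file is plain context text.
    No metadata is extracted, so nothing can route automatically."""
    return {}, text.strip()
```

In native mode, `parse_native(SKILL)` surfaces the description so the platform can match it against the task; in compatibility mode, the frontmatter is just three more lines of context and the routing information is lost.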
AI tools now generate 46% of the code written by developers on GitHub (GitHub Octoverse, 2025). A skill with a poorly scoped output contract produces inconsistent results across that entire surface, which is why the instruction body design matters more than the platform.
"The single biggest predictor of whether an agent works reliably is whether the instructions are written as a closed spec, not an open suggestion." — Boris Cherny, TypeScript compiler team, Anthropic (2024)
This applies across platforms. A skill with a tightly written instruction body, clear output contract, and no reliance on progressive loading works in compatibility mode as reliably as it works in native mode. The skill quality is portable. The infrastructure is not.
How do I write a SKILL.md that works across all compatible platforms?
Write the instruction body to be platform-agnostic, then layer Claude Code-specific features on top as enhancements the skill can run without: avoid tool-specific step names, keep all critical content inline rather than in reference files, define an explicit output contract, and validate on the lowest-common-denominator platform before treating the skill as production-ready across tools.
Four rules:
- No tool-specific calls in steps: "Use the Bash tool to run the linter" is Claude Code-only. "Run the linter" works everywhere.
- Self-contained steps: Don't rely on progressive reference loading for core functionality. If a reference file is critical, inline the content in the skill body. Reference loading is an optimization, not a dependency.
- Explicit output contracts: Every platform respects output constraints. A skill that produces "a JSON object with `severity`, `description`, and `fix` fields" behaves consistently across all platforms.
- Test on the lowest-common-denominator platform: If the skill works correctly when manually invoked in Cursor (no automatic triggering, no reference loading), it works everywhere.
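Applied together, the four rules produce a body like this illustrative fragment (the JSON field names match the contract example above):

```markdown
## Steps
1. Run the linter and collect all warnings.   (not "Use the Bash tool to run...")
2. Classify each warning's severity as low, medium, or high.

## Output contract
Return only a JSON object with `severity`, `description`, and `fix` fields.
No prose before or after the JSON.
```

Nothing in it names a tool, depends on a reference file, or assumes automatic triggering, so it behaves the same whether it fires natively or is @-mentioned by hand.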
In our cross-platform builds, skills written with these four rules achieve consistent output quality across 8+ platforms. The only variance is in how the skill gets invoked, not in what it produces. Consistent output contracts matter because the productivity gains are real: developers using AI coding tools complete tasks 55.8% faster than those without (Peng et al., arXiv 2302.06590, 2023). A January 2026 JetBrains survey found GitHub Copilot (29%), Cursor (18%), and Claude Code (18%) are the top three AI coding tools used at work: developers are not choosing one platform and stopping there.
For how to use SKILL.md files specifically in Cursor, see Can I Use Claude Code Skills in Cursor?. For Copilot compatibility, see Do Claude Code Skills Work in GitHub Copilot?.
How do I check whether a specific tool supports SKILL.md before building for it?
Three methods work reliably: check SkillsMP compatibility tags for community-verified results, search the tool's documentation for "custom instructions" or "rules file" support, or drop a minimal 20-line SKILL.md into the tool and verify it follows the output contract. The last takes five minutes and produces a more accurate answer than documentation claims alone.
- Check SkillsMP compatibility tags: Skills listed on SkillsMP include compatibility tags showing which platforms support them and at what level. If a published skill works in your target platform, the format is confirmed.
- Check the tool's documentation for "custom instructions" or "rules": Any tool that supports markdown-based custom instructions or rules files reads SKILL.md in compatibility mode. Search for "custom instructions," "rules file," or "AI context" in the tool's documentation.
- Test with a minimal skill: Create a 20-line SKILL.md with a simple output contract. Drop it in the tool's context or rules folder. If the output contract is followed, the tool is reading the instruction body. If the output is generic, the file isn't being read.
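If the probe skill's contract asks for the JSON object used as an example earlier, checking the tool's reply can be mechanical. This is a sketch; `follows_contract` is a hypothetical helper, not part of any tool's API.

```python
import json

REQUIRED_KEYS = {"severity", "description", "fix"}

def follows_contract(output: str) -> bool:
    """True if the raw reply is exactly the JSON object the probe
    skill's output contract demands -- no extra keys, no wrapper prose."""
    try:
        data = json.loads(output.strip())
    except json.JSONDecodeError:
        return False  # markdown fences or chatty prose break strict JSON
    return isinstance(data, dict) and set(data) == REQUIRED_KEYS
```

A contract-following reply parses and matches the key set; a generic answer fails the check, which tells you the skill body was never read.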
The test takes 5 minutes. It's more reliable than documentation claims because "supports SKILL.md" sometimes means "reads markdown files," which every tool does. Platforms in this space move fast: OpenAI's reported $3 billion agreement to acquire Windsurf collapsed in mid-2025, after which Google hired Windsurf's leadership and Cognition acquired the company, meaning compatibility claims can change as products are absorbed and rebuilt.
FAQ
Does SKILL.md work in Windsurf?
Yes, in compatibility mode. Install SKILL.md files in Windsurf's project rules directory. The instruction body, steps, and output contracts work. The frontmatter and description trigger mechanism are ignored.
Can I use SKILL.md files in VS Code directly?
VS Code itself doesn't read SKILL.md files, but VS Code extensions do. Cline, Roo Code, Continue, and Kilo Code all support SKILL.md in compatibility mode within the VS Code environment. Install the extension, then configure it to reference your skill files.
Do Claude Code skills work in JetBrains IDEs?
JetBrains AI Assistant supports SKILL.md in compatibility mode. Install the file, invoke it manually in the AI chat, and the steps and output contracts function. Automatic triggering via the description field is not supported.
Can I use skills from SkillsMP in Windsurf or Bolt?
If the skill is tagged "Universal" or includes Windsurf/Bolt compatibility notes on SkillsMP, yes. Skills tagged "Claude Code only" use features that don't port. Check the compatibility tag before building a workflow dependency on a downloaded skill.
What's the simplest way to test whether my skill works in another tool?
Write a minimal version of the skill with a single specific output contract, drop it into the target tool's rules or context directory, and ask the tool to perform the task. If it produces the specified output, the skill body is being read. If not, the file isn't in the right location or the tool doesn't read it at all.
Is there a way to get automatic skill triggering in tools other than Claude Code?
Not with the SKILL.md description field mechanism. Some tools (Cursor, Windsurf) support glob-pattern-based auto-attachment, which is a coarser form of automation: files attach based on file type or project structure, not based on task intent. For intent-based automatic routing, Claude Code is currently the only option.
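For contrast, Cursor's glob-based attachment looks roughly like the fragment below. This is a sketch of a Cursor project rule; check Cursor's rules documentation for the current field names and glob syntax.

```markdown
---
description: Changelog formatting conventions
globs: CHANGELOG.md,docs/releases/**
alwaysApply: false
---
When these files are in context, follow the changelog skill's steps
and output contract.
```

The rule attaches because a matching file is open, not because the user asked for a changelog, which is exactly the file-type-versus-task-intent distinction described above.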
Last updated: 2026-04-17