Yes. Claude Code skills and agents work together, and combining them is the standard production pattern for complex workflows. Skills handle the individual expert tasks. Agents handle the coordination, routing, and multi-step sequencing. The distinction matters for how you divide work between them.

TL;DR: Skills and agents are complementary, not competing. An agent runs a workflow across multiple steps. A skill handles one specific step with precision. Combined: the agent knows the order of operations; the skills know how to execute each operation. One agent, eight skills, zero repeated context across the session.

How Do Claude Code Skills and Agents Work Together?

An agent in Claude Code is a configuration that defines how Claude should behave across a multi-step autonomous task. Skills are precision tools the agent invokes for specific sub-tasks within that workflow. The agent owns the sequence and the routing logic. The skills own the domain knowledge.

A concrete example: a content review workflow. The agent knows to:

  1. Read the draft
  2. Run a quality check
  3. Check for brand voice compliance
  4. Generate revision suggestions
  5. Present options for the writer

Each of steps 2, 3, and 4 is a different skill. The quality check skill knows your specific quality bar. The brand voice skill knows your client's rules. The revision suggestions skill knows your preferred output format. The agent knows the order. The skills know the domain.
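One way to picture the division is as a skill directory, assuming the standard `.claude/skills/` layout; the skill names here are illustrative, not a prescribed structure:

```
.claude/skills/
├── quality-check/
│   └── SKILL.md        # your specific quality bar
├── brand-voice/
│   └── SKILL.md        # the client's voice rules
└── revision-suggestions/
    └── SKILL.md        # the preferred output format for suggestions
```

Each SKILL.md holds one domain; the agent layer holds the sequence that strings them together.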

Without the skills, the agent would need all that domain knowledge embedded in its own definition. Without the agent, each skill would need to manually trigger the next step rather than being orchestrated. The Stack Overflow 2025 Developer Survey found that 51% of professional developers now use AI tools daily, up from a much smaller base in 2023, which is why orchestrated multi-skill setups are moving from experimental to standard (Stack Overflow, 2025).

What Does a Skills-Plus-Agents Workflow Look Like in Practice?

The most common pattern is a coordinating CLAUDE.md configuration that routes to specialized skills. CLAUDE.md acts as the agent layer: it holds the workflow order and the routing rules. The skills contain the domain expertise.

CLAUDE.md
  "When reviewing content, invoke the brand-voice skill first,
   then the quality-check skill, then present options."

Claude reads the routing instructions from CLAUDE.md and invokes the relevant skills in order.

For more sophisticated setups, an orchestrating skill defined in its own SKILL.md can explicitly reference other skills:

Step 1: Invoke the brand-voice-check skill on the submitted content.
Step 2: Invoke the quality-bar-check skill on the same content.
Step 3: Combine outputs and present the top 3 revision paths.

This is an orchestrating skill: a skill that coordinates other skills. One agent, eight skills, zero repeated context. That is the production setup. The agent knows the workflow; the skills know the domain.
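A minimal sketch of what such an orchestrating skill's SKILL.md might look like, using Anthropic's name/description frontmatter format; the skill names are hypothetical:

```markdown
---
name: content-review
description: Orchestrates a full content review. Use when the user asks to review a draft.
---

# Content Review

1. Invoke the brand-voice-check skill on the submitted content.
2. Invoke the quality-bar-check skill on the same content.
3. Combine both outputs and present the top 3 revision paths.
4. Stop and ask the writer before applying any revision.
```

The orchestrating skill carries only sequencing instructions; the domain knowledge stays in the skills it invokes.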

"Developers don't adopt AI tools because they're impressive — they adopt them because they reduce friction on tasks they repeat every day." — Marc Bara, AI product consultant (2024)

A skills-plus-agents workflow reduces friction on the routine parts: the brand check, the quality check, the format validation. Those run reliably. The agent handles the judgment: what order, which output to present, when to ask for human input. Research by ETH Zurich (Gloaguen et al.) found that developer-written context files for AI agents produce a modest 4% improvement in task success, while machine-generated files can reduce success rates by approximately 3% and increase inference costs by over 20%.

When Does the Combination Add Value vs Just Adding Complexity?

The combination earns its complexity when a workflow has three specific conditions: distinct expert domains that require separate context, a repeatable sequence that stays consistent while content changes, and team members who own different pieces of the workflow independently. All three conditions must be present. One or two conditions alone do not justify the coordination overhead. Gartner recorded a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025, reflecting exactly this need (Gartner, 2025). The specific conditions are:

  1. Multiple distinct expert tasks are required: If a workflow needs code review expertise AND documentation style expertise AND deployment constraint checking, three separate skills serve better than one mega-skill trying to do all three. The agent routes between them.
  2. The workflow has a consistent sequence but variable content: The order of operations is the same every time: check, validate, suggest, present. The content changes every time. The agent holds the sequence; the skills hold the expertise.
  3. Different team members maintain different parts of the workflow: The frontend team owns the UI style skill. The backend team owns the API standards skill. The content team owns the writing standards skill. One agent coordinates all three without any team needing to know the others' rules.

The combination adds complexity without adding value when:

  1. The workflow is a single task: If you need to review a PR for code quality, one skill does this well. Adding an agent to invoke the skill adds a layer with no benefit.
  2. The agent's routing logic is trivial: "Run skill A, then run skill B" is two sentences in a skill's process steps. That does not need an agent definition. The coordination overhead is zero.

For teams starting out, begin with skills alone. Add agent coordination when you find yourself writing "then invoke X" in multiple skills' step sections. AI coding tools now generate 41% of all code written worldwide (Stack Overflow, 2025), which makes workflow discipline more important, not less: knowing when to add orchestration is the judgment that separates a maintainable setup from an over-engineered one.

Which Claude Code Skills and Agents Combination Patterns Are Common?

Three patterns appear consistently in production systems: the sequential expertise chain (agent routes through 3–5 skills in order), the parallel specialist check (agent runs 2–3 skills simultaneously and aggregates findings), and conditional routing (agent checks task type first, then routes to the relevant skill branch). Each pattern suits a different workflow shape. Research from Addy Osmani shows that three to five focused agents consistently outperform a single generalist agent working the same task volume, with token costs scaling linearly across teams of that size (Osmani, addyosmani.com, 2025). The patterns are:

  1. Pattern 1: Sequential expertise chain: Agent routes through 3-5 skills in order. Each skill's output feeds the next skill's input. The agent assembles the final deliverable.
  2. Pattern 2: Parallel specialist check: Agent runs 2-3 skills simultaneously (where Claude can handle parallelism). Skills report findings; agent aggregates them into a summary.
  3. Pattern 3: Conditional routing: Agent checks the task type first (is this a bug fix or a feature?), then routes to the relevant skill. One workflow, multiple specialist branches.
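As an illustration, the conditional routing pattern might look like this as a routing section in CLAUDE.md; the skill names are hypothetical:

```markdown
## Routing

When asked to make a code change:
- If the request is a bug fix, invoke the bug-triage skill,
  then the regression-test skill.
- If the request is a new feature, invoke the api-standards skill,
  then the ui-style skill.
- In either case, finish with the changelog-entry skill.
```

One workflow entry point, two specialist branches, one shared closing step.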

The sequential pattern is the most common starting point. Parallel and conditional routing require more careful context management but suit specific task types well. Anthropic documentation notes that SKILL.md description fields load at roughly 50-100 tokens per skill at the discovery level before full bodies are activated (Anthropic, 2025), which means the discovery overhead for a large library is small relative to the activation cost.

For a deeper look at the boundary between when to use a skill vs when to use agents, see What's the Difference Between a Claude Code Skill and an Agent? and When Does a Workflow Need Multiple Agents vs a Single Skill?.

Frequently Asked Questions

Can a skill invoke another skill inside its steps?

Yes. A skill's process steps can explicitly instruct Claude to activate another skill. "After completing the quality check, invoke the revision-suggestions skill with the flagged issues as input." This makes the first skill an orchestrating skill. Keep orchestrating skills to one level of depth: a skill that calls skills that call skills creates context management problems.

Does an agent have access to all installed skills automatically?

Yes. Skills installed at project or user level are available to any session, including agent-driven ones. The agent does not need to explicitly list which skills it can use. It navigates the installed library via the same discovery mechanism as any other session. Claude Code now supports a 1M token context window at no extra cost, generally available for Sonnet 4.6 and Opus 4.6 (Anthropic, 2025), which means even a large skill library with 20+ skills is a small fraction of available context.

Can I have an agent that only uses specific skills and ignores others?

Not with native skill scoping as of this writing. All installed skills are available in all sessions. If you need to limit which skills an agent can use, the practical approach is to keep the agent and its target skills in a separate project directory that contains only those skills.
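A hypothetical layout for that workaround: a dedicated project directory whose `.claude/skills/` contains only the skills this agent should see:

```
review-agent/                  # separate project, opened in its own session
├── CLAUDE.md                  # agent coordination logic
└── .claude/
    └── skills/
        ├── brand-voice/SKILL.md
        └── quality-check/SKILL.md
```

Sessions opened in this directory see only these two skills, which scopes the agent by construction rather than by configuration.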

How does a skills-plus-agents setup affect the context budget?

It adds up faster. An agent-driven session that invokes three skills sequentially loads the body of each skill into context as it activates. In our production skill library, we measure skill bodies at 1,500-2,000 tokens each: a four-skill workflow spends 6,000-8,000 tokens on skill instructions alone before touching the content being processed. That is before any conversation history accumulates. Plan the workflow with context budget in mind, especially for long-running sessions.
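Using the figures above plus the discovery-level cost cited earlier, a rough budget for a hypothetical session with 20 installed skills and four activated ones (upper bounds, illustrative):

```
4 activated skill bodies  × ~1,500-2,000 tokens  =  6,000-8,000 tokens
20 skill descriptions     ×    ~50-100 tokens    =  1,000-2,000 tokens
                                                    -----------------
Skill overhead before any content is processed:    ~7,000-10,000 tokens
```

Discovery is cheap relative to activation, so the budget pressure comes from how many skills a workflow activates, not how many are installed.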

Should I build the agent logic inside a skill or in CLAUDE.md?

For consistent project-wide workflows, CLAUDE.md is the right place for coordination logic because it loads automatically in every session. For workflows that are specifically invoked rather than always present, build the coordination logic in a dedicated orchestrating skill. The difference is triggering: CLAUDE.md is always-on, skills are on-demand. GenOptima research (2026) found that pages updated within the last 30 days are cited 40% more often by AI systems than pages with identical content updated 6+ months prior, which is an argument for keeping your routing logic in a skill file your team actively maintains.

Do skills and agents work together in all Claude Code deployment environments?

Skills and agents work together in the Claude Code CLI and in Claude Code integrations within development environments that support the full skill loading mechanism. In constrained environments where CLAUDE.md support is limited, the agent coordination layer may not function as expected. Check your specific deployment environment's skill support before designing a production orchestration workflow.

Last updated: 2026-05-04