All four are community platforms for sharing and discovering Claude Code skills, but they serve different audiences and apply different quality standards. SkillsMP hosts the largest library with 700,000+ entries (SkillsMP, 2026). SkillHub focuses on reviewed collections. Agent37 targets professional developers building agent workflows. ClaudeMarketplaces aggregates listings from multiple sources into a single search interface. At AEM, a skill commissioning service that builds production-ready Claude Code skills and agents, we evaluate these platforms regularly when sourcing community skills for client integrations.

TL;DR: Choose SkillsMP to browse the widest selection. Use SkillHub when you want skills someone else has already filtered for structural quality. Go to Agent37 when you're building professional-grade agent systems. Use ClaudeMarketplaces as a discovery starting point when you don't know which platform to search first. None guarantee production-ready skills: evaluate before installing.

What Does Each Platform Focus On?

SkillsMP focuses on volume: open submission, widest library, highest noise. SkillHub focuses on structural filtering: reviewed collections, smaller pool, higher hit rate. Agent37 focuses on professional-grade agent workflows: documented test cases, explicit output contracts, stated MCP compatibility. ClaudeMarketplaces focuses on discovery across sources: one search, multiple indexed platforms, no quality gate of its own.

SkillsMP is the high-volume index. Think of it as the npm registry for Claude Code skills: anyone can publish, the library grows fast, and quality varies. With 700,000+ community entries (SkillsMP, 2026), finding what you need requires search judgment as much as luck. The platform does not gate submissions on quality before publishing, so the signal-to-noise ratio reflects what you'd expect from an open library at that scale.

SkillHub applies a review layer before listing. The library is smaller and grows more slowly. Submissions go through a structural check covering frontmatter format, output contract presence, and trigger condition coverage. SkillHub's value is the filtered result: fewer skills, but a higher fraction that work on inputs other than the one the author personally tested.

Agent37 focuses on the professional end of the market. The platform targets agent workflows and multi-skill orchestration rather than standalone utility skills. It's the right place to look if you need a Claude Code skill designed to slot into a pipeline, hand off structured output to a downstream agent, or operate with defined MCP server compatibility. Agent37 submissions require documented test cases, an explicit output contract, and stated compatibility with named MCP servers.

ClaudeMarketplaces is an aggregator. It indexes listings from multiple sources: SkillsMP, GitHub repositories tagged claude-code-skill, and direct submissions. The breadth is its value: one search, multiple sources. The tradeoff is that ClaudeMarketplaces inherits the quality distribution of whatever it indexes.

How Do the Quality Standards Differ Across Platforms?

SkillsMP has no mandatory quality gate: any valid SKILL.md file qualifies for listing. SkillHub runs a structural review covering frontmatter, output contracts, and trigger coverage. Agent37 requires documented test cases, an explicit output contract, and stated MCP server compatibility. ClaudeMarketplaces inherits the quality distribution of whatever it indexes, with no independent filtering layer.

SkillsMP has no mandatory quality gate before publishing. A prompt wrapped in a SKILL.md file qualifies for listing. In our commissions at AEM, we've evaluated skills clients pulled from SkillsMP for integration into production workflows. Of 10 community skills evaluated, 8 were prompts in a trenchcoat: a nicely named SKILL.md with no output contract, no trigger conditions, and no reference file architecture. They worked exactly once, on the input the author used to write them.

SkillHub's review process improves that ratio. The platform checks for working frontmatter, a defined output contract, and basic trigger coverage. That is a low bar by production standards, but it filters the worst entries before they reach you. SkillHub also grades each of its 87,000+ listed skills (SkillHub, 2026) on five dimensions: Practicality, Clarity, Automation, Quality, and Impact. An S-rank (9.0+) signals a skill worth testing in production. Most community skills score below 7.0.

Agent37 applies the most demanding requirements. Submissions need documented test cases, an explicit output contract, and stated MCP server compatibility. When you need something that works reliably inside an automated workflow, Agent37 narrows your search to a smaller and better-tested population. Description completeness matters for triggering too: Claude Code truncates the combined description and when_to_use text at 1,536 characters in its skill listing to reduce context load (Claude Code Docs, Anthropic, 2026), so a skill with an incomplete description can fail to trigger even when installed correctly.
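Before installing a skill with a long description, it is worth confirming the combined text fits under that truncation point. A minimal sketch, assuming the two fields are plain strings pulled from the skill's frontmatter (the function name and shape here are illustrative, not part of any platform's API):

```python
COMBINED_LIMIT = 1536  # truncation point for description + when_to_use text


def fits_listing(description: str, when_to_use: str) -> bool:
    """True if the combined text survives untruncated in the skill listing."""
    return len(description) + len(when_to_use) <= COMBINED_LIMIT
```

A skill whose combined text exceeds the limit is not broken, but the truncated tail never reaches the model, so trigger phrases buried at the end are silently lost.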

ClaudeMarketplaces is a discovery layer, not a quality filter. Use it to find candidates, then evaluate before installing. Across directories that lack GitHub star-count filtering, 60-70% of listed skills are effectively abandoned with no code updates in six months (OpenAIToolsHub, 2026). Platform tier matters less than your evaluation step.

"The failure mode isn't that the model is bad at the task — it's that the task wasn't specified tightly enough. Almost every production failure traces back to an ambiguous instruction." — Simon Willison, creator of Datasette and llm CLI (2024)

That observation describes most community skills, regardless of platform. The platform matters less than the evaluation step you run after.

How Do I Evaluate a Community Skill Before Installing It?

Evaluating a community skill requires four checks: description field validity in YAML frontmatter, presence of an output contract, reference file architecture instead of a monolithic SKILL.md, and documented test inputs. A skill passing all four is a candidate for a pilot. A skill failing the first check is unusable without edits.

Four checks before installing any community skill from any source:

  1. Description field: Single line in YAML frontmatter, under 1,024 characters (Claude Code specification). A broken description means the skill never triggers automatically. Binary pass/fail.
  2. Output contract: Does the SKILL.md define what the skill produces and what it does not produce? No output contract means no constraint on Claude's output behavior.
  3. Reference file architecture: Does the skill use reference files for domain knowledge, or is everything embedded in SKILL.md? A 600-line single file is a context dump, not a production skill.
  4. Test documentation: Does the listing describe the inputs it was tested on? A skill tested on one input is a fair-weather skill.

Skills failing check 1 are unusable as-is. Skills passing all four are candidates for a pilot. For the full production bar, see What Makes a Community Skill Production Ready.
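The four checks above lend themselves to a quick automated triage before you read a skill by hand. Below is a minimal sketch in Python, assuming a SKILL.md with YAML frontmatter. The heading patterns for checks 2 and 4 and the 600-line threshold for check 3 are illustrative assumptions about common conventions, not requirements from any platform or specification:

```python
import re
from pathlib import Path

DESCRIPTION_LIMIT = 1024  # Claude Code's documented cap for the description field


def check_skill(path):
    """Run the four pre-install checks against a SKILL.md file.

    Returns a dict of check name -> pass/fail. The heading names used in
    checks 2 and 4, and the 600-line threshold in check 3, are guesses at
    common conventions, not platform requirements.
    """
    skill_file = Path(path)
    text = skill_file.read_text(encoding="utf-8")
    results = {}

    # Check 1: single-line description in YAML frontmatter, under 1,024 chars.
    desc = None
    frontmatter = re.search(r"\A---\n(.*?)\n---", text, re.DOTALL)
    if frontmatter:
        for line in frontmatter.group(1).splitlines():
            if line.startswith("description:"):
                desc = line.partition(":")[2].strip()
                break
    results["description"] = desc is not None and 0 < len(desc) <= DESCRIPTION_LIMIT

    # Check 2: some statement of an output contract (heading names are a guess).
    results["output_contract"] = bool(
        re.search(r"^#+\s*(output|contract|produces)", text, re.I | re.M)
    )

    # Check 3: reference files alongside SKILL.md, or at least not a context dump.
    has_refs = any(skill_file.parent.glob("reference*"))
    results["reference_files"] = has_refs or len(text.splitlines()) < 600

    # Check 4: documented test inputs (again, heading names are a guess).
    results["test_docs"] = bool(
        re.search(r"^#+\s*(tested|test cases|examples)", text, re.I | re.M)
    )
    return results
```

Point it at a downloaded skill directory and a failing "description" entry tells you immediately the skill cannot trigger without edits; the other three checks only flag candidates for closer reading.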

This evaluation applies to every source. SkillsMP requires it most often. Agent37 requires it least often. It is never optional.

Which Platform Should I Use When Publishing vs. Installing?

For installing, use SkillHub or Agent37 when reliability matters and SkillsMP when breadth matters more than quality. For publishing, list on SkillHub and Agent37 if your skill passes the production bar check, and SkillsMP when volume reach is the goal. ClaudeMarketplaces is not a submission destination: it indexes from other sources automatically.

For installing: Use SkillHub or Agent37 when reliability matters. Use SkillsMP when breadth matters more than quality. Use ClaudeMarketplaces for discovery when you're not sure which platform to search.

For publishing: SkillHub and Agent37 are where you want distribution if you've built something that passes the production bar check. SkillsMP is appropriate for useful utility skills where visibility across 700,000+ entries is the goal. ClaudeMarketplaces is not a submission destination: it indexes from other sources automatically.

For maximum reach with a high-quality skill, publish to both SkillHub and SkillsMP. SkillsMP has higher total traffic. SkillHub's filtering means your skill competes in a smaller pool where quality signals carry more weight. The effort to list on both is low: the SKILL.md file is identical, and each platform has its own submission form. Agent adoption is accelerating: 79% of organizations have deployed AI agents to some extent (Stack Overflow Developer Survey, 2025), which means the audience searching these platforms is growing faster than the quality-reviewed supply.

What Are the Real Limits of Community Platforms?

Community platforms solve discovery. They do not solve quality. Volume and quality are different curves: the largest libraries grow by adding more skills, not better ones. Most useful community skills still need adaptation before they fit a production context. Specialized workflows involving external APIs or project-specific schemas outrun what any community pool can supply.

A skill published by someone else was built for their context, their project structure, their version of Claude Code, and the specific inputs they tested. Installing it in your project applies someone else's solution to your problem. The fit depends on how similar your use case is to theirs.

Most useful community skills need 20-40 minutes of adaptation to fit a different project context (AEM commission data, 2026). That is still faster than building from scratch for many utility skills. But the adaptation step is not optional; it is where the real work happens.

The skill-as-commodity market is growing. The prompt marketplace reached $1.94 billion in 2025 and is expanding at 29.5% CAGR (AEM market research, 2026). Volume growth and quality growth are not the same curve. Most new listings are prompt files with SKILL.md wrappers, not engineered skills with test coverage and defined output contracts.

This pattern works for standalone utility skills. For workflows involving external APIs, project-specific data schemas, or specialized domain knowledge, the community pool thins quickly. At that specificity level, building or commissioning is faster than searching. For details on what getting a skill built looks like, see How Do I Share My Claude Code Skill with Other People.

Frequently Asked Questions

For most installs, SkillHub offers the best quality-to-effort ratio: 87,000+ graded skills in one filtered index. SkillsMP wins on breadth and reaches the widest audience for publishers. Agent37 is the right destination when your workflow requires documented test cases. ClaudeMarketplaces works as a starting point when you do not know where else to search.

Is SkillsMP or SkillHub better for finding high-quality Claude Code skills? SkillHub applies more quality filtering before listing, which means a higher fraction of its skills work reliably on varied inputs. SkillsMP has 700,000+ entries (SkillsMP, 2026), which gives more breadth but more noise. For reliable skills, start with SkillHub. For niche or uncommon tasks, SkillsMP has the wider selection.

Can I publish my Claude Code skill to multiple platforms at once? Yes. SkillsMP, SkillHub, and Agent37 each have separate submission processes with no exclusivity requirements. Submit to all three for maximum distribution. ClaudeMarketplaces will index your skill automatically once it appears in a linked GitHub repository or SkillsMP listing.

What does Agent37 require before accepting a skill submission? Agent37 submissions need documented test cases, an explicit output contract, and stated MCP server compatibility. The review is the most demanding of the four platforms, which is why its library is smallest and most reliable for professional agent workflows.

How do I install a community skill from SkillsMP into my project? See How Do I Install a Community Skill from SkillsMP into My Project for the step-by-step process.

What is the awesome-claude-skills repository and how does submitting to it compare to these platforms? The awesome-claude-skills repository is a community-maintained GitHub list, not a platform. Submitting via pull request gets your skill visible to developers browsing GitHub rather than the platform UIs. It complements rather than replaces platform submissions.

Is there a paid marketplace for buying and selling Claude Code skills commercially? The commercial skill market is early but developing. See Can I Sell My Claude Code Skills on a Marketplace for the current state of paid skill distribution.

Last updated: 2026-04-26