/ skill-engineer-master

Most AI skills are prompts wearing a trenchcoat.

We build the real thing.

Every skill we engineer has four components: trigger logic, a methodology body, an output contract, and handled edge cases. That’s the production bar. Most community skills clear two of them on a good day.

SKILL.md
---
name: weekly-content-review
trigger: /content-review
version: 1.0.0
tools:
  - Read
  - Grep
  - Write
output-contract:
  file: revision-brief.md
  fields:
    - tone-drift-flags: string[]
    - structure-score: "0–10"
    - suggested-edits: string[]
edge-cases:
  - incomplete-draft: flag sections
  - off-brand-tone:
      score < 6 → rewrite
  - adversarial-input:
      reject + log
---
# methodology body follows
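As a sketch of what the four-component bar check could look like in practice, here is a minimal Python gate over a SKILL.md file. The field names mirror the example above; the parsing and the `clears_bar` function are illustrative assumptions, not the actual tooling.

```python
# Illustrative sketch: a minimal "bar check" over a SKILL.md file.
# Required keys mirror the example frontmatter; the check itself is
# an assumption about how such a gate could work.

REQUIRED_KEYS = ("trigger:", "output-contract:", "edge-cases:")

SKILL_MD = """\
---
name: weekly-content-review
trigger: /content-review
version: 1.0.0
output-contract:
  file: revision-brief.md
edge-cases:
  - incomplete-draft: flag sections
---
# methodology body follows
"""

def clears_bar(text: str) -> list[str]:
    """Return the production-bar components this SKILL.md is missing."""
    missing = []
    head, _, body = text.partition("---\n# ")   # frontmatter vs. body
    for key in REQUIRED_KEYS:
        if key not in head:
            missing.append(key.rstrip(":"))
    if not body.strip():                        # methodology body present?
        missing.append("methodology-body")
    return missing

print(clears_bar(SKILL_MD))  # → []
```

A skill that returns an empty list clears the bar; anything else names the missing component.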
/ the problem

700,000 skills. Statistically, most of them are vibes with a file extension.

[∅]
No quality gate
700,000+ files on SkillsMP
Zero pass a production review. Any text file with the right extension gets listed. Volume is the product. Quality is not the constraint.
[⚡]
Fair-weather skills
Works once. In ideal conditions.
Community skills work for the person who wrote them, in the scenario they had in mind. The first edge case breaks them. There is no edge case handling because there was no edge case testing.
[⏱]
The craft tax
Afternoon spent. Still doesn’t work right.
Writing a skill that handles edge cases takes real time. Trigger logic, output contracts, methodology design — that’s a craft, and most people never get the time to learn it. They’re running a Porsche on fumes.
/ the production bar

Four components. All four. Every time.

01 —
Trigger Logic
When exactly does this skill activate? What context makes it relevant? What inputs does it accept, and what does it reject? Defined precisely — not left to the model’s interpretation.
02 —
Methodology Body
The step-by-step process the skill runs. Not vibes. A procedure. Each step has a defined input, action, and output. The model follows it. It doesn’t improvise around it.
03 —
Output Contract
Exactly what the skill returns. Format, fields, conditions. No surprises. Downstream processes that depend on the output can trust it. Every time, not most of the time.
04 —
Edge Case Handling
What happens when the input is incomplete, ambiguous, or adversarial. Tested, not hoped for. We identify the scenarios that break fair-weather skills and define how the skill responds.
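To make the contrast with fair-weather skills concrete, edge-case handling can be expressed as explicit dispatch rather than hope. The scenario names below follow the edge-cases block in the example SKILL.md; the function signature and its threshold are hypothetical.

```python
# Hypothetical edge-case dispatch: every failure scenario gets a defined
# response instead of being left to the model's improvisation.

def handle_edge_case(draft: str, tone_score: int, adversarial: bool) -> str:
    """Map the three edge cases from the example SKILL.md to responses."""
    if adversarial:             # adversarial-input
        return "reject + log"
    if not draft.strip():       # incomplete-draft
        return "flag sections"
    if tone_score < 6:          # off-brand-tone
        return "rewrite"
    return "proceed"

print(handle_edge_case("Weekly update draft", tone_score=4, adversarial=False))  # → rewrite
```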
/ process

Brief in. Deployable skill out.

01 — Commission
Submit your brief
Tell us what the skill should do, what triggers it, what tools it uses, and what it returns. Structured intake form — 15 minutes to fill out, not a back-and-forth thread.
02 — Engineer
We run the full process
Intake, prior art research, design, generation, validation. Every skill passes the four-component bar check before it leaves our queue. If it fails the bar, we rebuild it. You don’t see the failed draft.
03 — Deploy
Drop it in and run it
You receive a complete skill folder: SKILL.md + assets + references. Drop it in .claude/skills/. That’s it. No configuration. No setup guide. It runs.
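Assuming the standard Claude Code layout the step above describes, deployment is just placing the folder. The paths and file contents here are illustrative:

```python
# Illustrative sketch of the drop-in deploy step. The skill name and the
# .claude/skills/ location come from the text; everything else is assumed.
from pathlib import Path
import tempfile

project = Path(tempfile.mkdtemp())              # stand-in for your repo root
skill_dir = project / ".claude" / "skills" / "weekly-content-review"
skill_dir.mkdir(parents=True)

# The delivered folder: SKILL.md plus any assets/references it ships with.
(skill_dir / "SKILL.md").write_text("---\nname: weekly-content-review\n---\n")

print(sorted(p.name for p in skill_dir.iterdir()))  # → ['SKILL.md']
```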
/ delivery: .zip + GitHub Gist  |  within 1 hour  |  one revision round included
/ pricing

Named. Specific. Accountable.

/ per skill
$9.99
one skill, production-ready
  • Brief → SKILL.md + assets
  • 1-hour delivery
  • One revision round
  • Four-component bar check
Commission a Skill
/ every skill clears all four checkpoints. if it misses the bar, we rebuild it.
/ faq

Common questions.

What is a skill?
A structured instruction file that tells Claude Code how to run a specific workflow. Four required components make it production-grade: trigger logic, methodology body, output contract, edge case handling. Think of it as a recipe — but one that defines what to do when you’re out of an ingredient.

What if a skill doesn’t work?
We rebuild it. The production bar isn’t a suggestion. If a skill fails a use case inside its stated scope, that’s a bar miss on our side. We fix it. No charge, no argument. The only thing we ask: tell us specifically what broke and on what input.

What do I need to run one?
You need to be able to drop a folder into .claude/skills/ and run it. That’s a one-time setup. We document exactly how. If you’re using Claude Code in any serious capacity, you already have this covered.

Do skills work outside Claude Code?
Skills are designed for Claude Code, but the SKILL.md format is readable by any agent that follows structured instruction files. In practice they work well in OpenClaw, Codex, and similar tools. Claude Code is the primary target and where we test — behaviour in other agents isn’t guaranteed, but it’s rarely far off.

How fast is delivery?
1 hour from brief approval to delivery. If we miss that window, we tell you before it happens — not after. No vague “we’ll get back to you.”

You’ve explained your workflow to Claude nine times this week.

That’s not a Claude problem. It’s a skill problem. Commission the skill once. Run it forever.

Commission Your First Skill