Capture the work
Before you write any code, write down what you're trying to package. The clearer you are here, the cleaner every later step becomes. This works whether you've never built a skill before or you've shipped a dozen.
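A concrete capture note helps. Here is a sketch for a hypothetical release-notes workflow; every detail is illustrative, not from your inputs:

Workflow: draft weekly release notes from merged pull requests
Trigger: every Friday, or when someone asks what shipped this week
Inputs: merged PR titles and labels from GitHub
Steps: group by area, summarize each change in plain language, flag breaking changes
Output: a markdown changelog with Added / Changed / Fixed sections
Tools: read access to the GitHub repository

Six lines like these answer most of the questions the later steps will ask.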
I will help you build a plugin from repeated work. Here is the path:
- ✓ Capture the workflow and requirements
- ✓ Define the reusable skill
- ✓ Connect the tools involved
- ✓ Add commands to run the work
- ✓ Add hooks for validation
- ✓ Package and prepare to ship
Define the core skill this plugin will execute.
{
"name": "workflow_skill",
"description": "Runs the repeated workflow",
"input_schema": "src/schema/input.json",
"output_schema": "src/schema/output.json",
"model": "ai-workspace-runtime"
}
Once the draft is ready, review the connected tools and run checks before pointing it at real work.
Each step that follows produces one build component: the manifest at .claude-plugin/plugin.json, the command file at commands/run.md, and the validation hook at hooks/validate.sh. By the end, the workflow can be installed, evaluated, and improved as a plugin instead of copied around as another prompt.
Write it as a prompt
A prompt is just text you send the model. It's the lightest possible packaging: zero setup, fine for one-off work. We're starting here because the prompt forces you to make every part of the workflow explicit. That same thinking will go straight into your skill in the next step.
Strong prompts share a common structure. Each part answers a question the model needs answered before it can produce useful output.
- ROLE: who the model is acting as
- CONTEXT: what background info it has
- TASK: what to do, step by step
- FORMAT: what the output should look like
- CONSTRAINTS: what to avoid or never do
Reference: Anthropic's prompt engineering overview at docs.claude.com/en/docs/build-with-claude/prompt-engineering.
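Composed for the illustrative release-notes workflow from Step 1, the five parts might read:

ROLE: You are a release manager for a small engineering team.
CONTEXT: You receive this week's merged pull request titles and labels.
TASK: Group the PRs by area, write one plain-language line per change, and flag anything breaking.
FORMAT: A markdown changelog with Added / Changed / Fixed sections.
CONSTRAINTS: Never invent a change that is not in the input. Keep each line under 20 words.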
flowchart LR
A[Prompt fields] --> B[Composed prompt]
B --> C{Test in Claude}
C -->|Works| D[Lift the process into a Skill]
C -->|Output drifts| E[Tighten FORMAT and CONSTRAINTS]
E --> B
D --> F[Step 3]
Lift it into a skill
A skill is a markdown file the model reads. It teaches the model how to do something the same way every time. Once you've installed it, you don't have to retype your prompt: the skill auto-fires when the model sees a request that matches the description.
Every skill is a single markdown file with YAML frontmatter at the top. The frontmatter has three required fields:
---
name: skill-name-in-kebab-case
description: When to use this skill. The model reads this to decide whether to fire.
version: 1.0.0
---
The body is the instructions the model follows when this skill is active.
The description is doing the real work. It's what the model uses to auto-trigger the skill. Specific keywords and clear scope are what make a skill fire when it should and stay silent when it shouldn't.
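Sketched for the illustrative release-notes workflow, a SKILL.md might read like this; the name, description, and instructions are all placeholders to adapt:

---
name: weekly-release-notes
description: Use when the user asks to draft, write, or update weekly release notes or a changelog from merged pull requests.
version: 1.0.0
---
Group the merged PRs by product area. Write one plain-language line per change.
Flag breaking changes explicitly. Output a markdown changelog with
Added / Changed / Fixed sections. Never invent a change that is not in the input.

Notice the description names both the words a user would type (release notes, changelog) and the scope (merged pull requests); that is what the model matches against.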
sequenceDiagram
participant U as User
participant M as Model
participant S as Skill library
U->>M: Asks a question
M->>S: Scans descriptions
S-->>M: Skill matches keywords
M->>M: Loads SKILL.md body
M->>U: Responds following the skill instructions
Connect the systems
A skill alone can't read live data. It needs connected tools (MCP, when available) to reach the systems where work actually lives. You picked your tools in Step 1; now we'll wire them up. If your workflow doesn't need live data, you can skip this step.
Every connected tool is one entry under mcpServers. The server defines its command, environment variables, and (optionally) the scope of what the skill is allowed to do.
{
"mcpServers": {
"server-name": {
"command": "npx",
"args": ["-y", "vendor-mcp-server"],
"env": {
"API_KEY": "${API_KEY}"
}
}
}
}
Use ${VAR_NAME} for credentials. Never hardcode secrets in the file.
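For instance, a GitHub connector could look like the sketch below. The package name @modelcontextprotocol/server-github and the GITHUB_PERSONAL_ACCESS_TOKEN variable are assumptions based on the commonly published server; check the vendor's documentation for the current names.

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_PERSONAL_ACCESS_TOKEN}"
      }
    }
  }
}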
Add slash commands
Slash commands are the way you (or your team) actually run the plugin. Each command is a markdown file in the commands/ directory; typing /command-name in your AI workspace triggers it. Most plugins ship 1-4 commands. Don't overdo it.
One markdown file per command in commands/. The file name (minus .md) becomes the slash command.
---
name: command-name
description: One line describing what this command does
---
The body is the instruction the model follows when the command runs. Reference the skill, mention what to read from connected tools, etc.
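For the same illustrative workflow, a commands/release-notes.md might read:

---
name: release-notes
description: Draft this week's release notes from merged pull requests
---
Use the weekly-release-notes skill. Pull this week's merged PRs from the
connected GitHub tool, draft the changelog, and show it to me for review
before writing anything back.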
flowchart TD
A[User types /command] --> B[Plugin loads command file]
B --> C[Skill is loaded into context]
C --> D[connected tools provide live data]
D --> E[Model executes the workflow]
E --> F{Hooks fire at events}
F --> G[Output to user]
Add deterministic checks
Hooks are scripts that run at specific events: before a tool runs, after a tool runs, when the model is about to stop. Use them for things you can't trust the model to remember every time. Format checks, schema validation, log writes, banned-word scans. If you don't need any of these, this step is optional.
- PreToolUse: before any tool call
- PostToolUse: after any tool call
- Stop: when the model is about to end its turn
- SessionStart: when a new session opens
- SessionEnd: when a session closes
- UserPromptSubmit: when the user sends a message
A failing hook blocks the action: if your script exits non-zero, the action is halted. Use "timeout": 30 to cap a script's run time in seconds.
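As a sketch, a hooks/hooks.json wiring a post-write check to a script might look like this; the shape follows Claude Code's hooks format, but the matcher, script, and file path are illustrative, so confirm the exact event schema against your workspace's hook documentation:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "bash hooks/validate.sh", "timeout": 30 }
        ]
      }
    ]
  }
}

The script itself can be as small as a grep:

#!/usr/bin/env bash
# hooks/validate.sh (illustrative): exit non-zero to halt the action.
# Here: fail if the draft file still contains TODO markers.
if grep -q "TODO" drafts/changelog.md 2>/dev/null; then
  echo "validation failed: TODO markers remain in the draft" >&2
  exit 1
fi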
sequenceDiagram
participant U as User
participant P as Plugin
participant H as Hooks
participant T as Tool
U->>P: /command runs
P->>H: PreToolUse fires
H-->>P: Pass / block
P->>T: Tool executes
T->>H: PostToolUse fires
H-->>P: Pass / block
P->>U: Result returned
Assemble the plugin
Now we wrap everything in a manifest. The manifest is the entry point your AI workspace reads to discover your plugin. The folder structure follows the standard plugin layout so auto-discovery finds your skill, commands, and hooks without any extra config.
{
"name": "plugin-name",
"version": "1.0.0",
"description": "What the plugin does",
"author": { "name": "You" }
}
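Filled in for the illustrative release-notes plugin, the manifest might read:

{
  "name": "release-notes",
  "version": "1.0.0",
  "description": "Drafts weekly release notes from merged pull requests",
  "author": { "name": "You" }
}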
The manifest lives at .claude-plugin/plugin.json. Component directories sit at the plugin root. Auto-discovery handles the rest.
your-plugin/                      ← will be named after your workflow
├── .claude-plugin/
│   └── plugin.json               ← the manifest: name, version, description
├── skills/
│   └── your-skill/
│       └── SKILL.md
├── commands/
│   └── run.md
└── hooks/
    ├── hooks.json
    └── validate.sh
Ship it
Last step. We'll run a pre-flight check, then give you three ways to download what you've built. Pick whichever fits how you want to install in your AI workspace.
Before you run your skill or plugin on production work, evaluate it: your AI workspace includes evaluation skills for exactly this. The skill-creator and plugin-builder skills both have eval modes that test description triggering, schema validity, and end-to-end execution on sample inputs.
Once installed, type evaluate this skill or evaluate this plugin in your AI workspace to run the built-in checks before you point it at real work.
flowchart LR
A[Build] --> B[Install in your AI workspace]
B --> C[Run evaluation]
C --> D[Use for 2 weeks]
D --> E{Decide}
E -->|Working| F[Ship to team]
E -->|Wrong scope| G[Refine]
E -->|Not used| H[Kill]
G --> A
Download your plugin spec
Once downloaded, open your AI workspace, drop the file in, and run install plugin (for the ZIP) or install skill (for the SKILL.md inside the markdown spec). Then run evaluate this plugin before using it on real work.