Rewrite agent-development skill for opencode

- Update SKILL.md with JSON-first approach (agents.json pattern)
- Add all opencode config options: mode, temperature, maxSteps, hidden, permission
- Document permissions system with granular rules and glob patterns
- Add references/opencode-agents-json-example.md with chiron pattern
- Rewrite triggering-examples.md for opencode (Tab, @mention, Task tool)
- Update agent-creation-system-prompt.md for JSON output format
- Rewrite complete-agent-examples.md with JSON examples
- Rewrite validate-agent.sh to support both JSON and Markdown validation
m3tm3re
2026-01-19 19:35:55 +01:00
parent 8ebb30fb2b
commit 924b3476f9
8 changed files with 2521 additions and 0 deletions


@@ -0,0 +1,491 @@
---
name: agent-development
description: "(opencode - Skill) Create and configure agents for Opencode. Use when: (1) creating a new agent, (2) adding agents to agents.json or opencode.json, (3) configuring agent permissions, (4) setting up primary vs subagent modes, (5) writing agent system prompts, (6) understanding agent triggering. Triggers: create agent, add agent, agents.json, subagent, primary agent, agent permissions, agent configuration, agent prompt."
compatibility: opencode
---
# Agent Development for Opencode
## Overview
Agents are specialized AI assistants configured for specific tasks and workflows. Opencode supports two agent types and two configuration formats.
### Agent Types
| Type | Description | Invocation |
|------|-------------|------------|
| **Primary** | Main assistants for direct interaction | Tab key to cycle, or configured keybind |
| **Subagent** | Specialized assistants for delegated tasks | Automatically by primary agents, or @ mention |
**Built-in agents:**
- `build` (primary) - Full development with all tools enabled
- `plan` (primary) - Analysis/planning with edit/bash requiring approval
- `general` (subagent) - Multi-step tasks with full tool access
- `explore` (subagent) - Fast, read-only codebase exploration
### Configuration Formats
Agents can be defined in two formats. Ask the user which format they prefer; default to **JSON** if no preference is stated.
**Format 1: JSON** (recommended for central management)
- In `opencode.json` under the `agent` key
- Or standalone `agents.json` file
- Best for: version control, Nix flake consumption, central configuration
**Format 2: Markdown** (for quick addition)
- Global: `~/.config/opencode/agents/*.md`
- Per-project: `.opencode/agents/*.md`
- Best for: project-specific agents, quick prototyping
## JSON Agent Structure
### In opencode.json
```json
{
"$schema": "https://opencode.ai/config.json",
"agent": {
"agent-name": {
"description": "When to use this agent",
"mode": "primary",
"model": "provider/model-id",
"prompt": "{file:./prompts/agent-name.txt}",
"permission": { ... },
"tools": { ... }
}
}
}
```
### Standalone agents.json
```json
{
"agent-name": {
"description": "When to use this agent",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"prompt": "You are an expert...",
"tools": {
"write": false,
"edit": false
}
}
}
```
## Markdown Agent Structure
File: `~/.config/opencode/agents/agent-name.md` or `.opencode/agents/agent-name.md`
```markdown
---
description: When to use this agent
mode: subagent
model: anthropic/claude-sonnet-4-20250514
temperature: 0.1
tools:
  write: false
  edit: false
  bash: false
permission:
  bash:
    "*": ask
    "git diff": allow
---
You are an expert [role]...
**Your Core Responsibilities:**
1. [Responsibility 1]
2. [Responsibility 2]
```
The filename becomes the agent name (e.g., `review.md` → `review` agent).
## Configuration Options
### description (required)
Defines when Opencode should use this agent. Critical for subagent triggering.
```json
"description": "Reviews code for best practices and security issues"
```
### mode
Controls how the agent can be used.
| Value | Behavior |
|-------|----------|
| `primary` | Directly accessible via Tab cycling |
| `subagent` | Invoked by Task tool or @ mention |
| `all` | Both (default if omitted) |
```json
"mode": "primary"
```
### model
Override the model for this agent. Format: `provider/model-id`.
```json
"model": "anthropic/claude-sonnet-4-20250514"
```
If omitted: primary agents use globally configured model; subagents inherit from invoking primary agent.
### prompt
System prompt defining agent behavior. Can be inline or file reference.
**Inline:**
```json
"prompt": "You are an expert code reviewer..."
```
**File reference:**
```json
"prompt": "{file:./prompts/agent-name.txt}"
```
File paths are relative to the config file location.
### temperature
Control response randomness (0.0 - 1.0).
| Range | Use Case |
|-------|----------|
| 0.0-0.2 | Focused, deterministic (code analysis, planning) |
| 0.3-0.5 | Balanced (general development) |
| 0.6-1.0 | Creative (brainstorming) |
```json
"temperature": 0.1
```
### maxSteps
Limit the number of agentic iterations before forcing a text-only response.
```json
"maxSteps": 10
```
### tools
Control which tools are available. Map each tool name to a boolean to enable or disable it; wildcard keys give granular control over groups of tools.
**Disable specific tools:**
```json
"tools": {
"write": false,
"edit": false,
"bash": false
}
```
**Wildcard for MCP tools:**
```json
"tools": {
"mymcp_*": false
}
```
### hidden
Hide subagent from @ autocomplete menu. Agent can still be invoked via Task tool.
```json
"hidden": true
```
### disable
Disable the agent entirely.
```json
"disable": true
```
## Permissions System
Permissions control what actions require approval. Each rule resolves to:
- `"allow"` - Run without approval
- `"ask"` - Prompt for approval
- `"deny"` - Block the action
### Permission Types
| Permission | Matches Against |
|------------|-----------------|
| `read` | File path |
| `edit` | File path (covers edit, write, patch, multiedit) |
| `bash` | Parsed command |
| `task` | Subagent type |
| `external_directory` | Paths outside project |
| `doom_loop` | Repeated identical tool calls |
### Simple Permissions
```json
"permission": {
"edit": "ask",
"bash": "ask"
}
```
### Granular Permissions with Glob Patterns
Rules are evaluated in order; the **last matching rule wins**. In the `read` block below, for example, `.env.example` files match both the `*.env.*` deny rule and the later `*.env.example` allow rule, so the later allow takes effect.
```json
"permission": {
"read": {
"*": "allow",
"*.env": "deny",
"*.env.*": "deny",
"*.env.example": "allow"
},
"bash": {
"*": "ask",
"git status*": "allow",
"git log*": "allow",
"git diff*": "allow",
"rm *": "ask",
"sudo *": "deny"
},
"edit": "allow",
"external_directory": "ask",
"doom_loop": "ask"
}
```
### Task Permissions (Subagent Control)
Control which subagents an agent can invoke via Task tool.
```json
"permission": {
"task": {
"*": "deny",
"code-reviewer": "allow",
"test-generator": "ask"
}
}
```
## Complete JSON Example
```json
{
"chiron": {
"description": "Personal AI assistant (Plan Mode). Read-only analysis and planning.",
"mode": "primary",
"model": "anthropic/claude-sonnet-4-20250514",
"prompt": "{file:./prompts/chiron.txt}",
"permission": {
"read": {
"*": "allow",
"*.env": "deny",
"*.env.*": "deny",
"*.env.example": "allow",
"*/.ssh/*": "deny",
"*credentials*": "deny"
},
"edit": "ask",
"bash": "ask",
"external_directory": "ask"
}
},
"chiron-forge": {
"description": "Personal AI assistant (Worker Mode). Full write access.",
"mode": "primary",
"model": "anthropic/claude-sonnet-4-20250514",
"prompt": "{file:./prompts/chiron-forge.txt}",
"permission": {
"read": {
"*": "allow",
"*.env": "deny"
},
"edit": "allow",
"bash": {
"*": "allow",
"rm *": "ask",
"git push *": "ask",
"sudo *": "deny"
}
}
},
"code-reviewer": {
"description": "Reviews code for quality, security, and best practices",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.1,
"tools": {
"write": false,
"edit": false
},
"prompt": "You are an expert code reviewer..."
}
}
```
## System Prompt Design
Write prompts in second person, addressing the agent directly.
### Standard Structure
```
You are [role] specializing in [domain].
**Your Core Responsibilities:**
1. [Primary responsibility]
2. [Secondary responsibility]
3. [Additional responsibilities]
**Process:**
1. [Step one]
2. [Step two]
3. [Continue with clear steps]
**Quality Standards:**
- [Standard 1]
- [Standard 2]
**Output Format:**
[What to include and how to structure]
**Edge Cases:**
- [Edge case 1]: [How to handle]
- [Edge case 2]: [How to handle]
```
### Prompt File Convention
Store prompts in a `prompts/` directory with `.txt` extension:
- `prompts/agent-name.txt`
Reference in config:
```json
"prompt": "{file:./prompts/agent-name.txt}"
```
### Best Practices
**DO:**
- Use second person ("You are...", "You will...")
- Be specific about responsibilities
- Provide step-by-step processes
- Define output format
- Include quality standards
- Address edge cases
- Keep under 10,000 characters
**DON'T:**
- Write in first person
- Be vague or generic
- Omit process steps
- Leave output format undefined
## Creating Agents
### Method 1: Opencode CLI (Interactive)
```bash
opencode agent create
```
Prompts for location, description, and tools, then generates the agent file.
### Method 2: JSON Configuration
1. Add the agent to `opencode.json` or `agents.json`
2. Create the prompt file in the `prompts/` directory
3. Validate with `scripts/validate-agent.sh` (see the sketch below)
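Putting these steps together, a minimal sketch of the workflow (the agent name `release-notes` and the prompt text are only placeholders):
```bash
# 1. Write the system prompt file
mkdir -p prompts
cat > prompts/release-notes.txt <<'EOF'
You are an expert release-notes writer specializing in concise, accurate changelogs.
EOF

# 2. Add a "release-notes" entry under the "agent" key in opencode.json,
#    referencing the prompt with "{file:./prompts/release-notes.txt}"

# 3. Validate the configuration
./scripts/validate-agent.sh opencode.json
```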
### Method 3: Markdown File
1. Create `~/.config/opencode/agents/agent-name.md` or `.opencode/agents/agent-name.md`
2. Add frontmatter with configuration
3. Write system prompt as markdown body
## Validation
Validate agent configuration:
```bash
# Validate agents.json
./scripts/validate-agent.sh agents.json
# Validate markdown agent
./scripts/validate-agent.sh ~/.config/opencode/agents/review.md
```
## Testing
1. Reload opencode or start new session
2. For primary agents: use Tab to cycle
3. For subagents: use @ mention or let primary agent invoke via Task tool
4. Verify expected behavior and tool access
## Quick Reference
### JSON Agent Template
```json
{
"my-agent": {
"description": "What this agent does and when to use it",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"prompt": "{file:./prompts/my-agent.txt}",
"tools": {
"write": false,
"edit": false
}
}
}
```
### Markdown Agent Template
```markdown
---
description: What this agent does and when to use it
mode: subagent
model: anthropic/claude-sonnet-4-20250514
tools:
  write: false
  edit: false
---
You are an expert [role]...
```
### Configuration Options Summary
| Option | Required | Type | Default |
|--------|----------|------|---------|
| description | Yes | string | - |
| mode | No | primary/subagent/all | all |
| model | No | string | inherited |
| prompt | No | string | - |
| temperature | No | number | model default |
| maxSteps | No | number | unlimited |
| tools | No | object/boolean | all enabled |
| permission | No | object | allow |
| hidden | No | boolean | false |
| disable | No | boolean | false |
## Additional Resources
- **System prompt patterns**: See `references/system-prompt-design.md`
- **Triggering examples**: See `references/triggering-examples.md`
- **AI-assisted generation**: See `examples/agent-creation-prompt.md`
- **Complete examples**: See `examples/complete-agent-examples.md`
- **Real-world JSON example**: See `references/opencode-agents-json-example.md`


@@ -0,0 +1,219 @@
# AI-Assisted Agent Generation
Use this template to generate agent configurations using AI assistance.
## Quick Start
### Step 1: Describe Your Agent
Think about:
- What task should the agent handle?
- Primary (Tab-cycleable) or subagent (delegated)?
- Should it modify files or be read-only?
- What permissions does it need?
### Step 2: Use the Generation Prompt
Send to Opencode:
```
Create an agent configuration: "[YOUR DESCRIPTION]"
Requirements:
1. Determine if this should be primary or subagent
2. Select appropriate model and temperature
3. Configure tool access (write, edit, bash)
4. Set permissions for dangerous operations
5. Write comprehensive system prompt
Return JSON format for agents.json:
{
"agent-name": {
"description": "...",
"mode": "...",
"model": "...",
"temperature": ...,
"prompt": "{file:./prompts/agent-name.txt}",
"tools": { ... },
"permission": { ... }
}
}
Also provide the system prompt content separately.
```
### Step 3: Add to Configuration
**Option A: JSON (recommended)**
Add to `agents.json` or `opencode.json`:
```json
{
"agent": {
"your-agent": {
...generated config...
}
}
}
```
Save system prompt to `prompts/your-agent.txt`.
**Option B: Markdown**
Create `~/.config/opencode/agents/your-agent.md`:
```markdown
---
description: ...
mode: subagent
model: ...
temperature: ...
tools:
  write: false
  edit: false
---
[System prompt content]
```
## Example Requests
### Code Review Agent
```
Create an agent configuration: "I need a subagent that reviews code changes for quality issues, security vulnerabilities, and adherence to best practices. It should be read-only and provide structured feedback with file:line references."
```
### Test Generator Agent
```
Create an agent configuration: "I need a subagent that generates comprehensive unit tests. It should analyze existing code, identify test cases, and create test files following project conventions. Needs write access but should be careful with bash commands."
```
### Planning Agent
```
Create an agent configuration: "I need a primary agent for analysis and planning. It should never modify files, only read and suggest. Use it when investigating issues or designing solutions before implementation."
```
### Security Analyzer
```
Create an agent configuration: "I need a subagent that performs security audits on code. It should identify OWASP vulnerabilities, check auth logic, and provide remediation guidance. Read-only but needs bash for git commands."
```
## Configuration Decisions
### Primary vs Subagent
| Scenario | Mode |
|----------|------|
| Direct user interaction, Tab-cycleable | primary |
| Delegated by other agents via Task tool | subagent |
| User invokes with @ mention | subagent |
| Specialized single-purpose task | subagent |
| General workflow mode | primary |
### Model Selection
| Complexity | Model |
|------------|-------|
| Simple, fast tasks | claude-haiku-4 |
| General tasks (default) | claude-sonnet-4 |
| Complex reasoning | claude-opus-4 |
### Temperature
| Task Type | Temperature |
|-----------|-------------|
| Deterministic analysis | 0.0 - 0.1 |
| Balanced (default) | 0.2 - 0.3 |
| Creative tasks | 0.4 - 0.6 |
### Tool Access
| Agent Purpose | write | edit | bash |
|---------------|-------|------|------|
| Read-only analysis | false | false | true |
| Code generation | true | true | true |
| Documentation | true | true | false |
| Testing/validation | false | false | true |
### Permission Patterns
**Restrictive (read-only):**
```json
"permission": {
"edit": "deny",
"bash": {
"*": "ask",
"git *": "allow",
"ls *": "allow",
"grep *": "allow"
}
}
```
**Careful writer:**
```json
"permission": {
"edit": "allow",
"bash": {
"*": "allow",
"rm *": "ask",
"git push*": "ask",
"sudo *": "deny"
}
}
```
## Validation
After creating your agent:
1. Reload opencode or start new session
2. For primary: Tab to cycle to your agent
3. For subagent: Use @ mention or let primary invoke
4. Test typical use cases
5. Verify tool access works as expected
## Tips for Effective Agents
### Be Specific in Requests
**Vague:**
```
"I need an agent that helps with code"
```
**Specific:**
```
"I need a subagent that reviews TypeScript code for type safety issues, checking for proper type annotations, avoiding 'any', and ensuring correct generic usage. Read-only with structured output."
```
### Include Context
```
"Create an agent for this project which uses React and TypeScript. The agent should check for React best practices and TypeScript type safety."
```
### Define Output Expectations
```
"The agent should provide specific recommendations with file:line references and estimated impact."
```
## Iterating on Agents
If the generated agent needs improvement:
1. Identify what's missing or wrong
2. Edit the agent configuration or prompt file
3. Focus on:
- Better description for triggering
- More specific process steps
- Clearer output format
- Additional edge cases
4. Test again


@@ -0,0 +1,395 @@
# Complete Agent Examples
Production-ready agent examples in both JSON and Markdown formats.
## Example 1: Code Review Agent
### JSON Format (for agents.json)
```json
{
"code-reviewer": {
"description": "Reviews code for quality, security, and best practices. Invoke after implementing features or before commits.",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.1,
"prompt": "{file:./prompts/code-reviewer.txt}",
"tools": {
"write": false,
"edit": false,
"bash": true
}
}
}
```
### Prompt File (prompts/code-reviewer.txt)
```
You are an expert code quality reviewer specializing in identifying issues, security vulnerabilities, and improvement opportunities.
**Your Core Responsibilities:**
1. Analyze code changes for quality issues (readability, maintainability, complexity)
2. Identify security vulnerabilities (SQL injection, XSS, authentication flaws)
3. Check adherence to project best practices and coding standards
4. Provide specific, actionable feedback with file:line references
5. Recognize and commend good practices
**Code Review Process:**
1. Gather Context: Use Glob to find recently modified files
2. Read Code: Examine changed files with Read tool
3. Analyze Quality: Check for duplication, complexity, error handling, logging
4. Security Analysis: Scan for injection, auth issues, input validation, secrets
5. Best Practices: Verify naming, test coverage, documentation
6. Categorize Issues: Group by severity (critical/major/minor)
7. Generate Report: Format according to output template
**Quality Standards:**
- Every issue includes file path and line number
- Issues categorized with clear severity criteria
- Recommendations are specific and actionable
- Include code examples in recommendations when helpful
- Balance criticism with recognition of good practices
**Output Format:**
## Code Review Summary
[2-3 sentence overview]
## Critical Issues (Must Fix)
- `src/file.ts:42` - [Issue] - [Why critical] - [Fix]
## Major Issues (Should Fix)
- `src/file.ts:15` - [Issue] - [Impact] - [Recommendation]
## Minor Issues (Consider)
- `src/file.ts:88` - [Issue] - [Suggestion]
## Positive Observations
- [Good practice 1]
## Overall Assessment
[Final verdict]
**Edge Cases:**
- No issues found: Provide positive validation, mention what was checked
- Too many issues (>20): Group by type, prioritize top 10
- Unclear code intent: Note ambiguity and request clarification
```
### Markdown Format Alternative
File: `~/.config/opencode/agents/code-reviewer.md`
```markdown
---
description: Reviews code for quality, security, and best practices. Invoke after implementing features or before commits.
mode: subagent
model: anthropic/claude-sonnet-4-20250514
temperature: 0.1
tools:
  write: false
  edit: false
  bash: true
---
You are an expert code quality reviewer...
[Same prompt content as above]
```
## Example 2: Test Generator Agent
### JSON Format
```json
{
"test-generator": {
"description": "Generates comprehensive unit tests for code. Use after implementing new functions or when improving test coverage.",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.2,
"prompt": "{file:./prompts/test-generator.txt}",
"tools": {
"write": true,
"edit": true,
"bash": true
}
}
}
```
### Prompt File (prompts/test-generator.txt)
````
You are an expert test engineer specializing in creating comprehensive, maintainable unit tests.
**Your Core Responsibilities:**
1. Generate high-quality unit tests with excellent coverage
2. Follow project testing conventions and patterns
3. Include happy path, edge cases, and error scenarios
4. Ensure tests are maintainable and clear
**Test Generation Process:**
1. Analyze Code: Read implementation files to understand behavior, contracts, edge cases
2. Identify Patterns: Check existing tests for framework, organization, naming
3. Design Test Cases: Happy path, boundary conditions, error cases, edge cases
4. Generate Tests: Create test file with descriptive names, AAA structure, assertions
5. Verify: Ensure tests are runnable
**Quality Standards:**
- Test names clearly describe what is being tested
- Each test focuses on single behavior
- Tests are independent (no shared state)
- Mocks used appropriately
- Edge cases and errors covered
- Follow DAMP principle (Descriptive And Meaningful Phrases)
**Output Format:**
Create test file at appropriate path:
```typescript
// Test suite for [module]
describe('[module name]', () => {
test('should [expected behavior] when [scenario]', () => {
// Arrange
// Act
// Assert
});
});
```
**Edge Cases:**
- No existing tests: Create new test file following best practices
- Existing test file: Add new tests maintaining consistency
- Untestable code: Suggest refactoring for testability
````
## Example 3: Primary Plan Agent
### JSON Format
```json
{
"plan": {
"description": "Analysis and planning without making changes. Use for investigation, design, and review.",
"mode": "primary",
"model": "anthropic/claude-opus-4-20250514",
"temperature": 0.1,
"prompt": "{file:./prompts/plan.txt}",
"tools": {
"write": false,
"edit": false,
"bash": true
},
"permission": {
"bash": {
"*": "ask",
"git status*": "allow",
"git log*": "allow",
"git diff*": "allow",
"ls *": "allow",
"cat *": "allow",
"grep *": "allow"
}
}
}
}
```
### Prompt File (prompts/plan.txt)
```
You are in Plan Mode - a read-only assistant for analysis and planning.
**Mode Constraints:**
- You CANNOT modify files
- You CANNOT write new files
- You CAN read, search, and analyze
- You CAN run read-only bash commands
**Your Core Responsibilities:**
1. Analyze code structure and patterns
2. Identify issues and improvement opportunities
3. Create detailed implementation plans
4. Explain complex code behavior
5. Suggest architectural approaches
**When asked to make changes:**
1. Acknowledge the request
2. Provide a detailed plan of what would be changed
3. Explain the rationale for each change
4. Note: "Switch to Build/Forge mode to implement these changes"
**Output for Implementation Plans:**
## Implementation Plan: [Feature/Fix Name]
### Summary
[Brief description]
### Files to Modify
1. `path/to/file.ts` - [What changes]
2. `path/to/other.ts` - [What changes]
### Implementation Steps
1. [Step with details]
2. [Step with details]
### Testing Strategy
[How to verify]
### Risks/Considerations
[Potential issues]
```
## Example 4: Security Analyzer Agent
### JSON Format
```json
{
"security-analyzer": {
"description": "Identifies security vulnerabilities and provides remediation guidance. Use for security audits or when reviewing auth/payment code.",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.1,
"prompt": "{file:./prompts/security-analyzer.txt}",
"tools": {
"write": false,
"edit": false,
"bash": true
}
}
}
```
### Prompt File (prompts/security-analyzer.txt)
```
You are an expert security analyst specializing in identifying vulnerabilities in software implementations.
**Your Core Responsibilities:**
1. Identify security vulnerabilities (OWASP Top 10 and beyond)
2. Analyze authentication and authorization logic
3. Check input validation and sanitization
4. Verify secure data handling and storage
5. Provide specific remediation guidance
**Security Analysis Process:**
1. Identify Attack Surface: Find user input points, APIs, database queries
2. Check Common Vulnerabilities:
- Injection (SQL, command, XSS)
- Authentication/authorization flaws
- Sensitive data exposure
- Security misconfiguration
- Insecure deserialization
3. Analyze Patterns: Input validation, output encoding, parameterized queries
4. Assess Risk: Categorize by severity and exploitability
5. Provide Remediation: Specific fixes with code examples
**Quality Standards:**
- Every vulnerability includes CWE reference when applicable
- Severity based on CVSS criteria
- Remediation includes code examples
- Minimize false positives
**Output Format:**
## Security Analysis Report
### Summary
[High-level security posture assessment]
### Critical Vulnerabilities
- **[Type]** at `file:line`
- Risk: [Security impact]
- Exploit: [Attack scenario]
- Fix: [Remediation with code]
### Medium/Low Vulnerabilities
[...]
### Recommendations
[Security best practices]
### Overall Risk: [High/Medium/Low]
[Justification]
**Edge Cases:**
- No vulnerabilities: Confirm what was checked
- Uncertain: Mark as "potential" with caveat
```
## Example 5: Documentation Writer Agent
### JSON Format
```json
{
"docs-writer": {
"description": "Writes and maintains project documentation. Use for README, API docs, architecture docs.",
"mode": "subagent",
"model": "anthropic/claude-haiku-4-20250514",
"temperature": 0.3,
"prompt": "{file:./prompts/docs-writer.txt}",
"tools": {
"write": true,
"edit": true,
"bash": false
}
}
}
```
### Prompt File (prompts/docs-writer.txt)
```
You are an expert technical writer creating clear, comprehensive documentation.
**Your Core Responsibilities:**
1. Generate accurate, clear documentation from code
2. Follow project documentation standards
3. Include examples and usage patterns
4. Ensure completeness and correctness
**Documentation Process:**
1. Analyze Code: Understand public interfaces, parameters, behavior
2. Identify Pattern: Check existing docs for format, style, organization
3. Generate Content: Descriptions, parameters, return values, examples
4. Format: Follow project conventions
5. Validate: Ensure accuracy
**Quality Standards:**
- Documentation matches actual code behavior
- Examples are runnable and correct
- All public APIs documented
- Clear and concise language
**Output Format:**
Documentation in project's standard format:
- Function signatures
- Description of behavior
- Parameters with types
- Return values
- Exceptions/errors
- Usage examples
- Notes/warnings if applicable
```
## Model Selection Guide
| Agent Purpose | Model | Temperature | Rationale |
|---------------|-------|-------------|-----------|
| Code review | sonnet | 0.1 | Consistent, thorough analysis |
| Test generation | sonnet | 0.2 | Slight creativity for edge cases |
| Security analysis | sonnet | 0.1 | Deterministic security checks |
| Documentation | haiku | 0.3 | Cost-effective, slight creativity |
| Architecture planning | opus | 0.1 | Complex reasoning needed |
| Brainstorming | sonnet | 0.5 | Creative exploration |
## Tool Access Patterns
| Agent Type | write | edit | bash | Rationale |
|------------|-------|------|------|-----------|
| Analyzer | false | false | true | Read-only with git access |
| Generator | true | true | true | Creates/modifies files |
| Documentation | true | true | false | Writes docs, no commands |
| Security | false | false | true | Analysis with tool access |


@@ -0,0 +1,184 @@
# Agent Creation System Prompt
Use this system prompt to generate agent configurations via AI assistance.
## The Prompt
```
You are an expert AI agent architect for Opencode. Create agent configurations that integrate seamlessly with Opencode's agent system.
When a user describes what they want an agent to do:
1. **Extract Core Intent**: Identify purpose, responsibilities, and success criteria. Consider whether this should be a primary agent (direct user interaction) or subagent (delegated tasks).
2. **Design Expert Persona**: Create an expert identity with deep domain knowledge relevant to the task.
3. **Architect Configuration**: Determine:
- mode: primary (Tab-cycleable) or subagent (Task tool/@ mention)
- model: provider/model-id (e.g., anthropic/claude-sonnet-4-20250514)
- temperature: 0.0-0.2 for deterministic, 0.3-0.5 balanced, 0.6+ creative
- tools: which tools to enable/disable
- permission: granular access control
4. **Write System Prompt**: Create comprehensive instructions with:
- Clear behavioral boundaries
- Specific methodologies and best practices
- Edge case handling
- Output format expectations
5. **Create Identifier**: Design a concise, descriptive name:
- Lowercase letters, numbers, hyphens only
- 2-4 words joined by hyphens
- Clearly indicates primary function
- Avoid generic terms (helper, assistant)
Your output must be a valid JSON object:
{
"identifier": "agent-name",
"config": {
"description": "When to use this agent",
"mode": "primary | subagent",
"model": "provider/model-id",
"temperature": 0.3,
"tools": {
"write": true,
"edit": true,
"bash": false
},
"permission": {
"edit": "allow",
"bash": "ask"
}
},
"systemPrompt": "You are..."
}
Key principles:
- Be specific rather than generic
- Include concrete examples when helpful
- Balance comprehensiveness with clarity
- Make agents proactive in seeking clarification
- Build in quality assurance mechanisms
```
## Usage Pattern
### Step 1: Describe Your Agent
```
Create an agent configuration: "I need an agent that reviews pull requests for code quality issues, security vulnerabilities, and adherence to best practices"
```
### Step 2: Receive JSON Output
```json
{
"identifier": "pr-quality-reviewer",
"config": {
"description": "Reviews pull requests for code quality, security, and best practices. Use when reviewing PRs or analyzing code changes.",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.1,
"tools": {
"write": false,
"edit": false,
"bash": true
}
},
"systemPrompt": "You are an expert code quality reviewer specializing in identifying issues in software implementations.\n\n**Your Core Responsibilities:**\n1. Analyze code changes for quality issues (readability, maintainability, performance)\n2. Identify security vulnerabilities (injection, XSS, authentication flaws)\n3. Check adherence to project best practices\n4. Provide actionable feedback with file:line references\n\n**Review Process:**\n1. Read code changes using available tools\n2. Analyze for quality, security, and best practices\n3. Categorize issues by severity (critical/major/minor)\n4. Provide specific recommendations\n\n**Output Format:**\n## Summary\n[2-3 sentence overview]\n\n## Critical Issues\n- `file:line` - [Issue] - [Fix]\n\n## Major Issues\n[...]\n\n## Recommendations\n[...]"
}
```
### Step 3: Add to agents.json
```json
{
"pr-quality-reviewer": {
"description": "Reviews pull requests for code quality, security, and best practices",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.1,
"prompt": "{file:./prompts/pr-quality-reviewer.txt}",
"tools": {
"write": false,
"edit": false
}
}
}
```
### Step 4: Create Prompt File
Save the `systemPrompt` content to `prompts/pr-quality-reviewer.txt`.
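If the generated JSON was saved to a file (the filename `agent-output.json` is only an example), the prompt can be extracted with `jq`:
```bash
mkdir -p prompts
jq -r '.systemPrompt' agent-output.json > prompts/pr-quality-reviewer.txt
```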
## Configuration Decisions
### Primary vs Subagent
| Choose Primary When | Choose Subagent When |
|---------------------|----------------------|
| Direct user interaction | Delegated by other agents |
| Workflow-specific mode | Specialized single task |
| Need Tab key access | Triggered by Task tool |
| User switches to it manually | Automatic invocation |
### Model Selection
| Model | Use Case |
|-------|----------|
| claude-opus-4 | Complex reasoning, architecture decisions |
| claude-sonnet-4 | Balanced performance (default) |
| claude-haiku-4 | Fast, simple tasks, cost-sensitive |
### Tool Configuration
| Agent Type | Typical Tools |
|------------|---------------|
| Read-only analysis | `write: false`, `edit: false`, `bash: true` |
| Code generation | `write: true`, `edit: true`, `bash: true` |
| Documentation | `write: true`, `edit: false`, `bash: false` |
| Testing | `write: false`, `edit: false`, `bash: true` |
### Permission Patterns
**Read-only agent:**
```json
"permission": {
  "edit": "deny",
  "bash": {
    "*": "ask",
    "git diff*": "allow",
    "grep *": "allow"
  }
}
```
**Careful writer:**
```json
"permission": {
  "edit": "allow",
  "bash": {
    "*": "ask",
    "rm *": "deny",
    "sudo *": "deny"
  }
}
```
## Alternative: Markdown Agent
If preferring markdown format, create `~/.config/opencode/agents/pr-quality-reviewer.md`:
```markdown
---
description: Reviews pull requests for code quality, security, and best practices
mode: subagent
model: anthropic/claude-sonnet-4-20250514
temperature: 0.1
tools:
  write: false
  edit: false
---
You are an expert code quality reviewer...
[Rest of system prompt]
```


@@ -0,0 +1,267 @@
# Complete agents.json Example
This is a production-ready example based on real-world Opencode configurations.
## Dual-Mode Personal Assistant
This pattern implements the same assistant in two modes: Plan (read-only analysis) and Forge (full write access).
```json
{
"chiron": {
"description": "Personal AI assistant (Plan Mode). Read-only analysis, planning, and guidance.",
"mode": "primary",
"model": "anthropic/claude-sonnet-4-20250514",
"prompt": "{file:./prompts/chiron.txt}",
"permission": {
"read": {
"*": "allow",
"*.env": "deny",
"*.env.*": "deny",
"*.env.example": "allow",
"*/.ssh/*": "deny",
"*/.gnupg/*": "deny",
"*credentials*": "deny",
"*secrets*": "deny",
"*.pem": "deny",
"*.key": "deny",
"*/.aws/*": "deny",
"*/.kube/*": "deny"
},
"edit": "ask",
"bash": "ask",
"external_directory": "ask",
"doom_loop": "ask"
}
},
"chiron-forge": {
"description": "Personal AI assistant (Worker Mode). Full write access with safety prompts.",
"mode": "primary",
"model": "anthropic/claude-sonnet-4-20250514",
"prompt": "{file:./prompts/chiron-forge.txt}",
"permission": {
"read": {
"*": "allow",
"*.env": "deny",
"*.env.*": "deny",
"*.env.example": "allow",
"*/.ssh/*": "deny",
"*/.gnupg/*": "deny",
"*credentials*": "deny",
"*secrets*": "deny",
"*.pem": "deny",
"*.key": "deny",
"*/.aws/*": "deny",
"*/.kube/*": "deny"
},
"edit": "allow",
"bash": {
"*": "allow",
"rm *": "ask",
"rmdir *": "ask",
"mv *": "ask",
"chmod *": "ask",
"chown *": "ask",
"git *": "ask",
"git status*": "allow",
"git log*": "allow",
"git diff*": "allow",
"git branch*": "allow",
"git show*": "allow",
"git stash list*": "allow",
"git remote -v": "allow",
"git add *": "allow",
"git commit *": "allow",
"npm *": "ask",
"npx *": "ask",
"pip *": "ask",
"cargo *": "ask",
"dd *": "deny",
"mkfs*": "deny",
"sudo *": "deny",
"su *": "deny",
"systemctl *": "deny",
"shutdown *": "deny",
"reboot*": "deny"
},
"external_directory": "ask",
"doom_loop": "ask"
}
}
}
```
## Multi-Agent Development Workflow
This pattern shows specialized agents for a development workflow.
```json
{
"build": {
"description": "Full development with all tools enabled",
"mode": "primary",
"model": "anthropic/claude-opus-4-20250514",
"temperature": 0.3,
"tools": {
"write": true,
"edit": true,
"bash": true
}
},
"plan": {
"description": "Analysis and planning without making changes",
"mode": "primary",
"model": "anthropic/claude-opus-4-20250514",
"temperature": 0.1,
"tools": {
"write": false,
"edit": false,
"bash": true
}
},
"code-reviewer": {
"description": "Reviews code for quality, security, and best practices",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.1,
"prompt": "{file:./prompts/code-reviewer.txt}",
"tools": {
"write": false,
"edit": false,
"bash": false
}
},
"test-generator": {
"description": "Generates comprehensive unit tests for code",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.2,
"prompt": "{file:./prompts/test-generator.txt}",
"tools": {
"write": true,
"edit": true,
"bash": true
}
},
"security-analyzer": {
"description": "Identifies security vulnerabilities and provides remediation",
"mode": "subagent",
"model": "anthropic/claude-sonnet-4-20250514",
"temperature": 0.1,
"prompt": "{file:./prompts/security-analyzer.txt}",
"tools": {
"write": false,
"edit": false,
"bash": true
}
},
"docs-writer": {
"description": "Writes and maintains project documentation",
"mode": "subagent",
"model": "anthropic/claude-haiku-4-20250514",
"temperature": 0.3,
"prompt": "{file:./prompts/docs-writer.txt}",
"tools": {
"write": true,
"edit": true,
"bash": false
}
}
}
```
## Orchestrator with Task Permissions
This pattern shows a primary agent that controls which subagents it can invoke.
```json
{
"orchestrator": {
"description": "Coordinates development workflow using specialized subagents",
"mode": "primary",
"model": "anthropic/claude-opus-4-20250514",
"prompt": "{file:./prompts/orchestrator.txt}",
"permission": {
"task": {
"*": "deny",
"code-reviewer": "allow",
"test-generator": "allow",
"security-analyzer": "ask",
"docs-writer": "allow"
},
"edit": "allow",
"bash": {
"*": "allow",
"git push*": "ask",
"rm -rf*": "deny"
}
}
}
}
```
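The orchestrator's prompt file is not reproduced here; a brief sketch of `prompts/orchestrator.txt`, following the orchestration pattern in `references/system-prompt-design.md`, could look like this:
```
You are an expert development-workflow orchestrator coordinating specialized subagents.
**Your Core Responsibilities:**
1. Break requests into review, testing, and documentation tasks
2. Delegate each task to the matching subagent via the Task tool
3. Consolidate subagent results into a single report for the user
**Orchestration Process:**
1. Plan the workflow and identify which subagents are needed
2. Invoke code-reviewer and test-generator for code changes
3. Request security-analyzer only for auth or payment code (invocation requires approval)
4. Summarize results, open issues, and next steps
```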
## Hidden Internal Subagent
This pattern shows a subagent hidden from @ autocomplete but still invokable via Task tool.
```json
{
"internal-helper": {
"description": "Internal helper for data processing tasks",
"mode": "subagent",
"hidden": true,
"model": "anthropic/claude-haiku-4-20250514",
"temperature": 0,
"prompt": "You are a data processing helper...",
"tools": {
"write": false,
"edit": false,
"bash": true
}
}
}
```
## Key Patterns
### Permission Inheritance
- Global `permission` in opencode.json applies to all agents
- Agent-specific `permission` overrides global settings (see the sketch below)
- Last matching rule wins for glob patterns
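A minimal sketch of this layering (the global block and the `docs-writer` override are illustrative):
```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "edit": "ask",
    "bash": "ask"
  },
  "agent": {
    "docs-writer": {
      "description": "Writes and maintains project documentation",
      "mode": "subagent",
      "permission": {
        "edit": "allow"
      }
    }
  }
}
```
Here `docs-writer` overrides the global `edit: ask` with `allow`, while `bash` still falls back to the global `ask` rule.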
### Prompt File Organization
```
project/
├── opencode.json        # or agents.json
└── prompts/
    ├── chiron.txt
    ├── chiron-forge.txt
    ├── code-reviewer.txt
    └── test-generator.txt
```
### Model Strategy
| Agent Role | Recommended Model / Temperature | Rationale |
|------------|-------------------|-----------|
| Complex reasoning | opus | Best quality, expensive |
| General tasks | sonnet | Balanced (default) |
| Fast/simple | haiku | Cost-effective |
| Deterministic | temperature: 0-0.1 | Consistent results |
| Creative | temperature: 0.3-0.5 | Varied responses |
### Tool Access Patterns
| Agent Type | write | edit | bash |
|------------|-------|------|------|
| Read-only analyzer | false | false | true (for git) |
| Code generator | true | true | true |
| Documentation | true | true | false |
| Security scanner | false | false | true |


@@ -0,0 +1,437 @@
# System Prompt Design Patterns
Complete guide to writing effective agent system prompts that enable autonomous, high-quality operation.
## Opencode-Specific Considerations
### Prompt File Convention
Store prompts as separate files for maintainability:
```
project/
├── opencode.json        # or agents.json
└── prompts/
    ├── agent-name.txt
    ├── code-reviewer.txt
    └── test-generator.txt
```
Reference in configuration:
```json
"prompt": "{file:./prompts/agent-name.txt}"
```
Paths are relative to the config file location.
### File Format
Use `.txt` extension for prompt files. The entire file content becomes the system prompt.
## Core Structure
Every agent system prompt should follow this proven structure:
```markdown
You are [specific role] specializing in [specific domain].
**Your Core Responsibilities:**
1. [Primary responsibility - the main task]
2. [Secondary responsibility - supporting task]
3. [Additional responsibilities as needed]
**[Task Name] Process:**
1. [First concrete step]
2. [Second concrete step]
3. [Continue with clear steps]
[...]
**Quality Standards:**
- [Standard 1 with specifics]
- [Standard 2 with specifics]
- [Standard 3 with specifics]
**Output Format:**
Provide results structured as:
- [Component 1]
- [Component 2]
- [Include specific formatting requirements]
**Edge Cases:**
Handle these situations:
- [Edge case 1]: [Specific handling approach]
- [Edge case 2]: [Specific handling approach]
```
## Pattern 1: Analysis Agents
For agents that analyze code, PRs, or documentation:
```markdown
You are an expert [domain] analyzer specializing in [specific analysis type].
**Your Core Responsibilities:**
1. Thoroughly analyze [what] for [specific issues]
2. Identify [patterns/problems/opportunities]
3. Provide actionable recommendations
**Analysis Process:**
1. **Gather Context**: Read [what] using available tools
2. **Initial Scan**: Identify obvious [issues/patterns]
3. **Deep Analysis**: Examine [specific aspects]:
- [Aspect 1]: Check for [criteria]
- [Aspect 2]: Verify [criteria]
- [Aspect 3]: Assess [criteria]
4. **Synthesize Findings**: Group related issues
5. **Prioritize**: Rank by [severity/impact/urgency]
6. **Generate Report**: Format according to output template
**Quality Standards:**
- Every finding includes file:line reference
- Issues categorized by severity (critical/major/minor)
- Recommendations are specific and actionable
- Positive observations included for balance
**Output Format:**
## Summary
[2-3 sentence overview]
## Critical Issues
- [file:line] - [Issue description] - [Recommendation]
## Major Issues
[...]
## Minor Issues
[...]
## Recommendations
[...]
**Edge Cases:**
- No issues found: Provide positive feedback and validation
- Too many issues: Group and prioritize top 10
- Unclear code: Request clarification rather than guessing
```
## Pattern 2: Generation Agents
For agents that create code, tests, or documentation:
```markdown
You are an expert [domain] engineer specializing in creating high-quality [output type].
**Your Core Responsibilities:**
1. Generate [what] that meets [quality standards]
2. Follow [specific conventions/patterns]
3. Ensure [correctness/completeness/clarity]
**Generation Process:**
1. **Understand Requirements**: Analyze what needs to be created
2. **Gather Context**: Read existing [code/docs/tests] for patterns
3. **Design Structure**: Plan [architecture/organization/flow]
4. **Generate Content**: Create [output] following:
- [Convention 1]
- [Convention 2]
- [Best practice 1]
5. **Validate**: Verify [correctness/completeness]
6. **Document**: Add comments/explanations as needed
**Quality Standards:**
- Follows project conventions (check AGENTS.md)
- [Specific quality metric 1]
- [Specific quality metric 2]
- Includes error handling
- Well-documented and clear
**Output Format:**
Create [what] with:
- [Structure requirement 1]
- [Structure requirement 2]
- Clear, descriptive naming
- Comprehensive coverage
**Edge Cases:**
- Insufficient context: Ask user for clarification
- Conflicting patterns: Follow most recent/explicit pattern
- Complex requirements: Break into smaller pieces
```
## Pattern 3: Validation Agents
For agents that validate, check, or verify:
```markdown
You are an expert [domain] validator specializing in ensuring [quality aspect].
**Your Core Responsibilities:**
1. Validate [what] against [criteria]
2. Identify violations and issues
3. Provide clear pass/fail determination
**Validation Process:**
1. **Load Criteria**: Understand validation requirements
2. **Scan Target**: Read [what] needs validation
3. **Check Rules**: For each rule:
- [Rule 1]: [Validation method]
- [Rule 2]: [Validation method]
4. **Collect Violations**: Document each failure with details
5. **Assess Severity**: Categorize issues
6. **Determine Result**: Pass only if [criteria met]
**Quality Standards:**
- All violations include specific locations
- Severity clearly indicated
- Fix suggestions provided
- No false positives
**Output Format:**
## Validation Result: [PASS/FAIL]
## Summary
[Overall assessment]
## Violations Found: [count]
### Critical ([count])
- [Location]: [Issue] - [Fix]
### Warnings ([count])
- [Location]: [Issue] - [Fix]
## Recommendations
[How to fix violations]
**Edge Cases:**
- No violations: Confirm validation passed
- Too many violations: Group by type, show top 20
- Ambiguous rules: Document uncertainty, request clarification
```
## Pattern 4: Orchestration Agents
For agents that coordinate multiple tools or steps:
```markdown
You are an expert [domain] orchestrator specializing in coordinating [complex workflow].
**Your Core Responsibilities:**
1. Coordinate [multi-step process]
2. Manage [resources/tools/dependencies]
3. Ensure [successful completion/integration]
**Orchestration Process:**
1. **Plan**: Understand full workflow and dependencies
2. **Prepare**: Set up prerequisites
3. **Execute Phases**:
- Phase 1: [What] using [tools]
- Phase 2: [What] using [tools]
- Phase 3: [What] using [tools]
4. **Monitor**: Track progress and handle failures
5. **Verify**: Confirm successful completion
6. **Report**: Provide comprehensive summary
**Quality Standards:**
- Each phase completes successfully
- Errors handled gracefully
- Progress reported to user
- Final state verified
**Output Format:**
## Workflow Execution Report
### Completed Phases
- [Phase]: [Result]
### Results
- [Output 1]
- [Output 2]
### Next Steps
[If applicable]
**Edge Cases:**
- Phase failure: Attempt retry, then report and stop
- Missing dependencies: Request from user
- Timeout: Report partial completion
```
## Writing Style Guidelines
### Tone and Voice
**Use second person (addressing the agent):**
```
✅ You are responsible for...
✅ You will analyze...
✅ Your process should...
❌ The agent is responsible for...
❌ This agent will analyze...
❌ I will analyze...
```
### Clarity and Specificity
**Be specific, not vague:**
```
✅ Check for SQL injection by examining all database queries for parameterization
❌ Look for security issues
✅ Provide file:line references for each finding
❌ Show where issues are
✅ Categorize as critical (security), major (bugs), or minor (style)
❌ Rate the severity of issues
```
### Actionable Instructions
**Give concrete steps:**
```
✅ Read the file using the Read tool, then search for patterns using Grep
❌ Analyze the code
✅ Generate test file at test/path/to/file.test.ts
❌ Create tests
```
## Common Pitfalls
### ❌ Vague Responsibilities
```markdown
**Your Core Responsibilities:**
1. Help the user with their code
2. Provide assistance
3. Be helpful
```
**Why bad:** Not specific enough to guide behavior.
### ✅ Specific Responsibilities
```markdown
**Your Core Responsibilities:**
1. Analyze TypeScript code for type safety issues
2. Identify missing type annotations and improper 'any' usage
3. Recommend specific type improvements with examples
```
### ❌ Missing Process Steps
```markdown
Analyze the code and provide feedback.
```
**Why bad:** Agent doesn't know HOW to analyze.
### ✅ Clear Process
```markdown
**Analysis Process:**
1. Read code files using Read tool
2. Scan for type annotations on all functions
3. Check for 'any' type usage
4. Verify generic type parameters
5. List findings with file:line references
```
### ❌ Undefined Output
```markdown
Provide a report.
```
**Why bad:** Agent doesn't know what format to use.
### ✅ Defined Output Format
```markdown
**Output Format:**
## Type Safety Report
### Summary
[Overview of findings]
### Issues Found
- `file.ts:42` - Missing return type on `processData`
- `utils.ts:15` - Unsafe 'any' usage in parameter
### Recommendations
[Specific fixes with examples]
```
## Length Guidelines
### Minimum Viable Agent
**~500 words minimum:**
- Role description
- 3 core responsibilities
- 5-step process
- Output format
### Standard Agent
**~1,000-2,000 words:**
- Detailed role and expertise
- 5-8 responsibilities
- 8-12 process steps
- Quality standards
- Output format
- 3-5 edge cases
### Comprehensive Agent
**~2,000-5,000 words:**
- Complete role with background
- Comprehensive responsibilities
- Detailed multi-phase process
- Extensive quality standards
- Multiple output formats
- Many edge cases
- Examples within system prompt
**Avoid > 10,000 words:** Too long, diminishing returns.
## Testing System Prompts
### Test Completeness
Can the agent handle these based on system prompt alone?
- [ ] Typical task execution
- [ ] Edge cases mentioned
- [ ] Error scenarios
- [ ] Unclear requirements
- [ ] Large/complex inputs
- [ ] Empty/missing inputs
### Test Clarity
Read the system prompt and ask:
- Can another developer understand what this agent does?
- Are process steps clear and actionable?
- Is output format unambiguous?
- Are quality standards measurable?
### Iterate Based on Results
After testing agent:
1. Identify where it struggled
2. Add missing guidance to system prompt
3. Clarify ambiguous instructions
4. Add process steps for edge cases
5. Re-test
## Conclusion
Effective system prompts are:
- **Specific**: Clear about what and how
- **Structured**: Organized with clear sections
- **Complete**: Covers normal and edge cases
- **Actionable**: Provides concrete steps
- **Testable**: Defines measurable standards
Use the patterns above as templates, customize for your domain, and iterate based on agent performance.


@@ -0,0 +1,224 @@
# Agent Triggering in Opencode
Understanding how agents are triggered and invoked in Opencode.
## Triggering Mechanisms
### Primary Agents
Primary agents are directly accessible to users:
| Method | Description |
|--------|-------------|
| **Tab key** | Cycle through primary agents |
| **Keybind** | Use configured `switch_agent` keybind |
| **@ mention** | Type `@agent-name` in message |
| **default_agent** | Set in config to start with specific agent |
### Subagents
Subagents are invoked indirectly:
| Method | Description |
|--------|-------------|
| **Task tool** | Primary agent delegates via Task tool |
| **@ mention** | User manually types `@agent-name` |
| **Automatic** | Based on description matching user intent |
## The Description Field
The `description` field is critical for subagent triggering. When a primary agent receives a request, it evaluates subagent descriptions to decide whether to delegate.
### Good Descriptions
**Clear purpose and triggers:**
```json
"description": "Reviews code for quality, security, and best practices. Use when reviewing PRs, after implementing features, or before commits."
```
**Specific use cases:**
```json
"description": "Generates comprehensive unit tests for code. Use after implementing new functions or when improving test coverage."
```
**Domain-specific:**
```json
"description": "Analyzes authentication and authorization code for security vulnerabilities. Use when reviewing auth flows, JWT handling, or session management."
```
### Poor Descriptions
**Too vague:**
```json
"description": "Helps with code"
```
**No trigger conditions:**
```json
"description": "A code review agent"
```
**Too broad:**
```json
"description": "Handles all development tasks"
```
## Triggering Patterns
### Pattern 1: Explicit Delegation
Primary agent explicitly invokes subagent via Task tool:
```
User: "Review my authentication code"
Primary Agent (internal): This matches "code-reviewer" description about
"reviewing auth flows". Invoke via Task tool.
→ Task tool invokes code-reviewer subagent
```
### Pattern 2: @ Mention
User directly invokes subagent:
```
User: "@security-analyzer check this endpoint for vulnerabilities"
→ security-analyzer subagent is invoked directly
```
### Pattern 3: Automatic Context
Primary agent recognizes pattern from description:
```
User: "I just implemented the payment processing feature"
Primary Agent: Description mentions "after implementing features" and
"security-critical code (auth, payments)". Consider delegating to
security-analyzer or code-reviewer.
```
## Task Tool Invocation
When a primary agent invokes a subagent, it uses the Task tool:
```json
{
"tool": "task",
"parameters": {
"subagent_type": "code-reviewer",
"prompt": "Review the authentication code in src/auth/...",
"description": "Code review for auth implementation"
}
}
```
### Task Permissions
Control which subagents an agent can invoke:
```json
{
"orchestrator": {
"permission": {
"task": {
"*": "deny",
"code-reviewer": "allow",
"security-analyzer": "ask"
}
}
}
}
```
- `"allow"`: Invoke without approval
- `"ask"`: Prompt user for approval
- `"deny"`: Remove from Task tool (agent can't see it)
**Note:** Users can still @ mention any subagent, regardless of task permissions.
## Hidden Subagents
Hide subagents from @ autocomplete while still allowing Task tool invocation:
```json
{
"internal-helper": {
"mode": "subagent",
"hidden": true
}
}
```
Use cases:
- Internal processing agents
- Agents only invoked programmatically
- Specialized helpers not meant for direct user access
## Navigation Between Sessions
When subagents create child sessions:
| Keybind | Action |
|---------|--------|
| `<Leader>+Right` | Cycle forward: parent → child1 → child2 → parent |
| `<Leader>+Left` | Cycle backward |
This allows seamless switching between main conversation and subagent work.
## Description Best Practices
### Include Trigger Conditions
```json
"description": "Use when [condition 1], [condition 2], or [condition 3]."
```
### Be Specific About Domain
```json
"description": "Analyzes [specific domain] for [specific purpose]."
```
### Mention Key Actions
```json
"description": "[What it does]. Invoke after [action] or when [situation]."
```
### Complete Example
```json
{
"code-reviewer": {
"description": "Reviews code for quality issues, security vulnerabilities, and best practice violations. Use when: (1) reviewing pull requests, (2) after implementing new features, (3) before committing changes, (4) when asked to check code quality. Provides structured feedback with file:line references.",
"mode": "subagent"
}
}
```
## Debugging Triggering Issues
### Agent Not Triggering
Check:
1. Description contains relevant keywords
2. Mode is set correctly (subagent for Task tool)
3. Agent is not disabled
4. Task permissions allow invocation
### Agent Triggers Too Often
Check:
1. Description is too broad
2. Description overlaps with other agents' descriptions
3. Trigger conditions could be more specific
### Wrong Agent Triggers
Check:
1. Descriptions are distinct between agents
2. Negative conditions ("NOT for...") are included where needed (see the sketch below)
3. Exact scenarios are specified in the description
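For instance, a description with an explicit negative condition might look like this (the agent names are illustrative):
```json
"description": "Reviews backend TypeScript for type safety and API contract issues. NOT for UI/CSS changes; use the ui-reviewer subagent for those."
```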


@@ -0,0 +1,304 @@
#!/usr/bin/env bash
# Agent Configuration Validator
# Validates agent configurations in JSON or Markdown format
set -euo pipefail
usage() {
echo "Usage: $0 <path/to/agents.json | path/to/agent.md>"
echo ""
echo "Validates agent configuration for Opencode:"
echo " - JSON: Validates agents.json structure"
echo " - Markdown: Validates agent .md file with frontmatter"
echo ""
echo "Examples:"
echo " $0 agent/agents.json"
echo " $0 ~/.config/opencode/agents/review.md"
exit 1
}
validate_json() {
local file="$1"
echo "🔍 Validating JSON agent configuration: $file"
echo ""
# Check JSON syntax
if ! python3 -c "import json; json.load(open('$file'))" 2>/dev/null; then
echo "❌ Invalid JSON syntax"
exit 1
fi
echo "✅ Valid JSON syntax"
# Parse and validate each agent
local error_count=0
local warning_count=0
# Get agent names
local agents
agents=$(python3 -c "
import json
import sys
with open('$file') as f:
    data = json.load(f)
# Handle both formats: direct agents or nested under 'agent' key
if 'agent' in data:
    agents = data['agent']
else:
    agents = data
for name in agents.keys():
    print(name)
")
if [ -z "$agents" ]; then
echo "❌ No agents found in configuration"
exit 1
fi
echo ""
echo "Found agents: $agents"
echo ""
# Validate each agent
for agent_name in $agents; do
echo "Checking agent: $agent_name"
local validation_result
validation_result=$(python3 -c "
import json
import sys
with open('$file') as f:
    data = json.load(f)
# Handle both formats
if 'agent' in data:
    agents = data['agent']
else:
    agents = data
agent = agents.get('$agent_name', {})
errors = []
warnings = []
# Check required field: description
if 'description' not in agent:
    errors.append('Missing required field: description')
elif len(agent['description']) < 10:
    warnings.append('Description is very short (< 10 chars)')
# Check mode if present
mode = agent.get('mode', 'all')
if mode not in ['primary', 'subagent', 'all']:
    errors.append(f'Invalid mode: {mode} (must be primary, subagent, or all)')
# Check model format if present
model = agent.get('model', '')
if model and '/' not in model:
    warnings.append(f'Model should use provider/model-id format: {model}')
# Check temperature if present
temp = agent.get('temperature')
if temp is not None:
    if not isinstance(temp, (int, float)):
        errors.append(f'Temperature must be a number: {temp}')
    elif temp < 0 or temp > 2:
        warnings.append(f'Temperature {temp} is outside typical range (0-1)')
# Check prompt
prompt = agent.get('prompt', '')
if prompt:
    if prompt.startswith('{file:') and not prompt.endswith('}'):
        errors.append('Invalid file reference syntax in prompt')
elif 'prompt' not in agent:
    warnings.append('No prompt defined (will use default)')
# Check tools if present
tools = agent.get('tools', {})
if tools and not isinstance(tools, dict):
    errors.append('Tools must be an object')
# Check permission if present
permission = agent.get('permission', {})
if permission and not isinstance(permission, dict):
    errors.append('Permission must be an object')
# Output results
for e in errors:
    print(f'ERROR:{e}')
for w in warnings:
    print(f'WARNING:{w}')
if not errors and not warnings:
    print('OK')
")
while IFS= read -r line; do
if [[ "$line" == ERROR:* ]]; then
echo " ❌ ${line#ERROR:}"
error_count=$((error_count + 1))
elif [[ "$line" == WARNING:* ]]; then
echo " ⚠️ ${line#WARNING:}"
warning_count=$((warning_count + 1))
elif [[ "$line" == "OK" ]]; then
echo " ✅ Valid"
fi
done <<< "$validation_result"
done
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [ $error_count -eq 0 ] && [ $warning_count -eq 0 ]; then
echo "✅ All agents validated successfully!"
exit 0
elif [ $error_count -eq 0 ]; then
echo "⚠️ Validation passed with $warning_count warning(s)"
exit 0
else
echo "❌ Validation failed with $error_count error(s) and $warning_count warning(s)"
exit 1
fi
}
validate_markdown() {
local file="$1"
echo "🔍 Validating Markdown agent file: $file"
echo ""
# Check file exists
if [ ! -f "$file" ]; then
echo "❌ File not found: $file"
exit 1
fi
echo "✅ File exists"
# Check starts with ---
local first_line
first_line=$(head -1 "$file")
if [ "$first_line" != "---" ]; then
echo "❌ File must start with YAML frontmatter (---)"
exit 1
fi
echo "✅ Starts with frontmatter"
# Check has closing ---
if ! tail -n +2 "$file" | grep -q '^---$'; then
echo "❌ Frontmatter not closed (missing second ---)"
exit 1
fi
echo "✅ Frontmatter properly closed"
local error_count=0
local warning_count=0
# Extract and validate frontmatter
local frontmatter
frontmatter=$(sed -n '/^---$/,/^---$/{ /^---$/d; p; }' "$file")
# Check description (required)
if ! echo "$frontmatter" | grep -q '^description:'; then
echo "❌ Missing required field: description"
error_count=$((error_count + 1))
else
echo "✅ description: present"
fi
# Check mode if present
local mode
mode=$(echo "$frontmatter" | grep '^mode:' | sed 's/mode: *//' || true)
if [ -n "$mode" ]; then
case "$mode" in
primary|subagent|all)
echo "✅ mode: $mode"
;;
*)
echo "❌ Invalid mode: $mode (must be primary, subagent, or all)"
error_count=$((error_count + 1))
;;
esac
else
echo "💡 mode: not specified (defaults to 'all')"
fi
# Check model if present
local model
model=$(echo "$frontmatter" | grep '^model:' | sed 's/model: *//' || true)
if [ -n "$model" ]; then
if [[ "$model" == */* ]]; then
echo "✅ model: $model"
else
echo "⚠️ model should use provider/model-id format: $model"
warning_count=$((warning_count + 1))
fi
else
echo "💡 model: not specified (will inherit)"
fi
# Check temperature if present
local temp
temp=$(echo "$frontmatter" | grep '^temperature:' | sed 's/temperature: *//' || true)
if [ -n "$temp" ]; then
echo "✅ temperature: $temp"
fi
# Check system prompt (body after frontmatter)
local system_prompt
system_prompt=$(awk '/^---$/{i++; next} i>=2' "$file")
if [ -z "$system_prompt" ]; then
echo "⚠️ System prompt (body) is empty"
warning_count=$((warning_count + 1))
else
local prompt_length=${#system_prompt}
echo "✅ System prompt: $prompt_length characters"
if [ $prompt_length -lt 50 ]; then
echo "⚠️ System prompt is very short"
warning_count=$((warning_count + 1))
fi
if ! echo "$system_prompt" | grep -q "You are\|You will\|Your"; then
echo "⚠️ System prompt should use second person (You are..., You will...)"
warning_count=$((warning_count + 1))
fi
fi
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
if [ $error_count -eq 0 ] && [ $warning_count -eq 0 ]; then
echo "✅ Validation passed!"
exit 0
elif [ $error_count -eq 0 ]; then
echo "⚠️ Validation passed with $warning_count warning(s)"
exit 0
else
echo "❌ Validation failed with $error_count error(s) and $warning_count warning(s)"
exit 1
fi
}
# Main
if [ $# -eq 0 ]; then
usage
fi
FILE="$1"
if [ ! -f "$FILE" ]; then
echo "❌ File not found: $FILE"
exit 1
fi
# Determine file type and validate
if [[ "$FILE" == *.json ]]; then
validate_json "$FILE"
elif [[ "$FILE" == *.md ]]; then
validate_markdown "$FILE"
else
echo "❌ Unknown file type. Expected .json or .md"
echo ""
usage
fi