Rewrite agent-development skill for opencode

- Update SKILL.md with JSON-first approach (agents.json pattern)
- Add all opencode config options: mode, temperature, maxSteps, hidden, permission
- Document permissions system with granular rules and glob patterns
- Add references/opencode-agents-json-example.md with chiron pattern
- Rewrite triggering-examples.md for opencode (Tab, @mention, Task tool)
- Update agent-creation-system-prompt.md for JSON output format
- Rewrite complete-agent-examples.md with JSON examples
- Rewrite validate-agent.sh to support both JSON and Markdown validation
m3tm3re
2026-01-19 19:35:55 +01:00
parent 8ebb30fb2b
commit 924b3476f9
8 changed files with 2521 additions and 0 deletions

View File

@@ -0,0 +1,184 @@
# Agent Creation System Prompt
Use this system prompt to generate agent configurations via AI assistance.
## The Prompt
```
You are an expert AI agent architect for Opencode. Create agent configurations that integrate seamlessly with Opencode's agent system.
When a user describes what they want an agent to do:
1. **Extract Core Intent**: Identify purpose, responsibilities, and success criteria. Consider whether this should be a primary agent (direct user interaction) or subagent (delegated tasks).
2. **Design Expert Persona**: Create an expert identity with deep domain knowledge relevant to the task.
3. **Architect Configuration**: Determine:
- mode: primary (Tab-cycleable) or subagent (Task tool/@ mention)
- model: provider/model-id (e.g., anthropic/claude-sonnet-4-20250514)
- temperature: 0.0-0.2 for deterministic, 0.3-0.5 balanced, 0.6+ creative
- tools: which tools to enable/disable
- permission: granular access control
4. **Write System Prompt**: Create comprehensive instructions with:
- Clear behavioral boundaries
- Specific methodologies and best practices
- Edge case handling
- Output format expectations
5. **Create Identifier**: Design a concise, descriptive name:
- Lowercase letters, numbers, hyphens only
- 2-4 words joined by hyphens
- Clearly indicates primary function
- Avoid generic terms (helper, assistant)
Your output must be a valid JSON object:
{
  "identifier": "agent-name",
  "config": {
    "description": "When to use this agent",
    "mode": "primary | subagent",
    "model": "provider/model-id",
    "temperature": 0.3,
    "tools": {
      "write": true,
      "edit": true,
      "bash": false
    },
    "permission": {
      "edit": "allow",
      "bash": "ask"
    }
  },
  "systemPrompt": "You are..."
}
Key principles:
- Be specific rather than generic
- Include concrete examples when helpful
- Balance comprehensiveness with clarity
- Make agents proactive in seeking clarification
- Build in quality assurance mechanisms
```
## Usage Pattern
### Step 1: Describe Your Agent
```
Create an agent configuration: "I need an agent that reviews pull requests for code quality issues, security vulnerabilities, and adherence to best practices"
```
### Step 2: Receive JSON Output
```json
{
  "identifier": "pr-quality-reviewer",
  "config": {
    "description": "Reviews pull requests for code quality, security, and best practices. Use when reviewing PRs or analyzing code changes.",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.1,
    "tools": {
      "write": false,
      "edit": false,
      "bash": true
    }
  },
  "systemPrompt": "You are an expert code quality reviewer specializing in identifying issues in software implementations.\n\n**Your Core Responsibilities:**\n1. Analyze code changes for quality issues (readability, maintainability, performance)\n2. Identify security vulnerabilities (injection, XSS, authentication flaws)\n3. Check adherence to project best practices\n4. Provide actionable feedback with file:line references\n\n**Review Process:**\n1. Read code changes using available tools\n2. Analyze for quality, security, and best practices\n3. Categorize issues by severity (critical/major/minor)\n4. Provide specific recommendations\n\n**Output Format:**\n## Summary\n[2-3 sentence overview]\n\n## Critical Issues\n- `file:line` - [Issue] - [Fix]\n\n## Major Issues\n[...]\n\n## Recommendations\n[...]"
}
```
### Step 3: Add to agents.json
```json
{
  "pr-quality-reviewer": {
    "description": "Reviews pull requests for code quality, security, and best practices",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.1,
    "prompt": "{file:./prompts/pr-quality-reviewer.txt}",
    "tools": {
      "write": false,
      "edit": false
    }
  }
}
```
### Step 4: Create Prompt File
Save the `systemPrompt` content to `prompts/pr-quality-reviewer.txt`.
## Configuration Decisions
### Primary vs Subagent
| Choose Primary When | Choose Subagent When |
|---------------------|----------------------|
| Direct user interaction | Delegated by other agents |
| Workflow-specific mode | Specialized single task |
| Need Tab key access | Triggered by Task tool |
| User switches to it manually | Automatic invocation |
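For orientation, a minimal sketch pairing the two modes (the agent names, model, and descriptions are illustrative placeholders):
```json
{
  "build": {
    "description": "Full development agent for direct use. Switch to it with Tab.",
    "mode": "primary",
    "model": "anthropic/claude-sonnet-4-20250514"
  },
  "code-reviewer": {
    "description": "Reviews code changes. Delegated via the Task tool or @code-reviewer.",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514"
  }
}
```
Structurally the only difference is the `mode` value; model, tools, and permissions are configured the same way for both.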
### Model Selection
| Model | Use Case |
|-------|----------|
| claude-opus-4 | Complex reasoning, architecture decisions |
| claude-sonnet-4 | Balanced performance (default) |
| claude-haiku-4 | Fast, simple tasks, cost-sensitive |
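As a rough sketch, the table maps onto per-agent `model` and `temperature` fields like this (agent names are placeholders; `description` fields are omitted for brevity):
```json
{
  "architect": {
    "mode": "subagent",
    "model": "anthropic/claude-opus-4-20250514",
    "temperature": 0.1
  },
  "implementer": {
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.3
  },
  "summarizer": {
    "mode": "subagent",
    "model": "anthropic/claude-haiku-4-20250514",
    "temperature": 0.2
  }
}
```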
### Tool Configuration
| Agent Type | Typical Tools |
|------------|---------------|
| Read-only analysis | `write: false`, `edit: false`, `bash: true` |
| Code generation | `write: true`, `edit: true`, `bash: true` |
| Documentation | `write: true`, `edit: false`, `bash: false` |
| Testing | `write: false`, `edit: false`, `bash: true` |
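The rows above translate into `tools` maps roughly as follows (a sketch; the agent names are illustrative only):
```json
{
  "log-analyzer": {
    "mode": "subagent",
    "tools": { "write": false, "edit": false, "bash": true }
  },
  "feature-builder": {
    "mode": "subagent",
    "tools": { "write": true, "edit": true, "bash": true }
  },
  "readme-writer": {
    "mode": "subagent",
    "tools": { "write": true, "edit": false, "bash": false }
  },
  "test-runner": {
    "mode": "subagent",
    "tools": { "write": false, "edit": false, "bash": true }
  }
}
```
Setting a tool to `false` disables it for that agent; for finer-grained control over a tool that stays enabled, use the `permission` rules shown next.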
### Permission Patterns
```json
// Read-only agent
"permission": {
  "edit": "deny",
  "bash": {
    "*": "ask",
    "git diff*": "allow",
    "grep *": "allow"
  }
}

// Careful writer
"permission": {
  "edit": "allow",
  "bash": {
    "*": "ask",
    "rm *": "deny",
    "sudo *": "deny"
  }
}
```
## Alternative: Markdown Agent
If you prefer the Markdown format, create `~/.config/opencode/agents/pr-quality-reviewer.md`:
```markdown
---
description: Reviews pull requests for code quality, security, and best practices
mode: subagent
model: anthropic/claude-sonnet-4-20250514
temperature: 0.1
tools:
  write: false
  edit: false
---
You are an expert code quality reviewer...
[Rest of system prompt]
```

View File

@@ -0,0 +1,267 @@
# Complete agents.json Example
This is a production-ready example based on real-world Opencode configurations.
## Dual-Mode Personal Assistant
This pattern implements the same assistant in two modes: Plan (read-only analysis) and Forge (full write access).
```json
{
  "chiron": {
    "description": "Personal AI assistant (Plan Mode). Read-only analysis, planning, and guidance.",
    "mode": "primary",
    "model": "anthropic/claude-sonnet-4-20250514",
    "prompt": "{file:./prompts/chiron.txt}",
    "permission": {
      "read": {
        "*": "allow",
        "*.env": "deny",
        "*.env.*": "deny",
        "*.env.example": "allow",
        "*/.ssh/*": "deny",
        "*/.gnupg/*": "deny",
        "*credentials*": "deny",
        "*secrets*": "deny",
        "*.pem": "deny",
        "*.key": "deny",
        "*/.aws/*": "deny",
        "*/.kube/*": "deny"
      },
      "edit": "ask",
      "bash": "ask",
      "external_directory": "ask",
      "doom_loop": "ask"
    }
  },
  "chiron-forge": {
    "description": "Personal AI assistant (Worker Mode). Full write access with safety prompts.",
    "mode": "primary",
    "model": "anthropic/claude-sonnet-4-20250514",
    "prompt": "{file:./prompts/chiron-forge.txt}",
    "permission": {
      "read": {
        "*": "allow",
        "*.env": "deny",
        "*.env.*": "deny",
        "*.env.example": "allow",
        "*/.ssh/*": "deny",
        "*/.gnupg/*": "deny",
        "*credentials*": "deny",
        "*secrets*": "deny",
        "*.pem": "deny",
        "*.key": "deny",
        "*/.aws/*": "deny",
        "*/.kube/*": "deny"
      },
      "edit": "allow",
      "bash": {
        "*": "allow",
        "rm *": "ask",
        "rmdir *": "ask",
        "mv *": "ask",
        "chmod *": "ask",
        "chown *": "ask",
        "git *": "ask",
        "git status*": "allow",
        "git log*": "allow",
        "git diff*": "allow",
        "git branch*": "allow",
        "git show*": "allow",
        "git stash list*": "allow",
        "git remote -v": "allow",
        "git add *": "allow",
        "git commit *": "allow",
        "npm *": "ask",
        "npx *": "ask",
        "pip *": "ask",
        "cargo *": "ask",
        "dd *": "deny",
        "mkfs*": "deny",
        "sudo *": "deny",
        "su *": "deny",
        "systemctl *": "deny",
        "shutdown *": "deny",
        "reboot*": "deny"
      },
      "external_directory": "ask",
      "doom_loop": "ask"
    }
  }
}
```
## Multi-Agent Development Workflow
This pattern shows specialized agents for a development workflow.
```json
{
  "build": {
    "description": "Full development with all tools enabled",
    "mode": "primary",
    "model": "anthropic/claude-opus-4-20250514",
    "temperature": 0.3,
    "tools": {
      "write": true,
      "edit": true,
      "bash": true
    }
  },
  "plan": {
    "description": "Analysis and planning without making changes",
    "mode": "primary",
    "model": "anthropic/claude-opus-4-20250514",
    "temperature": 0.1,
    "tools": {
      "write": false,
      "edit": false,
      "bash": true
    }
  },
  "code-reviewer": {
    "description": "Reviews code for quality, security, and best practices",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.1,
    "prompt": "{file:./prompts/code-reviewer.txt}",
    "tools": {
      "write": false,
      "edit": false,
      "bash": false
    }
  },
  "test-generator": {
    "description": "Generates comprehensive unit tests for code",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.2,
    "prompt": "{file:./prompts/test-generator.txt}",
    "tools": {
      "write": true,
      "edit": true,
      "bash": true
    }
  },
  "security-analyzer": {
    "description": "Identifies security vulnerabilities and provides remediation",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.1,
    "prompt": "{file:./prompts/security-analyzer.txt}",
    "tools": {
      "write": false,
      "edit": false,
      "bash": true
    }
  },
  "docs-writer": {
    "description": "Writes and maintains project documentation",
    "mode": "subagent",
    "model": "anthropic/claude-haiku-4-20250514",
    "temperature": 0.3,
    "prompt": "{file:./prompts/docs-writer.txt}",
    "tools": {
      "write": true,
      "edit": true,
      "bash": false
    }
  }
}
```
## Orchestrator with Task Permissions
This pattern shows a primary agent that controls which subagents it can invoke.
```json
{
  "orchestrator": {
    "description": "Coordinates development workflow using specialized subagents",
    "mode": "primary",
    "model": "anthropic/claude-opus-4-20250514",
    "prompt": "{file:./prompts/orchestrator.txt}",
    "permission": {
      "task": {
        "*": "deny",
        "code-reviewer": "allow",
        "test-generator": "allow",
        "security-analyzer": "ask",
        "docs-writer": "allow"
      },
      "edit": "allow",
      "bash": {
        "*": "allow",
        "git push*": "ask",
        "rm -rf*": "deny"
      }
    }
  }
}
```
## Hidden Internal Subagent
This pattern shows a subagent that is hidden from @ autocomplete but can still be invoked via the Task tool.
```json
{
  "internal-helper": {
    "description": "Internal helper for data processing tasks",
    "mode": "subagent",
    "hidden": true,
    "model": "anthropic/claude-haiku-4-20250514",
    "temperature": 0,
    "prompt": "You are a data processing helper...",
    "tools": {
      "write": false,
      "edit": false,
      "bash": true
    }
  }
}
```
## Key Patterns
### Permission Inheritance
- Global `permission` in opencode.json applies to all agents
- Agent-specific `permission` overrides global settings
- Last matching rule wins for glob patterns
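A minimal sketch of that layering, assuming the global `permission` block sits alongside the agent entries (adjust to wherever your opencode.json keeps global permissions):
```json
{
  "permission": {
    "edit": "ask",
    "bash": {
      "*": "ask",
      "git status*": "allow"
    }
  },
  "docs-writer": {
    "mode": "subagent",
    "permission": {
      "edit": "allow",
      "bash": "deny"
    }
  }
}
```
Here `docs-writer` replaces the global `edit` and `bash` rules with its own, and within the global glob map the later, more specific `git status*` pattern wins over `*`.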
### Prompt File Organization
```
project/
├── opencode.json   # or agents.json
└── prompts/
    ├── chiron.txt
    ├── chiron-forge.txt
    ├── code-reviewer.txt
    └── test-generator.txt
```
### Model Strategy
| Agent Role | Recommended Setting | Rationale |
|------------|-------------------|-----------|
| Complex reasoning | opus | Best quality, expensive |
| General tasks | sonnet | Balanced (default) |
| Fast/simple | haiku | Cost-effective |
| Deterministic | temperature: 0-0.1 | Consistent results |
| Creative | temperature: 0.3-0.5 | Varied responses |
### Tool Access Patterns
| Agent Type | write | edit | bash |
|------------|-------|------|------|
| Read-only analyzer | false | false | true (for git) |
| Code generator | true | true | true |
| Documentation | true | true | false |
| Security scanner | false | false | true |

View File

@@ -0,0 +1,437 @@
# System Prompt Design Patterns
Complete guide to writing effective agent system prompts that enable autonomous, high-quality operation.
## Opencode-Specific Considerations
### Prompt File Convention
Store prompts as separate files for maintainability:
```
project/
├── opencode.json   (or agents.json)
└── prompts/
    ├── agent-name.txt
    ├── code-reviewer.txt
    └── test-generator.txt
```
Reference in configuration:
```json
"prompt": "{file:./prompts/agent-name.txt}"
```
Paths are relative to the config file location.
### File Format
Use `.txt` extension for prompt files. The entire file content becomes the system prompt.
## Core Structure
Every agent system prompt should follow this proven structure:
```markdown
You are [specific role] specializing in [specific domain].
**Your Core Responsibilities:**
1. [Primary responsibility - the main task]
2. [Secondary responsibility - supporting task]
3. [Additional responsibilities as needed]
**[Task Name] Process:**
1. [First concrete step]
2. [Second concrete step]
3. [Continue with clear steps]
[...]
**Quality Standards:**
- [Standard 1 with specifics]
- [Standard 2 with specifics]
- [Standard 3 with specifics]
**Output Format:**
Provide results structured as:
- [Component 1]
- [Component 2]
- [Include specific formatting requirements]
**Edge Cases:**
Handle these situations:
- [Edge case 1]: [Specific handling approach]
- [Edge case 2]: [Specific handling approach]
```
## Pattern 1: Analysis Agents
For agents that analyze code, PRs, or documentation:
```markdown
You are an expert [domain] analyzer specializing in [specific analysis type].
**Your Core Responsibilities:**
1. Thoroughly analyze [what] for [specific issues]
2. Identify [patterns/problems/opportunities]
3. Provide actionable recommendations
**Analysis Process:**
1. **Gather Context**: Read [what] using available tools
2. **Initial Scan**: Identify obvious [issues/patterns]
3. **Deep Analysis**: Examine [specific aspects]:
- [Aspect 1]: Check for [criteria]
- [Aspect 2]: Verify [criteria]
- [Aspect 3]: Assess [criteria]
4. **Synthesize Findings**: Group related issues
5. **Prioritize**: Rank by [severity/impact/urgency]
6. **Generate Report**: Format according to output template
**Quality Standards:**
- Every finding includes file:line reference
- Issues categorized by severity (critical/major/minor)
- Recommendations are specific and actionable
- Positive observations included for balance
**Output Format:**
## Summary
[2-3 sentence overview]
## Critical Issues
- [file:line] - [Issue description] - [Recommendation]
## Major Issues
[...]
## Minor Issues
[...]
## Recommendations
[...]
**Edge Cases:**
- No issues found: Provide positive feedback and validation
- Too many issues: Group and prioritize top 10
- Unclear code: Request clarification rather than guessing
```
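Analysis agents of this kind are usually wired up read-only with a low temperature. A hedged configuration sketch (the agent name and prompt path are placeholders):
```json
{
  "type-safety-analyzer": {
    "description": "Analyzes TypeScript code for type safety issues. Use after refactors or when reviewing typings.",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.1,
    "prompt": "{file:./prompts/type-safety-analyzer.txt}",
    "tools": {
      "write": false,
      "edit": false,
      "bash": false
    }
  }
}
```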
## Pattern 2: Generation Agents
For agents that create code, tests, or documentation:
```markdown
You are an expert [domain] engineer specializing in creating high-quality [output type].
**Your Core Responsibilities:**
1. Generate [what] that meets [quality standards]
2. Follow [specific conventions/patterns]
3. Ensure [correctness/completeness/clarity]
**Generation Process:**
1. **Understand Requirements**: Analyze what needs to be created
2. **Gather Context**: Read existing [code/docs/tests] for patterns
3. **Design Structure**: Plan [architecture/organization/flow]
4. **Generate Content**: Create [output] following:
- [Convention 1]
- [Convention 2]
- [Best practice 1]
5. **Validate**: Verify [correctness/completeness]
6. **Document**: Add comments/explanations as needed
**Quality Standards:**
- Follows project conventions (check AGENTS.md)
- [Specific quality metric 1]
- [Specific quality metric 2]
- Includes error handling
- Well-documented and clear
**Output Format:**
Create [what] with:
- [Structure requirement 1]
- [Structure requirement 2]
- Clear, descriptive naming
- Comprehensive coverage
**Edge Cases:**
- Insufficient context: Ask user for clarification
- Conflicting patterns: Follow most recent/explicit pattern
- Complex requirements: Break into smaller pieces
```
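Generation agents typically pair a prompt like this with write access and a slightly higher temperature, along the lines of the `test-generator` entry in the examples reference:
```json
{
  "test-generator": {
    "description": "Generates comprehensive unit tests for code. Use after implementing new functions or when improving coverage.",
    "mode": "subagent",
    "model": "anthropic/claude-sonnet-4-20250514",
    "temperature": 0.2,
    "prompt": "{file:./prompts/test-generator.txt}",
    "tools": {
      "write": true,
      "edit": true,
      "bash": true
    }
  }
}
```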
## Pattern 3: Validation Agents
For agents that validate, check, or verify:
```markdown
You are an expert [domain] validator specializing in ensuring [quality aspect].
**Your Core Responsibilities:**
1. Validate [what] against [criteria]
2. Identify violations and issues
3. Provide clear pass/fail determination
**Validation Process:**
1. **Load Criteria**: Understand validation requirements
2. **Scan Target**: Read [what] needs validation
3. **Check Rules**: For each rule:
- [Rule 1]: [Validation method]
- [Rule 2]: [Validation method]
4. **Collect Violations**: Document each failure with details
5. **Assess Severity**: Categorize issues
6. **Determine Result**: Pass only if [criteria met]
**Quality Standards:**
- All violations include specific locations
- Severity clearly indicated
- Fix suggestions provided
- No false positives
**Output Format:**
## Validation Result: [PASS/FAIL]
## Summary
[Overall assessment]
## Violations Found: [count]
### Critical ([count])
- [Location]: [Issue] - [Fix]
### Warnings ([count])
- [Location]: [Issue] - [Fix]
## Recommendations
[How to fix violations]
**Edge Cases:**
- No violations: Confirm validation passed
- Too many violations: Group by type, show top 20
- Ambiguous rules: Document uncertainty, request clarification
```
## Pattern 4: Orchestration Agents
For agents that coordinate multiple tools or steps:
```markdown
You are an expert [domain] orchestrator specializing in coordinating [complex workflow].
**Your Core Responsibilities:**
1. Coordinate [multi-step process]
2. Manage [resources/tools/dependencies]
3. Ensure [successful completion/integration]
**Orchestration Process:**
1. **Plan**: Understand full workflow and dependencies
2. **Prepare**: Set up prerequisites
3. **Execute Phases**:
- Phase 1: [What] using [tools]
- Phase 2: [What] using [tools]
- Phase 3: [What] using [tools]
4. **Monitor**: Track progress and handle failures
5. **Verify**: Confirm successful completion
6. **Report**: Provide comprehensive summary
**Quality Standards:**
- Each phase completes successfully
- Errors handled gracefully
- Progress reported to user
- Final state verified
**Output Format:**
## Workflow Execution Report
### Completed Phases
- [Phase]: [Result]
### Results
- [Output 1]
- [Output 2]
### Next Steps
[If applicable]
**Edge Cases:**
- Phase failure: Attempt retry, then report and stop
- Missing dependencies: Request from user
- Timeout: Report partial completion
```
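When the orchestration spans multiple subagents rather than a single prompt, pair the prompt with `task` permissions so the agent can only reach the phases it owns. A sketch (the agent name is hypothetical; the permission pattern follows the orchestrator example in this skill):
```json
{
  "release-orchestrator": {
    "description": "Coordinates the release workflow across test and documentation subagents",
    "mode": "primary",
    "model": "anthropic/claude-opus-4-20250514",
    "prompt": "{file:./prompts/release-orchestrator.txt}",
    "permission": {
      "task": {
        "*": "deny",
        "test-generator": "allow",
        "docs-writer": "allow"
      }
    }
  }
}
```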
## Writing Style Guidelines
### Tone and Voice
**Use second person (addressing the agent):**
```
✅ You are responsible for...
✅ You will analyze...
✅ Your process should...
❌ The agent is responsible for...
❌ This agent will analyze...
❌ I will analyze...
```
### Clarity and Specificity
**Be specific, not vague:**
```
✅ Check for SQL injection by examining all database queries for parameterization
❌ Look for security issues
✅ Provide file:line references for each finding
❌ Show where issues are
✅ Categorize as critical (security), major (bugs), or minor (style)
❌ Rate the severity of issues
```
### Actionable Instructions
**Give concrete steps:**
```
✅ Read the file using the Read tool, then search for patterns using Grep
❌ Analyze the code
✅ Generate test file at test/path/to/file.test.ts
❌ Create tests
```
## Common Pitfalls
### ❌ Vague Responsibilities
```markdown
**Your Core Responsibilities:**
1. Help the user with their code
2. Provide assistance
3. Be helpful
```
**Why bad:** Not specific enough to guide behavior.
### ✅ Specific Responsibilities
```markdown
**Your Core Responsibilities:**
1. Analyze TypeScript code for type safety issues
2. Identify missing type annotations and improper 'any' usage
3. Recommend specific type improvements with examples
```
### ❌ Missing Process Steps
```markdown
Analyze the code and provide feedback.
```
**Why bad:** Agent doesn't know HOW to analyze.
### ✅ Clear Process
```markdown
**Analysis Process:**
1. Read code files using Read tool
2. Scan for type annotations on all functions
3. Check for 'any' type usage
4. Verify generic type parameters
5. List findings with file:line references
```
### ❌ Undefined Output
```markdown
Provide a report.
```
**Why bad:** Agent doesn't know what format to use.
### ✅ Defined Output Format
```markdown
**Output Format:**
## Type Safety Report
### Summary
[Overview of findings]
### Issues Found
- `file.ts:42` - Missing return type on `processData`
- `utils.ts:15` - Unsafe 'any' usage in parameter
### Recommendations
[Specific fixes with examples]
```
## Length Guidelines
### Minimum Viable Agent
**~500 words minimum:**
- Role description
- 3 core responsibilities
- 5-step process
- Output format
### Standard Agent
**~1,000-2,000 words:**
- Detailed role and expertise
- 5-8 responsibilities
- 8-12 process steps
- Quality standards
- Output format
- 3-5 edge cases
### Comprehensive Agent
**~2,000-5,000 words:**
- Complete role with background
- Comprehensive responsibilities
- Detailed multi-phase process
- Extensive quality standards
- Multiple output formats
- Many edge cases
- Examples within system prompt
**Avoid prompts over 10,000 words:** beyond that point, extra length brings diminishing returns and dilutes focus.
## Testing System Prompts
### Test Completeness
Can the agent handle these based on system prompt alone?
- [ ] Typical task execution
- [ ] Edge cases mentioned
- [ ] Error scenarios
- [ ] Unclear requirements
- [ ] Large/complex inputs
- [ ] Empty/missing inputs
### Test Clarity
Read the system prompt and ask:
- Can another developer understand what this agent does?
- Are process steps clear and actionable?
- Is output format unambiguous?
- Are quality standards measurable?
### Iterate Based on Results
After testing the agent:
1. Identify where it struggled
2. Add missing guidance to system prompt
3. Clarify ambiguous instructions
4. Add process steps for edge cases
5. Re-test
## Conclusion
Effective system prompts are:
- **Specific**: Clear about what and how
- **Structured**: Organized with clear sections
- **Complete**: Covers normal and edge cases
- **Actionable**: Provides concrete steps
- **Testable**: Defines measurable standards
Use the patterns above as templates, customize for your domain, and iterate based on agent performance.

View File

@@ -0,0 +1,224 @@
# Agent Triggering in Opencode
Understanding how agents are triggered and invoked in Opencode.
## Triggering Mechanisms
### Primary Agents
Primary agents are directly accessible to users:
| Method | Description |
|--------|-------------|
| **Tab key** | Cycle through primary agents |
| **Keybind** | Use configured `switch_agent` keybind |
| **@ mention** | Type `@agent-name` in message |
| **default_agent** | Set in config to start with specific agent |
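The last row is configuration rather than a runtime action; a one-key sketch, assuming `default_agent` is a top-level key in your opencode config (check your opencode version for the exact location):
```json
{
  "default_agent": "chiron"
}
```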
### Subagents
Subagents are invoked indirectly:
| Method | Description |
|--------|-------------|
| **Task tool** | Primary agent delegates via Task tool |
| **@ mention** | User manually types `@agent-name` |
| **Automatic** | Based on description matching user intent |
## The Description Field
The `description` field is critical for subagent triggering. When a primary agent receives a request, it evaluates subagent descriptions to decide whether to delegate.
### Good Descriptions
**Clear purpose and triggers:**
```json
"description": "Reviews code for quality, security, and best practices. Use when reviewing PRs, after implementing features, or before commits."
```
**Specific use cases:**
```json
"description": "Generates comprehensive unit tests for code. Use after implementing new functions or when improving test coverage."
```
**Domain-specific:**
```json
"description": "Analyzes authentication and authorization code for security vulnerabilities. Use when reviewing auth flows, JWT handling, or session management."
```
### Poor Descriptions
**Too vague:**
```json
"description": "Helps with code"
```
**No trigger conditions:**
```json
"description": "A code review agent"
```
**Too broad:**
```json
"description": "Handles all development tasks"
```
## Triggering Patterns
### Pattern 1: Explicit Delegation
Primary agent explicitly invokes subagent via Task tool:
```
User: "Review my authentication code"
Primary Agent (internal): This matches the "code-reviewer" description
("Reviews code for quality, security, and best practices"). Invoke it via the Task tool.
→ Task tool invokes code-reviewer subagent
```
### Pattern 2: @ Mention
User directly invokes subagent:
```
User: "@security-analyzer check this endpoint for vulnerabilities"
→ security-analyzer subagent is invoked directly
```
### Pattern 3: Automatic Context
Primary agent recognizes pattern from description:
```
User: "I just implemented the payment processing feature"
Primary Agent (internal): The code-reviewer description mentions "after
implementing features", and payment code is security-sensitive, so consider
delegating to code-reviewer and security-analyzer.
```
## Task Tool Invocation
When a primary agent invokes a subagent, it uses the Task tool:
```json
{
  "tool": "task",
  "parameters": {
    "subagent_type": "code-reviewer",
    "prompt": "Review the authentication code in src/auth/...",
    "description": "Code review for auth implementation"
  }
}
```
### Task Permissions
Control which subagents an agent can invoke:
```json
{
  "orchestrator": {
    "permission": {
      "task": {
        "*": "deny",
        "code-reviewer": "allow",
        "security-analyzer": "ask"
      }
    }
  }
}
```
- `"allow"`: The subagent can be invoked without approval
- `"ask"`: The user is prompted before each invocation
- `"deny"`: The subagent is removed from the Task tool entirely (the invoking agent can't see it)
**Note:** Users can still @ mention any subagent, regardless of task permissions.
## Hidden Subagents
Hide subagents from @ autocomplete while still allowing Task tool invocation:
```json
{
  "internal-helper": {
    "mode": "subagent",
    "hidden": true
  }
}
```
Use cases:
- Internal processing agents
- Agents only invoked programmatically
- Specialized helpers not meant for direct user access
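A hidden helper is often paired with an explicit `task` grant on the one agent allowed to call it; a sketch combining the two features:
```json
{
  "internal-helper": {
    "description": "Internal helper for data processing tasks",
    "mode": "subagent",
    "hidden": true
  },
  "build": {
    "mode": "primary",
    "permission": {
      "task": {
        "*": "deny",
        "internal-helper": "allow"
      }
    }
  }
}
```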
## Navigation Between Sessions
When subagents create child sessions:
| Keybind | Action |
|---------|--------|
| `<Leader>+Right` | Cycle forward: parent → child1 → child2 → parent |
| `<Leader>+Left` | Cycle backward |
This allows seamless switching between main conversation and subagent work.
## Description Best Practices
### Include Trigger Conditions
```json
"description": "Use when [condition 1], [condition 2], or [condition 3]."
```
### Be Specific About Domain
```json
"description": "Analyzes [specific domain] for [specific purpose]."
```
### Mention Key Actions
```json
"description": "[What it does]. Invoke after [action] or when [situation]."
```
### Complete Example
```json
{
  "code-reviewer": {
    "description": "Reviews code for quality issues, security vulnerabilities, and best practice violations. Use when: (1) reviewing pull requests, (2) after implementing new features, (3) before committing changes, (4) when asked to check code quality. Provides structured feedback with file:line references.",
    "mode": "subagent"
  }
}
```
## Debugging Triggering Issues
### Agent Not Triggering
Check:
1. Description contains relevant keywords
2. Mode is set correctly (subagent for Task tool)
3. Agent is not disabled
4. Task permissions allow invocation
### Agent Triggers Too Often
Check:
1. Whether the description is too broad
2. Whether it overlaps with other agent descriptions
3. Whether more specific trigger conditions would narrow it
### Wrong Agent Triggers
Check:
1. Whether descriptions are distinct between agents
2. Whether negative conditions ("NOT for...") would help
3. Whether exact scenarios should be spelled out in the description (see the sketch below)
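For the last point, a sketch of two descriptions disambiguated with explicit scope and a negative condition:
```json
{
  "code-reviewer": {
    "description": "Reviews application code for quality and best practices. NOT for security audits; use security-analyzer for those.",
    "mode": "subagent"
  },
  "security-analyzer": {
    "description": "Audits code for security vulnerabilities (injection, XSS, auth flaws). NOT for general style or quality feedback.",
    "mode": "subagent"
  }
}
```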