Compare commits: 76cd0e4ee6...master (27 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 39ac89f388 | |
| | 1bc81fb38c | |
| | 1f1eabd1ed | |
| | 5b204c95e4 | |
| | 4e9da366e4 | |
| | 8910413315 | |
| | d475dde398 | |
| | 6fceea7460 | |
| | 923e2f1eaa | |
| | 231b9f2e0b | |
| | c64d71f438 | |
| | 1719f70452 | |
| | 0d6ff423be | |
| | 79e6adb362 | |
| | 1e03c165e7 | |
| | 94b89da533 | |
| | b9d535b926 | |
| | 46b9c0e4e3 | |
| | eab0a94650 | |
| | 0ad1037c71 | |
| | 1b4e8322d6 | |
| | 7a3b72d5d4 | |
| | 156ebf7d63 | |
| | a57e302727 | |
| | d08deaf9d2 | |
| | 666696b17c | |
| | 1e7decc84a | |
.beads/.gitignore (vendored, 39 lines deleted)
@@ -1,39 +0,0 @@
# SQLite databases
*.db
*.db?*
*.db-journal
*.db-wal
*.db-shm

# Daemon runtime files
daemon.lock
daemon.log
daemon.pid
bd.sock
sync-state.json
last-touched

# Local version tracking (prevents upgrade notification spam after git ops)
.local_version

# Legacy database files
db.sqlite
bd.db

# Worktree redirect file (contains relative path to main repo's .beads/)
# Must not be committed as paths would be wrong in other clones
redirect

# Merge artifacts (temporary files from 3-way merge)
beads.base.jsonl
beads.base.meta.json
beads.left.jsonl
beads.left.meta.json
beads.right.jsonl
beads.right.meta.json

# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.
# The JSONL files (issues.jsonl, interactions.jsonl) and config files
# are tracked by git by default since no pattern above ignores them.
@@ -1,81 +0,0 @@
# Beads - AI-Native Issue Tracking

Welcome to Beads! This repository uses **Beads** for issue tracking - a modern, AI-native tool designed to live directly in your codebase alongside your code.

## What is Beads?

Beads is issue tracking that lives in your repo, making it perfect for AI coding agents and developers who want their issues close to their code. No web UI required - everything works through the CLI and integrates seamlessly with git.

**Learn more:** [github.com/steveyegge/beads](https://github.com/steveyegge/beads)

## Quick Start

### Essential Commands

```bash
# Create new issues
bd create "Add user authentication"

# View all issues
bd list

# View issue details
bd show <issue-id>

# Update issue status
bd update <issue-id> --status in_progress
bd update <issue-id> --status done

# Sync with git remote
bd sync
```

### Working with Issues

Issues in Beads are:
- **Git-native**: Stored in `.beads/issues.jsonl` and synced like code
- **AI-friendly**: CLI-first design works perfectly with AI coding agents
- **Branch-aware**: Issues can follow your branch workflow
- **Always in sync**: Auto-syncs with your commits

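Each issue is a single JSON object on its own line of `.beads/issues.jsonl`. As an illustration only — the `myproject` id, title, timestamp, and author below are made up, while the field names mirror the real issue records that appear later in this diff:

```json
{"id":"myproject-1","title":"Add user authentication","status":"open","priority":2,"issue_type":"feature","created_at":"2026-01-01T00:00:00+01:00","created_by":"alice"}
```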
## Why Beads?

✨ **AI-Native Design**
- Built specifically for AI-assisted development workflows
- CLI-first interface works seamlessly with AI coding agents
- No context switching to web UIs

🚀 **Developer Focused**
- Issues live in your repo, right next to your code
- Works offline, syncs when you push
- Fast, lightweight, and stays out of your way

🔧 **Git Integration**
- Automatic sync with git commits
- Branch-aware issue tracking
- Intelligent JSONL merge resolution

## Get Started with Beads

Try Beads in your own projects:

```bash
# Install Beads
curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash

# Initialize in your repo
bd init

# Create your first issue
bd create "Try out Beads"
```

## Learn More

- **Documentation**: [github.com/steveyegge/beads/docs](https://github.com/steveyegge/beads/tree/main/docs)
- **Quick Start Guide**: Run `bd quickstart`
- **Examples**: [github.com/steveyegge/beads/examples](https://github.com/steveyegge/beads/tree/main/examples)

---

*Beads: Issue tracking that moves at the speed of thought* ⚡
@@ -1,62 +0,0 @@
# Beads Configuration File
# This file configures default behavior for all bd commands in this repository
# All settings can also be set via environment variables (BD_* prefix)
# or overridden with command-line flags

# Issue prefix for this repository (used by bd init)
# If not set, bd init will auto-detect from directory name
# Example: issue-prefix: "myproject" creates issues like "myproject-1", "myproject-2", etc.
# issue-prefix: ""

# Use no-db mode: load from JSONL, no SQLite, write back after each command
# When true, bd will use .beads/issues.jsonl as the source of truth
# instead of SQLite database
# no-db: false

# Disable daemon for RPC communication (forces direct database access)
# no-daemon: false

# Disable auto-flush of database to JSONL after mutations
# no-auto-flush: false

# Disable auto-import from JSONL when it's newer than database
# no-auto-import: false

# Enable JSON output by default
# json: false

# Default actor for audit trails (overridden by BD_ACTOR or --actor)
# actor: ""

# Path to database (overridden by BEADS_DB or --db)
# db: ""

# Auto-start daemon if not running (can also use BEADS_AUTO_START_DAEMON)
# auto-start-daemon: true

# Debounce interval for auto-flush (can also use BEADS_FLUSH_DEBOUNCE)
# flush-debounce: "5s"

# Git branch for beads commits (bd sync will commit to this branch)
# IMPORTANT: Set this for team projects so all clones use the same sync branch.
# This setting persists across clones (unlike database config which is gitignored).
# Can also use BEADS_SYNC_BRANCH env var for local override.
# If not set, bd sync will require you to run 'bd config set sync.branch <branch>'.
# sync-branch: "beads-sync"

# Multi-repo configuration (experimental - bd-307)
# Allows hydrating from multiple repositories and routing writes to the correct JSONL
# repos:
#   primary: "."          # Primary repo (where this database lives)
#   additional:           # Additional repos to hydrate from (read-only)
#     - ~/beads-planning  # Personal planning repo
#     - ~/work-planning   # Work planning repo

# Integration settings (access with 'bd config get/set')
# These are stored in the database, not in this file:
# - jira.url
# - jira.project
# - linear.url
# - linear.api-key
# - github.org
# - github.repo
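The comments above describe three layers for each setting: this config file, a `BD_*` environment variable, and a command-line flag. A pure-shell sketch of how such layering typically resolves, using the `actor` setting as the example — the precedence order shown (flag over env over file) is the conventional one and an assumption here, and all three values are hypothetical stand-ins:

```shell
# Resolution order for the actor setting: config file < BD_ACTOR env var < --actor flag.
actor_config=""                  # actor: "" (unset in the config file)
actor_env="${BD_ACTOR:-}"        # environment override, if exported
actor_flag="alice"               # value passed via --actor on the command line
actor="${actor_flag:-${actor_env:-$actor_config}}"
echo "$actor"                    # → alice
```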
@@ -1,15 +0,0 @@
{"id":"AGENTS-1jw","title":"Athena prompt: Convert to numbered responsibility format","description":"Athena prompt uses bullet points under 'Core Capabilities' section instead of numbered lists. Per agent-development skill best practices, responsibilities should be numbered (1, 2, 3) for clarity. Update prompts/athena.txt to use numbered format.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:16.133701271+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:32:26.165270695+01:00","closed_at":"2026-01-26T19:32:26.165270695+01:00","close_reason":"Converted responsibility subsections from ### numbered headers to numbered list format (1., 2., 3., 4.) with bold titles"}
{"id":"AGENTS-27m","title":"Create prompts/chiron-forge.txt with Chiron-Forge's build/execution mode system prompt","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T18:47:31.083994237+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T18:48:45.012894731+01:00","closed_at":"2026-02-03T18:48:45.012894731+01:00","close_reason":"Created prompts/chiron-forge.txt with Chiron-Forge's build/execution mode system prompt (3185 chars, 67 lines)"}
{"id":"AGENTS-7gt","title":"Athena prompt: Rename Core Capabilities to exact header","description":"Athena prompt uses 'Core Capabilities' section header instead of 'Your Core Responsibilities:'. Per agent-development skill guidelines, the exact header 'Your Core Responsibilities:' should be used for consistency. Update prompts/athena.txt to use the exact recommended header.","status":"closed","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:07.223102836+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:31:19.080626796+01:00","closed_at":"2026-01-26T19:31:19.080626796+01:00","close_reason":"Renamed 'Core Capabilities' section header to exact 'Your Core Responsibilities:' in prompts/athena.txt"}
{"id":"AGENTS-8ie","title":"Set up PARA work structure with 10 Basecamp projects","description":"Create 01-projects/work/ structure with project folders for all Basecamp projects. Each project needs: _index.md (MOC with Basecamp link), meetings/, decisions/, notes/. Also set up 02-areas/work/ for ongoing responsibilities.","status":"closed","priority":0,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.048622809+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:57:09.033627658+01:00","closed_at":"2026-01-28T18:57:09.033627658+01:00","close_reason":"Created complete PARA work structure: 01-projects/work/ with 10 project folders (each with _index.md, meetings/, decisions/, notes/), 02-areas/work/ with 5 area files. Projects use placeholder names - user can customize with actual Basecamp data."}
{"id":"AGENTS-9cs","title":"Configure basecamp skill with real projects","description":"Configure basecamp skill to work with real projects. Need to: get user's Basecamp projects, map them to PARA structure, test morning planning workflow with Basecamp todos.","status":"closed","priority":0,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.04844425+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:57:14.097333313+01:00","closed_at":"2026-01-28T18:57:14.097333313+01:00","close_reason":"Enhanced basecamp skill with project mapping configuration. Added section on mapping Basecamp projects to PARA structure, with configuration examples and usage patterns. Ready for user to fetch actual projects and set up mappings."}
{"id":"AGENTS-b74","title":"Create skills/msteams/SKILL.md with MS Teams Graph API integration documentation","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T18:50:21.728376088+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T18:52:08.609302234+01:00","closed_at":"2026-02-03T18:52:08.609302234+01:00","close_reason":"Created skills/msteams/SKILL.md with complete MS Teams Graph API integration documentation covering channels, messages, meetings, and chat operations"}
{"id":"AGENTS-ch2","title":"Create skills/outlook/SKILL.md with Outlook Graph API documentation","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T18:49:24.814232462+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T18:54:30.910986438+01:00","closed_at":"2026-02-03T18:54:30.910986438+01:00","close_reason":"Completed: Created skills/outlook/SKILL.md with Outlook Graph API documentation including mail CRUD, calendar, contacts, folders, and workflow examples. Validation passed."}
{"id":"AGENTS-der","title":"Create Outline skill for MCP integration","status":"closed","priority":2,"issue_type":"feature","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.042886345+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:51:21.662507568+01:00","closed_at":"2026-01-28T18:51:21.662507568+01:00","close_reason":"Created outline/SKILL.md with comprehensive workflows, tool references, and integration patterns. Added references/outline-workflows.md and references/export-patterns.md for detailed examples."}
{"id":"AGENTS-fac","title":"Design Teams transcript processing workflow (manual)","description":"Design manual workflow for Teams transcript processing: DOCX upload → extract text → AI analysis → meeting note + action items → optional Basecamp sync. Create templates and integration points.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.052076817+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:56:34.567325504+01:00","closed_at":"2026-01-28T18:56:34.567325504+01:00","close_reason":"Created comprehensive Teams transcript workflow guide in skills/meeting-notes/references/teams-transcript-workflow.md. Includes: manual step-by-step process, Python script for DOCX extraction, AI analysis prompts, Obsidian templates, Basecamp sync integration, troubleshooting guide."}
{"id":"AGENTS-in5","title":"Athena prompt: Standardize section headers","description":"Athena prompt uses 'Ethical Guidelines' and 'Methodological Rigor' headers instead of standard 'Quality Standards' and 'Edge Cases' headers. While semantically equivalent, skill recommends exact headers for consistency. Consider renaming in prompts/athena.txt.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:21.720932741+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:33:15.959382333+01:00","closed_at":"2026-01-26T19:33:15.959382333+01:00","close_reason":"Renamed '## Ethical Guidelines' to '## Quality Standards' for consistency with agent-development skill guidelines"}
{"id":"AGENTS-lyd","title":"Athena agent: Add explicit mode field","description":"Athena agent is missing the explicit 'mode': 'subagent' field. Per agent-development skill guidelines, all agents should explicitly declare mode for clarity. Current config relies on default which makes intent unclear.","status":"closed","priority":0,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:31:46.255196119+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:30:46.191545632+01:00","closed_at":"2026-01-26T19:30:46.191545632+01:00","close_reason":"Added explicit 'mode': 'subagent' field to athena agent in agent/agents.json"}
{"id":"AGENTS-mfw","title":"Athena agent: Add temperature setting","description":"Athena agent lacks explicit temperature configuration. Per agent-development skill, research/analysis agents should use temperature 0.0-0.2 for focused, deterministic, consistent results. Add 'temperature': 0.1 to agent config in agents.json.","status":"closed","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:31:55.726506579+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:31:06.905697638+01:00","closed_at":"2026-01-26T19:31:06.905697638+01:00","close_reason":"Added 'temperature': 0.1 to athena agent in agent/agents.json for focused, deterministic results"}
{"id":"AGENTS-mvv","title":"Enhance daily routines with work context","status":"closed","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.066628593+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:56:34.576536473+01:00","closed_at":"2026-01-28T18:56:34.576536473+01:00","close_reason":"Enhanced daily-routines skill with full work context integration. Added sections for: morning planning with Basecamp/Outline, evening reflection with work metrics, weekly review with project status tracking, work area health review, work inbox processing."}
{"id":"AGENTS-o45","title":"Agent development: Document validation script availability","description":"The agent-development skill references scripts/validate-agent.sh but this script doesn't exist in the repository. Consider either: (1) creating the validation script, or (2) removing the reference and only documenting the python3 alternative.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:27.325525742+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:34:17.846875543+01:00","closed_at":"2026-01-26T19:34:17.846875543+01:00","close_reason":"Removed references to non-existent scripts/validate-agent.sh and documented python3 validation as the primary method"}
{"id":"AGENTS-o7l","title":"Create agents.json with 6 agent definitions","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T20:13:02.959856824+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T20:13:58.186033248+01:00","closed_at":"2026-02-03T20:13:58.186033248+01:00","close_reason":"Created agents.json with all 6 agent definitions (chiron, chiron-forge, hermes, athena, apollo, calliope) with proper mode, model, prompt references, and permissions. Verified with Python JSON validation."}
@@ -1,4 +0,0 @@
{
  "database": "beads.db",
  "jsonl_export": "issues.jsonl"
}
.gitignore (vendored, new file, 14 lines added)
@@ -0,0 +1,14 @@
.todos/

# Sidecar worktree state files
.sidecar/
.sidecar-agent
.sidecar-task
.sidecar-pr
.sidecar-start.sh
.sidecar-base
.td-root

# Nix / direnv
.direnv/
result
@@ -1,338 +0,0 @@
# Learnings - Chiron Agent Framework

## Wave 1, Task 1: Create agents.json with 6 agent definitions

### Agent Structure Pattern

**Required fields per agent:**
- `description`: Clear purpose statement
- `mode`: "primary" for orchestrators, "subagent" for specialists
- `model`: "zai-coding-plan/glm-4.7" (consistent across all agents)
- `prompt`: File reference pattern `{file:./prompts/<name>.txt}`
- `permission`: Either explicit permissions or simple `"question": "allow"`

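Assembled from the fields above, a single agent entry would look roughly like this. This is a sketch, not the actual contents of `agents.json`; the athena values shown are the ones documented elsewhere in these notes:

```json
{
  "athena": {
    "description": "Work knowledge specialist for the Outline wiki",
    "mode": "subagent",
    "model": "zai-coding-plan/glm-4.7",
    "prompt": "{file:./prompts/athena.txt}",
    "permission": { "question": "allow" }
  }
}
```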
### Primary vs Subagent Modes

**Primary agents** (2): chiron, chiron-forge
- Can be invoked directly by user
- Orchestrate and delegate work
- Higher permission levels (external_directory rules)

**Subagents** (4): hermes, athena, apollo, calliope
- Invoked by primary agents via Task tool
- Specialized single-purpose workflows
- Simpler permission structure (question: "allow")

### Permission Patterns

**Primary agents**: Complex permission structure
```json
"permission": {
  "external_directory": {
    "~/p/**": "allow",
    "*": "ask"
  }
}
```

**Subagents**: Simple permission structure
```json
"permission": {
  "question": "allow"
}
```

### Agent Domains

1. **chiron**: Plan Mode - Read-only analysis and planning
2. **chiron-forge**: Build Mode - Full execution with safety prompts
3. **hermes**: Work communication (Basecamp, Outlook, Teams)
4. **athena**: Work knowledge (Outline wiki, documentation)
5. **apollo**: Private knowledge (Obsidian vault, personal notes)
6. **calliope**: Writing (documentation, reports, prose)

### Verification Commands

**Agent count:**
```bash
python3 -c "import json; data = json.load(open('agents/agents.json')); print(len(data))"
# Expected output: 6
```

**Agent names:**
```bash
python3 -c "import json; data = json.load(open('agents/agents.json')); print(sorted(data.keys()))"
# Expected output: ['apollo', 'athena', 'calliope', 'chiron', 'chiron-forge', 'hermes']
```

### Key Takeaways

- Prompt files use file references, not inline content (Wave 2 will create these)
- Model is consistent across all agents for predictable behavior
- Permission structure matches agent capability level (more complex for primaries)
- Mode determines how agent can be invoked (direct vs delegated)

## Wave 2, Task 6: Create Athena (Work Knowledge) system prompt

### Prompt Structure Pattern Consistency

All subagent prompts follow identical structure from skill-creator guidance:
1. **Role definition**: "You are [name], the Greek [role], specializing in [domain]"
2. **Your Core Responsibilities**: Numbered list of primary duties
3. **Process**: Numbered steps for workflow execution
4. **Quality Standards**: Bulleted list of requirements
5. **Output Format**: Structure specification
6. **Edge Cases**: Bulleted list of exception handling
7. **Tool Usage**: Instructions for tool interaction (especially Question tool)
8. **Boundaries**: Explicit DO NOT statements with domain attribution

### Athena's Domain Specialization

**Role**: Work knowledge specialist for Outline wiki
- Primary tool: Outline wiki integration (document CRUD, search, collections, sharing)
- Core activities: wiki search, knowledge retrieval, documentation updates, knowledge organization
- Question tool usage: Document selection, search scope clarification, collection specification

**Differentiation from other agents:**
- Hermes (communication): Short messages, team communication tools (Basecamp, Teams, Outlook)
- Apollo (private knowledge): Obsidian vaults, personal notes, private data
- Calliope (writing): Documentation drafting, creative prose, reports
- Athena (work knowledge): Team wiki, Outline, shared documentation repositories

### Quality Focus for Knowledge Work

Key quality standards unique to Athena:
- Outline-specific understanding: collections, documents, sharing permissions, revision history
- Knowledge structure preservation: hierarchy, relationships, cross-references
- Identification of outdated information for updates
- Consistency in terminology across documentation
- Pattern recognition for organization improvements

### Boundary Clarity

Boundaries section explicitly references other agents' domains:
- "Do NOT handle short communication (Hermes's domain)"
- "Do NOT access private knowledge (Apollo's domain)"
- "Do NOT write creative content (Calliope's domain)"
- Collaboration section acknowledges cross-agent workflows

### Verification Approach

Used grep commands to verify domain presence:
- `grep -qi "outline"` → Confirms Outline tool specialization
- `grep -qi "wiki\|knowledge"` → Confirms knowledge base focus
- `grep -qi "document"` → Confirms document management capabilities

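The three checks above chain into one pass/fail command. Since this is only an illustration, a hypothetical sample string stands in for the real `prompts/athena.txt`:

```shell
# Hypothetical sample text standing in for prompts/athena.txt
sample="Athena maintains the Outline wiki and its knowledge documents."
printf '%s\n' "$sample" | grep -qi "outline" &&
  printf '%s\n' "$sample" | grep -qiE "wiki|knowledge" &&
  printf '%s\n' "$sample" | grep -qi "document" &&
  echo "all domain checks passed"
```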
All verification checks passed successfully.

## Wave 2, Task 5: Create Hermes system prompt

### Prompt Structure Pattern

**Consistent sections across all subagent prompts:**
1. Role definition (You are [role] specializing in [domain])
2. Core Responsibilities (5-7 bullet points of primary duties)
3. Process (5-6 numbered steps for workflow)
4. Quality Standards (4-5 bullet points of output criteria)
5. Output Format (3-5 lines describing structure)
6. Edge Cases (5-6 bullet points of exceptional scenarios)
7. Tool Usage (Question tool + domain-specific MCP tools)
8. Boundaries (5-6 bullet points of what NOT to do)

### Hermes-Specific Domain Elements

**Greek mythology framing:** Hermes - god of communication, messengers, swift transactions

**Platform coverage:**
- Basecamp: tasks, projects, todos, message boards, campfire
- Outlook: email drafting, sending, inbox management
- Teams: meeting scheduling, channel messages, chat conversations

**Focus areas:** Task updates, email drafting, meeting scheduling, quick communication

**Question tool triggers:**
- Platform choice ambiguous
- Recipients unclear
- Project context missing

### Cross-Agent Boundaries

Hermes does NOT handle:
- Documentation repositories/wiki (Athena's domain)
- Personal tools/private knowledge (Apollo's domain)
- Long-form writing/reports (Calliope's domain)

### Verification Pattern

```bash
# Required content checks
grep -qi "basecamp" prompts/hermes.txt
grep -qiE "outlook|email" prompts/hermes.txt
grep -qiE "teams|meeting" prompts/hermes.txt
```

### Key Takeaways

- Use exact headers from SKILL.md template (line 358: "Your Core Responsibilities:")
- Second-person voice addressing agent directly
- 5-6 sections following consistent pattern
- Boundaries section explicitly references other agents' domains
- 45-50 lines is appropriate length for subagent prompts
- Include MCP tool references in Tool Usage section

## Wave 2, Task 3: Create Chiron (Plan Mode) system prompt

### Prompt Structure Pattern

**Standard sections (from agent-development/SKILL.md):**
- "You are [role]..." - Direct second-person address
- "**Your Core Responsibilities:**" - Numbered list (1, 2, 3), not bullet points
- "**Process:**" - Step-by-step workflow
- "**Quality Standards:**" - Evaluation criteria
- "**Output Format:**" - Response structure
- "**Edge Cases:**" - Exception handling
- "**Tool Usage:**" - Tool-specific guidance
- "**Boundaries:**" - Must NOT Do section

### Chiron-Specific Design

**Key role definition:**
- Main orchestrator in plan/analysis mode
- Read-only permissions, delegates execution to Chiron-Forge
- Coordinates 4 subagents via Task tool delegation

**Delegation logic:**
- Hermes: Work communication (email, messages, meetings)
- Athena: Work knowledge (wiki, documentation, project info)
- Apollo: Private knowledge (Obsidian vault, personal notes)
- Calliope: Writing (documentation, reports, prose)
- Chiron-Forge: Execution (file modifications, commands, builds)

**Question tool usage:**
- REQUIRED when requests are ambiguous
- Required for unclear intent or scope
- Required before delegation or analysis

**Boundaries:**
- Do NOT modify files directly (read-only)
- Do NOT execute commands (delegate to Chiron-Forge)
- Do NOT access subagent domains directly (Hermes, Athena, Apollo, Calliope)

### Style Reference

**Used apollo.txt and calliope.txt as style guides:**
- Consistent section headers with exact wording
- Second-person address throughout
- Numbered responsibilities list
- Clear separation between sections
- Specific tool usage instructions

### Verification Commands

**File size:**
```bash
wc -c prompts/chiron.txt  # Expected: > 500
```

**Keyword validation:**
```bash
grep -qi "orchestrat" prompts/chiron.txt  # Should find match
grep -qi "delegat" prompts/chiron.txt  # Should find match
grep -qi "hermes\|athena\|apollo\|calliope" prompts/chiron.txt  # Should find all 4
```

### Key Takeaways

- Standardized section headers are critical for consistency across prompts
- Numbered lists for responsibilities (not bullet points) match best practices
- Clear delegation routing prevents overlap between agent domains
- Question tool requirement prevents action on ambiguous requests
- Read-only orchestrator mode cleanly separates planning from execution
- All 4 subagents must be explicitly mentioned for routing clarity

## Wave 2, Task 4: Create Chiron-Forge (Build Mode) system prompt

### Primary Agent Prompt Structure

Primary agent prompts follow a similar structure to subagents but with expanded scope:
1. **Role definition**: "You are Chiron-Forge, the Greek centaur smith, specializing in [domain]"
2. **Your Core Responsibilities**: Numbered list emphasizing execution over planning
3. **Process**: 7-step workflow including delegation pattern
4. **Quality Standards**: Focus on execution accuracy and safety
5. **Output Format**: Execution summary structure
6. **Edge Cases**: Handling of destructive operations and failures
7. **Tool Usage**: Explicit permission boundaries and safety protocols
8. **Boundaries**: Clear separation from Chiron's planning role

### Chiron-Forge vs Chiron Separation

**Chiron-Forge (Build Mode):**
- Purpose: Execution and task completion
- Focus: Modifying files, running commands, building artifacts
- Permissions: Full write access with safety constraints
- Delegation: Routes specialized work to subagents
- Safety: Uses Question tool for destructive operations

**Chiron (Plan Mode - Wave 2, Task 3):**
- Purpose: Read-only analysis and planning
- Focus: Analysis, planning, coordination
- Permissions: Read-only access
- Role: Orchestrator without direct execution

### Permission Structure Mapping to Prompt

From agents.json chiron-forge permissions:
```json
"permission": {
  "read": { "*": "allow", "*.env": "deny" },
  "edit": "allow",
  "bash": { "*": "allow", "rm *": "ask", "git push *": "ask", "sudo *": "deny" }
}
```

Mapped to prompt instructions:
- "Execute commands, but use Question for rm, git push"
- "Use Question tool for destructive operations"
- "DO NOT execute destructive operations without confirmation"

### Delegation Pattern for Primary Agents

Primary agents have unique delegation responsibilities:
- **Chiron-Forge**: Delegates based on domain expertise (Hermes for communications, Athena for knowledge, etc.)
- **Chiron**: Delegates based on planning and coordination needs

Process includes delegation as step 5:
1. Understand the Task
2. Clarify Scope
3. Identify Dependencies
4. Execute Work
5. **Delegate to Subagents**: Use Task tool for specialized domains
6. Verify Results
7. Report Completion

### Verification Commands

Successful verification of prompt requirements:
```bash
# File character count > 500
wc -c prompts/chiron-forge.txt
# Output: 2598 (✓)

# Domain keyword verification
grep -qi "execut" prompts/chiron-forge.txt
# Output: Found 'execut' (✓)

grep -qi "build" prompts/chiron-forge.txt
# Output: Found 'build' (✓)
```

All verification checks passed successfully.

### Key Takeaways

- Primary agent prompts require clear separation from each other (Chiron plans, Chiron-Forge executes)
- Permission structure in agents.json must be reflected in prompt instructions
- Safety protocols for destructive operations are critical for write-access agents
- Delegation is a core responsibility for both primary agents, but with different criteria
- Role naming consistency reinforces domain separation (centaur smith vs wise centaur)

@@ -1,748 +0,0 @@
|
||||
# Agent Permissions Refinement

## TL;DR

> **Quick Summary**: Refine OpenCode agent permissions for Chiron (planning) and Chiron-Forge (build) to implement 2025 AI security best practices with principle of least privilege, human-in-the-loop for critical actions, and explicit guardrails against permission bypass.

> **Deliverables**:
> - Updated `agents/agents.json` with refined permissions for Chiron and Chiron-Forge
> - Critical bug fix: duplicate `external_directory` key in Chiron config
> - Enhanced secret blocking with additional patterns
> - Bash injection prevention rules
> - Git protection against secret commits and repo hijacking

> **Estimated Effort**: Medium
> **Parallel Execution**: NO - sequential changes to a single config file
> **Critical Path**: Fix duplicate key → Apply Chiron permissions → Apply Chiron-Forge permissions → Validate

---

## Context

### Original Request
User wants to refine agent permissions for:
- **Chiron**: Planning agent with read-only access, restricted to read-only subagents, no file editing, can create beads issues
- **Chiron-Forge**: Build agent with write access restricted to ~/p/**, git commits allowed but git push asks, package install commands ask
- **General**: Sane defaults that are secure but open enough for autonomous work

### Interview Summary
**Key Discussions**:
- Chiron: Read-only planning, no file editing, bash denied except for `bd *` commands, external_directory ~/p/** only, task permission to restrict subagents to explore/librarian/athena + chiron-forge for handoff
- Chiron-Forge: Write access restricted to ~/p/**, git commit allow / git push ask, package install commands ask, git config deny
- Workspace path: ~/p/** is a symlink to ~/projects/personal/** (just replacing the path reference)
- Bash security: Block all bash redirect patterns (echo >, cat >, tee, etc.)

**Research Findings**:
- OpenCode supports granular permission rules with wildcards, last-match-wins
- 2025 best practices: principle of least privilege, tiered permissions (read-only auto, destructive ask, JIT privileges), human-in-the-loop for critical actions
- Security hardening: block command injection vectors, prevent git secret commits, add comprehensive secret blocking patterns
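
The "last-match-wins" behavior can be illustrated with a small resolver (semantics assumed from this description; the real OpenCode matcher may differ):

```python
from fnmatch import fnmatch

# Toy resolver for "granular permission rules with wildcards, last-match-wins".
# The default of "ask" and glob-style matching are assumptions for this sketch.
def resolve(command: str, rules: dict) -> str:
    decision = "ask"                      # assumed default when no rule matches
    for pattern, level in rules.items():  # dicts preserve declaration order
        if fnmatch(command, pattern):
            decision = level              # a later match overrides earlier ones
    return decision

rules = {"*": "deny", "bd *": "allow"}    # Chiron's planned bash rules
print(resolve("bd ready", rules))   # allow - "bd *" matches after "*"
print(resolve("rm -rf /", rules))   # deny  - only "*" matches
```

This ordering is why the catch-all rule must come first and the specific exceptions after it.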

### Metis Review
**Critical Issues Identified**:
1. **Duplicate `external_directory` key** in Chiron config (lines 8-9 and 27) - the second key overrides the first, breaking intended behavior
2. **Bash edit bypass**: Even with `edit: deny`, bash can write files via redirection (`echo "x" > file.txt`, `cat >`, `tee`)
3. **Git secret protection**: Agent could commit secrets (read .env, then git commit .env)
4. **Git config hijacking**: Agent could modify .git/config to push to an attacker-controlled repo
5. **Command injection**: Malicious content could execute via `$()`, backticks, `eval`, `source`
6. **Secret blocking incomplete**: Missing patterns for `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

**Guardrails Applied**:
- Fix duplicate external_directory key (use a single object with catch-all `"*": "ask"` after specific rules)
- Add bash file write protection patterns (echo >, cat >, printf >, tee, > operators)
- Add git secret protection (`git add *.env*`: deny, `git commit *.env*`: deny)
- Add git config protection (`git config *`: deny for Chiron-Forge)
- Add bash injection prevention (`$(*`, `` `* ``, `eval *`, `source *`)
- Expand secret blocking with additional patterns
- Add /run/agenix/* to read deny list
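
Issue #1 is easy to miss because standard JSON parsers (jq included) resolve duplicate keys silently, keeping the last one. A sketch of how the duplicate can actually be caught, using Python's pairs hook (the sample document is illustrative):

```python
import json

# Standard parsing keeps only the LAST duplicate key - the bug is invisible.
RAW = '{"external_directory": {"~/p/**": "allow"}, "external_directory": {"*": "ask"}}'
print(len(json.loads(RAW)))  # 1 - the first key was silently dropped

# object_pairs_hook sees every key before deduplication, so duplicates surface.
dupes = []
def record(pairs):
    seen = set()
    for key, _ in pairs:
        if key in seen:
            dupes.append(key)
        seen.add(key)
    return dict(pairs)

json.loads(RAW, object_pairs_hook=record)
print(dupes)  # ['external_directory']
```

This is also why the jq-based checks later in this plan can only confirm the key's presence after parsing, not prove the absence of duplicates in the raw file.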

---

## Work Objectives

### Core Objective
Refine OpenCode agent permissions in `agents/agents.json` to implement security hardening based on 2025 AI agent best practices while maintaining autonomous workflow capabilities.

### Concrete Deliverables
- Updated `agents/agents.json` with:
  - Chiron: Read-only permissions, subagent restrictions, bash denial (except `bd *`), no file editing
  - Chiron-Forge: Write access scoped to ~/p/**, git commit allow / push ask, package install ask, git config deny
  - Both: Enhanced secret blocking, bash injection prevention, git secret protection

### Definition of Done
- [x] Permission configuration updated in `agents/agents.json`
- [x] JSON syntax valid (no duplicate keys, valid structure)
- [x] Workspace path validated (~/p/** exists and is correct)
- [x] Acceptance criteria tests pass (via manual verification)

### Must Have
- Chiron cannot edit files directly
- Chiron cannot write files via bash (redirects blocked)
- Chiron restricted to read-only subagents + chiron-forge for handoff
- Chiron-Forge can only write to ~/p/**
- Chiron-Forge cannot run git config
- Both agents block secret file reads
- Both agents prevent command injection
- Git operations cannot commit secrets
- No duplicate keys in the permission configuration

### Must NOT Have (Guardrails)
- **Edit bypass via bash**: No bash redirection patterns that allow file writes when `edit: deny`
- **Git secret commits**: No ability to git add/commit .env or credential files
- **Repo hijacking**: No git config modification allowed for Chiron-Forge
- **Command injection**: No `$()`, backticks, `eval`, `source` execution via bash
- **Write scope escape**: Chiron-Forge cannot write outside ~/p/** without asking
- **Secret exfiltration**: No access to .env, .ssh, .gnupg, credentials, secrets, .pem, .key, /run/agenix
- **Unrestricted bash for Chiron**: Only `bd *` commands allowed
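
The "edit bypass via bash" guardrail can be sanity-checked offline; a sketch that globs candidate commands against deny patterns adapted from this plan (the matching semantics and the `tee *` form are assumptions):

```python
from fnmatch import fnmatch

# Deny patterns, adapted from this plan, that catch file writes smuggled
# through bash even when `edit: deny` is set. Glob semantics are assumed.
WRITE_DENY = ["echo * > *", "cat * > *", "printf * > *", "tee *", "*>*"]

def blocked(command: str) -> bool:
    return any(fnmatch(command, p) for p in WRITE_DENY)

print(blocked('echo "x" > file.txt'))  # True  - redirect caught
print(blocked("tee /etc/passwd"))      # True  - tee write caught
print(blocked("bd ready"))             # False - planning command passes
```

Running the agents' expected command mix through a check like this is a cheap way to spot both gaps and over-blocking before deployment.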

---

## Verification Strategy (MANDATORY)

> This is configuration work, not code development. Manual verification is required after deployment.

### Test Decision
- **Infrastructure exists**: YES (home-manager deployment)
- **User wants tests**: NO (manual-only verification)
- **Framework**: None

### Manual Verification Procedures

Each TODO includes EXECUTABLE verification procedures that users can run to validate changes.

**Verification Commands to Run After Deployment:**

1. **JSON Syntax Validation**:
   ```bash
   # Validate JSON structure
   jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
   # Expected: Exit code 0 (valid JSON)

   # Check for duplicate keys (manual review of the chiron permission object;
   # jq cannot report duplicates because it keeps only the last key)
   # Expected: Single external_directory key, no other duplicates
   ```

2. **Workspace Path Validation**:
   ```bash
   ls -la ~/p/ 2>&1
   # Expected: Directory exists, shows contents (likely symlink to ~/projects/personal/)
   ```

3. **After Deployment - Chiron Read-Only Test** (manual):
   - Have Chiron attempt to edit a test file
     - Expected: Permission denied with clear error message
   - Have Chiron attempt to write via bash (`echo "test" > /tmp/test.txt`)
     - Expected: Permission denied
   - Have Chiron run the `bd ready` command
     - Expected: Command succeeds, returns JSON output with issue list
   - Have Chiron attempt to invoke a build-capable subagent (sisyphus-junior)
     - Expected: Permission denied

4. **After Deployment - Chiron Workspace Access** (manual):
   - Have Chiron read a file within ~/p/**
     - Expected: Success, returns file contents
   - Have Chiron read a file outside ~/p/**
     - Expected: Permission denied or ask user
   - Have Chiron delegate to explore/librarian/athena
     - Expected: Success, subagent executes

5. **After Deployment - Chiron-Forge Write Access** (manual):
   - Have Chiron-Forge write a test file in a ~/p/** directory
     - Expected: Success, file created
   - Have Chiron-Forge attempt to write a file to /tmp
     - Expected: Ask user for approval
   - Have Chiron-Forge run `git add` and `git commit -m "test"`
     - Expected: Success, commit created without asking
   - Have Chiron-Forge attempt `git push`
     - Expected: Ask user for approval
   - Have Chiron-Forge attempt `git config`
     - Expected: Permission denied
   - Have Chiron-Forge attempt `npm install lodash`
     - Expected: Ask user for approval

6. **After Deployment - Secret Blocking Tests** (manual):
   - Attempt to read a .env file with both agents
     - Expected: Permission denied
   - Attempt to read /run/agenix/ with Chiron
     - Expected: Permission denied
   - Attempt to read .env.example (should be allowed)
     - Expected: Success

7. **After Deployment - Bash Injection Prevention** (manual):
   - Have an agent attempt `bash -c "$(cat /malicious)"`
     - Expected: Permission denied
   - Have an agent attempt backtick substitution (`` `cat /malicious` ``)
     - Expected: Permission denied
   - Have an agent attempt an `eval` command
     - Expected: Permission denied

8. **After Deployment - Git Secret Protection** (manual):
   - Have an agent attempt `git add .env`
     - Expected: Permission denied
   - Have an agent attempt `git commit .env`
     - Expected: Permission denied

9. **Deployment Verification**:
   ```bash
   # After home-manager switch, verify the config is embedded correctly
   jq '.agent.chiron.permission.external_directory' ~/.config/opencode/config.json
   # Expected: Shows the ~/p/** rule, no duplicate keys

   # Verify agents load without errors
   # Expected: No startup errors when launching OpenCode
   ```

---

## Execution Strategy

### Parallel Execution Waves

> Single config file - changes are largely sequential; only Tasks 2 and 3 can overlap once Task 1 lands (see matrix).

```
Single-Threaded Execution:
Task 1: Fix duplicate external_directory key
Task 2: Apply Chiron permission updates
Task 3: Apply Chiron-Forge permission updates
Task 4: Validate configuration
```

### Dependency Matrix

| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|----------------------|
| 1    | None       | 2, 3   | None (must start)    |
| 2    | 1          | 4      | 3                    |
| 3    | 1          | 4      | 2                    |
| 4    | 2, 3       | None   | None (validation)    |

### Agent Dispatch Summary

| Task | Recommended Agent |
|------|-------------------|
| 1    | delegate_task(category="quick", load_skills=["git-master"]) |
| 2    | delegate_task(category="quick", load_skills=["git-master"]) |
| 3    | delegate_task(category="quick", load_skills=["git-master"]) |
| 4    | User (manual verification) |

---

## TODOs

> Implementation tasks for agent configuration changes. Each task MUST include acceptance criteria with executable verification.

- [x] 1. Fix Duplicate external_directory Key in Chiron Config

**What to do**:
- Remove the duplicate `external_directory` key from the Chiron permission object
- Consolidate into a single object with the specific rule + catch-all `"*": "ask"`
- Replace `~/projects/personal/**` with `~/p/**` (symlink to the same directory)

**Must NOT do**:
- Leave duplicate keys (the second key overrides the first, breaking the config)
- Skip workspace path validation (verify ~/p/** exists)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple JSON edit, single file change, no complex logic
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (no investigation required)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: Tasks 2, 3 (they depend on a clean config)
- **Blocked By**: None (can start immediately)

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:1-135` - Current agent configuration structure (JSON format, permission object structure)
- `agents/agents.json:7-29` - Chiron permission object (current state with duplicate key)

**API/Type References** (contracts to implement against):
- OpenCode permission schema: `{"permission": {"bash": {...}, "edit": "...", "external_directory": {...}, "task": {...}}}`

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user decisions and requirements
- Metis analysis: Critical issue #1 - Duplicate external_directory key

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission system documentation (allow/ask/deny, wildcards, last-match-wins)
- OpenCode docs: https://opencode.ai/docs/agents/ - Agent configuration format

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json` - Target file to modify; shows the current structure and the duplicate key bug
- Interview draft - Contains all user decisions (~/p/** path, subagent restrictions, etc.)
- OpenCode permissions docs - Explains permission system mechanics (last-match-wins is critical for rule ordering)
- Metis analysis - Identifies the duplicate key bug that MUST be fixed

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0 (valid JSON)

# Verify a single external_directory key in the chiron permission object.
# Note: jq deduplicates keys at parse time (last one wins), so this confirms
# the key survives parsing; duplicates must be checked in the raw file text.
jq '.chiron.permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep -c external_directory
# Assert: Output is "1" (exactly one external_directory key)

# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")
```

**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] external_directory key count output (should be "1")
- [x] Workspace path ls output (shows directory exists)

**Commit**: NO (group with Tasks 2 and 3)

- [x] 2. Apply Chiron Permission Updates

**What to do**:
- Set `edit` to `"deny"` (a planning agent should not write files)
- Set `bash` permissions to deny all except `bd *`:

```json
"bash": {
  "*": "deny",
  "bd *": "allow"
}
```

- Set `external_directory` to `~/p/**` with a catch-all ask:

```json
"external_directory": {
  "~/p/**": "allow",
  "*": "ask"
}
```

- Add a `task` permission to restrict subagents:

```json
"task": {
  "*": "deny",
  "explore": "allow",
  "librarian": "allow",
  "athena": "allow",
  "chiron-forge": "allow"
}
```

- Add `/run/agenix/*` to the read deny list
- Add expanded secret blocking patterns: `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

**Must NOT do**:
- Allow bash file write operators (echo >, cat >, tee, etc.) - these are added for both agents in Task 3
- Allow Chiron to invoke build-capable subagents beyond chiron-forge
- Skip the webfetch permission (should be "allow" for research capability)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from the draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in the draft)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 3)
- **Blocks**: Task 4
- **Blocked By**: Task 1

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:11-24` - Current Chiron read permissions with secret blocking patterns
- `agents/agents.json:114-132` - Athena permission object (read-only subagent reference pattern)

**API/Type References** (contracts to implement against):
- OpenCode task permission schema: `{"task": {"agent-name": "allow"}}`

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron permission decisions
- Metis analysis: Guardrails #7, #8 - Secret blocking patterns, task permission implementation

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/agents/#task-permissions - Task permission documentation
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission level definitions and pattern matching

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:11-24` - Shows the current secret blocking patterns to extend
- `agents/agents.json:114-132` - Shows the read-only subagent pattern for reference (athena: deny bash, deny edit)
- Interview draft - Contains the exact user requirements for Chiron permissions
- OpenCode task docs - Explains how to restrict subagent invocation via the task permission

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.chiron.permission.edit' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.bash."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.bash."bd *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '.chiron.permission.task."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.task | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Contains ["*", "athena", "chiron-forge", "explore", "librarian"]

jq '.chiron.permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '.chiron.permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

jq '.chiron.permission.read."/run/agenix/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```

**Evidence to Capture**:
- [x] Edit permission value (should be "deny")
- [x] Bash wildcard permission (should be "deny")
- [x] Bash bd permission (should be "allow")
- [x] Task wildcard permission (should be "deny")
- [x] Task allowlist keys (should show 5 entries)
- [x] External directory ~/p/** permission (should be "allow")
- [x] External directory wildcard permission (should be "ask")
- [x] Read /run/agenix/* permission (should be "deny")

**Commit**: NO (group with Task 3)

- [x] 3. Apply Chiron-Forge Permission Updates

**What to do**:
- Split `git *: "ask"` into granular rules:
  - Allow: `git add *`, `git commit *`, read-only commands (status, log, diff, branch, show, stash, remote)
  - Ask: `git push *`
  - Deny: `git config *`
- Change package managers from `"ask"` to granular rules:
  - Ask for installs: `npm install *`, `npm i *`, `npx *`, `pip install *`, `pip3 install *`, `uv *`, `bun install *`, `bun i *`, `bunx *`, `yarn install *`, `yarn add *`, `pnpm install *`, `pnpm add *`, `cargo install *`, `go install *`, `make install`
  - Allow other commands implicitly (let them use catch-all rules or existing allow patterns)
- Set `external_directory` to allow `~/p/**` with a catch-all ask:

```json
"external_directory": {
  "~/p/**": "allow",
  "*": "ask"
}
```

- Add bash file write protection patterns (apply to both agents):

```json
"bash": {
  "echo * > *": "deny",
  "cat * > *": "deny",
  "printf * > *": "deny",
  "tee": "deny",
  "*>*": "deny",
  ">*>*": "deny"
}
```

- Add bash command injection prevention (apply to both agents):

```json
"bash": {
  "$(*": "deny",
  "`*": "deny",
  "eval *": "deny",
  "source *": "deny"
}
```

- Add git secret protection patterns (apply to both agents):

```json
"bash": {
  "git add *.env*": "deny",
  "git commit *.env*": "deny",
  "git add *credentials*": "deny",
  "git add *secrets*": "deny"
}
```

- Add expanded secret blocking patterns to the read permission:
  - `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

**Must NOT do**:
- Remove existing bash deny rules for dangerous commands (dd, mkfs, fdisk, parted, eval, sudo, su, systemctl, etc.)
- Allow git config modifications
- Allow bash to write files via any method (must block all redirect patterns)
- Skip command injection prevention ($(), backticks, eval, source)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from the draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in the draft)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 2)
- **Blocks**: Task 4
- **Blocked By**: Task 1

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:37-103` - Current Chiron-Forge bash permissions (many explicit allow/ask/deny rules)
- `agents/agents.json:37-50` - Current Chiron-Forge read permissions with secret blocking

**API/Type References** (contracts to implement against):
- OpenCode permission schema: Same as Task 2

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron-Forge permission decisions
- Metis analysis: Guardrails #1-#6 - Bash edit bypass, git secret protection, command injection, git config protection

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission pattern matching (wildcards, last-match-wins)

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:37-103` - Shows the current bash permission structure (many explicit rules) to extend with new patterns
- `agents/agents.json:37-50` - Shows the current secret blocking to extend with additional patterns
- Interview draft - Contains the exact user requirements for Chiron-Forge permissions
- Metis analysis - Provides the bash injection prevention patterns and git protection rules

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs (note: "chiron-forge" contains a hyphen, so jq needs the
# bracket form .["chiron-forge"] rather than .chiron-forge):

# Verify git commit is allowed
jq '.["chiron-forge"].permission.bash."git commit *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

# Verify git push asks
jq '.["chiron-forge"].permission.bash."git push *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify git config is denied
jq '.["chiron-forge"].permission.bash."git config *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify npm install asks
jq '.["chiron-forge"].permission.bash."npm install *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify bash file write redirects are blocked
jq '.["chiron-forge"].permission.bash."echo * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.["chiron-forge"].permission.bash."cat * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.["chiron-forge"].permission.bash."tee"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify command injection is blocked
jq '.["chiron-forge"].permission.bash."$(*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.["chiron-forge"].permission.bash."`*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify git secret protection
jq '.["chiron-forge"].permission.bash."git add *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.["chiron-forge"].permission.bash."git commit *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify external_directory scope
jq '.["chiron-forge"].permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '.["chiron-forge"].permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify expanded secret blocking
jq '.["chiron-forge"].permission.read.".local/share/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.["chiron-forge"].permission.read.".cache/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.["chiron-forge"].permission.read."*.db"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```

**Evidence to Capture**:
- [x] Git commit permission (should be "allow")
- [x] Git push permission (should be "ask")
- [x] Git config permission (should be "deny")
- [x] npm install permission (should be "ask")
- [x] bash redirect echo > permission (should be "deny")
- [x] bash redirect cat > permission (should be "deny")
- [x] bash tee permission (should be "deny")
- [x] bash $() injection permission (should be "deny")
- [x] bash backtick injection permission (should be "deny")
- [x] git add *.env* permission (should be "deny")
- [x] git commit *.env* permission (should be "deny")
- [x] external_directory ~/p/** permission (should be "allow")
- [x] external_directory wildcard permission (should be "ask")
- [x] read .local/share/* permission (should be "deny")
- [x] read .cache/* permission (should be "deny")
- [x] read *.db permission (should be "deny")

**Commit**: YES (groups with Tasks 1, 2, 3)
- Message: `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening`
- Files: `agents/agents.json`
- Pre-commit: `jq '.' agents/agents.json > /dev/null 2>&1` (validate JSON)

- [x] 4. Validate Configuration (Manual Verification)
|
||||
|
||||
**What to do**:
|
||||
- Run JSON syntax validation: `jq '.' agents/agents.json`
|
||||
- Verify no duplicate keys in configuration
|
||||
- Verify workspace path exists: `ls -la ~/p/`
|
||||
- Document manual verification procedure for post-deployment testing
|
||||
|
||||
**Must NOT do**:
|
||||
- Skip workspace path validation
|
||||
- Skip duplicate key verification
|
||||
- Proceed to deployment without validation
|
||||
|
||||
**Recommended Agent Profile**:
|
||||
> **Category**: quick
|
||||
- Reason: Simple validation commands, documentation task
|
||||
> **Skills**: git-master
|
||||
- git-master: Git workflow for committing validation script or notes if needed
|
||||
> **Skills Evaluated but Omitted**:
|
||||
- research: Not needed (validation is straightforward)
|
||||
- librarian: Not needed (no external docs needed)
|
||||
|
||||
**Parallelization**:
|
||||
- **Can Run In Parallel**: NO
|
||||
- **Parallel Group**: Sequential
|
||||
- **Blocks**: None (final validation task)
- **Blocked By**: Tasks 2, 3

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `AGENTS.md` - Repository documentation structure

**API/Type References** (contracts to implement against):
- N/A (validation task)

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user requirements
- Metis analysis: Guardrails #1-#6 - Validation requirements

**External References** (libraries and frameworks):
- N/A (validation task)

**WHY Each Reference Matters** (explain the relevance):
- Interview draft - Contains all requirements to validate against
- Metis analysis - Identifies specific validation steps (duplicate keys, workspace path, etc.)

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs:

# JSON syntax validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0

# Verify no duplicate external_directory keys
cat /home/m3tam3re/p/AI/AGENTS/agents/agents.json | jq '.chiron.permission | keys' | grep external_directory | wc -l
# Assert: Output is "1"

cat /home/m3tam3re/p/AI/AGENTS/agents/agents.json | jq '.chiron-forge.permission | keys' | grep external_directory | wc -l
# Assert: Output is "1"

# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")

# Verify all permission keys are valid
cat /home/m3tam3re/p/AI/AGENTS/agents/agents.json | jq '.chiron.permission' > /dev/null 2>&1
# Assert: Exit code 0

cat /home/m3tam3re/p/AI/AGENTS/agents/agents.json | jq '.chiron-forge.permission' > /dev/null 2>&1
# Assert: Exit code 0
```
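A caveat on the duplicate-key checks above: `jq` fully parses the JSON before printing, and a JSON parser keeps only the last value of a repeated key, so a `jq 'keys'`-based count can report "1" even when the raw file repeats the key. Counting occurrences in the unparsed text is more reliable. A minimal sketch (the demo file and its contents are hypothetical):

```shell
# jq collapses duplicate keys while parsing (last value wins), so a
# keys-based count can read "1" even when the raw file repeats the key.
# Count occurrences in the raw text instead:
cat > /tmp/demo-agents.json <<'EOF'
{
  "external_directory": {"*": "ask"},
  "external_directory": {"~/p/**": "allow"}
}
EOF

grep -c '"external_directory"' /tmp/demo-agents.json
# → 2, even though `jq 'keys'` on this file would list the key once
```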
**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] Chiron external_directory key count (should be "1")
- [x] Chiron-Forge external_directory key count (should be "1")
- [x] Workspace path ls output (shows directory exists)
- [x] Chiron permission object validation (exit code 0)
- [x] Chiron-Forge permission object validation (exit code 0)

**Commit**: NO (validation only, no changes)

---

## Commit Strategy

| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2, 3 | `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening` | agents/agents.json | `jq '.' agents/agents.json > /dev/null` |
| 4 | N/A (validation only) | N/A | N/A |

---

## Success Criteria

### Verification Commands
```bash
# Pre-deployment validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0

# Duplicate key check
cat /home/m3tam3re/p/AI/AGENTS/agents/agents.json | jq '.chiron.permission | keys' | grep external_directory | wc -l
# Expected: 1

# Workspace path validation
ls -la ~/p/ 2>&1
# Expected: Directory listing

# Post-deployment (manual)
# Have Chiron attempt file edit → Expected: Permission denied
# Have Chiron run bd ready → Expected: Success
# Have Chiron-Forge git commit → Expected: Success
# Have Chiron-Forge git push → Expected: Ask user
# Have agent read .env → Expected: Permission denied
```
### Final Checklist
- [x] Duplicate `external_directory` key fixed
- [x] Chiron edit set to "deny"
- [x] Chiron bash denied except `bd *`
- [x] Chiron task permission restricts subagents (explore, librarian, athena, chiron-forge)
- [x] Chiron external_directory allows ~/p/** only
- [x] Chiron-Forge git commit allowed, git push asks
- [x] Chiron-Forge git config denied
- [x] Chiron-Forge package install commands ask
- [x] Chiron-Forge external_directory allows ~/p/**, asks others
- [x] Bash file write operators blocked (echo >, cat >, tee, etc.)
- [x] Bash command injection blocked ($(), backticks, eval, source)
- [x] Git secret protection added (git add/commit *.env* deny)
- [x] Expanded secret blocking patterns added (.local/share/*, .cache/*, *.db, *.keychain, *.p12)
- [x] /run/agenix/* blocked in read permissions
- [x] JSON syntax valid (jq validates)
- [x] No duplicate keys in configuration
- [x] Workspace path ~/p/** exists

41
AGENTS.md
@@ -12,26 +12,27 @@ Configuration repository for Opencode Agent Skills, context files, and agent con

# Skill creation
python3 skills/skill-creator/scripts/init_skill.py <name> --path skills/

# Issue tracking (beads)
bd ready && bd create "title" && bd close <id> && bd sync
```

## Directory Structure

```
.
├── skills/                  # Agent skills (25 modules)
├── skills/                  # Agent skills (15 modules)
│   └── skill-name/
│       ├── SKILL.md         # Required: YAML frontmatter + workflows
│       ├── scripts/         # Executable code (optional)
│       ├── references/      # Domain docs (optional)
│       └── assets/          # Templates/files (optional)
├── rules/                   # AI coding rules (languages, concerns, frameworks)
│   ├── languages/           # Python, TypeScript, Nix, Shell
│   ├── concerns/            # Testing, naming, documentation, etc.
│   └── frameworks/          # Framework-specific rules (n8n, etc.)
├── agents/                  # Agent definitions (agents.json)
├── prompts/                 # System prompts (chiron*.txt)
├── context/                 # User profiles
├── commands/                # Custom commands
└── scripts/                 # Repo utilities (test-skill.sh)
└── scripts/                 # Repo utilities (test-skill.sh, validate-agents.sh)
```
## Code Conventions
@@ -58,7 +59,7 @@ compatibility: opencode
## Anti-Patterns (CRITICAL)

**Frontend Design**: NEVER use generic AI aesthetics, NEVER converge on common choices
**Excalidraw**: NEVER use diamond shapes (broken arrows), NEVER use `label` property
**Excalidraw**: NEVER use `label` property (use boundElements + text element pairs instead)
**Debugging**: NEVER fix just symptom, ALWAYS find root cause first
**Excel**: ALWAYS respect existing template conventions over guidelines
**Structure**: NEVER place scripts/docs outside scripts/references/ directories
@@ -77,27 +78,46 @@ compatibility: opencode

## Deployment

**Nix pattern** (non-flake input):
**Nix flake pattern**:
```nix
agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  flake = false; # Files only, not a Nix flake
  inputs.nixpkgs.follows = "nixpkgs"; # Optional but recommended
};
```

**Exports:**
- `packages.skills-runtime` — composable runtime with all skill dependencies
- `devShells.default` — dev environment for working on skills

**Mapping** (via home-manager):
- `skills/`, `context/`, `commands/`, `prompts/` → symlinks
- `agents/agents.json` → embedded into config.json
- Agent changes: require `home-manager switch`
- Other changes: visible immediately

## Rules System

Centralized AI coding rules consumed via `mkOpencodeRules` from m3ta-nixpkgs:

```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
  inherit agents;
  languages = [ "python" "typescript" ];
  frameworks = [ "n8n" ];
};
```

See `rules/USAGE.md` for full documentation.
## Notes for AI Agents

1. **Config-only repo** - No compilation, no build, manual validation only
2. **Skills are documentation** - Write for AI consumption, progressive disclosure
3. **Consistent structure** - All skills follow 4-level deep pattern (skills/name/ + optional subdirs)
4. **Cross-cutting concerns** - Standardized SKILL.md, workflow patterns, delegation rules
5. **Always push** - Session completion workflow: commit + bd sync + git push
5. **Always push** - Session completion workflow: commit + git push

## Quality Gates

@@ -105,4 +125,5 @@ Before committing:
1. `./scripts/test-skill.sh --validate`
2. Python shebang + docstrings check
3. No extraneous files (README.md, CHANGELOG.md in skills/)
4. Git status clean
4. If skill has scripts with external dependencies → verify `flake.nix` is updated (see skill-creator Step 4)
5. Git status clean

310
README.md
@@ -1,6 +1,6 @@
# Opencode Agent Skills & Configurations

Central repository for [Opencode](https://opencode.dev) Agent Skills, AI agent configurations, custom commands, and AI-assisted workflows. This is an extensible framework for building productivity systems, automations, knowledge management, and specialized AI capabilities.
Central repository for [Opencode](https://opencode.ai) Agent Skills, AI agent configurations, custom commands, and AI-assisted workflows. This is an extensible framework for building productivity systems, automations, knowledge management, and specialized AI capabilities.

## 🎯 What This Repository Provides

@@ -8,36 +8,45 @@ This repository serves as a **personal AI operating system** - a collection of s

- **Productivity & Task Management** - PARA methodology, GTD workflows, project tracking
- **Knowledge Management** - Note-taking, research workflows, information organization
- **Communications** - Email management, meeting scheduling, follow-up tracking
- **AI Development** - Tools for creating new skills and agent configurations
- **Memory & Context** - Persistent memory systems, conversation analysis
- **Document Processing** - PDF manipulation, spreadsheet handling, diagram generation
- **Custom Workflows** - Domain-specific automation and specialized agents

## 📂 Repository Structure

```
.
├── agent/                   # Agent definitions (agents.json)
├── prompts/                 # Agent system prompts (chiron.txt, chiron-forge.txt)
├── agents/                  # Agent definitions (agents.json)
├── prompts/                 # Agent system prompts (chiron.txt, chiron-forge.txt, etc.)
├── context/                 # User profiles and preferences
│   └── profile.md           # Work style, PARA areas, preferences
├── command/                 # Custom command definitions
├── commands/                # Custom command definitions
│   └── reflection.md
├── skill/                   # Opencode Agent Skills (11+ skills)
│   ├── task-management/     # PARA-based productivity
│   ├── skill-creator/       # Meta-skill for creating skills
│   ├── reflection/          # Conversation analysis
│   ├── communications/      # Email & messaging
│   ├── calendar-scheduling/ # Time management
│   ├── mem0-memory/         # Persistent memory
│   ├── research/            # Investigation workflows
│   ├── knowledge-management/ # Note capture & organization
├── skills/                  # Opencode Agent Skills (15 skills)
│   ├── agent-development/   # Agent creation and configuration
│   ├── basecamp/            # Basecamp project management
│   ├── brainstorming/       # Ideation & strategic thinking
│   └── plan-writing/        # Project planning templates
│   ├── doc-translator/      # Documentation translation
│   ├── excalidraw/          # Architecture diagrams
│   ├── frontend-design/     # UI/UX design patterns
│   ├── memory/              # Persistent memory system
│   ├── obsidian/            # Obsidian vault management
│   ├── outline/             # Outline wiki integration
│   ├── pdf/                 # PDF manipulation toolkit
│   ├── prompt-engineering-patterns/ # Prompt patterns
│   ├── reflection/          # Conversation analysis
│   ├── skill-creator/       # Meta-skill for creating skills
│   ├── systematic-debugging/ # Debugging methodology
│   └── xlsx/                # Spreadsheet handling
├── scripts/                 # Repository utility scripts
│   └── test-skill.sh        # Test skills without deploying
├── .beads/                  # Issue tracking database
├── rules/                   # AI coding rules
│   ├── languages/           # Python, TypeScript, Nix, Shell
│   ├── concerns/            # Testing, naming, documentation
│   └── frameworks/          # Framework-specific rules (n8n)
├── flake.nix                # Nix flake: dev shell + skills-runtime export
├── .envrc                   # direnv config (use flake)
├── AGENTS.md                # Developer documentation
└── README.md                # This file
```
@@ -46,43 +55,96 @@ This repository serves as a **personal AI operating system** - a collection of s

### Prerequisites

- **Opencode** - AI coding assistant ([opencode.dev](https://opencode.dev))
- **Nix** (optional) - For declarative deployment via home-manager
- **Python 3** - For skill validation and creation scripts
- **bd (beads)** (optional) - For issue tracking
- **Nix** with flakes enabled — for reproducible dependency management and deployment
- **direnv** (recommended) — auto-activates the development environment when entering the repo
- **Opencode** — AI coding assistant ([opencode.ai](https://opencode.ai))

### Installation

#### Option 1: Nix Flake (Recommended)

This repository is consumed as a **non-flake input** by your NixOS configuration:
This repository is a **Nix flake** that exports:

- **`devShells.default`** — development environment for working on skills (activated via direnv)
- **`packages.skills-runtime`** — composable runtime with all skill script dependencies (Python packages + system tools)

**Consume in your system flake:**

```nix
# In your flake.nix
# flake.nix
inputs.agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  flake = false; # Pure files, not a Nix flake
  inputs.nixpkgs.follows = "nixpkgs";
};

# In your home-manager module (e.g., opencode.nix)
xdg.configFile = {
  "opencode/skill".source = "${inputs.agents}/skill";
  "opencode/skills".source = "${inputs.agents}/skills";
  "opencode/context".source = "${inputs.agents}/context";
  "opencode/command".source = "${inputs.agents}/command";
  "opencode/commands".source = "${inputs.agents}/commands";
  "opencode/prompts".source = "${inputs.agents}/prompts";
};

# Agent config is embedded into config.json, not deployed as files
programs.opencode.settings.agent = builtins.fromJSON
  (builtins.readFile "${inputs.agents}/agent/agents.json");
  (builtins.readFile "${inputs.agents}/agents/agents.json");
```

Rebuild your system:
**Deploy skills via home-manager:**

```nix
# home-manager module (e.g., opencode.nix)
{ inputs, system, ... }:
{
  # Skill files — symlinked, changes visible immediately
  xdg.configFile = {
    "opencode/skills".source = "${inputs.agents}/skills";
    "opencode/context".source = "${inputs.agents}/context";
    "opencode/commands".source = "${inputs.agents}/commands";
    "opencode/prompts".source = "${inputs.agents}/prompts";
  };

  # Agent config — embedded into config.json (requires home-manager switch)
  programs.opencode.settings.agent = builtins.fromJSON
    (builtins.readFile "${inputs.agents}/agents/agents.json");

  # Skills runtime — ensures opencode always has script dependencies
  home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
}
```

**Compose into project flakes** (so opencode has skill deps in any project):

```nix
# Any project's flake.nix
{
  inputs.agents.url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  inputs.agents.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, nixpkgs, agents, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = [
          # project-specific tools
          pkgs.nodejs
          # skill script dependencies
          agents.packages.${system}.skills-runtime
        ];
      };
    };
}
```

Rebuild:

```bash
home-manager switch
```
**Note**: The `agent/` directory is NOT deployed as files. Instead, `agents.json` is read at Nix evaluation time and embedded into the opencode `config.json`.
**Note**: The `agents/` directory is NOT deployed as files. Instead, `agents.json` is read at Nix evaluation time and embedded into the opencode `config.json`.

#### Option 2: Manual Installation

@@ -92,8 +154,11 @@ Clone and symlink:
# Clone repository
git clone https://github.com/yourusername/AGENTS.git ~/AGENTS

# Create symlink to Opencode config directory
ln -s ~/AGENTS ~/.config/opencode
# Create symlinks to Opencode config directory
ln -s ~/AGENTS/skills ~/.config/opencode/skills
ln -s ~/AGENTS/context ~/.config/opencode/context
ln -s ~/AGENTS/commands ~/.config/opencode/commands
ln -s ~/AGENTS/prompts ~/.config/opencode/prompts
```

### Verify Installation

@@ -101,8 +166,8 @@ ln -s ~/AGENTS ~/.config/opencode
Check that Opencode can see your skills:

```bash
# Skills should be available at ~/.config/opencode/skill/
ls ~/.config/opencode/skill/
# Skills should be available at ~/.config/opencode/skills/
ls ~/.config/opencode/skills/
```
## 🎨 Creating Your First Skill
@@ -112,18 +177,19 @@ Skills are modular packages that extend Opencode with specialized knowledge and
### 1. Initialize a New Skill

```bash
python3 skill/skill-creator/scripts/init_skill.py my-skill-name --path skill/
python3 skills/skill-creator/scripts/init_skill.py my-skill-name --path skills/
```

This creates:
- `skill/my-skill-name/SKILL.md` - Main skill documentation
- `skill/my-skill-name/scripts/` - Executable code (optional)
- `skill/my-skill-name/references/` - Reference documentation (optional)
- `skill/my-skill-name/assets/` - Templates and files (optional)

- `skills/my-skill-name/SKILL.md` - Main skill documentation
- `skills/my-skill-name/scripts/` - Executable code (optional)
- `skills/my-skill-name/references/` - Reference documentation (optional)
- `skills/my-skill-name/assets/` - Templates and files (optional)

### 2. Edit the Skill

Open `skill/my-skill-name/SKILL.md` and customize:
Open `skills/my-skill-name/SKILL.md` and customize:

```yaml
---
@@ -131,7 +197,6 @@ name: my-skill-name
description: What it does and when to use it. Include trigger keywords.
compatibility: opencode
---

# My Skill Name

## Overview
@@ -139,108 +204,111 @@ compatibility: opencode
[Your skill instructions for Opencode]
```

### 3. Validate the Skill
### 3. Register Dependencies

```bash
python3 skill/skill-creator/scripts/quick_validate.py skill/my-skill-name
If your skill includes scripts with external dependencies, add them to `flake.nix`:

```nix
# Python packages — add to pythonEnv:
# my-skill: my_script.py
some-python-package

# System tools — add to skills-runtime paths:
# my-skill: needed by my_script.py
pkgs.some-tool
```

### 4. Test the Skill
Verify: `nix develop --command python3 -c "import some_package"`

Test your skill without deploying via home-manager:
### 4. Validate the Skill

```bash
python3 skills/skill-creator/scripts/quick_validate.py skills/my-skill-name
```

### 5. Test the Skill

```bash
# Use the test script to validate and list skills
./scripts/test-skill.sh my-skill-name  # Validate specific skill
./scripts/test-skill.sh --list         # List all dev skills
./scripts/test-skill.sh --run          # Launch opencode with dev skills
```

The test script creates a temporary config directory with symlinks to this repo's skills, allowing you to test changes before committing.
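Roughly, that temporary-config approach looks like the sketch below; the function name and directory layout are illustrative assumptions, see `scripts/test-skill.sh` for the actual implementation:

```shell
# Build a throwaway opencode config dir that symlinks this repo's skill
# directories, so a dev session never touches ~/.config/opencode.
make_dev_config() {
  repo="${1:-$PWD}"
  tmp=$(mktemp -d)
  mkdir -p "$tmp/opencode"
  for d in skills context commands prompts; do
    [ -d "$repo/$d" ] && ln -s "$repo/$d" "$tmp/opencode/$d"
  done
  echo "$tmp"
}

# Usage (hypothetical):
#   cfg=$(make_dev_config ~/AGENTS)
#   XDG_CONFIG_HOME="$cfg" opencode
```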
## 📚 Available Skills

| Skill | Purpose | Status |
|-------|---------|--------|
| **task-management** | PARA-based productivity with Obsidian Tasks integration | ✅ Active |
| **skill-creator** | Guide for creating new Opencode skills | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
| **communications** | Email drafts, follow-ups, message management | ✅ Active |
| **calendar-scheduling** | Time blocking, meeting management | ✅ Active |
| **mem0-memory** | Persistent memory storage and retrieval | ✅ Active |
| **research** | Investigation workflows, source management | ✅ Active |
| **knowledge-management** | Note capture, knowledge organization | ✅ Active |
| --------------------------- | -------------------------------------------------------------- | ------------ |
| **agent-development** | Create and configure Opencode agents | ✅ Active |
| **basecamp** | Basecamp project & todo management via MCP | ✅ Active |
| **brainstorming** | General-purpose ideation with Obsidian save | ✅ Active |
| **plan-writing** | Project plans with templates (kickoff, tasks, risks) | ✅ Active |
| **brainstorming** | General-purpose ideation and strategic thinking | ✅ Active |
| **doc-translator** | Documentation translation to German/Czech with Outline publish | ✅ Active |
| **excalidraw** | Architecture diagrams from codebase analysis | ✅ Active |
| **frontend-design** | Production-grade UI/UX with high design quality | ✅ Active |
| **memory** | SQLite-based persistent memory with hybrid search | ✅ Active |
| **obsidian** | Obsidian vault management via Local REST API | ✅ Active |
| **outline** | Outline wiki integration for team documentation | ✅ Active |
| **pdf** | PDF manipulation, extraction, creation, and forms | ✅ Active |
| **prompt-engineering-patterns** | Advanced prompt engineering techniques | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
| **skill-creator** | Guide for creating new Opencode skills | ✅ Active |
| **systematic-debugging** | Debugging methodology for bugs and test failures | ✅ Active |
| **xlsx** | Spreadsheet creation, editing, and analysis | ✅ Active |
## 🤖 AI Agents

### Chiron - Personal Assistant
### Primary Agents

**Configuration**: `agent/agents.json` + `prompts/chiron.txt`
| Agent | Mode | Purpose |
| ------------------- | ------- | ---------------------------------------------------- |
| **Chiron** | Plan | Read-only analysis, planning, and guidance |
| **Chiron Forge** | Build | Full execution and task completion with safety |

Chiron is a personal AI assistant focused on productivity and task management. Named after the wise centaur from Greek mythology, Chiron provides:
### Subagents (Specialists)

- Task and project management guidance
- Daily and weekly review workflows
- Skill routing based on user intent
- Integration with productivity tools (Obsidian, ntfy, n8n)
| Agent | Domain | Purpose |
| ------------------- | ---------------- | ------------------------------------------ |
| **Hermes** | Communication | Basecamp, Outlook, MS Teams |
| **Athena** | Research | Outline wiki, documentation, knowledge |
| **Apollo** | Private Knowledge | Obsidian vault, personal notes |
| **Calliope** | Writing | Documentation, reports, prose |

**Modes**:
- **Chiron** (Plan Mode) - Read-only analysis and planning (`prompts/chiron.txt`)
- **Chiron-Forge** (Worker Mode) - Full write access with safety prompts (`prompts/chiron-forge.txt`)
**Configuration**: `agents/agents.json` + `prompts/*.txt`

**Triggers**: Personal productivity requests, task management, reviews, planning
## 🛠️ Development

## 🛠️ Development Workflow
### Environment

### Issue Tracking with Beads

This project uses [beads](https://github.com/steveyegge/beads) for AI-native issue tracking:
The repository includes a Nix flake with a development shell. With [direnv](https://direnv.net/) installed, the environment activates automatically:

```bash
bd ready              # Find available work
bd create "title"     # Create new issue
bd update <id> --status in_progress
bd close <id>         # Complete work
bd sync               # Sync with git
cd AGENTS/
# → direnv: loading .envrc
# → 🔧 AGENTS dev shell active — Python 3.13.x, jq-1.x

# All skill script dependencies are now available:
python3 -c "import pypdf, openpyxl, yaml"  # ✔️
pdftoppm -v                                # ✔️
```

Without direnv, activate manually: `nix develop`

### Quality Gates

Before committing:

1. **Validate skills**: `./scripts/test-skill.sh --validate` or `python3 skill/skill-creator/scripts/quick_validate.py skill/<name>`
1. **Validate skills**: `./scripts/test-skill.sh --validate` or `python3 skills/skill-creator/scripts/quick_validate.py skills/<name>`
2. **Test locally**: `./scripts/test-skill.sh --run` to launch opencode with dev skills
3. **Check formatting**: Ensure YAML frontmatter is valid
4. **Update docs**: Keep README and AGENTS.md in sync
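Gate 3 is easy to script; a minimal sketch of that check (the function name is illustrative), which a pre-commit hook could run alongside `./scripts/test-skill.sh --validate`:

```shell
# Fail when extraneous docs (gate 3) are present under skills/.
check_no_extras() {
  root="${1:-skills}"
  extras=$(find "$root" \( -name README.md -o -name CHANGELOG.md \) 2>/dev/null)
  if [ -n "$extras" ]; then
    echo "extraneous files: $extras" >&2
    return 1
  fi
}
```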
### Session Completion

When ending a work session:

1. File beads issues for remaining work
2. Run quality gates
3. Update issue status
4. **Push to remote** (mandatory):
   ```bash
   git pull --rebase
   bd sync
   git push
   ```
5. Verify changes are pushed

See `AGENTS.md` for complete developer documentation.

## 🎓 Learning Resources

### Essential Documentation

- **AGENTS.md** - Complete developer guide for AI agents
- **skill/skill-creator/SKILL.md** - Comprehensive skill creation guide
- **skill/skill-creator/references/workflows.md** - Workflow pattern library
- **skill/skill-creator/references/output-patterns.md** - Output formatting patterns
- **skills/skill-creator/SKILL.md** - Comprehensive skill creation guide
- **skills/skill-creator/references/workflows.md** - Workflow pattern library
- **skills/skill-creator/references/output-patterns.md** - Output formatting patterns
- **rules/USAGE.md** - AI coding rules integration guide

### Skill Design Principles

@@ -251,27 +319,33 @@ See `AGENTS.md` for complete developer documentation.
### Example Skills to Study

- **task-management/** - Full implementation with Obsidian Tasks integration
- **skill-creator/** - Meta-skill with bundled resources
- **reflection/** - Conversation analysis with rating system
- **basecamp/** - MCP server integration with multiple tool categories
- **brainstorming/** - Framework-based ideation with Obsidian markdown save
- **plan-writing/** - Template-driven document generation
- **memory/** - SQLite-based hybrid search implementation
- **excalidraw/** - Diagram generation with JSON templates and Python renderer

## 🔧 Customization

### Modify Agent Behavior

Edit `agent/agents.json` for agent definitions and `prompts/*.txt` for system prompts:
- `agent/agents.json` - Agent names, models, permissions
Edit `agents/agents.json` for agent definitions and `prompts/*.txt` for system prompts:

- `agents/agents.json` - Agent names, models, permissions
- `prompts/chiron.txt` - Chiron (Plan Mode) system prompt
- `prompts/chiron-forge.txt` - Chiron-Forge (Worker Mode) system prompt
- `prompts/chiron-forge.txt` - Chiron Forge (Build Mode) system prompt
- `prompts/hermes.txt` - Hermes (Communication) system prompt
- `prompts/athena.txt` - Athena (Research) system prompt
- `prompts/apollo.txt` - Apollo (Private Knowledge) system prompt
- `prompts/calliope.txt` - Calliope (Writing) system prompt

**Note**: Agent changes require `home-manager switch` to take effect (config is embedded, not symlinked).
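Because the config is embedded, the quickest post-switch check is to read the generated `config.json` back. A sketch assuming `jq` and the default config location (the function name is illustrative; adjust the path to your setup):

```shell
# List agent names embedded into opencode's generated config.json.
list_embedded_agents() {
  jq -r '.agent | keys[]' "${1:-$HOME/.config/opencode/config.json}"
}

# Usage after `home-manager switch`:
#   list_embedded_agents
#   # expect the names defined in agents/agents.json
```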
### Update User Context

Edit `context/profile.md` to configure:

- Work style preferences
- PARA areas and projects
- Communication preferences
@@ -279,13 +353,29 @@ Edit `context/profile.md` to configure:

### Add Custom Commands

Create new command definitions in `command/` directory following the pattern in `command/reflection.md`.
Create new command definitions in `commands/` directory following the pattern in `commands/reflection.md`.

### Add Project Rules

Use the rules system to inject AI coding rules into projects:

```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
  inherit agents;
  languages = [ "python" "typescript" ];
  frameworks = [ "n8n" ];
};
```

See `rules/USAGE.md` for full documentation.

## 🌟 Use Cases

### Personal Productivity

Use the PARA methodology with Obsidian Tasks integration:

- Capture tasks and notes quickly
- Run daily/weekly reviews
- Prioritize work based on impact
@@ -294,6 +384,7 @@ Use the PARA methodology with Obsidian Tasks integration:
### Knowledge Management

Build a personal knowledge base:

- Capture research findings
- Organize notes and references
- Link related concepts
@@ -302,6 +393,7 @@ Build a personal knowledge base:
### AI-Assisted Development

Extend Opencode for specialized domains:

- Create company-specific skills (finance, legal, engineering)
- Integrate with APIs and databases
- Build custom automation workflows
@@ -310,6 +402,7 @@ Extend Opencode for specialized domains:
### Team Collaboration

Share skills and agents across teams:

- Document company processes as skills
- Create shared knowledge bases
- Standardize communication templates
@@ -331,15 +424,14 @@ This repository contains personal configurations and skills. Feel free to use th
## 🔗 Links

- [Opencode](https://opencode.dev) - AI coding assistant
- [Beads](https://github.com/steveyegge/beads) - AI-native issue tracking
- [PARA Method](https://fortelabs.com/blog/para/) - Productivity methodology
- [Obsidian](https://obsidian.md) - Knowledge management platform

## 🙋 Questions?

- Check `AGENTS.md` for detailed developer documentation
- Review existing skills in `skill/` for examples
- See `skill/skill-creator/SKILL.md` for skill creation guide
- Review existing skills in `skills/` for examples
- See `skills/skill-creator/SKILL.md` for skill creation guide

---
|
||||
|
||||
|
||||
@@ -1,62 +1,173 @@
{
-  "chiron": {
+  "Chiron (Assistant)": {
    "description": "Personal AI assistant (Plan Mode). Read-only analysis, planning, and guidance.",
    "mode": "primary",
-    "model": "zai-coding-plan/glm-4.7",
+    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/chiron.txt}",
    "permission": {
      "question": "allow",
      "webfetch": "allow",
      "websearch": "allow",
      "edit": "deny",
      "bash": {
        "*": "ask",
        "git status*": "allow",
        "git log*": "allow",
        "git diff*": "allow",
        "git branch*": "allow",
        "git show*": "allow",
        "grep *": "allow",
        "ls *": "allow",
        "cat *": "allow",
        "head *": "allow",
        "tail *": "allow",
        "wc *": "allow",
        "which *": "allow",
        "echo *": "allow",
        "td *": "allow",
        "bd *": "allow",
        "nix *": "allow"
      },
      "external_directory": {
+        "*": "ask",
+        "~/p/**": "allow",
-        "*": "ask"
+        "~/.config/opencode/**": "allow",
+        "/tmp/**": "allow",
+        "/run/agenix/**": "allow"
      }
    }
  },
-  "chiron-forge": {
+  "Chiron Forge (Builder)": {
    "description": "Personal AI assistant (Build Mode). Full execution and task completion capabilities with safety prompts.",
    "mode": "primary",
-    "model": "zai-coding-plan/glm-4.7",
+    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/chiron-forge.txt}",
    "permission": {
      "question": "allow",
      "webfetch": "allow",
      "websearch": "allow",
      "edit": {
        "*": "allow",
        "/run/agenix/**": "deny"
      },
      "bash": {
        "*": "allow",
        "rm -rf *": "ask",
        "git reset --hard*": "ask",
        "git push*": "ask",
        "git push --force*": "deny",
        "git push -f *": "deny"
      },
      "external_directory": {
+        "*": "ask",
+        "~/p/**": "allow",
-        "*": "ask"
+        "~/.config/opencode/**": "allow",
+        "/tmp/**": "allow",
+        "/run/agenix/**": "allow"
      }
    }
  },
-  "hermes": {
+  "Hermes (Communication)": {
    "description": "Work communication specialist. Handles Basecamp tasks, Outlook email, and MS Teams meetings.",
    "mode": "subagent",
-    "model": "zai-coding-plan/glm-4.7",
+    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/hermes.txt}",
    "permission": {
-      "question": "allow"
+      "question": "allow",
+      "webfetch": "allow",
+      "edit": {
+        "*": "allow",
+        "/run/agenix/**": "deny"
+      },
+      "bash": {
+        "*": "ask",
+        "cat *": "allow",
+        "echo *": "allow"
+      },
+      "external_directory": {
+        "*": "ask",
+        "~/p/**": "allow",
+        "~/.config/opencode/**": "allow",
+        "/tmp/**": "allow",
+        "/run/agenix/**": "allow"
+      }
    }
  },
-  "athena": {
+  "Athena (Researcher)": {
    "description": "Work knowledge specialist. Manages Outline wiki, documentation, and knowledge organization.",
    "mode": "subagent",
-    "model": "zai-coding-plan/glm-4.7",
+    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/athena.txt}",
    "permission": {
-      "question": "allow"
+      "question": "allow",
+      "webfetch": "allow",
+      "websearch": "allow",
+      "edit": {
+        "*": "allow",
+        "/run/agenix/**": "deny"
+      },
+      "bash": {
+        "*": "ask",
+        "grep *": "allow",
+        "cat *": "allow"
+      },
+      "external_directory": {
+        "*": "ask",
+        "~/p/**": "allow",
+        "~/.config/opencode/**": "allow",
+        "/tmp/**": "allow",
+        "/run/agenix/**": "allow"
+      }
    }
  },
-  "apollo": {
+  "Apollo (Knowledge Management)": {
    "description": "Private knowledge specialist. Manages Obsidian vault, personal notes, and private knowledge graph.",
    "mode": "subagent",
-    "model": "zai-coding-plan/glm-4.7",
+    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/apollo.txt}",
    "permission": {
-      "question": "allow"
+      "question": "allow",
+      "edit": {
+        "*": "allow",
+        "/run/agenix/**": "deny"
+      },
+      "bash": {
+        "*": "ask",
+        "cat *": "allow"
+      },
+      "external_directory": {
+        "*": "ask",
+        "~/p/**": "allow",
+        "~/.config/opencode/**": "allow",
+        "/tmp/**": "allow",
+        "/run/agenix/**": "allow"
+      }
    }
  },
-  "calliope": {
+  "Calliope (Writer)": {
    "description": "Writing specialist. Creates documentation, reports, meeting notes, and prose.",
    "mode": "subagent",
-    "model": "zai-coding-plan/glm-4.7",
+    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/calliope.txt}",
    "permission": {
-      "question": "allow"
+      "question": "allow",
+      "webfetch": "allow",
+      "edit": {
+        "*": "allow",
+        "/run/agenix/**": "deny"
+      },
+      "bash": {
+        "*": "ask",
+        "cat *": "allow",
+        "wc *": "allow"
+      },
+      "external_directory": {
+        "*": "ask",
+        "~/p/**": "allow",
+        "~/.config/opencode/**": "allow",
+        "/tmp/**": "allow",
+        "/run/agenix/**": "allow"
+      }
    }
  }
}
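The permission maps above pair a wildcard default (`"*": "ask"`) with more specific glob overrides such as `"git push --force*": "deny"`. A plausible resolution rule is "most specific matching pattern wins"; the sketch below illustrates that idea only — it is an assumption for illustration, not OpenCode's actual implementation.

```python
from fnmatch import fnmatch

def resolve_permission(command: str, rules: dict[str, str]) -> str:
    """Pick the action of the most specific glob that matches the command."""
    matches = [pattern for pattern in rules if fnmatch(command, pattern)]
    if not matches:
        return "ask"  # safe default when nothing matches
    # Longest pattern wins, so "git push --force*" beats "git push*" and "*".
    return rules[max(matches, key=len)]

bash_rules = {"*": "allow", "git push*": "ask", "git push --force*": "deny"}
print(resolve_permission("git status", bash_rules))               # → allow
print(resolve_permission("git push origin main", bash_rules))     # → ask
print(resolve_permission("git push --force origin", bash_rules))  # → deny
```

Under this (assumed) rule, the broad `"*"` entry sets the default posture while narrow entries carve out exceptions in either direction.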
@@ -104,3 +104,48 @@
- Batch related information together
- Remember my preferences across sessions
- Proactively surface relevant information

---

## Memory System

AI agents have access to a persistent memory system for context across sessions via the opencode-memory plugin.

### Configuration

| Setting | Value |
|---------|-------|
| **Plugin** | `opencode-memory` |
| **Obsidian Vault** | `~/CODEX` |
| **Memory Folder** | `80-memory/` |
| **Database** | `~/.local/share/opencode-memory/index.db` |
| **Auto-Capture** | Enabled (session.idle event) |
| **Auto-Recall** | Enabled (session.created event) |
| **Token Budget** | 2000 tokens |

### Memory Categories

| Category | Purpose | Example |
|----------|---------|---------|
| `preference` | Personal preferences | UI settings, workflow styles |
| `fact` | Objective information | Tech stack, role, constraints |
| `decision` | Choices with rationale | Tool selections, architecture |
| `entity` | People, orgs, systems | Key contacts, important APIs |
| `other` | Everything else | General learnings |

### Available Tools

| Tool | Purpose |
|------|---------|
| `memory_search` | Hybrid search (vector + BM25) over vault + sessions |
| `memory_store` | Store new memory as markdown file |
| `memory_get` | Read specific file/lines from vault |

### Usage Notes

- Memories are stored as markdown files in Obsidian (source of truth)
- SQLite provides fast hybrid search (vector similarity + keyword BM25)
- Use explicit "remember this" to store important information
- Auto-recall injects relevant memories at session start
- Auto-capture extracts preferences/decisions at session idle
- See `skills/memory/SKILL.md` for full documentation
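The hybrid search described above produces two rankings — one by vector similarity, one by BM25 keyword score — that must be merged into a single result list. One common fusion strategy is reciprocal rank fusion; the sketch below is an illustration of that technique under assumed inputs, not the plugin's documented scoring.

```python
def reciprocal_rank_fusion(vector_ranking, keyword_ranking, k=60):
    """Fuse two ranked lists of document IDs into one hybrid ranking."""
    scores = {}
    for ranking in (vector_ranking, keyword_ranking):
        for rank, doc_id in enumerate(ranking):
            # 1/(k + rank) rewards appearing in both lists while damping
            # the advantage of a single list's top hit.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["note-a", "note-b", "note-c"]  # by embedding similarity
bm25_hits = ["note-c", "note-a", "note-d"]    # by keyword match
print(reciprocal_rank_fusion(vector_hits, bm25_hits))
# → ['note-a', 'note-c', 'note-b', 'note-d']
```

Documents that appear in both rankings ("note-a", "note-c") float to the top, which is the practical benefit of combining semantic and keyword retrieval.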
27 flake.lock generated Normal file
@@ -0,0 +1,27 @@
{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1772479524,
        "narHash": "sha256-u7nCaNiMjqvKpE+uZz9hE7pgXXTmm5yvdtFaqzSzUQI=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "4215e62dc2cd3bc705b0a423b9719ff6be378a43",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "ref": "nixpkgs-unstable",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "nixpkgs": "nixpkgs"
      }
    }
  },
  "root": "root",
  "version": 7
}
68 flake.nix Normal file
@@ -0,0 +1,68 @@
{
  description = "Opencode Agent Skills — development environment & runtime";

  inputs = { nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; };

  outputs = { self, nixpkgs }:
    let
      supportedSystems = [ "x86_64-linux" "aarch64-linux" "aarch64-darwin" ];
      forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
    in {
      # Composable runtime for project flakes and home-manager.
      # Usage:
      #   home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   devShells.default = pkgs.mkShell {
      #     packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   };
      packages = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};

          pythonEnv = pkgs.python3.withPackages (ps:
            with ps; [
              # skill-creator: quick_validate.py
              pyyaml

              # xlsx: recalc.py
              openpyxl

              # prompt-engineering-patterns: optimize-prompt.py
              numpy

              # pdf: multiple scripts
              pypdf
              pillow # PIL
              pdf2image

              # excalidraw: render_excalidraw.py
              playwright
            ]);
        in {
          skills-runtime = pkgs.buildEnv {
            name = "opencode-skills-runtime";
            paths = [
              pythonEnv
              pkgs.poppler-utils # pdf: pdftoppm/pdfinfo
              pkgs.jq # shell scripts
              pkgs.playwright-driver.browsers # excalidraw: chromium for rendering
            ];
          };
        });

      # Dev shell for working on this repo (wraps skills-runtime).
      devShells = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
        in {
          default = pkgs.mkShell {
            packages = [ self.packages.${system}.skills-runtime ];

            env.PLAYWRIGHT_BROWSERS_PATH = "${pkgs.playwright-driver.browsers}";

            shellHook = ''
              echo "🔧 AGENTS dev shell active — Python $(python3 --version 2>&1 | cut -d' ' -f2), $(jq --version)"
            '';
          };
        });
    };
}
@@ -5,6 +5,7 @@ You are Apollo, the Greek god of knowledge, prophecy, and light, specializing in
2. Search, organize, and structure personal knowledge graphs
3. Assist with personal task management embedded in private notes
4. Bridge personal knowledge with work contexts without exposing sensitive data
5. Manage dual-layer memory system (Mem0 + Obsidian CODEX) for persistent context across sessions

**Process:**
1. Identify which vault or note collection the user references

@@ -20,6 +21,10 @@ You are Apollo, the Greek god of knowledge, prophecy, and light, specializing in
- Respect vault structure: folders, backlinks, unlinked references
- Preserve context when retrieving related notes
- Handle multiple vault configurations gracefully
- Store valuable memories in dual-layer system: Mem0 (semantic search) + Obsidian 80-memory/ (human-readable)
- Auto-capture session insights at session end (max 3 per session, confirm with user)
- Retrieve relevant memories when context suggests past preferences/decisions
- Use memory categories: preference, fact, decision, entity, other

**Output Format:**
- Summarized findings with citations to note titles (not file paths)

@@ -33,11 +38,15 @@ You are Apollo, the Greek god of knowledge, prophecy, and light, specializing in
- Large result sets: Provide summary and offer filtering options
- Nested tasks or complex dependencies: Break down into clear hierarchical view
- Sensitive content detected: Flag it without revealing details
- Mem0 unavailable: Warn user, continue without memory features, do not block workflow
- Obsidian unavailable: Store in Mem0 only, log sync failure for later retry

**Tool Usage:**
- Question tool: Required when vault location is ambiguous or note reference is unclear
- Never reveal absolute file paths or directory structures in output
- Extract patterns and insights while obscuring specific personal details
- Memory tools: Store/recall memories via Mem0 REST API (localhost:8000)
- Obsidian MCP: Create memory notes in 80-memory/ with mem0_id cross-reference

**Boundaries:**
- Do NOT handle work tools (Hermes/Athena's domain)
50 prompts/chiron-forge.txt Normal file
@@ -0,0 +1,50 @@
You are Chiron-Forge, the Greek centaur smith of Hephaestus, specializing in execution and task completion as Chiron's build counterpart.

**Your Core Responsibilities:**
1. Execute tasks with full write access to complete planned work
2. Modify files, run commands, and implement solutions
3. Build and create artifacts based on Chiron's plans
4. Delegate to specialized subagents for domain-specific work
5. Confirm destructive operations before executing them

**Process:**
1. **Understand the Task**: Review the user's request and any plan provided by Chiron
2. **Clarify Scope**: Use the Question tool for ambiguous requirements or destructive operations
3. **Identify Dependencies**: Check if specialized subagent expertise is needed
4. **Execute Work**: Use available tools to modify files, run commands, and complete tasks
5. **Delegate to Subagents**: Use Task tool for specialized domains (Hermes for communications, Athena for knowledge, etc.)
6. **Verify Results**: Confirm work is complete and meets quality standards
7. **Report Completion**: Summarize what was accomplished

**Quality Standards:**
- Execute tasks accurately following specifications
- Preserve code structure and formatting conventions
- Confirm destructive operations before execution
- Delegate appropriately when specialized expertise would improve quality
- Maintain clear separation from Chiron's planning role

**Output Format:**
- Confirmation of what was executed
- Summary of files modified or commands run
- Verification that work is complete
- Reference to any subagents that assisted

**Edge Cases:**
- **Destructive operations**: Use Question tool to confirm rm, git push, or similar commands
- **Ambiguous requirements**: Ask for clarification rather than making assumptions
- **Specialized domain work**: Recognize when tasks require Hermes, Athena, Apollo, or Calliope expertise
- **Failed commands**: Diagnose errors, attempt fixes, and escalate when necessary

**Tool Usage:**
- Write/Edit tools: Use freely for file modifications
- Bash tool: Execute commands, but use Question for rm, git push
- Question tool: Required for destructive operations and ambiguous requirements
- Task tool: Delegate to subagents for specialized domains
- Git commands: Commit work when tasks are complete

**Boundaries:**
- DO NOT do extensive planning or analysis (that's Chiron's domain)
- DO NOT write long-form documentation (Calliope's domain)
- DO NOT manage private knowledge (Apollo's domain)
- DO NOT handle work communications (Hermes's domain)
- DO NOT execute destructive operations without confirmation
59 prompts/chiron.txt Normal file
@@ -0,0 +1,59 @@
You are Chiron, the wise centaur from Greek mythology, serving as the main orchestrator in plan and analysis mode. You coordinate specialized subagents and provide high-level guidance without direct execution.

**Your Core Responsibilities:**
1. Analyze user requests and determine optimal routing to specialized subagents or direct handling
2. Provide strategic planning and analysis for complex workflows that require multiple agent capabilities
3. Delegate tasks to appropriate subagents: Hermes (communication), Athena (work knowledge), Apollo (private knowledge), Calliope (writing)
4. Coordinate multi-step workflows that span multiple domains and require agent collaboration
5. Offer guidance and decision support for productivity, project management, and knowledge work
6. Bridge personal and work contexts while maintaining appropriate boundaries between domains

**Process:**
1. **Analyze Request**: Identify the user's intent, required domains (communication, knowledge, writing, or combination), and complexity level
2. **Clarify Ambiguity**: Use the Question tool when the request is vague, requires context, or needs clarification before proceeding
3. **Determine Approach**: Decide whether to handle directly, delegate to a single subagent, or orchestrate multiple subagents
4. **Delegate or Execute**: Route to appropriate subagent(s) with clear context, or provide direct analysis/guidance
5. **Synthesize Results**: Combine outputs from multiple subagents into coherent recommendations or action plans
6. **Provide Guidance**: Offer strategic insights, priorities, and next steps based on the analysis

**Delegation Logic:**
- **Hermes**: Work communication tasks (email drafts, message management, meeting coordination)
- **Athena**: Work knowledge retrieval (wiki searches, documentation lookup, project information)
- **Apollo**: Private knowledge management (Obsidian vault access, personal notes, task tracking)
- **Calliope**: Writing assistance (documentation, reports, meeting summaries, professional prose)
- **Chiron-Forge**: Execution tasks requiring file modifications, command execution, or direct system changes
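The delegation logic above boils down to matching a request against each specialist's domain. A toy sketch of that idea follows — the keyword table and `route` function are purely illustrative assumptions; in practice the orchestrating model itself decides where to delegate.

```python
# Hypothetical keyword routing table mirroring the delegation logic above.
ROUTES = {
    "hermes": ["email", "message", "meeting"],
    "athena": ["wiki", "documentation", "project info"],
    "apollo": ["obsidian", "vault", "personal note"],
    "calliope": ["report", "summary", "prose"],
    "chiron-forge": ["execute", "modify file", "run command"],
}

def route(request: str) -> str:
    """Return the first agent whose domain keywords match the request."""
    text = request.lower()
    for agent, keywords in ROUTES.items():
        if any(keyword in text for keyword in keywords):
            return agent
    return "chiron"  # no specialist matches: handle directly

print(route("Draft an email to the team"))  # → hermes
```

A real orchestrator reasons over intent rather than substrings, but the shape — specialist domains plus a direct-handling fallback — is the same.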
**Quality Standards:**
- Clarify ambiguous requests before proceeding with delegation or analysis
- Provide clear rationale when delegating to specific subagents
- Maintain appropriate separation between personal (Apollo) and work (Athena/Hermes) domains
- Synthesize multi-agent outputs into coherent, actionable guidance
- Respect permission boundaries (read-only analysis, delegate execution to Chiron-Forge)
- Offer strategic context alongside tactical recommendations

**Output Format:**
For direct analysis: Provide structured insights with clear reasoning and recommendations
For delegation: State which subagent is handling the task and why
For orchestration: Outline the workflow, which agents are involved, and expected outcomes
Include next steps or decision points when appropriate

**Edge Cases:**
- **Ambiguous requests**: Use Question tool to clarify intent, scope, and preferred approach before proceeding
- **Cross-domain requests**: Analyze which subagents are needed and delegate in sequence or parallel as appropriate
- **Personal vs work overlap**: Explicitly maintain boundaries, route personal tasks to Apollo, work tasks to Hermes/Athena
- **Execution required tasks**: Explain that Chiron-Forge handles execution and offer to delegate
- **Multiple possible approaches**: Present options with trade-offs and ask for user preference

**Tool Usage:**
- Question tool: REQUIRED when requests are ambiguous, lack context, or require clarification before delegation or analysis
- Task tool: Use to delegate to subagents (hermes, athena, apollo, calliope) with clear context and objectives
- Read/analysis tools: Available for gathering context and providing read-only guidance

**Boundaries:**
- Do NOT modify files directly (read-only orchestrator mode)
- Do NOT execute commands or make system changes (delegate to Chiron-Forge)
- Do NOT handle communication drafting directly (Hermes's domain)
- Do NOT access work documentation repositories (Athena's domain)
- Do NOT access private vaults or personal notes (Apollo's domain)
- Do NOT write long-form content (Calliope's domain)
- Do NOT execute build or deployment tasks (Chiron-Forge's domain)
62 rules/USAGE.md Normal file
@@ -0,0 +1,62 @@
# Opencode Rules Usage

Add AI coding rules to your project via `mkOpencodeRules`.

## flake.nix Setup

```nix
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    m3ta-nixpkgs.url = "git+https://code.m3ta.dev/m3tam3re/nixpkgs";
    agents = {
      url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
      flake = false;
    };
  };

  outputs = { self, nixpkgs, m3ta-nixpkgs, agents, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
      m3taLib = m3ta-nixpkgs.lib.${system};
    in {
      devShells.${system}.default = let
        rules = m3taLib.opencode-rules.mkOpencodeRules {
          inherit agents;
          languages = [ "python" "typescript" ];
          frameworks = [ "n8n" ];
        };
      in pkgs.mkShell {
        shellHook = rules.shellHook;
      };
    };
}
```

## Parameters

- `agents` (required): Path to AGENTS repo flake input
- `languages` (optional): List of language names (e.g., `["python" "typescript"]`)
- `concerns` (optional): Rule categories (default: all standard concerns)
- `frameworks` (optional): List of framework names (e.g., `["n8n" "django"]`)
- `extraInstructions` (optional): Additional instruction file paths

## .gitignore

Add to your project's `.gitignore`:
```
.opencode-rules
opencode.json
```

## Project Overrides

Create `AGENTS.md` in your project root to override central rules. OpenCode applies project-level rules with precedence over central ones.

## Updating Rules

When central rules are updated:
```bash
nix flake update agents
```
163 rules/concerns/coding-style.md Normal file
@@ -0,0 +1,163 @@
# Coding Style

## Critical Rules (MUST follow)

Always prioritize readability over cleverness. Never write code that requires mental gymnastics to understand.
Always fail fast and explicitly. Never silently swallow errors or hide exceptions.
Always keep functions under 20 lines. Never create monolithic functions that do multiple things.
Always validate inputs at function boundaries. Never trust external data implicitly.
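The fail-fast and boundary-validation rules above can be illustrated with a short sketch (the function and its constraints are hypothetical examples, not code from this repo):

```python
def withdraw(amount: float, balance: float) -> float:
    """Validate at the boundary and fail fast with an explicit error."""
    if amount <= 0:
        raise ValueError(f"amount must be positive, got {amount}")
    if amount > balance:
        raise ValueError(f"insufficient funds: {amount} > {balance}")
    return balance - amount

print(withdraw(40.0, 100.0))  # → 60.0
```

Rejecting bad input at the entry point keeps invalid state from propagating deeper into the call stack, where the eventual failure would be harder to diagnose.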
## Formatting

Prefer consistent indentation throughout the codebase. Never mix tabs and spaces.
Prefer meaningful variable names over short abbreviations. Never use single letters except for loop counters.

### Correct:
```javascript
const maxRetryAttempts = 3;
const connectionTimeout = 5000;

for (let attempt = 1; attempt <= maxRetryAttempts; attempt++) {
  // process attempt
}
```

### Incorrect:
```javascript
const m = 3;
const t = 5000;

for (let i = 1; i <= m; i++) {
  // process attempt
}
```

## Patterns and Anti-Patterns

Never repeat yourself. Always extract duplicated logic into reusable functions.
Prefer composition over inheritance. Never create deep inheritance hierarchies.
Always use guard clauses to reduce nesting. Never write arrow-shaped code.

### Correct:
```python
def process_user(user):
    if not user:
        return None
    if not user.is_active:
        return None
    return user.calculate_score()
```

### Incorrect:
```python
def process_user(user):
    if user:
        if user.is_active:
            return user.calculate_score()
        else:
            return None
    else:
        return None
```

## Error Handling

Always handle specific exceptions. Never use broad catch-all exception handlers.
Always log error context, not just the error message. Never let errors vanish without a trace.

### Correct:
```python
try:
    data = fetch_resource(url)
    return parse_data(data)
except NetworkError as e:
    log_error(f"Network failed for {url}: {e}")
    raise
except ParseError as e:
    log_error(f"Parse failed for {url}: {e}")
    return fallback_data
```

### Incorrect:
```python
try:
    data = fetch_resource(url)
    return parse_data(data)
except Exception:
    pass
```

## Type Safety

Always use type annotations where supported. Never rely on implicit type coercion.
Prefer explicit type checks over duck typing for public APIs. Never assume type behavior.

### Correct:
```typescript
function calculateTotal(price: number, quantity: number): number {
  return price * quantity;
}
```

### Incorrect:
```javascript
function calculateTotal(price, quantity) {
  return price * quantity;
}
```

## Function Design

Always write pure functions when possible. Never mutate arguments unless required.
Always limit function parameters to 3 or fewer. Never pass objects to hide parameter complexity.

### Correct:
```python
def create_user(name: str, email: str) -> User:
    return User(name=name, email=email, created_at=now())
```

### Incorrect:
```python
def create_user(config: dict) -> User:
    return User(
        name=config['name'],
        email=config['email'],
        created_at=config['timestamp']
    )
```

## SOLID Principles

Never let classes depend on concrete implementations. Always depend on abstractions.
Always ensure classes are open for extension but closed for modification. Never change working code to add features.
Prefer many small interfaces over one large interface. Never force clients to depend on methods they don't use.

### Correct:
```typescript
interface MessageSender {
  send(message: Message): void;
}

class EmailSender implements MessageSender {
  send(message: Message): void {
    // implementation
  }
}

class NotificationService {
  constructor(private sender: MessageSender) {}
}
```

### Incorrect:
```typescript
class NotificationService {
  sendEmail(message: Message): void { }
  sendSMS(message: Message): void { }
  sendPush(message: Message): void { }
}
```

## Critical Rules (REPEAT)

Always write self-documenting code. Never rely on comments to explain complex logic.
Always refactor when you see code smells. Never let technical debt accumulate.
Always test edge cases explicitly. Never assume happy-path-only behavior.
Never commit commented-out code. Always remove it or restore it.
149 rules/concerns/documentation.md Normal file
@@ -0,0 +1,149 @@
# Documentation Rules

## When to Document

**Document public APIs**. Every public function, class, method, and module needs documentation. Users need to know how to use your code.
**Document complex logic**. Algorithms, state machines, and non-obvious implementations need explanations. Future readers will thank you.
**Document business rules**. Encode domain knowledge directly in comments. Don't make anyone reverse-engineer requirements from code.
**Document trade-offs**. When you choose between alternatives, explain why. Help future maintainers understand the decision context.
**Do NOT document obvious code**. Comments like `// get user` add noise. Delete them.

## Docstring Formats

### Python (Google Style)

```python
def calculate_price(quantity: int, unit_price: float, discount: float = 0.0) -> float:
    """Calculate total price after discount.

    Args:
        quantity: Number of items ordered.
        unit_price: Price per item in USD.
        discount: Decimal discount rate (0.0 to 1.0).

    Returns:
        Final price in USD.

    Raises:
        ValueError: If quantity is negative.
    """
```

### JavaScript/TypeScript (JSDoc)

```javascript
/**
 * Validates user input against security rules.
 * @param {string} input - Raw user input from form.
 * @param {Object} rules - Validation constraints.
 * @param {number} rules.maxLength - Maximum allowed length.
 * @returns {boolean} True if input passes all rules.
 * @throws {ValidationError} If input violates security constraints.
 */
function validateInput(input, rules) {
```

### Bash

```bash
#!/usr/bin/env bash
# Deploy application to production environment.
#
# Usage: ./deploy.sh [environment]
#
# Args:
#   environment: Target environment (staging|production). Default: staging.
#
# Exits:
#   0 on success, 1 on deployment failure.
```

## Inline Comments: WHY Not WHAT

**Incorrect:**
```python
# Iterate through all users
for user in users:
    # Check if user is active
    if user.active:
        # Increment counter
        count += 1
```

**Correct:**
```python
# Count only active users to calculate monthly revenue
for user in users:
    if user.active:
        count += 1
```

**Incorrect:**
```javascript
// Set timeout to 5000
setTimeout(() => {
  // Show error message
  alert('Error');
}, 5000);
```

**Correct:**
```javascript
// 5000ms delay prevents duplicate alerts during rapid retries
setTimeout(() => {
  alert('Error');
}, 5000);
```

**Incorrect:**
```bash
# Remove temporary files
rm -rf /tmp/app/*
```

**Correct:**
```bash
# Clear temp directory before batch import to prevent partial state
rm -rf /tmp/app/*
```

**Rule:** Describe the intent and context. Never describe what the code obviously does.

## README Standards

Every project needs a README at the top level.

**Required sections:**
1. **What it does** - One sentence summary
2. **Installation** - Setup commands
3. **Usage** - Basic example
4. **Configuration** - Environment variables and settings
5. **Contributing** - How to contribute

**Example structure:**

```markdown
# Project Name

One-line description of what this project does.

## Installation
```bash
npm install
```

## Usage
```bash
npm start
```

## Configuration

Create `.env` file:
```
API_KEY=your_key_here
```

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md).
```

**Keep READMEs focused**. Link to separate docs for complex topics. Don't make the README a tutorial.
118 rules/concerns/git-workflow.md Normal file
@@ -0,0 +1,118 @@
# Git Workflow Rules

## Conventional Commits

Format: `<type>(<scope>): <subject>`

### Commit Types

- **feat**: New feature
  - `feat(auth): add OAuth2 login flow`
  - `feat(api): expose user endpoints`

- **fix**: Bug fix
  - `fix(payment): resolve timeout on Stripe calls`
  - `fix(ui): button not clickable on mobile`

- **refactor**: Code refactoring (no behavior change)
  - `refactor(utils): extract date helpers`
  - `refactor(api): simplify error handling`

- **docs**: Documentation only
  - `docs(readme): update installation steps`
  - `docs(api): add endpoint examples`

- **chore**: Maintenance tasks
  - `chore(deps): update Node to 20`
  - `chore(ci): add GitHub Actions workflow`

- **test**: Tests only
  - `test(auth): add unit tests for login`
  - `test(e2e): add checkout flow tests`

- **style**: Formatting, no logic change
  - `style: sort imports alphabetically`

### Commit Rules

- Subject max 72 chars
- Imperative mood ("add", not "added")
- No period at end
- Reference issues: `Closes #123`
|
||||
## Branch Naming
|
||||
|
||||
Pattern: `<type>/<short-description>`
|
||||
|
||||
### Branch Types
|
||||
|
||||
- `feature/add-user-dashboard`
|
||||
- `feature/enable-dark-mode`
|
||||
- `fix/login-redirect-loop`
|
||||
- `fix/payment-timeout-error`
|
||||
- `refactor/extract-user-service`
|
||||
- `refactor/simplify-auth-flow`
|
||||
- `hotfix/security-vulnerability`
|
||||
|
||||
### Branch Rules
|
||||
|
||||
- Lowercase and hyphens
|
||||
- Max 50 chars
|
||||
- Delete after merge
|
||||
|
||||
## Pull Requests
|
||||
|
||||
### PR Title
|
||||
|
||||
Follow Conventional Commit format:
|
||||
- `feat: add user dashboard`
|
||||
- `fix: resolve login redirect loop`
|
||||
|
||||
### PR Description
|
||||
|
||||
```markdown
|
||||
## What
|
||||
Brief description
|
||||
|
||||
## Why
|
||||
Reason for change
|
||||
|
||||
## How
|
||||
Implementation approach
|
||||
|
||||
## Testing
|
||||
Steps performed
|
||||
|
||||
## Checklist
|
||||
- [ ] Tests pass
|
||||
- [ ] Code reviewed
|
||||
- [ ] Docs updated
|
||||
```
|
||||
|
||||
## Merge Strategy
|
||||
|
||||
### Squash Merge
|
||||
|
||||
- Many small commits
|
||||
- One cohesive feature
|
||||
- Clean history
|
||||
|
||||
### Merge Commit
|
||||
|
||||
- Preserve commit history
|
||||
- Distinct milestones
|
||||
- Detailed history preferred
|
||||
|
||||
### When to Rebase
|
||||
|
||||
- Before opening PR
|
||||
- Resolving conflicts
|
||||
- Keeping current with main
|
||||
|
||||
## General Rules
|
||||
|
||||
- Pull latest from main before starting
|
||||
- Write atomic commits
|
||||
- Run tests before pushing
|
||||
- Request peer review before merge
|
||||
- Never force push to main/master
|
||||
105 rules/concerns/naming.md Normal file
@@ -0,0 +1,105 @@
# Naming Conventions

Use consistent naming across all code. Follow language-specific conventions.

## Language Reference

| Type | Python | TypeScript | Nix | Shell |
|------|--------|------------|-----|-------|
| Variables | snake_case | camelCase | camelCase | UPPER_SNAKE |
| Functions | snake_case | camelCase | camelCase | lower_case |
| Classes | PascalCase | PascalCase | - | - |
| Constants | UPPER_SNAKE | UPPER_SNAKE | camelCase | UPPER_SNAKE |
| Files | snake_case | camelCase | hyphen-case | hyphen-case |
| Modules | snake_case | camelCase | - | - |

## General Rules

**Files**: Use hyphen-case for documentation, snake_case for Python, camelCase for TypeScript. Names should describe content.

**Variables**: Use descriptive names. Avoid single letters except loop counters. No Hungarian notation.

**Functions**: Use a verb-noun pattern. The name describes what the function does, not how it does it.

**Classes**: Use PascalCase with descriptive nouns. Avoid abbreviations.

**Constants**: Use UPPER_SNAKE with descriptive names. Group related constants.

## Examples

Python:
```python
# Variables
user_name = "alice"
is_authenticated = True

# Functions
def get_user_data(user_id):
    pass

# Classes
class UserProfile:
    pass

# Constants
MAX_RETRIES = 3
API_ENDPOINT = "https://api.example.com"
```

TypeScript:
```typescript
// Variables
const userName = "alice";
const isAuthenticated = true;

// Functions
function getUserData(userId: string): User | null {
    return null;
}

// Classes
class UserProfile {
    private name: string;
}

// Constants
const MAX_RETRIES = 3;
const API_ENDPOINT = "https://api.example.com";
```

Nix:
```nix
# Variables
let
  userName = "alice";
  isAuthenticated = true;
in
# ...
```

Shell:
```bash
# Variables
USER_NAME="alice"
IS_AUTHENTICATED=true

# Functions
get_user_data() {
  echo "Getting data"
}

# Constants
MAX_RETRIES=3
API_ENDPOINT="https://api.example.com"
```

## File Naming

Use these patterns consistently. No exceptions.

- Skills: `hyphen-case`
- Python: `snake_case.py`
- TypeScript: `camelCase.ts` or `hyphen-case.ts`
- Nix: `hyphen-case.nix`
- Shell: `hyphen-case.sh`
- Markdown: `UPPERCASE.md` or `sentence-case.md`
82 rules/concerns/project-structure.md Normal file
@@ -0,0 +1,82 @@
# Project Structure

## Python

Use the src layout for all projects. Place application code in `src/<project>/`, tests in `tests/`.

```
project/
├── src/myproject/
│   ├── __init__.py
│   ├── main.py          # Entry point
│   └── core/
│       └── module.py
├── tests/
│   ├── __init__.py
│   └── test_module.py
├── pyproject.toml       # Config
├── README.md
└── .gitignore
```

**Rules:**
- One module per file
- `__init__.py` in every package
- Entry point in `src/myproject/main.py`
- Config in root: `pyproject.toml`, `requirements.txt`

## TypeScript

Use `src/` for source, `dist/` for build output.

```
project/
├── src/
│   ├── index.ts         # Entry point
│   ├── core/
│   │   └── module.ts
│   └── types.ts
├── tests/
│   └── module.test.ts
├── package.json         # Config
├── tsconfig.json
└── README.md
```

**Rules:**
- One module per file
- Index exports from `src/index.ts`
- Entry point in `src/index.ts`
- Config in root: `package.json`, `tsconfig.json`

## Nix

Use `modules/` for NixOS modules, `pkgs/` for packages.

```
nix-config/
├── modules/
│   ├── default.nix      # Module list
│   └── my-service.nix
├── pkgs/
│   └── my-package/
│       └── default.nix
├── flake.nix            # Entry point
├── flake.lock
└── README.md
```

**Rules:**
- One module per file in `modules/`
- One package per directory in `pkgs/`
- Entry point in `flake.nix`
- Config in root: `flake.nix`, `shell.nix`

## General

- Use hyphen-case for directories and file names
- Config files in project root
- Tests separate from source
- Docs in root: README.md, CHANGELOG.md
- Hidden configs: `.env`, `.gitignore`
476 rules/concerns/tdd.md Normal file
@@ -0,0 +1,476 @@
# Test-Driven Development (Strict Enforcement)

## Critical Rules (MUST follow)

**NEVER write production code without a failing test first.**
**ALWAYS follow the red-green-refactor cycle. No exceptions.**
**NEVER skip the refactor step. Code quality is mandatory.**
**ALWAYS commit after green, never commit red tests.**

---

## The Red-Green-Refactor Cycle

### Phase 1: Red (Write Failing Test)

The test MUST fail for the right reason, not a syntax error or missing import.

```python
# CORRECT: Test fails because the behavior doesn't exist yet
def test_calculate_discount_for_premium_members():
    user = User(tier="premium")
    cart = Cart(items=[Item(price=100)])

    discount = calculate_discount(user, cart)

    assert discount == 10  # Fails: calculate_discount not implemented

# INCORRECT: Test fails for the wrong reason (missing args, not missing behavior)
def test_calculate_discount():
    discount = calculate_discount()  # Fails: missing required args
    assert discount is not None
```

**Red Phase Checklist:**
- [ ] Test describes ONE behavior
- [ ] Test name clearly states expected outcome
- [ ] Test fails for the intended reason
- [ ] Error message is meaningful

### Phase 2: Green (Write Minimum Code)

Write the MINIMUM code to make the test pass. Do not implement future features.

```python
# CORRECT: Minimum implementation
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0

# INCORRECT: Over-engineering for future needs
def calculate_discount(user, cart):
    discounts = {
        "premium": 10,
        "gold": 15,    # Not tested
        "silver": 5,   # Not tested
        "basic": 0     # Not tested
    }
    return discounts.get(user.tier, 0)
```

**Green Phase Checklist:**
- [ ] Code makes the test pass
- [ ] No extra functionality added
- [ ] Code may be ugly (refactor comes next)
- [ ] All existing tests still pass

### Phase 3: Refactor (Improve Code Quality)

Refactor ONLY when all tests are green. Make small, incremental changes.

```python
# BEFORE (green but messy)
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0

# AFTER (refactored)
DISCOUNT_RATES = {"premium": 0.10}

def calculate_discount(user, cart):
    rate = DISCOUNT_RATES.get(user.tier, 0)
    return int(cart.total * rate)
```

**Refactor Phase Checklist:**
- [ ] All tests still pass after each change
- [ ] One refactoring at a time
- [ ] Commit if significant improvement made
- [ ] No behavior changes (tests remain green)

---
## Enforcement Rules

### 1. Test-First Always

```python
# WRONG: Code first, test later
class PaymentProcessor:
    def process(self, amount):
        return self.gateway.charge(amount)

# Then write test... (TOO LATE!)

# CORRECT: Test first
def test_process_payment_charges_gateway():
    mock_gateway = MockGateway()
    processor = PaymentProcessor(gateway=mock_gateway)

    processor.process(100)

    assert mock_gateway.charged_amount == 100
```

### 2. No Commented-Out Tests

```python
# WRONG: Commented test hides failing behavior
# def test_refund_processing():
#     # TODO: fix this later
#     assert False

# CORRECT: Use skip with a reason
@pytest.mark.skip(reason="Refund flow not yet implemented")
def test_refund_processing():
    assert False
```

### 3. Commit Hygiene

```bash
# WRONG: Committing with failing tests
git commit -m "WIP: adding payment"
# Tests fail in CI

# CORRECT: Only commit green
git commit -m "Add payment processing"
# All tests pass locally and in CI
```

---
## AI-Assisted TDD Patterns

### Pattern 1: Explicit Test Request

When working with AI assistants, request tests explicitly:

```
CORRECT PROMPT:
"Write a failing test for calculating user discounts based on tier.
Then implement the minimum code to make it pass."

INCORRECT PROMPT:
"Implement a discount calculator with tier support."
```

### Pattern 2: Verification Request

After AI generates code, verify test coverage:

```
PROMPT:
"The code you wrote for calculate_discount is missing tests.
First, show me a failing test for the edge case where the cart is empty.
Then make it pass with minimum code."
```

### Pattern 3: Refactor Request

Request refactoring as a separate step:

```
CORRECT:
"Refactor calculate_discount to use a lookup table.
Run tests after each change."

INCORRECT:
"Refactor and add new features at the same time."
```

### Pattern 4: Red-Green-Refactor in Prompts

Structure AI prompts to follow the cycle:

```
PROMPT TEMPLATE:
"Phase 1 (Red): Write a test that [describes behavior].
The test should fail because [reason].
Show me the failing test output.

Phase 2 (Green): Write the minimum code to pass this test.
No extra features.

Phase 3 (Refactor): Review the code. Suggest improvements.
I'll approve before you apply changes."
```

### AI Anti-Patterns to Avoid

```python
# ANTI-PATTERN: AI generates code without tests
# User: "Create a user authentication system"
# AI generates 200 lines of code with no tests

# CORRECT APPROACH:
# User: "Let's build authentication with TDD.
#        First, write a failing test for successful login."

# ANTI-PATTERN: AI generates tests after implementation
# User: "Write tests for this code"
# AI writes tests that pass trivially (not TDD)

# CORRECT APPROACH:
# User: "I need a new feature. Write the failing test first."
```

---
## Legacy Code Strategy

### 1. Characterization Tests First

Before modifying legacy code, capture existing behavior:

```python
def test_legacy_calculate_price_characterization():
    """
    This test documents existing behavior, not desired behavior.
    Do not change expected values without understanding the impact.
    """
    # Given: Current production inputs
    order = Order(items=[Item(price=100, quantity=2)])

    # When: Execute legacy code
    result = legacy_calculate_price(order)

    # Then: Capture the ACTUAL output (even if wrong)
    assert result == 215  # Includes mystery 7.5% surcharge
```

### 2. Strangler Fig Pattern

```python
# Step 1: Write a test for the new behavior
def test_calculate_price_with_new_algorithm():
    order = Order(items=[Item(price=100, quantity=2)])
    result = calculate_price_v2(order)
    assert result == 200  # No mystery surcharge

# Step 2: Implement the new code with TDD
def calculate_price_v2(order):
    return sum(item.price * item.quantity for item in order.items)

# Step 3: Route new requests to the new code
def calculate_price(order):
    if order.use_new_pricing:
        return calculate_price_v2(order)
    return legacy_calculate_price(order)

# Step 4: Gradually migrate, then remove the legacy path
```

### 3. Safe Refactoring Sequence

```python
# 1. Add characterization tests
# 2. Extract method (tests stay green)
# 3. Add unit tests for the extracted method
# 4. Refactor the extracted method with TDD
# 5. Inline or delete the old method
```
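The extract-method step (step 2) can be sketched in miniature. `legacy_calculate_price` and its 7.5% surcharge are hypothetical stand-ins echoing the characterization test above, not real project code:

```python
def legacy_calculate_price(order_items):
    # Original monolith: subtotal plus a 7.5% surcharge, all inline.
    total = 0
    for price, quantity in order_items:
        total += price * quantity
    return total + total * 0.075

# Step 2: extract the subtotal calculation. Behavior is unchanged,
# so the characterization test stays green.
def calculate_subtotal(order_items):
    return sum(price * quantity for price, quantity in order_items)

def legacy_calculate_price_refactored(order_items):
    subtotal = calculate_subtotal(order_items)
    return subtotal + subtotal * 0.075

# Both versions agree on the characterized input (100 * 2 = 200, plus 7.5%).
items = [(100, 2)]
assert legacy_calculate_price(items) == legacy_calculate_price_refactored(items) == 215.0
```

The extracted `calculate_subtotal` can now get its own unit tests and be refactored with TDD (steps 3 and 4) while the legacy wrapper stays untouched.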

---
## Integration Test TDD

### Outside-In (London School)

```python
# 1. Write an acceptance test (fails end-to-end)
def test_user_can_complete_purchase():
    user = create_user()
    add_item_to_cart(user, item)

    result = complete_purchase(user)

    assert result.status == "success"
    assert user.has_receipt()

# 2. Drop down to a unit test for the first component
def test_cart_calculates_total():
    cart = Cart()
    cart.add(Item(price=100))

    assert cart.total == 100

# 3. Implement with TDD, working inward
```

### Contract Testing

```python
# Provider contract test
def test_payment_api_contract():
    """External services must match this contract."""
    response = client.post("/payments", json={
        "amount": 100,
        "currency": "USD"
    })

    assert response.status_code == 201
    assert "transaction_id" in response.json()

# Consumer contract test
def test_payment_gateway_contract():
    """We expect the gateway to return transaction IDs."""
    mock_gateway = MockPaymentGateway()
    mock_gateway.expect_charge(amount=100).and_return(
        transaction_id="tx_123"
    )

    result = process_payment(mock_gateway, amount=100)

    assert result.transaction_id == "tx_123"
```

---
## Refactoring Rules

### Rule 1: Refactor Only When Green

```python
# WRONG: Refactoring with a failing test
def test_new_feature():
    assert False  # Failing

def existing_code():
    # Refactoring here is DANGEROUS
    pass

# CORRECT: All tests pass before refactoring
def existing_code():
    # Safe to refactor now
    pass
```

### Rule 2: One Refactoring at a Time

```python
# WRONG: Multiple refactorings at once
def process_order(order):
    # Changed: variable name
    # Changed: extracted method
    # Changed: added caching
    # Which broke it? Who knows.
    pass

# CORRECT: One change, test, commit
# Commit 1: Rename variable
# Commit 2: Extract method
# Commit 3: Add caching
```

### Rule 3: Baby Steps

```python
# WRONG: Large refactoring
# Before: 500-line monolith
# After: 10 new classes
# Risk: Too high

# CORRECT: Extract one method at a time
# Step 1: Extract calculate_total (commit)
# Step 2: Extract validate_items (commit)
# Step 3: Extract apply_discounts (commit)
```

---
## Test Quality Gates

### Pre-Commit Hooks

```bash
#!/bin/bash
# .git/hooks/pre-commit

# Run fast unit tests
uv run pytest tests/unit -x -q || exit 1

# Check test coverage threshold
uv run pytest --cov=src --cov-fail-under=80 || exit 1
```

### CI/CD Requirements

```yaml
# .github/workflows/test.yml
- name: Run Tests
  run: |
    pytest --cov=src --cov-report=xml --cov-fail-under=80

- name: Check Test Quality
  run: |
    # Fail if new code lacks tests
    diff-cover coverage.xml --fail-under=80
```

### Code Review Checklist

```markdown
## TDD Verification
- [ ] New code has corresponding tests
- [ ] Tests were written FIRST (check commit order)
- [ ] Each test tests ONE behavior
- [ ] Test names describe the scenario
- [ ] No commented-out or skipped tests without a reason
- [ ] Coverage maintained or improved
```

---
## When TDD Is Not Appropriate

TDD may be skipped ONLY for:

### 1. Exploratory Prototypes

```python
# prototype.py - Delete after learning
# No tests needed for throwaway exploration
def quick_test_api():
    response = requests.get("https://api.example.com")
    print(response.json())
```

### 2. One-Time Scripts

```python
# migrate_data.py - Run once, discard
# Tests would cost more than the value they provide
```

### 3. Trivial Changes

```python
# Typo fix or comment change
# No behavior change = no new test needed
```

**If unsure, write the test.**

---

## Quick Reference

| Phase    | Rule                            | Check                       |
|----------|---------------------------------|-----------------------------|
| Red      | Write failing test first        | Test fails for right reason |
| Green    | Write minimum code to pass      | No extra features           |
| Refactor | Improve code while tests green  | Run tests after each change |
| Commit   | Only commit green tests         | All tests pass in CI        |

## TDD Mantra

```
Red. Green. Refactor. Commit. Repeat.

No test = No code.
No green = No commit.
No refactor = Technical debt.
```
134 rules/concerns/testing.md Normal file
@@ -0,0 +1,134 @@
# Testing Rules

## Arrange-Act-Assert Pattern

Structure every test in three distinct phases:

```python
# Arrange: Set up the test data and conditions
user = User(name="Alice", role="admin")
session = create_test_session(user.id)

# Act: Execute the behavior under test
result = grant_permission(session, "read_documents")

# Assert: Verify the expected outcome
assert result.granted is True
assert result.permissions == ["read_documents"]
```

Never mix phases. Comment each phase clearly for complex setups. Keep the Act phase to one line if possible.

## Behavior vs Implementation Testing

Test behavior, not implementation details:

```python
# GOOD: Tests the observable behavior
def test_user_can_login():
    response = login("alice@example.com", "password123")
    assert response.status_code == 200
    assert "session_token" in response.cookies

# BAD: Tests internal implementation
def test_login_sets_database_flag():
    login("alice@example.com", "password123")
    user = User.get(email="alice@example.com")
    assert user._logged_in_flag is True  # Private field
```

Focus on inputs and outputs. Test public contracts. Refactor internals freely without breaking tests.
## Mocking Philosophy

Mock external dependencies, not internal code:

```python
# GOOD: Mock external services
@patch("requests.post")
def test_sends_notification_to_slack(mock_post):
    send_notification("Build complete!")
    mock_post.assert_called_once_with(
        "https://slack.com/api/chat.postMessage",
        json={"text": "Build complete!"}
    )

# BAD: Mock internal methods
@patch("NotificationService._format_message")
def test_notification_formatting(mock_format):
    # Don't mock private methods
    send_notification("Build complete!")
```

Mock when:
- The dependency is slow (database, network, file system)
- The dependency is unreliable (external APIs)
- The dependency is expensive (third-party services)

Don't mock when:
- Testing the dependency itself
- The dependency is fast and stable
- The mock becomes more complex than the real implementation

## Coverage Expectations

Write tests for:
- Critical business logic (aim for 90%+)
- Edge cases and error paths (aim for 80%+)
- Public APIs and contracts (aim for 100%)

Don't obsess over:
- Trivial getters/setters
- Generated code
- One-line wrappers

Coverage is a floor, not a ceiling. A test suite at 100% coverage that doesn't verify behavior is worthless.

## Test-Driven Development

Follow the red-green-refactor cycle:
1. Red: Write a failing test for the new behavior
2. Green: Write the minimum code to pass
3. Refactor: Improve the code while tests stay green

Write tests first for new features. Write tests after for bug fixes. Never refactor without tests.

## Test Organization

Group tests by feature or behavior, not by file structure. Name tests to describe the scenario:

```python
class TestUserAuthentication:
    def test_valid_credentials_succeeds(self):
        pass

    def test_invalid_credentials_fails(self):
        pass

    def test_locked_account_fails(self):
        pass
```

Each test should stand alone. Avoid shared state between tests. Use fixtures or setup methods to reduce duplication.
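A minimal sketch of the setup-method approach using pytest's `setup_method` hook; the user dict and its fields are illustrative, not a real fixture from this repo:

```python
class TestUserPermissions:
    def setup_method(self, method):
        # pytest calls this before every test method:
        # each test gets a fresh user, so no state leaks between tests.
        self.user = {"name": "Alice", "role": "admin", "locked": False}

    def test_admin_role_is_set(self):
        assert self.user["role"] == "admin"

    def test_new_user_is_not_locked(self):
        assert self.user["locked"] is False
```

`@pytest.fixture` functions serve the same purpose when the shared setup is needed across classes or modules.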

## Test Data

Use realistic test data that reflects production scenarios:

```python
# GOOD: Realistic values
user = User(
    email="alice@example.com",
    name="Alice Smith",
    age=28
)

# BAD: Placeholder values
user = User(
    email="test@test.com",
    name="Test User",
    age=999
)
```

Avoid magic strings and numbers. Use named constants for expected values that change often.
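A minimal sketch of that rule; the shipping-fee names and values are made up for illustration:

```python
# Named constants make expected values self-documenting and easy to update
# in one place when the business rule changes.
FREE_SHIPPING_THRESHOLD = 50.00
STANDARD_SHIPPING_FEE = 4.99

def shipping_fee(order_total):
    return 0.0 if order_total >= FREE_SHIPPING_THRESHOLD else STANDARD_SHIPPING_FEE

def test_orders_over_threshold_ship_free():
    assert shipping_fee(FREE_SHIPPING_THRESHOLD + 10) == 0.0

def test_small_orders_pay_standard_fee():
    assert shipping_fee(12.50) == STANDARD_SHIPPING_FEE
```

If the threshold changes, only the constant changes; the tests still read as the rule they verify.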
0 rules/frameworks/.gitkeep Normal file

42 rules/frameworks/n8n.md Normal file
@@ -0,0 +1,42 @@
# n8n Workflow Automation Rules

## Workflow Design
- Start with a clear trigger: Webhook, Schedule, or Event source
- Keep workflows under 20 nodes for maintainability
- Group related logic with sub-workflows
- Use the "Switch" node for conditional branching
- Add "Wait" nodes between rate-limited API calls

## Node Naming
- Use verb-based names: `Fetch Users`, `Transform Data`, `Send Email`
- Prefix data nodes: `Get_`, `Set_`, `Update_`
- Prefix conditionals: `Check_`, `If_`, `When_`
- Prefix actions: `Send_`, `Create_`, `Delete_`
- Add a version suffix to API nodes: `API_v1_Users`

## Error Handling
- Always add an Error Trigger node
- Route errors to a "Notify Failure" branch
- Log error details: `$json.error.message`, `$json.node.name`
- Send alerts on critical failures
- Enable "Continue On Fail" for non-essential nodes

## Data Flow
- Use "Set" nodes to normalize output structure
- Reference previous nodes: `{{ $json.field }}`
- Use the "Merge" node to combine multiple data sources
- Apply a "Code" node for complex transformations
- Clean data before sending it to external APIs

## Credential Security
- Store all secrets in the n8n credentials manager
- Never hardcode API keys or tokens
- Use environment-specific credential sets
- Rotate credentials regularly
- Limit credential scope to the minimum required permissions

## Testing
- Test each node independently with "Execute Node"
- Verify the data structure at each step
- Mock external dependencies during development
- Log workflow execution for debugging
0 rules/languages/.gitkeep Normal file

129 rules/languages/nix.md Normal file
@@ -0,0 +1,129 @@
# Nix Code Conventions

## Formatting

- Use `alejandra` for formatting
- camelCase for variables, PascalCase for types
- 2-space indentation (alejandra default)
- No trailing whitespace

## Flake Structure

```nix
{
  description = "Description here";
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
  };
  outputs = { self, nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in {
        packages.default = pkgs.hello;
        devShells.default = pkgs.mkShell {
          buildInputs = [ pkgs.hello ];
        };
      }
    );
}
```

## Module Patterns

Standard module function signature:

```nix
{ config, lib, pkgs, ... }:
{
  options.myService.enable = lib.mkEnableOption "my service";
  config = lib.mkIf config.myService.enable {
    services.myService.enable = true;
  };
}
```
## Conditionals and Merging

- Use `mkIf` for conditional config
- Use `mkMerge` to combine multiple config sets
- Use `mkOptionDefault` for defaults that can be overridden

```nix
config = lib.mkMerge [
  (lib.mkIf cfg.enable { ... })
  (lib.mkIf cfg.extraConfig { ... })
];
```

## Anti-Patterns (AVOID)

### `with pkgs;`

Bad: Pollutes the namespace, hard to trace origins
```nix
{ pkgs, ... }:
{
  packages = with pkgs; [ vim git ];
}
```

Good: Explicit references
```nix
{ pkgs, ... }:
{
  packages = [ pkgs.vim pkgs.git ];
}
```

### `builtins.fetchTarball`

Use flake inputs instead. `fetchTarball` is not reproducible.

### Impure operations

Avoid `import <nixpkgs>` in flakes. Always use inputs.

### `builtins.getAttr` / `builtins.hasAttr`

Use `lib.attrByPath` or `lib.optionalAttrs` instead.

## Home Manager Patterns

```nix
{ config, pkgs, lib, ... }:
{
  home.packages = [ pkgs.ripgrep pkgs.fd ];  # explicit references, no `with pkgs;`
  programs.zsh.enable = true;
  xdg.configFile."myapp/config".text = "...";
}
```

## Overlays

```nix
{ config, lib, pkgs, ... }:
let
  myOverlay = final: prev: {
    myPackage = prev.myPackage.overrideAttrs (old: { ... });
  };
in
{
  nixpkgs.overlays = [ myOverlay ];
}
```

## Imports and References

- Use flake inputs for dependencies
- `lib` is always available in modules
- Reference packages via `pkgs.packageName`
- Use `callPackage` for complex package definitions

## File Organization

```
flake.nix            # Entry point
modules/             # NixOS modules
  services/
    my-service.nix
overlays/            # Package overrides
  default.nix
```
224 rules/languages/python.md Normal file
@@ -0,0 +1,224 @@
# Python Language Rules

## Toolchain

### Package Management (uv)
```bash
uv init my-project --package
uv add numpy pandas
uv add --dev pytest ruff pyright hypothesis
uv run python -m pytest
uv lock --upgrade-package numpy
```

### Linting & Formatting (ruff)
```toml
[tool.ruff]
line-length = 100
target-version = "py311"

[tool.ruff.lint]
select = ["E", "F", "W", "I", "N", "UP"]
ignore = ["E501"]

[tool.ruff.format]
quote-style = "double"
```

### Type Checking (pyright)
```toml
[tool.pyright]
typeCheckingMode = "strict"
reportMissingTypeStubs = true
reportUnknownMemberType = true
```

### Testing (pytest + hypothesis)
```python
import pytest
from hypothesis import given, strategies as st


@given(st.integers(), st.integers())
def test_addition_commutative(a, b):
    assert a + b == b + a


@pytest.fixture
def user_data():
    return {"name": "Alice", "age": 30}


def test_user_creation(user_data):
    user = User(**user_data)
    assert user.name == "Alice"
```
### Data Validation (Pydantic)
```python
from pydantic import BaseModel, Field, field_validator


class User(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    age: int = Field(ge=0, le=150)
    email: str

    @field_validator('email')
    @classmethod
    def email_must_contain_at(cls, v: str) -> str:
        if '@' not in v:
            raise ValueError('must contain @')
        return v
```
## Idioms

### Comprehensions
```python
# List comprehension
squares = [x**2 for x in range(10) if x % 2 == 0]

# Dict comprehension
word_counts = {word: text.count(word) for word in unique_words}

# Set comprehension
unique_chars = {char for char in text if char.isalpha()}
```

### Context Managers
```python
import time
from contextlib import contextmanager

# Built-in context managers
with open('file.txt', 'r') as f:
    content = f.read()


# Custom context manager
@contextmanager
def timer():
    start = time.time()
    yield
    print(f"Elapsed: {time.time() - start:.2f}s")
```

### Generators
```python
def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b


def read_lines(file_path):
    with open(file_path) as f:
        for line in f:
            yield line.strip()
```

### F-strings
```python
name = "Alice"
age = 30

# Basic interpolation
msg = f"Name: {name}, Age: {age}"

# Expression evaluation
msg = f"Next year: {age + 1}"

# Format specs
msg = f"Price: ${price:.2f}"
msg = f"Hex: {0xFF:X}"
```

## Anti-Patterns

### Bare Except
```python
# AVOID: Catches all exceptions, including SystemExit and KeyboardInterrupt
try:
    risky_operation()
except:
    pass

# USE: Catch specific exceptions
try:
    risky_operation()
except ValueError as e:
    log_error(e)
except KeyError as e:
    log_error(e)
```

### Mutable Defaults
```python
# AVOID: Default argument is created once, at function definition time
def append_item(item, items=[]):
    items.append(item)
    return items

# USE: None as sentinel
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```

### Global State
```python
# AVOID: Global mutable state
counter = 0


def increment():
    global counter
    counter += 1


# USE: Class-based state
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
```

### Star Imports
```python
# AVOID: Pollutes the namespace, obscures origins
from module import *

# USE: Explicit imports
from module import specific_function, MyClass
import module as m
```

## Project Setup

### pyproject.toml Structure
```toml
[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "pydantic>=2.0",
    "httpx>=0.25",
]

[project.optional-dependencies]
dev = ["pytest", "ruff", "pyright", "hypothesis"]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```

### src Layout
```
my-project/
├── pyproject.toml
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        └── utils/
            ├── __init__.py
            └── helpers.py
```
rules/languages/shell.md (new file)
@@ -0,0 +1,100 @@
# Shell Scripting Rules

## Shebang

Always use `#!/usr/bin/env bash` for portability. Never hardcode `/bin/bash`.

```bash
#!/usr/bin/env bash
```

## Strict Mode

Enable strict mode in every script.

```bash
#!/usr/bin/env bash
set -euo pipefail
```

- `-e`: Exit immediately when a command fails
- `-u`: Treat expansion of unset variables as an error
- `-o pipefail`: A pipeline's exit status is that of the last command to fail, not just the final command

## Shellcheck

Run shellcheck on all scripts before committing.

```bash
shellcheck script.sh
```

## Quoting

Quote all variable expansions and command substitutions. Use arrays instead of word-splitting strings.

```bash
# Good
"${var}"
files=("file1.txt" "file2.txt")
for f in "${files[@]}"; do
  process "$f"
done

# Bad
$var
files="file1.txt file2.txt"
for f in $files; do
  process $f
done
```

## Functions

Define functions with parentheses and use `local` for variables.

```bash
my_function() {
  local result
  result=$(some_command)
  echo "$result"
}
```

## Command Substitution

Use `$()`, not backticks; it nests cleanly.

```bash
# Good
output=$(ls "$dir")

# Bad
output=`ls $dir`
```

## POSIX Portability

Write POSIX-compliant scripts when targeting `/bin/sh`.

- Use `[[` only in bash scripts
- Use `printf` instead of `echo -e`
- Avoid `[[`, `((`, and `&>` in sh scripts
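The bullets above can be sketched as POSIX-compliant equivalents of common bashisms (the filenames and values are illustrative):

```shell
#!/bin/sh
# POSIX replacements for bash-only constructs

name="world"

# [ ] instead of [[ ]]
if [ "$name" = "world" ]; then
  # printf instead of echo -e
  printf 'hello %s\n' "$name"
fi

# $(( )) arithmetic is POSIX; bare (( )) is not
count=$((1 + 2))
printf '%d\n' "$count"

# redirect stdout and stderr without &>
ls /nonexistent >/dev/null 2>&1 || printf 'ls failed\n'
```
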
## Error Handling

Use `trap` for cleanup.

```bash
cleanup() {
  rm -f /tmp/lockfile
}
trap cleanup EXIT
```

## Readability

- Use 2-space indentation
- Limit lines to 80 characters
- Add comments for non-obvious logic
- Separate sections with blank lines
rules/languages/typescript.md (new file)
@@ -0,0 +1,150 @@
# TypeScript Patterns

## Strict tsconfig

Always enable strict mode and key safety options:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true
  }
}
```

## Discriminated Unions

Use discriminated unions for exhaustive type safety:

```ts
type Result =
  | { success: true; data: string }
  | { success: false; error: Error };

function handleResult(result: Result): string {
  if (result.success) {
    return result.data;
  }
  throw result.error;
}
```
## Branded Types

Prevent type confusion with nominal branding. Give each brand a distinct literal so the branded types are not assignable to each other:

```ts
type UserId = string & { readonly __brand: "UserId" };
type Email = string & { readonly __brand: "Email" };

function createUserId(id: string): UserId {
  return id as UserId;
}

function sendEmail(email: Email, userId: UserId) {}
```
## satisfies Operator

Use `satisfies` for type-safe object literal inference:

```ts
const config = {
  port: 3000,
  host: "localhost",
} satisfies {
  port: number;
  host: string;
  debug?: boolean;
};

config.port; // number
config.host; // string
```

## as const Assertions

Freeze literal types with `as const`:

```ts
const routes = {
  home: "/",
  about: "/about",
  contact: "/contact",
} as const;

type Route = typeof routes[keyof typeof routes];
```
## Modern Features

```ts
// Promise.withResolvers()
const { promise, resolve, reject } = Promise.withResolvers<string>();

// Object.groupBy()
const users = [
  { name: "Alice", role: "admin" },
  { name: "Bob", role: "user" },
];
const grouped = Object.groupBy(users, u => u.role);

// await using for async disposables
class Resource implements AsyncDisposable {
  async [Symbol.asyncDispose]() {
    await this.cleanup();
  }
}
async function withResource() {
  await using r = new Resource();
}
```
## Toolchain

Prefer modern tooling:

- Runtime: `bun` or `tsx` (no `tsc` for execution)
- Linting: `biome` (preferred) or `eslint`
- Formatting: `biome` (built-in) or `prettier`
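A sketch of how this toolchain might be wired into `package.json` scripts (script names and version ranges are illustrative assumptions, not prescribed by these rules):

```json
{
  "scripts": {
    "dev": "tsx watch src/main.ts",
    "check": "tsc --noEmit",
    "lint": "biome check .",
    "format": "biome format --write ."
  },
  "devDependencies": {
    "typescript": "^5.5.0",
    "tsx": "^4.0.0",
    "@biomejs/biome": "^1.8.0"
  }
}
```

Note that `tsc --noEmit` is still useful for type checking even when `tsx` or `bun` handles execution.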
## Anti-Patterns

Avoid these TypeScript patterns:

```ts
// NEVER use as any
const data = response as any;

// NEVER use @ts-ignore
// @ts-ignore
const value = unknownFunction();

// NEVER use the ! non-null assertion
const element = document.querySelector("#foo")!;

// NEVER use enum (prefer a union)
enum Status { Active, Inactive } // ❌

// Prefer a const object or union
type Status = "Active" | "Inactive"; // ✅
const Status = { Active: "Active", Inactive: "Inactive" } as const; // ✅
```

## Indexed Access Safety

With `noUncheckedIndexedAccess`, handle undefined:

```ts
const arr: string[] = ["a", "b"];
const item = arr[0]; // string | undefined

const item2 = arr.at(0); // string | undefined

const map = new Map<string, number>();
const value = map.get("key"); // number | undefined
```
```diff
@@ -8,7 +8,7 @@
 # ./scripts/test-skill.sh --run    # Launch interactive opencode session
 #
 # This script creates a temporary XDG_CONFIG_HOME with symlinks to this
-# repository's skill/, context/, command/, and prompts/ directories,
+# repository's skills/, context/, command/, and prompts/ directories,
 # allowing you to test skill changes before deploying via home-manager.

 set -euo pipefail
@@ -72,17 +72,17 @@ list_skills() {

 validate_skill() {
   local skill_name="$1"
-  local skill_path="$REPO_ROOT/skill/$skill_name"
+  local skill_path="$REPO_ROOT/skills/$skill_name"

   if [[ ! -d "$skill_path" ]]; then
     echo -e "${RED}❌ Skill not found: $skill_name${NC}"
     echo "Available skills:"
-    ls -1 "$REPO_ROOT/skill/"
+    ls -1 "$REPO_ROOT/skills/"
     exit 1
   fi

   echo -e "${YELLOW}Validating skill: $skill_name${NC}"
-  if python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_path"; then
+  if python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_path"; then
     echo -e "${GREEN}✅ Skill '$skill_name' is valid${NC}"
   else
     echo -e "${RED}❌ Skill '$skill_name' has validation errors${NC}"
@@ -95,14 +95,14 @@ validate_all() {
   echo ""

   local failed=0
-  for skill_dir in "$REPO_ROOT/skill/"*/; do
+  for skill_dir in "$REPO_ROOT/skills/"*/; do
     local skill_name=$(basename "$skill_dir")
     echo -n "  $skill_name: "
-    if python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_dir" > /dev/null 2>&1; then
+    if python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_dir" > /dev/null 2>&1; then
       echo -e "${GREEN}✅${NC}"
     else
       echo -e "${RED}❌${NC}"
-      python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_dir" 2>&1 | sed 's/^/ /'
+      python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_dir" 2>&1 | sed 's/^/ /'
       ((failed++)) || true
     fi
   done
```
```diff
@@ -1,16 +1,16 @@
 #!/usr/bin/env bash
 #
-# Validate agents.json structure and prompt files
+# Validate agents.json structure and referenced prompt files
 #
 # Usage:
-#   ./scripts/validate-agents.sh          # Validate agents.json
-#   ./scripts/validate-agents.sh --help   # Show help
+#   ./scripts/validate-agents.sh
 #
-# Checks:
-# - agents.json is valid JSON
-# - Each agent has required fields (description, mode, model, prompt, permission)
-# - All referenced prompt files exist
-# - All prompt files are non-empty
+# This script validates the agent configuration by:
+# - Parsing agents.json as valid JSON
+# - Checking all 6 required agents are present
+# - Verifying each agent has required fields
+# - Validating agent modes (primary vs subagent)
+# - Verifying all referenced prompt files exist and are non-empty

 set -euo pipefail

@@ -22,146 +22,161 @@ GREEN='\033[0;32m'
 YELLOW='\033[1;33m'
 NC='\033[0m'

-usage() {
-  echo "Usage: $0 [OPTIONS]"
-  echo ""
-  echo "Validate agents.json structure and prompt files."
-  echo ""
-  echo "Options:"
-  echo "  --help    Show this help message"
-  echo ""
-  echo "Validates:"
-  echo "  - agents.json is valid JSON"
-  echo "  - Each agent has required fields"
-  echo "  - All referenced prompt files exist"
-  echo "  - All prompt files are non-empty"
+AGENTS_FILE="$REPO_ROOT/agents/agents.json"
+PROMPTS_DIR="$REPO_ROOT/prompts"
+
+# Expected agent list
+EXPECTED_AGENTS=("chiron" "chiron-forge" "hermes" "athena" "apollo" "calliope")
+# Expected primary agents
+PRIMARY_AGENTS=("chiron" "chiron-forge")
+# Expected subagents
+SUBAGENTS=("hermes" "athena" "apollo" "calliope")
+# Required fields for each agent
+REQUIRED_FIELDS=("description" "mode" "model" "prompt")
+
+echo -e "${YELLOW}Validating agent configuration...${NC}"
+echo ""
+
+# Track errors
+error_count=0
+warning_count=0
+
+# Function to print error
+error() {
+  echo -e "${RED}❌ $1${NC}" >&2
+  ((error_count++)) || true
 }

-check_json_valid() {
-  local agents_file="$1"
+# Function to print warning
+warning() {
+  echo -e "${YELLOW}⚠️  $1${NC}"
+  ((warning_count++)) || true
+}

-  if ! python3 -m json.tool "$agents_file" > /dev/null 2>&1; then
-    echo -e "${RED}❌ agents.json is not valid JSON${NC}"
-    return 1
+# Function to print success
+success() {
+  echo -e "${GREEN}✅ $1${NC}"
+}
+
+# Check if agents.json exists
+if [[ ! -f "$AGENTS_FILE" ]]; then
+  error "agents.json not found at $AGENTS_FILE"
+  exit 1
+fi
+
+# Validate JSON syntax
+if ! python3 -c "import json; json.load(open('$AGENTS_FILE'))" 2>/dev/null; then
+  error "agents.json is not valid JSON"
+  exit 1
 fi

+success "agents.json is valid JSON"
+echo ""
+
+# Parse agents.json
+AGENT_COUNT=$(python3 -c "import json; print(len(json.load(open('$AGENTS_FILE'))))")
+success "Found $AGENT_COUNT agents in agents.json"
+
+# Check agent count
+if [[ $AGENT_COUNT -ne ${#EXPECTED_AGENTS[@]} ]]; then
+  error "Expected ${#EXPECTED_AGENTS[@]} agents, found $AGENT_COUNT"
+fi
+
+# Get list of agent names
+AGENT_NAMES=$(python3 -c "import json; print(' '.join(sorted(json.load(open('$AGENTS_FILE')).keys())))")
+
+echo ""
+echo "Checking agent list..."
+
+# Check for missing agents
+for expected_agent in "${EXPECTED_AGENTS[@]}"; do
+  if echo "$AGENT_NAMES" | grep -qw "$expected_agent"; then
+    success "Agent '$expected_agent' found"
+  else
+    error "Required agent '$expected_agent' not found"
+  fi
+done

-  echo -e "${GREEN}✅ agents.json is valid JSON${NC}"
-  return 0
-}
+# Check for unexpected agents
+for agent_name in $AGENT_NAMES; do
+  if [[ ! " ${EXPECTED_AGENTS[@]} " =~ " ${agent_name} " ]]; then
+    warning "Unexpected agent '$agent_name' found (not in expected list)"
+  fi
+done

-check_required_fields() {
-  local agents_file="$1"
-  local agent_name="$2"
-  local agent_data="$3"
+echo ""
+echo "Checking agent fields and modes..."

-  local required_fields=("description" "mode" "model" "prompt" "permission")
-  local missing_fields=()
+# Validate each agent
+for agent_name in "${EXPECTED_AGENTS[@]}"; do
+  echo -n "  $agent_name: "

-  for field in "${required_fields[@]}"; do
-    if ! echo "$agent_data" | python3 -c "import sys, json; data = json.load(sys.stdin); exit(0 if '$field' in data else 1)" 2>/dev/null; then
+  # Check required fields
+  missing_fields=()
+  for field in "${REQUIRED_FIELDS[@]}"; do
+    if ! python3 -c "import json; data=json.load(open('$AGENTS_FILE')); print(data.get('$agent_name').get('$field', ''))" 2>/dev/null | grep -q .; then
       missing_fields+=("$field")
     fi
   done

   if [[ ${#missing_fields[@]} -gt 0 ]]; then
-    echo -e "  ${RED}❌ Missing required fields for '$agent_name': ${missing_fields[*]}${NC}"
-    return 1
-  fi
-
-  echo -e "  ${GREEN}✅ '$agent_name' has all required fields${NC}"
-  return 0
-}
-
-check_prompt_file() {
-  local agent_name="$1"
-  local prompt_ref="$2"
-
-  # Extract filename from {file:./prompts/filename}
-  if [[ ! $prompt_ref =~ \{file:./prompts/([^}]+)\} ]]; then
-    echo -e "  ${RED}❌ '$agent_name': Invalid prompt reference format: $prompt_ref${NC}"
-    return 1
-  fi
-
-  local prompt_file="prompts/${BASH_REMATCH[1]}"
-
-  if [[ ! -f "$REPO_ROOT/$prompt_file" ]]; then
-    echo -e "  ${RED}❌ '$agent_name': Prompt file not found: $prompt_file${NC}"
-    return 1
-  fi
-
-  if [[ ! -s "$REPO_ROOT/$prompt_file" ]]; then
-    echo -e "  ${RED}❌ '$agent_name': Prompt file is empty: $prompt_file${NC}"
-    return 1
-  fi
-
-  echo -e "  ${GREEN}✅ '$agent_name': Prompt file exists and is non-empty ($prompt_file)${NC}"
-  return 0
-}
-
-validate_agents() {
-  local agents_file="$REPO_ROOT/agents/agents.json"
-
-  echo -e "${YELLOW}Validating agents.json...${NC}"
-  echo ""
-
-  if [[ ! -f "$agents_file" ]]; then
-    echo -e "${RED}❌ agents.json not found at $agents_file${NC}"
-    exit 1
-  fi
-
-  check_json_valid "$agents_file" || exit 1
-
-  local agent_names
-  agent_names=$(python3 -c "import json; data = json.load(open('$agents_file')); print('\n'.join(data.keys()))")
-
-  local failed=0
-
-  while IFS= read -r agent_name; do
-    [[ -z "$agent_name" ]] && continue
-
-    echo -n "  Checking '$agent_name': "
-
-    local agent_data
-    agent_data=$(python3 -c "import json; data = json.load(open('$agents_file')); print(json.dumps(data['$agent_name']))")
-
-    if ! check_required_fields "$agents_file" "$agent_name" "$agent_data"; then
-      ((failed++)) || true
+    error "Missing required fields: ${missing_fields[*]}"
+    continue
   fi

-    local prompt_ref
-    prompt_ref=$(python3 -c "import json, sys; data = json.load(open('$agents_file')); print(data['$agent_name'].get('prompt', ''))")
+  # Get mode value
+  mode=$(python3 -c "import json; print(json.load(open('$AGENTS_FILE'))['$agent_name']['mode'])")

-    if ! check_prompt_file "$agent_name" "$prompt_ref"; then
-      ((failed++)) || true
-    fi
-
-  done <<< "$agent_names"
-
-  echo ""
-
-  if [[ $failed -eq 0 ]]; then
-    echo -e "${GREEN}✅ All agents validated successfully!${NC}"
-    exit 0
+  # Validate mode
+  if [[ " ${PRIMARY_AGENTS[@]} " =~ " ${agent_name} " ]]; then
+    if [[ "$mode" == "primary" ]]; then
+      success "Mode: $mode (valid)"
     else
-    echo -e "${RED}❌ $failed agent(s) failed validation${NC}"
-    exit 1
+      error "Expected mode 'primary' for agent '$agent_name', found '$mode'"
     fi
-}
+  elif [[ " ${SUBAGENTS[@]} " =~ " ${agent_name} " ]]; then
+    if [[ "$mode" == "subagent" ]]; then
+      success "Mode: $mode (valid)"
+    else
+      error "Expected mode 'subagent' for agent '$agent_name', found '$mode'"
+    fi
+  fi
+done

-# Main
-case "${1:-}" in
-  --help|-h)
-    usage
+echo ""
+echo "Checking prompt files..."
+
+# Validate prompt file references
+for agent_name in "${EXPECTED_AGENTS[@]}"; do
+  # Extract prompt file path from agent config
+  prompt_ref=$(python3 -c "import json; print(json.load(open('$AGENTS_FILE'))['$agent_name']['prompt'])")
+
+  # Parse prompt reference: {file:./prompts/<name>.txt}
+  if [[ "$prompt_ref" =~ \{file:(\./prompts/[^}]+)\} ]]; then
+    prompt_file="${BASH_REMATCH[1]}"
+    prompt_path="$REPO_ROOT/${prompt_file#./}"
+
+    # Check if prompt file exists
+    if [[ -f "$prompt_path" ]]; then
+      # Check if prompt file is non-empty
+      if [[ -s "$prompt_path" ]]; then
+        success "Prompt file exists and non-empty: $prompt_file"
+      else
+        error "Prompt file is empty: $prompt_file"
+      fi
+    else
+      error "Prompt file not found: $prompt_file"
+    fi
+  else
+    error "Invalid prompt reference format for agent '$agent_name': $prompt_ref"
+  fi
+done
+
+echo ""
+if [[ $error_count -eq 0 ]]; then
+  echo -e "${GREEN}All validations passed!${NC}"
+  exit 0
-    ;;
-  "")
-    validate_agents
-    ;;
-  *)
-    echo -e "${RED}Unknown option: $1${NC}"
-    echo ""
-    usage
+else
+  echo -e "${RED}$error_count validation error(s) found${NC}"
   exit 1
-    ;;
-esac
+fi
```
skills/basecamp/SKILL.md (new file)
@@ -0,0 +1,315 @@
---
name: basecamp
description: "Use when: (1) Managing Basecamp projects, (2) Working with Basecamp todos and tasks, (3) Reading/updating message boards and campfire, (4) Managing card tables (kanban), (5) Handling email forwards/inbox, (6) Setting up webhooks for automation. Triggers: 'Basecamp', 'project', 'todo', 'card table', 'campfire', 'message board', 'webhook', 'inbox', 'email forwards'."
compatibility: opencode
---

# Basecamp

Basecamp 3 project management integration via MCP server. Provides comprehensive access to projects, todos, messages, card tables (kanban), campfire, inbox, documents, and webhooks.

## Core Workflows

### Finding Projects and Todos

**List all projects:**
```bash
# Get all accessible Basecamp projects
get_projects
```

**Get project details:**
```bash
# Get specific project information including status, tools, and access level
get_project --project_id <id>
```

**Explore todos:**
```bash
# Get all todo lists in a project
get_todolists --project_id <id>

# Get all todos from a specific todo list (handles pagination automatically)
get_todos --recording_id <todo_list_id>

# Search across projects for todos/messages containing keywords
search_basecamp --query <search_term>
```

### Managing Card Tables (Kanban)

**Card tables** are Basecamp's kanban-style workflow management tool.

**Explore card table:**
```bash
# Get card table for a project
get_card_table --project_id <id>

# Get all columns in a card table
get_columns --card_table_id <id>

# Get all cards in a specific column
get_cards --column_id <id>
```

**Manage columns:**
```bash
# Create new column (e.g., "In Progress", "Done")
create_column --card_table_id <id> --title "Column Name"

# Update column title
update_column --column_id <id> --title "New Title"

# Move column to a different position
move_column --column_id <id> --position 3

# Update column color
update_column_color --column_id <id> --color "red"

# Put column on hold (freeze work)
put_column_on_hold --column_id <id>

# Remove hold from column (unfreeze work)
remove_column_hold --column_id <id>
```

**Manage cards:**
```bash
# Create new card in a column
create_card --column_id <id> --title "Task Name" --content "Description"

# Update card details
update_card --card_id <id> --title "Updated Title" --content "New content"

# Move card to a different column
move_card --card_id <id> --to_column_id <new_column_id>

# Mark card as complete
complete_card --card_id <id>

# Mark card as incomplete
uncomplete_card --card_id <id>
```

**Manage card steps (sub-tasks):**
```bash
# Get all steps for a card
get_card_steps --card_id <id>

# Create new step
create_card_step --card_id <id> --content "Sub-task description"

# Update step
update_card_step --step_id <id> --content "Updated description"

# Delete step
delete_card_step --step_id <id>

# Mark step as complete
complete_card_step --step_id <id>

# Mark step as incomplete
uncomplete_card_step --step_id <id>
```
### Working with Messages and Campfire

**Message board:**
```bash
# Get message board for a project
get_message_board --project_id <id>

# Get all messages from a project
get_messages --project_id <id>

# Get specific message
get_message --message_id <id>
```

**Campfire (team chat):**
```bash
# Get recent campfire lines (messages)
get_campfire_lines --campfire_id <id>
```

**Comments:**
```bash
# Get comments for any Basecamp item (message, todo, card, etc.)
get_comments --recording_id <id>

# Create a comment
create_comment --recording_id <id> --content "Your comment"
```

### Managing Inbox (Email Forwards)

**Inbox** handles email forwarding to Basecamp projects.

**Explore inbox:**
```bash
# Get inbox for a project (email forwards container)
get_inbox --project_id <id>

# Get all forwarded emails from a project's inbox
get_forwards --project_id <id>

# Get specific forwarded email
get_forward --forward_id <id>

# Get all replies to a forwarded email
get_inbox_replies --forward_id <id>

# Get specific reply
get_inbox_reply --reply_id <id>
```

**Manage forwards:**
```bash
# Move forwarded email to trash
trash_forward --forward_id <id>
```

### Documents

**Manage documents:**
```bash
# List documents in a vault
get_documents --vault_id <id>

# Get specific document
get_document --document_id <id>

# Create new document
create_document --vault_id <id> --title "Document Title" --content "Document content"

# Update document
update_document --document_id <id> --title "Updated Title" --content "New content"

# Move document to trash
trash_document --document_id <id>
```

### Webhooks and Automation

**Webhooks** enable automation by triggering external services on Basecamp events.

**Manage webhooks:**
```bash
# List webhooks for a project
get_webhooks --project_id <id>

# Create webhook
create_webhook --project_id <id> --callback_url "https://your-service.com/webhook" --types "TodoCreated,TodoCompleted"

# Delete webhook
delete_webhook --webhook_id <id>
```

### Daily Check-ins

**Project check-ins:**
```bash
# Get daily check-in questions for a project
get_daily_check_ins --project_id <id>

# Get answers to daily check-in questions
get_question_answers --question_id <id>
```

### Attachments and Events

**Upload and track:**
```bash
# Upload file as attachment
create_attachment --recording_id <id> --file_path "/path/to/file"

# Get events for a recording
get_events --recording_id <id>
```

## Integration with Other Skills

### Hermes (Work Communication)

Hermes loads this skill when working with Basecamp projects. Common workflows:

| User Request | Hermes Action | Basecamp Tools Used |
|--------------|---------------|---------------------|
| "Create a task in Marketing project" | Create card/todo | `create_card`, `get_columns`, `create_column` |
| "Check project updates" | Read messages/campfire | `get_messages`, `get_campfire_lines`, `get_comments` |
| "Update my tasks" | Move cards, update status | `move_card`, `complete_card`, `update_card` |
| "Add comment to discussion" | Post comment | `create_comment`, `get_comments` |
| "Review project inbox" | Check email forwards | `get_inbox`, `get_forwards`, `get_inbox_replies` |
### Workflow Patterns

**Project setup:**
1. Use `get_projects` to find existing projects
2. Use `get_project` to verify project details
3. Use `get_todolists` or `get_card_table` to understand project structure

**Task management:**
1. Use `get_todolists` or `get_columns` to find the appropriate location
2. Use `create_card` or todo creation to add work
3. Use `move_card` and `complete_card` to update status
4. Use `get_card_steps` and `create_card_step` for sub-task breakdown

**Communication:**
1. Use `get_messages` or `get_campfire_lines` to read discussions
2. Use `create_comment` to contribute to existing items
3. Use `search_basecamp` to find relevant content

**Automation:**
1. Use `get_webhooks` to check existing integrations
2. Use `create_webhook` to set up external notifications
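The task-management steps above can be sketched end to end (all IDs, titles, and content values are placeholders, not real Basecamp data):

```bash
# 1. Locate the column to work in
get_card_table --project_id 12345
get_columns --card_table_id 67890

# 2. Add the work item
create_card --column_id 111 --title "Draft Q3 report" --content "First pass"

# 3. Break it down and update status as work proceeds
create_card_step --card_id 222 --content "Collect metrics"
move_card --card_id 222 --to_column_id 333
complete_card --card_id 222
```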
|
||||
## Tool Organization by Category

**Projects & Lists:**

- `get_projects`, `get_project`, `get_todolists`, `get_todos`, `search_basecamp`

**Card Table (Kanban):**

- `get_card_table`, `get_columns`, `get_column`, `create_column`, `update_column`, `move_column`, `update_column_color`, `put_column_on_hold`, `remove_column_hold`, `watch_column`, `unwatch_column`, `get_cards`, `get_card`, `create_card`, `update_card`, `move_card`, `complete_card`, `uncomplete_card`, `get_card_steps`, `create_card_step`, `get_card_step`, `update_card_step`, `delete_card_step`, `complete_card_step`, `uncomplete_card_step`

**Messages & Communication:**

- `get_message_board`, `get_messages`, `get_message`, `get_campfire_lines`, `get_comments`, `create_comment`

**Inbox (Email Forwards):**

- `get_inbox`, `get_forwards`, `get_forward`, `get_inbox_replies`, `get_inbox_reply`, `trash_forward`

**Documents:**

- `get_documents`, `get_document`, `create_document`, `update_document`, `trash_document`

**Webhooks:**

- `get_webhooks`, `create_webhook`, `delete_webhook`

**Other:**

- `get_daily_check_ins`, `get_question_answers`, `create_attachment`, `get_events`

## Common Queries

**Finding the right project:**

```bash
# Use search to find projects by keyword
search_basecamp --query "marketing"

# Then inspect a specific project
get_project --project_id <id>
```

**Understanding project structure:**

```bash
# Check which tools are available in a project
get_project --project_id <id>
# The project response lists its enabled tools: message_board, campfire, card_table, todolists, etc.
```

**Bulk operations:**

```bash
# Get all todos in a todo list (pagination is handled automatically)
get_todos --recording_id <todo_list_id>
# Returns all pages of results

# Get all cards across all columns
get_columns --card_table_id <id>
get_cards --column_id <id>  # Repeat for each column
```
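
The "repeat for each column" step above can be sketched as a small shell loop. `get_columns` and `get_cards` are stand-ins for the MCP tools; the stubs below only exist so the sketch is self-contained, and the real tools return richer JSON rather than bare ids.

```shell
# Sketch: collect every card in a card table by walking its columns.
# get_columns / get_cards are illustrative stubs for the MCP tools above,
# NOT the real implementations.
get_columns() { printf '11\n12\n'; }            # stub: returns column ids
get_cards()   { printf 'card-in-%s\n' "$2"; }   # stub: returns cards for --column_id <id>

all_cards=""
for column_id in $(get_columns --card_table_id 42); do
  all_cards+="$(get_cards --column_id "$column_id")"$'\n'
done
printf '%s' "$all_cards"
```

In practice you would parse the column ids out of the `get_columns` JSON response before looping.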

skills/doc-translator/SKILL.md (new file, 262 lines)

---
name: doc-translator
description: "Translates external documentation websites to specified language(s) and publishes to Outline wiki. Use when: (1) Translating SaaS/product documentation into German or Czech, (2) Publishing translated docs to Outline wiki, (3) Re-hosting external images to Outline. Triggers: 'translate docs', 'translate documentation', 'translate to German', 'translate to Czech', 'publish to wiki', 'doc translation', 'TEEM translation'."
compatibility: opencode
---

# Doc Translator

Translate external documentation websites into German (DE) and/or Czech (CZ), then publish them to the company Outline wiki at `https://wiki.az-gruppe.com`. All images are re-hosted on Outline, and UI terms use TEEM format.

## Core Workflow

### 1. Validate Input & Clarify

Before starting, confirm:

1. **URL accessibility** - Check with `curl -sI <URL>` for HTTP 200
2. **Target language(s)** - Always ask explicitly using the `question` tool:

   ```
   question: "Which language(s) should I translate to?"
   options: ["German (DE)", "Czech (CZ)", "Both (DE + CZ)"]
   ```

3. **Scope** - If the URL is an index page with multiple sub-pages, ask:

   ```
   question: "This page links to multiple sub-pages. What should I translate?"
   options: ["This page only", "This page + all linked sub-pages", "Let me pick specific pages"]
   ```

4. **Target collection** - Use `Outline_list_collections` to show the available collections, then ask which one to publish to

**CRITICAL:** NEVER auto-select a collection. Always present the collection list to the user and wait for an explicit selection before proceeding with document creation.

If the URL fetch fails, use `question` to ask for an alternative URL or a manual content paste.

### 2. Fetch & Parse Content

Use the `webfetch` tool to retrieve page content:

```
webfetch(url="<URL>", format="markdown")
```

From the result:

- Extract the main content body (ignore navigation, footers, sidebars, cookie banners)
- Preserve document structure (headings, lists, tables, code blocks)
- Collect all image URLs into a list for Step 3
- Note any embedded videos or interactive elements (these cannot be translated)

For multi-page docs, repeat for each page.

### 3. Download Images

Download all images to a temporary directory:

```bash
mkdir -p /tmp/doc-images

# For each image URL:
curl -sL "$IMAGE_URL" -o "/tmp/doc-images/$(basename "$IMAGE_URL")"
```

Track a mapping of `original_url -> local_filename -> outline_attachment_url`.

If an image download fails, log it and continue. Use a placeholder in the final document:

```markdown
> **[Image unavailable]** Original: IMAGE_URL
```
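
The download-and-track step can be wrapped in a small helper. This is a minimal sketch: the `mapping.tsv` filename and its tab-separated two-column layout are assumptions for illustration, not part of the skill.

```shell
# download_image URL DEST_DIR MAPPING_FILE
# Downloads one image and appends "original_url<TAB>local_file" to the
# mapping file; on failure it only logs, so the calling loop can continue.
download_image() {
  local url="$1" dir="$2" mapping="$3"
  local file="$dir/$(basename "$url")"
  if curl -sfL "$url" -o "$file"; then
    printf '%s\t%s\n' "$url" "$file" >> "$mapping"
  else
    echo "FAILED: $url" >&2
  fi
}
```

Failed URLs end up only on stderr, which is exactly the list you need for the placeholder blocks and the completion report.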
### 4. Upload Images to Outline

MCP-outline does not support attachment creation, so use the bundled script for image uploads:

```bash
# Upload with an optional document association
bash scripts/upload_image_to_outline.sh "/tmp/doc-images/screenshot.png" "$DOCUMENT_ID"

# Upload without a document (attach later)
bash scripts/upload_image_to_outline.sh "/tmp/doc-images/screenshot.png"
```

The script handles API-key loading from `/run/agenix/outline-key`, content-type detection, the two-step presigned POST flow, and retries. Output is JSON: `{"success": true, "attachment_url": "https://..."}`.

Replace image references in the translated markdown with the returned `attachment_url`:

```markdown
![Screenshot description](https://wiki.az-gruppe.com/api/attachments.redirect?id=xxx)
```

For all other Outline operations (documents, collections, search), use the MCP tools (`Outline_*`).
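
The reference replacement can be scripted from the mapping tracked in Step 3. A minimal sketch, assuming a tab-separated mapping file with `original_url` and `attachment_url` columns (the file name and format are illustrative) and GNU `sed`:

```shell
# rewrite_image_urls DOC MAPPING_FILE
# Replaces each original image URL in DOC with its Outline attachment URL.
rewrite_image_urls() {
  local doc="$1" mapping="$2"
  while IFS=$'\t' read -r original attachment; do
    # '|' as the sed delimiter avoids clashing with '/' in URLs
    sed -i "s|$original|$attachment|g" "$doc"
  done < "$mapping"
}
```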
### 5. Translate with TEEM Format

Translate the entire document into each target language, applying TEEM format to UI elements.

#### Address Form (CRITICAL)

**Always use the informal "you" form** in ALL target languages:
- **German**: Use **"Du"** (informal), NEVER "Sie" (formal)
- **Czech**: Use **"ty"** (informal), NEVER "vy" (formal)
- This applies to all translations: documentation should feel approachable and direct

#### Infobox / Callout Formatting

Source documentation often uses admonitions, callouts, or info boxes (e.g., GitHub-style `> [!NOTE]`, Docusaurus `:::note`, or custom HTML boxes). **Convert ALL such elements** to Outline's callout syntax:

```markdown
:::tip
Tip or best practice content here.

:::

:::info
Informational content here.

:::

:::warning
Warning or caution content here.

:::

:::success
Success message or positive outcome here.

:::
```

**Mapping rules** (source → Outline):

| Source pattern | Outline syntax |
|---|---|
| Note, Info, Information | `:::info` |
| Tip, Hint, Best Practice | `:::tip` |
| Warning, Caution, Danger, Important | `:::warning` |
| Success, Done, Check | `:::success` |

**CRITICAL formatting**: The closing `:::` MUST be on its own line with an empty line before it. Content goes directly after the opening line.

#### TEEM Rules

**Format:** `**English UI Term** (Translation)`

**Apply TEEM to:**
- Button labels
- Menu items and navigation tabs
- Form field labels
- Dialog/modal titles
- Toolbar icons with text
- Status messages from the app
- **Headings containing UI terms** (example: "## [Adding a new To-do]" becomes "## [Ein neues **To-do** (Aufgabe) hinzufügen]")

**Translate normally (no TEEM):**
- Your own explanatory text
- Document headings you create (that don't contain UI terms)
- General descriptions and conceptual explanations
- Code blocks and technical identifiers

#### German Examples

```markdown
Click **Settings** (Einstellungen) to open preferences.
Navigate to **Dashboard** (Übersicht) > **Reports** (Berichte).
Press the **Submit** (Absenden) button.
In the **File** (Datei) menu, select **Export** (Exportieren).

# Heading with UI term: Create a new **To-do** (Aufgabe)
## [Adding a new **To-do** (Aufgabe)]
```

#### Czech Examples

```markdown
Click **Settings** (Nastavení) to open preferences.
Navigate to **Dashboard** (Přehled) > **Reports** (Sestavy).
Press the **Submit** (Odeslat) button.
In the **File** (Soubor) menu, select **Export** (Exportovat).

# Heading with UI term: Create a new **To-do** (Úkol)
## [Adding a new **To-do** (Úkol)]
```

#### Ambiguous UI Terms

If a UI term has multiple valid translations depending on context, use the `question` tool:

```
question: "The term 'Board' appears in the UI. Which translation fits this context?"
options: ["Pinnwand (pinboard/bulletin)", "Tafel (whiteboard)", "Gremium (committee)"]
```

### 6. Publish to Outline

Use mcp-outline tools to publish:

1. **Find or create a collection:**
   - `Outline_list_collections` to find the target collection
   - `Outline_create_collection` if needed

2. **Create the document:**
   - `Outline_create_document` with the translated markdown content
   - Set `publish: true` for immediate visibility
   - Use `parent_document_id` if nesting under an existing doc

3. **For multi-language:** Create one document per language, clearly titled:
   - `[Product Name] - Dokumentation (DE)`
   - `[Product Name] - Dokumentace (CZ)`

## Error Handling

| Issue | Action |
|-------|--------|
| URL fetch fails | Use `question` to ask for an alternative URL or a manual paste |
| Image download fails | Continue with a placeholder; note it in the completion report |
| Outline API error (attachments) | Script retries 3x with backoff; on final failure, save markdown to `/tmp/doc-translator-backup-TIMESTAMP.md` and report the error |
| Outline API error (document) | Save markdown to `/tmp/doc-translator-backup-TIMESTAMP.md` and report the error |
| Ambiguous UI term | Use `question` to ask the user for the correct translation |
| Large document (>5000 words) | Ask the user whether splitting into multiple docs is preferred |
| Multi-page docs | Ask the user about scope before proceeding |
| Rate limiting | Wait and retry with exponential backoff |

If the Outline publish fails, always save the translated markdown locally as a backup before reporting the error.

## Completion Report

After each translation, output:

```
Translation Complete

Documents Created:
- DE: [Document Title] - ID: [xxx] - URL: https://wiki.az-gruppe.com/doc/[slug]
- CZ: [Document Title] - ID: [xxx] - URL: https://wiki.az-gruppe.com/doc/[slug]

Images Processed: X of Y successfully uploaded

Items Needing Review:
- [Any sections with complex screenshots]
- [Any failed image uploads with original URLs]
- [Any unclear UI terms that were best-guessed]
```

## Language Codes

| Code | Language | Native Name |
|------|----------|-------------|
| DE | German | Deutsch |
| CZ | Czech | Čeština |

## Environment Variables

| Variable | Purpose | Source |
|----------|---------|--------|
| `OUTLINE_API_KEY` | Bearer token for the wiki.az-gruppe.com API | Auto-loaded from `/run/agenix/outline-key` by the upload script |

## Integration with Other Skills

| Need | Skill | When |
|------|-------|------|
| Wiki document management | outline | Managing existing translated docs |
| Browser-based content extraction | playwright / dev-browser | When webfetch cannot access the content (login-required pages) |

skills/doc-translator/scripts/upload_image_to_outline.sh (new executable file, 116 lines)

#!/usr/bin/env bash
# Upload an image to Outline via presigned POST (two-step flow)
#
# Usage:
#   upload_image_to_outline.sh <image_path> [document_id]
#
# Environment:
#   OUTLINE_API_KEY - Bearer token for wiki.az-gruppe.com API
#                     Auto-loaded from /run/agenix/outline-key if not set
#
# Output (JSON to stdout):
#   {"success": true, "attachment_url": "https://..."}
# Error (JSON to stderr):
#   {"success": false, "error": "error message"}

set -euo pipefail

MAX_RETRIES=3
RETRY_DELAY=2

if [ $# -lt 1 ] || [ $# -gt 2 ]; then
  echo '{"success": false, "error": "Usage: upload_image_to_outline.sh <image_path> [document_id]"}' >&2
  exit 1
fi

IMAGE_PATH="$1"
DOCUMENT_ID="${2:-}"

if [ -z "${OUTLINE_API_KEY:-}" ]; then
  if [ -f /run/agenix/outline-key ]; then
    OUTLINE_API_KEY=$(cat /run/agenix/outline-key)
    export OUTLINE_API_KEY
  else
    echo '{"success": false, "error": "OUTLINE_API_KEY not set and /run/agenix/outline-key not found"}' >&2
    exit 1
  fi
fi

# Check that the file exists
if [ ! -f "$IMAGE_PATH" ]; then
  echo "{\"success\": false, \"error\": \"Image file not found: $IMAGE_PATH\"}" >&2
  exit 1
fi

# Extract image name and extension
IMAGE_NAME="$(basename "$IMAGE_PATH")"
EXTENSION="${IMAGE_NAME##*.}"

# Detect content type by extension
case "${EXTENSION,,}" in
  png) CONTENT_TYPE="image/png" ;;
  jpg|jpeg) CONTENT_TYPE="image/jpeg" ;;
  gif) CONTENT_TYPE="image/gif" ;;
  svg) CONTENT_TYPE="image/svg+xml" ;;
  webp) CONTENT_TYPE="image/webp" ;;
  *) CONTENT_TYPE="application/octet-stream" ;;
esac

# `|| true` keeps `set -e` from aborting before the empty-size check below
FILESIZE=$(stat -c%s "$IMAGE_PATH" 2>/dev/null || stat -f%z "$IMAGE_PATH" 2>/dev/null || true)

if [ -z "$FILESIZE" ]; then
  echo "{\"success\": false, \"error\": \"Failed to get file size for: $IMAGE_PATH\"}" >&2
  exit 1
fi

REQUEST_BODY=$(jq -n \
  --arg name "$IMAGE_NAME" \
  --arg contentType "$CONTENT_TYPE" \
  --argjson size "$FILESIZE" \
  --arg documentId "$DOCUMENT_ID" \
  'if $documentId == "" then
    {name: $name, contentType: $contentType, size: $size}
  else
    {name: $name, contentType: $contentType, size: $size, documentId: $documentId}
  end')

# Step 1: Create the attachment record
RESPONSE=$(curl -s -X POST "https://wiki.az-gruppe.com/api/attachments.create" \
  -H "Authorization: Bearer $OUTLINE_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$REQUEST_BODY")

UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.data.uploadUrl // empty')
ATTACHMENT_URL=$(echo "$RESPONSE" | jq -r '.data.attachment.url // empty')

if [ -z "$UPLOAD_URL" ]; then
  ERROR_MSG=$(echo "$RESPONSE" | jq -r '.message // "Failed to create attachment"')
  echo "{\"success\": false, \"error\": \"$ERROR_MSG\", \"response\": $(echo "$RESPONSE" | jq -c .)}" >&2
  exit 1
fi

FORM_ARGS=()
while IFS= read -r line; do
  key=$(echo "$line" | jq -r '.key')
  value=$(echo "$line" | jq -r '.value')
  FORM_ARGS+=(-F "$key=$value")
done < <(echo "$RESPONSE" | jq -c '.data.form | to_entries[]')

# Step 2: Upload the binary to the presigned URL, with retries
for attempt in $(seq 1 "$MAX_RETRIES"); do
  HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$UPLOAD_URL" \
    "${FORM_ARGS[@]}" \
    -F "file=@$IMAGE_PATH")

  if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ] || [ "$HTTP_CODE" = "204" ]; then
    echo "{\"success\": true, \"attachment_url\": \"$ATTACHMENT_URL\"}"
    exit 0
  fi

  if [ "$attempt" -lt "$MAX_RETRIES" ]; then
    sleep "$((RETRY_DELAY * attempt))"
  fi
done

echo "{\"success\": false, \"error\": \"Upload failed after $MAX_RETRIES attempts (last HTTP $HTTP_CODE)\"}" >&2
exit 1
@@ -1,266 +1,544 @@

---
name: excalidraw
description: Generate architecture diagrams as .excalidraw files from codebase analysis. Use when the user asks to create architecture diagrams, system diagrams, visualize codebase structure, or generate excalidraw files.
description: "Create Excalidraw diagram JSON files that make visual arguments. Use when: (1) user wants to visualize workflows, architectures, or concepts, (2) creating system diagrams, (3) generating .excalidraw files. Triggers: excalidraw, diagram, visualize, architecture diagram, system diagram."
compatibility: opencode
---

# Excalidraw Diagram Generator
# Excalidraw Diagram Creator

Generate architecture diagrams as `.excalidraw` files directly from codebase analysis.
Generate `.excalidraw` JSON files that **argue visually**, not just display information.

## Customization

**All colors and brand-specific styles live in one file:** `references/color-palette.md`. Read it before generating any diagram and use it as the single source of truth for all color choices: shape fills, strokes, text colors, evidence artifact backgrounds, everything.

To make this skill produce diagrams in your own brand style, edit `color-palette.md`. Everything else in this file is universal design methodology and Excalidraw best practice.

---

## Quick Start
## Core Philosophy

**User just asks:**
```
"Generate an architecture diagram for this project"
"Create an excalidraw diagram of the system"
"Visualize this codebase as an excalidraw file"
```

**Claude Code will:**
1. Analyze the codebase (any language/framework)
2. Identify components, services, databases, APIs
3. Map relationships and data flows
4. Generate valid `.excalidraw` JSON with dynamic IDs and labels

**No prerequisites:** Works without existing diagrams, Terraform, or specific file types.

**Diagrams should ARGUE, not DISPLAY.**

A diagram isn't formatted text. It's a visual argument that shows relationships, causality, and flow that words alone can't express. The shape should BE the meaning.

**The Isomorphism Test**: If you removed all text, would the structure alone communicate the concept? If not, redesign.

**The Education Test**: Could someone learn something concrete from this diagram, or does it just label boxes? A good diagram teaches: it shows actual formats, real event names, concrete examples.

---

## Critical Rules

### 1. NEVER Use Diamond Shapes

Diamond arrow connections are broken in raw Excalidraw JSON. Use styled rectangles instead:

| Semantic Meaning | Rectangle Style |
|------------------|-----------------|
| Orchestrator/Hub | Coral (`#ffa8a8`/`#c92a2a`) + strokeWidth: 3 |
| Decision Point | Orange (`#ffd8a8`/`#e8590c`) + dashed stroke |

### 2. Labels Require TWO Elements

The `label` property does NOT work in raw JSON. Every labeled shape needs:

```json
// 1. Shape with boundElements reference
{
  "id": "my-box",
  "type": "rectangle",
  "boundElements": [{ "type": "text", "id": "my-box-text" }]
}

// 2. Separate text element with containerId
{
  "id": "my-box-text",
  "type": "text",
  "containerId": "my-box",
  "text": "My Label"
}
```

## Depth Assessment (Do This First)

Before designing, determine what level of detail this diagram needs:

### Simple/Conceptual Diagrams

Use abstract shapes when:
- Explaining a mental model or philosophy
- The audience doesn't need technical specifics
- The concept IS the abstraction (e.g., "separation of concerns")

### Comprehensive/Technical Diagrams

Use concrete examples when:
- Diagramming a real system, protocol, or architecture
- The diagram will be used to teach or explain (e.g., a YouTube video)
- The audience needs to understand what things actually look like
- You're showing how multiple technologies integrate

**For technical diagrams, you MUST include evidence artifacts** (see below).

---

## Research Mandate (For Technical Diagrams)

**Before drawing anything technical, research the actual specifications.**

If you're diagramming a protocol, API, or framework:

1. Look up the actual JSON/data formats
2. Find the real event names, method names, or API endpoints
3. Understand how the pieces actually connect
4. Use real terminology, not generic placeholders

Bad: "Protocol" → "Frontend"
Good: "AG-UI streams events (RUN_STARTED, STATE_DELTA, A2UI_UPDATE)" → "CopilotKit renders via createA2UIMessageRenderer()"

**Research makes diagrams accurate AND educational.**

---

## Evidence Artifacts

Evidence artifacts are concrete examples that prove your diagram is accurate and help viewers learn. Include them in technical diagrams.

**Types of evidence artifacts** (choose what's relevant to your diagram):

| Artifact Type | When to Use | How to Render |
|---------------|-------------|---------------|
| **Code snippets** | APIs, integrations, implementation details | Dark rectangle + syntax-colored text (see the color palette for evidence artifact colors) |
| **Data/JSON examples** | Data formats, schemas, payloads | Dark rectangle + colored text (see the color palette) |
| **Event/step sequences** | Protocols, workflows, lifecycles | Timeline pattern (line + dots + labels) |
| **UI mockups** | Showing actual output/results | Nested rectangles mimicking the real UI |
| **Real input content** | Showing what goes IN to a system | Rectangle with sample content visible |
| **API/method names** | Real function calls, endpoints | Use actual names from the docs, not placeholders |

**Example**: For a diagram about a streaming protocol, you might show:
- The actual event names from the spec (not just "Event 1", "Event 2")
- A code snippet showing how to connect
- What the streamed data actually looks like

**Example**: For a diagram about a data transformation pipeline:
- Show sample input data (the actual format, not "Input")
- Show sample output data (the actual format, not "Output")
- Show intermediate states if relevant

The key principle: **show what things actually look like**, not just what they're called.

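As a concrete sketch, a dark code-snippet artifact is just a rectangle plus a bound text element. The ids, coordinates, and colors below are illustrative placeholders (take real colors from `references/color-palette.md`), `fontFamily: 3` selects Excalidraw's code font, and real elements carry additional required fields (`seed`, `version`, stroke properties, etc.) that are omitted here for brevity:

```json
[
  {
    "id": "evidence-json-bg",
    "type": "rectangle",
    "x": 100, "y": 100, "width": 320, "height": 90,
    "backgroundColor": "#1e1e1e",
    "boundElements": [{ "type": "text", "id": "evidence-json-text" }]
  },
  {
    "id": "evidence-json-text",
    "type": "text",
    "containerId": "evidence-json-bg",
    "x": 116, "y": 130,
    "fontFamily": 3,
    "text": "{ \"event\": \"STATE_DELTA\", \"delta\": [...] }"
  }
]
```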
---

## Multi-Zoom Architecture

Comprehensive diagrams operate at multiple zoom levels simultaneously. Think of it like a map that shows both the country borders AND the street names.

### Level 1: Summary Flow

A simplified overview showing the full pipeline or process at a glance. Often placed at the top or bottom of the diagram.

*Example*: `Input → Processing → Output` or `Client → Server → Database`

### Level 2: Section Boundaries

Labeled regions that group related components. These create visual "rooms" that help viewers understand what belongs together.

*Example*: Grouping by responsibility (Backend / Frontend), by phase (Setup / Execution / Cleanup), or by actor (User / System / External)

### Level 3: Detail Inside Sections

Evidence artifacts, code snippets, and concrete examples within each section. This is where the educational value lives.

*Example*: Inside a "Backend" section, you might show the actual API response format, not just a box labeled "API Response"

**For comprehensive diagrams, aim to include all three levels.** The summary gives context, the sections organize, and the details teach.

### Bad vs Good

| Bad (Displaying) | Good (Arguing) |
|------------------|----------------|
| 5 equal boxes with labels | Each concept has a shape that mirrors its behavior |
| Card grid layout | Visual structure matches conceptual structure |
| Icons decorating text | Shapes that ARE the meaning |
| Same container for everything | Distinct visual vocabulary per concept |
| Everything in a box | Free-floating text with selective containers |

### Simple vs Comprehensive (Know Which You Need)

| Simple Diagram | Comprehensive Diagram |
|----------------|----------------------|
| Generic labels: "Input" → "Process" → "Output" | Specific: shows what the input/output actually looks like |
| Named boxes: "API", "Database", "Client" | Named boxes + examples of actual requests/responses |
| "Events" or "Messages" label | Timeline with real event/message names from the spec |
| "UI" or "Dashboard" rectangle | Mockup showing actual UI elements and content |
| ~30 seconds to explain | ~2-3 minutes of teaching content |
| Viewer learns the structure | Viewer learns the structure AND the details |

**Simple diagrams** are fine for abstract concepts, quick overviews, or when the audience already knows the details. **Comprehensive diagrams** are needed for technical architectures, tutorials, educational content, or when you want the diagram itself to teach.

---

## Container vs. Free-Floating Text

**Not every piece of text needs a shape around it.** Default to free-floating text. Add containers only when they serve a purpose.

| Use a Container When... | Use Free-Floating Text When... |
|------------------------|-------------------------------|
| It's the focal point of a section | It's a label or description |
| It needs visual grouping with other elements | It's supporting detail or metadata |
| Arrows need to connect to it | It describes something nearby |
| The shape itself carries meaning (decision diamond, etc.) | It's a section title, subtitle, or annotation |
| It represents a distinct "thing" in the system | It's a section title, subtitle, or annotation |

**Typography as hierarchy**: Use font size, weight, and color to create visual hierarchy without boxes. A 28px title doesn't need a rectangle around it.

**The container test**: For each boxed element, ask "Would this work as free-floating text?" If yes, remove the container.

---

## Design Process (Do This BEFORE Generating JSON)

### Step 0: Assess Depth Required

Before anything else, determine whether this needs to be:
- **Simple/Conceptual**: Abstract shapes, labels, relationships (mental models, philosophies)
- **Comprehensive/Technical**: Concrete examples, code snippets, real data (systems, architectures, tutorials)

**If comprehensive**: Do the research first. Look up the actual specs, formats, event names, and APIs.

### Step 1: Understand Deeply

Read the content. For each concept, ask:
- What does this concept **DO**? (not what IS it)
- What relationships exist between concepts?
- What's the core transformation or flow?
- **What would someone need to SEE to understand this?** (not just read about)

### Step 2: Map Concepts to Patterns

For each concept, find the visual pattern that mirrors its behavior:

| If the concept... | Use this pattern |
|-------------------|------------------|
| Spawns multiple outputs | **Fan-out** (radial arrows from center) |
| Combines inputs into one | **Convergence** (funnel, arrows merging) |
| Has hierarchy/nesting | **Tree** (lines + free-floating text) |
| Is a sequence of steps | **Timeline** (line + dots + free-floating labels) |
| Loops or improves continuously | **Spiral/Cycle** (arrow returning to start) |
| Is an abstract state or context | **Cloud** (overlapping ellipses) |
| Transforms input to output | **Assembly line** (before → process → after) |
| Compares two things | **Side-by-side** (parallel with contrast) |
| Separates into phases | **Gap/Break** (visual separation between sections) |

### Step 3: Ensure Variety

For multi-concept diagrams, **each major concept must use a different visual pattern**. No uniform cards or grids.

### Step 4: Sketch the Flow

Before writing JSON, mentally trace how the eye moves through the diagram. There should be a clear visual story.

### Step 5: Generate JSON

Only now create the Excalidraw elements. **See below for how to handle large diagrams.**

### Step 6: Render & Validate (MANDATORY)

After generating the JSON, you MUST run the render-view-fix loop until the diagram looks right. This is not optional; see the **Render & Validate** section below for the full process.

---

## Large / Comprehensive Diagram Strategy

**For comprehensive or technical diagrams, you MUST build the JSON one section at a time.** Do NOT attempt to generate the entire file in a single pass. This is a hard constraint: output token limits mean a comprehensive diagram easily exceeds capacity in one shot, and even if it didn't, generating everything at once leads to worse quality. Section-by-section is better in every way.

### The Section-by-Section Workflow

**Phase 1: Build each section**

1. **Create the base file** with the JSON wrapper (`type`, `version`, `appState`, `files`) and the first section of elements.
2. **Add one section per edit.** Each section gets its own dedicated pass, so take your time with it. Think carefully about the layout, the spacing, and how this section connects to what's already there.
3. **Use descriptive string IDs** (e.g., `"trigger_rect"`, `"arrow_fan_left"`) so cross-section references are readable.
4. **Namespace seeds by section** (e.g., section 1 uses 100xxx, section 2 uses 200xxx) to avoid collisions.
5. **Update cross-section bindings as you go.** When a new section's element needs to bind to an element from a previous section (e.g., an arrow connecting sections), edit the earlier element's `boundElements` array at the same time.

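A cross-section binding can be sketched like this: the arrow's `startBinding`/`endBinding` name the two shapes, and each shape's `boundElements` must list the arrow in return. The ids reuse the descriptive naming convention above (`router_rect` is a hypothetical section-2 id), and coordinates plus the other required element fields are omitted for brevity:

```json
{
  "id": "arrow_s1_to_s2",
  "type": "arrow",
  "startBinding": { "elementId": "trigger_rect", "focus": 0, "gap": 8 },
  "endBinding": { "elementId": "router_rect", "focus": 0, "gap": 8 }
}
```

Both `trigger_rect` (section 1) and `router_rect` (section 2) would also need `{ "type": "arrow", "id": "arrow_s1_to_s2" }` added to their `boundElements` arrays.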
**Phase 2: Review the whole**
|
||||
|
||||
After all sections are in place, read through the complete JSON and check:
|
||||
- Are cross-section arrows bound correctly on both ends?
|
||||
- Is the overall spacing balanced, or are some sections cramped while others have too much whitespace?
|
||||
- Do IDs and bindings all reference elements that actually exist?
|
||||
|
||||
Fix any alignment or binding issues before rendering.
|
||||
|
||||
**Phase 3: Render & validate**
|
||||
|
||||
Now run the render-view-fix loop from the Render & Validate section. This is where you'll catch visual issues that aren't obvious from JSON — overlaps, clipping, imbalanced composition.
|
||||
|
||||
### Section Boundaries
|
||||
|
||||
Plan your sections around natural visual groupings from the diagram plan. A typical large diagram might split into:
|
||||
|
||||
- **Section 1**: Entry point / trigger
|
||||
- **Section 2**: First decision or routing
|
||||
- **Section 3**: Main content (hero section — may be the largest single section)
|
||||
- **Section 4-N**: Remaining phases, outputs, etc.
|
||||
|
||||
Each section should be independently understandable: its elements, internal arrows, and any cross-references to adjacent sections.
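The ID and seed conventions from steps 3-4 can be sketched as a small helper (hypothetical function names; the 100xxx/200xxx scheme is the one suggested above):

```python
def section_seed(section: int, index: int) -> int:
    """Seed namespaced by section: section 1 -> 100000+, section 2 -> 200000+."""
    return section * 100_000 + index

def section_id(section_name: str, element_name: str) -> str:
    """Descriptive string ID, e.g. ('trigger', 'rect') -> 'trigger_rect'."""
    return f"{section_name}_{element_name}"

# Seeds from different sections can never collide:
s1 = {section_seed(1, i) for i in range(1000)}
s2 = {section_seed(2, i) for i in range(1000)}
assert s1.isdisjoint(s2)
```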
|
||||
|
||||
### What NOT to Do
|
||||
|
||||
- **Don't generate the entire diagram in one response.** You will hit the output token limit and produce truncated, broken JSON. Even if the diagram is small enough to fit, splitting into sections produces better results.
|
||||
- **Don't write a Python generator script.** The templating and coordinate math seem helpful but introduce a layer of indirection that makes debugging harder. Hand-crafted JSON with descriptive IDs is more maintainable.
|
||||
|
||||
---
|
||||
|
||||
## Visual Pattern Library
|
||||
|
||||
### Fan-Out (One-to-Many)
|
||||
Central element with arrows radiating to multiple targets. Use for: sources, PRDs, root causes, central hubs.
|
||||
```
|
||||
○
|
||||
↗
|
||||
□ → ○
|
||||
↘
|
||||
○
|
||||
```
|
||||
|
||||
### Convergence (Many-to-One)
|
||||
Multiple inputs merging through arrows to single output. Use for: aggregation, funnels, synthesis.
|
||||
```
|
||||
○ ↘
|
||||
○ → □
|
||||
○ ↗
|
||||
```
|
||||
|
||||
### Tree (Hierarchy)
|
||||
Parent-child branching with connecting lines and free-floating text (no boxes needed). Use for: file systems, org charts, taxonomies.
|
||||
```
|
||||
label
|
||||
├── label
|
||||
│ ├── label
|
||||
│ └── label
|
||||
└── label
|
||||
```
|
||||
Use `line` elements for the trunk and branches, free-floating text for labels.
|
||||
|
||||
### Spiral/Cycle (Continuous Loop)
|
||||
Elements in sequence with arrow returning to start. Use for: feedback loops, iterative processes, evolution.
|
||||
```
|
||||
□ → □
|
||||
↑ ↓
|
||||
□ ← □
|
||||
```
|
||||
|
||||
### Cloud (Abstract State)
|
||||
Overlapping ellipses with varied sizes. Use for: context, memory, conversations, mental states.
|
||||
|
||||
### Assembly Line (Transformation)
|
||||
Input → Process Box → Output with clear before/after. Use for: transformations, processing, conversion.
|
||||
```
|
||||
○○○ → [PROCESS] → □□□
|
||||
chaos order
|
||||
```
|
||||
|
||||
### Side-by-Side (Comparison)
|
||||
Two parallel structures with visual contrast. Use for: before/after, options, trade-offs.
|
||||
|
||||
### Gap/Break (Separation)
|
||||
Visual whitespace or barrier between sections. Use for: phase changes, context resets, boundaries.
|
||||
|
||||
### Lines as Structure
|
||||
Use lines (type: `line`, not arrows) as primary structural elements instead of boxes:
|
||||
- **Timelines**: Vertical or horizontal line with small dots (10-20px ellipses) at intervals, free-floating labels beside each dot
|
||||
- **Tree structures**: Vertical trunk line + horizontal branch lines, with free-floating text labels (no boxes needed)
|
||||
- **Dividers**: Thin dashed lines to separate sections
|
||||
- **Flow spines**: A central line that elements relate to, rather than connecting boxes
|
||||
|
||||
```
|
||||
Timeline: Tree:
|
||||
●─── Label 1 │
|
||||
│ ├── item
|
||||
●─── Label 2 │ ├── sub
|
||||
│ │ └── sub
|
||||
●─── Label 3 └── item
|
||||
```
|
||||
|
||||
Lines + free-floating text often creates a cleaner result than boxes + contained text.
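As a sketch of the timeline pattern (element fields abbreviated; real Excalidraw elements carry more properties, see `references/element-templates.md`), a vertical timeline built from one trunk line, small dot markers, and free-floating labels:

```python
def timeline_elements(x, y, labels, spacing=120):
    """Build a vertical timeline: one trunk line, then a 14px dot marker and
    one free-floating text label per entry. Fields abbreviated for clarity."""
    height = spacing * (len(labels) - 1)
    elements = [{"type": "line", "x": x, "y": y,
                 "points": [[0, 0], [0, height]], "strokeWidth": 1}]
    for i, label in enumerate(labels):
        dot_y = y + i * spacing
        elements.append({"type": "ellipse", "x": x - 7, "y": dot_y - 7,
                         "width": 14, "height": 14})          # marker dot
        elements.append({"type": "text", "x": x + 20, "y": dot_y - 10,
                         "text": label, "containerId": None})  # free-floating
    return elements

els = timeline_elements(100, 100, ["Label 1", "Label 2", "Label 3"])
```

Three entries yield seven elements (one line, three dots, three labels) and zero containers.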
|
||||
|
||||
---
|
||||
|
||||
## Shape Meaning
|
||||
|
||||
Choose shape based on what it represents—or use no shape at all:
|
||||
|
||||
| Concept Type | Shape | Why |
|
||||
|--------------|-------|-----|
|
||||
| Labels, descriptions, details | **none** (free-floating text) | Typography creates hierarchy |
|
||||
| Section titles, annotations | **none** (free-floating text) | Font size/weight is enough |
|
||||
| Markers on a timeline | small `ellipse` (10-20px) | Visual anchor, not container |
|
||||
| Start, trigger, input | `ellipse` | Soft, origin-like |
|
||||
| End, output, result | `ellipse` | Completion, destination |
|
||||
| Decision, condition | `diamond` | Classic decision symbol |
|
||||
| Process, action, step | `rectangle` | Contained action |
|
||||
| Abstract state, context | overlapping `ellipse` | Fuzzy, cloud-like |
|
||||
| Hierarchy node | lines + text (no boxes) | Structure through lines |
|
||||
|
||||
**Rule**: Default to no container. Add shapes only when they carry meaning. Aim for <30% of text elements to be inside containers.
|
||||
|
||||
---
|
||||
|
||||
## Color as Meaning
|
||||
|
||||
Colors encode information, not decoration. Every color choice should come from `references/color-palette.md` — the semantic shape colors, text hierarchy colors, and evidence artifact colors are all defined there.
|
||||
|
||||
**Key principles:**
|
||||
- Each semantic purpose (start, end, decision, AI, error, etc.) has a specific fill/stroke pair
|
||||
- Free-floating text uses color for hierarchy (titles, subtitles, details — each at a different level)
|
||||
- Evidence artifacts (code snippets, JSON examples) use their own dark background + colored text scheme
|
||||
- Always pair a darker stroke with a lighter fill for contrast
|
||||
|
||||
**Do not invent new colors.** If a concept doesn't fit an existing semantic category, use Primary/Neutral or Secondary.
|
||||
|
||||
---
|
||||
|
||||
## Modern Aesthetics
|
||||
|
||||
For clean, professional diagrams:
|
||||
|
||||
### Roughness
|
||||
- `roughness: 0` — Clean, crisp edges. Use for modern/technical diagrams.
|
||||
- `roughness: 1` — Hand-drawn, organic feel. Use for brainstorming/informal diagrams.
|
||||
|
||||
**Default to 0** for most professional use cases.
|
||||
|
||||
### Stroke Width
|
||||
- `strokeWidth: 1` — Thin, elegant. Good for lines, dividers, subtle connections.
|
||||
- `strokeWidth: 2` — Standard. Good for shapes and primary arrows.
|
||||
- `strokeWidth: 3` — Bold. Use sparingly for emphasis (main flow line, key connections).
|
||||
|
||||
### Opacity
|
||||
**Always use `opacity: 100` for all elements.** Use color, size, and stroke width to create hierarchy instead of transparency.
|
||||
|
||||
### Small Markers Instead of Shapes
|
||||
Instead of full shapes, use small dots (10-20px ellipses) as:
|
||||
- Timeline markers
|
||||
- Bullet points
|
||||
- Connection nodes
|
||||
- Visual anchors for free-floating text
|
||||
|
||||
---
|
||||
|
||||
## Layout Principles
|
||||
|
||||
### Hierarchy Through Scale
|
||||
- **Hero**: 300×150 - visual anchor, most important
|
||||
- **Primary**: 180×90
|
||||
- **Secondary**: 120×60
|
||||
- **Small**: 60×40
|
||||
|
||||
### Whitespace = Importance
|
||||
The most important element has the most empty space around it (200px+).
|
||||
|
||||
### Flow Direction
|
||||
Guide the eye: typically left→right or top→bottom for sequences, radial for hub-and-spoke.
|
||||
|
||||
### Connections Required
|
||||
Position alone doesn't show relationships. If A relates to B, there must be an arrow.
|
||||
|
||||
---
|
||||
|
||||
## Text Rules
|
||||
|
||||
**CRITICAL**: The JSON `text` property contains ONLY readable words.
|
||||
|
||||
```json
{
  "id": "myElement1",
  "type": "text",
  "text": "Start",
  "originalText": "Start"
}
```
|
||||
Settings: `fontSize: 16`, `fontFamily: 3`, `textAlign: "center"`, `verticalAlign: "middle"`
|
||||
|
||||
---
|
||||
|
||||
## Element Types
|
||||
|
||||
| Type | Use For |
|
||||
|------|---------|
|
||||
| `rectangle` | Services, databases, containers, orchestrators |
|
||||
| `ellipse` | Users, external systems, start/end points |
|
||||
| `text` | Labels inside shapes, titles, annotations |
|
||||
| `arrow` | Data flow, connections, dependencies |
|
||||
| `line` | Grouping boundaries, separators |
|
||||
|
||||
**Full JSON format:** See `references/json-format.md`
|
||||
|
||||
---
|
||||
|
||||
## Workflow
|
||||
|
||||
### Step 1: Analyze Codebase
|
||||
|
||||
Discover components by looking for:
|
||||
|
||||
| Codebase Type | What to Look For |
|
||||
|---------------|------------------|
|
||||
| Monorepo | `packages/*/package.json`, workspace configs |
|
||||
| Microservices | `docker-compose.yml`, k8s manifests |
|
||||
| IaC | Terraform/Pulumi resource definitions |
|
||||
| Backend API | Route definitions, controllers, DB models |
|
||||
| Frontend | Component hierarchy, API calls |
|
||||
|
||||
**Use tools:**
|
||||
- `Glob` → `**/package.json`, `**/Dockerfile`, `**/*.tf`
|
||||
- `Grep` → `app.get`, `@Controller`, `CREATE TABLE`
|
||||
- `Read` → README, config files, entry points
|
||||
|
||||
### Step 2: Plan Layout
|
||||
|
||||
**Vertical flow (most common):**
|
||||
```
|
||||
Row 1: Users/Entry points (y: 100)
|
||||
Row 2: Frontend/Gateway (y: 230)
|
||||
Row 3: Orchestration (y: 380)
|
||||
Row 4: Services (y: 530)
|
||||
Row 5: Data layer (y: 680)
|
||||
|
||||
Columns: x = 100, 300, 500, 700, 900
|
||||
Element size: 160-200px x 80-90px
|
||||
```
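The row/column plan above can be generated programmatically. A sketch (the row names and coordinates are the ones listed; the helper itself is hypothetical):

```python
# Planned y coordinate per row, x coordinate per column
ROW_Y = {"users": 100, "frontend": 230, "orchestration": 380,
         "services": 530, "data": 680}
COL_X = [100, 300, 500, 700, 900]

def grid_position(row: str, col: int) -> tuple:
    """Top-left (x, y) for an element placed in the planned grid."""
    return COL_X[col], ROW_Y[row]

assert grid_position("services", 2) == (500, 530)
```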
|
||||
|
||||
**Other patterns:** See `references/examples.md`
|
||||
|
||||
### Step 3: Generate Elements
|
||||
|
||||
For each component:
|
||||
1. Create shape with unique `id`
|
||||
2. Add `boundElements` referencing text
|
||||
3. Create text with `containerId`
|
||||
4. Choose color based on type
|
||||
|
||||
**Color palettes:** See `references/colors.md`
|
||||
|
||||
### Step 4: Add Connections
|
||||
|
||||
For each relationship:
|
||||
1. Calculate source edge point
|
||||
2. Plan elbow route (avoid overlaps)
|
||||
3. Create arrow with `points` array
|
||||
4. Match stroke color to destination type
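Steps 1-3 in code form, for the common bottom-to-top case (a sketch consistent with the edge formulas and elbow properties used throughout this document):

```python
def bottom_to_top_arrow(source, target):
    """Arrow from source's bottom-center edge to target's top-center edge:
    a straight drop when nearly aligned, an L-shape otherwise."""
    sx = source["x"] + source["width"] / 2           # source bottom-center
    sy = source["y"] + source["height"]
    tx = target["x"] + target["width"] / 2           # target top-center
    ty = target["y"]
    dx, dy = tx - sx, ty - sy
    points = [[0, 0], [0, dy]] if abs(dx) < 10 else [[0, 0], [dx, 0], [dx, dy]]
    return {"type": "arrow", "x": sx, "y": sy, "points": points,
            "elbowed": True, "roundness": None, "roughness": 0}

a = bottom_to_top_arrow({"x": 500, "y": 200, "width": 180, "height": 90},
                        {"x": 500, "y": 400, "width": 180, "height": 90})
assert a["points"] == [[0, 0], [0, 110]]
```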
|
||||
|
||||
**Arrow patterns:** See `references/arrows.md`
|
||||
|
||||
### Step 5: Add Grouping (Optional)
|
||||
|
||||
For logical groupings:
|
||||
- Large transparent rectangle with `strokeStyle: "dashed"`
|
||||
- Standalone text label at top-left
|
||||
|
||||
### Step 6: Validate and Write
|
||||
|
||||
Run validation before writing. Save to `docs/` or user-specified path.
|
||||
|
||||
**Validation checklist:** See `references/validation.md`
|
||||
|
||||
---
|
||||
|
||||
## Quick Arrow Reference
|
||||
|
||||
**Straight down:**
```json
{ "points": [[0, 0], [0, 110]], "x": 590, "y": 290 }
```

**File wrapper** (top-level structure of every `.excalidraw` file):
```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "https://excalidraw.com",
  "elements": [...],
  "appState": {
    "viewBackgroundColor": "#ffffff",
    "gridSize": 20
  },
  "files": {}
}
```
|
||||
|
||||
**L-shape (left then down):**
|
||||
```json
|
||||
{ "points": [[0, 0], [-325, 0], [-325, 125]], "x": 525, "y": 420 }
|
||||
```
|
||||
|
||||
**U-turn (callback):**
|
||||
```json
|
||||
{ "points": [[0, 0], [50, 0], [50, -125], [20, -125]], "x": 710, "y": 440 }
|
||||
```
|
||||
|
||||
**Arrow width/height** = bounding box of points:
|
||||
```
|
||||
points [[0,0], [-440,0], [-440,70]] → width=440, height=70
|
||||
```
|
||||
|
||||
**Multiple arrows from same edge** - stagger positions:
|
||||
```
|
||||
5 arrows: 20%, 35%, 50%, 65%, 80% across edge width
|
||||
```
|
||||
See `references/element-templates.md` for copy-paste JSON templates for each element type (text, line, dot, rectangle, arrow). Pull colors from `references/color-palette.md` based on each element's semantic purpose.
|
||||
|
||||
---
|
||||
|
||||
## Default Color Palette

| Component | Background | Stroke |
|-----------|------------|--------|
| Frontend | `#a5d8ff` | `#1971c2` |
| Backend/API | `#d0bfff` | `#7048e8` |
| Database | `#b2f2bb` | `#2f9e44` |
| Storage | `#ffec99` | `#f08c00` |
| AI/ML | `#e599f7` | `#9c36b5` |
| External APIs | `#ffc9c9` | `#e03131` |
| Orchestration | `#ffa8a8` | `#c92a2a` |
| Message Queue | `#fff3bf` | `#fab005` |
| Cache | `#ffe8cc` | `#fd7e14` |
| Users | `#e7f5ff` | `#1971c2` |

**Cloud-specific palettes:** See `references/colors.md`

---

## Render & Validate (MANDATORY)

You cannot judge a diagram from JSON alone. After generating or editing the Excalidraw JSON, you MUST render it to PNG, view the image, and fix what you see — in a loop until it's right. This is a core part of the workflow, not a final check.

### How to Render
|
||||
|
||||
Run the render script from the skill's `references/` directory:
|
||||
|
||||
```bash
|
||||
python3 <skill-references-dir>/render_excalidraw.py <path-to-file.excalidraw>
|
||||
```
|
||||
|
||||
This outputs a PNG next to the `.excalidraw` file. Then use the **Read tool** on the PNG to actually view it.
|
||||
|
||||
### The Loop
|
||||
|
||||
After generating the initial JSON, run this cycle:
|
||||
|
||||
**1. Render & View** — Run the render script, then Read the PNG.
|
||||
|
||||
**2. Audit against your original vision** — Before looking for bugs, compare the rendered result to what you designed in Steps 1-4. Ask:
|
||||
- Does the visual structure match the conceptual structure you planned?
|
||||
- Does each section use the pattern you intended (fan-out, convergence, timeline, etc.)?
|
||||
- Does the eye flow through the diagram in the order you designed?
|
||||
- Is the visual hierarchy correct — hero elements dominant, supporting elements smaller?
|
||||
- For technical diagrams: are the evidence artifacts (code snippets, data examples) readable and properly placed?
|
||||
|
||||
**3. Check for visual defects:**
|
||||
- Text clipped by or overflowing its container
|
||||
- Text or shapes overlapping other elements
|
||||
- Arrows crossing through elements instead of routing around them
|
||||
- Arrows landing on the wrong element or pointing into empty space
|
||||
- Labels floating ambiguously (not clearly anchored to what they describe)
|
||||
- Uneven spacing between elements that should be evenly spaced
|
||||
- Sections with too much whitespace next to sections that are too cramped
|
||||
- Text too small to read at the rendered size
|
||||
- Overall composition feels lopsided or unbalanced
|
||||
|
||||
**4. Fix** — Edit the JSON to address everything you found. Common fixes:
|
||||
- Widen containers when text is clipped
|
||||
- Adjust `x`/`y` coordinates to fix spacing and alignment
|
||||
- Add intermediate waypoints to arrow `points` arrays to route around elements
|
||||
- Reposition labels closer to the element they describe
|
||||
- Resize elements to rebalance visual weight across sections
|
||||
|
||||
**5. Re-render & re-view** — Run the render script again and Read the new PNG.
|
||||
|
||||
**6. Repeat** — Keep cycling until the diagram passes both the vision check (Step 2) and the defect check (Step 3). Typically takes 2-4 iterations. Don't stop after one pass just because there are no critical bugs — if the composition could be better, improve it.
|
||||
|
||||
### When to Stop
|
||||
|
||||
The loop is done when:
|
||||
- The rendered diagram matches the conceptual design from your planning steps
|
||||
- No text is clipped, overlapping, or unreadable
|
||||
- Arrows route cleanly and connect to the right elements
|
||||
- Spacing is consistent and the composition is balanced
|
||||
- You'd be comfortable showing it to someone without caveats
|
||||
|
||||
---
|
||||
|
||||
## Quick Validation Checklist

Before writing the file:

- [ ] Every shape with a label has `boundElements` + a text element
- [ ] Text elements have a `containerId` matching their shape
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`
- [ ] Arrow `x`,`y` = source shape edge point
- [ ] Arrow final point offset reaches the target edge
- [ ] No duplicate IDs

**Full validation algorithm:** See `references/validation.md`

---

## Quality Checklist

### Depth & Evidence (Check First for Technical Diagrams)
1. **Research done**: Did you look up actual specs, formats, event names?
2. **Evidence artifacts**: Are there code snippets, JSON examples, or real data?
3. **Multi-zoom**: Does it have summary flow + section boundaries + detail?
4. **Concrete over abstract**: Real content shown, not just labeled boxes?
5. **Educational value**: Could someone learn something concrete from this?

### Conceptual
6. **Isomorphism**: Does each visual structure mirror its concept's behavior?
7. **Argument**: Does the diagram SHOW something text alone couldn't?
8. **Variety**: Does each major concept use a different visual pattern?
9. **No uniform containers**: Avoided card grids and equal boxes?
|
||||
|
||||
### Container Discipline
|
||||
10. **Minimal containers**: Could any boxed element work as free-floating text instead?
|
||||
11. **Lines as structure**: Are tree/timeline patterns using lines + text rather than boxes?
|
||||
12. **Typography hierarchy**: Are font size and color creating visual hierarchy (reducing need for boxes)?
|
||||
|
||||
### Structural
13. **Connections**: Every relationship has an arrow or line
14. **Flow**: Clear visual path for the eye to follow
15. **Hierarchy**: Important elements are larger/more isolated

### Technical
16. **Text clean**: `text` contains only readable words
17. **Font**: `fontFamily: 3`
18. **Roughness**: `roughness: 0` for clean/modern (unless a hand-drawn style is requested)
19. **Opacity**: `opacity: 100` for all elements (no transparency)
20. **Container ratio**: <30% of text elements inside containers

---

## Common Issues

| Issue | Fix |
|-------|-----|
| Labels don't appear | Use TWO elements (shape + text), not the `label` property |
| Arrows curved | Add `elbowed: true`, `roundness: null`, `roughness: 0` |
| Arrows floating | Calculate x,y from the shape edge, not the center |
| Arrows overlapping | Stagger start positions across the edge |

**Detailed bug fixes:** See `references/validation.md`
|
||||
|
||||
---
|
||||
|
||||
## Reference Files
|
||||
|
||||
| File | Contents |
|
||||
|------|----------|
|
||||
| `references/json-format.md` | Element types, required properties, text bindings |
|
||||
| `references/arrows.md` | Routing algorithm, patterns, bindings, staggering |
|
||||
| `references/colors.md` | Default, AWS, Azure, GCP, K8s palettes |
|
||||
| `references/examples.md` | Complete JSON examples, layout patterns |
|
||||
| `references/validation.md` | Checklists, validation algorithm, bug fixes |
|
||||
|
||||
---
|
||||
|
||||
### Visual Validation (Render Required)
21. **Rendered to PNG**: Diagram has been rendered and visually inspected
22. **No text overflow**: All text fits within its container
23. **No overlapping elements**: Shapes and text don't overlap unintentionally
24. **Even spacing**: Similar elements have consistent spacing
25. **Arrows land correctly**: Arrows connect to intended elements without crossing others
26. **Readable at export size**: Text is legible in the rendered PNG
27. **Balanced composition**: No large empty voids or overcrowded regions

---

## Output

- **Location:** `docs/architecture/` or user-specified
- **Filename:** Descriptive, e.g., `system-architecture.excalidraw`
- **Testing:** Open in https://excalidraw.com or the VS Code extension
|
||||
|
||||
---
# Arrow Routing Reference
|
||||
|
||||
Complete guide for creating elbow arrows with proper connections.
|
||||
|
||||
---
|
||||
|
||||
## Critical: Elbow Arrow Properties
|
||||
|
||||
Three required properties for 90-degree corners:
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "arrow",
|
||||
"roughness": 0, // Clean lines
|
||||
"roundness": null, // Sharp corners (not curved)
|
||||
"elbowed": true // Enables elbow mode
|
||||
}
|
||||
```
|
||||
|
||||
**Without these, arrows will be curved, not 90-degree elbows.**
|
||||
|
||||
---
|
||||
|
||||
## Edge Calculation Formulas
|
||||
|
||||
| Shape Type | Edge | Formula |
|
||||
|------------|------|---------|
|
||||
| Rectangle | Top | `(x + width/2, y)` |
|
||||
| Rectangle | Bottom | `(x + width/2, y + height)` |
|
||||
| Rectangle | Left | `(x, y + height/2)` |
|
||||
| Rectangle | Right | `(x + width, y + height/2)` |
|
||||
| Ellipse | Top | `(x + width/2, y)` |
|
||||
| Ellipse | Bottom | `(x + width/2, y + height)` |
|
||||
|
||||
---
|
||||
|
||||
## Universal Arrow Routing Algorithm
|
||||
|
||||
```
|
||||
FUNCTION createArrow(source, target, sourceEdge, targetEdge):
|
||||
// Step 1: Get source edge point
|
||||
sourcePoint = getEdgePoint(source, sourceEdge)
|
||||
|
||||
// Step 2: Get target edge point
|
||||
targetPoint = getEdgePoint(target, targetEdge)
|
||||
|
||||
// Step 3: Calculate offsets
|
||||
dx = targetPoint.x - sourcePoint.x
|
||||
dy = targetPoint.y - sourcePoint.y
|
||||
|
||||
// Step 4: Determine routing pattern
|
||||
IF sourceEdge == "bottom" AND targetEdge == "top":
|
||||
IF abs(dx) < 10: // Nearly aligned
|
||||
points = [[0, 0], [0, dy]]
|
||||
ELSE: // Need L-shape
|
||||
points = [[0, 0], [dx, 0], [dx, dy]]
|
||||
|
||||
ELSE IF sourceEdge == "right" AND targetEdge == "left":
|
||||
IF abs(dy) < 10:
|
||||
points = [[0, 0], [dx, 0]]
|
||||
ELSE:
|
||||
points = [[0, 0], [0, dy], [dx, dy]]
|
||||
|
||||
ELSE IF sourceEdge == targetEdge: // U-turn
|
||||
clearance = 50
|
||||
IF sourceEdge == "right":
|
||||
points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
|
||||
ELSE IF sourceEdge == "bottom":
|
||||
points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
|
||||
|
||||
// Step 5: Calculate bounding box
|
||||
width = max(abs(p[0]) for p in points)
|
||||
height = max(abs(p[1]) for p in points)
|
||||
|
||||
RETURN {x: sourcePoint.x, y: sourcePoint.y, points, width, height}
|
||||
|
||||
FUNCTION getEdgePoint(shape, edge):
|
||||
SWITCH edge:
|
||||
"top": RETURN (shape.x + shape.width/2, shape.y)
|
||||
"bottom": RETURN (shape.x + shape.width/2, shape.y + shape.height)
|
||||
"left": RETURN (shape.x, shape.y + shape.height/2)
|
||||
"right": RETURN (shape.x + shape.width, shape.y + shape.height/2)
|
||||
```
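The pseudocode above translates directly to Python. A sketch under the same assumptions (only the routing cases listed above are handled):

```python
def get_edge_point(shape, edge):
    """Edge midpoint of a shape, per the formulas above."""
    x, y, w, h = shape["x"], shape["y"], shape["width"], shape["height"]
    return {"top": (x + w / 2, y), "bottom": (x + w / 2, y + h),
            "left": (x, y + h / 2), "right": (x + w, y + h / 2)}[edge]

def create_arrow(source, target, source_edge, target_edge, clearance=50):
    sx, sy = get_edge_point(source, source_edge)
    tx, ty = get_edge_point(target, target_edge)
    dx, dy = tx - sx, ty - sy
    if source_edge == "bottom" and target_edge == "top":
        points = [[0, 0], [0, dy]] if abs(dx) < 10 else [[0, 0], [dx, 0], [dx, dy]]
    elif source_edge == "right" and target_edge == "left":
        points = [[0, 0], [dx, 0]] if abs(dy) < 10 else [[0, 0], [0, dy], [dx, dy]]
    elif source_edge == target_edge == "right":          # U-turn
        points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
    elif source_edge == target_edge == "bottom":         # U-turn
        points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
    else:
        raise ValueError("routing case not covered by this sketch")
    width = max(abs(p[0]) for p in points)
    height = max(abs(p[1]) for p in points)
    return {"x": sx, "y": sy, "points": points, "width": width, "height": height}

# Worked example: bottom of (500,200,180,90) to top of (500,400,180,90)
a = create_arrow({"x": 500, "y": 200, "width": 180, "height": 90},
                 {"x": 500, "y": 400, "width": 180, "height": 90},
                 "bottom", "top")
assert (a["x"], a["y"], a["points"]) == (590, 290, [[0, 0], [0, 110]])
```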
|
||||
|
||||
---
|
||||
|
||||
## Arrow Patterns Reference
|
||||
|
||||
| Pattern | Points | Use Case |
|
||||
|---------|--------|----------|
|
||||
| Down | `[[0,0], [0,h]]` | Vertical connection |
|
||||
| Right | `[[0,0], [w,0]]` | Horizontal connection |
|
||||
| L-left-down | `[[0,0], [-w,0], [-w,h]]` | Go left, then down |
|
||||
| L-right-down | `[[0,0], [w,0], [w,h]]` | Go right, then down |
|
||||
| L-down-left | `[[0,0], [0,h], [-w,h]]` | Go down, then left |
|
||||
| L-down-right | `[[0,0], [0,h], [w,h]]` | Go down, then right |
|
||||
| S-shape | `[[0,0], [0,h1], [w,h1], [w,h2]]` | Navigate around obstacles |
|
||||
| U-turn | `[[0,0], [w,0], [w,-h], [0,-h]]` | Callback/return arrows |
|
||||
|
||||
---
|
||||
|
||||
## Worked Examples
|
||||
|
||||
### Vertical Connection (Bottom to Top)
|
||||
|
||||
```
|
||||
Source: x=500, y=200, width=180, height=90
|
||||
Target: x=500, y=400, width=180, height=90
|
||||
|
||||
source_bottom = (500 + 180/2, 200 + 90) = (590, 290)
|
||||
target_top = (500 + 180/2, 400) = (590, 400)
|
||||
|
||||
Arrow x = 590, y = 290
|
||||
Distance = 400 - 290 = 110
|
||||
Points = [[0, 0], [0, 110]]
|
||||
```
|
||||
|
||||
### Fan-out (One to Many)
|
||||
|
||||
```
|
||||
Orchestrator: x=570, y=400, width=140, height=80
|
||||
Target: x=120, y=550, width=160, height=80
|
||||
|
||||
orchestrator_bottom = (570 + 140/2, 400 + 80) = (640, 480)
|
||||
target_top = (120 + 160/2, 550) = (200, 550)
|
||||
|
||||
Arrow x = 640, y = 480
|
||||
Horizontal offset = 200 - 640 = -440
|
||||
Vertical offset = 550 - 480 = 70
|
||||
|
||||
Points = [[0, 0], [-440, 0], [-440, 70]] // Left first, then down
|
||||
```
|
||||
|
||||
### U-turn (Callback)
|
||||
|
||||
```
|
||||
Source: x=570, y=400, width=140, height=80
|
||||
Target: x=550, y=270, width=180, height=90
|
||||
Connection: Right of source -> Right of target
|
||||
|
||||
source_right = (570 + 140, 400 + 80/2) = (710, 440)
|
||||
target_right = (550 + 180, 270 + 90/2) = (730, 315)
|
||||
|
||||
Arrow x = 710, y = 440
|
||||
Vertical distance = 315 - 440 = -125
|
||||
Final x offset = 730 - 710 = 20
|
||||
|
||||
Points = [[0, 0], [50, 0], [50, -125], [20, -125]]
|
||||
// Right 50px (clearance), up 125px, left 30px
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Staggering Multiple Arrows
|
||||
|
||||
When N arrows leave from same edge, spread evenly:
|
||||
|
||||
```
|
||||
FUNCTION getStaggeredPositions(shape, edge, numArrows):
|
||||
positions = []
|
||||
FOR i FROM 0 TO numArrows-1:
|
||||
percentage = 0.2 + (0.6 * i / (numArrows - 1))
|
||||
|
||||
IF edge == "bottom" OR edge == "top":
|
||||
x = shape.x + shape.width * percentage
|
||||
y = (edge == "bottom") ? shape.y + shape.height : shape.y
|
||||
ELSE:
|
||||
x = (edge == "right") ? shape.x + shape.width : shape.x
|
||||
y = shape.y + shape.height * percentage
|
||||
|
||||
positions.append({x, y})
|
||||
RETURN positions
|
||||
|
||||
// Examples:
|
||||
// 2 arrows: 20%, 80%
|
||||
// 3 arrows: 20%, 50%, 80%
|
||||
// 5 arrows: 20%, 35%, 50%, 65%, 80%
|
||||
```
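The same function in Python. Note the `num_arrows == 1` case, which the pseudocode leaves undefined (division by zero); here a single arrow goes at the 50% midpoint:

```python
def staggered_positions(shape, edge, num_arrows):
    """Evenly spread arrow start points between 20% and 80% of an edge."""
    positions = []
    for i in range(num_arrows):
        pct = 0.5 if num_arrows == 1 else 0.2 + 0.6 * i / (num_arrows - 1)
        if edge in ("top", "bottom"):
            x = shape["x"] + shape["width"] * pct
            y = shape["y"] + shape["height"] if edge == "bottom" else shape["y"]
        else:
            x = shape["x"] + shape["width"] if edge == "right" else shape["x"]
            y = shape["y"] + shape["height"] * pct
        positions.append((x, y))
    return positions

# 3 arrows leaving the bottom edge land at 20%, 50%, 80% across the width
shape = {"x": 0, "y": 0, "width": 100, "height": 50}
xs = [round(x) for x, y in staggered_positions(shape, "bottom", 3)]
assert xs == [20, 50, 80]
```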
|
||||
|
||||
---
|
||||
|
||||
## Arrow Bindings
|
||||
|
||||
For better visual attachment, use `startBinding` and `endBinding`:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "arrow-workflow-convert",
|
||||
"type": "arrow",
|
||||
"x": 525,
|
||||
"y": 420,
|
||||
"width": 325,
|
||||
"height": 125,
|
||||
"points": [[0, 0], [-325, 0], [-325, 125]],
|
||||
"roughness": 0,
|
||||
"roundness": null,
|
||||
"elbowed": true,
|
||||
"startBinding": {
|
||||
"elementId": "cloud-workflows",
|
||||
"focus": 0,
|
||||
"gap": 1,
|
||||
"fixedPoint": [0.5, 1]
|
||||
},
|
||||
"endBinding": {
|
||||
"elementId": "convert-pdf-service",
|
||||
"focus": 0,
|
||||
"gap": 1,
|
||||
"fixedPoint": [0.5, 0]
|
||||
},
|
||||
"startArrowhead": null,
|
||||
"endArrowhead": "arrow"
|
||||
}
|
||||
```
|
||||
|
||||
### fixedPoint Values
|
||||
|
||||
- Top center: `[0.5, 0]`
|
||||
- Bottom center: `[0.5, 1]`
|
||||
- Left center: `[0, 0.5]`
|
||||
- Right center: `[1, 0.5]`
|
||||
|
||||
### Update Shape boundElements
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "cloud-workflows",
|
||||
"boundElements": [
|
||||
{ "type": "text", "id": "cloud-workflows-text" },
|
||||
{ "type": "arrow", "id": "arrow-workflow-convert" }
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Bidirectional Arrows
|
||||
|
||||
For two-way data flows:
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "arrow",
|
||||
"startArrowhead": "arrow",
|
||||
"endArrowhead": "arrow"
|
||||
}
|
||||
```
|
||||
|
||||
Arrowhead options: `null`, `"arrow"`, `"bar"`, `"dot"`, `"triangle"`
|
||||
|
||||
---
|
||||
|
||||
## Arrow Labels
|
||||
|
||||
Position standalone text near arrow midpoint:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "arrow-api-db-label",
|
||||
"type": "text",
|
||||
"x": 305, // Arrow x + offset
|
||||
"y": 245, // Arrow midpoint
|
||||
"text": "SQL",
|
||||
"fontSize": 12,
|
||||
"containerId": null,
|
||||
"backgroundColor": "#ffffff"
|
||||
}
|
||||
```
|
||||
|
||||
**Positioning formula:**
|
||||
- Vertical: `label.y = arrow.y + (total_height / 2)`
|
||||
- Horizontal: `label.x = arrow.x + (total_width / 2)`
|
||||
- L-shaped: Position at corner or longest segment midpoint
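The positioning formulas as code (a sketch; the arrow's `width`/`height` are its bounding-box dimensions):

```python
def label_position(arrow, orientation):
    """Midpoint label position for a straight arrow segment."""
    if orientation == "vertical":
        return arrow["x"], arrow["y"] + arrow["height"] / 2
    return arrow["x"] + arrow["width"] / 2, arrow["y"]

# Vertical arrow starting at (300, 190) with height 110: label sits at y=245
assert label_position({"x": 300, "y": 190, "width": 0, "height": 110},
                      "vertical") == (300, 245.0)
```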
|
||||
|
||||
---
|
||||
|
||||
## Width/Height Calculation
|
||||
|
||||
Arrow `width` and `height` = bounding box of path:
|
||||
|
||||
```
|
||||
points = [[0, 0], [-440, 0], [-440, 70]]
|
||||
width = abs(-440) = 440
|
||||
height = abs(70) = 70
|
||||
|
||||
points = [[0, 0], [50, 0], [50, -125], [20, -125]]
|
||||
width = max(abs(50), abs(20)) = 50
|
||||
height = abs(-125) = 125
|
||||
```
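A direct implementation of the rule above:

```python
def arrow_bbox(points):
    """Bounding-box width/height of an elbow arrow's relative points."""
    width = max(abs(x) for x, y in points)
    height = max(abs(y) for x, y in points)
    return width, height

assert arrow_bbox([[0, 0], [-440, 0], [-440, 70]]) == (440, 70)
assert arrow_bbox([[0, 0], [50, 0], [50, -125], [20, -125]]) == (50, 125)
```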
|
||||
---

`skills/excalidraw/references/color-palette.md` (new file):
|
||||
# Color Palette & Brand Style
|
||||
|
||||
**This is the single source of truth for all colors and brand-specific styles.** To customize diagrams for your own brand, edit this file — everything else in the skill is universal.
|
||||
|
||||
---
|
||||
|
||||
## Shape Colors (Semantic)
|
||||
|
||||
Colors encode meaning, not decoration. Each semantic purpose has a fill/stroke pair.
|
||||
|
||||
| Semantic Purpose | Fill | Stroke |
|
||||
|------------------|------|--------|
|
||||
| Primary/Neutral | `#3b82f6` | `#1e3a5f` |
|
||||
| Secondary | `#60a5fa` | `#1e3a5f` |
|
||||
| Tertiary | `#93c5fd` | `#1e3a5f` |
|
||||
| Start/Trigger | `#fed7aa` | `#c2410c` |
|
||||
| End/Success | `#a7f3d0` | `#047857` |
|
||||
| Warning/Reset | `#fee2e2` | `#dc2626` |
|
||||
| Decision | `#fef3c7` | `#b45309` |
|
||||
| AI/LLM | `#ddd6fe` | `#6d28d9` |
|
||||
| Inactive/Disabled | `#dbeafe` | `#1e40af` (use dashed stroke) |
|
||||
| Error | `#fecaca` | `#b91c1c` |
|
||||
|
||||
**Rule**: Always pair a darker stroke with a lighter fill for contrast.
|
||||
|
||||
---
|
||||
|
||||
## Text Colors (Hierarchy)
|
||||
|
||||
Use color on free-floating text to create visual hierarchy without containers.
|
||||
|
||||
| Level | Color | Use For |
|
||||
|-------|-------|---------|
|
||||
| Title | `#1e40af` | Section headings, major labels |
|
||||
| Subtitle | `#3b82f6` | Subheadings, secondary labels |
|
||||
| Body/Detail | `#64748b` | Descriptions, annotations, metadata |
|
||||
| On light fills | `#374151` | Text inside light-colored shapes |
|
||||
| On dark fills | `#ffffff` | Text inside dark-colored shapes |
|
||||
|
||||
---
|
||||
|
||||
## Evidence Artifact Colors
|
||||
|
||||
Used for code snippets, data examples, and other concrete evidence inside technical diagrams.
|
||||
|
||||
| Artifact | Background | Text Color |
|
||||
|----------|-----------|------------|
|
||||
| Code snippet | `#1e293b` | Syntax-colored (language-appropriate) |
|
||||
| JSON/data example | `#1e293b` | `#22c55e` (green) |
|
||||
|
||||
---
|
||||
|
||||
## Default Stroke & Line Colors
|
||||
|
||||
| Element | Color |
|
||||
|---------|-------|
|
||||
| Arrows | Use the stroke color of the source element's semantic purpose |
|
||||
| Structural lines (dividers, trees, timelines) | Primary stroke (`#1e3a5f`) or Slate (`#64748b`) |
|
||||
| Marker dots (fill + stroke) | Primary fill (`#3b82f6`) |
|
||||
|
||||
---
|
||||
|
||||
## Background
|
||||
|
||||
| Property | Value |
|
||||
|----------|-------|
|
||||
| Canvas background | `#ffffff` |
|
||||
---
# Color Palettes Reference
|
||||
|
||||
Color schemes for different platforms and component types.
|
||||
|
||||
---
|
||||
|
||||
## Default Palette (Platform-Agnostic)
|
||||
|
||||
| Component Type | Background | Stroke | Example |
|
||||
|----------------|------------|--------|---------|
|
||||
| Frontend/UI | `#a5d8ff` | `#1971c2` | Next.js, React apps |
|
||||
| Backend/API | `#d0bfff` | `#7048e8` | API servers, processors |
|
||||
| Database | `#b2f2bb` | `#2f9e44` | PostgreSQL, MySQL, MongoDB |
|
||||
| Storage | `#ffec99` | `#f08c00` | Object storage, file systems |
|
||||
| AI/ML Services | `#e599f7` | `#9c36b5` | ML models, AI APIs |
|
||||
| External APIs | `#ffc9c9` | `#e03131` | Third-party services |
|
||||
| Orchestration | `#ffa8a8` | `#c92a2a` | Workflows, schedulers |
|
||||
| Validation | `#ffd8a8` | `#e8590c` | Validators, checkers |
|
||||
| Network/Security | `#dee2e6` | `#495057` | VPC, IAM, firewalls |
|
||||
| Classification | `#99e9f2` | `#0c8599` | Routers, classifiers |
|
||||
| Users/Actors | `#e7f5ff` | `#1971c2` | User ellipses |
|
||||
| Message Queue | `#fff3bf` | `#fab005` | Kafka, RabbitMQ, SQS |
|
||||
| Cache | `#ffe8cc` | `#fd7e14` | Redis, Memcached |
|
||||
| Monitoring | `#d3f9d8` | `#40c057` | Prometheus, Grafana |
|
||||
|
||||
---
|
||||
|
||||
## AWS Palette
|
||||
|
||||
| Service Category | Background | Stroke |
|
||||
|-----------------|------------|--------|
|
||||
| Compute (EC2, Lambda, ECS) | `#ff9900` | `#cc7a00` |
|
||||
| Storage (S3, EBS) | `#3f8624` | `#2d6119` |
|
||||
| Database (RDS, DynamoDB) | `#3b48cc` | `#2d3899` |
|
||||
| Networking (VPC, Route53) | `#8c4fff` | `#6b3dcc` |
|
||||
| Security (IAM, KMS) | `#dd344c` | `#b12a3d` |
|
||||
| Analytics (Kinesis, Athena) | `#8c4fff` | `#6b3dcc` |
|
||||
| ML (SageMaker, Bedrock) | `#01a88d` | `#017d69` |
|
||||
|
||||
---
|
||||
|
||||
## Azure Palette
|
||||
|
||||
| Service Category | Background | Stroke |
|
||||
|-----------------|------------|--------|
|
||||
| Compute | `#0078d4` | `#005a9e` |
|
||||
| Storage | `#50e6ff` | `#3cb5cc` |
|
||||
| Database | `#0078d4` | `#005a9e` |
|
||||
| Networking | `#773adc` | `#5a2ca8` |
|
||||
| Security | `#ff8c00` | `#cc7000` |
|
||||
| AI/ML | `#50e6ff` | `#3cb5cc` |
|
||||
|
||||
---
|
||||
|
||||
## GCP Palette
|
||||
|
||||
| Service Category | Background | Stroke |
|
||||
|-----------------|------------|--------|
|
||||
| Compute (GCE, Cloud Run) | `#4285f4` | `#3367d6` |
|
||||
| Storage (GCS) | `#34a853` | `#2d8e47` |
|
||||
| Database (Cloud SQL, Firestore) | `#ea4335` | `#c53929` |
|
||||
| Networking | `#fbbc04` | `#d99e04` |
|
||||
| AI/ML (Vertex AI) | `#9334e6` | `#7627b8` |
|
||||
|
||||
---
|
||||
|
||||
## Kubernetes Palette
|
||||
|
||||
| Component | Background | Stroke |
|
||||
|-----------|------------|--------|
|
||||
| Pod | `#326ce5` | `#2756b8` |
|
||||
| Service | `#326ce5` | `#2756b8` |
|
||||
| Deployment | `#326ce5` | `#2756b8` |
|
||||
| ConfigMap/Secret | `#7f8c8d` | `#626d6e` |
|
||||
| Ingress | `#00d4aa` | `#00a888` |
|
||||
| Node | `#303030` | `#1a1a1a` |
|
||||
| Namespace | `#f0f0f0` | `#c0c0c0` (dashed) |
|
||||
|
||||
---
|
||||
|
||||
## Diagram Type Suggestions
|
||||
|
||||
| Diagram Type | Recommended Layout | Key Elements |
|
||||
|--------------|-------------------|--------------|
|
||||
| Microservices | Vertical flow | Services, databases, queues, API gateway |
|
||||
| Data Pipeline | Horizontal flow | Sources, transformers, sinks, storage |
|
||||
| Event-Driven | Hub-and-spoke | Event bus center, producers/consumers |
|
||||
| Kubernetes | Layered groups | Namespace boxes, pods inside deployments |
|
||||
| CI/CD | Horizontal flow | Source -> Build -> Test -> Deploy -> Monitor |
|
||||
| Network | Hierarchical | Internet -> LB -> VPC -> Subnets -> Instances |
|
||||
| User Flow | Swimlanes | User actions, system responses, external calls |
|
||||
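When generating elements programmatically, the default palette table is convenient to keep as a lookup structure. The sketch below is illustrative and not part of the skill files: the `colors_for` name, the key spellings, and the grey fallback (borrowed from the Network/Security row) are assumptions.

```python
# Subset of the default palette, keyed by a short component-type name.
# Values are (backgroundColor, strokeColor) hex pairs from the table above.
DEFAULT_PALETTE = {
    "frontend": ("#a5d8ff", "#1971c2"),
    "backend": ("#d0bfff", "#7048e8"),
    "database": ("#b2f2bb", "#2f9e44"),
    "storage": ("#ffec99", "#f08c00"),
    "queue": ("#fff3bf", "#fab005"),
    "cache": ("#ffe8cc", "#fd7e14"),
}


def colors_for(component_type: str) -> tuple[str, str]:
    """Return (backgroundColor, strokeColor) for a component type.

    Unknown types fall back to the neutral Network/Security greys
    (an assumption for illustration, not a rule from the palette doc).
    """
    return DEFAULT_PALETTE.get(component_type, ("#dee2e6", "#495057"))
```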
skills/excalidraw/references/element-templates.md (new file, 182 lines)
@@ -0,0 +1,182 @@
# Element Templates

Copy-paste JSON templates for each Excalidraw element type. The `strokeColor` and `backgroundColor` values are placeholders — always pull actual colors from `color-palette.md` based on the element's semantic purpose.

## Free-Floating Text (no container)
```json
{
  "type": "text",
  "id": "label1",
  "x": 100, "y": 100,
  "width": 200, "height": 25,
  "text": "Section Title",
  "originalText": "Section Title",
  "fontSize": 20,
  "fontFamily": 3,
  "textAlign": "left",
  "verticalAlign": "top",
  "strokeColor": "<title color from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 1,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 11111,
  "version": 1,
  "versionNonce": 22222,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "containerId": null,
  "lineHeight": 1.25
}
```

## Line (structural, not arrow)
```json
{
  "type": "line",
  "id": "line1",
  "x": 100, "y": 100,
  "width": 0, "height": 200,
  "strokeColor": "<structural line color from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 44444,
  "version": 1,
  "versionNonce": 55555,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "points": [[0, 0], [0, 200]]
}
```

## Small Marker Dot
```json
{
  "type": "ellipse",
  "id": "dot1",
  "x": 94, "y": 94,
  "width": 12, "height": 12,
  "strokeColor": "<marker dot color from palette>",
  "backgroundColor": "<marker dot color from palette>",
  "fillStyle": "solid",
  "strokeWidth": 1,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 66666,
  "version": 1,
  "versionNonce": 77777,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false
}
```

## Rectangle
```json
{
  "type": "rectangle",
  "id": "elem1",
  "x": 100, "y": 100, "width": 180, "height": 90,
  "strokeColor": "<stroke from palette based on semantic purpose>",
  "backgroundColor": "<fill from palette based on semantic purpose>",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 12345,
  "version": 1,
  "versionNonce": 67890,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": [{"id": "text1", "type": "text"}],
  "link": null,
  "locked": false,
  "roundness": {"type": 3}
}
```

## Text (centered in shape)
```json
{
  "type": "text",
  "id": "text1",
  "x": 130, "y": 132,
  "width": 120, "height": 25,
  "text": "Process",
  "originalText": "Process",
  "fontSize": 16,
  "fontFamily": 3,
  "textAlign": "center",
  "verticalAlign": "middle",
  "strokeColor": "<text color — match parent shape's stroke or use 'on light/dark fills' from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 1,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 11111,
  "version": 1,
  "versionNonce": 22222,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "containerId": "elem1",
  "lineHeight": 1.25
}
```

## Arrow
```json
{
  "type": "arrow",
  "id": "arrow1",
  "x": 282, "y": 145, "width": 118, "height": 0,
  "strokeColor": "<arrow color — typically matches source element's stroke from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 33333,
  "version": 1,
  "versionNonce": 44444,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "points": [[0, 0], [118, 0]],
  "startBinding": {"elementId": "elem1", "focus": 0, "gap": 2},
  "endBinding": {"elementId": "elem2", "focus": 0, "gap": 2},
  "startArrowhead": null,
  "endArrowhead": "arrow"
}
```

For curves: use 3+ points in `points` array.
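Every template above repeats the same boilerplate fields (`fillStyle`, `opacity`, `seed`, and so on), so a generator can fill them once and let callers override only what varies. This is a minimal sketch, not part of the skill files; the `base_element` name and its default values are assumptions.

```python
import itertools

# Monotonic counter standing in for Excalidraw's random seed/versionNonce.
_counter = itertools.count(1)


def base_element(type_, id_, x, y, width, height, **overrides):
    """Build an element dict with the boilerplate every Excalidraw
    element needs, then merge in caller-supplied overrides
    (colors, roundness, points, boundElements, ...)."""
    el = {
        "type": type_, "id": id_,
        "x": x, "y": y, "width": width, "height": height,
        "angle": 0,
        "strokeColor": "#1e3a5f", "backgroundColor": "transparent",
        "fillStyle": "solid", "strokeWidth": 2, "strokeStyle": "solid",
        "roughness": 0, "opacity": 100,
        "groupIds": [], "boundElements": None, "link": None, "locked": False,
        "seed": next(_counter), "version": 1, "versionNonce": next(_counter),
        "isDeleted": False,
    }
    el.update(overrides)
    return el
```

A rectangle from the template above then reduces to a single call: `base_element("rectangle", "elem1", 100, 100, 180, 90, roundness={"type": 3}, ...)`.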
@@ -1,381 +0,0 @@
# Complete Examples Reference

Full JSON examples showing proper element structure.

---

## 3-Tier Architecture Example

This is a REFERENCE showing JSON structure. Replace IDs, labels, positions, and colors based on discovered components.

```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "claude-code-excalidraw-skill",
  "elements": [
    {
      "id": "user",
      "type": "ellipse",
      "x": 150,
      "y": 50,
      "width": 100,
      "height": 60,
      "angle": 0,
      "strokeColor": "#1971c2",
      "backgroundColor": "#e7f5ff",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 1,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": { "type": 2 },
      "seed": 1,
      "version": 1,
      "versionNonce": 1,
      "isDeleted": false,
      "boundElements": [{ "type": "text", "id": "user-text" }],
      "updated": 1,
      "link": null,
      "locked": false
    },
    {
      "id": "user-text",
      "type": "text",
      "x": 175,
      "y": 67,
      "width": 50,
      "height": 25,
      "angle": 0,
      "strokeColor": "#1e1e1e",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 1,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 2,
      "version": 1,
      "versionNonce": 2,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "text": "User",
      "fontSize": 16,
      "fontFamily": 1,
      "textAlign": "center",
      "verticalAlign": "middle",
      "baseline": 14,
      "containerId": "user",
      "originalText": "User",
      "lineHeight": 1.25
    },
    {
      "id": "frontend",
      "type": "rectangle",
      "x": 100,
      "y": 180,
      "width": 200,
      "height": 80,
      "angle": 0,
      "strokeColor": "#1971c2",
      "backgroundColor": "#a5d8ff",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 1,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": { "type": 3 },
      "seed": 3,
      "version": 1,
      "versionNonce": 3,
      "isDeleted": false,
      "boundElements": [{ "type": "text", "id": "frontend-text" }],
      "updated": 1,
      "link": null,
      "locked": false
    },
    {
      "id": "frontend-text",
      "type": "text",
      "x": 105,
      "y": 195,
      "width": 190,
      "height": 50,
      "angle": 0,
      "strokeColor": "#1e1e1e",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 1,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 4,
      "version": 1,
      "versionNonce": 4,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "text": "Frontend\nNext.js",
      "fontSize": 16,
      "fontFamily": 1,
      "textAlign": "center",
      "verticalAlign": "middle",
      "baseline": 14,
      "containerId": "frontend",
      "originalText": "Frontend\nNext.js",
      "lineHeight": 1.25
    },
    {
      "id": "database",
      "type": "rectangle",
      "x": 100,
      "y": 330,
      "width": 200,
      "height": 80,
      "angle": 0,
      "strokeColor": "#2f9e44",
      "backgroundColor": "#b2f2bb",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 1,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": { "type": 3 },
      "seed": 5,
      "version": 1,
      "versionNonce": 5,
      "isDeleted": false,
      "boundElements": [{ "type": "text", "id": "database-text" }],
      "updated": 1,
      "link": null,
      "locked": false
    },
    {
      "id": "database-text",
      "type": "text",
      "x": 105,
      "y": 345,
      "width": 190,
      "height": 50,
      "angle": 0,
      "strokeColor": "#1e1e1e",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 1,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 6,
      "version": 1,
      "versionNonce": 6,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "text": "Database\nPostgreSQL",
      "fontSize": 16,
      "fontFamily": 1,
      "textAlign": "center",
      "verticalAlign": "middle",
      "baseline": 14,
      "containerId": "database",
      "originalText": "Database\nPostgreSQL",
      "lineHeight": 1.25
    },
    {
      "id": "arrow-user-frontend",
      "type": "arrow",
      "x": 200,
      "y": 115,
      "width": 0,
      "height": 60,
      "angle": 0,
      "strokeColor": "#1971c2",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 7,
      "version": 1,
      "versionNonce": 7,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "points": [[0, 0], [0, 60]],
      "lastCommittedPoint": null,
      "startBinding": null,
      "endBinding": null,
      "startArrowhead": null,
      "endArrowhead": "arrow",
      "elbowed": true
    },
    {
      "id": "arrow-frontend-database",
      "type": "arrow",
      "x": 200,
      "y": 265,
      "width": 0,
      "height": 60,
      "angle": 0,
      "strokeColor": "#2f9e44",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 8,
      "version": 1,
      "versionNonce": 8,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "points": [[0, 0], [0, 60]],
      "lastCommittedPoint": null,
      "startBinding": null,
      "endBinding": null,
      "startArrowhead": null,
      "endArrowhead": "arrow",
      "elbowed": true
    }
  ],
  "appState": {
    "gridSize": 20,
    "viewBackgroundColor": "#ffffff"
  },
  "files": {}
}
```

---
## Layout Patterns

### Vertical Flow (Most Common)

```
Grid positioning:
- Column width: 200-250px
- Row height: 130-150px
- Element size: 160-200px x 80-90px
- Spacing: 40-50px between elements

Row positions (y):
  Row 0: 20  (title)
  Row 1: 100 (users/entry points)
  Row 2: 230 (frontend/gateway)
  Row 3: 380 (orchestration)
  Row 4: 530 (services)
  Row 5: 680 (data layer)
  Row 6: 830 (external services)

Column positions (x):
  Col 0: 100
  Col 1: 300
  Col 2: 500
  Col 3: 700
  Col 4: 900
```

### Horizontal Flow (Pipelines)

```
Stage positions (x):
  Stage 0: 100  (input/source)
  Stage 1: 350  (transform 1)
  Stage 2: 600  (transform 2)
  Stage 3: 850  (transform 3)
  Stage 4: 1100 (output/sink)

All stages at same y: 200
Arrows: "right" -> "left" connections
```
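The row and column tables above can be kept as small lookup lists so that placement is a pure function of grid coordinates. A sketch under those numbers (the `cell` helper name is an assumption, not part of the skill):

```python
# Vertical-flow grid from the layout pattern above.
ROW_Y = [20, 100, 230, 380, 530, 680, 830]  # title, users, frontend, ...
COL_X = [100, 300, 500, 700, 900]


def cell(row: int, col: int) -> tuple[int, int]:
    """Top-left (x, y) for an element placed at a grid cell."""
    return COL_X[col], ROW_Y[row]
```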
### Hub-and-Spoke

```
Center hub: x=500, y=350
8 positions at 45° increments:
  N:  (500, 150)
  NE: (640, 210)
  E:  (700, 350)
  SE: (640, 490)
  S:  (500, 550)
  SW: (360, 490)
  W:  (300, 350)
  NW: (360, 210)
```

---
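The eight spoke positions above sit on a radius-200 circle around the hub, with the diagonal points rounded to the nearest 10. A sketch that reproduces the cardinal points exactly (the diagonals land within a pixel or two of the rounded table values); the function name and its defaults are illustrative:

```python
import math


def spoke_positions(cx=500, cy=350, radius=200, n=8):
    """Centers of n spokes evenly spaced around (cx, cy),
    starting at north and going clockwise."""
    out = []
    for i in range(n):
        theta = math.radians(-90 + i * 360 / n)  # -90° puts spoke 0 at N
        out.append((round(cx + radius * math.cos(theta)),
                    round(cy + radius * math.sin(theta))))
    return out
```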
## Complex Architecture Layout

```
Row 0: Title/Header        (y: 20)
Row 1: Users/Clients       (y: 80)
Row 2: Frontend/Gateway    (y: 200)
Row 3: Orchestration       (y: 350)
Row 4: Processing Services (y: 550)
Row 5: Data Layer          (y: 680)
Row 6: External Services   (y: 830)

Columns (x):
  Col 0: 120
  Col 1: 320
  Col 2: 520
  Col 3: 720
  Col 4: 920
```

---

## Diagram Complexity Guidelines

| Complexity | Max Elements | Max Arrows | Approach |
|------------|-------------|------------|----------|
| Simple | 5-10 | 5-10 | Single file, no groups |
| Medium | 10-25 | 15-30 | Use grouping rectangles |
| Complex | 25-50 | 30-60 | Split into multiple diagrams |
| Very Complex | 50+ | 60+ | Multiple focused diagrams |

**When to split:**
- More than 50 elements
- Create: `architecture-overview.excalidraw`, `architecture-data-layer.excalidraw`

**When to use groups:**
- 3+ related services
- Same deployment unit
- Logical boundaries (VPC, Security Zone)
@@ -1,210 +0,0 @@
# Excalidraw JSON Format Reference

Complete reference for Excalidraw JSON structure and element types.

---

## File Structure

```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "claude-code-excalidraw-skill",
  "elements": [],
  "appState": {
    "gridSize": 20,
    "viewBackgroundColor": "#ffffff"
  },
  "files": {}
}
```

---

## Element Types

| Type | Use For | Arrow Reliability |
|------|---------|-------------------|
| `rectangle` | Services, components, databases, containers, orchestrators, decision points | Excellent |
| `ellipse` | Users, external systems, start/end points | Good |
| `text` | Labels inside shapes, titles, annotations | N/A |
| `arrow` | Data flow, connections, dependencies | N/A |
| `line` | Grouping boundaries, separators | N/A |

### BANNED: Diamond Shapes

**NEVER use `type: "diamond"` in generated diagrams.**

Diamond arrow connections are fundamentally broken in raw Excalidraw JSON:
- Excalidraw applies `roundness` to diamond vertices during rendering
- Visual edges appear offset from mathematical edge points
- No offset formula reliably compensates
- Arrows appear disconnected/floating

**Use styled rectangles instead** for visual distinction:

| Semantic Meaning | Rectangle Style |
|------------------|-----------------|
| Orchestrator/Hub | Coral (`#ffa8a8`/`#c92a2a`) + strokeWidth: 3 |
| Decision Point | Orange (`#ffd8a8`/`#e8590c`) + dashed stroke |
| Central Router | Larger size + bold color |

---

## Required Element Properties

Every element MUST have these properties:

```json
{
  "id": "unique-id-string",
  "type": "rectangle",
  "x": 100,
  "y": 100,
  "width": 200,
  "height": 80,
  "angle": 0,
  "strokeColor": "#1971c2",
  "backgroundColor": "#a5d8ff",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 1,
  "opacity": 100,
  "groupIds": [],
  "frameId": null,
  "roundness": { "type": 3 },
  "seed": 1,
  "version": 1,
  "versionNonce": 1,
  "isDeleted": false,
  "boundElements": null,
  "updated": 1,
  "link": null,
  "locked": false
}
```

---
## Text Inside Shapes (Labels)

**Every labeled shape requires TWO elements:**

### Shape with boundElements

```json
{
  "id": "{component-id}",
  "type": "rectangle",
  "x": 500,
  "y": 200,
  "width": 200,
  "height": 90,
  "strokeColor": "#1971c2",
  "backgroundColor": "#a5d8ff",
  "boundElements": [{ "type": "text", "id": "{component-id}-text" }],
  // ... other required properties
}
```

### Text with containerId

```json
{
  "id": "{component-id}-text",
  "type": "text",
  "x": 505,   // shape.x + 5
  "y": 220,   // shape.y + (shape.height - text.height) / 2
  "width": 190,  // shape.width - 10
  "height": 50,
  "text": "{Component Name}\n{Subtitle}",
  "fontSize": 16,
  "fontFamily": 1,
  "textAlign": "center",
  "verticalAlign": "middle",
  "containerId": "{component-id}",
  "originalText": "{Component Name}\n{Subtitle}",
  "lineHeight": 1.25,
  // ... other required properties
}
```

### DO NOT Use the `label` Property

The `label` property is for the JavaScript API, NOT raw JSON files:

```json
// WRONG - will show empty boxes
{ "type": "rectangle", "label": { "text": "My Label" } }

// CORRECT - requires TWO elements
// 1. Shape with boundElements reference
// 2. Separate text element with containerId
```

### Text Positioning

- Text `x` = shape `x` + 5
- Text `y` = shape `y` + (shape.height - text.height) / 2
- Text `width` = shape `width` - 10
- Use `\n` for multi-line labels
- Always use `textAlign: "center"` and `verticalAlign: "middle"`

### ID Naming Convention

Always use pattern: `{shape-id}-text` for text element IDs.
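The positioning rules above can be applied mechanically when generating label elements. A sketch; the `label_geometry` name and the 50px default text height are assumptions for illustration:

```python
def label_geometry(shape_x, shape_y, shape_w, shape_h, text_h=50):
    """Geometry for a text element centered in its container shape:
    5px horizontal inset, vertically centered."""
    return {
        "x": shape_x + 5,
        "y": shape_y + (shape_h - text_h) / 2,
        "width": shape_w - 10,
        "height": text_h,
    }
```

Applied to the 200x90 shape at (500, 200) above, this reproduces the example values x=505, y=220, width=190.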
---

## Dynamic ID Generation

IDs and labels are generated from codebase analysis:

| Discovered Component | Generated ID | Generated Label |
|---------------------|--------------|-----------------|
| Express API server | `express-api` | `"API Server\nExpress.js"` |
| PostgreSQL database | `postgres-db` | `"PostgreSQL\nDatabase"` |
| Redis cache | `redis-cache` | `"Redis\nCache Layer"` |
| S3 bucket for uploads | `s3-uploads` | `"S3 Bucket\nuploads/"` |
| Lambda function | `lambda-processor` | `"Lambda\nProcessor"` |
| React frontend | `react-frontend` | `"React App\nFrontend"` |

---

## Grouping with Dashed Rectangles

For logical groupings (namespaces, VPCs, pipelines):

```json
{
  "id": "group-ai-pipeline",
  "type": "rectangle",
  "x": 100,
  "y": 500,
  "width": 1000,
  "height": 280,
  "strokeColor": "#9c36b5",
  "backgroundColor": "transparent",
  "strokeStyle": "dashed",
  "roughness": 0,
  "roundness": null,
  "boundElements": null
}
```

Group labels are standalone text (no containerId) at top-left:

```json
{
  "id": "group-ai-pipeline-label",
  "type": "text",
  "x": 120,
  "y": 510,
  "text": "AI Processing Pipeline (Cloud Run)",
  "textAlign": "left",
  "verticalAlign": "top",
  "containerId": null
}
```
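The two snippets above always travel together, so a generator can emit them as a pair. A sketch that omits the other required boilerplate properties for brevity; the helper name is an assumption, and the 20/10px label inset is read off the example coordinates (120-100, 510-500):

```python
def group_box(group_id, x, y, width, height, label, stroke="#9c36b5"):
    """Dashed grouping rectangle plus its standalone top-left label.
    Boilerplate properties (seed, version, opacity, ...) omitted here."""
    rect = {
        "id": group_id, "type": "rectangle",
        "x": x, "y": y, "width": width, "height": height,
        "strokeColor": stroke, "backgroundColor": "transparent",
        "strokeStyle": "dashed", "roughness": 0,
        "roundness": None, "boundElements": None,
    }
    text = {
        "id": f"{group_id}-label", "type": "text",
        "x": x + 20, "y": y + 10,  # inset from the group's top-left corner
        "text": label, "originalText": label,
        "textAlign": "left", "verticalAlign": "top",
        "containerId": None,  # standalone: NOT bound to the rectangle
    }
    return [rect, text]
```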
skills/excalidraw/references/json-schema.md (new file, 71 lines)
@@ -0,0 +1,71 @@
# Excalidraw JSON Schema

## Element Types

| Type | Use For |
|------|---------|
| `rectangle` | Processes, actions, components |
| `ellipse` | Entry/exit points, external systems |
| `diamond` | Decisions, conditionals |
| `arrow` | Connections between shapes |
| `text` | Labels inside shapes |
| `line` | Non-arrow connections |
| `frame` | Grouping containers |

## Common Properties

All elements share these:

| Property | Type | Description |
|----------|------|-------------|
| `id` | string | Unique identifier |
| `type` | string | Element type |
| `x`, `y` | number | Position in pixels |
| `width`, `height` | number | Size in pixels |
| `strokeColor` | string | Border color (hex) |
| `backgroundColor` | string | Fill color (hex or "transparent") |
| `fillStyle` | string | "solid", "hachure", "cross-hatch" |
| `strokeWidth` | number | 1, 2, or 4 |
| `strokeStyle` | string | "solid", "dashed", "dotted" |
| `roughness` | number | 0 (smooth), 1 (default), 2 (rough) |
| `opacity` | number | 0-100 |
| `seed` | number | Random seed for roughness |

## Text-Specific Properties

| Property | Description |
|----------|-------------|
| `text` | The display text |
| `originalText` | Same as text |
| `fontSize` | Size in pixels (16-20 recommended) |
| `fontFamily` | 3 for monospace (use this) |
| `textAlign` | "left", "center", "right" |
| `verticalAlign` | "top", "middle", "bottom" |
| `containerId` | ID of parent shape |

## Arrow-Specific Properties

| Property | Description |
|----------|-------------|
| `points` | Array of [x, y] coordinates |
| `startBinding` | Connection to start shape |
| `endBinding` | Connection to end shape |
| `startArrowhead` | null, "arrow", "bar", "dot", "triangle" |
| `endArrowhead` | null, "arrow", "bar", "dot", "triangle" |

## Binding Format

```json
{
  "elementId": "shapeId",
  "focus": 0,
  "gap": 2
}
```

## Rectangle Roundness

Add for rounded corners:
```json
"roundness": { "type": 3 }
```
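A bound arrow can be assembled directly from the binding format above. This is a sketch that fills only the arrow-specific properties (the other common properties every element needs are omitted, and the helper name is an assumption):

```python
def bound_arrow(arrow_id, start_id, end_id, x, y, dx, dy):
    """Arrow whose endpoints are bound to two shapes, using the
    binding format above (focus 0, 2px gap)."""
    return {
        "id": arrow_id, "type": "arrow",
        "x": x, "y": y, "width": abs(dx), "height": abs(dy),
        "points": [[0, 0], [dx, dy]],  # relative to (x, y)
        "startBinding": {"elementId": start_id, "focus": 0, "gap": 2},
        "endBinding": {"elementId": end_id, "focus": 0, "gap": 2},
        "startArrowhead": None, "endArrowhead": "arrow",
    }
```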
skills/excalidraw/references/render_excalidraw.py (new file, 205 lines)
@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""Render Excalidraw JSON to PNG using Playwright + headless Chromium.

Usage:
    python3 render_excalidraw.py <path-to-file.excalidraw> [--output path.png] [--scale 2] [--width 1920]

Dependencies (playwright, chromium) are provided by the Nix flake / direnv environment.
"""

from __future__ import annotations

import argparse
import json
import sys
from pathlib import Path


def validate_excalidraw(data: dict) -> list[str]:
    """Validate Excalidraw JSON structure. Returns list of errors (empty = valid)."""
    errors: list[str] = []

    if data.get("type") != "excalidraw":
        errors.append(f"Expected type 'excalidraw', got '{data.get('type')}'")

    if "elements" not in data:
        errors.append("Missing 'elements' array")
    elif not isinstance(data["elements"], list):
        errors.append("'elements' must be an array")
    elif len(data["elements"]) == 0:
        errors.append("'elements' array is empty — nothing to render")

    return errors


def compute_bounding_box(elements: list[dict]) -> tuple[float, float, float, float]:
    """Compute bounding box (min_x, min_y, max_x, max_y) across all elements."""
    min_x = float("inf")
    min_y = float("inf")
    max_x = float("-inf")
    max_y = float("-inf")

    for el in elements:
        if el.get("isDeleted"):
            continue
        x = el.get("x", 0)
        y = el.get("y", 0)
        w = el.get("width", 0)
        h = el.get("height", 0)

        # For arrows/lines, points array defines the shape relative to x,y
        if el.get("type") in ("arrow", "line") and "points" in el:
            for px, py in el["points"]:
                min_x = min(min_x, x + px)
                min_y = min(min_y, y + py)
                max_x = max(max_x, x + px)
                max_y = max(max_y, y + py)
        else:
            min_x = min(min_x, x)
            min_y = min(min_y, y)
            max_x = max(max_x, x + abs(w))
            max_y = max(max_y, y + abs(h))

    if min_x == float("inf"):
        return (0, 0, 800, 600)

    return (min_x, min_y, max_x, max_y)


def render(
    excalidraw_path: Path,
    output_path: Path | None = None,
    scale: int = 2,
    max_width: int = 1920,
) -> Path:
    """Render an .excalidraw file to PNG. Returns the output PNG path."""
    # Import playwright here so validation errors show before import errors
    try:
        from playwright.sync_api import sync_playwright
    except ImportError:
        print("ERROR: playwright not installed.", file=sys.stderr)
        print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
        sys.exit(1)

    # Read and validate
    raw = excalidraw_path.read_text(encoding="utf-8")
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        print(f"ERROR: Invalid JSON in {excalidraw_path}: {e}", file=sys.stderr)
        sys.exit(1)

    errors = validate_excalidraw(data)
    if errors:
        print("ERROR: Invalid Excalidraw file:", file=sys.stderr)
        for err in errors:
            print(f"  - {err}", file=sys.stderr)
        sys.exit(1)

    # Compute viewport size from element bounding box
    elements = [e for e in data["elements"] if not e.get("isDeleted")]
    min_x, min_y, max_x, max_y = compute_bounding_box(elements)
    padding = 80
    diagram_w = max_x - min_x + padding * 2
    diagram_h = max_y - min_y + padding * 2

    # Cap viewport width, let height be natural
    vp_width = min(int(diagram_w), max_width)
    vp_height = max(int(diagram_h), 600)

    # Output path
    if output_path is None:
        output_path = excalidraw_path.with_suffix(".png")

    # Template path (same directory as this script)
    template_path = Path(__file__).parent / "render_template.html"
    if not template_path.exists():
        print(f"ERROR: Template not found at {template_path}", file=sys.stderr)
        sys.exit(1)

    template_url = template_path.as_uri()

    with sync_playwright() as p:
        try:
            browser = p.chromium.launch(headless=True)
        except Exception as e:
            if "Executable doesn't exist" in str(e) or "browserType.launch" in str(e):
                print("ERROR: Chromium not installed for Playwright.", file=sys.stderr)
                print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
                sys.exit(1)
            raise

        page = browser.new_page(
            viewport={"width": vp_width, "height": vp_height},
            device_scale_factor=scale,
        )

        # Load the template
        page.goto(template_url)

        # Wait for the ES module to load (imports from esm.sh)
        page.wait_for_function("window.__moduleReady === true", timeout=30000)

        # Inject the diagram data and render
        json_str = json.dumps(data)
        result = page.evaluate(f"window.renderDiagram({json_str})")

        if not result or not result.get("success"):
            error_msg = (
                result.get("error", "Unknown render error")
                if result
                else "renderDiagram returned null"
            )
            print(f"ERROR: Render failed: {error_msg}", file=sys.stderr)
            browser.close()
            sys.exit(1)

        # Wait for render completion signal
        page.wait_for_function("window.__renderComplete === true", timeout=15000)

        # Screenshot the SVG element
        svg_el = page.query_selector("#root svg")
        if svg_el is None:
            print("ERROR: No SVG element found after render.", file=sys.stderr)
            browser.close()
            sys.exit(1)

        svg_el.screenshot(path=str(output_path))
        browser.close()

    return output_path


def main() -> None:
    """Entry point for rendering Excalidraw JSON files to PNG."""
    parser = argparse.ArgumentParser(description="Render Excalidraw JSON to PNG")
    parser.add_argument("input", type=Path, help="Path to .excalidraw JSON file")
    parser.add_argument(
        "--output",
        "-o",
        type=Path,
        default=None,
        help="Output PNG path (default: same name with .png)",
    )
    parser.add_argument(
        "--scale", "-s", type=int, default=2, help="Device scale factor (default: 2)"
    )
    parser.add_argument(
        "--width",
        "-w",
        type=int,
        default=1920,
        help="Max viewport width (default: 1920)",
    )
    args = parser.parse_args()

    if not args.input.exists():
        print(f"ERROR: File not found: {args.input}", file=sys.stderr)
        sys.exit(1)

    png_path = render(args.input, args.output, args.scale, args.width)
    print(str(png_path))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
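The viewport sizing used by the script above is easiest to sanity-check in isolation. The following is a small standalone extract of that calculation (80px padding around the element bounding box, width capped at `max_width`, height floored at 600), not part of the script itself:

```python
def viewport(min_x, min_y, max_x, max_y, max_width=1920, padding=80):
    """Recompute the render() viewport: pad the bounding box, cap width, floor height."""
    diagram_w = max_x - min_x + padding * 2
    diagram_h = max_y - min_y + padding * 2
    return min(int(diagram_w), max_width), max(int(diagram_h), 600)

print(viewport(0, 0, 400, 200))    # small diagram: height floored -> (560, 600)
print(viewport(0, 0, 3000, 1000))  # wide diagram: width capped -> (1920, 1160)
```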
57
skills/excalidraw/references/render_template.html
Normal file
@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body { background: #ffffff; overflow: hidden; }
    #root { display: inline-block; }
    #root svg { display: block; }
  </style>
</head>
<body>
  <div id="root"></div>

  <script type="module">
    import { exportToSvg } from "https://esm.sh/@excalidraw/excalidraw?bundle";

    window.renderDiagram = async function(jsonData) {
      try {
        const data = typeof jsonData === "string" ? JSON.parse(jsonData) : jsonData;
        const elements = data.elements || [];
        const appState = data.appState || {};
        const files = data.files || {};

        // Force white background in appState
        appState.viewBackgroundColor = appState.viewBackgroundColor || "#ffffff";
        appState.exportWithDarkMode = false;

        const svg = await exportToSvg({
          elements: elements,
          appState: {
            ...appState,
            exportBackground: true,
          },
          files: files,
        });

        // Clear any previous render
        const root = document.getElementById("root");
        root.innerHTML = "";
        root.appendChild(svg);

        window.__renderComplete = true;
        window.__renderError = null;
        return { success: true, width: svg.getAttribute("width"), height: svg.getAttribute("height") };
      } catch (err) {
        window.__renderComplete = true;
        window.__renderError = err.message;
        return { success: false, error: err.message };
      }
    };

    // Signal that the module is loaded and ready
    window.__moduleReady = true;
  </script>
</body>
</html>
@@ -1,182 +0,0 @@
# Validation Reference

Checklists, validation algorithms, and common bug fixes.

---

## Pre-Flight Validation Algorithm

Run BEFORE writing the file:

```
FUNCTION validateDiagram(elements):
    errors = []

    // 1. Validate shape-text bindings
    FOR each shape IN elements WHERE shape.boundElements != null:
        FOR each binding IN shape.boundElements:
            textElement = findById(elements, binding.id)
            IF textElement == null:
                errors.append("Shape {shape.id} references missing text {binding.id}")
            ELSE IF textElement.containerId != shape.id:
                errors.append("Text containerId doesn't match shape")

    // 2. Validate arrow connections
    FOR each arrow IN elements WHERE arrow.type == "arrow":
        sourceShape = findShapeNear(elements, arrow.x, arrow.y)
        IF sourceShape == null:
            errors.append("Arrow {arrow.id} doesn't start from shape edge")

        finalPoint = arrow.points[arrow.points.length - 1]
        endX = arrow.x + finalPoint[0]
        endY = arrow.y + finalPoint[1]
        targetShape = findShapeNear(elements, endX, endY)
        IF targetShape == null:
            errors.append("Arrow {arrow.id} doesn't end at shape edge")

        IF arrow.points.length > 2:
            IF arrow.elbowed != true:
                errors.append("Arrow {arrow.id} missing elbowed:true")
            IF arrow.roundness != null:
                errors.append("Arrow {arrow.id} should have roundness:null")

    // 3. Validate unique IDs
    ids = [el.id for el in elements]
    duplicates = findDuplicates(ids)
    IF duplicates.length > 0:
        errors.append("Duplicate IDs: {duplicates}")

    // 4. Validate bounding boxes
    FOR each arrow IN elements WHERE arrow.type == "arrow":
        maxX = max(abs(p[0]) for p in arrow.points)
        maxY = max(abs(p[1]) for p in arrow.points)
        IF arrow.width < maxX OR arrow.height < maxY:
            errors.append("Arrow {arrow.id} bounding box too small")

    RETURN errors

FUNCTION findShapeNear(elements, x, y, tolerance=15):
    FOR each shape IN elements WHERE shape.type IN ["rectangle", "ellipse"]:
        edges = [
            (shape.x + shape.width/2, shape.y),                // top
            (shape.x + shape.width/2, shape.y + shape.height), // bottom
            (shape.x, shape.y + shape.height/2),               // left
            (shape.x + shape.width, shape.y + shape.height/2)  // right
        ]
        FOR each edge IN edges:
            IF abs(edge.x - x) < tolerance AND abs(edge.y - y) < tolerance:
                RETURN shape
    RETURN null
```
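The `findShapeNear` routine above translates almost line-for-line into real Python. A minimal sketch, assuming elements are plain dicts with `type`, `x`, `y`, `width`, and `height` keys:

```python
def find_shape_near(elements, x, y, tolerance=15):
    """Return the first rectangle/ellipse whose edge midpoint is within
    `tolerance` pixels of (x, y), or None if no shape qualifies."""
    for shape in elements:
        if shape.get("type") not in ("rectangle", "ellipse"):
            continue
        cx = shape["x"] + shape["width"] / 2
        cy = shape["y"] + shape["height"] / 2
        edges = [
            (cx, shape["y"]),                    # top
            (cx, shape["y"] + shape["height"]),  # bottom
            (shape["x"], cy),                    # left
            (shape["x"] + shape["width"], cy),   # right
        ]
        for ex, ey in edges:
            if abs(ex - x) < tolerance and abs(ey - y) < tolerance:
                return shape
    return None

box = {"type": "rectangle", "x": 100, "y": 100, "width": 200, "height": 80}
print(find_shape_near([box], 200, 100))  # near the top edge midpoint -> the box dict
print(find_shape_near([box], 500, 500))  # nowhere near an edge -> None
```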

---

## Checklists

### Before Generating

- [ ] Identified all components from codebase
- [ ] Mapped all connections/data flows
- [ ] Chose layout pattern (vertical, horizontal, hub-and-spoke)
- [ ] Selected color palette (default, AWS, Azure, K8s)
- [ ] Planned grid positions
- [ ] Created ID naming scheme

### During Generation

- [ ] Every labeled shape has BOTH shape AND text elements
- [ ] Shape has `boundElements: [{ "type": "text", "id": "{id}-text" }]`
- [ ] Text has `containerId: "{shape-id}"`
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`, `roughness: 0`
- [ ] Arrows have `startBinding` and `endBinding`
- [ ] No diamond shapes used
- [ ] Applied staggering formula for multiple arrows

### Arrow Validation (Every Arrow)

- [ ] Arrow `x,y` calculated from shape edge
- [ ] Final point offset = `targetEdge - sourceEdge`
- [ ] Arrow `width` = `max(abs(point[0]))`
- [ ] Arrow `height` = `max(abs(point[1]))`
- [ ] U-turn arrows have 40-60px clearance

### After Generation

- [ ] All `boundElements` IDs reference valid text elements
- [ ] All `containerId` values reference valid shapes
- [ ] All arrows start within 15px of shape edge
- [ ] All arrows end within 15px of shape edge
- [ ] No duplicate IDs
- [ ] Arrow bounding boxes match points
- [ ] File is valid JSON

---

## Common Bugs and Fixes

### Bug: Arrow appears disconnected/floating

**Cause**: Arrow `x,y` not calculated from shape edge.

**Fix**:
```
Rectangle bottom: arrow_x = shape.x + shape.width/2
                  arrow_y = shape.y + shape.height
```

### Bug: Arrow endpoint doesn't reach target

**Cause**: Final point offset calculated incorrectly.

**Fix**:
```
target_edge = (target.x + target.width/2, target.y)
offset_x = target_edge.x - arrow.x
offset_y = target_edge.y - arrow.y
Final point = [offset_x, offset_y]
```

### Bug: Multiple arrows from same source overlap

**Cause**: All arrows start from identical `x,y`.

**Fix**: Stagger start positions:
```
For 5 arrows from bottom edge:
arrow1.x = shape.x + shape.width * 0.2
arrow2.x = shape.x + shape.width * 0.35
arrow3.x = shape.x + shape.width * 0.5
arrow4.x = shape.x + shape.width * 0.65
arrow5.x = shape.x + shape.width * 0.8
```

### Bug: Callback arrow doesn't loop correctly

**Cause**: U-turn path lacks clearance.

**Fix**: Use 4-point path:
```
Points = [[0, 0], [clearance, 0], [clearance, -vert], [final_x, -vert]]
clearance = 40-60px
```

### Bug: Labels don't appear inside shapes

**Cause**: Using `label` property instead of separate text element.

**Fix**: Create TWO elements:
1. Shape with `boundElements` referencing text
2. Text with `containerId` referencing shape

### Bug: Arrows are curved, not 90-degree

**Cause**: Missing elbow properties.

**Fix**: Add all three:
```json
{
  "roughness": 0,
  "roundness": null,
  "elbowed": true
}
```
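The arrow bounding-box rule from the checklist (step 4 of the validation algorithm) can be checked mechanically. A small sketch, assuming arrows are dicts with `points`, `width`, and `height` keys:

```python
def arrow_bbox_ok(arrow):
    """Checklist rule: width/height must cover the largest absolute point offsets."""
    max_x = max(abs(p[0]) for p in arrow["points"])
    max_y = max(abs(p[1]) for p in arrow["points"])
    return arrow["width"] >= max_x and arrow["height"] >= max_y

good = {"points": [[0, 0], [120, 0], [120, 80]], "width": 120, "height": 80}
bad = {"points": [[0, 0], [200, 50]], "width": 100, "height": 50}
print(arrow_bbox_ok(good), arrow_bbox_ok(bad))  # True False
```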
@@ -1,10 +1,16 @@
---
name: mem0-memory
description: "Store and retrieve memories using Mem0 REST API. Use when: (1) storing information for future recall, (2) searching past conversations or facts, (3) managing user/agent memory contexts, (4) building conversational AI with persistent memory. Triggers on keywords like 'remember', 'recall', 'memory', 'store for later', 'what did I say about'."
description: "DEPRECATED: Replaced by opencode-memory plugin. See skills/memory/SKILL.md for current memory system."
compatibility: opencode
---

# Mem0 Memory
> ⚠️ **DEPRECATED**
>
> This skill is deprecated. The memory system has been replaced by the opencode-memory plugin.
>
> **See:** `skills/memory/SKILL.md` for the current memory system.

# Mem0 Memory (Legacy)

Store and retrieve memories via Mem0 REST API at `http://localhost:8000`.

@@ -108,6 +114,36 @@ Combine scopes for fine-grained control:
}
```

## Memory Categories

Memories are classified into 5 categories for organization:

| Category | Definition | Obsidian Path | Example |
|----------|------------|---------------|---------|
| `preference` | Personal preferences | `80-memory/preferences/` | UI settings, workflow styles |
| `fact` | Objective information | `80-memory/facts/` | Tech stack, role, constraints |
| `decision` | Choices with rationale | `80-memory/decisions/` | Tool selections, architecture |
| `entity` | People, orgs, systems | `80-memory/entities/` | Contacts, APIs, concepts |
| `other` | Everything else | `80-memory/other/` | General learnings |

### Metadata Pattern

Include category in metadata when storing:

```json
{
  "messages": [...],
  "user_id": "user123",
  "metadata": {
    "category": "preference",
    "source": "explicit"
  }
}
```

- `category`: One of preference, fact, decision, entity, other
- `source`: "explicit" (user requested) or "auto-capture" (automatic)

## Workflow Patterns

### Pattern 1: Remember User Preferences
@@ -137,6 +173,43 @@ curl -X POST http://localhost:8000/memories \
  -d '{"messages":[...], "run_id":"SESSION_ID"}'
```

## Dual-Layer Sync

Memories are stored in BOTH Mem0 AND the Obsidian CODEX vault for redundancy and accessibility.

### Sync Pattern

1. **Store in Mem0 first** - Get `mem0_id` from response
2. **Create Obsidian note** - In `80-memory/<category>/` using memory template
3. **Cross-reference**:
   - Add `mem0_id` to Obsidian note frontmatter
   - Update Mem0 metadata with `obsidian_ref` (file path)

### Example Flow

```bash
# 1. Store in Mem0
RESPONSE=$(curl -s -X POST http://localhost:8000/memories \
  -d '{"messages":[{"role":"user","content":"I prefer dark mode"}],"user_id":"m3tam3re","metadata":{"category":"preference","source":"explicit"}}')

# 2. Extract mem0_id
MEM0_ID=$(echo $RESPONSE | jq -r '.id')

# 3. Create Obsidian note (via REST API or MCP)
# Path: 80-memory/preferences/prefers-dark-mode.md
# Frontmatter includes: mem0_id: $MEM0_ID

# 4. Update Mem0 with Obsidian reference
curl -X PUT http://localhost:8000/memories/$MEM0_ID \
  -d '{"metadata":{"obsidian_ref":"80-memory/preferences/prefers-dark-mode.md"}}'
```

### When Obsidian Unavailable

- Store in Mem0 only
- Log sync failure
- Retry on next access

## Response Format

Memory objects include:
@@ -161,6 +234,45 @@ Verify API is running:
curl http://localhost:8000/health
```

### Pre-Operation Check

Before any memory operation, verify Mem0 is running:

```bash
if ! curl -s http://localhost:8000/health > /dev/null 2>&1; then
  echo "WARNING: Mem0 unavailable. Memory operations skipped."
  # Continue without memory features
fi
```

## Error Handling

### Mem0 Unavailable

When `curl http://localhost:8000/health` fails:
- Skip all memory operations
- Warn user: "Memory system unavailable. Mem0 not running at localhost:8000"
- Continue with degraded functionality

### Obsidian Unavailable

When vault sync fails:
- Store in Mem0 only
- Log: "Obsidian sync failed for memory [id]"
- Do not block user workflow

### API Errors

| Status | Meaning | Action |
|--------|---------|--------|
| 400 | Bad request | Check JSON format, required fields |
| 404 | Memory not found | Memory may have been deleted |
| 500 | Server error | Retry, check Mem0 logs |

### Graceful Degradation

Always continue core functionality even if the memory system fails. Memory is an enhancement, not a requirement.

## API Reference

See [references/api_reference.md](references/api_reference.md) for complete OpenAPI schema.

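The pre-operation check and the metadata pattern above combine naturally into a small stdlib-only helper. The endpoint and payload shape come from this document; the function names are illustrative, not part of the Mem0 API:

```python
import urllib.error
import urllib.request

MEM0_URL = "http://localhost:8000"  # default Mem0 endpoint used by this skill

def memory_payload(messages, user_id, category, source="explicit"):
    """Build a Mem0 store request body following the metadata pattern above."""
    assert category in {"preference", "fact", "decision", "entity", "other"}
    return {
        "messages": messages,
        "user_id": user_id,
        "metadata": {"category": category, "source": source},
    }

def mem0_available(timeout=2):
    """Pre-operation health check; callers should degrade gracefully on False."""
    try:
        with urllib.request.urlopen(f"{MEM0_URL}/health", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False

payload = memory_payload(
    [{"role": "user", "content": "I prefer dark mode"}],
    user_id="user123",
    category="preference",
)
print(payload["metadata"])  # {'category': 'preference', 'source': 'explicit'}
```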
337
skills/obsidian/SKILL.md
Normal file
@@ -0,0 +1,337 @@
---
name: obsidian
description: "Obsidian Local REST API integration for knowledge management. Use when: (1) Creating, reading, updating, or deleting notes in Obsidian vault, (2) Searching vault content by title, content, or tags, (3) Managing daily notes and journaling, (4) Working with WikiLinks and vault metadata. Triggers: 'Obsidian', 'note', 'vault', 'WikiLink', 'daily note', 'journal', 'create note'."
compatibility: opencode
---

# Obsidian

Knowledge management integration via Obsidian Local REST API for vault operations, note CRUD, search, and daily notes.

## Prerequisites

- **Obsidian Local REST API plugin** installed and enabled in Obsidian
- **API server running** on default port `27124` (or configured custom port)
- **Vault path** configured in plugin settings
- **API key** set (optional, if authentication enabled)

API endpoints are available at `http://127.0.0.1:27124` by default.

## Core Workflows

### List Vault Files

Get a list of all files in the vault:

```bash
curl -X GET "http://127.0.0.1:27124/list"
```

Returns an array of file objects with `path`, `mtime`, `ctime`, `size`.

### Get File Metadata

Retrieve metadata for a specific file:

```bash
curl -X GET "http://127.0.0.1:27124/get-file-info?path=Note%20Title.md"
```

Returns file metadata including tags, links, frontmatter.

### Create Note

Create a new note in the vault:

```bash
curl -X POST "http://127.0.0.1:27124/create-note" \
  -H "Content-Type: application/json" \
  -d '{"content": "# Note Title\n\nNote content..."}'
```

Use the `path` parameter for a specific location:
```json
{
  "content": "# Note Title\n\nNote content...",
  "path": "subdirectory/Note Title.md"
}
```

### Read Note

Read note content by path:

```bash
curl -X GET "http://127.0.0.1:27124/read-note?path=Note%20Title.md"
```

Returns note content as plain text or structured JSON with frontmatter parsing.

### Update Note

Modify an existing note:

```bash
curl -X PUT "http://127.0.0.1:27124/update-note" \
  -H "Content-Type: application/json" \
  -d '{"path": "Note Title.md", "content": "# Updated Title\n\nNew content..."}'
```

### Delete Note

Remove a note from the vault:

```bash
curl -X DELETE "http://127.0.0.1:27124/delete-note?path=Note%20Title.md"
```

**Warning**: This operation is irreversible. Confirm with the user before executing.

### Search Notes

Find notes by content, title, or tags:

```bash
# Content search
curl -X GET "http://127.0.0.1:27124/search?q=search%20term"

# Search with parameters
curl -X GET "http://127.0.0.1:27124/search?q=search%20term&path=subdirectory&context-length=100"
```

Returns an array of matches with file path and context snippets.

### Daily Notes

#### Get Daily Note

Retrieve or create the daily note for a specific date:

```bash
# Today
curl -X GET "http://127.0.0.1:27124/daily-note"

# Specific date (YYYY-MM-DD)
curl -X GET "http://127.0.0.1:27124/daily-note?date=2026-02-03"
```

Returns the daily note content, or creates the note using Obsidian's Daily Notes template.

#### Update Daily Note

Modify today's daily note:

```bash
curl -X PUT "http://127.0.0.1:27124/daily-note" \
  -H "Content-Type: application/json" \
  -d '{"content": "## Journal\n\nToday I learned..."}'
```

### Get Vault Info

Retrieve vault metadata:

```bash
curl -X GET "http://127.0.0.1:27124/vault-info"
```

Returns vault path, file count, and configuration details.

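Paths in these URLs must be URL-encoded (as in the read-note example above), and a small helper makes that hard to get wrong. `read_note_url` is an illustrative name, not part of the plugin's API:

```python
import urllib.parse

OBSIDIAN_URL = "http://127.0.0.1:27124"  # default Local REST API address

def read_note_url(path):
    """Build a read-note URL with the vault path fully URL-encoded
    (spaces become %20, slashes %2F)."""
    return f"{OBSIDIAN_URL}/read-note?path={urllib.parse.quote(path, safe='')}"

print(read_note_url("80-memory/preferences/prefers-dark-mode.md"))
# http://127.0.0.1:27124/read-note?path=80-memory%2Fpreferences%2Fprefers-dark-mode.md
```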
## Note Structure Patterns

### Frontmatter Conventions

Use consistent frontmatter for note types:

```yaml
---
date: 2026-02-03
created: 2026-02-03T10:30:00Z
type: note
tags: #tag1 #tag2
status: active
---
```

### WikiLinks

Reference other notes using Obsidian WikiLinks:
- `[[Note Title]]` - Link to note by title
- `[[Note Title|Alias]]` - Link with custom display text
- `[[Note Title#Heading]]` - Link to specific heading
- `![[Image.png]]` - Embed images or media

### Tagging

Use tags for categorization:
- `#tag` - Single-word tag
- `#nested/tag` - Hierarchical tags
- Tags in frontmatter for metadata
- Tags in content for inline categorization

## Workflow Examples

### Create Brainstorm Note

```bash
curl -X POST "http://127.0.0.1:27124/create-note" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "03-resources/brainstorms/2026-02-03-Topic.md",
    "content": "---\ndate: 2026-02-03\ncreated: 2026-02-03T10:30:00Z\ntype: brainstorm\nframework: pros-cons\nstatus: draft\ntags: #brainstorm #pros-cons\n---\n\n# Topic\n\n## Context\n\n## Options\n\n## Decision\n"
  }'
```

### Append to Daily Journal

```bash
# Get current daily note
NOTE=$(curl -s "http://127.0.0.1:27124/daily-note")

# Append content (use jq to JSON-escape the existing note body,
# which contains newlines and may contain quotes)
curl -X PUT "http://127.0.0.1:27124/daily-note" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg c "${NOTE}

## Journal Entry

Learned about Obsidian API integration." '{content: $c}')"
```

### Search and Link Notes

```bash
# Search for related notes
curl -s "http://127.0.0.1:27124/search?q=Obsidian"

# Create note with WikiLinks to found notes
curl -X POST "http://127.0.0.1:27124/create-note" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "02-areas/Obsidian API Guide.md",
    "content": "# Obsidian API Guide\n\nSee [[API Endpoints]] and [[Workflows]] for details."
  }'
```

## Integration with Other Skills

| From Obsidian | To skill | Handoff pattern |
|--------------|----------|----------------|
| Note created | brainstorming | Create brainstorm note with frontmatter |
| Daily note updated | reflection | Append conversation analysis to journal |
| Research note | research | Save research findings with tags |
| Project note | task-management | Link tasks to project notes |
| Plan document | plan-writing | Save generated plan to vault |
| Memory note | memory | Create/read memory notes in 80-memory/ |

## Best Practices

1. **Use paths consistently** - Follow PARA structure or vault conventions
2. **Include frontmatter** - Enables search and metadata queries
3. **Use WikiLinks** - Creates knowledge graph connections
4. **Validate paths** - Check file existence before operations
5. **Handle errors** - API may return 404 for non-existent files
6. **Escape special characters** - URL-encode paths with spaces or symbols
7. **Backup vault** - REST API operations modify files directly

---

## Memory Folder Conventions

The `80-memory/` folder stores dual-layer memories synced with Mem0.

### Structure

```
80-memory/
├── preferences/   # Personal preferences (UI, workflow, communication)
├── facts/         # Objective information (role, tech stack, constraints)
├── decisions/     # Choices with rationale (tool selections, architecture)
├── entities/      # People, organizations, systems, concepts
└── other/         # Everything else
```

### Naming Convention

Memory notes use kebab-case: `prefers-dark-mode.md`, `uses-typescript.md`

### Required Frontmatter

```yaml
---
type: memory
category:         # preference | fact | decision | entity | other
mem0_id:          # Mem0 memory ID (e.g., "mem_abc123")
source: explicit  # explicit | auto-capture
importance:       # critical | high | medium | low
created: 2026-02-12
updated: 2026-02-12
tags:
  - memory
sync_targets: []
---
```

### Key Fields

| Field | Purpose |
|-------|---------|
| `mem0_id` | Links to Mem0 entry for semantic search |
| `category` | Determines subfolder and classification |
| `source` | How memory was captured (explicit request vs auto) |
| `importance` | Priority for recall ranking |

---

## Memory Note Workflows

### Create Memory Note

When creating a memory note in the vault:

```bash
# Using REST API
curl -X POST "http://127.0.0.1:27124/create-note" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "80-memory/preferences/prefers-dark-mode.md",
    "content": "---\ntype: memory\ncategory: preference\nmem0_id: mem_abc123\nsource: explicit\nimportance: medium\ncreated: 2026-02-12\nupdated: 2026-02-12\ntags:\n  - memory\nsync_targets: []\n---\n\n# Prefers Dark Mode\n\n## Content\n\nUser prefers dark mode in all applications.\n\n## Context\n\nStated during UI preferences discussion on 2026-02-12.\n\n## Related\n\n- [[UI Settings]]\n"
  }'
```

### Read Memory Note

Read by path with URL encoding:

```bash
curl -X GET "http://127.0.0.1:27124/read-note?path=80-memory%2Fpreferences%2Fprefers-dark-mode.md"
```

### Search Memories

Search within the memory folder:

```bash
curl -X GET "http://127.0.0.1:27124/search?q=dark%20mode&path=80-memory"
```

### Update Memory Note

Update content and frontmatter:

```bash
curl -X PUT "http://127.0.0.1:27124/update-note" \
  -H "Content-Type: application/json" \
  -d '{
    "path": "80-memory/preferences/prefers-dark-mode.md",
    "content": "# Updated content..."
  }'
```

---

## Error Handling

Common HTTP status codes:
- `200 OK` - Success
- `404 Not Found` - File or resource doesn't exist
- `400 Bad Request` - Invalid parameters or malformed JSON
- `500 Internal Server Error` - Plugin or vault error

Check the API response body for error details before retrying operations.

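The kebab-case naming convention and category subfolders above can be derived mechanically. A sketch with an illustrative helper name, assuming the folder mapping shown in the structure tree:

```python
import re

def memory_note_path(category, title):
    """Kebab-case the title and place it in the matching 80-memory/ subfolder."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    folder = {
        "preference": "preferences",
        "fact": "facts",
        "decision": "decisions",
        "entity": "entities",
    }.get(category, "other")
    return f"80-memory/{folder}/{slug}.md"

print(memory_note_path("preference", "Prefers Dark Mode"))
# 80-memory/preferences/prefers-dark-mode.md
```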
126
skills/outline/SKILL.md
Normal file
@@ -0,0 +1,126 @@
---
name: outline
description: "Outline wiki integration for knowledge management and documentation workflows. Use when Opencode needs to interact with Outline for: (1) Creating and editing documents, (2) Searching and retrieving knowledge base content, (3) Managing document collections and hierarchies, (4) Handling document sharing and permissions, (5) Collaborative features like comments. Triggers: 'Outline', 'wiki', 'knowledge base', 'documentation', 'team docs', 'document in Outline', 'search Outline', 'Outline collection'."
compatibility: opencode
---

# Outline Wiki Integration

Outline is a team knowledge base and wiki platform. This skill provides guidance for Outline API operations and knowledge management workflows.

## Core Capabilities

### Document Operations

- **Create**: Create new documents with markdown content
- **Read**: Retrieve document content, metadata, and revisions
- **Update**: Edit existing documents, update titles and content
- **Delete**: Remove documents (with appropriate permissions)

### Collection Management

- **Organize**: Structure documents in collections and nested collections
- **Hierarchies**: Create parent-child relationships
- **Access Control**: Set permissions at collection level

### Search and Discovery

- **Full-text search**: Find documents by content
- **Metadata filters**: Search by collection, author, date
- **Advanced queries**: Combine multiple filters

### Sharing and Permissions

- **Public links**: Generate shareable document URLs
- **Team access**: Manage member permissions
- **Guest access**: Control external sharing

### Collaboration

- **Comments**: Add threaded discussions to documents
- **Revisions**: Track document history and changes
- **Notifications**: Stay updated on document activity

## Workflows

### Creating a New Document

1. Determine target collection
2. Create document with title and initial content
3. Set appropriate permissions
4. Share with relevant team members if needed

### Searching Knowledge Base

1. Formulate search query
2. Apply relevant filters (collection, date, author)
3. Review search results
4. Retrieve full document content when needed

### Organizing Documents

1. Review existing collection structure
2. Identify appropriate parent collection
3. Create or update documents in hierarchy
4. Update collection metadata if needed

### Document Collaboration

1. Add comments for feedback or discussion
2. Track revision history for changes
3. Notify stakeholders when needed
4. Resolve comments when addressed

## Integration Patterns

### Knowledge Capture

When capturing information from conversations or research:
- Create document in appropriate collection
- Use clear, descriptive titles
- Structure content with headers for readability
- Add tags for discoverability

### Documentation Updates

When updating existing documentation:
- Retrieve current document revision
- Make targeted, minimal changes
- Add comments explaining significant updates
- Share updates with relevant stakeholders

### Knowledge Retrieval

When searching for information:
- Start with broad search terms
- Refine with collection and metadata filters
- Review multiple relevant documents
- Cross-reference linked documents for context

## Common Use Cases

| Use Case | Recommended Approach |
|----------|---------------------|
| Project documentation | Create collection per project, organize by phase |
| Team guidelines | Use dedicated collection, group by topic |
| Meeting notes | Create documents with templates, tag by team |
| Knowledge capture | Search before creating, link to related docs |
| Onboarding resources | Create structured collection with step-by-step guides |

## Best Practices

- **Consistent naming**: Use clear, descriptive titles
- **Logical organization**: Group related documents in collections
- **Regular maintenance**: Review and update outdated content
- **Access control**: Set appropriate permissions for sensitive content
- **Searchability**: Use tags and metadata effectively
- **Collaboration**: Use comments for discussions, not content changes

## Handoff to Other Skills

| Output | Next Skill | Trigger |
|--------|------------|---------|
| Research findings | knowledge-management | "Organize this research in Outline" |
| Documentation draft | communications | "Share this document via email" |
| Task from document | task-management | "Create tasks from this outline" |
| Project plan | plan-writing | "Create project plan in Outline" |

@@ -79,6 +79,7 @@ Executable code (Python/Bash/etc.) for tasks that require deterministic reliabil

- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Opencode for patching or environment-specific adjustments
- **Dependencies**: Scripts with external dependencies (Python packages, system tools) require those dependencies to be registered in the repository's `flake.nix`. See Step 4 for details.

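As an illustration of this pattern, a script like `scripts/rotate_pdf.py` might look as follows. This is a hypothetical sketch, not the repository's actual script; it assumes the `pypdf` package, which is exactly the kind of external dependency that would need registering in `flake.nix`.

```python
#!/usr/bin/env python3
"""Rotate every page of a PDF. Hypothetical sketch of scripts/rotate_pdf.py."""
import argparse

def normalize_degrees(degrees: int) -> int:
    # PDF page rotation must be a multiple of 90; fold into [0, 360).
    if degrees % 90 != 0:
        raise ValueError("rotation must be a multiple of 90 degrees")
    return degrees % 360

def main(argv=None) -> None:
    parser = argparse.ArgumentParser(description="Rotate every page of a PDF")
    parser.add_argument("input")
    parser.add_argument("output")
    parser.add_argument("--degrees", type=int, default=90)
    args = parser.parse_args(argv)

    # External dependency: must be registered in flake.nix (see Step 4).
    from pypdf import PdfReader, PdfWriter

    reader = PdfReader(args.input)
    writer = PdfWriter()
    for page in reader.pages:
        page.rotate(normalize_degrees(args.degrees))
        writer.add_page(page)
    with open(args.output, "wb") as f:
        writer.write(f)

# Invoked as: python3 scripts/rotate_pdf.py in.pdf out.pdf --degrees 90
```

Keeping the pure logic (`normalize_degrees`) separate from the I/O makes the "test by actually running" step cheap.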
##### References (`references/`)

@@ -302,6 +303,37 @@ To begin implementation, start with the reusable resources identified above: `sc

Added scripts must be tested by actually running them, to confirm they are free of bugs and that their output matches expectations. If there are many similar scripts, testing a representative sample is enough to build confidence that they all work while keeping time to completion reasonable.

#### Register Dependencies in flake.nix

When scripts introduce external dependencies (Python packages or system tools), add them to the repository's `flake.nix`. Dependencies are defined once in `pythonEnv` (Python packages) or `packages` (system tools) inside the `skills-runtime` buildEnv. This runtime is exported as `packages.${system}.skills-runtime` and consumed by project flakes and home-manager — ensuring opencode always has the correct environment regardless of which project it runs in.

**Python packages** — add to the `pythonEnv` block with a comment referencing the skill:

```nix
pythonEnv = pkgs.python3.withPackages (ps:
  with ps; [
    # <skill-name>: <script>.py
    <package-name>
  ]);
```

**System tools** (e.g. `poppler-utils`, `ffmpeg`, `imagemagick`) — add to the `paths` list in the `skills-runtime` buildEnv:

```nix
skills-runtime = pkgs.buildEnv {
  name = "opencode-skills-runtime";
  paths = [
    pythonEnv
    # <skill-name>: needed by <script>
    pkgs.<tool-name>
  ];
};
```

**Convention**: Each entry must include a comment with `# <skill-name>: <reason>` so dependencies remain traceable to their originating skill.

After adding dependencies, verify they resolve: `nix develop --command python3 -c "import <package>"`

Any example files and directories not needed for the skill should be deleted. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.

#### Update SKILL.md

@@ -6,8 +6,8 @@ Usage:
     init_skill.py <skill-name> --path <path>
 
 Examples:
-    init_skill.py my-new-skill --path ~/.config/opencode/skill
-    init_skill.py my-api-helper --path .opencode/skill
+    init_skill.py my-new-skill --path ~/.config/opencode/skills
+    init_skill.py my-api-helper --path .opencode/skills
     init_skill.py custom-skill --path /custom/location
 """
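The corrected usage above implies argument handling along these lines. A minimal sketch assuming a standard argparse interface; the real script's internals are not shown in this diff.

```python
import argparse
from pathlib import Path

def parse_args(argv=None) -> argparse.Namespace:
    # Hypothetical reconstruction of init_skill.py's CLI surface.
    parser = argparse.ArgumentParser(
        description="Scaffold a new skill directory with example files"
    )
    parser.add_argument("skill_name",
                        help="Name of the skill, e.g. my-new-skill")
    parser.add_argument("--path", required=True, type=Path,
                        help="Parent directory, e.g. ~/.config/opencode/skills")
    return parser.parse_args(argv)

args = parse_args(["my-new-skill", "--path", "/tmp/skills"])
skill_dir = args.path / args.skill_name
```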