Compare commits

...

18 Commits

Author SHA1 Message Date
m3tm3re
39ac89f388 docs: update AGENTS.md and README.md for rules system, remove beads
- Add rules/ directory documentation to both files
- Update skill count from 25 to 15 modules
- Remove beads references (issue tracking removed)
- Update skills list with current active skills
- Document flake.nix as proper Nix flake (not flake=false)
- Add rules system integration section
- Clean up sisyphus planning artifacts
- Remove deprecated skills (memory, msteams, outlook)
2026-03-03 19:40:57 +01:00
m3tm3re
1bc81fb38c chore: update readme 2026-02-18 17:32:13 +01:00
m3tm3re
1f1eabd1ed feat(rules): add strict TDD enforcement ruleset with AI patterns 2026-02-18 17:30:20 +01:00
m3tm3re
5b204c95e4 test(rules): add final QA evidence and mark review complete
Final Review Results:
- F1 (Plan Compliance): OKAY - Must Have [12/12], Must NOT Have [8/8]
- F2 (Code Quality): OKAY - All files pass quality criteria
- F3 (Manual QA): OKAY - Scenarios [5/5 pass]
- F4 (Scope Fidelity): OKAY - No unaccounted changes

All 21 tasks complete (T1-T17 + F1-F4)
2026-02-17 19:31:24 +01:00
m3tm3re
4e9da366e4 test(rules): add integration test evidence
- All 11 rule files verified (exist, under limits)
- Full lib integration verified (11 paths returned)
- Context budget verified (975 < 1500)
- All instruction paths resolve to real files
- opencode.nix rules entry verified

Refs: T17 of rules-system plan
2026-02-17 19:18:39 +01:00
m3tm3re
8910413315 feat(rules): add initial rule files for concerns, languages, and frameworks
Concerns (6 files):
- coding-style.md (163 lines): patterns, anti-patterns, error handling, SOLID
- naming.md (105 lines): naming conventions table per language
- documentation.md (149 lines): docstrings, WHY vs WHAT, README standards
- testing.md (134 lines): AAA pattern, mocking philosophy, TDD
- git-workflow.md (118 lines): conventional commits, branch naming, PR format
- project-structure.md (82 lines): directory layout, entry points, config placement

Languages (4 files):
- python.md (224 lines): uv, ruff, pyright, pytest, pydantic, idioms, anti-patterns
- typescript.md (150 lines): strict mode, discriminated unions, satisfies, as const
- nix.md (129 lines): flake structure, module patterns, alejandra, anti-patterns
- shell.md (100 lines): set -euo pipefail, shellcheck, quoting, POSIX

Frameworks (1 file):
- n8n.md (42 lines): workflow design, node patterns, Error Trigger, security

Context budget: 975 lines (concerns + python) < 1500 limit
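The stated budget can be re-derived from the per-file line counts given earlier in this commit message:

```bash
# Sum the six concern files plus python.md
# (all counts taken from the commit message above)
concerns=$((163 + 105 + 149 + 134 + 118 + 82))
python_md=224
echo "context budget: $((concerns + python_md)) lines"
```

which reproduces the 975-line figure against the 1500-line limit.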

Refs: T6-T16 of rules-system plan
2026-02-17 19:05:45 +01:00
m3tm3re
d475dde398 feat(rules): add rules directory structure and usage documentation
- Create rules/{concerns,languages,frameworks}/ directory structure
- Add USAGE.md with flake.nix integration examples
- Add plan and notepad files for rules-system implementation

Refs: T1, T5 of rules-system plan
2026-02-17 18:59:43 +01:00
m3tm3re
6fceea7460 refactor: modernize agent configs, remove beads, update README
- Upgrade all agents from glm-4.7 to glm-5 with descriptive names
- Add comprehensive permission configs (bash, edit, external_directory) for all agents
- Remove .beads/ issue tracking directory
- Update README: fix opencode URL to opencode.ai, remove beads sections, formatting cleanup
2026-02-17 09:15:15 +01:00
m3tm3re
923e2f1eaa chore(plan): mark deployment verification as blocked (requires user action) 2026-02-14 08:34:06 +01:00
m3tm3re
231b9f2e0b chore(plan): mark tasks 11-14 and definition of done as complete 2026-02-14 08:31:32 +01:00
m3tm3re
c64d71f438 docs(memory): update skills for opencode-memory plugin, deprecate mem0 2026-02-14 08:22:59 +01:00
m3tm3re
1719f70452 feat(memory): add core memory skill, update Apollo prompt and Obsidian skill
- Add skills/memory/SKILL.md: dual-layer memory orchestration
- Update prompts/apollo.txt: add memory management responsibilities
- Update skills/obsidian/SKILL.md: add memory folder conventions
2026-02-12 20:02:51 +01:00
m3tm3re
0d6ff423be Add Memory System configuration to user profile 2026-02-12 19:54:54 +01:00
m3tm3re
79e6adb362 feat(mem0-memory): add memory categories and dual-layer sync patterns 2026-02-12 19:50:39 +01:00
m3tm3re
1e03c165e7 docs: Add Obsidian MCP server configuration documentation
- Create mcp-config.md in skills/memory/references/
- Document cyanheads/obsidian-mcp-server setup for Opencode
- Include environment variables, Nix config, and troubleshooting
- Reference for Task 4 of memory-system plan
2026-02-12 19:44:03 +01:00
m3tm3re
94b89da533 finalize doc-translator skill 2026-02-11 19:58:06 +01:00
sascha.koenig
b9d535b926 fix: use POST method for Outline signed URL upload
Change HTTP method from PUT to POST on line 77 for signed URL upload,
as Outline's S3 bucket only accepts POST requests.
2026-02-11 14:16:02 +01:00
sascha.koenig
46b9c0e4e3 fix: list_outline_collections.sh - correct jq parsing to output valid JSON array 2026-02-11 14:14:55 +01:00
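A common shape for the fix this commit describes is slurping newline-delimited JSON objects into a single array with `jq -s`; a minimal sketch (the script's actual filter is not shown in this diff):

```bash
# jq normally emits one JSON value per input line; -s (--slurp) collects
# them all into a single valid JSON array, -c prints it compactly
printf '%s\n' '{"id":"a"}' '{"id":"b"}' | jq -c -s '.'
# -> [{"id":"a"},{"id":"b"}]
```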
54 changed files with 3951 additions and 4892 deletions

.beads/.gitignore vendored

@@ -1,39 +0,0 @@
# SQLite databases
*.db
*.db?*
*.db-journal
*.db-wal
*.db-shm
# Daemon runtime files
daemon.lock
daemon.log
daemon.pid
bd.sock
sync-state.json
last-touched
# Local version tracking (prevents upgrade notification spam after git ops)
.local_version
# Legacy database files
db.sqlite
bd.db
# Worktree redirect file (contains relative path to main repo's .beads/)
# Must not be committed as paths would be wrong in other clones
redirect
# Merge artifacts (temporary files from 3-way merge)
beads.base.jsonl
beads.base.meta.json
beads.left.jsonl
beads.left.meta.json
beads.right.jsonl
beads.right.meta.json
# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.
# The JSONL files (issues.jsonl, interactions.jsonl) and config files
# are tracked by git by default since no pattern above ignores them.


@@ -1,81 +0,0 @@
# Beads - AI-Native Issue Tracking
Welcome to Beads! This repository uses **Beads** for issue tracking - a modern, AI-native tool designed to live directly in your codebase alongside your code.
## What is Beads?
Beads is issue tracking that lives in your repo, making it perfect for AI coding agents and developers who want their issues close to their code. No web UI required - everything works through the CLI and integrates seamlessly with git.
**Learn more:** [github.com/steveyegge/beads](https://github.com/steveyegge/beads)
## Quick Start
### Essential Commands
```bash
# Create new issues
bd create "Add user authentication"
# View all issues
bd list
# View issue details
bd show <issue-id>
# Update issue status
bd update <issue-id> --status in_progress
bd update <issue-id> --status done
# Sync with git remote
bd sync
```
### Working with Issues
Issues in Beads are:
- **Git-native**: Stored in `.beads/issues.jsonl` and synced like code
- **AI-friendly**: CLI-first design works perfectly with AI coding agents
- **Branch-aware**: Issues can follow your branch workflow
- **Always in sync**: Auto-syncs with your commits
## Why Beads?
**AI-Native Design**
- Built specifically for AI-assisted development workflows
- CLI-first interface works seamlessly with AI coding agents
- No context switching to web UIs
🚀 **Developer Focused**
- Issues live in your repo, right next to your code
- Works offline, syncs when you push
- Fast, lightweight, and stays out of your way
🔧 **Git Integration**
- Automatic sync with git commits
- Branch-aware issue tracking
- Intelligent JSONL merge resolution
## Get Started with Beads
Try Beads in your own projects:
```bash
# Install Beads
curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash
# Initialize in your repo
bd init
# Create your first issue
bd create "Try out Beads"
```
## Learn More
- **Documentation**: [github.com/steveyegge/beads/docs](https://github.com/steveyegge/beads/tree/main/docs)
- **Quick Start Guide**: Run `bd quickstart`
- **Examples**: [github.com/steveyegge/beads/examples](https://github.com/steveyegge/beads/tree/main/examples)
---
*Beads: Issue tracking that moves at the speed of thought*


@@ -1,62 +0,0 @@
# Beads Configuration File
# This file configures default behavior for all bd commands in this repository
# All settings can also be set via environment variables (BD_* prefix)
# or overridden with command-line flags
# Issue prefix for this repository (used by bd init)
# If not set, bd init will auto-detect from directory name
# Example: issue-prefix: "myproject" creates issues like "myproject-1", "myproject-2", etc.
# issue-prefix: ""
# Use no-db mode: load from JSONL, no SQLite, write back after each command
# When true, bd will use .beads/issues.jsonl as the source of truth
# instead of SQLite database
# no-db: false
# Disable daemon for RPC communication (forces direct database access)
# no-daemon: false
# Disable auto-flush of database to JSONL after mutations
# no-auto-flush: false
# Disable auto-import from JSONL when it's newer than database
# no-auto-import: false
# Enable JSON output by default
# json: false
# Default actor for audit trails (overridden by BD_ACTOR or --actor)
# actor: ""
# Path to database (overridden by BEADS_DB or --db)
# db: ""
# Auto-start daemon if not running (can also use BEADS_AUTO_START_DAEMON)
# auto-start-daemon: true
# Debounce interval for auto-flush (can also use BEADS_FLUSH_DEBOUNCE)
# flush-debounce: "5s"
# Git branch for beads commits (bd sync will commit to this branch)
# IMPORTANT: Set this for team projects so all clones use the same sync branch.
# This setting persists across clones (unlike database config which is gitignored).
# Can also use BEADS_SYNC_BRANCH env var for local override.
# If not set, bd sync will require you to run 'bd config set sync.branch <branch>'.
# sync-branch: "beads-sync"
# Multi-repo configuration (experimental - bd-307)
# Allows hydrating from multiple repositories and routing writes to the correct JSONL
# repos:
# primary: "." # Primary repo (where this database lives)
# additional: # Additional repos to hydrate from (read-only)
# - ~/beads-planning # Personal planning repo
# - ~/work-planning # Work planning repo
# Integration settings (access with 'bd config get/set')
# These are stored in the database, not in this file:
# - jira.url
# - jira.project
# - linear.url
# - linear.api-key
# - github.org
# - github.repo


@@ -1,15 +0,0 @@
{"id":"AGENTS-1jw","title":"Athena prompt: Convert to numbered responsibility format","description":"Athena prompt uses bullet points under 'Core Capabilities' section instead of numbered lists. Per agent-development skill best practices, responsibilities should be numbered (1, 2, 3) for clarity. Update prompts/athena.txt to use numbered format.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:16.133701271+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:32:26.165270695+01:00","closed_at":"2026-01-26T19:32:26.165270695+01:00","close_reason":"Converted responsibility subsections from ### numbered headers to numbered list format (1., 2., 3., 4.) with bold titles"}
{"id":"AGENTS-27m","title":"Create prompts/chiron-forge.txt with Chiron-Forge's build/execution mode system prompt","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T18:47:31.083994237+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T18:48:45.012894731+01:00","closed_at":"2026-02-03T18:48:45.012894731+01:00","close_reason":"Created prompts/chiron-forge.txt with Chiron-Forge's build/execution mode system prompt (3185 chars, 67 lines)"}
{"id":"AGENTS-7gt","title":"Athena prompt: Rename Core Capabilities to exact header","description":"Athena prompt uses 'Core Capabilities' section header instead of 'Your Core Responsibilities:'. Per agent-development skill guidelines, the exact header 'Your Core Responsibilities:' should be used for consistency. Update prompts/athena.txt to use the exact recommended header.","status":"closed","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:07.223102836+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:31:19.080626796+01:00","closed_at":"2026-01-26T19:31:19.080626796+01:00","close_reason":"Renamed 'Core Capabilities' section header to exact 'Your Core Responsibilities:' in prompts/athena.txt"}
{"id":"AGENTS-8ie","title":"Set up PARA work structure with 10 Basecamp projects","description":"Create 01-projects/work/ structure with project folders for all Basecamp projects. Each project needs: _index.md (MOC with Basecamp link), meetings/, decisions/, notes/. Also set up 02-areas/work/ for ongoing responsibilities.","status":"closed","priority":0,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.048622809+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:57:09.033627658+01:00","closed_at":"2026-01-28T18:57:09.033627658+01:00","close_reason":"Created complete PARA work structure: 01-projects/work/ with 10 project folders (each with _index.md, meetings/, decisions/, notes/), 02-areas/work/ with 5 area files. Projects use placeholder names - user can customize with actual Basecamp data."}
{"id":"AGENTS-9cs","title":"Configure basecamp skill with real projects","description":"Configure basecamp skill to work with real projects. Need to: get user's Basecamp projects, map them to PARA structure, test morning planning workflow with Basecamp todos.","status":"closed","priority":0,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.04844425+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:57:14.097333313+01:00","closed_at":"2026-01-28T18:57:14.097333313+01:00","close_reason":"Enhanced basecamp skill with project mapping configuration. Added section on mapping Basecamp projects to PARA structure, with configuration examples and usage patterns. Ready for user to fetch actual projects and set up mappings."}
{"id":"AGENTS-b74","title":"Create skills/msteams/SKILL.md with MS Teams Graph API integration documentation","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T18:50:21.728376088+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T18:52:08.609302234+01:00","closed_at":"2026-02-03T18:52:08.609302234+01:00","close_reason":"Created skills/msteams/SKILL.md with complete MS Teams Graph API integration documentation covering channels, messages, meetings, and chat operations"}
{"id":"AGENTS-ch2","title":"Create skills/outlook/SKILL.md with Outlook Graph API documentation","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T18:49:24.814232462+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T18:54:30.910986438+01:00","closed_at":"2026-02-03T18:54:30.910986438+01:00","close_reason":"Completed: Created skills/outlook/SKILL.md with Outlook Graph API documentation including mail CRUD, calendar, contacts, folders, and workflow examples. Validation passed."}
{"id":"AGENTS-der","title":"Create Outline skill for MCP integration","status":"closed","priority":2,"issue_type":"feature","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.042886345+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:51:21.662507568+01:00","closed_at":"2026-01-28T18:51:21.662507568+01:00","close_reason":"Created outline/SKILL.md with comprehensive workflows, tool references, and integration patterns. Added references/outline-workflows.md and references/export-patterns.md for detailed examples."}
{"id":"AGENTS-fac","title":"Design Teams transcript processing workflow (manual)","description":"Design manual workflow for Teams transcript processing: DOCX upload → extract text → AI analysis → meeting note + action items → optional Basecamp sync. Create templates and integration points.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.052076817+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:56:34.567325504+01:00","closed_at":"2026-01-28T18:56:34.567325504+01:00","close_reason":"Created comprehensive Teams transcript workflow guide in skills/meeting-notes/references/teams-transcript-workflow.md. Includes: manual step-by-step process, Python script for DOCX extraction, AI analysis prompts, Obsidian templates, Basecamp sync integration, troubleshooting guide."}
{"id":"AGENTS-in5","title":"Athena prompt: Standardize section headers","description":"Athena prompt uses 'Ethical Guidelines' and 'Methodological Rigor' headers instead of standard 'Quality Standards' and 'Edge Cases' headers. While semantically equivalent, skill recommends exact headers for consistency. Consider renaming in prompts/athena.txt.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:21.720932741+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:33:15.959382333+01:00","closed_at":"2026-01-26T19:33:15.959382333+01:00","close_reason":"Renamed '## Ethical Guidelines' to '## Quality Standards' for consistency with agent-development skill guidelines"}
{"id":"AGENTS-lyd","title":"Athena agent: Add explicit mode field","description":"Athena agent is missing the explicit 'mode': 'subagent' field. Per agent-development skill guidelines, all agents should explicitly declare mode for clarity. Current config relies on default which makes intent unclear.","status":"closed","priority":0,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:31:46.255196119+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:30:46.191545632+01:00","closed_at":"2026-01-26T19:30:46.191545632+01:00","close_reason":"Added explicit 'mode': 'subagent' field to athena agent in agent/agents.json"}
{"id":"AGENTS-mfw","title":"Athena agent: Add temperature setting","description":"Athena agent lacks explicit temperature configuration. Per agent-development skill, research/analysis agents should use temperature 0.0-0.2 for focused, deterministic, consistent results. Add 'temperature': 0.1 to agent config in agents.json.","status":"closed","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:31:55.726506579+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:31:06.905697638+01:00","closed_at":"2026-01-26T19:31:06.905697638+01:00","close_reason":"Added 'temperature': 0.1 to athena agent in agent/agents.json for focused, deterministic results"}
{"id":"AGENTS-mvv","title":"Enhance daily routines with work context","status":"closed","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-28T18:47:56.066628593+01:00","created_by":"m3tm3re","updated_at":"2026-01-28T18:56:34.576536473+01:00","closed_at":"2026-01-28T18:56:34.576536473+01:00","close_reason":"Enhanced daily-routines skill with full work context integration. Added sections for: morning planning with Basecamp/Outline, evening reflection with work metrics, weekly review with project status tracking, work area health review, work inbox processing."}
{"id":"AGENTS-o45","title":"Agent development: Document validation script availability","description":"The agent-development skill references scripts/validate-agent.sh but this script doesn't exist in the repository. Consider either: (1) creating the validation script, or (2) removing the reference and only documenting the python3 alternative.","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:27.325525742+01:00","created_by":"m3tm3re","updated_at":"2026-01-26T19:34:17.846875543+01:00","closed_at":"2026-01-26T19:34:17.846875543+01:00","close_reason":"Removed references to non-existent scripts/validate-agent.sh and documented python3 validation as the primary method"}
{"id":"AGENTS-o7l","title":"Create agents.json with 6 agent definitions","status":"closed","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-02-03T20:13:02.959856824+01:00","created_by":"m3tm3re","updated_at":"2026-02-03T20:13:58.186033248+01:00","closed_at":"2026-02-03T20:13:58.186033248+01:00","close_reason":"Created agents.json with all 6 agent definitions (chiron, chiron-forge, hermes, athena, apollo, calliope) with proper mode, model, prompt references, and permissions. Verified with Python JSON validation."}


@@ -1,4 +0,0 @@
{
"database": "beads.db",
"jsonl_export": "issues.jsonl"
}

.envrc Normal file

@@ -0,0 +1 @@
use flake

.gitignore vendored

@@ -8,3 +8,7 @@
.sidecar-start.sh
.sidecar-base
.td-root
# Nix / direnv
.direnv/
result


@@ -1,7 +0,0 @@
{
"active_plan": "/home/m3tam3re/p/AI/AGENTS/.sisyphus/plans/chiron-agent-framework.md",
"started_at": "2026-02-03T19:07:36.011Z",
"session_ids": ["ses_3db18d3abffeIjqxbVVqNCz5As", "ses_3db16c6daffeKLCdiQiDREMZ3C"],
"plan_name": "chiron-agent-framework",
"completed_at": "2026-02-03T20:09:00.000Z"
}

File diff suppressed because it is too large


@@ -1,748 +0,0 @@
# Agent Permissions Refinement
## TL;DR
> **Quick Summary**: Refine OpenCode agent permissions for Chiron (planning) and Chiron-Forge (build) to implement 2025 AI security best practices with principle of least privilege, human-in-the-loop for critical actions, and explicit guardrails against permission bypass.
> **Deliverables**:
> - Updated `agents/agents.json` with refined permissions for Chiron and Chiron-Forge
> - Critical bug fix: Duplicate `external_directory` key in Chiron config
> - Enhanced secret blocking with additional patterns
> - Bash injection prevention rules
> - Git protection against secret commits and repo hijacking
> **Estimated Effort**: Medium
> **Parallel Execution**: NO - sequential changes to single config file
> **Critical Path**: Fix duplicate key → Apply Chiron permissions → Apply Chiron-Forge permissions → Validate
---
## Context
### Original Request
User wants to refine agent permissions for:
- **Chiron**: Planning agent with read-only access, restricted to read-only subagents, no file editing, can create beads issues
- **Chiron-Forge**: Build agent with write access restricted to ~/p/**, git commits allowed but git push asks, package install commands ask
- **General**: Sane defaults that are secure but open enough for autonomous work
### Interview Summary
**Key Discussions**:
- Chiron: Read-only planning, no file editing, bash denied except for `bd *` commands, external_directory ~/p/** only, task permission to restrict subagents to explore/librarian/athena + chiron-forge for handoff
- Chiron-Forge: Write access restricted to ~/p/**, git commits allow / git push ask, package install commands ask, git config deny
- Workspace path: ~/p/** is symlink to ~/projects/personal/** (just replacing path reference)
- Bash security: Block all bash redirect patterns (echo >, cat >, tee, etc.)
**Research Findings**:
- OpenCode supports granular permission rules with wildcards, last-match-wins
- 2025 best practices: Principle of least privilege, tiered permissions (read-only auto, destructive ask, JIT privileges), human-in-the-loop for critical actions
- Security hardening: Block command injection vectors, prevent git secret commits, add comprehensive secret blocking patterns
### Metis Review
**Critical Issues Identified**:
1. **Duplicate `external_directory` key** in Chiron config (lines 8-9 and 27) - second key overrides first, breaking intended behavior
2. **Bash edit bypass**: Even with `edit: deny`, bash can write files via redirection (`echo "x" > file.txt`, `cat >`, `tee`)
3. **Git secret protection**: Agent could commit secrets (read .env, then git commit .env)
4. **Git config hijacking**: Agent could modify .git/config to push to attacker-controlled repo
5. **Command injection**: Malicious content could execute via `$()`, backticks, `eval`, `source`
6. **Secret blocking incomplete**: Missing patterns for `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`
**Guardrails Applied**:
- Fix duplicate external_directory key (use single object with catch-all `"*": "ask"` after specific rules)
- Add bash file write protection patterns (echo >, cat >, printf >, tee, > operators)
- Add git secret protection (`git add *.env*`: deny, `git commit *.env*`: deny)
- Add git config protection (`git config *`: deny for Chiron-Forge)
- Add bash injection prevention (`$(*`, `` `*``, `eval *`, `source *`)
- Expand secret blocking with additional patterns
- Add /run/agenix/* to read deny list
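The guardrails above map onto an OpenCode-style permission object. A minimal, illustrative sketch — the exact rule patterns, the `bd *` entry, and the `~/p/**` path are assumptions for illustration, not the repository's actual agents.json:

```bash
# Illustrative only: a Chiron-style read-only permission object
cat > /tmp/chiron-sketch.json <<'EOF'
{
  "chiron": {
    "permission": {
      "edit": "deny",
      "bash": { "bd *": "allow", "*": "deny" },
      "external_directory": { "~/p/**": "allow", "*": "ask" }
    }
  }
}
EOF
# Validate: parses cleanly and carries exactly these permission keys
jq -e '.chiron.permission | keys == ["bash", "edit", "external_directory"]' \
  /tmp/chiron-sketch.json && echo "permission object OK"
```

Note that jq, like most JSON parsers, collapses duplicate keys while parsing, so any key check runs on the already-parsed object.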
---
## Work Objectives
### Core Objective
Refine OpenCode agent permissions in `agents/agents.json` to implement security hardening based on 2025 AI agent best practices while maintaining autonomous workflow capabilities.
### Concrete Deliverables
- Updated `agents/agents.json` with:
- Chiron: Read-only permissions, subagent restrictions, bash denial (except `bd *`), no file editing
- Chiron-Forge: Write access scoped to ~/p/**, git commit allow / push ask, package install ask, git config deny
- Both: Enhanced secret blocking, bash injection prevention, git secret protection
### Definition of Done
- [x] Permission configuration updated in `agents/agents.json`
- [x] JSON syntax valid (no duplicate keys, valid structure)
- [x] Workspace path validated (~/p/** exists and is correct)
- [x] Acceptance criteria tests pass (via manual verification)
### Must Have
- Chiron cannot edit files directly
- Chiron cannot write files via bash (redirects blocked)
- Chiron restricted to read-only subagents + chiron-forge for handoff
- Chiron-Forge can only write to ~/p/**
- Chiron-Forge cannot git config
- Both agents block secret file reads
- Both agents prevent command injection
- Git operations cannot commit secrets
- No duplicate keys in permission configuration
### Must NOT Have (Guardrails)
- **Edit bypass via bash**: No bash redirection patterns that allow file writes when `edit: deny`
- **Git secret commits**: No ability to git add/commit .env or credential files
- **Repo hijacking**: No git config modification allowed for Chiron-Forge
- **Command injection**: No `$()`, backticks, `eval`, `source` execution via bash
- **Write scope escape**: Chiron-Forge cannot write outside ~/p/** without asking
- **Secret exfiltration**: No access to .env, .ssh, .gnupg, credentials, secrets, .pem, .key, /run/agenix
- **Unrestricted bash for Chiron**: Only `bd *` commands allowed
---
## Verification Strategy (MANDATORY)
> This is configuration work, not code development. Manual verification is required after deployment.
### Test Decision
- **Infrastructure exists**: YES (home-manager deployment)
- **User wants tests**: NO (Manual-only verification)
- **Framework**: None
### Manual Verification Procedures
Each TODO includes EXECUTABLE verification procedures that users can run to validate changes.
**Verification Commands to Run After Deployment:**
1. **JSON Syntax Validation**:
```bash
# Validate JSON structure and no duplicate keys
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0 (valid JSON)
# Check for duplicate keys (manual review of chiron permission object)
# Expected: Single external_directory key, no other duplicates
```
2. **Workspace Path Validation**:
```bash
ls -la ~/p/ 2>&1
# Expected: Directory exists, shows contents (likely symlink to ~/projects/personal/)
```
3. **After Deployment - Chiron Read-Only Test** (manual):
- Have Chiron attempt to edit a test file
- Expected: Permission denied with clear error message
- Have Chiron attempt to write via bash (echo "test" > /tmp/test.txt)
- Expected: Permission denied
- Have Chiron run `bd ready` command
- Expected: Command succeeds, returns JSON output with issue list
- Have Chiron attempt to invoke build-capable subagent (sisyphus-junior)
- Expected: Permission denied
4. **After Deployment - Chiron Workspace Access** (manual):
- Have Chiron read file within ~/p/**
- Expected: Success, returns file contents
- Have Chiron read file outside ~/p/**
- Expected: Permission denied or ask user
- Have Chiron delegate to explore/librarian/athena
- Expected: Success, subagent executes
5. **After Deployment - Chiron-Forge Write Access** (manual):
- Have Chiron-Forge write test file in ~/p/** directory
- Expected: Success, file created
- Have Chiron-Forge attempt to write file to /tmp
- Expected: Ask user for approval
- Have Chiron-Forge run `git add` and `git commit -m "test"`
- Expected: Success, commit created without asking
- Have Chiron-Forge attempt `git push`
- Expected: Ask user for approval
- Have Chiron-Forge attempt `git config`
- Expected: Permission denied
- Have Chiron-Forge attempt `npm install lodash`
- Expected: Ask user for approval
6. **After Deployment - Secret Blocking Tests** (manual):
- Attempt to read .env file with both agents
- Expected: Permission denied
- Attempt to read /run/agenix/ with Chiron
- Expected: Permission denied
- Attempt to read .env.example (should be allowed)
- Expected: Success
7. **After Deployment - Bash Injection Prevention** (manual):
- Have agent attempt bash -c "$(cat /malicious)"
- Expected: Permission denied
- Have agent attempt bash -c "`cat /malicious`"
- Expected: Permission denied
- Have agent attempt eval command
- Expected: Permission denied
8. **After Deployment - Git Secret Protection** (manual):
- Have agent attempt `git add .env`
- Expected: Permission denied
- Have agent attempt `git commit .env`
- Expected: Permission denied
9. **Deployment Verification**:
```bash
# After home-manager switch, verify config is embedded correctly
cat ~/.config/opencode/config.json | jq '.agent.chiron.permission.external_directory'
# Expected: Shows ~/p/** rule, no duplicate keys
# Verify agents load without errors
# Expected: No startup errors when launching OpenCode
```
---
## Execution Strategy
### Parallel Execution Waves
> Single file sequential changes - no parallelization possible.
```
Single-Threaded Execution:
Task 1: Fix duplicate external_directory key
Task 2: Apply Chiron permission updates
Task 3: Apply Chiron-Forge permission updates
Task 4: Validate configuration
```
### Dependency Matrix
| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 2, 3 | None (must start) |
| 2 | 1 | 4 | 3 |
| 3 | 1 | 4 | 2 |
| 4 | 2, 3 | None | None (validation) |
### Agent Dispatch Summary
| Task | Recommended Agent |
|------|-----------------|
| 1 | delegate_task(category="quick", load_skills=["git-master"]) |
| 2 | delegate_task(category="quick", load_skills=["git-master"]) |
| 3 | delegate_task(category="quick", load_skills=["git-master"]) |
| 4 | User (manual verification) |
---
## TODOs
> Implementation tasks for agent configuration changes. Each task MUST include acceptance criteria with executable verification.
- [x] 1. Fix Duplicate external_directory Key in Chiron Config
**What to do**:
- Remove duplicate `external_directory` key from Chiron permission object
- Consolidate into single object with specific rule + catch-all `"*": "ask"`
- Replace `~/projects/personal/**` with `~/p/**` (symlink to same directory)
**Must NOT do**:
- Leave duplicate keys (second key overrides first, breaks config)
- Skip workspace path validation (verify ~/p/** exists)
**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple JSON edit, single file change, no complex logic
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (no investigation required)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: Tasks 2, 3 (depends on clean config)
- **Blocked By**: None (can start immediately)
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `agents/agents.json:1-135` - Current agent configuration structure (JSON format, permission object structure)
- `agents/agents.json:7-29` - Chiron permission object (current state with duplicate key)
**API/Type References** (contracts to implement against):
- OpenCode permission schema: `{"permission": {"bash": {...}, "edit": "...", "external_directory": {...}, "task": {...}}}`
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user decisions and requirements
- Metis analysis: Critical issue #1 - Duplicate external_directory key
**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission system documentation (allow/ask/deny, wildcards, last-match-wins)
- OpenCode docs: https://opencode.ai/docs/agents/ - Agent configuration format
**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json` - Target file to modify, shows current structure and duplicate key bug
- Interview draft - Contains all user decisions (~/p/** path, subagent restrictions, etc.)
- OpenCode permissions docs - Explains permission system mechanics (last-match-wins critical for rule ordering)
- Metis analysis - Identifies the duplicate key bug that MUST be fixed
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0 (valid JSON)
# Verify single external_directory key in chiron permission object
jq '.chiron.permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep external_directory | wc -l
# Assert: Output is "1" (exactly one external_directory key)
# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")
```
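Note that JSON parsers collapse duplicate object keys during parsing (the last value wins), so a parse-based key count reports "1" even while the raw file still contains the duplicate; a raw-text count is the stricter check. A minimal demonstration (assumes only POSIX sh, python3, and grep):

```shell
# JSON parsers keep only the last of duplicate object keys, so any parse-based
# key count reports 1 even when the raw text contains the key twice.
tmp=$(mktemp)
printf '{"external_directory": {"a": "ask"}, "external_directory": {"b": "allow"}}' > "$tmp"

parsed_count=$(python3 -c "import json, sys; print(len(json.load(open(sys.argv[1]))))" "$tmp")
raw_count=$(grep -o '"external_directory"' "$tmp" | wc -l)

echo "parsed keys: $parsed_count, raw occurrences: $raw_count"
rm -f "$tmp"
```

A `grep -c '"external_directory"'` against the raw `agents/agents.json` would catch the duplicate that the `keys`-based check cannot.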
**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] external_directory key count output (should be "1")
- [x] Workspace path ls output (shows directory exists)
**Commit**: NO (group with Tasks 2 and 3)
- [x] 2. Apply Chiron Permission Updates
**What to do**:
- Set `edit` to `"deny"` (planning agent should not write files)
- Set `bash` permissions to deny all except `bd *`:
```json
"bash": {
"*": "deny",
"bd *": "allow"
}
```
- Set `external_directory` to `~/p/**` with catch-all ask:
```json
"external_directory": {
"~/p/**": "allow",
"*": "ask"
}
```
- Add `task` permission to restrict subagents:
```json
"task": {
"*": "deny",
"explore": "allow",
"librarian": "allow",
"athena": "allow",
"chiron-forge": "allow"
}
```
- Add `/run/agenix/*` to read deny list
- Add expanded secret blocking patterns: `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`
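Together, the last two bullets extend the read deny list roughly as follows (a sketch; the existing blocking patterns at `agents/agents.json:11-24` stay in place):
```json
"read": {
  "/run/agenix/*": "deny",
  ".local/share/*": "deny",
  ".cache/*": "deny",
  "*.db": "deny",
  "*.keychain": "deny",
  "*.p12": "deny"
}
```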
**Must NOT do**:
- Allow bash file write operators (echo >, cat >, tee, etc.); the deny patterns are added in Task 3 for both agents
- Allow chiron to invoke build-capable subagents beyond chiron-forge
- Skip webfetch permission (should be "allow" for research capability)
**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in draft)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 3)
- **Blocks**: Task 4
- **Blocked By**: Task 1
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `agents/agents.json:11-24` - Current Chiron read permissions with secret blocking patterns
- `agents/agents.json:114-132` - Athena permission object (read-only subagent reference pattern)
**API/Type References** (contracts to implement against):
- OpenCode task permission schema: `{"task": {"agent-name": "allow"}}`
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron permission decisions
- Metis analysis: Guardrails #7, #8 - Secret blocking patterns, task permission implementation
**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/agents/#task-permissions - Task permission documentation
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission level definitions and pattern matching
**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:11-24` - Shows current secret blocking patterns to extend
- `agents/agents.json:114-132` - Shows read-only subagent pattern for reference (athena: deny bash, deny edit)
- Interview draft - Contains exact user requirements for Chiron permissions
- OpenCode task docs - Explains how to restrict subagent invocation via task permission
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.chiron.permission.edit' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron.permission.bash."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron.permission.bash."bd *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
jq '.chiron.permission.task."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron.permission.task | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Contains ["*", "athena", "chiron-forge", "explore", "librarian"]
jq '.chiron.permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
jq '.chiron.permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
jq '.chiron.permission.read."/run/agenix/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```
**Evidence to Capture**:
- [x] Edit permission value (should be "deny")
- [x] Bash wildcard permission (should be "deny")
- [x] Bash bd permission (should be "allow")
- [x] Task wildcard permission (should be "deny")
- [x] Task allowlist keys (should show 5 entries)
- [x] External directory ~/p/** permission (should be "allow")
- [x] External directory wildcard permission (should be "ask")
- [x] Read /run/agenix/* permission (should be "deny")
**Commit**: NO (group with Task 3)
- [x] 3. Apply Chiron-Forge Permission Updates
**What to do**:
- Split `git *: "ask"` into granular rules:
- Allow: `git add *`, `git commit *`, read-only commands (status, log, diff, branch, show, stash, remote)
- Ask: `git push *`
- Deny: `git config *`
- Change package managers from `"ask"` to granular rules:
- Ask for installs: `npm install *`, `npm i *`, `npx *`, `pip install *`, `pip3 install *`, `uv *`, `bun install *`, `bun i *`, `bunx *`, `yarn install *`, `yarn add *`, `pnpm install *`, `pnpm add *`, `cargo install *`, `go install *`, `make install`
- Allow other commands implicitly (let them use catch-all rules or existing allow patterns)
- Set `external_directory` to allow `~/p/**` with catch-all ask:
```json
"external_directory": {
"~/p/**": "allow",
"*": "ask"
}
```
- Add bash file write protection patterns (apply to both agents):
```json
"bash": {
"echo * > *": "deny",
"cat * > *": "deny",
"printf * > *": "deny",
"tee": "deny",
"*>*": "deny",
">*>*": "deny"
}
```
- Add bash command injection prevention (apply to both agents):
```json
"bash": {
"$(*": "deny",
"`*": "deny",
"eval *": "deny",
"source *": "deny"
}
```
- Add git secret protection patterns (apply to both agents):
```json
"bash": {
"git add *.env*": "deny",
"git commit *.env*": "deny",
"git add *credentials*": "deny",
"git add *secrets*": "deny"
}
```
- Add expanded secret blocking patterns to read permission:
- `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`
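The git split in the first bullet could be sketched like this (abbreviated; the remaining read-only commands follow the same `allow` shape, and the secret-protection deny rules listed later still apply on top):
```json
"bash": {
  "git status": "allow",
  "git log *": "allow",
  "git diff *": "allow",
  "git add *": "allow",
  "git commit *": "allow",
  "git push *": "ask",
  "git config *": "deny"
}
```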
**Must NOT do**:
- Remove existing bash deny rules for dangerous commands (dd, mkfs, fdisk, parted, eval, sudo, su, systemctl, etc.)
- Allow git config modifications
- Allow bash to write files via any method (must block all redirect patterns)
- Skip command injection prevention ($(), backticks, eval, source)
**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in draft)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 2)
- **Blocks**: Task 4
- **Blocked By**: Task 1
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `agents/agents.json:37-103` - Current Chiron-Forge bash permissions (many explicit allow/ask/deny rules)
- `agents/agents.json:37-50` - Current Chiron-Forge read permissions with secret blocking
**API/Type References** (contracts to implement against):
- OpenCode permission schema: Same as Task 2
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron-Forge permission decisions
- Metis analysis: Guardrails #1-#6 - Bash edit bypass, git secret protection, command injection, git config protection
**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission pattern matching (wildcards, last-match-wins)
**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:37-103` - Shows current bash permission structure (many explicit rules) to extend with new patterns
- `agents/agents.json:37-50` - Shows current secret blocking to extend with additional patterns
- Interview draft - Contains exact user requirements for Chiron-Forge permissions
- Metis analysis - Provides bash injection prevention patterns and git protection rules
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
# Verify git commit is allowed
jq '.chiron-forge.permission.bash."git commit *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
# Verify git push asks
jq '.chiron-forge.permission.bash."git push *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
# Verify git config is denied
jq '.chiron-forge.permission.bash."git config *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify npm install asks
jq '.chiron-forge.permission.bash."npm install *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
# Verify bash file write redirects are blocked
jq '.chiron-forge.permission.bash."echo * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron-forge.permission.bash."cat * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron-forge.permission.bash."tee"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify command injection is blocked
jq '.chiron-forge.permission.bash."$(*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron-forge.permission.bash."`*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify git secret protection
jq '.chiron-forge.permission.bash."git add *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron-forge.permission.bash."git commit *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify external_directory scope
jq '.chiron-forge.permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
jq '.chiron-forge.permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
# Verify expanded secret blocking
jq '.chiron-forge.permission.read.".local/share/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron-forge.permission.read.".cache/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron-forge.permission.read."*.db"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```
**Evidence to Capture**:
- [x] Git commit permission (should be "allow")
- [x] Git push permission (should be "ask")
- [x] Git config permission (should be "deny")
- [x] npm install permission (should be "ask")
- [x] bash redirect echo > permission (should be "deny")
- [x] bash redirect cat > permission (should be "deny")
- [x] bash tee permission (should be "deny")
- [x] bash $() injection permission (should be "deny")
- [x] bash backtick injection permission (should be "deny")
- [x] git add *.env* permission (should be "deny")
- [x] git commit *.env* permission (should be "deny")
- [x] external_directory ~/p/** permission (should be "allow")
- [x] external_directory wildcard permission (should be "ask")
- [x] read .local/share/* permission (should be "deny")
- [x] read .cache/* permission (should be "deny")
- [x] read *.db permission (should be "deny")
**Commit**: YES (covers Tasks 1, 2, 3)
- Message: `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening`
- Files: `agents/agents.json`
- Pre-commit: `jq '.' agents/agents.json > /dev/null 2>&1` (validate JSON)
- [x] 4. Validate Configuration (Manual Verification)
**What to do**:
- Run JSON syntax validation: `jq '.' agents/agents.json`
- Verify no duplicate keys in configuration
- Verify workspace path exists: `ls -la ~/p/`
- Document manual verification procedure for post-deployment testing
**Must NOT do**:
- Skip workspace path validation
- Skip duplicate key verification
- Proceed to deployment without validation
**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple validation commands, documentation task
> **Skills**: git-master
- git-master: Git workflow for committing validation script or notes if needed
> **Skills Evaluated but Omitted**:
- research: Not needed (validation is straightforward)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: None (final validation task)
- **Blocked By**: Tasks 2, 3
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `AGENTS.md` - Repository documentation structure
**API/Type References** (contracts to implement against):
- N/A (validation task)
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user requirements
- Metis analysis: Guardrails #1-#6 - Validation requirements
**External References** (libraries and frameworks):
- N/A (validation task)
**WHY Each Reference Matters** (explain the relevance):
- Interview draft - Contains all requirements to validate against
- Metis analysis - Identifies specific validation steps (duplicate keys, workspace path, etc.)
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
# JSON syntax validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
# Verify no duplicate external_directory keys
jq '.chiron.permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep external_directory | wc -l
# Assert: Output is "1"
jq '.chiron-forge.permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep external_directory | wc -l
# Assert: Output is "1"
# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")
# Verify all permission keys are valid
jq '.chiron.permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
jq '.chiron-forge.permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
```
**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] Chiron external_directory key count (should be "1")
- [x] Chiron-Forge external_directory key count (should be "1")
- [x] Workspace path ls output (shows directory exists)
- [x] Chiron permission object validation (exit code 0)
- [x] Chiron-Forge permission object validation (exit code 0)
**Commit**: NO (validation only, no changes)
---
## Commit Strategy
| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2, 3 | `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening` | agents/agents.json | `jq '.' agents/agents.json > /dev/null` |
| 4 | N/A (validation only) | N/A | N/A |
---
## Success Criteria
### Verification Commands
```bash
# Pre-deployment validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0
# Duplicate key check
jq '.chiron.permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep external_directory | wc -l
# Expected: 1
# Workspace path validation
ls -la ~/p/ 2>&1
# Expected: Directory listing
# Post-deployment (manual)
# Have Chiron attempt file edit → Expected: Permission denied
# Have Chiron run bd ready → Expected: Success
# Have Chiron-Forge git commit → Expected: Success
# Have Chiron-Forge git push → Expected: Ask user
# Have agent read .env → Expected: Permission denied
```
### Final Checklist
- [x] Duplicate `external_directory` key fixed
- [x] Chiron edit set to "deny"
- [x] Chiron bash denied except `bd *`
- [x] Chiron task permission restricts subagents (explore, librarian, athena, chiron-forge)
- [x] Chiron external_directory allows ~/p/** only
- [x] Chiron-Forge git commit allowed, git push asks
- [x] Chiron-Forge git config denied
- [x] Chiron-Forge package install commands ask
- [x] Chiron-Forge external_directory allows ~/p/**, asks others
- [x] Bash file write operators blocked (echo >, cat >, tee, etc.)
- [x] Bash command injection blocked ($(), backticks, eval, source)
- [x] Git secret protection added (git add/commit *.env* deny)
- [x] Expanded secret blocking patterns added (.local/share/*, .cache/*, *.db, *.keychain, *.p12)
- [x] /run/agenix/* blocked in read permissions
- [x] JSON syntax valid (jq validates)
- [x] No duplicate keys in configuration
- [x] Workspace path ~/p/** exists

---
# Chiron Personal Agent Framework
## TL;DR
> **Quick Summary**: Create an Oh-My-Opencode-style agent framework for personal productivity with Chiron as the orchestrator, 4 specialized subagents (Hermes, Athena, Apollo, Calliope), and 5 tool integration skills (Basecamp, Outline, MS Teams, Outlook, Obsidian).
>
> **Deliverables**:
> - 6 agent definitions in `agents.json`
> - 6 system prompt files in `prompts/`
> - 5 tool integration skills in `skills/`
> - Validation script extension in `scripts/`
>
> **Estimated Effort**: Medium
> **Parallel Execution**: YES - 3 waves
> **Critical Path**: Task 1 (agents.json) → Task 2 (prompts dir) → Tasks 3-8 (prompts) → Task 14 (validation)
>
> **Status**: ✅ COMPLETE - All 14 main tasks + 6 verification items = 20/20 deliverables
---
## Context
### Original Request
Create an agent framework similar to Oh-My-Opencode but focused on personal productivity:
- Manage work tasks, appointments, projects via Basecamp, Outline, MS Teams, Outlook
- Manage private tasks and knowledge via Obsidian
- Greek mythology naming convention (avoiding Oh My OpenCode names)
- Main agent named "Chiron"
### Interview Summary
**Key Discussions**:
- **Chiron's Role**: Main orchestrator that delegates to specialized subagents
- **Agent Count**: Minimal (3-4 agents initially) + 2 primary agents
- **Domain Separation**: Separate work vs private agents with clear boundaries
- **Tool Priority**: All 4 work tools + Obsidian equally important
- **Basecamp MCP**: User confirmed working MCP at georgeantonopoulos/Basecamp-MCP-Server
**Research Findings**:
- Oh My OpenCode names to avoid: Sisyphus, Atlas, Prometheus, Hephaestus, Metis, Momus, Oracle, Librarian, Explore, Multimodal-Looker, Sisyphus-Junior
- MCP servers available for all work tools + Obsidian
- Protonmail requires custom IMAP/SMTP (deferred)
- Current repo has established skill patterns with SKILL.md + optional subdirectories
### Metis Review
**Identified Gaps** (addressed in plan):
- Delegation model clarified: Chiron uses Question tool for ambiguous requests
- Behavioral difference between Chiron and Chiron-Forge defined
- Executable acceptance criteria added for all tasks
- Edge cases documented in guardrails section
- MCP authentication assumed pre-configured by NixOS (explicit scope boundary)
---
## Work Objectives
### Core Objective
Create a personal productivity agent framework following Oh-My-Opencode patterns, enabling AI-assisted management of work and private life through specialized agents that integrate with existing tools.
### Concrete Deliverables
1. `agents/agents.json` - 6 agent definitions (2 primary, 4 subagent)
2. `prompts/chiron.txt` - Chiron (plan mode) system prompt
3. `prompts/chiron-forge.txt` - Chiron-Forge (build mode) system prompt
4. `prompts/hermes.txt` - Work communication agent prompt
5. `prompts/athena.txt` - Work knowledge agent prompt
6. `prompts/apollo.txt` - Private knowledge agent prompt
7. `prompts/calliope.txt` - Writing agent prompt
8. `skills/basecamp/SKILL.md` - Basecamp integration skill
9. `skills/outline/SKILL.md` - Outline wiki integration skill
10. `skills/msteams/SKILL.md` - MS Teams integration skill
11. `skills/outlook/SKILL.md` - Outlook email integration skill
12. `skills/obsidian/SKILL.md` - Obsidian integration skill
13. `scripts/validate-agents.sh` - Agent validation script
### Definition of Done
- [x] `python3 -c "import json; json.load(open('agents/agents.json'))"` → Exit 0
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skill directories have valid SKILL.md with YAML frontmatter
- [x] `./scripts/test-skill.sh --validate` passes for new skills
- [x] `./scripts/validate-agents.sh` passes
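The validation-script checks above can be sketched as follows (the helper and fixture paths are illustrative, not the repo's real files; `scripts/validate-agents.sh` is the actual deliverable):

```shell
# Sketch of the checks validate-agents.sh needs to cover, per the
# Definition of Done: agents.json parses and every prompt file is non-empty.
validate() {
  python3 -c "import json, sys; json.load(open(sys.argv[1]))" "$1" || return 1
  for f in "$2"/*.txt; do
    [ -s "$f" ] || { echo "empty prompt: $f"; return 1; }
  done
  echo "OK"
}

# Demo against throwaway fixtures (illustrative, not the repo's files):
root=$(mktemp -d)
mkdir -p "$root/prompts"
echo '{"chiron": {"mode": "primary"}}' > "$root/agents.json"
echo 'You are Chiron.' > "$root/prompts/chiron.txt"
result=$(validate "$root/agents.json" "$root/prompts")
echo "$result"
rm -rf "$root"
```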
### Must Have
- All agents use Question tool for multi-choice decisions
- External prompt files (not inline in JSON)
- Follow existing skill structure patterns
- Greek naming convention for agents
- Clear separation between plan mode (Chiron) and build mode (Chiron-Forge)
- Skills provide tool-specific knowledge that agents load on demand
### Must NOT Have (Guardrails)
- **NO MCP server configuration** - Managed by NixOS, outside this repo
- **NO authentication handling** - Assume pre-configured MCP tools
- **NO cross-agent state sharing** - Each agent operates independently
- **NO new opencode commands** - Use existing command patterns only
- **NO generic "I'm an AI assistant" prompts** - Domain-specific responsibilities only
- **NO Protonmail integration** - Deferred to future phase
- **NO duplicate tool knowledge across skills** - Each skill focuses on ONE tool
- **NO scripts outside scripts/ directory**
- **NO model configuration changes** - Keep current `zai-coding-plan/glm-4.7`
---
## Verification Strategy (MANDATORY)
> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
>
> ALL tasks in this plan MUST be verifiable WITHOUT any human action.
> This is NOT conditional - it applies to EVERY task, regardless of test strategy.
>
> ### Test Decision
> - **Infrastructure exists**: YES (test-skill.sh)
> - **Automated tests**: Tests-after (validation scripts)
> - **Framework**: bash + python for validation
>
> ### Agent-Executed QA Scenarios (MANDATORY - ALL tasks)
>
> **Verification Tool by Deliverable Type**:
>
> | Type | Tool | How Agent Verifies |
> |------|------|-------------------|
> | **agents.json** | Bash (python/jq) | Parse JSON, validate structure, check required fields |
> | **Prompt files** | Bash (file checks) | File exists, non-empty, contains expected sections |
> | **SKILL.md files** | Bash (test-skill.sh) | YAML frontmatter valid, name matches directory |
> | **Validation scripts** | Bash | Script is executable, runs without error, produces expected output |
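The SKILL.md row above can be sketched as a concrete check (illustrative; `test-skill.sh` is the real validator, and the frontmatter fields shown are assumptions based on the existing skill conventions):

```shell
# Sketch of the SKILL.md check described above: YAML frontmatter present
# and its name field matches the skill directory name.
root=$(mktemp -d)
mkdir -p "$root/basecamp"
printf -- '---\nname: basecamp\ndescription: Basecamp integration\n---\n' > "$root/basecamp/SKILL.md"

head -1 "$root/basecamp/SKILL.md" | grep -q '^---$' || echo "missing frontmatter"
name=$(sed -n 's/^name: //p' "$root/basecamp/SKILL.md")
[ "$name" = "$(basename "$root/basecamp")" ] && echo "name matches" || echo "name mismatch"
rm -rf "$root"
```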
---
## Execution Strategy
### Parallel Execution Waves
```
Wave 1 (Start Immediately):
├── Task 1: Create agents.json configuration [no dependencies]
└── Task 2: Create prompts/ directory structure [no dependencies]
Wave 2 (After Wave 1):
├── Task 3: Chiron prompt [depends: 2]
├── Task 4: Chiron-Forge prompt [depends: 2]
├── Task 5: Hermes prompt [depends: 2]
├── Task 6: Athena prompt [depends: 2]
├── Task 7: Apollo prompt [depends: 2]
└── Task 8: Calliope prompt [depends: 2]
Wave 3 (Can parallel with Wave 2):
├── Task 9: Basecamp skill [no dependencies]
├── Task 10: Outline skill [no dependencies]
├── Task 11: MS Teams skill [no dependencies]
├── Task 12: Outlook skill [no dependencies]
└── Task 13: Obsidian skill [no dependencies]
Wave 4 (After Wave 2 + 3):
└── Task 14: Validation script [depends: 1, 3-8]
Critical Path: Task 1 → Task 2 → Tasks 3-8 → Task 14
Parallel Speedup: ~50% faster than sequential
```
### Dependency Matrix
| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 14 | 2, 9-13 |
| 2 | None | 3-8 | 1, 9-13 |
| 3-8 | 2 | 14 | Each other, 9-13 |
| 9-13 | None | None | Each other, 1-2 |
| 14 | 1, 3-8 | None | (final) |
### Agent Dispatch Summary
| Wave | Tasks | Recommended Category |
|------|-------|---------------------|
| 1 | 1, 2 | quick |
| 2 | 3-8 | quick (parallel) |
| 3 | 9-13 | quick (parallel) |
| 4 | 14 | quick |
---
## TODOs
### Wave 1: Foundation
- [x] 1. Create agents.json with 6 agent definitions
**What to do**:
- Update existing `agents/agents.json` to add all 6 agents
- Each agent needs: description, mode, model, prompt reference
- Primary agents: chiron, chiron-forge
- Subagents: hermes, athena, apollo, calliope
- All agents should have `question: "allow"` permission
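One entry might look like this (a sketch; the description text and prompt-reference syntax are illustrative, so follow the existing entry at `agents/agents.json:1-7`):
```json
"chiron": {
  "description": "Personal productivity orchestrator (plan mode)",
  "mode": "primary",
  "model": "zai-coding-plan/glm-4.7",
  "prompt": "{file:./prompts/chiron.txt}",
  "permission": { "question": "allow" }
}
```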
**Must NOT do**:
- Do not add MCP server configuration
- Do not change model from current pattern
- Do not add inline prompts (use file references)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
- `agent-development`: Provides agent configuration patterns and best practices
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 2)
- **Blocks**: Task 14
- **Blocked By**: None
**References**:
- `agents/agents.json:1-7` - Current chiron agent configuration pattern
- `skills/agent-development/SKILL.md:40-76` - JSON agent structure reference
- `skills/agent-development/SKILL.md:226-277` - Permissions system reference
- `skills/agent-development/references/opencode-agents-json-example.md` - Complete examples
**Acceptance Criteria**:
```
Scenario: agents.json is valid JSON with all 6 agents
Tool: Bash (python)
Steps:
1. python3 -c "import json; data = json.load(open('agents/agents.json')); print(len(data))"
2. Assert: Output is "6"
3. python3 -c "import json; data = json.load(open('agents/agents.json')); print(sorted(data.keys()))"
4. Assert: Output contains ['apollo', 'athena', 'calliope', 'chiron', 'chiron-forge', 'hermes']
Expected Result: JSON parses, all 6 agents present
Evidence: Command output captured
Scenario: Each agent has required fields
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name, agent in data.items():
assert 'description' in agent, f'{name}: missing description'
assert 'mode' in agent, f'{name}: missing mode'
assert 'prompt' in agent, f'{name}: missing prompt'
print('All agents valid')
"
2. Assert: Output is "All agents valid"
Expected Result: All required fields present
Evidence: Validation output captured
Scenario: Primary agents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
assert data['chiron']['mode'] == 'primary'
assert data['chiron-forge']['mode'] == 'primary'
print('Primary modes correct')
"
Expected Result: Both primary agents have mode=primary
Evidence: Command output
Scenario: Subagents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name in ['hermes', 'athena', 'apollo', 'calliope']:
assert data[name]['mode'] == 'subagent', f'{name}: wrong mode'
print('Subagent modes correct')
"
Expected Result: All subagents have mode=subagent
Evidence: Command output
```
**Commit**: YES
- Message: `feat(agents): add chiron agent framework with 6 agents`
- Files: `agents/agents.json`
- Pre-commit: `python3 -c "import json; json.load(open('agents/agents.json'))"`
---
- [x] 2. Create prompts directory structure
**What to do**:
- Create `prompts/` directory if not exists
- Directory will hold all agent system prompt files
**Must NOT do**:
- Do not create prompt files yet (done in Wave 2)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 1)
- **Blocks**: Tasks 3-8
- **Blocked By**: None
**References**:
- `skills/agent-development/SKILL.md:148-159` - Prompt file conventions
**Acceptance Criteria**:
```
Scenario: prompts directory exists
Tool: Bash
Steps:
1. test -d prompts && echo "exists" || echo "missing"
2. Assert: Output is "exists"
Expected Result: Directory created
Evidence: Command output
```
**Commit**: NO (groups with Task 1)
---
### Wave 2: Agent Prompts
- [x] 3. Create Chiron (Plan Mode) system prompt
**What to do**:
- Create `prompts/chiron.txt`
- Define Chiron as the main orchestrator in plan/analysis mode
- Include delegation logic to subagents (Hermes, Athena, Apollo, Calliope)
- Include Question tool usage for ambiguous requests
- Focus on: planning, analysis, guidance, delegation
- Permissions: read-only, no file modifications
**Must NOT do**:
- Do not allow write/edit operations
- Do not include execution responsibilities
- Do not overlap with Chiron-Forge's build capabilities
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
- `agent-development`: System prompt design patterns
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 4-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-386` - System prompt design patterns
- `skills/agent-development/SKILL.md:397-415` - Prompt best practices
- `skills/agent-development/references/system-prompt-design.md` - Detailed prompt patterns
**Acceptance Criteria**:
```
Scenario: Chiron prompt file exists and is substantial
Tool: Bash
Steps:
1. test -f prompts/chiron.txt && echo "exists" || echo "missing"
2. Assert: Output is "exists"
3. wc -c < prompts/chiron.txt
4. Assert: Output is > 500 (substantial content)
Expected Result: File exists with meaningful content
Evidence: File size captured
Scenario: Chiron prompt contains orchestrator role
Tool: Bash (grep)
Steps:
1. grep -qi "orchestrat" prompts/chiron.txt && echo "found" || echo "missing"
2. Assert: Output is "found"
3. grep -qi "delegat" prompts/chiron.txt && echo "found" || echo "missing"
4. Assert: Output is "found"
Expected Result: Prompt describes orchestration and delegation
Evidence: grep output
Scenario: Chiron prompt references subagents
Tool: Bash (grep)
Steps:
1. grep -qi "hermes" prompts/chiron.txt && echo "found" || echo "missing"
2. grep -qi "athena" prompts/chiron.txt && echo "found" || echo "missing"
3. grep -qi "apollo" prompts/chiron.txt && echo "found" || echo "missing"
4. grep -qi "calliope" prompts/chiron.txt && echo "found" || echo "missing"
Expected Result: All 4 subagents mentioned
Evidence: grep outputs
```
**Commit**: YES (group with Tasks 4-8)
- Message: `feat(prompts): add chiron and subagent system prompts`
- Files: `prompts/*.txt`
- Pre-commit: `for f in prompts/*.txt; do test -s "$f" || exit 1; done`
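The per-file grep scenarios above can be folded into one reusable check; the helper name and its keyword arguments are illustrative, not part of the plan's tooling.

```shell
# Sketch: assert a prompt file exists, is non-empty, and mentions each
# keyword (case-insensitive), mirroring the acceptance scenarios.
check_prompt() {
  local file="$1" kw
  shift
  [ -s "$file" ] || { echo "MISSING FILE: $file"; return 1; }
  for kw in "$@"; do
    grep -qi "$kw" "$file" || { echo "MISSING '$kw' in $file"; return 1; }
  done
  echo "OK: $file"
}
```

For example, `check_prompt prompts/chiron.txt orchestrat delegat hermes athena apollo calliope` covers all three Task 3 scenarios in a single call.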
---
- [x] 4. Create Chiron-Forge (Build Mode) system prompt
**What to do**:
- Create `prompts/chiron-forge.txt`
- Define as Chiron's execution/build counterpart
- Full write access for task execution
- Can modify files, run commands, complete tasks
- Still delegates to subagents for specialized domains
- Uses Question tool for destructive operations confirmation
**Must NOT do**:
- Do not make it a planning-only agent (that's Chiron)
- Do not allow destructive operations without confirmation
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 3, 5-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:316-346` - Complete agent example with chiron/chiron-forge pattern
- `skills/agent-development/SKILL.md:253-277` - Permission patterns for bash commands
**Acceptance Criteria**:
```
Scenario: Chiron-Forge prompt file exists
Tool: Bash
Steps:
1. test -f prompts/chiron-forge.txt && wc -c < prompts/chiron-forge.txt
2. Assert: Output > 500
Expected Result: File exists with substantial content
Evidence: File size
Scenario: Chiron-Forge prompt emphasizes execution
Tool: Bash (grep)
Steps:
1. grep -qi "execut" prompts/chiron-forge.txt && echo "found" || echo "missing"
2. grep -qi "build" prompts/chiron-forge.txt && echo "found" || echo "missing"
Expected Result: Execution/build terminology present
Evidence: grep output
```
**Commit**: YES (groups with Task 3)
---
- [x] 5. Create Hermes (Work Communication) system prompt
**What to do**:
- Create `prompts/hermes.txt`
- Specialization: Basecamp tasks, Outlook email, MS Teams meetings
- Greek god of communication, messengers, quick tasks
- Uses Question tool for: which tool to use, clarifying recipients
- Focus on: task updates, email drafting, meeting scheduling
**Must NOT do**:
- Do not handle documentation (Athena's domain)
- Do not handle personal/private tools (Apollo's domain)
- Do not write long-form content (Calliope's domain)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Hermes prompt defines communication domain
Tool: Bash (grep)
Steps:
1. grep -qi "basecamp" prompts/hermes.txt && echo "found" || echo "missing"
2. grep -qi "outlook\|email" prompts/hermes.txt && echo "found" || echo "missing"
3. grep -qi "teams\|meeting" prompts/hermes.txt && echo "found" || echo "missing"
Expected Result: All 3 tools mentioned
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
- [x] 6. Create Athena (Work Knowledge) system prompt
**What to do**:
- Create `prompts/athena.txt`
- Specialization: Outline wiki, documentation, knowledge organization
- Greek goddess of wisdom and strategic warfare
- Focus on: wiki search, knowledge retrieval, documentation updates
- Uses Question tool for: which document to update, clarifying search scope
**Must NOT do**:
- Do not handle communication (Hermes's domain)
- Do not handle private knowledge (Apollo's domain)
- Do not write creative content (Calliope's domain)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Athena prompt defines knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "outline" prompts/athena.txt && echo "found" || echo "missing"
2. grep -qi "wiki\|knowledge" prompts/athena.txt && echo "found" || echo "missing"
3. grep -qi "document" prompts/athena.txt && echo "found" || echo "missing"
Expected Result: Outline and knowledge terms present
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
- [x] 7. Create Apollo (Private Knowledge) system prompt
**What to do**:
- Create `prompts/apollo.txt`
- Specialization: Obsidian vault, personal notes, private knowledge graph
- Greek god of knowledge, prophecy, and light
- Focus on: note search, personal task management, knowledge retrieval
- Uses Question tool for: clarifying which vault, which note
**Must NOT do**:
- Do not handle work tools (Hermes/Athena's domain)
- Do not expose personal data to work contexts
- Do not write long-form content (Calliope's domain)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Apollo prompt defines private knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "obsidian" prompts/apollo.txt && echo "found" || echo "missing"
2. grep -qi "personal\|private" prompts/apollo.txt && echo "found" || echo "missing"
3. grep -qi "note\|vault" prompts/apollo.txt && echo "found" || echo "missing"
Expected Result: Obsidian and personal knowledge terms present
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
- [x] 8. Create Calliope (Writing) system prompt
**What to do**:
- Create `prompts/calliope.txt`
- Specialization: documentation writing, reports, meeting notes, prose
- Greek muse of epic poetry and eloquence
- Focus on: drafting documents, summarizing, writing assistance
- Uses Question tool for: clarifying tone, audience, format
**Must NOT do**:
- Do not manage tools directly (delegates to other agents for tool access)
- Do not handle short communication (Hermes's domain)
- Do not overlap with Athena's wiki management
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Calliope prompt defines writing domain
Tool: Bash (grep)
Steps:
1. grep -qi "writ" prompts/calliope.txt && echo "found" || echo "missing"
2. grep -qi "document" prompts/calliope.txt && echo "found" || echo "missing"
3. grep -qi "report\|summar" prompts/calliope.txt && echo "found" || echo "missing"
Expected Result: Writing and documentation terms present
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
### Wave 3: Tool Integration Skills
- [x] 9. Create Basecamp integration skill
**What to do**:
- Create `skills/basecamp/SKILL.md`
- Document Basecamp MCP capabilities (63 tools from georgeantonopoulos/Basecamp-MCP-Server)
- Include: projects, todos, messages, card tables, campfire, webhooks
- Provide workflow examples for common operations
- Reference MCP tool names for agent use
**Must NOT do**:
- Do not include MCP server setup instructions (managed by Nix)
- Do not duplicate general project management advice
- Do not include authentication handling
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
- `skill-creator`: Provides skill structure patterns and validation
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Tasks 10-13)
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- https://github.com/georgeantonopoulos/Basecamp-MCP-Server - MCP tool documentation
**Acceptance Criteria**:
```
Scenario: Basecamp skill has valid structure
Tool: Bash
Steps:
1. test -d skills/basecamp && echo "dir exists"
2. test -f skills/basecamp/SKILL.md && echo "file exists"
3. ./scripts/test-skill.sh --validate basecamp || echo "validation failed"
Expected Result: Directory and SKILL.md exist, validation passes
Evidence: Command outputs
Scenario: Basecamp skill has valid frontmatter
Tool: Bash (python)
Steps:
1. python3 -c "
import yaml
content = open('skills/basecamp/SKILL.md').read()
front = content.split('---')[1]
data = yaml.safe_load(front)
assert data['name'] == 'basecamp', 'name mismatch'
assert 'description' in data, 'missing description'
print('Valid')
"
Expected Result: YAML frontmatter valid with correct name
Evidence: Python output
```
**Commit**: YES
- Message: `feat(skills): add basecamp integration skill`
- Files: `skills/basecamp/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate basecamp`
---
- [x] 10. Create Outline wiki integration skill
**What to do**:
- Create `skills/outline/SKILL.md`
- Document Outline API capabilities
- Include: document CRUD, search, collections, sharing
- Provide workflow examples for knowledge management
**Must NOT do**:
- Do not include MCP server setup
- Do not duplicate general wiki or knowledge-management concepts
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://www.getoutline.com/developers - Outline API documentation
**Acceptance Criteria**:
```
Scenario: Outline skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outline && test -f skills/outline/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outline || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add outline wiki integration skill`
- Files: `skills/outline/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outline`
---
- [x] 11. Create MS Teams integration skill
**What to do**:
- Create `skills/msteams/SKILL.md`
- Document MS Teams Graph API capabilities via MCP
- Include: channels, messages, meetings, chat
- Provide workflow examples for team communication
**Must NOT do**:
- Do not include Graph API authentication flows
- Do not overlap with Outlook email functionality
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/api/resources/teams-api-overview - Teams API
**Acceptance Criteria**:
```
Scenario: MS Teams skill has valid structure
Tool: Bash
Steps:
1. test -d skills/msteams && test -f skills/msteams/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate msteams || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add ms teams integration skill`
- Files: `skills/msteams/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate msteams`
---
- [x] 12. Create Outlook email integration skill
**What to do**:
- Create `skills/outlook/SKILL.md`
- Document Outlook Graph API capabilities via MCP
- Include: mail CRUD, calendar, contacts, folders
- Provide workflow examples for email management
**Must NOT do**:
- Do not include Graph API authentication
- Do not overlap with Teams functionality
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/outlook-mail-concept-overview - Outlook API
**Acceptance Criteria**:
```
Scenario: Outlook skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outlook && test -f skills/outlook/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outlook || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add outlook email integration skill`
- Files: `skills/outlook/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outlook`
---
- [x] 13. Create Obsidian integration skill
**What to do**:
- Create `skills/obsidian/SKILL.md`
- Document Obsidian Local REST API capabilities
- Include: vault operations, note CRUD, search, daily notes
- Reference skills/brainstorming/references/obsidian-workflow.md for patterns
- Provide workflow examples for personal knowledge management
**Must NOT do**:
- Do not include plugin installation
- Do not duplicate general note-taking advice
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- `skills/brainstorming/references/obsidian-workflow.md` - Existing Obsidian patterns
- https://coddingtonbear.github.io/obsidian-local-rest-api/ - Local REST API docs
**Acceptance Criteria**:
```
Scenario: Obsidian skill has valid structure
Tool: Bash
Steps:
1. test -d skills/obsidian && test -f skills/obsidian/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate obsidian || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add obsidian integration skill`
- Files: `skills/obsidian/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate obsidian`
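Since Tasks 9-13 share the same structural gate, they can be spot-checked in one loop; the function below is a sketch (directory layout per the tasks above, base path passed in for testability), not a replacement for `test-skill.sh`.

```shell
# Sketch: verify each Wave 3 skill directory has a non-empty SKILL.md
# (structure only; frontmatter/content checks stay in test-skill.sh).
validate_wave3_skills() {
  local base="$1" s rc=0
  for s in basecamp outline msteams outlook obsidian; do
    if [ -s "$base/$s/SKILL.md" ]; then
      echo "$s: OK"
    else
      echo "$s: MISSING"
      rc=1
    fi
  done
  return "$rc"
}
```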
---
### Wave 4: Validation
- [x] 14. Create agent validation script
**What to do**:
- Create `scripts/validate-agents.sh`
- Validate agents.json structure and required fields
- Verify all referenced prompt files exist
- Check prompt files are non-empty
- Integrate with existing test-skill.sh patterns
**Must NOT do**:
- Do not require MCP servers for validation
- Do not perform functional agent testing (just structural)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential (Wave 4)
- **Blocks**: None
- **Blocked By**: Tasks 1, 3-8
**References**:
- `scripts/test-skill.sh` - Existing validation script pattern
**Acceptance Criteria**:
```
Scenario: Validation script is executable
Tool: Bash
Steps:
1. test -x scripts/validate-agents.sh && echo "executable" || echo "not executable"
2. Assert: Output is "executable"
Expected Result: Script has execute permission
Evidence: Command output
Scenario: Validation script runs successfully
Tool: Bash
Steps:
1. ./scripts/validate-agents.sh
2. Assert: Exit code is 0
Expected Result: All validations pass
Evidence: Script output
Scenario: Validation script catches missing files
Tool: Bash
Steps:
1. mv prompts/chiron.txt prompts/chiron.txt.bak
2. ./scripts/validate-agents.sh
3. Assert: Exit code is NOT 0
4. mv prompts/chiron.txt.bak prompts/chiron.txt
Expected Result: Script detects missing prompt file
Evidence: Error output
```
**Commit**: YES
- Message: `feat(scripts): add agent validation script`
- Files: `scripts/validate-agents.sh`
- Pre-commit: `./scripts/validate-agents.sh`
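The structural checks above can be sketched as a small shell function; the function name, argument handling, and error messages are assumptions for illustration, not the contents of the real `scripts/validate-agents.sh`.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the validate-agents.sh logic: JSON parse check
# plus non-empty prompt files. Paths are passed in rather than hardcoded.
validate_agents() {
  local agents_json="$1" prompts_dir="$2" fail=0 f
  # agents.json must parse as JSON (structural check, no MCP servers needed)
  python3 -c 'import json,sys; json.load(open(sys.argv[1]))' "$agents_json" 2>/dev/null \
    || { echo "ERROR: invalid JSON: $agents_json"; return 1; }
  # every prompt file must exist and be non-empty
  for f in "$prompts_dir"/*.txt; do
    [ -s "$f" ] || { echo "ERROR: empty or missing prompt: $f"; fail=1; }
  done
  return "$fail"
}
```

Invoked as `validate_agents agents/agents.json prompts`, it returns non-zero when a prompt file is missing or empty, matching the third scenario above.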
---
## Commit Strategy
| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2 | `feat(agents): add chiron agent framework with 6 agents` | agents/agents.json, prompts/ | `python3 -c "import json; json.load(open('agents/agents.json'))"` |
| 3-8 | `feat(prompts): add chiron and subagent system prompts` | prompts/*.txt | `for f in prompts/*.txt; do test -s "$f"; done` |
| 9 | `feat(skills): add basecamp integration skill` | skills/basecamp/ | `./scripts/test-skill.sh --validate basecamp` |
| 10 | `feat(skills): add outline wiki integration skill` | skills/outline/ | `./scripts/test-skill.sh --validate outline` |
| 11 | `feat(skills): add ms teams integration skill` | skills/msteams/ | `./scripts/test-skill.sh --validate msteams` |
| 12 | `feat(skills): add outlook email integration skill` | skills/outlook/ | `./scripts/test-skill.sh --validate outlook` |
| 13 | `feat(skills): add obsidian integration skill` | skills/obsidian/ | `./scripts/test-skill.sh --validate obsidian` |
| 14 | `feat(scripts): add agent validation script` | scripts/validate-agents.sh | `./scripts/validate-agents.sh` |
---
## Success Criteria
### Verification Commands
```bash
# Validate agents.json
python3 -c "import json; json.load(open('agents/agents.json'))" # Expected: exit 0
# Count agents
python3 -c "import json; print(len(json.load(open('agents/agents.json'))))" # Expected: 6
# Validate all prompts exist
for f in chiron chiron-forge hermes athena apollo calliope; do
test -s prompts/$f.txt && echo "$f: OK" || echo "$f: MISSING"
done
# Validate all skills
./scripts/test-skill.sh --validate # Expected: all pass
# Run full validation
./scripts/validate-agents.sh # Expected: exit 0
```
### Final Checklist
- [x] All 6 agents defined in agents.json
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skills have valid SKILL.md with YAML frontmatter
- [x] validate-agents.sh passes
- [x] test-skill.sh --validate passes
- [x] No MCP configuration in repo
- [x] No inline prompts in agents.json
- [x] All agent names are Greek mythology (not conflicting with Oh My OpenCode)

@@ -12,21 +12,22 @@ Configuration repository for Opencode Agent Skills, context files, and agent con
 # Skill creation
 python3 skills/skill-creator/scripts/init_skill.py <name> --path skills/
-# Issue tracking (beads)
-bd ready && bd create "title" && bd close <id> && bd sync
 ```
 ## Directory Structure
 ```
 .
-├── skills/              # Agent skills (25 modules)
+├── skills/              # Agent skills (15 modules)
 │   └── skill-name/
 │       ├── SKILL.md     # Required: YAML frontmatter + workflows
 │       ├── scripts/     # Executable code (optional)
 │       ├── references/  # Domain docs (optional)
 │       └── assets/      # Templates/files (optional)
+├── rules/               # AI coding rules (languages, concerns, frameworks)
+│   ├── languages/       # Python, TypeScript, Nix, Shell
+│   ├── concerns/        # Testing, naming, documentation, etc.
+│   └── frameworks/      # Framework-specific rules (n8n, etc.)
 ├── agents/              # Agent definitions (agents.json)
 ├── prompts/             # System prompts (chiron*.txt)
 ├── context/             # User profiles
@@ -58,7 +59,7 @@ compatibility: opencode
 ## Anti-Patterns (CRITICAL)
 **Frontend Design**: NEVER use generic AI aesthetics, NEVER converge on common choices
-**Excalidraw**: NEVER use diamond shapes (broken arrows), NEVER use `label` property
+**Excalidraw**: NEVER use `label` property (use boundElements + text element pairs instead)
 **Debugging**: NEVER fix just symptom, ALWAYS find root cause first
 **Excel**: ALWAYS respect existing template conventions over guidelines
 **Structure**: NEVER place scripts/docs outside scripts/references/ directories
@@ -77,27 +78,46 @@ compatibility: opencode
 ## Deployment
-**Nix pattern** (non-flake input):
+**Nix flake pattern**:
 ```nix
 agents = {
   url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
-  flake = false; # Files only, not a Nix flake
+  inputs.nixpkgs.follows = "nixpkgs"; # Optional but recommended
 };
 ```
+**Exports:**
+- `packages.skills-runtime` — composable runtime with all skill dependencies
+- `devShells.default` — dev environment for working on skills
 **Mapping** (via home-manager):
 - `skills/`, `context/`, `commands/`, `prompts/` → symlinks
 - `agents/agents.json` → embedded into config.json
 - Agent changes: require `home-manager switch`
 - Other changes: visible immediately
+## Rules System
+Centralized AI coding rules consumed via `mkOpencodeRules` from m3ta-nixpkgs:
+```nix
+# In project flake.nix
+m3taLib.opencode-rules.mkOpencodeRules {
+  inherit agents;
+  languages = [ "python" "typescript" ];
+  frameworks = [ "n8n" ];
+};
+```
+See `rules/USAGE.md` for full documentation.
 ## Notes for AI Agents
 1. **Config-only repo** - No compilation, no build, manual validation only
 2. **Skills are documentation** - Write for AI consumption, progressive disclosure
 3. **Consistent structure** - All skills follow 4-level deep pattern (skills/name/ + optional subdirs)
 4. **Cross-cutting concerns** - Standardized SKILL.md, workflow patterns, delegation rules
-5. **Always push** - Session completion workflow: commit + bd sync + git push
+5. **Always push** - Session completion workflow: commit + git push
 ## Quality Gates
@@ -105,4 +125,5 @@ Before committing:
 1. `./scripts/test-skill.sh --validate`
 2. Python shebang + docstrings check
 3. No extraneous files (README.md, CHANGELOG.md in skills/)
-4. Git status clean
+4. If skill has scripts with external dependencies → verify `flake.nix` is updated (see skill-creator Step 4)
+5. Git status clean

README.md

@@ -1,6 +1,6 @@
 # Opencode Agent Skills & Configurations
-Central repository for [Opencode](https://opencode.dev) Agent Skills, AI agent configurations, custom commands, and AI-assisted workflows. This is an extensible framework for building productivity systems, automations, knowledge management, and specialized AI capabilities.
+Central repository for [Opencode](https://opencode.ai) Agent Skills, AI agent configurations, custom commands, and AI-assisted workflows. This is an extensible framework for building productivity systems, automations, knowledge management, and specialized AI capabilities.
 ## 🎯 What This Repository Provides
@@ -8,36 +8,45 @@ This repository serves as a **personal AI operating system** - a collection of s
 - **Productivity & Task Management** - PARA methodology, GTD workflows, project tracking
 - **Knowledge Management** - Note-taking, research workflows, information organization
-- **Communications** - Email management, meeting scheduling, follow-up tracking
 - **AI Development** - Tools for creating new skills and agent configurations
 - **Memory & Context** - Persistent memory systems, conversation analysis
+- **Document Processing** - PDF manipulation, spreadsheet handling, diagram generation
 - **Custom Workflows** - Domain-specific automation and specialized agents
 ## 📂 Repository Structure
 ```
 .
-├── agent/               # Agent definitions (agents.json)
+├── agents/              # Agent definitions (agents.json)
-├── prompts/             # Agent system prompts (chiron.txt, chiron-forge.txt)
+├── prompts/             # Agent system prompts (chiron.txt, chiron-forge.txt, etc.)
 ├── context/             # User profiles and preferences
 │   └── profile.md       # Work style, PARA areas, preferences
-├── command/             # Custom command definitions
+├── commands/            # Custom command definitions
 │   └── reflection.md
-├── skill/               # Opencode Agent Skills (11+ skills)
+├── skills/              # Opencode Agent Skills (15 skills)
-│   ├── task-management/ # PARA-based productivity
-│   ├── skill-creator/   # Meta-skill for creating skills
-│   ├── reflection/      # Conversation analysis
-│   ├── communications/  # Email & messaging
-│   ├── calendar-scheduling/ # Time management
-│   ├── mem0-memory/     # Persistent memory
-│   ├── research/        # Investigation workflows
-│   ├── knowledge-management/ # Note capture & organization
+│   ├── agent-development/ # Agent creation and configuration
 │   ├── basecamp/        # Basecamp project management
 │   ├── brainstorming/   # Ideation & strategic thinking
-│   ├── plan-writing/    # Project planning templates
+│   ├── doc-translator/  # Documentation translation
+│   ├── excalidraw/      # Architecture diagrams
+│   ├── frontend-design/ # UI/UX design patterns
+│   ├── memory/          # Persistent memory system
+│   ├── obsidian/        # Obsidian vault management
+│   ├── outline/         # Outline wiki integration
+│   ├── pdf/             # PDF manipulation toolkit
+│   ├── prompt-engineering-patterns/ # Prompt patterns
+│   ├── reflection/      # Conversation analysis
+│   ├── skill-creator/   # Meta-skill for creating skills
+│   ├── systematic-debugging/ # Debugging methodology
+│   └── xlsx/            # Spreadsheet handling
 ├── scripts/             # Repository utility scripts
 │   └── test-skill.sh    # Test skills without deploying
-├── .beads/              # Issue tracking database
+├── rules/               # AI coding rules
+│   ├── languages/       # Python, TypeScript, Nix, Shell
+│   ├── concerns/        # Testing, naming, documentation
+│   └── frameworks/      # Framework-specific rules (n8n)
+├── flake.nix            # Nix flake: dev shell + skills-runtime export
+├── .envrc               # direnv config (use flake)
 ├── AGENTS.md            # Developer documentation
 └── README.md            # This file
 ```
@@ -46,43 +55,96 @@ This repository serves as a **personal AI operating system** - a collection of s
 ### Prerequisites
-- **Opencode** - AI coding assistant ([opencode.dev](https://opencode.dev))
-- **Nix** (optional) - For declarative deployment via home-manager
-- **Python 3** - For skill validation and creation scripts
-- **bd (beads)** (optional) - For issue tracking
+- **Nix** with flakes enabled — for reproducible dependency management and deployment
+- **direnv** (recommended) — auto-activates the development environment when entering the repo
+- **Opencode** — AI coding assistant ([opencode.ai](https://opencode.ai))
 ### Installation
 #### Option 1: Nix Flake (Recommended)
-This repository is consumed as a **non-flake input** by your NixOS configuration:
+This repository is a **Nix flake** that exports:
+- **`devShells.default`** — development environment for working on skills (activated via direnv)
+- **`packages.skills-runtime`** — composable runtime with all skill script dependencies (Python packages + system tools)
+**Consume in your system flake:**
 ```nix
-# In your flake.nix
+# flake.nix
 inputs.agents = {
   url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
-  flake = false; # Pure files, not a Nix flake
+  inputs.nixpkgs.follows = "nixpkgs";
 };
 # In your home-manager module (e.g., opencode.nix)
 xdg.configFile = {
-  "opencode/skill".source = "${inputs.agents}/skill";
+  "opencode/skills".source = "${inputs.agents}/skills";
   "opencode/context".source = "${inputs.agents}/context";
-  "opencode/command".source = "${inputs.agents}/command";
+  "opencode/commands".source = "${inputs.agents}/commands";
   "opencode/prompts".source = "${inputs.agents}/prompts";
 };
 # Agent config is embedded into config.json, not deployed as files
 programs.opencode.settings.agent = builtins.fromJSON
-  (builtins.readFile "${inputs.agents}/agent/agents.json");
+  (builtins.readFile "${inputs.agents}/agents/agents.json");
 ```
-Rebuild your system:
+**Deploy skills via home-manager:**
+```nix
+# home-manager module (e.g., opencode.nix)
+{ inputs, system, ... }:
+{
+  # Skill files — symlinked, changes visible immediately
+  xdg.configFile = {
+    "opencode/skills".source = "${inputs.agents}/skills";
+    "opencode/context".source = "${inputs.agents}/context";
+    "opencode/commands".source = "${inputs.agents}/commands";
+    "opencode/prompts".source = "${inputs.agents}/prompts";
+  };
+  # Agent config — embedded into config.json (requires home-manager switch)
+  programs.opencode.settings.agent = builtins.fromJSON
+    (builtins.readFile "${inputs.agents}/agents/agents.json");
+  # Skills runtime — ensures opencode always has script dependencies
+  home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
+}
+```
+**Compose into project flakes** (so opencode has skill deps in any project):
+```nix
+# Any project's flake.nix
+{
+  inputs.agents.url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
+  inputs.agents.inputs.nixpkgs.follows = "nixpkgs";
+  outputs = { self, nixpkgs, agents, ... }:
+    let
+      system = "x86_64-linux";
+      pkgs = nixpkgs.legacyPackages.${system};
+    in {
+      devShells.${system}.default = pkgs.mkShell {
+        packages = [
+          # project-specific tools
+          pkgs.nodejs
+          # skill script dependencies
+          agents.packages.${system}.skills-runtime
+        ];
+      };
+    };
+}
+```
+Rebuild:
 ```bash
 home-manager switch
 ```
-**Note**: The `agent/` directory is NOT deployed as files. Instead, `agents.json` is read at Nix evaluation time and embedded into the opencode `config.json`.
+**Note**: The `agents/` directory is NOT deployed as files. Instead, `agents.json` is read at Nix evaluation time and embedded into the opencode `config.json`.
 #### Option 2: Manual Installation
@@ -92,8 +154,11 @@ Clone and symlink:
# Clone repository # Clone repository
git clone https://github.com/yourusername/AGENTS.git ~/AGENTS git clone https://github.com/yourusername/AGENTS.git ~/AGENTS
# Create symlink to Opencode config directory # Create symlinks to Opencode config directory
ln -s ~/AGENTS ~/.config/opencode ln -s ~/AGENTS/skills ~/.config/opencode/skills
ln -s ~/AGENTS/context ~/.config/opencode/context
ln -s ~/AGENTS/commands ~/.config/opencode/commands
ln -s ~/AGENTS/prompts ~/.config/opencode/prompts
``` ```
### Verify Installation

Check that Opencode can see your skills:

```bash
# Skills should be available at ~/.config/opencode/skills/
ls ~/.config/opencode/skills/
```
## 🎨 Creating Your First Skill

### 1. Initialize a New Skill

```bash
python3 skills/skill-creator/scripts/init_skill.py my-skill-name --path skills/
```

This creates:

- `skills/my-skill-name/SKILL.md` - Main skill documentation
- `skills/my-skill-name/scripts/` - Executable code (optional)
- `skills/my-skill-name/references/` - Reference documentation (optional)
- `skills/my-skill-name/assets/` - Templates and files (optional)

### 2. Edit the Skill

Open `skills/my-skill-name/SKILL.md` and customize:

```yaml
---
name: my-skill-name
description: What it does and when to use it. Include trigger keywords.
compatibility: opencode
---

# My Skill Name

## Overview

[Your skill instructions for Opencode]
```
### 3. Register Dependencies

If your skill includes scripts with external dependencies, add them to `flake.nix`:

```nix
# Python packages — add to pythonEnv:
# my-skill: my_script.py
some-python-package

# System tools — add to skills-runtime paths:
# my-skill: needed by my_script.py
pkgs.some-tool
```

Verify: `nix develop --command python3 -c "import some_package"`

### 4. Validate the Skill

```bash
python3 skills/skill-creator/scripts/quick_validate.py skills/my-skill-name
```

### 5. Test the Skill

```bash
./scripts/test-skill.sh my-skill-name  # Validate specific skill
./scripts/test-skill.sh --list         # List all dev skills
./scripts/test-skill.sh --run          # Launch opencode with dev skills
```

The test script creates a temporary config directory with symlinks to this repo's skills, allowing you to test changes before committing.
## 📚 Available Skills

| Skill | Purpose | Status |
|-------|---------|--------|
| **agent-development** | Create and configure Opencode agents | ✅ Active |
| **basecamp** | Basecamp project & todo management via MCP | ✅ Active |
| **brainstorming** | General-purpose ideation and strategic thinking | ✅ Active |
| **doc-translator** | Documentation translation to German/Czech with Outline publish | ✅ Active |
| **excalidraw** | Architecture diagrams from codebase analysis | ✅ Active |
| **frontend-design** | Production-grade UI/UX with high design quality | ✅ Active |
| **memory** | SQLite-based persistent memory with hybrid search | ✅ Active |
| **obsidian** | Obsidian vault management via Local REST API | ✅ Active |
| **outline** | Outline wiki integration for team documentation | ✅ Active |
| **pdf** | PDF manipulation, extraction, creation, and forms | ✅ Active |
| **prompt-engineering-patterns** | Advanced prompt engineering techniques | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
| **skill-creator** | Guide for creating new Opencode skills | ✅ Active |
| **systematic-debugging** | Debugging methodology for bugs and test failures | ✅ Active |
| **xlsx** | Spreadsheet creation, editing, and analysis | ✅ Active |
## 🤖 AI Agents

### Primary Agents

| Agent | Mode | Purpose |
|-------|------|---------|
| **Chiron** | Plan | Read-only analysis, planning, and guidance |
| **Chiron Forge** | Build | Full execution and task completion with safety |

### Subagents (Specialists)

| Agent | Domain | Purpose |
|-------|--------|---------|
| **Hermes** | Communication | Basecamp, Outlook, MS Teams |
| **Athena** | Research | Outline wiki, documentation, knowledge |
| **Apollo** | Private Knowledge | Obsidian vault, personal notes |
| **Calliope** | Writing | Documentation, reports, prose |

**Configuration**: `agents/agents.json` + `prompts/*.txt`

## 🛠️ Development

### Environment

The repository includes a Nix flake with a development shell. With [direnv](https://direnv.net/) installed, the environment activates automatically:

```bash
cd AGENTS/
# → direnv: loading .envrc
# → 🔧 AGENTS dev shell active — Python 3.13.x, jq-1.x

# All skill script dependencies are now available:
python3 -c "import pypdf, openpyxl, yaml"  # ✔️
pdftoppm -v                                # ✔️
```

Without direnv, activate manually: `nix develop`
### Quality Gates

Before committing:

1. **Validate skills**: `./scripts/test-skill.sh --validate` or `python3 skills/skill-creator/scripts/quick_validate.py skills/<name>`
2. **Test locally**: `./scripts/test-skill.sh --run` to launch opencode with dev skills
3. **Check formatting**: Ensure YAML frontmatter is valid
4. **Update docs**: Keep README and AGENTS.md in sync
See `AGENTS.md` for complete developer documentation.
## 🎓 Learning Resources

### Essential Documentation

- **AGENTS.md** - Complete developer guide for AI agents
- **skills/skill-creator/SKILL.md** - Comprehensive skill creation guide
- **skills/skill-creator/references/workflows.md** - Workflow pattern library
- **skills/skill-creator/references/output-patterns.md** - Output formatting patterns
- **rules/USAGE.md** - AI coding rules integration guide

### Skill Design Principles
### Example Skills to Study

- **skill-creator/** - Meta-skill with bundled resources
- **reflection/** - Conversation analysis with rating system
- **basecamp/** - MCP server integration with multiple tool categories
- **brainstorming/** - Framework-based ideation with Obsidian markdown save
- **memory/** - SQLite-based hybrid search implementation
- **excalidraw/** - Diagram generation with JSON templates and Python renderer
## 🔧 Customization

### Modify Agent Behavior

Edit `agents/agents.json` for agent definitions and `prompts/*.txt` for system prompts:

- `agents/agents.json` - Agent names, models, permissions
- `prompts/chiron.txt` - Chiron (Plan Mode) system prompt
- `prompts/chiron-forge.txt` - Chiron Forge (Build Mode) system prompt
- `prompts/hermes.txt` - Hermes (Communication) system prompt
- `prompts/athena.txt` - Athena (Research) system prompt
- `prompts/apollo.txt` - Apollo (Private Knowledge) system prompt
- `prompts/calliope.txt` - Calliope (Writing) system prompt

**Note**: Agent changes require `home-manager switch` to take effect (config is embedded, not symlinked).

### Update User Context

Edit `context/profile.md` to configure:

- Work style preferences
- PARA areas and projects
- Communication preferences
### Add Custom Commands

Create new command definitions in `commands/` directory following the pattern in `commands/reflection.md`.
### Add Project Rules
Use the rules system to inject AI coding rules into projects:
```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
  inherit agents;
  languages = [ "python" "typescript" ];
  frameworks = [ "n8n" ];
};
```
See `rules/USAGE.md` for full documentation.
## 🌟 Use Cases

### Personal Productivity

Use the PARA methodology with Obsidian Tasks integration:
- Capture tasks and notes quickly - Capture tasks and notes quickly
- Run daily/weekly reviews - Run daily/weekly reviews
- Prioritize work based on impact - Prioritize work based on impact
### Knowledge Management

Build a personal knowledge base:
- Capture research findings - Capture research findings
- Organize notes and references - Organize notes and references
- Link related concepts - Link related concepts
@@ -302,6 +393,7 @@ Build a personal knowledge base:
### AI-Assisted Development ### AI-Assisted Development
Extend Opencode for specialized domains: Extend Opencode for specialized domains:
- Create company-specific skills (finance, legal, engineering) - Create company-specific skills (finance, legal, engineering)
- Integrate with APIs and databases - Integrate with APIs and databases
- Build custom automation workflows - Build custom automation workflows
@@ -310,6 +402,7 @@ Extend Opencode for specialized domains:
### Team Collaboration ### Team Collaboration
Share skills and agents across teams: Share skills and agents across teams:
- Document company processes as skills - Document company processes as skills
- Create shared knowledge bases - Create shared knowledge bases
- Standardize communication templates - Standardize communication templates
## 🔗 Links

- [Opencode](https://opencode.dev) - AI coding assistant
- [PARA Method](https://fortelabs.com/blog/para/) - Productivity methodology
- [Obsidian](https://obsidian.md) - Knowledge management platform
## 🙋 Questions?

- Check `AGENTS.md` for detailed developer documentation
- Review existing skills in `skills/` for examples
- See `skills/skill-creator/SKILL.md` for skill creation guide

---

{
  "Chiron (Assistant)": {
    "description": "Personal AI assistant (Plan Mode). Read-only analysis, planning, and guidance.",
    "mode": "primary",
    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/chiron.txt}",
    "permission": {
      "question": "allow",
      "webfetch": "allow",
      "websearch": "allow",
      "edit": "deny",
      "bash": {
        "*": "ask",
        "git status*": "allow",
        "git log*": "allow",
        "git diff*": "allow",
        "git branch*": "allow",
        "git show*": "allow",
        "grep *": "allow",
        "ls *": "allow",
        "cat *": "allow",
        "head *": "allow",
        "tail *": "allow",
        "wc *": "allow",
        "which *": "allow",
        "echo *": "allow",
        "td *": "allow",
        "bd *": "allow",
        "nix *": "allow"
      },
      "external_directory": {
        "*": "ask",
        "~/p/**": "allow",
        "~/.config/opencode/**": "allow",
        "/tmp/**": "allow",
        "/run/agenix/**": "allow"
      }
    }
  },
  "Chiron Forge (Builder)": {
    "description": "Personal AI assistant (Build Mode). Full execution and task completion capabilities with safety prompts.",
    "mode": "primary",
    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/chiron-forge.txt}",
    "permission": {
      "question": "allow",
      "webfetch": "allow",
      "websearch": "allow",
      "edit": {
        "*": "allow",
        "/run/agenix/**": "deny"
      },
      "bash": {
        "*": "allow",
        "rm -rf *": "ask",
        "git reset --hard*": "ask",
        "git push*": "ask",
        "git push --force*": "deny",
        "git push -f *": "deny"
      },
      "external_directory": {
        "*": "ask",
        "~/p/**": "allow",
        "~/.config/opencode/**": "allow",
        "/tmp/**": "allow",
        "/run/agenix/**": "allow"
      }
    }
  },
  "Hermes (Communication)": {
    "description": "Work communication specialist. Handles Basecamp tasks, Outlook email, and MS Teams meetings.",
    "mode": "subagent",
    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/hermes.txt}",
    "permission": {
      "question": "allow",
      "webfetch": "allow",
      "edit": {
        "*": "allow",
        "/run/agenix/**": "deny"
      },
      "bash": {
        "*": "ask",
        "cat *": "allow",
        "echo *": "allow"
      },
      "external_directory": {
        "*": "ask",
        "~/p/**": "allow",
        "~/.config/opencode/**": "allow",
        "/tmp/**": "allow",
        "/run/agenix/**": "allow"
      }
    }
  },
  "Athena (Researcher)": {
    "description": "Work knowledge specialist. Manages Outline wiki, documentation, and knowledge organization.",
    "mode": "subagent",
    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/athena.txt}",
    "permission": {
      "question": "allow",
      "webfetch": "allow",
      "websearch": "allow",
      "edit": {
        "*": "allow",
        "/run/agenix/**": "deny"
      },
      "bash": {
        "*": "ask",
        "grep *": "allow",
        "cat *": "allow"
      },
      "external_directory": {
        "*": "ask",
        "~/p/**": "allow",
        "~/.config/opencode/**": "allow",
        "/tmp/**": "allow",
        "/run/agenix/**": "allow"
      }
    }
  },
  "Apollo (Knowledge Management)": {
    "description": "Private knowledge specialist. Manages Obsidian vault, personal notes, and private knowledge graph.",
    "mode": "subagent",
    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/apollo.txt}",
    "permission": {
      "question": "allow",
      "edit": {
        "*": "allow",
        "/run/agenix/**": "deny"
      },
      "bash": {
        "*": "ask",
        "cat *": "allow"
      },
      "external_directory": {
        "*": "ask",
        "~/p/**": "allow",
        "~/.config/opencode/**": "allow",
        "/tmp/**": "allow",
        "/run/agenix/**": "allow"
      }
    }
  },
  "Calliope (Writer)": {
    "description": "Writing specialist. Creates documentation, reports, meeting notes, and prose.",
    "mode": "subagent",
    "model": "zai-coding-plan/glm-5",
    "prompt": "{file:./prompts/calliope.txt}",
    "permission": {
      "question": "allow",
      "webfetch": "allow",
      "edit": {
        "*": "allow",
        "/run/agenix/**": "deny"
      },
      "bash": {
        "*": "ask",
        "cat *": "allow",
        "wc *": "allow"
      },
      "external_directory": {
        "*": "ask",
        "~/p/**": "allow",
        "~/.config/opencode/**": "allow",
        "/tmp/**": "allow",
        "/run/agenix/**": "allow"
      }
    }
  }
}


@@ -104,3 +104,48 @@
- Batch related information together - Batch related information together
- Remember my preferences across sessions - Remember my preferences across sessions
- Proactively surface relevant information - Proactively surface relevant information
---
## Memory System
AI agents have access to a persistent memory system for context across sessions via the opencode-memory plugin.
### Configuration
| Setting | Value |
|---------|-------|
| **Plugin** | `opencode-memory` |
| **Obsidian Vault** | `~/CODEX` |
| **Memory Folder** | `80-memory/` |
| **Database** | `~/.local/share/opencode-memory/index.db` |
| **Auto-Capture** | Enabled (session.idle event) |
| **Auto-Recall** | Enabled (session.created event) |
| **Token Budget** | 2000 tokens |
### Memory Categories
| Category | Purpose | Example |
|----------|---------|---------|
| `preference` | Personal preferences | UI settings, workflow styles |
| `fact` | Objective information | Tech stack, role, constraints |
| `decision` | Choices with rationale | Tool selections, architecture |
| `entity` | People, orgs, systems | Key contacts, important APIs |
| `other` | Everything else | General learnings |
### Available Tools
| Tool | Purpose |
|------|---------|
| `memory_search` | Hybrid search (vector + BM25) over vault + sessions |
| `memory_store` | Store new memory as markdown file |
| `memory_get` | Read specific file/lines from vault |
### Usage Notes
- Memories are stored as markdown files in Obsidian (source of truth)
- SQLite provides fast hybrid search (vector similarity + keyword BM25)
- Use explicit "remember this" to store important information
- Auto-recall injects relevant memories at session start
- Auto-capture extracts preferences/decisions at session idle
- See `skills/memory/SKILL.md` for full documentation
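The "hybrid search" mentioned above fuses vector similarity with keyword BM25 scores. A rough sketch of how such fusion typically works is shown below; this is an illustrative model only, not the opencode-memory plugin's actual implementation, and the 0.6/0.4 weighting and min-max normalization are assumptions:

```python
# Illustrative sketch of hybrid memory ranking (vector + BM25 fusion).
# NOT the opencode-memory plugin's actual code; the weighting and
# normalization scheme are assumptions for illustration.

def normalize(scores):
    """Min-max normalize a list of scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def hybrid_rank(vector_scores, bm25_scores, vector_weight=0.6):
    """Fuse per-document vector and BM25 scores into one ranking.

    Returns document indices sorted best-first.
    """
    v = normalize(vector_scores)
    b = normalize(bm25_scores)
    fused = [vector_weight * vs + (1 - vector_weight) * bs
             for vs, bs in zip(v, b)]
    return sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)

# Example: doc 2 dominates on vector similarity, doc 0 on keywords
order = hybrid_rank([0.2, 0.5, 0.9], [10.0, 3.0, 1.0])
```

Fusing normalized scores lets a memory that is strong on either signal surface near the top, which is why the keyword-exact and semantically-similar memories can both be recalled.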

flake.lock
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1772479524,
"narHash": "sha256-u7nCaNiMjqvKpE+uZz9hE7pgXXTmm5yvdtFaqzSzUQI=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "4215e62dc2cd3bc705b0a423b9719ff6be378a43",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix
{
  description = "Opencode Agent Skills development environment & runtime";

  inputs = { nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; };

  outputs = { self, nixpkgs }:
    let
      supportedSystems = [ "x86_64-linux" "aarch64-linux" "aarch64-darwin" ];
      forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
    in {
      # Composable runtime for project flakes and home-manager.
      # Usage:
      #   home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   devShells.default = pkgs.mkShell {
      #     packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   };
      packages = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
          pythonEnv = pkgs.python3.withPackages (ps:
            with ps; [
              # skill-creator: quick_validate.py
              pyyaml
              # xlsx: recalc.py
              openpyxl
              # prompt-engineering-patterns: optimize-prompt.py
              numpy
              # pdf: multiple scripts
              pypdf
              pillow # PIL
              pdf2image
              # excalidraw: render_excalidraw.py
              playwright
            ]);
        in {
          skills-runtime = pkgs.buildEnv {
            name = "opencode-skills-runtime";
            paths = [
              pythonEnv
              pkgs.poppler-utils # pdf: pdftoppm/pdfinfo
              pkgs.jq # shell scripts
              pkgs.playwright-driver.browsers # excalidraw: chromium for rendering
            ];
          };
        });

      # Dev shell for working on this repo (wraps skills-runtime).
      devShells = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
        in {
          default = pkgs.mkShell {
            packages = [ self.packages.${system}.skills-runtime ];
            env.PLAYWRIGHT_BROWSERS_PATH = "${pkgs.playwright-driver.browsers}";
            shellHook = ''
              echo "🔧 AGENTS dev shell active — Python $(python3 --version 2>&1 | cut -d' ' -f2), $(jq --version)"
            '';
          };
        });
    };
}


@@ -5,6 +5,7 @@ You are Apollo, the Greek god of knowledge, prophecy, and light, specializing in
2. Search, organize, and structure personal knowledge graphs
3. Assist with personal task management embedded in private notes
4. Bridge personal knowledge with work contexts without exposing sensitive data
5. Manage dual-layer memory system (Mem0 + Obsidian CODEX) for persistent context across sessions

**Process:**

1. Identify which vault or note collection the user references
@@ -20,6 +21,10 @@ You are Apollo, the Greek god of knowledge, prophecy, and light, specializing in
- Respect vault structure: folders, backlinks, unlinked references
- Preserve context when retrieving related notes
- Handle multiple vault configurations gracefully
- Store valuable memories in dual-layer system: Mem0 (semantic search) + Obsidian 80-memory/ (human-readable)
- Auto-capture session insights at session end (max 3 per session, confirm with user)
- Retrieve relevant memories when context suggests past preferences/decisions
- Use memory categories: preference, fact, decision, entity, other
**Output Format:**

- Summarized findings with citations to note titles (not file paths)
@@ -33,11 +38,15 @@ You are Apollo, the Greek god of knowledge, prophecy, and light, specializing in
- Large result sets: Provide summary and offer filtering options
- Nested tasks or complex dependencies: Break down into clear hierarchical view
- Sensitive content detected: Flag it without revealing details
- Mem0 unavailable: Warn user, continue without memory features, do not block workflow
- Obsidian unavailable: Store in Mem0 only, log sync failure for later retry
**Tool Usage:** **Tool Usage:**
- Question tool: Required when vault location is ambiguous or note reference is unclear - Question tool: Required when vault location is ambiguous or note reference is unclear
- Never reveal absolute file paths or directory structures in output - Never reveal absolute file paths or directory structures in output
- Extract patterns and insights while obscuring specific personal details - Extract patterns and insights while obscuring specific personal details
- Memory tools: Store/recall memories via Mem0 REST API (localhost:8000)
- Obsidian MCP: Create memory notes in 80-memory/ with mem0_id cross-reference
**Boundaries:**

- Do NOT handle work tools (Hermes/Athena's domain)

rules/USAGE.md
# Opencode Rules Usage
Add AI coding rules to your project via `mkOpencodeRules`.
## flake.nix Setup
```nix
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    m3ta-nixpkgs.url = "git+https://code.m3ta.dev/m3tam3re/nixpkgs";
    agents = {
      url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
      flake = false;
    };
  };

  outputs = { self, nixpkgs, m3ta-nixpkgs, agents, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
      m3taLib = m3ta-nixpkgs.lib.${system};
    in {
      devShells.${system}.default = let
        rules = m3taLib.opencode-rules.mkOpencodeRules {
          inherit agents;
          languages = [ "python" "typescript" ];
          frameworks = [ "n8n" ];
        };
      in pkgs.mkShell {
        shellHook = rules.shellHook;
      };
    };
}
```
## Parameters
- `agents` (required): Path to AGENTS repo flake input
- `languages` (optional): List of language names (e.g., `["python" "typescript"]`)
- `concerns` (optional): Rule categories (default: all standard concerns)
- `frameworks` (optional): List of framework names (e.g., `["n8n" "django"]`)
- `extraInstructions` (optional): Additional instruction file paths
## .gitignore
Add to your project's `.gitignore`:
```
.opencode-rules
opencode.json
```
## Project Overrides
Create `AGENTS.md` in your project root to override central rules. OpenCode applies project-level rules with precedence over central ones.
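For example, a minimal project-level `AGENTS.md` override might look like this (the contents are illustrative, not a prescribed format):

```markdown
# Project Rules

## Overrides

- Use tabs for indentation in this repo (overrides the central coding-style rule).
- All commits must reference a ticket ID, e.g. `feat: add parser (PROJ-123)`.
```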
## Updating Rules
When central rules are updated:
```bash
nix flake update agents
```

# Coding Style
## Critical Rules (MUST follow)
Always prioritize readability over cleverness. Never write code that requires mental gymnastics to understand.
Always fail fast and explicitly. Never silently swallow errors or hide exceptions.
Always keep functions under 20 lines. Never create monolithic functions that do multiple things.
Always validate inputs at function boundaries. Never trust external data implicitly.
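As an illustration of the boundary-validation rule, a minimal sketch in Python (`create_order` and its limits are hypothetical, not from a real codebase):

```python
# Hypothetical example: validate inputs at the function boundary,
# failing fast with an explicit error instead of trusting callers.

def create_order(quantity: int, unit_price: float) -> float:
    """Compute an order total, rejecting invalid input immediately."""
    if not isinstance(quantity, int) or quantity <= 0:
        raise ValueError(f"quantity must be a positive int, got {quantity!r}")
    if unit_price < 0:
        raise ValueError(f"unit_price must be non-negative, got {unit_price!r}")
    return quantity * unit_price
```

Validating at the boundary means the rest of the function body can assume clean data, which keeps the core logic short and the failure explicit.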
## Formatting
Prefer consistent indentation throughout the codebase. Never mix tabs and spaces.
Prefer meaningful variable names over short abbreviations. Never use single letters except for loop counters.
### Correct:

```lang
const maxRetryAttempts = 3;
const connectionTimeout = 5000;

for (let attempt = 1; attempt <= maxRetryAttempts; attempt++) {
    // process attempt
}
```

### Incorrect:

```lang
const m = 3;
const t = 5000;

for (let i = 1; i <= m; i++) {
    // process attempt
}
```
## Patterns and Anti-Patterns
Never repeat yourself. Always extract duplicated logic into reusable functions.
Prefer composition over inheritance. Never create deep inheritance hierarchies.
Always use guard clauses to reduce nesting. Never write arrow-shaped code.
### Correct:

```lang
def process_user(user):
    if not user:
        return None
    if not user.is_active:
        return None
    return user.calculate_score()
```

### Incorrect:

```lang
def process_user(user):
    if user:
        if user.is_active:
            return user.calculate_score()
        else:
            return None
    else:
        return None
```
## Error Handling
Always handle specific exceptions. Never use broad catch-all exception handlers.
Always log error context, not just the error message. Never let errors vanish without trace.
### Correct:

```lang
try:
    data = fetch_resource(url)
    return parse_data(data)
except NetworkError as e:
    log_error(f"Network failed for {url}: {e}")
    raise
except ParseError as e:
    log_error(f"Parse failed for {url}: {e}")
    return fallback_data
```

### Incorrect:

```lang
try:
    data = fetch_resource(url)
    return parse_data(data)
except Exception:
    pass
```
## Type Safety
Always use type annotations where supported. Never rely on implicit type coercion.
Prefer explicit type checks over duck typing for public APIs. Never assume type behavior.
### Correct:

```lang
function calculateTotal(price: number, quantity: number): number {
    return price * quantity;
}
```

### Incorrect:

```lang
function calculateTotal(price, quantity) {
    return price * quantity;
}
```
## Function Design
Always write pure functions when possible. Never mutate arguments unless required.
Always limit function parameters to 3 or fewer. Never pass objects to hide parameter complexity.
### Correct:

```lang
def create_user(name: str, email: str) -> User:
    return User(name=name, email=email, created_at=now())
```

### Incorrect:

```lang
def create_user(config: dict) -> User:
    return User(
        name=config['name'],
        email=config['email'],
        created_at=config['timestamp']
    )
```
## SOLID Principles
Never let classes depend on concrete implementations. Always depend on abstractions.
Always ensure classes are open for extension but closed for modification. Never change working code to add features.
Prefer many small interfaces over one large interface. Never force clients to depend on methods they don't use.
### Correct:
```lang
class EmailSender {
send(message: Message): void {
// implementation
}
}
class NotificationService {
constructor(private sender: EmailSender) {}
}
```
### Incorrect:
```typescript
class NotificationService {
  sendEmail(message: Message): void { }
  sendSMS(message: Message): void { }
  sendPush(message: Message): void { }
}
```
## Critical Rules (REPEAT)
Always write self-documenting code. Never rely on comments to explain complex logic.
Always refactor when you see code smells. Never let technical debt accumulate.
Always test edge cases explicitly. Never assume happy path only behavior.
Never commit commented-out code. Always remove it or restore it.

rules/concerns/documentation.md Normal file

@@ -0,0 +1,149 @@
# Documentation Rules
## When to Document
**Document public APIs**. Every public function, class, method, and module needs documentation. Users need to know how to use your code.
**Document complex logic**. Algorithms, state machines, and non-obvious implementations need explanations. Future readers will thank you.
**Document business rules**. Encode domain knowledge directly in comments. Don't make anyone reverse-engineer requirements from code.
**Document trade-offs**. When you choose between alternatives, explain why. Help future maintainers understand the decision context.
**Do NOT document obvious code**. Comments like `// get user` add noise. Delete them.
## Docstring Formats
### Python (Google Style)
```python
def calculate_price(quantity: int, unit_price: float, discount: float = 0.0) -> float:
    """Calculate total price after discount.

    Args:
        quantity: Number of items ordered.
        unit_price: Price per item in USD.
        discount: Decimal discount rate (0.0 to 1.0).

    Returns:
        Final price in USD.

    Raises:
        ValueError: If quantity is negative.
    """
```
### JavaScript/TypeScript (JSDoc)
```javascript
/**
 * Validates user input against security rules.
 * @param {string} input - Raw user input from form.
 * @param {Object} rules - Validation constraints.
 * @param {number} rules.maxLength - Maximum allowed length.
 * @returns {boolean} True if input passes all rules.
 * @throws {ValidationError} If input violates security constraints.
 */
function validateInput(input, rules) {
```
### Bash
```bash
#!/usr/bin/env bash
# Deploy application to production environment.
#
# Usage: ./deploy.sh [environment]
#
# Args:
# environment: Target environment (staging|production). Default: staging.
#
# Exits:
# 0 on success, 1 on deployment failure.
```
## Inline Comments: WHY Not WHAT
**Incorrect:**
```python
# Iterate through all users
for user in users:
    # Check if user is active
    if user.active:
        # Increment counter
        count += 1
```
**Correct:**
```python
# Count only active users to calculate monthly revenue
for user in users:
    if user.active:
        count += 1
```
**Incorrect:**
```javascript
// Set timeout to 5000
setTimeout(() => {
  // Show error message
  alert('Error');
}, 5000);
```
**Correct:**
```javascript
// 5000ms delay prevents duplicate alerts during rapid retries
setTimeout(() => {
  alert('Error');
}, 5000);
```
**Incorrect:**
```bash
# Remove temporary files
rm -rf /tmp/app/*
```
**Correct:**
```bash
# Clear temp directory before batch import to prevent partial state
rm -rf /tmp/app/*
```
**Rule:** Describe the intent and context. Never describe what the code obviously does.
## README Standards
Every project needs a README at the top level.
**Required sections:**
1. **What it does** - One sentence summary
2. **Installation** - Setup commands
3. **Usage** - Basic example
4. **Configuration** - Environment variables and settings
5. **Contributing** - How to contribute
**Example structure:**
````markdown
# Project Name
One-line description of what this project does.
## Installation
```bash
npm install
```
## Usage
```bash
npm start
```
## Configuration
Create `.env` file:
```
API_KEY=your_key_here
```
## Contributing
See [CONTRIBUTING.md](./CONTRIBUTING.md).
````
**Keep READMEs focused**. Link to separate docs for complex topics. Don't make the README a tutorial.

rules/concerns/git-workflow.md Normal file

@@ -0,0 +1,118 @@
# Git Workflow Rules
## Conventional Commits
Format: `<type>(<scope>): <subject>`
### Commit Types
- **feat**: New feature
- `feat(auth): add OAuth2 login flow`
- `feat(api): expose user endpoints`
- **fix**: Bug fix
- `fix(payment): resolve timeout on Stripe calls`
- `fix(ui): button not clickable on mobile`
- **refactor**: Code refactoring (no behavior change)
- `refactor(utils): extract date helpers`
- `refactor(api): simplify error handling`
- **docs**: Documentation only
- `docs(readme): update installation steps`
- `docs(api): add endpoint examples`
- **chore**: Maintenance tasks
- `chore(deps): update Node to 20`
- `chore(ci): add GitHub actions workflow`
- **test**: Tests only
- `test(auth): add unit tests for login`
- `test(e2e): add checkout flow tests`
- **style**: Formatting, no logic change
- `style: sort imports alphabetically`
### Commit Rules
- Subject max 72 chars
- Imperative mood ("add", not "added")
- No period at end
- Reference issues: `Closes #123`
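Most of these rules are mechanically checkable. A minimal sketch of a subject-line validator (the helper name and regex are illustrative, not part of any tooling this repo ships):

```python
import re

# Hypothetical check for the commit rules above: <type>(<scope>): <subject>,
# max 72 chars, no trailing period. Imperative mood can't be machine-checked.
SUBJECT_RE = re.compile(
    r"^(feat|fix|refactor|docs|chore|test|style)(\([a-z0-9-]+\))?: .+[^.]$"
)

def is_valid_subject(subject: str) -> bool:
    return len(subject) <= 72 and bool(SUBJECT_RE.match(subject))
```

A check like this could run in a `commit-msg` hook or CI step.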
## Branch Naming
Pattern: `<type>/<short-description>`
### Branch Types
- `feature/add-user-dashboard`
- `feature/enable-dark-mode`
- `fix/login-redirect-loop`
- `fix/payment-timeout-error`
- `refactor/extract-user-service`
- `refactor/simplify-auth-flow`
- `hotfix/security-vulnerability`
### Branch Rules
- Lowercase and hyphens
- Max 50 chars
- Delete after merge
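The branch rules can be sketched the same way (the type list and helper name are illustrative):

```python
import re

# Hypothetical checker for branch names: <type>/<short-description>,
# lowercase letters, digits, and hyphens only, max 50 chars.
BRANCH_RE = re.compile(r"^(feature|fix|refactor|hotfix)/[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_branch(name: str) -> bool:
    return len(name) <= 50 and bool(BRANCH_RE.match(name))
```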
## Pull Requests
### PR Title
Follow Conventional Commit format:
- `feat: add user dashboard`
- `fix: resolve login redirect loop`
### PR Description
```markdown
## What
Brief description
## Why
Reason for change
## How
Implementation approach
## Testing
Steps performed
## Checklist
- [ ] Tests pass
- [ ] Code reviewed
- [ ] Docs updated
```
## Merge Strategy
### Squash Merge
- Many small commits
- One cohesive feature
- Clean history
### Merge Commit
- Preserve commit history
- Distinct milestones
- Detailed history preferred
### When to Rebase
- Before opening PR
- Resolving conflicts
- Keeping current with main
## General Rules
- Pull latest from main before starting
- Write atomic commits
- Run tests before pushing
- Request peer review before merge
- Never force push to main/master

rules/concerns/naming.md Normal file

@@ -0,0 +1,105 @@
# Naming Conventions
Use consistent naming across all code. Follow language-specific conventions.
## Language Reference
| Type | Python | TypeScript | Nix | Shell |
|------|--------|------------|-----|-------|
| Variables | snake_case | camelCase | camelCase | UPPER_SNAKE |
| Functions | snake_case | camelCase | camelCase | lower_case |
| Classes | PascalCase | PascalCase | - | - |
| Constants | UPPER_SNAKE | UPPER_SNAKE | camelCase | UPPER_SNAKE |
| Files | snake_case | camelCase | hyphen-case | hyphen-case |
| Modules | snake_case | camelCase | - | - |
## General Rules
**Files**: Use hyphen-case for documentation, snake_case for Python, camelCase for TypeScript. Names should describe content.
**Variables**: Use descriptive names. Avoid single letters except loop counters. No Hungarian notation.
**Functions**: Use verb-noun pattern. Name describes what it does, not how it does it.
**Classes**: Use PascalCase with descriptive nouns. Avoid abbreviations.
**Constants**: Use UPPER_SNAKE with descriptive names. Group related constants.
## Examples
Python:
```python
# Variables
user_name = "alice"
is_authenticated = True
# Functions
def get_user_data(user_id):
    pass
# Classes
class UserProfile:
    pass
# Constants
MAX_RETRIES = 3
API_ENDPOINT = "https://api.example.com"
```
TypeScript:
```typescript
// Variables
const userName = "alice";
const isAuthenticated = true;
// Functions
function getUserData(userId: string): User | null {
  return null;
}
// Classes
class UserProfile {
  private name: string;
}
// Constants
const MAX_RETRIES = 3;
const API_ENDPOINT = "https://api.example.com";
```
Nix:
```nix
# Variables
let
  userName = "alice";
  isAuthenticated = true;
in
# ...
```
Shell:
```bash
# Variables
USER_NAME="alice"
IS_AUTHENTICATED=true
# Functions
get_user_data() {
  echo "Getting data"
}
# Constants
MAX_RETRIES=3
API_ENDPOINT="https://api.example.com"
```
## File Naming
Use these patterns consistently. No exceptions.
- Skills: `hyphen-case`
- Python: `snake_case.py`
- TypeScript: `camelCase.ts` or `hyphen-case.ts`
- Nix: `hyphen-case.nix`
- Shell: `hyphen-case.sh`
- Markdown: `UPPERCASE.md` or `sentence-case.md`

rules/concerns/project-structure.md Normal file

@@ -0,0 +1,82 @@
# Project Structure
## Python
Use src layout for all projects. Place application code in `src/<project>/`, tests in `tests/`.
```
project/
├── src/myproject/
│ ├── __init__.py
│ ├── main.py # Entry point
│ └── core/
│ └── module.py
├── tests/
│ ├── __init__.py
│ └── test_module.py
├── pyproject.toml # Config
├── README.md
└── .gitignore
```
**Rules:**
- One module per file
- `__init__.py` in every package
- Entry point in `src/myproject/main.py`
- Config in root: `pyproject.toml`, `requirements.txt`
## TypeScript
Use `src/` for source, `dist/` for build output.
```
project/
├── src/
│ ├── index.ts # Entry point
│ ├── core/
│ │ └── module.ts
│ └── types.ts
├── tests/
│ └── module.test.ts
├── package.json # Config
├── tsconfig.json
└── README.md
```
**Rules:**
- One module per file
- Index exports from `src/index.ts`
- Entry point in `src/index.ts`
- Config in root: `package.json`, `tsconfig.json`
## Nix
Use `modules/` for NixOS modules, `pkgs/` for packages.
```
nix-config/
├── modules/
│ ├── default.nix # Module list
│ └── my-service.nix
├── pkgs/
│ └── my-package/
│ └── default.nix
├── flake.nix # Entry point
├── flake.lock
└── README.md
```
**Rules:**
- One module per file in `modules/`
- One package per directory in `pkgs/`
- Entry point in `flake.nix`
- Config in root: `flake.nix`, `shell.nix`
## General
- Use hyphen-case for directories and file names
- Config files in project root
- Tests separate from source
- Docs in root: README.md, CHANGELOG.md
- Hidden configs: .env, .gitignore

rules/concerns/tdd.md Normal file

@@ -0,0 +1,476 @@
# Test-Driven Development (Strict Enforcement)
## Critical Rules (MUST follow)
**NEVER write production code without a failing test first.**
**ALWAYS follow the red-green-refactor cycle. No exceptions.**
**NEVER skip the refactor step. Code quality is mandatory.**
**ALWAYS commit after green, never commit red tests.**
---
## The Red-Green-Refactor Cycle
### Phase 1: Red (Write Failing Test)
The test MUST fail for the right reason—not a syntax error or missing import.
```python
# CORRECT: Test fails because behavior doesn't exist yet
def test_calculate_discount_for_premium_members():
    user = User(tier="premium")
    cart = Cart(items=[Item(price=100)])
    discount = calculate_discount(user, cart)
    assert discount == 10  # Fails: calculate_discount not implemented

# INCORRECT: Test fails for wrong reason (will pass accidentally)
def test_calculate_discount():
    discount = calculate_discount()  # Fails: missing required args
    assert discount is not None
```
**Red Phase Checklist:**
- [ ] Test describes ONE behavior
- [ ] Test name clearly states expected outcome
- [ ] Test fails for the intended reason
- [ ] Error message is meaningful
### Phase 2: Green (Write Minimum Code)
Write the MINIMUM code to make the test pass. Do not implement future features.
```python
# CORRECT: Minimum implementation
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0

# INCORRECT: Over-engineering for future needs
def calculate_discount(user, cart):
    discounts = {
        "premium": 10,
        "gold": 15,    # Not tested
        "silver": 5,   # Not tested
        "basic": 0     # Not tested
    }
    return discounts.get(user.tier, 0)
```
**Green Phase Checklist:**
- [ ] Code makes the test pass
- [ ] No extra functionality added
- [ ] Code may be ugly (refactor comes next)
- [ ] All existing tests still pass
### Phase 3: Refactor (Improve Code Quality)
Refactor ONLY when all tests are green. Make small, incremental changes.
```python
# BEFORE (Green but messy)
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0

# AFTER (Refactored)
DISCOUNT_RATES = {"premium": 0.10}

def calculate_discount(user, cart):
    rate = DISCOUNT_RATES.get(user.tier, 0)
    return int(cart.total * rate)
```
**Refactor Phase Checklist:**
- [ ] All tests still pass after each change
- [ ] One refactoring at a time
- [ ] Commit if significant improvement made
- [ ] No behavior changes (tests remain green)
---
## Enforcement Rules
### 1. Test-First Always
```python
# WRONG: Code first, test later
class PaymentProcessor:
    def process(self, amount):
        return self.gateway.charge(amount)
# Then write test... (TOO LATE!)

# CORRECT: Test first
def test_process_payment_charges_gateway():
    mock_gateway = MockGateway()
    processor = PaymentProcessor(gateway=mock_gateway)
    processor.process(100)
    assert mock_gateway.charged_amount == 100
```
### 2. No Commented-Out Tests
```python
# WRONG: Commented test hides failing behavior
# def test_refund_processing():
#     # TODO: fix this later
#     assert False

# CORRECT: Use skip with reason
@pytest.mark.skip(reason="Refund flow not yet implemented")
def test_refund_processing():
    assert False
```
### 3. Commit Hygiene
```bash
# WRONG: Committing with failing tests
git commit -m "WIP: adding payment"
# Tests fail in CI
# CORRECT: Only commit green
git commit -m "Add payment processing"
# All tests pass locally and in CI
```
---
## AI-Assisted TDD Patterns
### Pattern 1: Explicit Test Request
When working with AI assistants, request tests explicitly:
```
CORRECT PROMPT:
"Write a failing test for calculating user discounts based on tier.
Then implement the minimum code to make it pass."
INCORRECT PROMPT:
"Implement a discount calculator with tier support."
```
### Pattern 2: Verification Request
After AI generates code, verify test coverage:
```
PROMPT:
"The code you wrote for calculate_discount is missing tests.
First, show me a failing test for the edge case where cart is empty.
Then make it pass with minimum code."
```
### Pattern 3: Refactor Request
Request refactoring as a separate step:
```
CORRECT:
"Refactor calculate_discount to use a lookup table.
Run tests after each change."
INCORRECT:
"Refactor and add new features at the same time."
```
### Pattern 4: Red-Green-Refactor in Prompts
Structure AI prompts to follow the cycle:
```
PROMPT TEMPLATE:
"Phase 1 (Red): Write a test that [describes behavior].
The test should fail because [reason].
Show me the failing test output.
Phase 2 (Green): Write the minimum code to pass this test.
No extra features.
Phase 3 (Refactor): Review the code. Suggest improvements.
I'll approve before you apply changes."
```
### AI Anti-Patterns to Avoid
```python
# ANTI-PATTERN: AI generates code without tests
# User: "Create a user authentication system"
# AI generates 200 lines of code with no tests
# CORRECT APPROACH:
# User: "Let's build authentication with TDD.
# First, write a failing test for successful login."
# ANTI-PATTERN: AI generates tests after implementation
# User: "Write tests for this code"
# AI writes tests that pass trivially (not TDD)
# CORRECT APPROACH:
# User: "I need a new feature. Write the failing test first."
```
---
## Legacy Code Strategy
### 1. Characterization Tests First
Before modifying legacy code, capture existing behavior:
```python
def test_legacy_calculate_price_characterization():
    """
    This test documents existing behavior, not desired behavior.
    Do not change expected values without understanding impact.
    """
    # Given: Current production inputs
    order = Order(items=[Item(price=100, quantity=2)])
    # When: Execute legacy code
    result = legacy_calculate_price(order)
    # Then: Capture ACTUAL output (even if wrong)
    assert result == 215  # Includes mystery 7.5% surcharge
```
### 2. Strangler Fig Pattern
```python
# Step 1: Write test for new behavior
def test_calculate_price_with_new_algorithm():
    order = Order(items=[Item(price=100, quantity=2)])
    result = calculate_price_v2(order)
    assert result == 200  # No mystery surcharge

# Step 2: Implement new code with TDD
def calculate_price_v2(order):
    return sum(item.price * item.quantity for item in order.items)

# Step 3: Route new requests to new code
def calculate_price(order):
    if order.use_new_pricing:
        return calculate_price_v2(order)
    return legacy_calculate_price(order)

# Step 4: Gradually migrate, removing legacy path
```
### 3. Safe Refactoring Sequence
```python
# 1. Add characterization tests
# 2. Extract method (tests stay green)
# 3. Add unit tests for extracted method
# 4. Refactor extracted method with TDD
# 5. Inline or delete old method
```
---
## Integration Test TDD
### Outside-In (London School)
```python
# 1. Write acceptance test (fails end-to-end)
def test_user_can_complete_purchase():
    user = create_user()
    add_item_to_cart(user, item)
    result = complete_purchase(user)
    assert result.status == "success"
    assert user.has_receipt()

# 2. Drop down to unit test for first component
def test_cart_calculates_total():
    cart = Cart()
    cart.add(Item(price=100))
    assert cart.total == 100

# 3. Implement with TDD, working inward
```
### Contract Testing
```python
# Provider contract test
def test_payment_api_contract():
    """External services must match this contract."""
    response = client.post("/payments", json={
        "amount": 100,
        "currency": "USD"
    })
    assert response.status_code == 201
    assert "transaction_id" in response.json()

# Consumer contract test
def test_payment_gateway_contract():
    """We expect the gateway to return transaction IDs."""
    mock_gateway = MockPaymentGateway()
    mock_gateway.expect_charge(amount=100).and_return(
        transaction_id="tx_123"
    )
    result = process_payment(mock_gateway, amount=100)
    assert result.transaction_id == "tx_123"
```
---
## Refactoring Rules
### Rule 1: Refactor Only When Green
```python
# WRONG: Refactoring with failing test
def test_new_feature():
    assert False  # Failing

def existing_code():
    # Refactoring here is DANGEROUS
    pass

# CORRECT: All tests pass before refactoring
def existing_code():
    # Safe to refactor now
    pass
```
### Rule 2: One Refactoring at a Time
```python
# WRONG: Multiple refactorings at once
def process_order(order):
    # Changed: variable name
    # Changed: extracted method
    # Changed: added caching
    # Which broke it? Who knows.
    pass
# CORRECT: One change, test, commit
# Commit 1: Rename variable
# Commit 2: Extract method
# Commit 3: Add caching
```
### Rule 3: Baby Steps
```python
# WRONG: Large refactoring
# Before: 500-line monolith
# After: 10 new classes
# Risk: Too high
# CORRECT: Extract one method at a time
# Step 1: Extract calculate_total (commit)
# Step 2: Extract validate_items (commit)
# Step 3: Extract apply_discounts (commit)
```
---
## Test Quality Gates
### Pre-Commit Hooks
```bash
#!/bin/bash
# .git/hooks/pre-commit
# Run fast unit tests
uv run pytest tests/unit -x -q || exit 1
# Check test coverage threshold
uv run pytest --cov=src --cov-fail-under=80 || exit 1
```
### CI/CD Requirements
```yaml
# .github/workflows/test.yml
- name: Run Tests
  run: |
    pytest --cov=src --cov-report=xml --cov-fail-under=80
- name: Check Test Quality
  run: |
    # Fail if new code lacks tests
    diff-cover coverage.xml --fail-under=80
```
### Code Review Checklist
```markdown
## TDD Verification
- [ ] New code has corresponding tests
- [ ] Tests were written FIRST (check commit order)
- [ ] Each test tests ONE behavior
- [ ] Test names describe the scenario
- [ ] No commented-out or skipped tests without reason
- [ ] Coverage maintained or improved
```
---
## When TDD Is Not Appropriate
TDD may be skipped ONLY for:
### 1. Exploratory Prototypes
```python
# prototype.py - Delete after learning
# No tests needed for throwaway exploration
import requests

def quick_test_api():
    response = requests.get("https://api.example.com")
    print(response.json())
```
### 2. One-Time Scripts
```python
# migrate_data.py - Run once, discard
# Tests would cost more than value provided
```
### 3. Trivial Changes
```python
# Typo fix or comment change
# No behavior change = no new test needed
```
**If unsure, write the test.**
---
## Quick Reference
| Phase | Rule | Check |
|---------|-----------------------------------------|-------------------------------------|
| Red | Write failing test first | Test fails for right reason |
| Green | Write minimum code to pass | No extra features |
| Refactor| Improve code while tests green | Run tests after each change |
| Commit | Only commit green tests | All tests pass in CI |
## TDD Mantra
```
Red. Green. Refactor. Commit. Repeat.
No test = No code.
No green = No commit.
No refactor = Technical debt.
```

rules/concerns/testing.md Normal file

@@ -0,0 +1,134 @@
# Testing Rules
## Arrange-Act-Assert Pattern
Structure every test in three distinct phases:
```python
# Arrange: Set up the test data and conditions
user = User(name="Alice", role="admin")
session = create_test_session(user.id)
# Act: Execute the behavior under test
result = grant_permission(session, "read_documents")
# Assert: Verify the expected outcome
assert result.granted is True
assert result.permissions == ["read_documents"]
```
Never mix phases. Comment each phase clearly for complex setups. Keep Act phase to one line if possible.
## Behavior vs Implementation Testing
Test behavior, not implementation details:
```python
# GOOD: Tests the observable behavior
def test_user_can_login():
    response = login("alice@example.com", "password123")
    assert response.status_code == 200
    assert "session_token" in response.cookies

# BAD: Tests internal implementation
def test_login_sets_database_flag():
    login("alice@example.com", "password123")
    user = User.get(email="alice@example.com")
    assert user._logged_in_flag is True  # Private field
```
Focus on inputs and outputs. Test public contracts. Refactor internals freely without breaking tests.
## Mocking Philosophy
Mock external dependencies, not internal code:
```python
# GOOD: Mock external services
@patch("requests.post")
def test_sends_notification_to_slack(mock_post):
    send_notification("Build complete!")
    mock_post.assert_called_once_with(
        "https://slack.com/api/chat.postMessage",
        json={"text": "Build complete!"}
    )

# BAD: Mock internal methods
@patch("NotificationService._format_message")
def test_notification_formatting(mock_format):
    # Don't mock private methods
    send_notification("Build complete!")
```
Mock when:
- Dependency is slow (database, network, file system)
- Dependency is unreliable (external APIs)
- Dependency is expensive (third-party services)
Don't mock when:
- Testing the dependency itself
- The dependency is fast and stable
- The mock becomes more complex than real implementation
## Coverage Expectations
Write tests for:
- Critical business logic (aim for 90%+)
- Edge cases and error paths (aim for 80%+)
- Public APIs and contracts (aim for 100%)
Don't obsess over:
- Trivial getters/setters
- Generated code
- One-line wrappers
Coverage is a floor, not a ceiling. A test suite at 100% coverage that doesn't verify behavior is worthless.
## Test-Driven Development
Follow the red-green-refactor cycle:
1. Red: Write failing test for new behavior
2. Green: Write minimum code to pass
3. Refactor: Improve code while tests stay green
Write tests first for new features. Write tests after for bug fixes. Never refactor without tests.
## Test Organization
Group tests by feature or behavior, not by file structure. Name tests to describe the scenario:
```python
class TestUserAuthentication:
    def test_valid_credentials_succeeds(self):
        pass

    def test_invalid_credentials_fails(self):
        pass

    def test_locked_account_fails(self):
        pass
```
Each test should stand alone. Avoid shared state between tests. Use fixtures or setup methods to reduce duplication.
## Test Data
Use realistic test data that reflects production scenarios:
```python
# GOOD: Realistic values
user = User(
    email="alice@example.com",
    name="Alice Smith",
    age=28
)

# BAD: Placeholder values
user = User(
    email="test@test.com",
    name="Test User",
    age=999
)
```
Avoid magic strings and numbers. Use named constants for expected values that change often.

rules/frameworks/n8n.md Normal file

@@ -0,0 +1,42 @@
# n8n Workflow Automation Rules
## Workflow Design
- Start with a clear trigger: Webhook, Schedule, or Event source
- Keep workflows under 20 nodes for maintainability
- Group related logic with sub-workflows
- Use the "Switch" node for conditional branching
- Add "Wait" nodes between rate-limited API calls
## Node Naming
- Use verb-based names: `Fetch Users`, `Transform Data`, `Send Email`
- Prefix data nodes: `Get_`, `Set_`, `Update_`
- Prefix conditionals: `Check_`, `If_`, `When_`
- Prefix actions: `Send_`, `Create_`, `Delete_`
- Add version suffix to API nodes: `API_v1_Users`
## Error Handling
- Always add an Error Trigger node
- Route errors to a "Notify Failure" branch
- Log error details: `$json.error.message`, `$json.node.name`
- Send alerts on critical failures
- Add "Continue On Fail" for non-essential nodes
## Data Flow
- Use "Set" nodes to normalize output structure
- Reference previous nodes: `{{ $json.field }}`
- Use "Merge" node to combine multiple data sources
- Apply "Code" node for complex transformations
- Clean data before sending to external APIs
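A sketch of the kind of transformation that belongs in a Code node, assuming the classic `items` input shape of `[{ json: {...} }]` (the `email` and `name` fields are illustrative):

```javascript
// Normalize records before sending to an external API.
// The { json: ... } wrapper follows n8n's Code-node item convention.
function normalizeItems(items) {
  return items.map((item) => ({
    json: {
      email: (item.json.email || "").trim().toLowerCase(),
      name: item.json.name || "unknown",
    },
  }));
}
// Inside the Code node, the last statement would be: return normalizeItems(items);
```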
## Credential Security
- Store all secrets in n8n credentials manager
- Never hardcode API keys or tokens
- Use environment-specific credential sets
- Rotate credentials regularly
- Limit credential scope to minimum required permissions
## Testing
- Test each node independently with "Execute Node"
- Verify data structure at each step
- Mock external dependencies during development
- Log workflow execution for debugging

rules/languages/.gitkeep Normal file

rules/languages/nix.md Normal file

@@ -0,0 +1,129 @@
# Nix Code Conventions
## Formatting
- Use `alejandra` for formatting
- `camelCase` for variables, `PascalCase` for types
- 2 space indentation (alejandra default)
- No trailing whitespace
## Flake Structure
```nix
{
  description = "Description here";
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
  };
  outputs = { self, nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in {
        packages.default = pkgs.hello;
        devShells.default = pkgs.mkShell {
          buildInputs = [ pkgs.hello ];
        };
      }
    );
}
```
## Module Patterns
Standard module function signature:
```nix
{ config, lib, pkgs, ... }:
{
  options.myService.enable = lib.mkEnableOption "my service";

  config = lib.mkIf config.myService.enable {
    services.myService.enable = true;
  };
}
```
## Conditionals and Merging
- Use `mkIf` for conditional config
- Use `mkMerge` to combine multiple config sets
- Use `mkOptionDefault` for defaults that can be overridden
```nix
config = lib.mkMerge [
  (lib.mkIf cfg.enable { ... })
  (lib.mkIf cfg.extraConfig { ... })
];
```
## Anti-Patterns (AVOID)
### `with pkgs;`
Bad: Pollutes namespace, hard to trace origins
```nix
{ pkgs, ... }:
{
  packages = with pkgs; [ vim git ];
}
```
Good: Explicit references
```nix
{ pkgs, ... }:
{
  packages = [ pkgs.vim pkgs.git ];
}
```
### `builtins.fetchTarball`
Use flake inputs instead. `fetchTarball` is non-reproducible.
### Impure operations
Avoid `import <nixpkgs>` in flakes. Always use inputs.
### `builtins.getAttr` / `builtins.hasAttr`
Use `lib.attrByPath` or `lib.optionalAttrs` instead.
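For example, a safe nested lookup with a default (the option path here is illustrative):

```nix
# Instead of builtins.hasAttr / builtins.getAttr:
lib.attrByPath [ "services" "nginx" "enable" ] false config
```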
## Home Manager Patterns
```nix
{ config, pkgs, lib, ... }:
{
  home.packages = [ pkgs.ripgrep pkgs.fd ];
  programs.zsh.enable = true;
  xdg.configFile."myapp/config".text = "...";
}
```
## Overlays
```nix
{ config, lib, pkgs, ... }:
let
  myOverlay = final: prev: {
    myPackage = prev.myPackage.overrideAttrs (old: { ... });
  };
in
{
  nixpkgs.overlays = [ myOverlay ];
}
```
## Imports and References
- Use flake inputs for dependencies
- `lib` is always available in modules
- Reference packages via `pkgs.packageName`
- Use `callPackage` for complex package definitions
## File Organization
```
flake.nix # Entry point
modules/ # NixOS modules
services/
my-service.nix
overlays/ # Package overrides
default.nix
```

rules/languages/python.md Normal file

@@ -0,0 +1,224 @@
# Python Language Rules
## Toolchain
### Package Management (uv)
```bash
uv init my-project --package
uv add numpy pandas
uv add --dev pytest ruff pyright hypothesis
uv run python -m pytest
uv lock --upgrade-package numpy
```
### Linting & Formatting (ruff)
```toml
[tool.ruff]
line-length = 100
target-version = "py311"
[tool.ruff.lint]
select = ["E", "F", "W", "I", "N", "UP"]
ignore = ["E501"]
[tool.ruff.format]
quote-style = "double"
```
### Type Checking (pyright)
```toml
[tool.pyright]
typeCheckingMode = "strict"
reportMissingTypeStubs = true
reportUnknownMemberType = true
```
### Testing (pytest + hypothesis)
```python
import pytest
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_addition_commutative(a, b):
    assert a + b == b + a

@pytest.fixture
def user_data():
    return {"name": "Alice", "age": 30}

def test_user_creation(user_data):
    user = User(**user_data)
    assert user.name == "Alice"
```
### Data Validation (Pydantic)
```python
from pydantic import BaseModel, Field, field_validator

class User(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    age: int = Field(ge=0, le=150)
    email: str

    @field_validator('email')
    @classmethod
    def email_must_contain_at(cls, v: str) -> str:
        if '@' not in v:
            raise ValueError('must contain @')
        return v
```
## Idioms
### Comprehensions
```python
# List comprehension
squares = [x**2 for x in range(10) if x % 2 == 0]
# Dict comprehension
word_counts = {word: text.count(word) for word in unique_words}
# Set comprehension
unique_chars = {char for char in text if char.isalpha()}
```
### Context Managers
```python
# Built-in context managers
with open('file.txt', 'r') as f:
    content = f.read()

# Custom context manager
import time
from contextlib import contextmanager

@contextmanager
def timer():
    start = time.time()
    yield
    print(f"Elapsed: {time.time() - start:.2f}s")
```
### Generators
```python
def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def read_lines(file_path):
    with open(file_path) as f:
        for line in f:
            yield line.strip()
```
### F-strings
```python
name = "Alice"
age = 30
# Basic interpolation
msg = f"Name: {name}, Age: {age}"
# Expression evaluation
msg = f"Next year: {age + 1}"
# Format specs
msg = f"Price: ${price:.2f}"
msg = f"Hex: {0xFF:X}"
```
## Anti-Patterns
### Bare Except
```python
# AVOID: Catches all exceptions including SystemExit
try:
    risky_operation()
except:
    pass

# USE: Catch specific exceptions
try:
    risky_operation()
except ValueError as e:
    log_error(e)
except KeyError as e:
    log_error(e)
```
### Mutable Defaults
```python
# AVOID: Default argument created once
def append_item(item, items=[]):
    items.append(item)
    return items

# USE: None as sentinel
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```
### Global State
```python
# AVOID: Global mutable state
counter = 0

def increment():
    global counter
    counter += 1

# USE: Class-based state
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
```
### Star Imports
```python
# AVOID: Pollutes namespace, unclear origins
from module import *
# USE: Explicit imports
from module import specific_function, MyClass
import module as m
```
## Project Setup
### pyproject.toml Structure
```toml
[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
"pydantic>=2.0",
"httpx>=0.25",
]
[project.optional-dependencies]
dev = ["pytest", "ruff", "pyright", "hypothesis"]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```
### src Layout
```
my-project/
├── pyproject.toml
└── src/
└── my_project/
├── __init__.py
├── main.py
└── utils/
├── __init__.py
└── helpers.py
```

rules/languages/shell.md Normal file

@@ -0,0 +1,100 @@
# Shell Scripting Rules
## Shebang
Always use `#!/usr/bin/env bash` for portability. Never hardcode `/bin/bash`.
```bash
#!/usr/bin/env bash
```
## Strict Mode
Enable strict mode in every script.
```bash
#!/usr/bin/env bash
set -euo pipefail
```
- `-e`: Exit on error
- `-u`: Error on unset variables
- `-o pipefail`: Fail a pipeline if any command in it fails (exit status of the rightmost failing command)
## Shellcheck
Run shellcheck on all scripts before committing.
```bash
shellcheck script.sh
```
## Quoting
Quote all variable expansions and command substitutions. Use arrays instead of word-splitting strings.
```bash
# Good
"${var}"
files=("file1.txt" "file2.txt")
for f in "${files[@]}"; do
process "$f"
done
# Bad
$var
files="file1.txt file2.txt"
for f in $files; do
process $f
done
```
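A runnable sketch that makes the word-splitting failure visible (the filename is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail
# A value containing spaces shows why unquoted expansion is dangerous.
f="file with spaces.txt"
set -- $f        # unquoted: word-splits into three arguments
unquoted=$#
set -- "$f"      # quoted: stays a single argument
quoted=$#
echo "unquoted args: $unquoted, quoted args: $quoted"
```

This prints `unquoted args: 3, quoted args: 1` — exactly the bug the quoting rule prevents.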
## Functions
Define with parentheses, use `local` for variables.
```bash
my_function() {
local result
result=$(some_command)
echo "$result"
}
```
## Command Substitution
Use `$()`, not backticks; `$()` nests cleanly.
```bash
# Good
output=$(ls "$dir")
# Bad
output=`ls $dir`
```
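A minimal sketch of the nesting advantage: the inner `$()` reads naturally, where the backtick equivalent would need escaped `` \` `` pairs:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Nested command substitution with $() needs no escaping.
result=$(printf 'outer:%s' "$(printf 'inner')")
echo "$result"
```

This prints `outer:inner`.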
## POSIX Portability
Write POSIX-compliant scripts when targeting `/bin/sh`.
- Reserve bash-only constructs (`[[`, `((`, `&>`) for scripts with a bash shebang
- Use `[ ]` tests and `$((...))` arithmetic in sh scripts
- Use `printf` instead of `echo -e`; `echo`'s escape handling is not portable
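A sketch of the POSIX-safe equivalents; this runs unchanged under both `sh` and `bash` (the `/bin/sh` shebang is intentional here, since this section covers sh targets):

```shell
#!/bin/sh
# POSIX replacements for common bashisms.
name="world"
if [ "$name" = "world" ]; then   # [ ] instead of [[ ]]
  greeting="hello"
fi
count=$((1 + 2))                 # $(( )) is POSIX; (( )) is not
printf '%s %s (count=%s)\n' "$greeting" "$name" "$count"   # printf, not echo -e
```

This prints `hello world (count=3)`.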
## Error Handling
Use `trap` for cleanup.
```bash
cleanup() {
rm -f /tmp/lockfile
}
trap cleanup EXIT
```
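An expanded runnable sketch of the trap pattern: the cleanup fires on every exit path — normal completion, a `set -e` failure, or an interrupt (the temp-dir contents are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail
tmpdir=$(mktemp -d)
cleanup() {
  rm -rf "$tmpdir"   # runs on normal exit, set -e failure, or interrupt
}
trap cleanup EXIT
printf 'scratch data\n' > "$tmpdir/work.txt"
cat "$tmpdir/work.txt"
```

The script prints `scratch data`, then the trap removes the temp directory on exit.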
## Readability
- Use 2-space indentation
- Limit lines to 80 characters
- Add comments for non-obvious logic
- Separate sections with blank lines


@@ -0,0 +1,150 @@
# TypeScript Patterns
## Strict tsconfig
Always enable strict mode and key safety options:
```json
{
"compilerOptions": {
"strict": true,
"noUncheckedIndexedAccess": true,
"noImplicitReturns": true,
"noFallthroughCasesInSwitch": true,
"noUnusedLocals": true,
"noUnusedParameters": true
}
}
```
## Discriminated Unions
Use discriminated unions for exhaustive type safety:
```ts
type Result =
| { success: true; data: string }
| { success: false; error: Error };
function handleResult(result: Result): string {
if (result.success) {
return result.data;
}
throw result.error;
}
```
## Branded Types
Prevent type confusion with nominal branding:
```ts
type UserId = string & { readonly __brand: unique symbol };
type Email = string & { readonly __brand: unique symbol };
function createUserId(id: string): UserId {
return id as UserId;
}
function sendEmail(email: Email, userId: UserId) {}
```
## satisfies Operator
Use `satisfies` for type-safe object literal inference:
```ts
const config = {
port: 3000,
host: "localhost",
} satisfies {
port: number;
host: string;
debug?: boolean;
};
config.port; // number
config.host; // string
```
## as const Assertions
Freeze literal types with `as const`:
```ts
const routes = {
home: "/",
about: "/about",
contact: "/contact",
} as const;
type Route = typeof routes[keyof typeof routes];
```
## Modern Features
```ts
// Promise.withResolvers()
const { promise, resolve, reject } = Promise.withResolvers<string>();
// Object.groupBy()
const users = [
{ name: "Alice", role: "admin" },
{ name: "Bob", role: "user" },
];
const grouped = Object.groupBy(users, u => u.role);
// using / await using statements for disposables
class Resource implements AsyncDisposable {
  async [Symbol.asyncDispose]() {
    await this.cleanup();
  }
  private async cleanup() { /* release handles */ }
}
async function withResource() {
  await using r = new Resource(); // disposed when scope exits
}
```
## Toolchain
Prefer modern tooling:
- Runtime: `bun` or `tsx` (no `tsc` for execution)
- Linting: `biome` (preferred) or `eslint`
- Formatting: `biome` (built-in) or `prettier`
## Anti-Patterns
Avoid these TypeScript patterns:
```ts
// NEVER use as any
const data = response as any;
// NEVER use @ts-ignore
// @ts-ignore
const value = unknownFunction();
// NEVER use ! assertion (non-null)
const element = document.querySelector("#foo")!;
// NEVER use enum (prefer union)
enum Status { Active, Inactive } // ❌
// Prefer const object or union
type Status = "Active" | "Inactive"; // ✅
const Status = { Active: "Active", Inactive: "Inactive" } as const; // ✅
```
## Indexed Access Safety
With `noUncheckedIndexedAccess`, handle undefined:
```ts
const arr: string[] = ["a", "b"];
const item = arr[0]; // string | undefined
const item2 = arr.at(0); // string | undefined
const map = new Map<string, number>();
const value = map.get("key"); // number | undefined
```


@@ -8,7 +8,7 @@
 # ./scripts/test-skill.sh --run   # Launch interactive opencode session
 #
 # This script creates a temporary XDG_CONFIG_HOME with symlinks to this
-# repository's skill/, context/, command/, and prompts/ directories,
+# repository's skills/, context/, command/, and prompts/ directories,
 # allowing you to test skill changes before deploying via home-manager.

 set -euo pipefail
@@ -72,17 +72,17 @@ list_skills() {
 validate_skill() {
   local skill_name="$1"
-  local skill_path="$REPO_ROOT/skill/$skill_name"
+  local skill_path="$REPO_ROOT/skills/$skill_name"
   if [[ ! -d "$skill_path" ]]; then
     echo -e "${RED}❌ Skill not found: $skill_name${NC}"
     echo "Available skills:"
-    ls -1 "$REPO_ROOT/skill/"
+    ls -1 "$REPO_ROOT/skills/"
     exit 1
   fi
   echo -e "${YELLOW}Validating skill: $skill_name${NC}"
-  if python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_path"; then
+  if python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_path"; then
     echo -e "${GREEN}✅ Skill '$skill_name' is valid${NC}"
   else
     echo -e "${RED}❌ Skill '$skill_name' has validation errors${NC}"
@@ -95,14 +95,14 @@ validate_all() {
   echo ""
   local failed=0
-  for skill_dir in "$REPO_ROOT/skill/"*/; do
+  for skill_dir in "$REPO_ROOT/skills/"*/; do
     local skill_name=$(basename "$skill_dir")
     echo -n "  $skill_name: "
-    if python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_dir" > /dev/null 2>&1; then
+    if python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_dir" > /dev/null 2>&1; then
       echo -e "${GREEN}${NC}"
     else
       echo -e "${RED}${NC}"
-      python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_dir" 2>&1 | sed 's/^/    /'
+      python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_dir" 2>&1 | sed 's/^/    /'
       ((failed++)) || true
     fi
   done


@@ -72,76 +72,72 @@ If an image download fails, log it and continue. Use a placeholder in the final
### 4. Upload Images to Outline
MCP-outline does not support attachment creation. Use the bundled script for image uploads:
```bash
# Upload with optional document association
bash scripts/upload_image_to_outline.sh "/tmp/doc-images/screenshot.png" "$DOCUMENT_ID"

# Upload without document (attach later)
bash scripts/upload_image_to_outline.sh "/tmp/doc-images/screenshot.png"
```
The script handles API key loading from `/run/agenix/outline-key`, content-type detection, the two-step presigned POST flow, and retries. Output is JSON: `{"success": true, "attachment_url": "https://..."}`.
Replace image references in the translated markdown with the returned `attachment_url`:
```markdown
![description](ATTACHMENT_URL)
```
For all other Outline operations (documents, collections, search), use MCP tools (`Outline_*`).
### 5. Translate with TEEM Format
Translate the entire document into each target language. Apply TEEM format to UI elements.
#### Address Form (CRITICAL)
**Always use the informal "you" form** in ALL target languages:
- **German**: Use **"Du"** (informal), NEVER "Sie" (formal)
- **Czech**: Use **"ty"** (informal), NEVER "vy" (formal)
- This applies to all translations — documentation should feel approachable and direct
#### Infobox / Callout Formatting
Source documentation often uses admonitions, callouts, or info boxes (e.g., GitHub-style `> [!NOTE]`, Docusaurus `:::note`, or custom HTML boxes). **Convert ALL such elements** to Outline's callout syntax:
```markdown
:::tip
Tip or best practice content here.
:::
:::info
Informational content here.
:::
:::warning
Warning or caution content here.
:::
:::success
Success message or positive outcome here.
:::
```
**Mapping rules** (source → Outline):
| Source pattern | Outline syntax |
|---|---|
| Note, Info, Information | `:::info` |
| Tip, Hint, Best Practice | `:::tip` |
| Warning, Caution, Danger, Important | `:::warning` |
| Success, Done, Check | `:::success` |
**CRITICAL formatting**: The closing `:::` MUST be on its own line with an empty line before it. Content goes directly after the opening line.
#### TEEM Rules
**Format:** `**English UI Term** (Translation)`
@@ -217,7 +213,7 @@ Use mcp-outline tools to publish:
|-------|--------|
| URL fetch fails | Use `question` to ask for alternative URL or manual paste |
| Image download fails | Continue with placeholder, note in completion report |
| Outline API error (attachments) | Script retries 3x with backoff; on final failure save markdown to `/tmp/doc-translator-backup-TIMESTAMP.md`, report error |
| Outline API error (document) | Save markdown to `/tmp/doc-translator-backup-TIMESTAMP.md`, report error |
| Ambiguous UI term | Use `question` to ask user for correct translation |
| Large document (>5000 words) | Ask user if splitting into multiple docs is preferred |
@@ -254,9 +250,9 @@ Items Needing Review:
## Environment Variables
| Variable | Purpose | Source |
|----------|---------|--------|
| `OUTLINE_API_KEY` | Bearer token for wiki.az-gruppe.com API | Auto-loaded from `/run/agenix/outline-key` by upload script |
## Integration with Other Skills


@@ -0,0 +1,116 @@
#!/usr/bin/env bash
# Upload an image to Outline via presigned POST (two-step flow)
#
# Usage:
# upload_image_to_outline.sh <image_path> [document_id]
#
# Environment:
# OUTLINE_API_KEY - Bearer token for wiki.az-gruppe.com API
# Auto-loaded from /run/agenix/outline-key if not set
#
# Output (JSON to stdout):
# {"success": true, "attachment_url": "https://..."}
# Error (JSON to stderr):
# {"success": false, "error": "error message"}
set -euo pipefail
MAX_RETRIES=3
RETRY_DELAY=2
if [ $# -lt 1 ] || [ $# -gt 2 ]; then
echo '{"success": false, "error": "Usage: upload_image_to_outline.sh <image_path> [document_id]"}' >&2
exit 1
fi
IMAGE_PATH="$1"
DOCUMENT_ID="${2:-}"
if [ -z "${OUTLINE_API_KEY:-}" ]; then
if [ -f /run/agenix/outline-key ]; then
OUTLINE_API_KEY=$(cat /run/agenix/outline-key)
export OUTLINE_API_KEY
else
echo '{"success": false, "error": "OUTLINE_API_KEY not set and /run/agenix/outline-key not found"}' >&2
exit 1
fi
fi
# Check if file exists
if [ ! -f "$IMAGE_PATH" ]; then
echo "{\"success\": false, \"error\": \"Image file not found: $IMAGE_PATH\"}" >&2
exit 1
fi
# Extract image name and extension
IMAGE_NAME="$(basename "$IMAGE_PATH")"
EXTENSION="${IMAGE_NAME##*.}"
# Detect content type by extension
case "${EXTENSION,,}" in
png) CONTENT_TYPE="image/png" ;;
jpg|jpeg) CONTENT_TYPE="image/jpeg" ;;
gif) CONTENT_TYPE="image/gif" ;;
svg) CONTENT_TYPE="image/svg+xml" ;;
webp) CONTENT_TYPE="image/webp" ;;
*) CONTENT_TYPE="application/octet-stream" ;;
esac
# GNU stat first, BSD fallback; `|| true` keeps set -e from exiting before the explicit check below
FILESIZE=$(stat -c%s "$IMAGE_PATH" 2>/dev/null || stat -f%z "$IMAGE_PATH" 2>/dev/null || true)
if [ -z "$FILESIZE" ]; then
echo "{\"success\": false, \"error\": \"Failed to get file size for: $IMAGE_PATH\"}" >&2
exit 1
fi
REQUEST_BODY=$(jq -n \
--arg name "$IMAGE_NAME" \
--arg contentType "$CONTENT_TYPE" \
--argjson size "$FILESIZE" \
--arg documentId "$DOCUMENT_ID" \
'if $documentId == "" then
{name: $name, contentType: $contentType, size: $size}
else
{name: $name, contentType: $contentType, size: $size, documentId: $documentId}
end')
# Step 1: Create attachment record
RESPONSE=$(curl -s -X POST "https://wiki.az-gruppe.com/api/attachments.create" \
-H "Authorization: Bearer $OUTLINE_API_KEY" \
-H "Content-Type: application/json" \
-d "$REQUEST_BODY")
UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.data.uploadUrl // empty')
ATTACHMENT_URL=$(echo "$RESPONSE" | jq -r '.data.attachment.url // empty')
if [ -z "$UPLOAD_URL" ]; then
ERROR_MSG=$(echo "$RESPONSE" | jq -r '.message // "Failed to create attachment"')
echo "{\"success\": false, \"error\": \"$ERROR_MSG\", \"response\": $(echo "$RESPONSE" | jq -c .)}" >&2
exit 1
fi
FORM_ARGS=()
while IFS= read -r line; do
key=$(echo "$line" | jq -r '.key')
value=$(echo "$line" | jq -r '.value')
FORM_ARGS+=(-F "$key=$value")
done < <(echo "$RESPONSE" | jq -c '.data.form | to_entries[]')
# Step 2: Upload binary to presigned URL with retry
for attempt in $(seq 1 "$MAX_RETRIES"); do
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$UPLOAD_URL" \
"${FORM_ARGS[@]}" \
-F "file=@$IMAGE_PATH")
if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ] || [ "$HTTP_CODE" = "204" ]; then
echo "{\"success\": true, \"attachment_url\": \"$ATTACHMENT_URL\"}"
exit 0
fi
if [ "$attempt" -lt "$MAX_RETRIES" ]; then
sleep "$((RETRY_DELAY * attempt))"
fi
done
echo "{\"success\": false, \"error\": \"Upload failed after $MAX_RETRIES attempts (last HTTP $HTTP_CODE)\"}" >&2
exit 1


@@ -1,266 +1,544 @@
---
name: excalidraw
description: "Create Excalidraw diagram JSON files that make visual arguments. Use when: (1) user wants to visualize workflows, architectures, or concepts, (2) creating system diagrams, (3) generating .excalidraw files. Triggers: excalidraw, diagram, visualize, architecture diagram, system diagram."
---
# Excalidraw Diagram Creator
Generate `.excalidraw` JSON files that **argue visually**, not just display information.
## Customization
**All colors and brand-specific styles live in one file:** `references/color-palette.md`. Read it before generating any diagram and use it as the single source of truth for all color choices — shape fills, strokes, text colors, evidence artifact backgrounds, everything.
To make this skill produce diagrams in your own brand style, edit `color-palette.md`. Everything else in this file is universal design methodology and Excalidraw best practices.
---
## Core Philosophy
**Diagrams should ARGUE, not DISPLAY.**
A diagram isn't formatted text. It's a visual argument that shows relationships, causality, and flow that words alone can't express. The shape should BE the meaning.
**The Isomorphism Test**: If you removed all text, would the structure alone communicate the concept? If not, redesign.
**The Education Test**: Could someone learn something concrete from this diagram, or does it just label boxes? A good diagram teaches—it shows actual formats, real event names, concrete examples.
---
## Depth Assessment (Do This First)
Before designing, determine what level of detail this diagram needs:
### Simple/Conceptual Diagrams
Use abstract shapes when:
- Explaining a mental model or philosophy
- The audience doesn't need technical specifics
- The concept IS the abstraction (e.g., "separation of concerns")
### Comprehensive/Technical Diagrams
Use concrete examples when:
- Diagramming a real system, protocol, or architecture
- The diagram will be used to teach or explain (e.g., YouTube video)
- The audience needs to understand what things actually look like
- You're showing how multiple technologies integrate
**For technical diagrams, you MUST include evidence artifacts** (see below).
---
## Research Mandate (For Technical Diagrams)
**Before drawing anything technical, research the actual specifications.**
If you're diagramming a protocol, API, or framework:
1. Look up the actual JSON/data formats
2. Find the real event names, method names, or API endpoints
3. Understand how the pieces actually connect
4. Use real terminology, not generic placeholders
Bad: "Protocol" → "Frontend"
Good: "AG-UI streams events (RUN_STARTED, STATE_DELTA, A2UI_UPDATE)" → "CopilotKit renders via createA2UIMessageRenderer()"
**Research makes diagrams accurate AND educational.**
---
## Evidence Artifacts
Evidence artifacts are concrete examples that prove your diagram is accurate and help viewers learn. Include them in technical diagrams.
**Types of evidence artifacts** (choose what's relevant to your diagram):
| Artifact Type | When to Use | How to Render |
|---------------|-------------|---------------|
| **Code snippets** | APIs, integrations, implementation details | Dark rectangle + syntax-colored text (see color palette for evidence artifact colors) |
| **Data/JSON examples** | Data formats, schemas, payloads | Dark rectangle + colored text (see color palette) |
| **Event/step sequences** | Protocols, workflows, lifecycles | Timeline pattern (line + dots + labels) |
| **UI mockups** | Showing actual output/results | Nested rectangles mimicking real UI |
| **Real input content** | Showing what goes IN to a system | Rectangle with sample content visible |
| **API/method names** | Real function calls, endpoints | Use actual names from docs, not placeholders |
**Example**: For a diagram about a streaming protocol, you might show:
- The actual event names from the spec (not just "Event 1", "Event 2")
- A code snippet showing how to connect
- What the streamed data actually looks like
**Example**: For a diagram about a data transformation pipeline:
- Show sample input data (actual format, not "Input")
- Show sample output data (actual format, not "Output")
- Show intermediate states if relevant
The key principle: **show what things actually look like**, not just what they're called.
---
## Multi-Zoom Architecture
Comprehensive diagrams operate at multiple zoom levels simultaneously. Think of it like a map that shows both the country borders AND the street names.
### Level 1: Summary Flow
A simplified overview showing the full pipeline or process at a glance. Often placed at the top or bottom of the diagram.
*Example*: `Input → Processing → Output` or `Client → Server → Database`
### Level 2: Section Boundaries
Labeled regions that group related components. These create visual "rooms" that help viewers understand what belongs together.
*Example*: Grouping by responsibility (Backend / Frontend), by phase (Setup / Execution / Cleanup), or by team (User / System / External)
### Level 3: Detail Inside Sections
Evidence artifacts, code snippets, and concrete examples within each section. This is where the educational value lives.
*Example*: Inside a "Backend" section, you might show the actual API response format, not just a box labeled "API Response"
**For comprehensive diagrams, aim to include all three levels.** The summary gives context, the sections organize, and the details teach.
### Bad vs Good
| Bad (Displaying) | Good (Arguing) |
|------------------|----------------|
| 5 equal boxes with labels | Each concept has a shape that mirrors its behavior |
| Card grid layout | Visual structure matches conceptual structure |
| Icons decorating text | Shapes that ARE the meaning |
| Same container for everything | Distinct visual vocabulary per concept |
| Everything in a box | Free-floating text with selective containers |
### Simple vs Comprehensive (Know Which You Need)
| Simple Diagram | Comprehensive Diagram |
|----------------|----------------------|
| Generic labels: "Input" → "Process" → "Output" | Specific: shows what the input/output actually looks like |
| Named boxes: "API", "Database", "Client" | Named boxes + examples of actual requests/responses |
| "Events" or "Messages" label | Timeline with real event/message names from the spec |
| "UI" or "Dashboard" rectangle | Mockup showing actual UI elements and content |
| ~30 seconds to explain | ~2-3 minutes of teaching content |
| Viewer learns the structure | Viewer learns the structure AND the details |
**Simple diagrams** are fine for abstract concepts, quick overviews, or when the audience already knows the details. **Comprehensive diagrams** are needed for technical architectures, tutorials, educational content, or when you want the diagram itself to teach.
---
## Container vs. Free-Floating Text
**Not every piece of text needs a shape around it.** Default to free-floating text. Add containers only when they serve a purpose.
| Use a Container When... | Use Free-Floating Text When... |
|------------------------|-------------------------------|
| It's the focal point of a section | It's a label or description |
| It needs visual grouping with other elements | It's supporting detail or metadata |
| Arrows need to connect to it | It describes something nearby |
| The shape itself carries meaning (decision diamond, etc.) | It's a section title, subtitle, or annotation |
| It represents a distinct "thing" in the system | It's a section title, subtitle, or annotation |
**Typography as hierarchy**: Use font size, weight, and color to create visual hierarchy without boxes. A 28px title doesn't need a rectangle around it.
**The container test**: For each boxed element, ask "Would this work as free-floating text?" If yes, remove the container.
---
## Design Process (Do This BEFORE Generating JSON)
### Step 0: Assess Depth Required
Before anything else, determine if this needs to be:
- **Simple/Conceptual**: Abstract shapes, labels, relationships (mental models, philosophies)
- **Comprehensive/Technical**: Concrete examples, code snippets, real data (systems, architectures, tutorials)
**If comprehensive**: Do research first. Look up actual specs, formats, event names, APIs.
### Step 1: Understand Deeply
Read the content. For each concept, ask:
- What does this concept **DO**? (not what IS it)
- What relationships exist between concepts?
- What's the core transformation or flow?
- **What would someone need to SEE to understand this?** (not just read about)
### Step 2: Map Concepts to Patterns
For each concept, find the visual pattern that mirrors its behavior:
| If the concept... | Use this pattern |
|-------------------|------------------|
| Spawns multiple outputs | **Fan-out** (radial arrows from center) |
| Combines inputs into one | **Convergence** (funnel, arrows merging) |
| Has hierarchy/nesting | **Tree** (lines + free-floating text) |
| Is a sequence of steps | **Timeline** (line + dots + free-floating labels) |
| Loops or improves continuously | **Spiral/Cycle** (arrow returning to start) |
| Is an abstract state or context | **Cloud** (overlapping ellipses) |
| Transforms input to output | **Assembly line** (before → process → after) |
| Compares two things | **Side-by-side** (parallel with contrast) |
| Separates into phases | **Gap/Break** (visual separation between sections) |
### Step 3: Ensure Variety
For multi-concept diagrams: **each major concept must use a different visual pattern**. No uniform cards or grids.
### Step 4: Sketch the Flow
Before JSON, mentally trace how the eye moves through the diagram. There should be a clear visual story.
### Step 5: Generate JSON
Only now create the Excalidraw elements. **See below for how to handle large diagrams.**
### Step 6: Render & Validate (MANDATORY)
After generating the JSON, you MUST run the render-view-fix loop until the diagram looks right. This is not optional — see the **Render & Validate** section below for the full process.
---
## Large / Comprehensive Diagram Strategy
**For comprehensive or technical diagrams, you MUST build the JSON one section at a time.** Do NOT attempt to generate the entire file in a single pass. This is a hard constraint — output token limits mean a comprehensive diagram easily exceeds capacity in one shot. Even if it didn't, generating everything at once leads to worse quality. Section-by-section is better in every way.
### The Section-by-Section Workflow
**Phase 1: Build each section**
1. **Create the base file** with the JSON wrapper (`type`, `version`, `appState`, `files`) and the first section of elements.
2. **Add one section per edit.** Each section gets its own dedicated pass — take your time with it. Think carefully about the layout, spacing, and how this section connects to what's already there.
3. **Use descriptive string IDs** (e.g., `"trigger_rect"`, `"arrow_fan_left"`) so cross-section references are readable.
4. **Namespace seeds by section** (e.g., section 1 uses 100xxx, section 2 uses 200xxx) to avoid collisions.
5. **Update cross-section bindings** as you go. When a new section's element needs to bind to an element from a previous section (e.g., an arrow connecting sections), edit the earlier element's `boundElements` array at the same time.
**Phase 2: Review the whole**
After all sections are in place, read through the complete JSON and check:
- Are cross-section arrows bound correctly on both ends?
- Is the overall spacing balanced, or are some sections cramped while others have too much whitespace?
- Do IDs and bindings all reference elements that actually exist?
Fix any alignment or binding issues before rendering.
**Phase 3: Render & validate**
Now run the render-view-fix loop from the Render & Validate section. This is where you'll catch visual issues that aren't obvious from JSON — overlaps, clipping, imbalanced composition.
### Section Boundaries
Plan your sections around natural visual groupings from the diagram plan. A typical large diagram might split into:
- **Section 1**: Entry point / trigger
- **Section 2**: First decision or routing
- **Section 3**: Main content (hero section — may be the largest single section)
- **Section 4-N**: Remaining phases, outputs, etc.
Each section should be independently understandable: its elements, internal arrows, and any cross-references to adjacent sections.
### What NOT to Do
- **Don't generate the entire diagram in one response.** You will hit the output token limit and produce truncated, broken JSON. Even if the diagram is small enough to fit, splitting into sections produces better results.
- **Don't write a Python generator script.** The templating and coordinate math seem helpful but introduce a layer of indirection that makes debugging harder. Hand-crafted JSON with descriptive IDs is more maintainable.
---
## Visual Pattern Library
### Fan-Out (One-to-Many)
Central element with arrows radiating to multiple targets. Use for: sources, PRDs, root causes, central hubs.
```
□ → ○
```
### Convergence (Many-to-One)
Multiple inputs merging through arrows to single output. Use for: aggregation, funnels, synthesis.
```
○ ↘
○ → □
○ ↗
```
### Tree (Hierarchy)
Parent-child branching with connecting lines and free-floating text (no boxes needed). Use for: file systems, org charts, taxonomies.
```
label
├── label
│ ├── label
│ └── label
└── label
```
Use `line` elements for the trunk and branches, free-floating text for labels.
### Spiral/Cycle (Continuous Loop)
Elements in sequence with arrow returning to start. Use for: feedback loops, iterative processes, evolution.
```
□ → □
↑ ↓
□ ← □
```
### Cloud (Abstract State)
Overlapping ellipses with varied sizes. Use for: context, memory, conversations, mental states.
### Assembly Line (Transformation)
Input → Process Box → Output with clear before/after. Use for: transformations, processing, conversion.
```
○○○ → [PROCESS] → □□□
chaos order
```
### Side-by-Side (Comparison)
Two parallel structures with visual contrast. Use for: before/after, options, trade-offs.
### Gap/Break (Separation)
Visual whitespace or barrier between sections. Use for: phase changes, context resets, boundaries.
### Lines as Structure
Use lines (type: `line`, not arrows) as primary structural elements instead of boxes:
- **Timelines**: Vertical or horizontal line with small dots (10-20px ellipses) at intervals, free-floating labels beside each dot
- **Tree structures**: Vertical trunk line + horizontal branch lines, with free-floating text labels (no boxes needed)
- **Dividers**: Thin dashed lines to separate sections
- **Flow spines**: A central line that elements relate to, rather than connecting boxes
```
Timeline: Tree:
●─── Label 1 │
│ ├── item
●─── Label 2 │ ├── sub
│ │ └── sub
●─── Label 3 └── item
```
Lines + free-floating text often creates a cleaner result than boxes + contained text.
---
## Shape Meaning
Choose shape based on what it represents—or use no shape at all:
| Concept Type | Shape | Why |
|--------------|-------|-----|
| Labels, descriptions, details | **none** (free-floating text) | Typography creates hierarchy |
| Section titles, annotations | **none** (free-floating text) | Font size/weight is enough |
| Markers on a timeline | small `ellipse` (10-20px) | Visual anchor, not container |
| Start, trigger, input | `ellipse` | Soft, origin-like |
| End, output, result | `ellipse` | Completion, destination |
| Decision, condition | `diamond` | Classic decision symbol |
| Process, action, step | `rectangle` | Contained action |
| Abstract state, context | overlapping `ellipse` | Fuzzy, cloud-like |
| Hierarchy node | lines + text (no boxes) | Structure through lines |
**Rule**: Default to no container. Add shapes only when they carry meaning. Aim for <30% of text elements to be inside containers.
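The <30% container rule is easy to check mechanically once the elements list exists. A minimal sketch (not part of the skill's tooling, just a scratch check) that computes the fraction of text elements bound inside a container:

```python
# What fraction of text elements live inside a container?
# Free-floating text has containerId == None.
def container_ratio(elements):
    texts = [e for e in elements if e.get("type") == "text"]
    if not texts:
        return 0.0
    contained = [t for t in texts if t.get("containerId")]
    return len(contained) / len(texts)

elements = [
    {"type": "text", "containerId": None},     # free-floating title
    {"type": "text", "containerId": None},     # free-floating label
    {"type": "text", "containerId": "elem1"},  # label inside a box
    {"type": "rectangle"},                     # shapes don't count
]
print(container_ratio(elements))  # 1 of 3 text elements is contained
```

If the ratio comes back above 0.3, convert some boxed labels to free-floating text.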
---
## Color as Meaning
Colors encode information, not decoration. Every color choice should come from `references/color-palette.md` — the semantic shape colors, text hierarchy colors, and evidence artifact colors are all defined there.
**Key principles:**
- Each semantic purpose (start, end, decision, AI, error, etc.) has a specific fill/stroke pair
- Free-floating text uses color for hierarchy (titles, subtitles, details — each at a different level)
- Evidence artifacts (code snippets, JSON examples) use their own dark background + colored text scheme
- Always pair a darker stroke with a lighter fill for contrast
**Do not invent new colors.** If a concept doesn't fit an existing semantic category, use Primary/Neutral or Secondary.
---
## Modern Aesthetics
For clean, professional diagrams:
### Roughness
- `roughness: 0` — Clean, crisp edges. Use for modern/technical diagrams.
- `roughness: 1` — Hand-drawn, organic feel. Use for brainstorming/informal diagrams.
**Default to 0** for most professional use cases.
### Stroke Width
- `strokeWidth: 1` — Thin, elegant. Good for lines, dividers, subtle connections.
- `strokeWidth: 2` — Standard. Good for shapes and primary arrows.
- `strokeWidth: 3` — Bold. Use sparingly for emphasis (main flow line, key connections).
### Opacity
**Always use `opacity: 100` for all elements.** Use color, size, and stroke width to create hierarchy instead of transparency.
### Small Markers Instead of Shapes
Instead of full shapes, use small dots (10-20px ellipses) as:
- Timeline markers
- Bullet points
- Connection nodes
- Visual anchors for free-floating text
---
## Layout Principles
### Hierarchy Through Scale
- **Hero**: 300×150 - visual anchor, most important
- **Primary**: 180×90
- **Secondary**: 120×60
- **Small**: 60×40
### Whitespace = Importance
The most important element has the most empty space around it (200px+).
### Flow Direction
Guide the eye: typically left→right or top→bottom for sequences, radial for hub-and-spoke.
### Connections Required
Position alone doesn't show relationships. If A relates to B, there must be an arrow.
---
## Text Rules
**CRITICAL**: The JSON `text` property contains ONLY readable words.
```json
{
  "id": "myElement1",
  "text": "Start",
  "originalText": "Start"
}
```
Settings: `fontSize: 16`, `fontFamily: 3`, `textAlign: "center"`, `verticalAlign: "middle"`
---
## Arrow Edge Calculations
Arrows must start/end at shape edges, not centers:
| Edge | Formula |
|------|---------|
| Top | `(x + width/2, y)` |
| Bottom | `(x + width/2, y + height)` |
| Left | `(x, y + height/2)` |
| Right | `(x + width, y + height/2)` |
**Detailed arrow routing:** See `references/arrows.md`
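The edge formulas above reduce to one small lookup. A quick scratch calculation (a sketch for double-checking coordinates, not a generator script):

```python
# Compute the anchor point on a shape's edge, per the formula table above.
def edge_point(shape, edge):
    x, y, w, h = shape["x"], shape["y"], shape["width"], shape["height"]
    return {
        "top":    (x + w / 2, y),
        "bottom": (x + w / 2, y + h),
        "left":   (x, y + h / 2),
        "right":  (x + w, y + h / 2),
    }[edge]

box = {"x": 500, "y": 200, "width": 180, "height": 90}
print(edge_point(box, "bottom"))  # (590.0, 290.0)
```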
---
## Element Types
| Type | Use For |
|------|---------|
| `rectangle` | Services, databases, containers, orchestrators |
| `ellipse` | Users, external systems, start/end points |
| `text` | Labels inside shapes, titles, annotations |
| `arrow` | Data flow, connections, dependencies |
| `line` | Grouping boundaries, separators |
**Full JSON format:** See `references/json-format.md`
---
## Workflow
### Step 1: Analyze Codebase
Discover components by looking for:
| Codebase Type | What to Look For |
|---------------|------------------|
| Monorepo | `packages/*/package.json`, workspace configs |
| Microservices | `docker-compose.yml`, k8s manifests |
| IaC | Terraform/Pulumi resource definitions |
| Backend API | Route definitions, controllers, DB models |
| Frontend | Component hierarchy, API calls |
**Use tools:**
- `Glob` → `**/package.json`, `**/Dockerfile`, `**/*.tf`
- `Grep` → `app.get`, `@Controller`, `CREATE TABLE`
- `Read` → README, config files, entry points
### Step 2: Plan Layout
**Vertical flow (most common):**
```
Row 1: Users/Entry points (y: 100)
Row 2: Frontend/Gateway (y: 230)
Row 3: Orchestration (y: 380)
Row 4: Services (y: 530)
Row 5: Data layer (y: 680)
Columns: x = 100, 300, 500, 700, 900
Element size: 160-200px x 80-90px
```
**Other patterns:** See `references/examples.md`
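The row/column plan above is just a coordinate grid. A scratch check to keep the math consistent while placing elements (values are the ones from the plan; this is a sanity check, not a diagram generator):

```python
# Grid coordinates from the vertical-flow layout plan.
rows = {1: 100, 2: 230, 3: 380, 4: 530, 5: 680}  # y per row
cols = [100, 300, 500, 700, 900]                  # x per column

def cell(row, col):
    """Top-left coordinate for an element at (row, col)."""
    return {"x": cols[col], "y": rows[row]}

print(cell(3, 2))  # {'x': 500, 'y': 380}
```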
### Step 3: Generate Elements
For each component:
1. Create shape with unique `id`
2. Add `boundElements` referencing text
3. Create text with `containerId`
4. Choose color based on type
**Color palettes:** See `references/colors.md`
### Step 4: Add Connections
For each relationship:
1. Calculate source edge point
2. Plan elbow route (avoid overlaps)
3. Create arrow with `points` array
4. Match stroke color to destination type
**Arrow patterns:** See `references/arrows.md`
### Step 5: Add Grouping (Optional)
For logical groupings:
- Large transparent rectangle with `strokeStyle: "dashed"`
- Standalone text label at top-left
### Step 6: Validate and Write
Run validation before writing. Save to `docs/` or user-specified path.
**Validation checklist:** See `references/validation.md`
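A minimal sketch of the kind of pre-write check Step 6 describes: duplicate IDs and text-to-container bindings. The function and field names mirror the Excalidraw JSON shown in this skill; the script itself is illustrative, not the actual validation tooling:

```python
# Check two common failure modes before writing the file:
# duplicate element ids, and text elements whose container
# doesn't list them back in boundElements.
def validate(elements):
    errors = []
    ids = [e["id"] for e in elements]
    dupes = {i for i in ids if ids.count(i) > 1}
    if dupes:
        errors.append(f"duplicate ids: {sorted(dupes)}")
    by_id = {e["id"]: e for e in elements}
    for e in elements:
        if e.get("type") == "text" and e.get("containerId"):
            container = by_id.get(e["containerId"])
            bound = (container or {}).get("boundElements") or []
            if not any(b.get("id") == e["id"] for b in bound):
                errors.append(f"text {e['id']} not in container boundElements")
    return errors

elements = [
    {"id": "box1", "type": "rectangle",
     "boundElements": [{"type": "text", "id": "box1-text"}]},
    {"id": "box1-text", "type": "text", "containerId": "box1"},
]
print(validate(elements))  # []
```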
---
## Quick Arrow Reference
**Straight down:**
```json
{ "points": [[0, 0], [0, 110]], "x": 590, "y": 290 }
```
**L-shape (left then down):**
```json
{ "points": [[0, 0], [-325, 0], [-325, 125]], "x": 525, "y": 420 }
```
**U-turn (callback):**
```json
{ "points": [[0, 0], [50, 0], [50, -125], [20, -125]], "x": 710, "y": 440 }
```
**Arrow width/height** = bounding box of points:
```
points [[0,0], [-440,0], [-440,70]] → width=440, height=70
```
**Multiple arrows from same edge** - stagger positions:
```
5 arrows: 20%, 35%, 50%, 65%, 80% across edge width
```
---
## JSON Structure
```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "https://excalidraw.com",
  "elements": [...],
  "appState": {
    "viewBackgroundColor": "#ffffff",
    "gridSize": 20
  },
  "files": {}
}
```
---
## Element Templates
See `references/element-templates.md` for copy-paste JSON templates for each element type (text, line, dot, rectangle, arrow). Pull colors from `references/color-palette.md` based on each element's semantic purpose.
---
## Default Color Palette
| Component | Background | Stroke |
|-----------|------------|--------|
| Frontend | `#a5d8ff` | `#1971c2` |
| Backend/API | `#d0bfff` | `#7048e8` |
| Database | `#b2f2bb` | `#2f9e44` |
| Storage | `#ffec99` | `#f08c00` |
| AI/ML | `#e599f7` | `#9c36b5` |
| External APIs | `#ffc9c9` | `#e03131` |
| Orchestration | `#ffa8a8` | `#c92a2a` |
| Message Queue | `#fff3bf` | `#fab005` |
| Cache | `#ffe8cc` | `#fd7e14` |
| Users | `#e7f5ff` | `#1971c2` |
**Cloud-specific palettes:** See `references/colors.md`
---
## Render & Validate (MANDATORY)
You cannot judge a diagram from JSON alone. After generating or editing the Excalidraw JSON, you MUST render it to PNG, view the image, and fix what you see — in a loop until it's right. This is a core part of the workflow, not a final check.
### How to Render
Run the render script from the skill's `references/` directory:
```bash
python3 <skill-references-dir>/render_excalidraw.py <path-to-file.excalidraw>
```
This outputs a PNG next to the `.excalidraw` file. Then use the **Read tool** on the PNG to actually view it.
### The Loop
After generating the initial JSON, run this cycle:
**1. Render & View** — Run the render script, then Read the PNG.
**2. Audit against your original vision** — Before looking for bugs, compare the rendered result to what you designed in Steps 1-4. Ask:
- Does the visual structure match the conceptual structure you planned?
- Does each section use the pattern you intended (fan-out, convergence, timeline, etc.)?
- Does the eye flow through the diagram in the order you designed?
- Is the visual hierarchy correct — hero elements dominant, supporting elements smaller?
- For technical diagrams: are the evidence artifacts (code snippets, data examples) readable and properly placed?
**3. Check for visual defects:**
- Text clipped by or overflowing its container
- Text or shapes overlapping other elements
- Arrows crossing through elements instead of routing around them
- Arrows landing on the wrong element or pointing into empty space
- Labels floating ambiguously (not clearly anchored to what they describe)
- Uneven spacing between elements that should be evenly spaced
- Sections with too much whitespace next to sections that are too cramped
- Text too small to read at the rendered size
- Overall composition feels lopsided or unbalanced
**4. Fix** — Edit the JSON to address everything you found. Common fixes:
- Widen containers when text is clipped
- Adjust `x`/`y` coordinates to fix spacing and alignment
- Add intermediate waypoints to arrow `points` arrays to route around elements
- Reposition labels closer to the element they describe
- Resize elements to rebalance visual weight across sections
**5. Re-render & re-view** — Run the render script again and Read the new PNG.
**6. Repeat** — Keep cycling until the diagram passes both the vision check (Step 2) and the defect check (Step 3). Typically takes 2-4 iterations. Don't stop after one pass just because there are no critical bugs — if the composition could be better, improve it.
### When to Stop
The loop is done when:
- The rendered diagram matches the conceptual design from your planning steps
- No text is clipped, overlapping, or unreadable
- Arrows route cleanly and connect to the right elements
- Spacing is consistent and the composition is balanced
- You'd be comfortable showing it to someone without caveats
---
## Quality Checklist
### Depth & Evidence (Check First for Technical Diagrams)
1. **Research done**: Did you look up actual specs, formats, event names?
2. **Evidence artifacts**: Are there code snippets, JSON examples, or real data?
3. **Multi-zoom**: Does it have summary flow + section boundaries + detail?
4. **Concrete over abstract**: Real content shown, not just labeled boxes?
5. **Educational value**: Could someone learn something concrete from this?
### Conceptual
6. **Isomorphism**: Does each visual structure mirror its concept's behavior?
7. **Argument**: Does the diagram SHOW something text alone couldn't?
8. **Variety**: Does each major concept use a different visual pattern?
9. **No uniform containers**: Avoided card grids and equal boxes?
### Container Discipline
10. **Minimal containers**: Could any boxed element work as free-floating text instead?
11. **Lines as structure**: Are tree/timeline patterns using lines + text rather than boxes?
12. **Typography hierarchy**: Are font size and color creating visual hierarchy (reducing need for boxes)?
### Structural
13. **Connections**: Every relationship has an arrow or line
14. **Flow**: Clear visual path for the eye to follow
15. **Hierarchy**: Important elements are larger/more isolated
### Technical
16. **Text clean**: `text` contains only readable words
17. **Font**: `fontFamily: 3`
18. **Roughness**: `roughness: 0` for clean/modern (unless hand-drawn style requested)
19. **Opacity**: `opacity: 100` for all elements (no transparency)
20. **Container ratio**: <30% of text elements should be inside containers
### Visual Validation (Render Required)
21. **Rendered to PNG**: Diagram has been rendered and visually inspected
22. **No text overflow**: All text fits within its container
23. **No overlapping elements**: Shapes and text don't overlap unintentionally
24. **Even spacing**: Similar elements have consistent spacing
25. **Arrows land correctly**: Arrows connect to intended elements without crossing others
26. **Readable at export size**: Text is legible in the rendered PNG
27. **Balanced composition**: No large empty voids or overcrowded regions
---
## Quick Validation Checklist
Before writing the file:
- [ ] Every shape with a label has boundElements + a text element
- [ ] Text elements have containerId matching the shape
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`
- [ ] Arrow x,y = source shape edge point
- [ ] Arrow final point offset reaches target edge
- [ ] No diamond shapes
- [ ] No duplicate IDs
**Full validation algorithm:** See `references/validation.md`
---
## Common Issues
| Issue | Fix |
|-------|-----|
| Labels don't appear | Use TWO elements (shape + text), not `label` property |
| Arrows curved | Add `elbowed: true`, `roundness: null`, `roughness: 0` |
| Arrows floating | Calculate x,y from shape edge, not center |
| Arrows overlapping | Stagger start positions across edge |
**Detailed bug fixes:** See `references/validation.md`
---
## Reference Files
| File | Contents |
|------|----------|
| `references/json-format.md` | Element types, required properties, text bindings |
| `references/arrows.md` | Routing algorithm, patterns, bindings, staggering |
| `references/colors.md` | Default, AWS, Azure, GCP, K8s palettes |
| `references/examples.md` | Complete JSON examples, layout patterns |
| `references/validation.md` | Checklists, validation algorithm, bug fixes |
---
## Output
- **Location:** `docs/architecture/` or user-specified
- **Filename:** Descriptive, e.g., `system-architecture.excalidraw`
- **Testing:** Open in https://excalidraw.com or the VS Code extension

# Arrow Routing Reference
Complete guide for creating elbow arrows with proper connections.
---
## Critical: Elbow Arrow Properties
Three required properties for 90-degree corners:
```json
{
"type": "arrow",
"roughness": 0, // Clean lines
"roundness": null, // Sharp corners (not curved)
"elbowed": true // Enables elbow mode
}
```
**Without these, arrows will be curved, not 90-degree elbows.**
---
## Edge Calculation Formulas
| Shape Type | Edge | Formula |
|------------|------|---------|
| Rectangle | Top | `(x + width/2, y)` |
| Rectangle | Bottom | `(x + width/2, y + height)` |
| Rectangle | Left | `(x, y + height/2)` |
| Rectangle | Right | `(x + width, y + height/2)` |
| Ellipse | Top | `(x + width/2, y)` |
| Ellipse | Bottom | `(x + width/2, y + height)` |
---
## Universal Arrow Routing Algorithm
```
FUNCTION createArrow(source, target, sourceEdge, targetEdge):
// Step 1: Get source edge point
sourcePoint = getEdgePoint(source, sourceEdge)
// Step 2: Get target edge point
targetPoint = getEdgePoint(target, targetEdge)
// Step 3: Calculate offsets
dx = targetPoint.x - sourcePoint.x
dy = targetPoint.y - sourcePoint.y
// Step 4: Determine routing pattern
IF sourceEdge == "bottom" AND targetEdge == "top":
IF abs(dx) < 10: // Nearly aligned
points = [[0, 0], [0, dy]]
ELSE: // Need L-shape
points = [[0, 0], [dx, 0], [dx, dy]]
ELSE IF sourceEdge == "right" AND targetEdge == "left":
IF abs(dy) < 10:
points = [[0, 0], [dx, 0]]
ELSE:
points = [[0, 0], [0, dy], [dx, dy]]
ELSE IF sourceEdge == targetEdge: // U-turn
clearance = 50
IF sourceEdge == "right":
points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
ELSE IF sourceEdge == "bottom":
points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
// Step 5: Calculate bounding box
width = max(abs(p[0]) for p in points)
height = max(abs(p[1]) for p in points)
RETURN {x: sourcePoint.x, y: sourcePoint.y, points, width, height}
FUNCTION getEdgePoint(shape, edge):
SWITCH edge:
"top": RETURN (shape.x + shape.width/2, shape.y)
"bottom": RETURN (shape.x + shape.width/2, shape.y + shape.height)
"left": RETURN (shape.x, shape.y + shape.height/2)
"right": RETURN (shape.x + shape.width, shape.y + shape.height/2)
```
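The pseudocode above can be run directly as Python to double-check the numbers. This sketch covers the same cases with the same 50px clearance (unsupported edge pairs raise, since the reference only defines these routes):

```python
# Executable version of the routing algorithm above.
def edge_point(shape, edge):
    x, y, w, h = shape["x"], shape["y"], shape["width"], shape["height"]
    return {
        "top":    (x + w / 2, y),
        "bottom": (x + w / 2, y + h),
        "left":   (x, y + h / 2),
        "right":  (x + w, y + h / 2),
    }[edge]

def create_arrow(source, target, source_edge, target_edge, clearance=50):
    sx, sy = edge_point(source, source_edge)
    tx, ty = edge_point(target, target_edge)
    dx, dy = tx - sx, ty - sy
    if source_edge == "bottom" and target_edge == "top":
        # Straight drop when nearly aligned, otherwise L-shape.
        points = [[0, 0], [0, dy]] if abs(dx) < 10 else [[0, 0], [dx, 0], [dx, dy]]
    elif source_edge == "right" and target_edge == "left":
        points = [[0, 0], [dx, 0]] if abs(dy) < 10 else [[0, 0], [0, dy], [dx, dy]]
    elif source_edge == target_edge == "right":   # U-turn
        points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
    elif source_edge == target_edge == "bottom":  # U-turn
        points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
    else:
        raise ValueError("edge pair not covered by this sketch")
    width = max(abs(p[0]) for p in points)
    height = max(abs(p[1]) for p in points)
    return {"x": sx, "y": sy, "points": points, "width": width, "height": height}

# Reproduces the vertical worked example below: starts at (590, 290),
# straight drop of 110px.
a = create_arrow({"x": 500, "y": 200, "width": 180, "height": 90},
                 {"x": 500, "y": 400, "width": 180, "height": 90},
                 "bottom", "top")
print(a["x"], a["y"], a["points"])  # 590.0 290.0 [[0, 0], [0, 110.0]]
```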
---
## Arrow Patterns Reference
| Pattern | Points | Use Case |
|---------|--------|----------|
| Down | `[[0,0], [0,h]]` | Vertical connection |
| Right | `[[0,0], [w,0]]` | Horizontal connection |
| L-left-down | `[[0,0], [-w,0], [-w,h]]` | Go left, then down |
| L-right-down | `[[0,0], [w,0], [w,h]]` | Go right, then down |
| L-down-left | `[[0,0], [0,h], [-w,h]]` | Go down, then left |
| L-down-right | `[[0,0], [0,h], [w,h]]` | Go down, then right |
| S-shape | `[[0,0], [0,h1], [w,h1], [w,h2]]` | Navigate around obstacles |
| U-turn | `[[0,0], [w,0], [w,-h], [0,-h]]` | Callback/return arrows |
---
## Worked Examples
### Vertical Connection (Bottom to Top)
```
Source: x=500, y=200, width=180, height=90
Target: x=500, y=400, width=180, height=90
source_bottom = (500 + 180/2, 200 + 90) = (590, 290)
target_top = (500 + 180/2, 400) = (590, 400)
Arrow x = 590, y = 290
Distance = 400 - 290 = 110
Points = [[0, 0], [0, 110]]
```
### Fan-out (One to Many)
```
Orchestrator: x=570, y=400, width=140, height=80
Target: x=120, y=550, width=160, height=80
orchestrator_bottom = (570 + 140/2, 400 + 80) = (640, 480)
target_top = (120 + 160/2, 550) = (200, 550)
Arrow x = 640, y = 480
Horizontal offset = 200 - 640 = -440
Vertical offset = 550 - 480 = 70
Points = [[0, 0], [-440, 0], [-440, 70]] // Left first, then down
```
### U-turn (Callback)
```
Source: x=570, y=400, width=140, height=80
Target: x=550, y=270, width=180, height=90
Connection: Right of source -> Right of target
source_right = (570 + 140, 400 + 80/2) = (710, 440)
target_right = (550 + 180, 270 + 90/2) = (730, 315)
Arrow x = 710, y = 440
Vertical distance = 315 - 440 = -125
Final x offset = 730 - 710 = 20
Points = [[0, 0], [50, 0], [50, -125], [20, -125]]
// Right 50px (clearance), up 125px, left 30px
```
---
## Staggering Multiple Arrows
When N arrows leave from same edge, spread evenly:
```
FUNCTION getStaggeredPositions(shape, edge, numArrows):
positions = []
FOR i FROM 0 TO numArrows-1:
percentage = 0.2 + (0.6 * i / (numArrows - 1))
IF edge == "bottom" OR edge == "top":
x = shape.x + shape.width * percentage
y = (edge == "bottom") ? shape.y + shape.height : shape.y
ELSE:
x = (edge == "right") ? shape.x + shape.width : shape.x
y = shape.y + shape.height * percentage
positions.append({x, y})
RETURN positions
// Examples:
// 2 arrows: 20%, 80%
// 3 arrows: 20%, 50%, 80%
// 5 arrows: 20%, 35%, 50%, 65%, 80%
```
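The same spread as Python, for checking anchor positions numerically. One addition over the pseudocode: a guard for the single-arrow case, which would otherwise divide by zero (the `0.5` center default is my choice, not from the reference):

```python
# Spread n arrow anchor points from 20% to 80% along an edge.
def staggered_positions(shape, edge, n):
    positions = []
    for i in range(n):
        pct = 0.5 if n == 1 else 0.2 + 0.6 * i / (n - 1)  # guard n == 1
        if edge in ("top", "bottom"):
            x = shape["x"] + shape["width"] * pct
            y = shape["y"] + (shape["height"] if edge == "bottom" else 0)
        else:
            x = shape["x"] + (shape["width"] if edge == "right" else 0)
            y = shape["y"] + shape["height"] * pct
        positions.append((x, y))
    return positions

box = {"x": 0, "y": 0, "width": 100, "height": 50}
print([round(x, 2) for x, _ in staggered_positions(box, "bottom", 5)])
# [20.0, 35.0, 50.0, 65.0, 80.0]
```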
---
## Arrow Bindings
For better visual attachment, use `startBinding` and `endBinding`:
```json
{
"id": "arrow-workflow-convert",
"type": "arrow",
"x": 525,
"y": 420,
"width": 325,
"height": 125,
"points": [[0, 0], [-325, 0], [-325, 125]],
"roughness": 0,
"roundness": null,
"elbowed": true,
"startBinding": {
"elementId": "cloud-workflows",
"focus": 0,
"gap": 1,
"fixedPoint": [0.5, 1]
},
"endBinding": {
"elementId": "convert-pdf-service",
"focus": 0,
"gap": 1,
"fixedPoint": [0.5, 0]
},
"startArrowhead": null,
"endArrowhead": "arrow"
}
```
### fixedPoint Values
- Top center: `[0.5, 0]`
- Bottom center: `[0.5, 1]`
- Left center: `[0, 0.5]`
- Right center: `[1, 0.5]`
### Update Shape boundElements
```json
{
"id": "cloud-workflows",
"boundElements": [
{ "type": "text", "id": "cloud-workflows-text" },
{ "type": "arrow", "id": "arrow-workflow-convert" }
]
}
```
---
## Bidirectional Arrows
For two-way data flows:
```json
{
"type": "arrow",
"startArrowhead": "arrow",
"endArrowhead": "arrow"
}
```
Arrowhead options: `null`, `"arrow"`, `"bar"`, `"dot"`, `"triangle"`
---
## Arrow Labels
Position standalone text near arrow midpoint:
```json
{
"id": "arrow-api-db-label",
"type": "text",
"x": 305, // Arrow x + offset
"y": 245, // Arrow midpoint
"text": "SQL",
"fontSize": 12,
"containerId": null,
"backgroundColor": "#ffffff"
}
```
**Positioning formula:**
- Vertical: `label.y = arrow.y + (total_height / 2)`
- Horizontal: `label.x = arrow.x + (total_width / 2)`
- L-shaped: Position at corner or longest segment midpoint
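Applying the vertical formula to the SQL label example above, and assuming that arrow starts at (290, 190) with a 110px vertical run (the example doesn't state its origin, so those numbers are back-derived for illustration):

```python
# Vertical case: label sits at arrow x plus a small offset,
# vertically at the midpoint of the arrow's run.
def label_position(arrow, x_offset=15):
    height = max(abs(p[1]) for p in arrow["points"])
    return {"x": arrow["x"] + x_offset, "y": arrow["y"] + height / 2}

arrow = {"x": 290, "y": 190, "points": [[0, 0], [0, 110]]}
print(label_position(arrow))  # {'x': 305, 'y': 245.0}
```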
---
## Width/Height Calculation
Arrow `width` and `height` = bounding box of path:
```
points = [[0, 0], [-440, 0], [-440, 70]]
width = abs(-440) = 440
height = abs(70) = 70
points = [[0, 0], [50, 0], [50, -125], [20, -125]]
width = max(abs(50), abs(20)) = 50
height = abs(-125) = 125
```
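The max-absolute-offset rule above, as a one-liner check:

```python
# Width/height as computed above: max absolute offset per axis,
# valid because every points array starts at [0, 0].
def arrow_bbox(points):
    width = max(abs(p[0]) for p in points)
    height = max(abs(p[1]) for p in points)
    return width, height

print(arrow_bbox([[0, 0], [-440, 0], [-440, 70]]))            # (440, 70)
print(arrow_bbox([[0, 0], [50, 0], [50, -125], [20, -125]]))  # (50, 125)
```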

# Color Palette & Brand Style
**This is the single source of truth for all colors and brand-specific styles.** To customize diagrams for your own brand, edit this file — everything else in the skill is universal.
---
## Shape Colors (Semantic)
Colors encode meaning, not decoration. Each semantic purpose has a fill/stroke pair.
| Semantic Purpose | Fill | Stroke |
|------------------|------|--------|
| Primary/Neutral | `#3b82f6` | `#1e3a5f` |
| Secondary | `#60a5fa` | `#1e3a5f` |
| Tertiary | `#93c5fd` | `#1e3a5f` |
| Start/Trigger | `#fed7aa` | `#c2410c` |
| End/Success | `#a7f3d0` | `#047857` |
| Warning/Reset | `#fee2e2` | `#dc2626` |
| Decision | `#fef3c7` | `#b45309` |
| AI/LLM | `#ddd6fe` | `#6d28d9` |
| Inactive/Disabled | `#dbeafe` | `#1e40af` (use dashed stroke) |
| Error | `#fecaca` | `#b91c1c` |
**Rule**: Always pair a darker stroke with a lighter fill for contrast.
---
## Text Colors (Hierarchy)
Use color on free-floating text to create visual hierarchy without containers.
| Level | Color | Use For |
|-------|-------|---------|
| Title | `#1e40af` | Section headings, major labels |
| Subtitle | `#3b82f6` | Subheadings, secondary labels |
| Body/Detail | `#64748b` | Descriptions, annotations, metadata |
| On light fills | `#374151` | Text inside light-colored shapes |
| On dark fills | `#ffffff` | Text inside dark-colored shapes |
---
## Evidence Artifact Colors
Used for code snippets, data examples, and other concrete evidence inside technical diagrams.
| Artifact | Background | Text Color |
|----------|-----------|------------|
| Code snippet | `#1e293b` | Syntax-colored (language-appropriate) |
| JSON/data example | `#1e293b` | `#22c55e` (green) |
---
## Default Stroke & Line Colors
| Element | Color |
|---------|-------|
| Arrows | Use the stroke color of the source element's semantic purpose |
| Structural lines (dividers, trees, timelines) | Primary stroke (`#1e3a5f`) or Slate (`#64748b`) |
| Marker dots (fill + stroke) | Primary fill (`#3b82f6`) |
---
## Background
| Property | Value |
|----------|-------|
| Canvas background | `#ffffff` |

# Color Palettes Reference
Color schemes for different platforms and component types.
---
## Default Palette (Platform-Agnostic)
| Component Type | Background | Stroke | Example |
|----------------|------------|--------|---------|
| Frontend/UI | `#a5d8ff` | `#1971c2` | Next.js, React apps |
| Backend/API | `#d0bfff` | `#7048e8` | API servers, processors |
| Database | `#b2f2bb` | `#2f9e44` | PostgreSQL, MySQL, MongoDB |
| Storage | `#ffec99` | `#f08c00` | Object storage, file systems |
| AI/ML Services | `#e599f7` | `#9c36b5` | ML models, AI APIs |
| External APIs | `#ffc9c9` | `#e03131` | Third-party services |
| Orchestration | `#ffa8a8` | `#c92a2a` | Workflows, schedulers |
| Validation | `#ffd8a8` | `#e8590c` | Validators, checkers |
| Network/Security | `#dee2e6` | `#495057` | VPC, IAM, firewalls |
| Classification | `#99e9f2` | `#0c8599` | Routers, classifiers |
| Users/Actors | `#e7f5ff` | `#1971c2` | User ellipses |
| Message Queue | `#fff3bf` | `#fab005` | Kafka, RabbitMQ, SQS |
| Cache | `#ffe8cc` | `#fd7e14` | Redis, Memcached |
| Monitoring | `#d3f9d8` | `#40c057` | Prometheus, Grafana |
---
## AWS Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute (EC2, Lambda, ECS) | `#ff9900` | `#cc7a00` |
| Storage (S3, EBS) | `#3f8624` | `#2d6119` |
| Database (RDS, DynamoDB) | `#3b48cc` | `#2d3899` |
| Networking (VPC, Route53) | `#8c4fff` | `#6b3dcc` |
| Security (IAM, KMS) | `#dd344c` | `#b12a3d` |
| Analytics (Kinesis, Athena) | `#8c4fff` | `#6b3dcc` |
| ML (SageMaker, Bedrock) | `#01a88d` | `#017d69` |
---
## Azure Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute | `#0078d4` | `#005a9e` |
| Storage | `#50e6ff` | `#3cb5cc` |
| Database | `#0078d4` | `#005a9e` |
| Networking | `#773adc` | `#5a2ca8` |
| Security | `#ff8c00` | `#cc7000` |
| AI/ML | `#50e6ff` | `#3cb5cc` |
---
## GCP Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute (GCE, Cloud Run) | `#4285f4` | `#3367d6` |
| Storage (GCS) | `#34a853` | `#2d8e47` |
| Database (Cloud SQL, Firestore) | `#ea4335` | `#c53929` |
| Networking | `#fbbc04` | `#d99e04` |
| AI/ML (Vertex AI) | `#9334e6` | `#7627b8` |
---
## Kubernetes Palette
| Component | Background | Stroke |
|-----------|------------|--------|
| Pod | `#326ce5` | `#2756b8` |
| Service | `#326ce5` | `#2756b8` |
| Deployment | `#326ce5` | `#2756b8` |
| ConfigMap/Secret | `#7f8c8d` | `#626d6e` |
| Ingress | `#00d4aa` | `#00a888` |
| Node | `#303030` | `#1a1a1a` |
| Namespace | `#f0f0f0` | `#c0c0c0` (dashed) |
---
## Diagram Type Suggestions
| Diagram Type | Recommended Layout | Key Elements |
|--------------|-------------------|--------------|
| Microservices | Vertical flow | Services, databases, queues, API gateway |
| Data Pipeline | Horizontal flow | Sources, transformers, sinks, storage |
| Event-Driven | Hub-and-spoke | Event bus center, producers/consumers |
| Kubernetes | Layered groups | Namespace boxes, pods inside deployments |
| CI/CD | Horizontal flow | Source -> Build -> Test -> Deploy -> Monitor |
| Network | Hierarchical | Internet -> LB -> VPC -> Subnets -> Instances |
| User Flow | Swimlanes | User actions, system responses, external calls |

# Element Templates
Copy-paste JSON templates for each Excalidraw element type. The `strokeColor` and `backgroundColor` values are placeholders — always pull actual colors from `color-palette.md` based on the element's semantic purpose.
## Free-Floating Text (no container)
```json
{
"type": "text",
"id": "label1",
"x": 100, "y": 100,
"width": 200, "height": 25,
"text": "Section Title",
"originalText": "Section Title",
"fontSize": 20,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"strokeColor": "<title color from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 11111,
"version": 1,
"versionNonce": 22222,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"containerId": null,
"lineHeight": 1.25
}
```
## Line (structural, not arrow)
```json
{
"type": "line",
"id": "line1",
"x": 100, "y": 100,
"width": 0, "height": 200,
"strokeColor": "<structural line color from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 44444,
"version": 1,
"versionNonce": 55555,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"points": [[0, 0], [0, 200]]
}
```
## Small Marker Dot
```json
{
"type": "ellipse",
"id": "dot1",
"x": 94, "y": 94,
"width": 12, "height": 12,
"strokeColor": "<marker dot color from palette>",
"backgroundColor": "<marker dot color from palette>",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 66666,
"version": 1,
"versionNonce": 77777,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false
}
```
## Rectangle
```json
{
"type": "rectangle",
"id": "elem1",
"x": 100, "y": 100, "width": 180, "height": 90,
"strokeColor": "<stroke from palette based on semantic purpose>",
"backgroundColor": "<fill from palette based on semantic purpose>",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 12345,
"version": 1,
"versionNonce": 67890,
"isDeleted": false,
"groupIds": [],
"boundElements": [{"id": "text1", "type": "text"}],
"link": null,
"locked": false,
"roundness": {"type": 3}
}
```
## Text (centered in shape)
```json
{
"type": "text",
"id": "text1",
"x": 130, "y": 132,
"width": 120, "height": 25,
"text": "Process",
"originalText": "Process",
"fontSize": 16,
"fontFamily": 3,
"textAlign": "center",
"verticalAlign": "middle",
"strokeColor": "<text color — match parent shape's stroke or use 'on light/dark fills' from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 11111,
"version": 1,
"versionNonce": 22222,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"containerId": "elem1",
"lineHeight": 1.25
}
```
## Arrow
```json
{
"type": "arrow",
"id": "arrow1",
"x": 282, "y": 145, "width": 118, "height": 0,
"strokeColor": "<arrow color — typically matches source element's stroke from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 33333,
"version": 1,
"versionNonce": 44444,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"points": [[0, 0], [118, 0]],
"startBinding": {"elementId": "elem1", "focus": 0, "gap": 2},
"endBinding": {"elementId": "elem2", "focus": 0, "gap": 2},
"startArrowhead": null,
"endArrowhead": "arrow"
}
```
For curves: use 3+ points in `points` array.

# Complete Examples Reference
Full JSON examples showing proper element structure.
---
## 3-Tier Architecture Example
This is a REFERENCE showing JSON structure. Replace IDs, labels, positions, and colors based on discovered components.
```json
{
"type": "excalidraw",
"version": 2,
"source": "claude-code-excalidraw-skill",
"elements": [
{
"id": "user",
"type": "ellipse",
"x": 150,
"y": 50,
"width": 100,
"height": 60,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#e7f5ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 2 },
"seed": 1,
"version": 1,
"versionNonce": 1,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "user-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "user-text",
"type": "text",
"x": 175,
"y": 67,
"width": 50,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 2,
"version": 1,
"versionNonce": 2,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "User",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "user",
"originalText": "User",
"lineHeight": 1.25
},
{
"id": "frontend",
"type": "rectangle",
"x": 100,
"y": 180,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 3,
"version": 1,
"versionNonce": 3,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "frontend-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "frontend-text",
"type": "text",
"x": 105,
"y": 195,
"width": 190,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 4,
"version": 1,
"versionNonce": 4,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "Frontend\nNext.js",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "frontend",
"originalText": "Frontend\nNext.js",
"lineHeight": 1.25
},
{
"id": "database",
"type": "rectangle",
"x": 100,
"y": 330,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#2f9e44",
"backgroundColor": "#b2f2bb",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 5,
"version": 1,
"versionNonce": 5,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "database-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "database-text",
"type": "text",
"x": 105,
"y": 345,
"width": 190,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 6,
"version": 1,
"versionNonce": 6,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "Database\nPostgreSQL",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "database",
"originalText": "Database\nPostgreSQL",
"lineHeight": 1.25
},
{
"id": "arrow-user-frontend",
"type": "arrow",
"x": 200,
"y": 115,
"width": 0,
"height": 60,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 7,
"version": 1,
"versionNonce": 7,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"points": [[0, 0], [0, 60]],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": true
},
{
"id": "arrow-frontend-database",
"type": "arrow",
"x": 200,
"y": 265,
"width": 0,
"height": 60,
"angle": 0,
"strokeColor": "#2f9e44",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 8,
"version": 1,
"versionNonce": 8,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"points": [[0, 0], [0, 60]],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": true
}
],
"appState": {
"gridSize": 20,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
```
---
## Layout Patterns
### Vertical Flow (Most Common)
```
Grid positioning:
- Column width: 200-250px
- Row height: 130-150px
- Element size: 160-200px x 80-90px
- Spacing: 40-50px between elements
Row positions (y):
Row 0: 20 (title)
Row 1: 100 (users/entry points)
Row 2: 230 (frontend/gateway)
Row 3: 380 (orchestration)
Row 4: 530 (services)
Row 5: 680 (data layer)
Row 6: 830 (external services)
Column positions (x):
Col 0: 100
Col 1: 300
Col 2: 500
Col 3: 700
Col 4: 900
```
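The row and column tables above can be captured in a small lookup helper. This is an illustrative sketch (the names `ROW_Y`, `COL_X`, and `grid_position` are ours, not part of the skill):

```python
# Canvas positions taken from the vertical-flow grid above.
ROW_Y = [20, 100, 230, 380, 530, 680, 830]
COL_X = [100, 300, 500, 700, 900]

def grid_position(row: int, col: int) -> tuple[int, int]:
    """Map a (row, col) grid cell to its (x, y) canvas position."""
    return COL_X[col], ROW_Y[row]
```

For example, `grid_position(2, 3)` places a frontend/gateway element in column 3.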
### Horizontal Flow (Pipelines)
```
Stage positions (x):
Stage 0: 100 (input/source)
Stage 1: 350 (transform 1)
Stage 2: 600 (transform 2)
Stage 3: 850 (transform 3)
Stage 4: 1100 (output/sink)
All stages at same y: 200
Arrows: "right" -> "left" connections
```
### Hub-and-Spoke
```
Center hub: x=500, y=350
8 positions at 45° increments:
N: (500, 150)
NE: (640, 210)
E: (700, 350)
SE: (640, 490)
S: (500, 550)
SW: (360, 490)
W: (300, 350)
NW: (360, 210)
```
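The eight listed positions lie on a circle of radius 200 around the hub, so they can be computed rather than hard-coded. A sketch (function name is ours; diagonal positions round to 641/209 etc., within a pixel of the values listed above):

```python
import math

def spoke_positions(cx: float = 500, cy: float = 350,
                    radius: float = 200, n: int = 8) -> list[tuple[int, int]]:
    """Spoke centers at equal angular increments, starting at north.

    Canvas y grows downward, so north is at angle -90 degrees.
    """
    out = []
    for i in range(n):
        a = math.radians(-90 + i * 360 / n)
        out.append((round(cx + radius * math.cos(a)),
                    round(cy + radius * math.sin(a))))
    return out
```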
---
## Complex Architecture Layout
```
Row 0: Title/Header (y: 20)
Row 1: Users/Clients (y: 80)
Row 2: Frontend/Gateway (y: 200)
Row 3: Orchestration (y: 350)
Row 4: Processing Services (y: 550)
Row 5: Data Layer (y: 680)
Row 6: External Services (y: 830)
Columns (x):
Col 0: 120
Col 1: 320
Col 2: 520
Col 3: 720
Col 4: 920
```
---
## Diagram Complexity Guidelines
| Complexity | Max Elements | Max Arrows | Approach |
|------------|-------------|------------|----------|
| Simple | 5-10 | 5-10 | Single file, no groups |
| Medium | 10-25 | 15-30 | Use grouping rectangles |
| Complex | 25-50 | 30-60 | Split into multiple diagrams |
| Very Complex | 50+ | 60+ | Multiple focused diagrams |
**When to split:**
- More than 50 elements
- Create: `architecture-overview.excalidraw`, `architecture-data-layer.excalidraw`
**When to use groups:**
- 3+ related services
- Same deployment unit
- Logical boundaries (VPC, Security Zone)
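The complexity table translates directly into a chooser. A minimal sketch with the table's thresholds (the function name is ours):

```python
def diagram_strategy(elements: int, arrows: int) -> str:
    """Pick a diagram approach from the complexity guidelines table."""
    if elements <= 10 and arrows <= 10:
        return "single file, no groups"
    if elements <= 25 and arrows <= 30:
        return "use grouping rectangles"
    if elements <= 50 and arrows <= 60:
        return "split into multiple diagrams"
    return "multiple focused diagrams"
```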


@@ -1,210 +0,0 @@
# Excalidraw JSON Format Reference
Complete reference for Excalidraw JSON structure and element types.
---
## File Structure
```json
{
"type": "excalidraw",
"version": 2,
"source": "claude-code-excalidraw-skill",
"elements": [],
"appState": {
"gridSize": 20,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
```
---
## Element Types
| Type | Use For | Arrow Reliability |
|------|---------|-------------------|
| `rectangle` | Services, components, databases, containers, orchestrators, decision points | Excellent |
| `ellipse` | Users, external systems, start/end points | Good |
| `text` | Labels inside shapes, titles, annotations | N/A |
| `arrow` | Data flow, connections, dependencies | N/A |
| `line` | Grouping boundaries, separators | N/A |
### BANNED: Diamond Shapes
**NEVER use `type: "diamond"` in generated diagrams.**
Diamond arrow connections are fundamentally broken in raw Excalidraw JSON:
- Excalidraw applies `roundness` to diamond vertices during rendering
- Visual edges appear offset from mathematical edge points
- No offset formula reliably compensates
- Arrows appear disconnected/floating
**Use styled rectangles instead** for visual distinction:
| Semantic Meaning | Rectangle Style |
|------------------|-----------------|
| Orchestrator/Hub | Coral (`#ffa8a8`/`#c92a2a`) + strokeWidth: 3 |
| Decision Point | Orange (`#ffd8a8`/`#e8590c`) + dashed stroke |
| Central Router | Larger size + bold color |
---
## Required Element Properties
Every element MUST have these properties:
```json
{
"id": "unique-id-string",
"type": "rectangle",
"x": 100,
"y": 100,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 1,
"version": 1,
"versionNonce": 1,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false
}
```
---
## Text Inside Shapes (Labels)
**Every labeled shape requires TWO elements:**
### Shape with boundElements
```json
{
"id": "{component-id}",
"type": "rectangle",
"x": 500,
"y": 200,
"width": 200,
"height": 90,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"boundElements": [{ "type": "text", "id": "{component-id}-text" }],
// ... other required properties
}
```
### Text with containerId
```json
{
"id": "{component-id}-text",
"type": "text",
"x": 505, // shape.x + 5
"y": 220, // shape.y + (shape.height - text.height) / 2
"width": 190, // shape.width - 10
"height": 50,
"text": "{Component Name}\n{Subtitle}",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"containerId": "{component-id}",
"originalText": "{Component Name}\n{Subtitle}",
"lineHeight": 1.25,
// ... other required properties
}
```
### DO NOT Use the `label` Property
The `label` property is for the JavaScript API, NOT raw JSON files:
```json
// WRONG - will show empty boxes
{ "type": "rectangle", "label": { "text": "My Label" } }
// CORRECT - requires TWO elements
// 1. Shape with boundElements reference
// 2. Separate text element with containerId
```
### Text Positioning
- Text `x` = shape `x` + 5
- Text `y` = shape `y` + (shape.height - text.height) / 2
- Text `width` = shape `width` - 10
- Use `\n` for multi-line labels
- Always use `textAlign: "center"` and `verticalAlign: "middle"`
### ID Naming Convention
Always use pattern: `{shape-id}-text` for text element IDs.
---
## Dynamic ID Generation
IDs and labels are generated from codebase analysis:
| Discovered Component | Generated ID | Generated Label |
|---------------------|--------------|-----------------|
| Express API server | `express-api` | `"API Server\nExpress.js"` |
| PostgreSQL database | `postgres-db` | `"PostgreSQL\nDatabase"` |
| Redis cache | `redis-cache` | `"Redis\nCache Layer"` |
| S3 bucket for uploads | `s3-uploads` | `"S3 Bucket\nuploads/"` |
| Lambda function | `lambda-processor` | `"Lambda\nProcessor"` |
| React frontend | `react-frontend` | `"React App\nFrontend"` |
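A naive slug generator shows the mechanical part of this mapping. Treat it only as a starting point: the table's IDs also abbreviate (`postgres-db`, not `postgresql-database`), which requires judgment beyond this sketch (the function name is ours):

```python
import re

def component_id(name: str) -> str:
    """Derive a kebab-case element ID from a discovered component name."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
```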
---
## Grouping with Dashed Rectangles
For logical groupings (namespaces, VPCs, pipelines):
```json
{
"id": "group-ai-pipeline",
"type": "rectangle",
"x": 100,
"y": 500,
"width": 1000,
"height": 280,
"strokeColor": "#9c36b5",
"backgroundColor": "transparent",
"strokeStyle": "dashed",
"roughness": 0,
"roundness": null,
"boundElements": null
}
```
Group labels are standalone text (no containerId) at top-left:
```json
{
"id": "group-ai-pipeline-label",
"type": "text",
"x": 120,
"y": 510,
"text": "AI Processing Pipeline (Cloud Run)",
"textAlign": "left",
"verticalAlign": "top",
"containerId": null
}
```


@@ -0,0 +1,71 @@
# Excalidraw JSON Schema
## Element Types
| Type | Use For |
|------|---------|
| `rectangle` | Processes, actions, components |
| `ellipse` | Entry/exit points, external systems |
| `diamond` | Decisions, conditionals |
| `arrow` | Connections between shapes |
| `text` | Labels inside shapes |
| `line` | Non-arrow connections |
| `frame` | Grouping containers |
## Common Properties
All elements share these:
| Property | Type | Description |
|----------|------|-------------|
| `id` | string | Unique identifier |
| `type` | string | Element type |
| `x`, `y` | number | Position in pixels |
| `width`, `height` | number | Size in pixels |
| `strokeColor` | string | Border color (hex) |
| `backgroundColor` | string | Fill color (hex or "transparent") |
| `fillStyle` | string | "solid", "hachure", "cross-hatch" |
| `strokeWidth` | number | 1, 2, or 4 |
| `strokeStyle` | string | "solid", "dashed", "dotted" |
| `roughness` | number | 0 (smooth), 1 (default), 2 (rough) |
| `opacity` | number | 0-100 |
| `seed` | number | Random seed for roughness |
## Text-Specific Properties
| Property | Description |
|----------|-------------|
| `text` | The display text |
| `originalText` | Same as text |
| `fontSize` | Size in pixels (16-20 recommended) |
| `fontFamily` | 3 for monospace (use this) |
| `textAlign` | "left", "center", "right" |
| `verticalAlign` | "top", "middle", "bottom" |
| `containerId` | ID of parent shape |
## Arrow-Specific Properties
| Property | Description |
|----------|-------------|
| `points` | Array of [x, y] coordinates |
| `startBinding` | Connection to start shape |
| `endBinding` | Connection to end shape |
| `startArrowhead` | null, "arrow", "bar", "dot", "triangle" |
| `endArrowhead` | null, "arrow", "bar", "dot", "triangle" |
## Binding Format
```json
{
"elementId": "shapeId",
"focus": 0,
"gap": 2
}
```
## Rectangle Roundness
Add for rounded corners:
```json
"roundness": { "type": 3 }
```


@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""Render Excalidraw JSON to PNG using Playwright + headless Chromium.
Usage:
python3 render_excalidraw.py <path-to-file.excalidraw> [--output path.png] [--scale 2] [--width 1920]
Dependencies (playwright, chromium) are provided by the Nix flake / direnv environment.
"""
from __future__ import annotations
import argparse
import json
import sys
from pathlib import Path
def validate_excalidraw(data: dict) -> list[str]:
"""Validate Excalidraw JSON structure. Returns list of errors (empty = valid)."""
errors: list[str] = []
if data.get("type") != "excalidraw":
errors.append(f"Expected type 'excalidraw', got '{data.get('type')}'")
if "elements" not in data:
errors.append("Missing 'elements' array")
elif not isinstance(data["elements"], list):
errors.append("'elements' must be an array")
elif len(data["elements"]) == 0:
errors.append("'elements' array is empty — nothing to render")
return errors
def compute_bounding_box(elements: list[dict]) -> tuple[float, float, float, float]:
"""Compute bounding box (min_x, min_y, max_x, max_y) across all elements."""
min_x = float("inf")
min_y = float("inf")
max_x = float("-inf")
max_y = float("-inf")
for el in elements:
if el.get("isDeleted"):
continue
x = el.get("x", 0)
y = el.get("y", 0)
w = el.get("width", 0)
h = el.get("height", 0)
# For arrows/lines, points array defines the shape relative to x,y
if el.get("type") in ("arrow", "line") and "points" in el:
for px, py in el["points"]:
min_x = min(min_x, x + px)
min_y = min(min_y, y + py)
max_x = max(max_x, x + px)
max_y = max(max_y, y + py)
else:
min_x = min(min_x, x)
min_y = min(min_y, y)
max_x = max(max_x, x + abs(w))
max_y = max(max_y, y + abs(h))
if min_x == float("inf"):
return (0, 0, 800, 600)
return (min_x, min_y, max_x, max_y)
def render(
excalidraw_path: Path,
output_path: Path | None = None,
scale: int = 2,
max_width: int = 1920,
) -> Path:
"""Render an .excalidraw file to PNG. Returns the output PNG path."""
# Import playwright here so validation errors show before import errors
try:
from playwright.sync_api import sync_playwright
except ImportError:
print("ERROR: playwright not installed.", file=sys.stderr)
print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
sys.exit(1)
# Read and validate
raw = excalidraw_path.read_text(encoding="utf-8")
try:
data = json.loads(raw)
except json.JSONDecodeError as e:
print(f"ERROR: Invalid JSON in {excalidraw_path}: {e}", file=sys.stderr)
sys.exit(1)
errors = validate_excalidraw(data)
if errors:
print("ERROR: Invalid Excalidraw file:", file=sys.stderr)
for err in errors:
print(f" - {err}", file=sys.stderr)
sys.exit(1)
# Compute viewport size from element bounding box
elements = [e for e in data["elements"] if not e.get("isDeleted")]
min_x, min_y, max_x, max_y = compute_bounding_box(elements)
padding = 80
diagram_w = max_x - min_x + padding * 2
diagram_h = max_y - min_y + padding * 2
# Cap viewport width, let height be natural
vp_width = min(int(diagram_w), max_width)
vp_height = max(int(diagram_h), 600)
# Output path
if output_path is None:
output_path = excalidraw_path.with_suffix(".png")
# Template path (same directory as this script)
template_path = Path(__file__).parent / "render_template.html"
if not template_path.exists():
print(f"ERROR: Template not found at {template_path}", file=sys.stderr)
sys.exit(1)
template_url = template_path.as_uri()
with sync_playwright() as p:
try:
browser = p.chromium.launch(headless=True)
except Exception as e:
if "Executable doesn't exist" in str(e) or "browserType.launch" in str(e):
print("ERROR: Chromium not installed for Playwright.", file=sys.stderr)
print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
sys.exit(1)
raise
page = browser.new_page(
viewport={"width": vp_width, "height": vp_height},
device_scale_factor=scale,
)
# Load the template
page.goto(template_url)
# Wait for the ES module to load (imports from esm.sh)
page.wait_for_function("window.__moduleReady === true", timeout=30000)
# Inject the diagram data and render
json_str = json.dumps(data)
result = page.evaluate(f"window.renderDiagram({json_str})")
if not result or not result.get("success"):
error_msg = (
result.get("error", "Unknown render error")
if result
else "renderDiagram returned null"
)
print(f"ERROR: Render failed: {error_msg}", file=sys.stderr)
browser.close()
sys.exit(1)
# Wait for render completion signal
page.wait_for_function("window.__renderComplete === true", timeout=15000)
# Screenshot the SVG element
svg_el = page.query_selector("#root svg")
if svg_el is None:
print("ERROR: No SVG element found after render.", file=sys.stderr)
browser.close()
sys.exit(1)
svg_el.screenshot(path=str(output_path))
browser.close()
return output_path
def main() -> None:
"""Entry point for rendering Excalidraw JSON files to PNG."""
parser = argparse.ArgumentParser(description="Render Excalidraw JSON to PNG")
parser.add_argument("input", type=Path, help="Path to .excalidraw JSON file")
parser.add_argument(
"--output",
"-o",
type=Path,
default=None,
help="Output PNG path (default: same name with .png)",
)
parser.add_argument(
"--scale", "-s", type=int, default=2, help="Device scale factor (default: 2)"
)
parser.add_argument(
"--width",
"-w",
type=int,
default=1920,
help="Max viewport width (default: 1920)",
)
args = parser.parse_args()
if not args.input.exists():
print(f"ERROR: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
png_path = render(args.input, args.output, args.scale, args.width)
print(str(png_path))
if __name__ == "__main__":
main()


@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body { background: #ffffff; overflow: hidden; }
#root { display: inline-block; }
#root svg { display: block; }
</style>
</head>
<body>
<div id="root"></div>
<script type="module">
import { exportToSvg } from "https://esm.sh/@excalidraw/excalidraw?bundle";
window.renderDiagram = async function(jsonData) {
try {
const data = typeof jsonData === "string" ? JSON.parse(jsonData) : jsonData;
const elements = data.elements || [];
const appState = data.appState || {};
const files = data.files || {};
// Force white background in appState
appState.viewBackgroundColor = appState.viewBackgroundColor || "#ffffff";
appState.exportWithDarkMode = false;
const svg = await exportToSvg({
elements: elements,
appState: {
...appState,
exportBackground: true,
},
files: files,
});
// Clear any previous render
const root = document.getElementById("root");
root.innerHTML = "";
root.appendChild(svg);
window.__renderComplete = true;
window.__renderError = null;
return { success: true, width: svg.getAttribute("width"), height: svg.getAttribute("height") };
} catch (err) {
window.__renderComplete = true;
window.__renderError = err.message;
return { success: false, error: err.message };
}
};
// Signal that the module is loaded and ready
window.__moduleReady = true;
</script>
</body>
</html>


@@ -1,182 +0,0 @@
# Validation Reference
Checklists, validation algorithms, and common bug fixes.
---
## Pre-Flight Validation Algorithm
Run BEFORE writing the file:
```
FUNCTION validateDiagram(elements):
errors = []
// 1. Validate shape-text bindings
FOR each shape IN elements WHERE shape.boundElements != null:
FOR each binding IN shape.boundElements:
textElement = findById(elements, binding.id)
IF textElement == null:
errors.append("Shape {shape.id} references missing text {binding.id}")
ELSE IF textElement.containerId != shape.id:
errors.append("Text containerId doesn't match shape")
// 2. Validate arrow connections
FOR each arrow IN elements WHERE arrow.type == "arrow":
sourceShape = findShapeNear(elements, arrow.x, arrow.y)
IF sourceShape == null:
errors.append("Arrow {arrow.id} doesn't start from shape edge")
finalPoint = arrow.points[arrow.points.length - 1]
endX = arrow.x + finalPoint[0]
endY = arrow.y + finalPoint[1]
targetShape = findShapeNear(elements, endX, endY)
IF targetShape == null:
errors.append("Arrow {arrow.id} doesn't end at shape edge")
IF arrow.points.length > 2:
IF arrow.elbowed != true:
errors.append("Arrow {arrow.id} missing elbowed:true")
IF arrow.roundness != null:
errors.append("Arrow {arrow.id} should have roundness:null")
// 3. Validate unique IDs
ids = [el.id for el in elements]
duplicates = findDuplicates(ids)
IF duplicates.length > 0:
errors.append("Duplicate IDs: {duplicates}")
// 4. Validate bounding boxes
FOR each arrow IN elements WHERE arrow.type == "arrow":
maxX = max(abs(p[0]) for p in arrow.points)
maxY = max(abs(p[1]) for p in arrow.points)
IF arrow.width < maxX OR arrow.height < maxY:
errors.append("Arrow {arrow.id} bounding box too small")
RETURN errors
FUNCTION findShapeNear(elements, x, y, tolerance=15):
FOR each shape IN elements WHERE shape.type IN ["rectangle", "ellipse"]:
edges = [
(shape.x + shape.width/2, shape.y), // top
(shape.x + shape.width/2, shape.y + shape.height), // bottom
(shape.x, shape.y + shape.height/2), // left
(shape.x + shape.width, shape.y + shape.height/2) // right
]
FOR each edge IN edges:
IF abs(edge.x - x) < tolerance AND abs(edge.y - y) < tolerance:
RETURN shape
RETURN null
```
---
## Checklists
### Before Generating
- [ ] Identified all components from codebase
- [ ] Mapped all connections/data flows
- [ ] Chose layout pattern (vertical, horizontal, hub-and-spoke)
- [ ] Selected color palette (default, AWS, Azure, K8s)
- [ ] Planned grid positions
- [ ] Created ID naming scheme
### During Generation
- [ ] Every labeled shape has BOTH shape AND text elements
- [ ] Shape has `boundElements: [{ "type": "text", "id": "{id}-text" }]`
- [ ] Text has `containerId: "{shape-id}"`
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`, `roughness: 0`
- [ ] Arrows have `startBinding` and `endBinding`
- [ ] No diamond shapes used
- [ ] Applied staggering formula for multiple arrows
### Arrow Validation (Every Arrow)
- [ ] Arrow `x,y` calculated from shape edge
- [ ] Final point offset = `targetEdge - sourceEdge`
- [ ] Arrow `width` = `max(abs(point[0]))`
- [ ] Arrow `height` = `max(abs(point[1]))`
- [ ] U-turn arrows have 40-60px clearance
### After Generation
- [ ] All `boundElements` IDs reference valid text elements
- [ ] All `containerId` values reference valid shapes
- [ ] All arrows start within 15px of shape edge
- [ ] All arrows end within 15px of shape edge
- [ ] No duplicate IDs
- [ ] Arrow bounding boxes match points
- [ ] File is valid JSON
---
## Common Bugs and Fixes
### Bug: Arrow appears disconnected/floating
**Cause**: Arrow `x,y` not calculated from shape edge.
**Fix**:
```
Rectangle bottom: arrow_x = shape.x + shape.width/2
arrow_y = shape.y + shape.height
```
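As Python, the bottom-edge formula reads (function name is ours; a small visual gap, as in the example diagrams, can be added on top of this):

```python
def arrow_origin_bottom(shape: dict) -> tuple[float, float]:
    """Start point for an arrow leaving a shape's bottom-edge midpoint."""
    return shape["x"] + shape["width"] / 2, shape["y"] + shape["height"]
```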
### Bug: Arrow endpoint doesn't reach target
**Cause**: Final point offset calculated incorrectly.
**Fix**:
```
target_edge = (target.x + target.width/2, target.y)
offset_x = target_edge.x - arrow.x
offset_y = target_edge.y - arrow.y
Final point = [offset_x, offset_y]
```
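The same fix as a small sketch, aiming at the target's top-edge midpoint (the function name is ours):

```python
def arrow_final_point(arrow_x: float, arrow_y: float, target: dict) -> list[float]:
    """Offset of the arrow's final point so it lands on the target's top-edge midpoint."""
    tx = target["x"] + target["width"] / 2
    ty = target["y"]
    return [tx - arrow_x, ty - arrow_y]
```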
### Bug: Multiple arrows from same source overlap
**Cause**: All arrows start from identical `x,y`.
**Fix**: Stagger start positions:
```
For 5 arrows from bottom edge:
arrow1.x = shape.x + shape.width * 0.2
arrow2.x = shape.x + shape.width * 0.35
arrow3.x = shape.x + shape.width * 0.5
arrow4.x = shape.x + shape.width * 0.65
arrow5.x = shape.x + shape.width * 0.8
```
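The stagger pattern generalizes to any arrow count by spreading start fractions evenly between 0.2 and 0.8 of the edge width. A sketch (names are ours) that reproduces the five positions above:

```python
def staggered_xs(shape: dict, n: int) -> list[float]:
    """Evenly staggered x start positions along a shape edge for n outgoing arrows."""
    fractions = [0.2 + i * 0.6 / (n - 1) for i in range(n)] if n > 1 else [0.5]
    return [shape["x"] + shape["width"] * f for f in fractions]
```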
### Bug: Callback arrow doesn't loop correctly
**Cause**: U-turn path lacks clearance.
**Fix**: Use 4-point path:
```
Points = [[0, 0], [clearance, 0], [clearance, -vert], [final_x, -vert]]
clearance = 40-60px
```
### Bug: Labels don't appear inside shapes
**Cause**: Using `label` property instead of separate text element.
**Fix**: Create TWO elements:
1. Shape with `boundElements` referencing text
2. Text with `containerId` referencing shape
### Bug: Arrows are curved, not 90-degree
**Cause**: Missing elbow properties.
**Fix**: Add all three:
```json
{
"roughness": 0,
"roundness": null,
"elbowed": true
}
```


@@ -1,10 +1,16 @@
---
name: mem0-memory
-description: "Store and retrieve memories using Mem0 REST API. Use when: (1) storing information for future recall, (2) searching past conversations or facts, (3) managing user/agent memory contexts, (4) building conversational AI with persistent memory. Triggers on keywords like 'remember', 'recall', 'memory', 'store for later', 'what did I say about'."
+description: "DEPRECATED: Replaced by opencode-memory plugin. See skills/memory/SKILL.md for current memory system."
compatibility: opencode
---
-# Mem0 Memory
+> ⚠️ **DEPRECATED**
+>
+> This skill is deprecated. The memory system has been replaced by the opencode-memory plugin.
+>
+> **See:** `skills/memory/SKILL.md` for the current memory system.
+# Mem0 Memory (Legacy)
Store and retrieve memories via Mem0 REST API at `http://localhost:8000`.
@@ -108,6 +114,36 @@ Combine scopes for fine-grained control:
}
```
## Memory Categories
Memories are classified into 5 categories for organization:
| Category | Definition | Obsidian Path | Example |
|----------|------------|---------------|---------|
| `preference` | Personal preferences | `80-memory/preferences/` | UI settings, workflow styles |
| `fact` | Objective information | `80-memory/facts/` | Tech stack, role, constraints |
| `decision` | Choices with rationale | `80-memory/decisions/` | Tool selections, architecture |
| `entity` | People, orgs, systems | `80-memory/entities/` | Contacts, APIs, concepts |
| `other` | Everything else | `80-memory/other/` | General learnings |
### Metadata Pattern
Include category in metadata when storing:
```json
{
"messages": [...],
"user_id": "user123",
"metadata": {
"category": "preference",
"source": "explicit"
}
}
```
- `category`: One of preference, fact, decision, entity, other
- `source`: "explicit" (user requested) or "auto-capture" (automatic)
## Workflow Patterns
### Pattern 1: Remember User Preferences
@@ -137,6 +173,43 @@ curl -X POST http://localhost:8000/memories \
-d '{"messages":[...], "run_id":"SESSION_ID"}'
```
## Dual-Layer Sync
Memories are stored in BOTH Mem0 AND the Obsidian CODEX vault for redundancy and accessibility.
### Sync Pattern
1. **Store in Mem0 first** - Get `mem0_id` from response
2. **Create Obsidian note** - In `80-memory/<category>/` using memory template
3. **Cross-reference**:
- Add `mem0_id` to Obsidian note frontmatter
- Update Mem0 metadata with `obsidian_ref` (file path)
### Example Flow
```bash
# 1. Store in Mem0
RESPONSE=$(curl -s -X POST http://localhost:8000/memories \
-d '{"messages":[{"role":"user","content":"I prefer dark mode"}],"user_id":"m3tam3re","metadata":{"category":"preference","source":"explicit"}}')
# 2. Extract mem0_id
MEM0_ID=$(echo "$RESPONSE" | jq -r '.id')
# 3. Create Obsidian note (via REST API or MCP)
# Path: 80-memory/preferences/prefers-dark-mode.md
# Frontmatter includes: mem0_id: $MEM0_ID
# 4. Update Mem0 with Obsidian reference
curl -X PUT http://localhost:8000/memories/$MEM0_ID \
-d '{"metadata":{"obsidian_ref":"80-memory/preferences/prefers-dark-mode.md"}}'
```
### When Obsidian Unavailable
- Store in Mem0 only
- Log sync failure
- Retry on next access
## Response Format
Memory objects include:
@@ -161,6 +234,45 @@ Verify API is running:
curl http://localhost:8000/health
```
### Pre-Operation Check
Before any memory operation, verify Mem0 is running:
```bash
if ! curl -s http://localhost:8000/health > /dev/null 2>&1; then
echo "WARNING: Mem0 unavailable. Memory operations skipped."
# Continue without memory features
fi
```
## Error Handling
### Mem0 Unavailable
When `curl http://localhost:8000/health` fails:
- Skip all memory operations
- Warn user: "Memory system unavailable. Mem0 not running at localhost:8000"
- Continue with degraded functionality
### Obsidian Unavailable
When vault sync fails:
- Store in Mem0 only
- Log: "Obsidian sync failed for memory [id]"
- Do not block user workflow
### API Errors
| Status | Meaning | Action |
|--------|---------|--------|
| 400 | Bad request | Check JSON format, required fields |
| 404 | Memory not found | Memory may have been deleted |
| 500 | Server error | Retry, check Mem0 logs |
### Graceful Degradation
Always continue core functionality even if memory system fails. Memory is enhancement, not requirement.
## API Reference
See [references/api_reference.md](references/api_reference.md) for complete OpenAPI schema.


@@ -1,108 +0,0 @@
---
name: msteams
description: "Microsoft Teams Graph API integration for team communication. Use when: (1) Managing teams and channels, (2) Sending/receiving channel messages, (3) Scheduling or managing meetings, (4) Handling chat conversations. Triggers: 'Teams', 'meeting', 'channel', 'team message', 'chat', 'Teams message'."
compatibility: opencode
---
# Microsoft Teams Integration
Microsoft Teams Graph API integration for managing team communication, channels, messages, meetings, and chat conversations via MCP tools.
## Core Capabilities
### Teams & Channels
- **List joined teams**: Retrieve all teams the user is a member of
- **Manage channels**: Create, list, and manage channels within teams
- **Team membership**: Add, remove, and update team members
### Channel Messages
- **Send messages**: Post messages to channels with rich text support
- **Retrieve messages**: List channel messages with filtering by date range
- **Message management**: Read and respond to channel communications
### Online Meetings
- **Schedule meetings**: Create online meetings with participants
- **Manage meetings**: Update meeting details and coordinates
- **Meeting access**: Retrieve join links and meeting information
- **Presence**: Check user presence and activity status
### Chat
- **Direct messages**: 1:1 chat conversations with users
- **Group chats**: Multi-person chat conversations
- **Chat messages**: Send and receive chat messages
## Common Workflows
### Send Channel Message
1. Identify target team and channel
2. Compose message content
3. Use MCP tool to send message to channel
Example:
```
"Post a message to the 'General' channel in 'Engineering' team about the deployment status"
```
### Schedule Meeting
1. Determine meeting participants
2. Set meeting time and duration
3. Create meeting title and description
4. Use MCP tool to create online meeting
Example:
```
"Schedule a meeting with @alice and @bob for Friday 2pm to discuss the project roadmap"
```
### List Channel Messages
1. Specify team and channel
2. Define date range (required for polling)
3. Retrieve and display messages
Example:
```
"Show me all messages in #general from the last week"
```
### Send Direct Message
1. Identify recipient user
2. Compose message
3. Use MCP chat tool to send message
Example:
```
"Send a message to @john asking if the PR review is complete"
```
## MCP Tool Categories
The MS Teams MCP server provides tool categories for:
- **Channels**: Team and channel management operations
- **Messages**: Channel message operations
- **Meetings**: Online meeting scheduling and management
- **Chat**: Direct and group chat operations
## Important Constraints
**Authentication**: Do NOT include Graph API authentication flows. The MCP server handles authentication configuration.
**Polling limits**: When retrieving messages, always specify a date range. Polling the same resource more than once per day is a violation of Microsoft APIs Terms of Use.
**Email overlap**: Do NOT overlap with Outlook email functionality. This skill focuses on Teams-specific communication (channels, chat, meetings), not email operations.
**File storage**: Files in channels are stored in SharePoint. Use SharePoint-specific operations for file management.
## Domain Boundaries
This skill integrates with **Hermes** (work communication agent). Hermes loads this skill when user requests:
- Teams-related operations
- Meeting scheduling or management
- Channel communication
- Teams chat conversations
For email operations, Hermes uses the **outlook** skill instead.

View File

@@ -218,6 +218,7 @@ curl -X POST "http://127.0.0.1:27124/create-note" \
| Research note | research | Save research findings with tags |
| Project note | task-management | Link tasks to project notes |
| Plan document | plan-writing | Save generated plan to vault |
| Memory note | memory | Create/read memory notes in 80-memory/ |
## Best Practices
@@ -229,6 +230,102 @@ curl -X POST "http://127.0.0.1:27124/create-note" \
6. **Escape special characters** - URL-encode paths with spaces or symbols
7. **Backup vault** - REST API operations modify files directly
---
## Memory Folder Conventions
The `80-memory/` folder stores dual-layer memories synced with Mem0.
### Structure
```
80-memory/
├── preferences/ # Personal preferences (UI, workflow, communication)
├── facts/ # Objective information (role, tech stack, constraints)
├── decisions/ # Choices with rationale (tool selections, architecture)
├── entities/ # People, organizations, systems, concepts
└── other/ # Everything else
```
### Naming Convention
Memory notes use kebab-case: `prefers-dark-mode.md`, `uses-typescript.md`
### Required Frontmatter
```yaml
---
type: memory
category: # preference | fact | decision | entity | other
mem0_id: # Mem0 memory ID (e.g., "mem_abc123")
source: explicit # explicit | auto-capture
importance: # critical | high | medium | low
created: 2026-02-12
updated: 2026-02-12
tags:
- memory
sync_targets: []
---
```
### Key Fields
| Field | Purpose |
|-------|---------|
| `mem0_id` | Links to Mem0 entry for semantic search |
| `category` | Determines subfolder and classification |
| `source` | How memory was captured (explicit request vs auto) |
| `importance` | Priority for recall ranking |
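The frontmatter above can be assembled programmatically. A minimal sketch in Python, using the field values shown earlier (the `mem0_id` is a placeholder):

```python
# Sketch: assemble the required frontmatter for a new memory note.
# Field names follow the convention above; values are illustrative.
def memory_frontmatter(category, mem0_id, importance, created):
    return "\n".join([
        "---",
        "type: memory",
        f"category: {category}",
        f"mem0_id: {mem0_id}",
        "source: explicit",
        f"importance: {importance}",
        f"created: {created}",
        f"updated: {created}",
        "tags:",
        "  - memory",
        "sync_targets: []",
        "---",
    ])

print(memory_frontmatter("preference", "mem_abc123", "medium", "2026-02-12"))
```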
---
## Memory Note Workflows
### Create Memory Note
When creating a memory note in the vault:
```bash
# Using REST API
curl -X POST "http://127.0.0.1:27124/create-note" \
-H "Content-Type: application/json" \
-d '{
"path": "80-memory/preferences/prefers-dark-mode.md",
"content": "---\ntype: memory\ncategory: preference\nmem0_id: mem_abc123\nsource: explicit\nimportance: medium\ncreated: 2026-02-12\nupdated: 2026-02-12\ntags:\n - memory\nsync_targets: []\n---\n\n# Prefers Dark Mode\n\n## Content\n\nUser prefers dark mode in all applications.\n\n## Context\n\nStated during UI preferences discussion on 2026-02-12.\n\n## Related\n\n- [[UI Settings]]\n"
}'
```
### Read Memory Note
Read by path with URL encoding:
```bash
curl -X GET "http://127.0.0.1:27124/read-note?path=80-memory%2Fpreferences%2Fprefers-dark-mode.md"
```
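The `%2F` sequences come from percent-encoding the vault path. A minimal sketch of producing the encoded path with Python's standard library:

```python
from urllib.parse import quote

# Sketch: percent-encode a vault path for the REST API query string.
# safe="" ensures "/" is encoded as %2F, matching the example above.
path = "80-memory/preferences/prefers-dark-mode.md"
encoded = quote(path, safe="")
print(encoded)
# → 80-memory%2Fpreferences%2Fprefers-dark-mode.md
```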
### Search Memories
Search within memory folder:
```bash
curl -X GET "http://127.0.0.1:27124/search?q=dark%20mode&path=80-memory"
```
### Update Memory Note
Update content and frontmatter:
```bash
curl -X PUT "http://127.0.0.1:27124/update-note" \
-H "Content-Type: application/json" \
-d '{
"path": "80-memory/preferences/prefers-dark-mode.md",
"content": "# Updated content..."
}'
```
---
## Error Handling
Common HTTP status codes:

View File

@@ -1,231 +0,0 @@
---
name: outlook
description: "Outlook Graph API integration for email, calendar, and contact management. Use when: (1) Reading or sending emails, (2) Managing inbox and folders, (3) Working with calendar events and appointments, (4) Managing contacts, (5) Organizing email messages. Triggers: 'email', 'Outlook', 'inbox', 'calendar', 'contact', 'message', 'folder', 'appointment', 'meeting'."
compatibility: opencode
---
# Outlook
Outlook Graph API integration for mail, calendar, and contact management via MCP. Enables comprehensive email workflows, calendar coordination, and contact organization.
## Overview
Outlook is Microsoft 365's messaging and communication hub. This skill provides access to:
- **Mail**: Send, receive, search, organize, and categorize messages
- **Calendar**: Create and manage events, appointments, and meetings
- **Contacts**: Manage contact information and relationships
- **Folders**: Organize messages into custom folder structures
## Core Workflows
### 1. Reading Emails
Retrieve messages from mailbox:
```
User: "Check my inbox for new messages from John"
AI: Use Outlook MCP to:
1. List messages in Inbox folder
2. Filter by sender = "John"
3. Display summary (subject, date, importance)
```
**Use search and sort** to efficiently find messages:
- Filter by folder (Inbox, Sent, Drafts, etc.)
- Sort by date, importance, or subject
- Search by sender, subject, or content
### 2. Sending Emails
Compose and send new messages:
```
User: "Send an email to sarah@company.com about the project update"
AI: Use Outlook MCP to:
1. Create message draft with recipient and subject
2. Add body content
3. Send the message
```
**Key options**:
- Add attachments
- Set importance (low, normal, high)
- Add recipients (to, cc, bcc)
- Use categories for organization
### 3. Replying and Forwarding
Respond to existing messages:
```
User: "Reply to the last email from Mike with thanks"
AI: Use Outlook MCP to:
1. Retrieve the most recent message from Mike
2. Create reply with appropriate body
3. Send the reply
```
**Supports**:
- Reply (to sender)
- Reply All (to all recipients)
- Forward (to new recipients)
### 4. Managing Folders
Organize messages into folder structure:
```
User: "Move all emails from vendor@supplier.com to Vendors folder"
AI: Use Outlook MCP to:
1. Search for messages from vendor@supplier.com
2. Move matching messages to Vendors folder
3. Confirm move operation
```
**Folder operations**:
- List folders
- Create new folders
- Move messages between folders
- Delete folders (careful!)
### 5. Calendar Events
Manage calendar and schedule meetings:
```
User: "Schedule a team meeting for Friday at 2pm"
AI: Use Outlook MCP to:
1. Create calendar event
2. Set date/time (Friday 2:00 PM)
3. Add attendees
4. Send meeting invitation
```
**Event details**:
- Subject and location
- Start/end times
- Attendees and required/optional status
- Meeting body/agenda
- Recurrence (for recurring meetings)
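These details correspond to the Microsoft Graph event resource that backs the MCP tooling. A sketch of the payload shape (times, timezone, and attendees are illustrative):

```python
import json

# Sketch of a Microsoft Graph calendar event body; values are illustrative.
event = {
    "subject": "Team meeting",
    "start": {"dateTime": "2026-02-20T14:00:00", "timeZone": "Europe/Berlin"},
    "end":   {"dateTime": "2026-02-20T15:00:00", "timeZone": "Europe/Berlin"},
    "attendees": [
        {"emailAddress": {"address": "alice@company.com"}, "type": "required"},
        {"emailAddress": {"address": "bob@company.com"},   "type": "optional"},
    ],
    "body": {"contentType": "Text", "content": "Agenda: project roadmap."},
    "location": {"displayName": "Conference Room 1"},
}
print(json.dumps(event, indent=2))
```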
### 6. Managing Contacts
Create and maintain contact information:
```
User: "Add Jane Smith to contacts: jane@company.com, 555-1234"
AI: Use Outlook MCP to:
1. Create contact object
2. Add email address and phone number
3. Save to contacts
```
**Contact fields**:
- Name (first, last, display name)
- Email addresses (multiple)
- Phone numbers (multiple types)
- Company, job title
- Notes
## Advanced Features
### Message Organization
**Categories**: Tag messages with color-coded categories for enhanced organization
```
User: "Tag these project emails as 'Urgent' category"
AI: Use Outlook MCP to:
1. Retrieve specified messages
2. Assign category (e.g., "Urgent")
3. Confirm categorization
```
**Importance**: Mark messages as high, normal, or low importance
```
User: "Mark this message as high priority"
AI: Use Outlook MCP to update message importance flag
```
**Search**: Find messages by sender, subject, content, or date range
```
User: "Find all emails about Q4 budget from October"
AI: Use Outlook MCP to search with filters:
- Subject contains "budget"
- Date range: October
- Optionally filter by sender
```
### Email Intelligence
**Focused Inbox**: Access messages categorized as focused vs other
**Mail Tips**: Check recipient status before sending (auto-reply, full mailbox)
**MIME Support**: Handle email in MIME format for interoperability
## Integration with Other Skills
This skill focuses on Outlook-specific operations. For related functionality:
| Need | Skill | When to Use |
|------|-------|-------------|
| **Team project updates** | basecamp | "Update the Basecamp todo" |
| **Team channel messages** | msteams | "Post this in the Teams channel" |
| **Private notes about emails** | obsidian | "Save this to Obsidian" |
| **Drafting long-form emails** | calliope | "Help me write a professional email" |
| **Short quick messages** | hermes (this skill) | "Send a quick update" |
## Common Patterns
### Email Triage Workflow
1. **Scan inbox**: List messages sorted by date
2. **Categorize**: Assign categories based on content/urgency
3. **Action**: Reply, forward, or move to appropriate folder
4. **Track**: Flag for follow-up if needed
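The categorize step can be expressed as simple rules. A minimal sketch, assuming hypothetical message dicts with `subject` and `sender` keys as an MCP tool might return them:

```python
# Sketch: minimal triage rules over a hypothetical message dict.
# The keys and the rule set are illustrative, not a fixed schema.
def triage(message):
    subject = message["subject"].lower()
    if "urgent" in subject or "asap" in subject:
        return "Urgent"       # category to assign
    if message["sender"].endswith("@supplier.com"):
        return "Vendors"      # folder to move into
    return "Inbox"            # leave in place

print(triage({"subject": "ASAP: invoice", "sender": "billing@supplier.com"}))
# → Urgent
```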
### Meeting Coordination
1. **Check availability**: Query calendar for conflicts
2. **Propose time**: Suggest multiple time options
3. **Create event**: Set up meeting with attendees
4. **Follow up**: Send reminder or agenda
### Project Communication
1. **Search thread**: Find all messages related to project
2. **Organize**: Move to project folder
3. **Categorize**: Tag with project category
4. **Summarize**: Extract key points if needed
## Quality Standards
- **Accurate recipient addressing**: Verify email addresses before sending
- **Clear subject lines**: Ensure subjects accurately reflect content
- **Appropriate categorization**: Use categories consistently
- **Folder hygiene**: Maintain organized folder structure
- **Respect privacy**: Do not share sensitive content indiscriminately
## Edge Cases
**Multiple mailboxes**: This skill supports primary and shared mailboxes, not archive mailboxes
**Large attachments**: Use appropriate attachment handling for large files
**Meeting conflicts**: Check calendar availability before scheduling
**Email limits**: Respect rate limits and sending quotas
**Deleted items**: Use caution with delete operations (consider archiving instead)
## Boundaries
- **Do NOT handle Teams-specific messaging** (msteams's domain)
- **Do NOT handle Basecamp communication** (basecamp's domain)
- **Do NOT manage wiki documentation** (Athena's domain)
- **Do NOT access private Obsidian vaults** (Apollo's domain)
- **Do NOT write creative email content** (delegate to calliope for drafts)

View File

@@ -79,6 +79,7 @@ Executable code (Python/Bash/etc.) for tasks that require deterministic reliabil
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Opencode for patching or environment-specific adjustments
- **Dependencies**: Scripts with external dependencies (Python packages, system tools) require those dependencies to be registered in the repository's `flake.nix`. See Step 4 for details.
##### References (`references/`) ##### References (`references/`)
@@ -302,6 +303,37 @@ To begin implementation, start with the reusable resources identified above: `sc
Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.
#### Register Dependencies in flake.nix
When scripts introduce external dependencies (Python packages or system tools), add them to the repository's `flake.nix`. Dependencies are defined once in `pythonEnv` (Python packages) or `packages` (system tools) inside the `skills-runtime` buildEnv. This runtime is exported as `packages.${system}.skills-runtime` and consumed by project flakes and home-manager — ensuring opencode always has the correct environment regardless of which project it runs in.
**Python packages** — add to the `pythonEnv` block with a comment referencing the skill:
```nix
pythonEnv = pkgs.python3.withPackages (ps:
with ps; [
# <skill-name>: <script>.py
<package-name>
]);
```
**System tools** (e.g. `poppler-utils`, `ffmpeg`, `imagemagick`) — add to the `paths` list in the `skills-runtime` buildEnv:
```nix
skills-runtime = pkgs.buildEnv {
name = "opencode-skills-runtime";
paths = [
pythonEnv
# <skill-name>: needed by <script>
pkgs.<tool-name>
];
};
```
**Convention**: Each entry must include a comment with `# <skill-name>: <reason>` so dependencies remain traceable to their originating skill.
After adding dependencies, verify they resolve: `nix develop --command python3 -c "import <package>"`
Any example files and directories not needed for the skill should be deleted. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.
#### Update SKILL.md

View File

@@ -6,8 +6,8 @@ Usage:
init_skill.py <skill-name> --path <path>
Examples:
init_skill.py my-new-skill --path ~/.config/opencode/skills
init_skill.py my-api-helper --path .opencode/skills
init_skill.py custom-skill --path /custom/location
"""