docs: update AGENTS.md and README.md for rules system, remove beads

- Add rules/ directory documentation to both files
- Update skill count from 25 to 15 modules
- Remove beads references (issue tracking removed)
- Update skills list with current active skills
- Document flake.nix as proper Nix flake (not flake=false)
- Add rules system integration section
- Clean up sisyphus planning artifacts
- Remove deprecated skills (memory, msteams, outlook)
m3tm3re
2026-03-03 19:38:48 +01:00
parent 1bc81fb38c
commit 39ac89f388
46 changed files with 1357 additions and 8550 deletions

.envrc Normal file

@@ -0,0 +1 @@
use flake

.gitignore vendored

@@ -8,3 +8,7 @@
.sidecar-start.sh
.sidecar-base
.td-root
# Nix / direnv
.direnv/
result


@@ -1,9 +0,0 @@
{
"active_plan": "/home/m3tam3re/p/AI/AGENTS/.sisyphus/plans/rules-system.md",
"started_at": "2026-02-17T17:50:08.922Z",
"session_ids": [
"ses_393691db2ffe4YZvieMFehJe54"
],
"plan_name": "rules-system",
"agent": "atlas"
}


@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/wsqzf0z3hg8mhpq484f24fm72qp4k6sg-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\"]}\nOPENCODE_EOF\n"}


@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/4li05383sgf4z0l6bxv8hmvgs600y56x-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\"]}\nOPENCODE_EOF\n"}


@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/languages/typescript.md",".opencode-rules/languages/nix.md",".opencode-rules/languages/shell.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/qzsdn3m85qwarpd43x8k28sja40r21p7-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\",\".opencode-rules/languages/typescript.md\",\".opencode-rules/languages/nix.md\",\".opencode-rules/languages/shell.md\"]}\nOPENCODE_EOF\n"}


@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/frameworks/n8n.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/55brjhy9m1vcgrnd100vmwf9bycjpzpi-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\",\".opencode-rules/frameworks/n8n.md\"]}\nOPENCODE_EOF\n"}


@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/custom.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/r8yfirsyyii9x05qd5kfdvzcqv7sx6az-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\",\".opencode-rules/custom.md\"]}\nOPENCODE_EOF\n"}


@@ -1,153 +0,0 @@
# Opencode Rules Nix Module - Manual QA Results
## Test Summary
Date: 2026-02-17
Module: `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`
Test Type: Manual QA (nix eval)
---
## Scenario Results
### Scenario 1: Empty Config (Defaults Only)
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; }'`
**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 6 (default concerns only)
**Expected Instructions**:
1. `.opencode-rules/concerns/coding-style.md`
2. `.opencode-rules/concerns/naming.md`
3. `.opencode-rules/concerns/documentation.md`
4. `.opencode-rules/concerns/testing.md`
5. `.opencode-rules/concerns/git-workflow.md`
6. `.opencode-rules/concerns/project-structure.md`
---
### Scenario 2: Single Language (Python)
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; }'`
**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 7 (6 concerns + 1 language)
**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`
---
### Scenario 3: Multi-Language
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python" "typescript" "nix" "shell"]; }'`
**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 10 (6 concerns + 4 languages)
**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`
- `.opencode-rules/languages/typescript.md`
- `.opencode-rules/languages/nix.md`
- `.opencode-rules/languages/shell.md`
---
### Scenario 4: With Frameworks
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; frameworks = ["n8n"]; }'`
**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 8 (6 concerns + 1 language + 1 framework)
**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`
- `.opencode-rules/frameworks/n8n.md`
---
### Scenario 5: Extra Instructions
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; extraInstructions = [".opencode-rules/custom.md"]; }'`
**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 8 (6 concerns + 1 language + 1 custom)
**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`
- `.opencode-rules/custom.md`
---
## Content Quality Spot Checks
### 1. coding-style.md (Concern Rule)
**Assessment**: ✅ High Quality
- Clear critical rules with "Always/Never" directives
- Good vs. bad code examples
- Comprehensive coverage: formatting, patterns, error handling, type safety, function design, SOLID
- Well-structured sections
### 2. python.md (Language Rule)
**Assessment**: ✅ High Quality
- Modern toolchain recommendations (uv, ruff, pyright, pytest, hypothesis)
- Common idioms with practical examples
- Anti-patterns with explanations
- Project setup structure
- Clear, actionable code snippets
### 3. n8n.md (Framework Rule)
**Assessment**: ✅ High Quality
- Concise workflow design principles
- Clear naming conventions
- Error handling patterns
- Security best practices
- Actionable testing guidelines
---
## Issues Encountered
### Socket File Issue
**Issue**: `nix eval` failed with `error: file '/home/m3tam3re/p/AI/AGENTS/.beads/bd.sock' has an unsupported type`
**Workaround**: Temporarily moved `.beads` directory outside the AGENTS tree during testing
**Root Cause**: Nix attempts to evaluate/store the `agents` path recursively and encounters unsupported socket files (Unix domain sockets)
**Recommendation**: Consider adding `.beads` to `.gitignore` and excluding it from path evaluation if possible, or document this limitation for users
---
## Final Verdict
```
Scenarios [5/5 pass] | VERDICT: OKAY
```
### Summary
- All 5 test scenarios executed successfully
- All JSON outputs are valid and properly structured
- All embedded `opencode.json` configurations have required `$schema` and `instructions` fields
- Instruction counts match expected values for each scenario
- Rule content quality is high across concern, language, and framework rules
- Shell hook properly generates symlink and configuration file
### Notes
- Socket file issue requires workaround (documented)
- Module correctly handles default concerns, multiple languages, frameworks, and custom instructions
- Code examples in rules are clear and actionable


@@ -1,6 +0,0 @@
=== Context Budget ===
Concerns: 751
Python: 224
Total (concerns + python): 975
Limit: 1500
RESULT: PASS (under 1500)


@@ -1 +0,0 @@
[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/languages/typescript.md",".opencode-rules/languages/nix.md",".opencode-rules/languages/shell.md",".opencode-rules/frameworks/n8n.md"]


@@ -1,16 +0,0 @@
=== Task 17 Integration Test ===
File Line Counts:
163 /home/m3tam3re/p/AI/AGENTS/rules/concerns/coding-style.md
149 /home/m3tam3re/p/AI/AGENTS/rules/concerns/documentation.md
118 /home/m3tam3re/p/AI/AGENTS/rules/concerns/git-workflow.md
105 /home/m3tam3re/p/AI/AGENTS/rules/concerns/naming.md
82 /home/m3tam3re/p/AI/AGENTS/rules/concerns/project-structure.md
134 /home/m3tam3re/p/AI/AGENTS/rules/concerns/testing.md
129 /home/m3tam3re/p/AI/AGENTS/rules/languages/nix.md
224 /home/m3tam3re/p/AI/AGENTS/rules/languages/python.md
100 /home/m3tam3re/p/AI/AGENTS/rules/languages/shell.md
150 /home/m3tam3re/p/AI/AGENTS/rules/languages/typescript.md
42 /home/m3tam3re/p/AI/AGENTS/rules/frameworks/n8n.md
1396 total
RESULT: All 11 files present


@@ -1,13 +0,0 @@
=== Path Resolution Check ===
OK: rules/concerns/coding-style.md exists
OK: rules/concerns/naming.md exists
OK: rules/concerns/documentation.md exists
OK: rules/concerns/testing.md exists
OK: rules/concerns/git-workflow.md exists
OK: rules/concerns/project-structure.md exists
OK: rules/languages/python.md exists
OK: rules/languages/typescript.md exists
OK: rules/languages/nix.md exists
OK: rules/languages/shell.md exists
OK: rules/frameworks/n8n.md exists
RESULT: All paths resolve

File diff suppressed because it is too large


@@ -1,28 +0,0 @@
## Task 5: Update Mem0 Memory Skill (2026-02-12)
### Decisions Made
1. **Section Placement**: Added new sections without disrupting existing content structure
- "Memory Categories" after "Identity Scopes" (line ~109)
- "Dual-Layer Sync" after "Workflow Patterns" (line ~138)
- Extended "Health Check" section with Pre-Operation Check
- "Error Handling" at end, before API Reference
2. **Content Structure**:
- Memory Categories: 5-category classification with table format
- Dual-Layer Sync: Complete sync pattern with bash example
- Health Check: Added pre-operation verification
- Error Handling: Comprehensive graceful degradation patterns
3. **Validation Approach**:
- Used `./scripts/test-skill.sh --validate` for skill structure validation
- All sections verified with grep commands
- Commit and push completed successfully
### Success Patterns
- Edit tool works well for adding sections to existing markdown files
- Preserving existing content while adding new sections
- Using grep for verification of section additions
- `./scripts/test-skill.sh --validate` validates YAML frontmatter automatically


@@ -1,47 +0,0 @@
## Core Memory Skill Creation (2026-02-12)
**Task**: Create `skills/memory/SKILL.md` - dual-layer memory orchestration skill
**Pattern Identified**:
- Skill structure follows YAML frontmatter with required fields:
- `name`: skill identifier
- `description`: Use when (X), triggers (Y) pattern
- `compatibility`: "opencode"
- Markdown structure: Overview, Prerequisites, Workflows, Error Handling, Integration, Quick Reference, See Also
**Verification Pattern**:
```bash
test -f <path> && echo "File exists"
grep "name: <skill>" <path>
grep "key-term" <path>
```
**Key Design Decision**:
- Central orchestration skill that references underlying implementation skills (mem0-memory, obsidian)
- 4 core workflows: Store, Recall, Auto-Capture, Auto-Recall
- Error handling with graceful degradation
## Apollo Agent Prompt Update (2026-02-12)
**Task**: Add memory management responsibilities to Apollo agent system prompt
**Edit Pattern**: Multiple targeted edits to single file preserving existing content
- Line number-based edits require precise matching of surrounding context
- Edit order: Core Responsibilities → Quality Standards → Tool Usage → Edge Cases
- Each edit inserts new bullet items without removing existing content
**Key Additions**:
1. Core Responsibilities: "Manage dual-layer memory system (Mem0 + Obsidian CODEX)"
2. Quality Standards: Memory storage, auto-capture, retrieval, categories
3. Tool Usage: Mem0 REST API (localhost:8000), Obsidian MCP integration
4. Edge Cases: Mem0 unavailable, Obsidian unavailable handling
**Verification Pattern**:
```bash
grep -c "memory" ~/p/AI/AGENTS/prompts/apollo.txt # Count occurrences
grep "Mem0" ~/p/AI/AGENTS/prompts/apollo.txt # Check specific term
grep -i "auto-capture" ~/p/AI/AGENTS/prompts/apollo.txt # Case-insensitive
```
**Observation**: grep is case-sensitive by default - use -i for case-insensitive searches


@@ -1,120 +0,0 @@
# Opencode Memory Plugin — Learnings
## Session: ses_3a5a47a05ffeoNYfz2RARYsHX9
Started: 2026-02-14
### Architecture Decisions
- SQLite + FTS5 + vec0 replaces mem0+qdrant entirely
- Markdown at ~/CODEX/80-memory/ is source of truth
- SQLite DB at ~/.local/share/opencode-memory/index.db is derived index
- OpenAI text-embedding-3-small for embeddings (1536 dimensions)
- Hybrid search: 0.7 vector weight + 0.3 BM25 weight
- Chunking: 400 tokens, 80 overlap (tiktoken cl100k_base)
### Key Patterns from Openclaw
- MemoryIndexManager pattern (1590 lines) — file watching, chunking, indexing
- Hybrid scoring with weighted combination
- Embedding cache by content_hash + model
- Two sources: "memory" (markdown files) + "sessions" (transcripts)
- Two tools: memory_search (hybrid query) + memory_get (read lines)
### Technical Stack
- Runtime: bun
- Test framework: bun test (TDD)
- SQLite: better-sqlite3 (synchronous API)
- Embeddings: openai npm package
- Chunking: tiktoken (cl100k_base encoding)
- File watching: chokidar
- Validation: zod (for tool schemas)
### Vec0 Extension Findings (Task 1)
- **vec0 extension**: NOT AVAILABLE - requires vec0.so shared library not present
- **Alternative solution**: sqlite-vec package (v0.1.7-alpha.2) successfully tested
- **Loading mechanism**: `sqliteVec.load(db)` loads vector extension into database
- **Test result**: Works with Node.js (better-sqlite3 native module compatible)
- **Note**: better-sqlite3 does NOT work with Bun runtime (native module incompatibility)
- **Testing command**: `node -e "const Database = require('better-sqlite3'); const sqliteVec = require('sqlite-vec'); const db = new Database(':memory:'); sqliteVec.load(db); console.log('OK')"`
### Bun Runtime Limitations
- better-sqlite3 native module NOT compatible with Bun (ERR_DLOPEN_FAILED)
- Use Node.js for any code requiring better-sqlite3
- Alternative: bun:sqlite (similar API, but not the same library)
## Wave Progress
- Wave 1: IN PROGRESS (Task 1)
- Wave 2-6: PENDING
### Configuration Module Implementation (Task: Config Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied
- **Pattern**: Default config object + resolveConfig() function for merging
- **Path expansion**: `expandPath()` helper function handles `~` → `$HOME` expansion
- **Test coverage**: 10 tests covering defaults, overrides, path expansion, and config merging
- **TypeScript best practices**: Proper type exports from types.ts, type imports in config.ts
- **Defaults match openclaw**: chunking (400/80), search weights (0.7/0.3), minScore (0.35), maxResults (6)
- **Bun test framework**: Fast execution (~20ms for 10 tests), clean output
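The `expandPath()` helper described above might look roughly like this (a minimal sketch under the behavior the notes describe; the plugin's actual implementation may handle more cases):

```typescript
import { homedir } from "node:os";

// Expand a leading "~" or "$HOME" prefix to the user's home directory.
// Sketch only: the real config module may support other variables.
function expandPath(path: string): string {
  if (path.startsWith("~")) return homedir() + path.slice(1);
  if (path.startsWith("$HOME")) return homedir() + path.slice("$HOME".length);
  return path;
}
```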
### Database Schema Implementation (Task 2)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for db module
- **Schema tables**: meta, files, chunks, embedding_cache, chunks_fts (FTS5), chunks_vec (vec0)
- **WAL mode**: Enabled via `db.pragma('journal_mode = WAL')` for better concurrency
- **Foreign keys**: Enabled via `db.pragma('foreign_keys = ON')`
- **sqlite-vec integration**: Loaded via `sqliteVec.load(db)` for vector search capabilities
- **FTS5 virtual table**: External content table referencing chunks for full-text search
- **vec0 virtual table**: 1536-dimension float array for OpenAI text-embedding-3-small embeddings
- **Test execution**: Use Node.js with tsx for TypeScript execution (not Bun runtime)
- **Buffer handling**: Float32Array must be converted to Buffer via `Buffer.from(array.buffer)` for SQLite binding
- **In-memory databases**: WAL mode returns 'memory' for :memory: DBs, 'wal' for file-based DBs
- **Test coverage**: 9 tests covering table creation, data insertion, FTS5, vec0, WAL mode, and clean closure
- **Error handling**: better-sqlite3 throws "The database connection is not open" for operations on closed DBs
### Node.js Test Execution
- **Issue**: better-sqlite3 not compatible with Bun runtime (native module)
- **Solution**: Use Node.js with tsx (TypeScript executor) for running tests
- **Command**: `npx tsx --test src/__tests__/db.test.ts`
- **Node.test API**: Uses `describe`, `it`, `before`, `after` from 'node:test' module
- **Assertions**: Use `assert` from 'node:assert' module
- **Cleanup**: Use `after()` hooks for database cleanup, not `afterEach()` (node:test difference)
### Embedding Provider Implementation (Task: Embeddings Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for embeddings module
- **Mock database**: Created in-memory mock for testing since better-sqlite3 incompatible with Bun
- **Float32 precision**: embeddings stored/retrieved as Float32Array have limited precision (use toBeCloseTo in tests)
- **Cache implementation**: content_hash + model composite key in embedding_cache table
- **Retry logic**: Exponential backoff (1s, 2s, 4s) for 429/500 errors, max 3 retries
- **Test coverage**: 11 tests covering embed(), embedBatch(), cache hits/misses, API failures, retries, buffer conversion
- **Helper functions**: embeddingToBuffer() and bufferToEmbedding() for Float32Array ↔ Buffer conversion
- **Bun spyOn**: Use mockClear() to reset call count without replacing mock implementation
- **Buffer size**: Float32 embedding stored as Buffer with size = dimensions * 4 bytes
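The conversion helpers named above can be sketched as follows (the names come from the notes; the bodies are illustrative and also show why float32 round-trips need tolerance-based assertions):

```typescript
// Convert an embedding to a Buffer for SQLite binding (4 bytes per float32).
function embeddingToBuffer(embedding: number[]): Buffer {
  return Buffer.from(new Float32Array(embedding).buffer);
}

// Convert a stored Buffer back into a number[] embedding.
// byteOffset/byteLength guard against Node's pooled Buffer allocations.
function bufferToEmbedding(buf: Buffer): number[] {
  return Array.from(
    new Float32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4),
  );
}
```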
### FTS5 BM25 Search Implementation (Task: FTS5 Search Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for search module
- **buildFtsQuery()**: Extracts alphanumeric tokens via regex `/[A-Za-z0-9_]+/g`, quotes them, joins with AND
- **FTS5 escaping**: Tokens are quoted to handle special characters (e.g., `"term"`)
- **BM25 score normalization**: `bm25RankToScore(rank)` converts BM25 rank to 0-1 score using `1 / (1 + normalized)`
- **FTS5 external content tables**: The schema uses `content='chunks', content_rowid='rowid'` but requires manual insertion into chunks_fts
- **Test data setup**: Must manually insert into chunks_fts after inserting into chunks (external content doesn't auto-populate)
- **BM25 ranking**: Results are ordered by `rank` column (lower rank = better match for FTS5)
- **Error handling**: searchFTS catches SQL errors and returns empty array (graceful degradation)
- **MaxResults parameter**: Respects LIMIT clause in SQL query
- **SearchResult interface**: Includes id, filePath, startLine, endLine, text, contentHash, source, score (all required)
- **Prefix matching**: FTS5 supports prefix queries automatically via token matching (e.g., "test" matches "testing")
- **No matches**: Returns empty array when query has no valid tokens or no matches found
- **Test coverage**: 7 tests covering basic search, exact keywords, partial words, no matches, ranking, maxResults, and metadata
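The tokenization behavior described above can be sketched like this (score normalization is omitted because its exact formula is plugin-specific):

```typescript
// Build an FTS5 MATCH expression: extract alphanumeric tokens,
// quote each to escape special characters, and join with AND.
function buildFtsQuery(query: string): string {
  const tokens = query.match(/[A-Za-z0-9_]+/g) ?? [];
  return tokens.map((t) => `"${t}"`).join(" AND ");
}
```

A query with no valid tokens yields an empty string, which the search layer treats as "no matches".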
### Hybrid Search Implementation (Task: Hybrid Search Combiner)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for hybrid search
- **Weighted scoring**: Combined score = vectorWeight * vectorScore + textWeight * textScore (default: 0.7/0.3)
- **Result merging**: Uses Map<string, HybridSearchResult> to merge results by chunk ID, preventing duplicates
- **Dual-score tracking**: Each result tracks both vectorScore and textScore separately, allowing for degraded modes
- **Graceful degradation**: Works with FTS5-only (vector search fails) or vector-only (FTS5 fails)
- **minScore filtering**: Results below minScore threshold are filtered out after score calculation
- **Score sorting**: Results sorted by combined score in descending order before applying maxResults limit
- **Vector search fallback**: searchVector catches errors and returns empty array, allowing FTS5-only operation
- **FTS5 query fallback**: searchFTS catches SQL errors and returns empty array, allowing vector-only operation
- **Database cleanup**: beforeEach must delete from chunks_fts, chunks_vec, chunks, and files to avoid state bleed
- **Virtual table corruption**: Deleting from FTS5/vec0 virtual tables can cause corruption - use try/catch to recreate
- **SearchResult type conflict**: SearchResult is imported from types.ts, don't re-export in search.ts
- **Test isolation**: Virtual tables (chunks_fts, chunks_vec) must be cleared and potentially recreated between tests
- **Buffer conversion**: queryEmbedding converted to Buffer via Buffer.from(new Float32Array(array).buffer)
- **Debug logging**: process.env.DEBUG_SEARCH flag enables detailed logging of FTS5 and vector search results
- **Test coverage**: 9 tests covering combination, weighting, minScore filtering, deduplication, sorting, maxResults, degraded modes (FTS5-only, vector-only), and custom weights
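A structural sketch of the combiner described above (the real result type also carries file path, line range, and content hash; values here are illustrative):

```typescript
interface ScoredChunk {
  id: string;
  vectorScore: number;
  textScore: number;
}

// Merge vector and FTS5 results by chunk id, combine with the configured
// weights (defaults 0.7 / 0.3 per the notes), filter by minScore, sort,
// and cap at maxResults. Missing one side degrades gracefully to the other.
function combineHybrid(
  vector: { id: string; score: number }[],
  text: { id: string; score: number }[],
  vectorWeight = 0.7,
  textWeight = 0.3,
  minScore = 0.35,
  maxResults = 6,
): (ScoredChunk & { score: number })[] {
  const merged = new Map<string, ScoredChunk>();
  for (const r of vector) merged.set(r.id, { id: r.id, vectorScore: r.score, textScore: 0 });
  for (const r of text) {
    const existing = merged.get(r.id);
    if (existing) existing.textScore = r.score;
    else merged.set(r.id, { id: r.id, vectorScore: 0, textScore: r.score });
  }
  return [...merged.values()]
    .map((c) => ({ ...c, score: vectorWeight * c.vectorScore + textWeight * c.textScore }))
    .filter((c) => c.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults);
}
```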


@@ -1,60 +0,0 @@
# Rules System - Learnings
## 2026-02-17T17:50 Session Start
### Architecture Pattern
- Nix helper lives in nixpkgs repo (not AGENTS) - follows ports.nix pattern
- AGENTS repo stays pure content (markdown rule files only)
- Pattern: `{lib}: { mkOpencodeRules = ...; }`
### Key Files
- nixpkgs: `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix` (reference pattern)
- nixos-config: `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix` (deployment)
- AGENTS: `rules/` directory (content)
### mkOpencodeRules Signature
```nix
mkOpencodeRules {
agents = inputs.agents; # Non-flake input path
languages = [ "python" "typescript" ];
concerns ? [ "coding-style" "naming" "documentation" "testing" "git-workflow" "project-structure" ];
frameworks ? [ "n8n" ];
extraInstructions ? [];
}
```
### Consumption Pattern
```nix
let
m3taLib = inputs.m3ta-nixpkgs.lib.${system};
rules = m3taLib.opencode-rules.mkOpencodeRules {
agents = inputs.agents;
languages = [ "python" ];
};
in pkgs.mkShell { shellHook = rules.shellHook; }
```
### Wave 1: Directory Structure (2026-02-17T18:54)
- Successfully created rules/ directory with subdirectories: concerns/, languages/, frameworks/
- Added .gitkeep files to each subdirectory (git needs at least one file to track empty directories)
- Pattern reference: followed skills/ directory structure convention
- USAGE.md already existed in rules/ (created by previous wave)
- AGENTS repo stays pure content - no Nix files added (as planned)
- Verification: ls confirms all three .gitkeep files exist in proper locations
### Wave 2: Nix Helper Implementation (2026-02-17T19:02)
- Successfully created `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`
- Followed ports.nix pattern EXACTLY: `{lib}: { mkOpencodeRules = ...; }`
- Function signature: `{ agents, languages ? [], concerns ? [...], frameworks ? [], extraInstructions ? [] }`
- Returns: `{ shellHook, instructions }`
- Instructions list built using map functions for each category (concerns, languages, frameworks, extra)
- ShellHook creates symlink `.opencode-rules` → `${agents}/rules` and generates `opencode.json` with `$schema`
- JSON generation uses `builtins.toJSON opencodeConfig` where opencodeConfig = `{ "$schema" = "..."; inherit instructions; }`
- Comprehensive doc comments added matching ports.nix style (multi-line comments with usage examples)
- All paths relative to project root via `.opencode-rules/` prefix
- Verification passed:
- `nix eval --impure` shows file loads and exposes `mkOpencodeRules`
- Function returns `{ instructions, shellHook }`
- Instructions list builds correctly (concerns + languages + frameworks + extra)
- `nix-instantiate --parse` validates syntax is correct
- ShellHook contains both symlink creation and JSON generation (heredoc pattern)


@@ -1,748 +0,0 @@
# Agent Permissions Refinement
## TL;DR
> **Quick Summary**: Refine OpenCode agent permissions for Chiron (planning) and Chriton-Forge (build) to implement 2025 AI security best practices with principle of least privilege, human-in-the-loop for critical actions, and explicit guardrails against permission bypass.
> **Deliverables**:
> - Updated `agents/agents.json` with refined permissions for Chiron and Chriton-Forge
> - Critical bug fix: Duplicate `external_directory` key in Chiron config
> - Enhanced secret blocking with additional patterns
> - Bash injection prevention rules
> - Git protection against secret commits and repo hijacking
> **Estimated Effort**: Medium
> **Parallel Execution**: NO - sequential changes to single config file
> **Critical Path**: Fix duplicate key → Apply Chiron permissions → Apply Chriton-Forge permissions → Validate
---
## Context
### Original Request
User wants to refine agent permissions for:
- **Chiron**: Planning agent with read-only access, restricted to read-only subagents, no file editing, can create beads issues
- **Chriton-Forge**: Build agent with write access restricted to ~/p/**, git commits allowed but git push asks, package install commands ask
- **General**: Sane defaults that are secure but open enough for autonomous work
### Interview Summary
**Key Discussions**:
- Chiron: Read-only planning, no file editing, bash denied except for `bd *` commands, external_directory ~/p/** only, task permission to restrict subagents to explore/librarian/athena + chiron-forge for handoff
- Chriton-Forge: Write access restricted to ~/p/**, git commits allow / git push ask, package install commands ask, git config deny
- Workspace path: ~/p/** is symlink to ~/projects/personal/** (just replacing path reference)
- Bash security: Block all bash redirect patterns (echo >, cat >, tee, etc.)
**Research Findings**:
- OpenCode supports granular permission rules with wildcards, last-match-wins
- 2025 best practices: Principle of least privilege, tiered permissions (read-only auto, destructive ask, JIT privileges), human-in-the-loop for critical actions
- Security hardening: Block command injection vectors, prevent git secret commits, add comprehensive secret blocking patterns
### Metis Review
**Critical Issues Identified**:
1. **Duplicate `external_directory` key** in Chiron config (lines 8-9 and 27) - second key overrides first, breaking intended behavior
2. **Bash edit bypass**: Even with `edit: deny`, bash can write files via redirection (`echo "x" > file.txt`, `cat >`, `tee`)
3. **Git secret protection**: Agent could commit secrets (read .env, then git commit .env)
4. **Git config hijacking**: Agent could modify .git/config to push to attacker-controlled repo
5. **Command injection**: Malicious content could execute via `$()`, backticks, `eval`, `source`
6. **Secret blocking incomplete**: Missing patterns for `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`
**Guardrails Applied**:
- Fix duplicate external_directory key (use single object with catch-all `"*": "ask"` after specific rules)
- Add bash file write protection patterns (echo >, cat >, printf >, tee, > operators)
- Add git secret protection (`git add *.env*`: deny, `git commit *.env*`: deny)
- Add git config protection (`git config *`: deny for Chriton-Forge)
- Add bash injection prevention (`$(*`, `` `*``, `eval *`, `source *`)
- Expand secret blocking with additional patterns
- Add /run/agenix/* to read deny list
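The duplicate-key bug is subtle because `JSON.parse` accepts duplicate keys silently, keeping only the last occurrence — the earlier `external_directory` rules are dropped without any error (values here are illustrative, not the actual config):

```typescript
// JSON with a duplicated key, mirroring the Chiron config bug.
const raw = '{"external_directory": {"~/p/**": "allow"}, "external_directory": {"*": "ask"}}';

const parsed = JSON.parse(raw); // no error raised
// Only the last duplicate survives; the "~/p/**" rule is silently lost.
console.log(parsed.external_directory); // → keeps only the { "*": "ask" } object
```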
---
## Work Objectives
### Core Objective
Refine OpenCode agent permissions in `agents/agents.json` to implement security hardening based on 2025 AI agent best practices while maintaining autonomous workflow capabilities.
### Concrete Deliverables
- Updated `agents/agents.json` with:
- Chiron: Read-only permissions, subagent restrictions, bash denial (except `bd *`), no file editing
- Chiron-Forge: Write access scoped to ~/p/**, git commit allow / push ask, package install ask, git config deny
- Both: Enhanced secret blocking, bash injection prevention, git secret protection
### Definition of Done
- [x] Permission configuration updated in `agents/agents.json`
- [x] JSON syntax valid (no duplicate keys, valid structure)
- [x] Workspace path validated (~/p/** exists and is correct)
- [x] Acceptance criteria tests pass (via manual verification)
### Must Have
- Chiron cannot edit files directly
- Chiron cannot write files via bash (redirects blocked)
- Chiron restricted to read-only subagents + chiron-forge for handoff
- Chiron-Forge can only write to ~/p/**
- Chiron-Forge cannot modify git config
- Both agents block secret file reads
- Both agents prevent command injection
- Git operations cannot commit secrets
- No duplicate keys in permission configuration
### Must NOT Have (Guardrails)
- **Edit bypass via bash**: No bash redirection patterns that allow file writes when `edit: deny`
- **Git secret commits**: No ability to git add/commit .env or credential files
- **Repo hijacking**: No git config modification allowed for Chiron-Forge
- **Command injection**: No `$()`, backticks, `eval`, `source` execution via bash
- **Write scope escape**: Chiron-Forge cannot write outside ~/p/** without asking
- **Secret exfiltration**: No access to .env, .ssh, .gnupg, credentials, secrets, .pem, .key, /run/agenix
- **Unrestricted bash for Chiron**: Only `bd *` commands allowed
---
## Verification Strategy (MANDATORY)
> This is configuration work, not code development. Manual verification is required after deployment.
### Test Decision
- **Infrastructure exists**: YES (home-manager deployment)
- **User wants tests**: NO (Manual-only verification)
- **Framework**: None
### Manual Verification Procedures
Each TODO includes EXECUTABLE verification procedures that users can run to validate changes.
**Verification Commands to Run After Deployment:**
1. **JSON Syntax Validation**:
```bash
# Validate JSON structure and no duplicate keys
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0 (valid JSON)
# NOTE: jq cannot flag duplicate keys (it silently keeps the last one);
# duplicate detection requires inspecting raw key/value pairs, e.g. via
# Python's json.load(..., object_pairs_hook=...)
# Expected: Single external_directory key, no other duplicates
```
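Since jq silently keeps the last duplicate key, it can validate syntax but never flag the duplicate itself. A sketch of an automated check using Python's `object_pairs_hook` (standalone demo with inline JSON; in practice, point it at `agents/agents.json`):

```shell
# jq keeps only the last duplicate, so `jq keys` can never show a duplicate.
# Python's object_pairs_hook receives every pair before they collapse.
json='{"permission": {"external_directory": {"~/p/**": "allow"},
                      "external_directory": {"*": "ask"}}}'

dup_report=$(echo "$json" | python3 -c '
import json, sys

def check(pairs):
    keys = [k for k, _ in pairs]
    dupes = sorted({k for k in keys if keys.count(k) > 1})
    if dupes:
        print("duplicate keys:", dupes)
    return dict(pairs)

json.load(sys.stdin, object_pairs_hook=check)
')
echo "$dup_report"   # → duplicate keys: ['external_directory']
```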
2. **Workspace Path Validation**:
```bash
ls -la ~/p/ 2>&1
# Expected: Directory exists, shows contents (likely symlink to ~/projects/personal/)
```
3. **After Deployment - Chiron Read-Only Test** (manual):
- Have Chiron attempt to edit a test file
- Expected: Permission denied with clear error message
- Have Chiron attempt to write via bash (echo "test" > /tmp/test.txt)
- Expected: Permission denied
- Have Chiron run `bd ready` command
- Expected: Command succeeds, returns JSON output with issue list
- Have Chiron attempt to invoke build-capable subagent (sisyphus-junior)
- Expected: Permission denied
4. **After Deployment - Chiron Workspace Access** (manual):
- Have Chiron read file within ~/p/**
- Expected: Success, returns file contents
- Have Chiron read file outside ~/p/**
- Expected: Permission denied or ask user
- Have Chiron delegate to explore/librarian/athena
- Expected: Success, subagent executes
5. **After Deployment - Chiron-Forge Write Access** (manual):
- Have Chiron-Forge write test file in ~/p/** directory
- Expected: Success, file created
- Have Chiron-Forge attempt to write file to /tmp
- Expected: Ask user for approval
- Have Chiron-Forge run `git add` and `git commit -m "test"`
- Expected: Success, commit created without asking
- Have Chiron-Forge attempt `git push`
- Expected: Ask user for approval
- Have Chiron-Forge attempt `git config`
- Expected: Permission denied
- Have Chiron-Forge attempt `npm install lodash`
- Expected: Ask user for approval
6. **After Deployment - Secret Blocking Tests** (manual):
- Attempt to read .env file with both agents
- Expected: Permission denied
- Attempt to read /run/agenix/ with Chiron
- Expected: Permission denied
- Attempt to read .env.example (should be allowed)
- Expected: Success
7. **After Deployment - Bash Injection Prevention** (manual):
- Have agent attempt bash -c "$(cat /malicious)"
- Expected: Permission denied
- Have agent attempt bash -c "`cat /malicious`"
- Expected: Permission denied
- Have agent attempt eval command
- Expected: Permission denied
8. **After Deployment - Git Secret Protection** (manual):
- Have agent attempt `git add .env`
- Expected: Permission denied
- Have agent attempt `git commit .env`
- Expected: Permission denied
9. **Deployment Verification**:
```bash
# After home-manager switch, verify config is embedded correctly
cat ~/.config/opencode/config.json | jq '.agent.chiron.permission.external_directory'
# Expected: Shows ~/p/** rule, no duplicate keys
# Verify agents load without errors
# Expected: No startup errors when launching OpenCode
```
---
## Execution Strategy
### Parallel Execution Waves
> Single file sequential changes - no parallelization possible.
```
Single-Threaded Execution:
Task 1: Fix duplicate external_directory key
Task 2: Apply Chiron permission updates
Task 3: Apply Chiron-Forge permission updates
Task 4: Validate configuration
```
### Dependency Matrix
| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 2, 3 | None (must start) |
| 2 | 1 | 4 | 3 |
| 3 | 1 | 4 | 2 |
| 4 | 2, 3 | None | None (validation) |
### Agent Dispatch Summary
| Task | Recommended Agent |
|------|-----------------|
| 1 | delegate_task(category="quick", load_skills=["git-master"]) |
| 2 | delegate_task(category="quick", load_skills=["git-master"]) |
| 3 | delegate_task(category="quick", load_skills=["git-master"]) |
| 4 | User (manual verification) |
---
## TODOs
> Implementation tasks for agent configuration changes. Each task MUST include acceptance criteria with executable verification.
- [x] 1. Fix Duplicate external_directory Key in Chiron Config
**What to do**:
- Remove duplicate `external_directory` key from Chiron permission object
- Consolidate into single object with specific rule + catch-all `"*": "ask"`
- Replace `~/projects/personal/**` with `~/p/**` (symlink to same directory)
**Must NOT do**:
- Leave duplicate keys (second key overrides first, breaks config)
- Skip workspace path validation (verify ~/p/** exists)
**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple JSON edit, single file change, no complex logic
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (no investigation required)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: Tasks 2, 3 (depends on clean config)
- **Blocked By**: None (can start immediately)
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `agents/agents.json:1-135` - Current agent configuration structure (JSON format, permission object structure)
- `agents/agents.json:7-29` - Chiron permission object (current state with duplicate key)
**API/Type References** (contracts to implement against):
- OpenCode permission schema: `{"permission": {"bash": {...}, "edit": "...", "external_directory": {...}, "task": {...}}}`
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user decisions and requirements
- Metis analysis: Critical issue #1 - Duplicate external_directory key
**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission system documentation (allow/ask/deny, wildcards, last-match-wins)
- OpenCode docs: https://opencode.ai/docs/agents/ - Agent configuration format
**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json` - Target file to modify, shows current structure and duplicate key bug
- Interview draft - Contains all user decisions (~/p/** path, subagent restrictions, etc.)
- OpenCode permissions docs - Explains permission system mechanics (last-match-wins critical for rule ordering)
- Metis analysis - Identifies the duplicate key bug that MUST be fixed
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0 (valid JSON)
# Verify no duplicate keys (jq collapses duplicates, so inspect raw pairs with Python)
python3 -c 'import json, sys; json.load(open("/home/m3tam3re/p/AI/AGENTS/agents/agents.json"), object_pairs_hook=lambda p: dict(p) if len(p) == len(dict(p)) else sys.exit("duplicate keys"))'
# Assert: Exit code 0 (no duplicate keys)
# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")
```
**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] Duplicate-key check output (exit code 0)
- [x] Workspace path ls output (shows directory exists)
**Commit**: NO (group with Task 2 and 3)
- [x] 2. Apply Chiron Permission Updates
**What to do**:
- Set `edit` to `"deny"` (planning agent should not write files)
- Set `bash` permissions to deny all except `bd *`:
```json
"bash": {
"*": "deny",
"bd *": "allow"
}
```
- Set `external_directory` to `~/p/**` with catch-all ask:
```json
"external_directory": {
"~/p/**": "allow",
"*": "ask"
}
```
- Add `task` permission to restrict subagents:
```json
"task": {
"*": "deny",
"explore": "allow",
"librarian": "allow",
"athena": "allow",
"chiron-forge": "allow"
}
```
- Add `/run/agenix/*` to read deny list
- Add expanded secret blocking patterns: `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`
**Must NOT do**:
- Allow bash file write operators (echo >, cat >, tee, etc.) - will add in Task 3 for both agents
- Allow chiron to invoke build-capable subagents beyond chiron-forge
- Skip webfetch permission (should be "allow" for research capability)
**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in draft)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 3)
- **Blocks**: Task 4
- **Blocked By**: Task 1
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `agents/agents.json:11-24` - Current Chiron read permissions with secret blocking patterns
- `agents/agents.json:114-132` - Athena permission object (read-only subagent reference pattern)
**API/Type References** (contracts to implement against):
- OpenCode task permission schema: `{"task": {"agent-name": "allow"}}`
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron permission decisions
- Metis analysis: Guardrails #7, #8 - Secret blocking patterns, task permission implementation
**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/agents/#task-permissions - Task permission documentation
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission level definitions and pattern matching
**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:11-24` - Shows current secret blocking patterns to extend
- `agents/agents.json:114-132` - Shows read-only subagent pattern for reference (athena: deny bash, deny edit)
- Interview draft - Contains exact user requirements for Chiron permissions
- OpenCode task docs - Explains how to restrict subagent invocation via task permission
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.chiron.permission.edit' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron.permission.bash."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron.permission.bash."bd *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
jq '.chiron.permission.task."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '.chiron.permission.task | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Contains ["*", "athena", "chiron-forge", "explore", "librarian"]
jq '.chiron.permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
jq '.chiron.permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
jq '.chiron.permission.read."/run/agenix/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```
**Evidence to Capture**:
- [x] Edit permission value (should be "deny")
- [x] Bash wildcard permission (should be "deny")
- [x] Bash bd permission (should be "allow")
- [x] Task wildcard permission (should be "deny")
- [x] Task allowlist keys (should show 5 entries)
- [x] External directory ~/p/** permission (should be "allow")
- [x] External directory wildcard permission (should be "ask")
- [x] Read /run/agenix/* permission (should be "deny")
**Commit**: NO (group with Task 3)
- [x] 3. Apply Chiron-Forge Permission Updates
**What to do**:
- Split `git *: "ask"` into granular rules:
- Allow: `git add *`, `git commit *`, read-only commands (status, log, diff, branch, show, stash, remote)
- Ask: `git push *`
- Deny: `git config *`
- Change package managers from `"ask"` to granular rules:
- Ask for installs: `npm install *`, `npm i *`, `npx *`, `pip install *`, `pip3 install *`, `uv *`, `bun install *`, `bun i *`, `bunx *`, `yarn install *`, `yarn add *`, `pnpm install *`, `pnpm add *`, `cargo install *`, `go install *`, `make install`
- Allow other commands implicitly (let them use catch-all rules or existing allow patterns)
- Set `external_directory` to allow `~/p/**` with catch-all ask:
```json
"external_directory": {
"~/p/**": "allow",
"*": "ask"
}
```
- Add bash file write protection patterns (apply to both agents):
```json
"bash": {
"echo * > *": "deny",
"cat * > *": "deny",
"printf * > *": "deny",
"tee": "deny",
"*>*": "deny",
">*": "deny"
}
```
- Add bash command injection prevention (apply to both agents):
```json
"bash": {
"$(*": "deny",
"`*": "deny",
"eval *": "deny",
"source *": "deny"
}
```
- Add git secret protection patterns (apply to both agents):
```json
"bash": {
"git add *.env*": "deny",
"git commit *.env*": "deny",
"git add *credentials*": "deny",
"git add *secrets*": "deny"
}
```
- Add expanded secret blocking patterns to read permission:
- `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`
**Must NOT do**:
- Remove existing bash deny rules for dangerous commands (dd, mkfs, fdisk, parted, eval, sudo, su, systemctl, etc.)
- Allow git config modifications
- Allow bash to write files via any method (must block all redirect patterns)
- Skip command injection prevention ($(), backticks, eval, source)
**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in draft)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 2)
- **Blocks**: Task 4
- **Blocked By**: Task 1
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `agents/agents.json:37-103` - Current Chiron-Forge bash permissions (many explicit allow/ask/deny rules)
- `agents/agents.json:37-50` - Current Chiron-Forge read permissions with secret blocking
**API/Type References** (contracts to implement against):
- OpenCode permission schema: Same as Task 2
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron-Forge permission decisions
- Metis analysis: Guardrails #1-#6 - Bash edit bypass, git secret protection, command injection, git config protection
**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission pattern matching (wildcards, last-match-wins)
**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:37-103` - Shows current bash permission structure (many explicit rules) to extend with new patterns
- `agents/agents.json:37-50` - Shows current secret blocking to extend with additional patterns
- Interview draft - Contains exact user requirements for Chiron-Forge permissions
- Metis analysis - Provides bash injection prevention patterns and git protection rules
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
# Verify git commit is allowed (hyphenated keys must be quoted in jq paths)
jq '."chiron-forge".permission.bash."git commit *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
# Verify git push asks
jq '."chiron-forge".permission.bash."git push *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
# Verify git config is denied
jq '."chiron-forge".permission.bash."git config *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify npm install asks
jq '."chiron-forge".permission.bash."npm install *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
# Verify bash file write redirects are blocked
jq '."chiron-forge".permission.bash."echo * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '."chiron-forge".permission.bash."cat * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '."chiron-forge".permission.bash."tee"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify command injection is blocked
jq '."chiron-forge".permission.bash."$(*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '."chiron-forge".permission.bash."`*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify git secret protection
jq '."chiron-forge".permission.bash."git add *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '."chiron-forge".permission.bash."git commit *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
# Verify external_directory scope
jq '."chiron-forge".permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"
jq '."chiron-forge".permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"
# Verify expanded secret blocking
jq '."chiron-forge".permission.read.".local/share/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '."chiron-forge".permission.read.".cache/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
jq '."chiron-forge".permission.read."*.db"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```
**Evidence to Capture**:
- [x] Git commit permission (should be "allow")
- [x] Git push permission (should be "ask")
- [x] Git config permission (should be "deny")
- [x] npm install permission (should be "ask")
- [x] bash redirect echo > permission (should be "deny")
- [x] bash redirect cat > permission (should be "deny")
- [x] bash tee permission (should be "deny")
- [x] bash $() injection permission (should be "deny")
- [x] bash backtick injection permission (should be "deny")
- [x] git add *.env* permission (should be "deny")
- [x] git commit *.env* permission (should be "deny")
- [x] external_directory ~/p/** permission (should be "allow")
- [x] external_directory wildcard permission (should be "ask")
- [x] read .local/share/* permission (should be "deny")
- [x] read .cache/* permission (should be "deny")
- [x] read *.db permission (should be "deny")
**Commit**: YES (groups with Tasks 1, 2, 3)
- Message: `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening`
- Files: `agents/agents.json`
- Pre-commit: `jq '.' agents/agents.json > /dev/null 2>&1` (validate JSON)
- [x] 4. Validate Configuration (Manual Verification)
**What to do**:
- Run JSON syntax validation: `jq '.' agents/agents.json`
- Verify no duplicate keys in configuration
- Verify workspace path exists: `ls -la ~/p/`
- Document manual verification procedure for post-deployment testing
**Must NOT do**:
- Skip workspace path validation
- Skip duplicate key verification
- Proceed to deployment without validation
**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple validation commands, documentation task
> **Skills**: git-master
- git-master: Git workflow for committing validation script or notes if needed
> **Skills Evaluated but Omitted**:
- research: Not needed (validation is straightforward)
- librarian: Not needed (no external docs needed)
**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: None (final validation task)
- **Blocked By**: Tasks 2, 3
**References** (CRITICAL - Be Exhaustive):
**Pattern References** (existing code to follow):
- `AGENTS.md` - Repository documentation structure
**API/Type References** (contracts to implement against):
- N/A (validation task)
**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user requirements
- Metis analysis: Guardrails #1-#6 - Validation requirements
**External References** (libraries and frameworks):
- N/A (validation task)
**WHY Each Reference Matters** (explain the relevance):
- Interview draft - Contains all requirements to validate against
- Metis analysis - Identifies specific validation steps (duplicate keys, workspace path, etc.)
**Acceptance Criteria**:
> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**
**Automated Verification (config validation)**:
```bash
# Agent runs:
# JSON syntax validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
# Verify no duplicate keys (jq collapses duplicates, so inspect raw pairs with Python)
python3 -c 'import json, sys; json.load(open("/home/m3tam3re/p/AI/AGENTS/agents/agents.json"), object_pairs_hook=lambda p: dict(p) if len(p) == len(dict(p)) else sys.exit("duplicate keys"))'
# Assert: Exit code 0 (no duplicate keys anywhere in the file)
# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")
# Verify all permission keys are valid
jq '.chiron.permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
jq '."chiron-forge".permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
```
**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] Duplicate-key check output (exit code 0, covers both agents)
- [x] Workspace path ls output (shows directory exists)
- [x] Chiron permission object validation (exit code 0)
- [x] Chiron-Forge permission object validation (exit code 0)
**Commit**: NO (validation only, no changes)
---
## Commit Strategy
| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2, 3 | `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening` | agents/agents.json | `jq '.' agents/agents.json > /dev/null` |
| 4 | N/A (validation only) | N/A | N/A |
---
## Success Criteria
### Verification Commands
```bash
# Pre-deployment validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0
# Duplicate key check (jq collapses duplicates; inspect raw pairs with Python)
python3 -c 'import json, sys; json.load(open("/home/m3tam3re/p/AI/AGENTS/agents/agents.json"), object_pairs_hook=lambda p: dict(p) if len(p) == len(dict(p)) else sys.exit("duplicate keys"))'
# Expected: Exit code 0
# Workspace path validation
ls -la ~/p/ 2>&1
# Expected: Directory listing
# Post-deployment (manual)
# Have Chiron attempt file edit → Expected: Permission denied
# Have Chiron run bd ready → Expected: Success
# Have Chriton-Forge git commit → Expected: Success
# Have Chriton-Forge git push → Expected: Ask user
# Have agent read .env → Expected: Permission denied
```
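If desired, the per-key jq assertions can be collapsed into a small helper (hypothetical convenience script, not one of the deliverables). Note that hyphenated keys such as `chiron-forge` must be quoted in jq paths (`."chiron-forge"`), since bare `.chiron-forge` is not valid jq:

```shell
# assert_perm FILE JQ_PATH EXPECTED -- compare one permission value.
assert_perm() {
  got=$(jq -r "$2" "$1")
  if [ "$got" = "$3" ]; then
    echo "OK:   $2 = $3"
  else
    echo "FAIL: $2 = $got (want $3)" >&2
    return 1
  fi
}

# Example assertions (run from the repo root, where agents/agents.json exists):
if [ -f agents/agents.json ]; then
  assert_perm agents/agents.json '.chiron.permission.edit' deny
  assert_perm agents/agents.json '."chiron-forge".permission.bash."git push *"' ask
  assert_perm agents/agents.json '."chiron-forge".permission.bash."git config *"' deny
fi
```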
### Final Checklist
- [x] Duplicate `external_directory` key fixed
- [x] Chiron edit set to "deny"
- [x] Chiron bash denied except `bd *`
- [x] Chiron task permission restricts subagents (explore, librarian, athena, chiron-forge)
- [x] Chiron external_directory allows ~/p/** only
- [x] Chiron-Forge git commit allowed, git push asks
- [x] Chiron-Forge git config denied
- [x] Chiron-Forge package install commands ask
- [x] Chiron-Forge external_directory allows ~/p/**, asks others
- [x] Bash file write operators blocked (echo >, cat >, tee, etc.)
- [x] Bash command injection blocked ($(), backticks, eval, source)
- [x] Git secret protection added (git add/commit *.env* deny)
- [x] Expanded secret blocking patterns added (.local/share/*, .cache/*, *.db, *.keychain, *.p12)
- [x] /run/agenix/* blocked in read permissions
- [x] JSON syntax valid (jq validates)
- [x] No duplicate keys in configuration
- [x] Workspace path ~/p/** exists

---
# Chiron Personal Agent Framework
## TL;DR
> **Quick Summary**: Create an Oh-My-Opencode-style agent framework for personal productivity with Chiron as the orchestrator, 4 specialized subagents (Hermes, Athena, Apollo, Calliope), and 5 tool integration skills (Basecamp, Outline, MS Teams, Outlook, Obsidian).
>
> **Deliverables**:
> - 6 agent definitions in `agents.json`
> - 6 system prompt files in `prompts/`
> - 5 tool integration skills in `skills/`
> - Validation script extension in `scripts/`
>
> **Estimated Effort**: Medium
> **Parallel Execution**: YES - 3 waves
> **Critical Path**: Task 1 (agents.json) → Task 3-7 (prompts) → Task 9-13 (skills) → Task 14 (validation)
>
> **Status**: ✅ COMPLETE - All 14 main tasks + 6 verification items = 20/20 deliverables
---
## Context
### Original Request
Create an agent framework similar to Oh-My-Opencode but focused on personal productivity:
- Manage work tasks, appointments, projects via Basecamp, Outline, MS Teams, Outlook
- Manage private tasks and knowledge via Obsidian
- Greek mythology naming convention (avoiding Oh My OpenCode names)
- Main agent named "Chiron"
### Interview Summary
**Key Discussions**:
- **Chiron's Role**: Main orchestrator that delegates to specialized subagents
- **Agent Count**: Minimal (3-4 agents initially) + 2 primary agents
- **Domain Separation**: Separate work vs private agents with clear boundaries
- **Tool Priority**: All 4 work tools + Obsidian equally important
- **Basecamp MCP**: User confirmed working MCP at georgeantonopoulos/Basecamp-MCP-Server
**Research Findings**:
- Oh My OpenCode names to avoid: Sisyphus, Atlas, Prometheus, Hephaestus, Metis, Momus, Oracle, Librarian, Explore, Multimodal-Looker, Sisyphus-Junior
- MCP servers available for all work tools + Obsidian
- Protonmail requires custom IMAP/SMTP (deferred)
- Current repo has established skill patterns with SKILL.md + optional subdirectories
### Metis Review
**Identified Gaps** (addressed in plan):
- Delegation model clarified: Chiron uses Question tool for ambiguous requests
- Behavioral difference between Chiron and Chiron-Forge defined
- Executable acceptance criteria added for all tasks
- Edge cases documented in guardrails section
- MCP authentication assumed pre-configured by NixOS (explicit scope boundary)
---
## Work Objectives
### Core Objective
Create a personal productivity agent framework following Oh-My-Opencode patterns, enabling AI-assisted management of work and private life through specialized agents that integrate with existing tools.
### Concrete Deliverables
1. `agents/agents.json` - 6 agent definitions (2 primary, 4 subagent)
2. `prompts/chiron.txt` - Chiron (plan mode) system prompt
3. `prompts/chiron-forge.txt` - Chiron-Forge (build mode) system prompt
4. `prompts/hermes.txt` - Work communication agent prompt
5. `prompts/athena.txt` - Work knowledge agent prompt
6. `prompts/apollo.txt` - Private knowledge agent prompt
7. `prompts/calliope.txt` - Writing agent prompt
8. `skills/basecamp/SKILL.md` - Basecamp integration skill
9. `skills/outline/SKILL.md` - Outline wiki integration skill
10. `skills/msteams/SKILL.md` - MS Teams integration skill
11. `skills/outlook/SKILL.md` - Outlook email integration skill
12. `skills/obsidian/SKILL.md` - Obsidian integration skill
13. `scripts/validate-agents.sh` - Agent validation script
### Definition of Done
- [x] `python3 -c "import json; json.load(open('agents/agents.json'))"` → Exit 0
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skill directories have valid SKILL.md with YAML frontmatter
- [x] `./scripts/test-skill.sh --validate` passes for new skills
- [x] `./scripts/validate-agents.sh` passes
### Must Have
- All agents use Question tool for multi-choice decisions
- External prompt files (not inline in JSON)
- Follow existing skill structure patterns
- Greek naming convention for agents
- Clear separation between plan mode (Chiron) and build mode (Chiron-Forge)
- Skills provide tool-specific knowledge that agents load on demand
### Must NOT Have (Guardrails)
- **NO MCP server configuration** - Managed by NixOS, outside this repo
- **NO authentication handling** - Assume pre-configured MCP tools
- **NO cross-agent state sharing** - Each agent operates independently
- **NO new opencode commands** - Use existing command patterns only
- **NO generic "I'm an AI assistant" prompts** - Domain-specific responsibilities only
- **NO Protonmail integration** - Deferred to future phase
- **NO duplicate tool knowledge across skills** - Each skill focuses on ONE tool
- **NO scripts outside scripts/ directory**
- **NO model configuration changes** - Keep current `zai-coding-plan/glm-4.7`
---
## Verification Strategy (MANDATORY)
> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
>
> ALL tasks in this plan MUST be verifiable WITHOUT any human action.
> This is NOT conditional - it applies to EVERY task, regardless of test strategy.
>
> ### Test Decision
> - **Infrastructure exists**: YES (test-skill.sh)
> - **Automated tests**: Tests-after (validation scripts)
> - **Framework**: bash + python for validation
>
> ### Agent-Executed QA Scenarios (MANDATORY - ALL tasks)
>
> **Verification Tool by Deliverable Type**:
>
> | Type | Tool | How Agent Verifies |
> |------|------|-------------------|
> | **agents.json** | Bash (python/jq) | Parse JSON, validate structure, check required fields |
> | **Prompt files** | Bash (file checks) | File exists, non-empty, contains expected sections |
> | **SKILL.md files** | Bash (test-skill.sh) | YAML frontmatter valid, name matches directory |
> | **Validation scripts** | Bash | Script is executable, runs without error, produces expected output |
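A sketch of the SKILL.md check from the table above (the frontmatter rules are assumed from the existing `test-skill.sh` conventions; fixture created inline for illustration):

```shell
# check_skill FILE -- verify YAML frontmatter opens with --- and declares a name.
check_skill() {
  head -n 1 "$1" | grep -qx -- '---' || { echo "BAD FRONTMATTER: $1"; return 1; }
  grep -q '^name:' "$1" || { echo "MISSING name: $1"; return 1; }
  echo "OK: $1"
}

tmp=$(mktemp -d)
printf -- '---\nname: basecamp\ndescription: Basecamp integration\n---\n' > "$tmp/SKILL.md"
check_skill "$tmp/SKILL.md"   # prints OK: <tmpdir>/SKILL.md
rm -rf "$tmp"
```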
---
## Execution Strategy
### Parallel Execution Waves
```
Wave 1 (Start Immediately):
├── Task 1: Create agents.json configuration [no dependencies]
└── Task 2: Create prompts/ directory structure [no dependencies]
Wave 2 (After Wave 1):
├── Task 3: Chiron prompt [depends: 2]
├── Task 4: Chiron-Forge prompt [depends: 2]
├── Task 5: Hermes prompt [depends: 2]
├── Task 6: Athena prompt [depends: 2]
├── Task 7: Apollo prompt [depends: 2]
└── Task 8: Calliope prompt [depends: 2]
Wave 3 (Can parallel with Wave 2):
├── Task 9: Basecamp skill [no dependencies]
├── Task 10: Outline skill [no dependencies]
├── Task 11: MS Teams skill [no dependencies]
├── Task 12: Outlook skill [no dependencies]
└── Task 13: Obsidian skill [no dependencies]
Wave 4 (After Wave 2 + 3):
└── Task 14: Validation script [depends: 1, 3-8]
Critical Path: Task 2 → Tasks 3-8 → Task 14
Parallel Speedup: ~50% faster than sequential
```
### Dependency Matrix
| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 14 | 2, 9-13 |
| 2 | None | 3-8 | 1, 9-13 |
| 3-8 | 2 | 14 | Each other, 9-13 |
| 9-13 | None | None | Each other, 1-2 |
| 14 | 1, 3-8 | None | (final) |
### Agent Dispatch Summary
| Wave | Tasks | Recommended Category |
|------|-------|---------------------|
| 1 | 1, 2 | quick |
| 2 | 3-8 | quick (parallel) |
| 3 | 9-13 | quick (parallel) |
| 4 | 14 | quick |
---
## TODOs
### Wave 1: Foundation
- [x] 1. Create agents.json with 6 agent definitions
**What to do**:
- Update existing `agents/agents.json` to add all 6 agents
- Each agent needs: description, mode, model, prompt reference
- Primary agents: chiron, chiron-forge
- Subagents: hermes, athena, apollo, calliope
- All agents should have `question: "allow"` permission
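As a hedged sketch, two entries following the existing chiron pattern might look like the following; the `{file:...}` prompt-reference syntax and the `permission` key name are assumptions, so match whatever form `agents/agents.json:1-7` already uses:

```json
{
  "chiron": {
    "description": "Main orchestrator: planning, analysis, and delegation to subagents",
    "mode": "primary",
    "prompt": "{file:prompts/chiron.txt}",
    "permission": { "question": "allow" }
  },
  "hermes": {
    "description": "Work communication specialist: Basecamp, Outlook, MS Teams",
    "mode": "subagent",
    "prompt": "{file:prompts/hermes.txt}",
    "permission": { "question": "allow" }
  }
}
```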
**Must NOT do**:
- Do not add MCP server configuration
- Do not change model from current pattern
- Do not add inline prompts (use file references)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
- `agent-development`: Provides agent configuration patterns and best practices
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 2)
- **Blocks**: Task 14
- **Blocked By**: None
**References**:
- `agents/agents.json:1-7` - Current chiron agent configuration pattern
- `skills/agent-development/SKILL.md:40-76` - JSON agent structure reference
- `skills/agent-development/SKILL.md:226-277` - Permissions system reference
- `skills/agent-development/references/opencode-agents-json-example.md` - Complete examples
**Acceptance Criteria**:
```
Scenario: agents.json is valid JSON with all 6 agents
Tool: Bash (python)
Steps:
1. python3 -c "import json; data = json.load(open('agents/agents.json')); print(len(data))"
2. Assert: Output is "6"
3. python3 -c "import json; data = json.load(open('agents/agents.json')); print(sorted(data.keys()))"
4. Assert: Output contains ['apollo', 'athena', 'calliope', 'chiron', 'chiron-forge', 'hermes']
Expected Result: JSON parses, all 6 agents present
Evidence: Command output captured
Scenario: Each agent has required fields
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name, agent in data.items():
assert 'description' in agent, f'{name}: missing description'
assert 'mode' in agent, f'{name}: missing mode'
assert 'prompt' in agent, f'{name}: missing prompt'
print('All agents valid')
"
2. Assert: Output is "All agents valid"
Expected Result: All required fields present
Evidence: Validation output captured
Scenario: Primary agents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
assert data['chiron']['mode'] == 'primary'
assert data['chiron-forge']['mode'] == 'primary'
print('Primary modes correct')
"
Expected Result: Both primary agents have mode=primary
Evidence: Command output
Scenario: Subagents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name in ['hermes', 'athena', 'apollo', 'calliope']:
assert data[name]['mode'] == 'subagent', f'{name}: wrong mode'
print('Subagent modes correct')
"
Expected Result: All subagents have mode=subagent
Evidence: Command output
```
**Commit**: YES
- Message: `feat(agents): add chiron agent framework with 6 agents`
- Files: `agents/agents.json`
- Pre-commit: `python3 -c "import json; json.load(open('agents/agents.json'))"`
---
- [x] 2. Create prompts directory structure
**What to do**:
- Create the `prompts/` directory if it does not already exist
- Directory will hold all agent system prompt files
**Must NOT do**:
- Do not create prompt files yet (done in Wave 2)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 1)
- **Blocks**: Tasks 3-8
- **Blocked By**: None
**References**:
- `skills/agent-development/SKILL.md:148-159` - Prompt file conventions
**Acceptance Criteria**:
```
Scenario: prompts directory exists
Tool: Bash
Steps:
1. test -d prompts && echo "exists" || echo "missing"
2. Assert: Output is "exists"
Expected Result: Directory created
Evidence: Command output
```
**Commit**: NO (groups with Task 1)
---
### Wave 2: Agent Prompts
- [x] 3. Create Chiron (Plan Mode) system prompt
**What to do**:
- Create `prompts/chiron.txt`
- Define Chiron as the main orchestrator in plan/analysis mode
- Include delegation logic to subagents (Hermes, Athena, Apollo, Calliope)
- Include Question tool usage for ambiguous requests
- Focus on: planning, analysis, guidance, delegation
- Permissions: read-only, no file modifications
**Must NOT do**:
- Do not allow write/edit operations
- Do not include execution responsibilities
- Do not overlap with Chiron-Forge's build capabilities
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
- `agent-development`: System prompt design patterns
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 4-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-386` - System prompt design patterns
- `skills/agent-development/SKILL.md:397-415` - Prompt best practices
- `skills/agent-development/references/system-prompt-design.md` - Detailed prompt patterns
**Acceptance Criteria**:
```
Scenario: Chiron prompt file exists and is substantial
Tool: Bash
Steps:
1. test -f prompts/chiron.txt && echo "exists" || echo "missing"
2. Assert: Output is "exists"
3. wc -c < prompts/chiron.txt
4. Assert: Output is > 500 (substantial content)
Expected Result: File exists with meaningful content
Evidence: File size captured
Scenario: Chiron prompt contains orchestrator role
Tool: Bash (grep)
Steps:
1. grep -qi "orchestrat" prompts/chiron.txt && echo "found" || echo "missing"
2. Assert: Output is "found"
3. grep -qi "delegat" prompts/chiron.txt && echo "found" || echo "missing"
4. Assert: Output is "found"
Expected Result: Prompt describes orchestration and delegation
Evidence: grep output
Scenario: Chiron prompt references subagents
Tool: Bash (grep)
Steps:
1. grep -qi "hermes" prompts/chiron.txt && echo "found" || echo "missing"
2. grep -qi "athena" prompts/chiron.txt && echo "found" || echo "missing"
3. grep -qi "apollo" prompts/chiron.txt && echo "found" || echo "missing"
4. grep -qi "calliope" prompts/chiron.txt && echo "found" || echo "missing"
Expected Result: All 4 subagents mentioned
Evidence: grep outputs
```
**Commit**: YES (group with Tasks 4-8)
- Message: `feat(prompts): add chiron and subagent system prompts`
- Files: `prompts/*.txt`
- Pre-commit: `for f in prompts/*.txt; do test -s "$f" || exit 1; done`
---
- [x] 4. Create Chiron-Forge (Build Mode) system prompt
**What to do**:
- Create `prompts/chiron-forge.txt`
- Define as Chiron's execution/build counterpart
- Full write access for task execution
- Can modify files, run commands, complete tasks
- Still delegates to subagents for specialized domains
- Uses Question tool for destructive operations confirmation
**Must NOT do**:
- Do not make it a planning-only agent (that's Chiron)
- Do not allow destructive operations without confirmation
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 3, 5-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:316-346` - Complete agent example with chiron/chiron-forge pattern
- `skills/agent-development/SKILL.md:253-277` - Permission patterns for bash commands
**Acceptance Criteria**:
```
Scenario: Chiron-Forge prompt file exists
Tool: Bash
Steps:
1. test -f prompts/chiron-forge.txt && wc -c < prompts/chiron-forge.txt
2. Assert: Output > 500
Expected Result: File exists with substantial content
Evidence: File size
Scenario: Chiron-Forge prompt emphasizes execution
Tool: Bash (grep)
Steps:
1. grep -qi "execut" prompts/chiron-forge.txt && echo "found" || echo "missing"
2. grep -qi "build" prompts/chiron-forge.txt && echo "found" || echo "missing"
Expected Result: Execution/build terminology present
Evidence: grep output
```
**Commit**: YES (groups with Task 3)
---
- [x] 5. Create Hermes (Work Communication) system prompt
**What to do**:
- Create `prompts/hermes.txt`
- Specialization: Basecamp tasks, Outlook email, MS Teams meetings
- Greek god of communication, messengers, quick tasks
- Uses Question tool for: which tool to use, clarifying recipients
- Focus on: task updates, email drafting, meeting scheduling
**Must NOT do**:
- Do not handle documentation (Athena's domain)
- Do not handle personal/private tools (Apollo's domain)
- Do not write long-form content (Calliope's domain)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Hermes prompt defines communication domain
Tool: Bash (grep)
Steps:
1. grep -qi "basecamp" prompts/hermes.txt && echo "found" || echo "missing"
2. grep -qi "outlook\|email" prompts/hermes.txt && echo "found" || echo "missing"
3. grep -qi "teams\|meeting" prompts/hermes.txt && echo "found" || echo "missing"
Expected Result: All 3 tools mentioned
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
- [x] 6. Create Athena (Work Knowledge) system prompt
**What to do**:
- Create `prompts/athena.txt`
- Specialization: Outline wiki, documentation, knowledge organization
- Greek goddess of wisdom and strategic warfare
- Focus on: wiki search, knowledge retrieval, documentation updates
- Uses Question tool for: which document to update, clarifying search scope
**Must NOT do**:
- Do not handle communication (Hermes's domain)
- Do not handle private knowledge (Apollo's domain)
- Do not write creative content (Calliope's domain)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Athena prompt defines knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "outline" prompts/athena.txt && echo "found" || echo "missing"
2. grep -qi "wiki\|knowledge" prompts/athena.txt && echo "found" || echo "missing"
3. grep -qi "document" prompts/athena.txt && echo "found" || echo "missing"
Expected Result: Outline and knowledge terms present
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
- [x] 7. Create Apollo (Private Knowledge) system prompt
**What to do**:
- Create `prompts/apollo.txt`
- Specialization: Obsidian vault, personal notes, private knowledge graph
- Greek god of knowledge, prophecy, and light
- Focus on: note search, personal task management, knowledge retrieval
- Uses Question tool for: clarifying which vault, which note
**Must NOT do**:
- Do not handle work tools (Hermes/Athena's domain)
- Do not expose personal data to work contexts
- Do not write long-form content (Calliope's domain)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Apollo prompt defines private knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "obsidian" prompts/apollo.txt && echo "found" || echo "missing"
2. grep -qi "personal\|private" prompts/apollo.txt && echo "found" || echo "missing"
3. grep -qi "note\|vault" prompts/apollo.txt && echo "found" || echo "missing"
Expected Result: Obsidian and personal knowledge terms present
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
- [x] 8. Create Calliope (Writing) system prompt
**What to do**:
- Create `prompts/calliope.txt`
- Specialization: documentation writing, reports, meeting notes, prose
- Greek muse of epic poetry and eloquence
- Focus on: drafting documents, summarizing, writing assistance
- Uses Question tool for: clarifying tone, audience, format
**Must NOT do**:
- Do not manage tools directly (delegates to other agents for tool access)
- Do not handle short communication (Hermes's domain)
- Do not overlap with Athena's wiki management
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2
**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure
**Acceptance Criteria**:
```
Scenario: Calliope prompt defines writing domain
Tool: Bash (grep)
Steps:
1. grep -qi "writ" prompts/calliope.txt && echo "found" || echo "missing"
2. grep -qi "document" prompts/calliope.txt && echo "found" || echo "missing"
3. grep -qi "report\|summar" prompts/calliope.txt && echo "found" || echo "missing"
Expected Result: Writing and documentation terms present
Evidence: grep outputs
```
**Commit**: YES (groups with Task 3)
---
### Wave 3: Tool Integration Skills
- [x] 9. Create Basecamp integration skill
**What to do**:
- Create `skills/basecamp/SKILL.md`
- Document Basecamp MCP capabilities (63 tools from georgeantonopoulos/Basecamp-MCP-Server)
- Include: projects, todos, messages, card tables, campfire, webhooks
- Provide workflow examples for common operations
- Reference MCP tool names for agent use
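A minimal SKILL.md skeleton that satisfies the frontmatter checks could look like this; the description wording is illustrative, not prescribed:

```yaml
---
name: basecamp
description: Manage Basecamp projects, todos, messages, card tables, and campfire via the Basecamp MCP server tools.
---
```

The markdown body below the frontmatter then documents the workflows and MCP tool names.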
**Must NOT do**:
- Do not include MCP server setup instructions (managed by Nix)
- Do not duplicate general project management advice
- Do not include authentication handling
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
- `skill-creator`: Provides skill structure patterns and validation
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Tasks 10-13)
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- https://github.com/georgeantonopoulos/Basecamp-MCP-Server - MCP tool documentation
**Acceptance Criteria**:
```
Scenario: Basecamp skill has valid structure
Tool: Bash
Steps:
1. test -d skills/basecamp && echo "dir exists"
2. test -f skills/basecamp/SKILL.md && echo "file exists"
3. ./scripts/test-skill.sh --validate basecamp || echo "validation failed"
Expected Result: Directory and SKILL.md exist, validation passes
Evidence: Command outputs
Scenario: Basecamp skill has valid frontmatter
Tool: Bash (python)
Steps:
1. python3 -c "
import yaml
content = open('skills/basecamp/SKILL.md').read()
front = content.split('---')[1]
data = yaml.safe_load(front)
assert data['name'] == 'basecamp', 'name mismatch'
assert 'description' in data, 'missing description'
print('Valid')
"
Expected Result: YAML frontmatter valid with correct name
Evidence: Python output
```
**Commit**: YES
- Message: `feat(skills): add basecamp integration skill`
- Files: `skills/basecamp/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate basecamp`
---
- [x] 10. Create Outline wiki integration skill
**What to do**:
- Create `skills/outline/SKILL.md`
- Document Outline API capabilities
- Include: document CRUD, search, collections, sharing
- Provide workflow examples for knowledge management
**Must NOT do**:
- Do not include MCP server setup
- Do not duplicate wiki concepts
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://www.getoutline.com/developers - Outline API documentation
**Acceptance Criteria**:
```
Scenario: Outline skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outline && test -f skills/outline/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outline || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add outline wiki integration skill`
- Files: `skills/outline/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outline`
---
- [x] 11. Create MS Teams integration skill
**What to do**:
- Create `skills/msteams/SKILL.md`
- Document MS Teams Graph API capabilities via MCP
- Include: channels, messages, meetings, chat
- Provide workflow examples for team communication
**Must NOT do**:
- Do not include Graph API authentication flows
- Do not overlap with Outlook email functionality
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/api/resources/teams-api-overview - Teams API
**Acceptance Criteria**:
```
Scenario: MS Teams skill has valid structure
Tool: Bash
Steps:
1. test -d skills/msteams && test -f skills/msteams/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate msteams || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add ms teams integration skill`
- Files: `skills/msteams/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate msteams`
---
- [x] 12. Create Outlook email integration skill
**What to do**:
- Create `skills/outlook/SKILL.md`
- Document Outlook Graph API capabilities via MCP
- Include: mail CRUD, calendar, contacts, folders
- Provide workflow examples for email management
**Must NOT do**:
- Do not include Graph API authentication
- Do not overlap with Teams functionality
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/outlook-mail-concept-overview - Outlook API
**Acceptance Criteria**:
```
Scenario: Outlook skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outlook && test -f skills/outlook/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outlook || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add outlook email integration skill`
- Files: `skills/outlook/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outlook`
---
- [x] 13. Create Obsidian integration skill
**What to do**:
- Create `skills/obsidian/SKILL.md`
- Document Obsidian Local REST API capabilities
- Include: vault operations, note CRUD, search, daily notes
- Reference skills/brainstorming/references/obsidian-workflow.md for patterns
- Provide workflow examples for personal knowledge management
**Must NOT do**:
- Do not include plugin installation
- Do not duplicate general note-taking advice
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None
**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- `skills/brainstorming/references/obsidian-workflow.md` - Existing Obsidian patterns
- https://coddingtonbear.github.io/obsidian-local-rest-api/ - Local REST API docs
**Acceptance Criteria**:
```
Scenario: Obsidian skill has valid structure
Tool: Bash
Steps:
1. test -d skills/obsidian && test -f skills/obsidian/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate obsidian || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```
**Commit**: YES
- Message: `feat(skills): add obsidian integration skill`
- Files: `skills/obsidian/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate obsidian`
---
### Wave 4: Validation
- [x] 14. Create agent validation script
**What to do**:
- Create `scripts/validate-agents.sh`
- Validate agents.json structure and required fields
- Verify all referenced prompt files exist
- Check prompt files are non-empty
- Integrate with existing test-skill.sh patterns
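The checks above can be sketched as a small script. It is demonstrated against a throwaway fixture so the sketch runs anywhere; in the repo the real inputs are `agents/agents.json` and `prompts/*.txt`:

```shell
#!/usr/bin/env bash
# Sketch of the structural checks validate-agents.sh could perform.
set -eu

# Throwaway fixture standing in for the real repo layout
tmp=$(mktemp -d)
mkdir -p "$tmp/prompts"
printf 'You are Chiron, the orchestrator.' > "$tmp/prompts/chiron.txt"
printf '{"chiron":{"description":"Orchestrator","mode":"primary","prompt":"prompts/chiron.txt"}}' \
  > "$tmp/agents.json"

# Validate JSON structure, required fields, and referenced prompt files
python3 - "$tmp" <<'PY'
import json, os, sys

root = sys.argv[1]
data = json.load(open(os.path.join(root, "agents.json")))
for name, agent in data.items():
    for field in ("description", "mode", "prompt"):
        assert field in agent, f"{name}: missing {field}"
    prompt_path = os.path.join(root, agent["prompt"])
    assert os.path.isfile(prompt_path) and os.path.getsize(prompt_path) > 0, \
        f"{name}: prompt file missing or empty: {agent['prompt']}"
print("validate-agents: OK")
PY
```

The same skeleton extends naturally to checking all six prompt files and the expected primary/subagent mode values.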
**Must NOT do**:
- Do not require MCP servers for validation
- Do not perform functional agent testing (just structural)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential (Wave 4)
- **Blocks**: None
- **Blocked By**: Tasks 1, 3-8
**References**:
- `scripts/test-skill.sh` - Existing validation script pattern
**Acceptance Criteria**:
```
Scenario: Validation script is executable
Tool: Bash
Steps:
1. test -x scripts/validate-agents.sh && echo "executable" || echo "not executable"
2. Assert: Output is "executable"
Expected Result: Script has execute permission
Evidence: Command output
Scenario: Validation script runs successfully
Tool: Bash
Steps:
1. ./scripts/validate-agents.sh
2. Assert: Exit code is 0
Expected Result: All validations pass
Evidence: Script output
Scenario: Validation script catches missing files
Tool: Bash
Steps:
1. mv prompts/chiron.txt prompts/chiron.txt.bak
2. ./scripts/validate-agents.sh
3. Assert: Exit code is NOT 0
4. mv prompts/chiron.txt.bak prompts/chiron.txt
Expected Result: Script detects missing prompt file
Evidence: Error output
```
**Commit**: YES
- Message: `feat(scripts): add agent validation script`
- Files: `scripts/validate-agents.sh`
- Pre-commit: `./scripts/validate-agents.sh`
---
## Commit Strategy
| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2 | `feat(agents): add chiron agent framework with 6 agents` | agents/agents.json, prompts/ | `python3 -c "import json; json.load(open('agents/agents.json'))"` |
| 3-8 | `feat(prompts): add chiron and subagent system prompts` | prompts/*.txt | `for f in prompts/*.txt; do test -s "$f"; done` |
| 9 | `feat(skills): add basecamp integration skill` | skills/basecamp/ | `./scripts/test-skill.sh --validate basecamp` |
| 10 | `feat(skills): add outline wiki integration skill` | skills/outline/ | `./scripts/test-skill.sh --validate outline` |
| 11 | `feat(skills): add ms teams integration skill` | skills/msteams/ | `./scripts/test-skill.sh --validate msteams` |
| 12 | `feat(skills): add outlook email integration skill` | skills/outlook/ | `./scripts/test-skill.sh --validate outlook` |
| 13 | `feat(skills): add obsidian integration skill` | skills/obsidian/ | `./scripts/test-skill.sh --validate obsidian` |
| 14 | `feat(scripts): add agent validation script` | scripts/validate-agents.sh | `./scripts/validate-agents.sh` |
---
## Success Criteria
### Verification Commands
```bash
# Validate agents.json
python3 -c "import json; json.load(open('agents/agents.json'))" # Expected: exit 0
# Count agents
python3 -c "import json; print(len(json.load(open('agents/agents.json'))))" # Expected: 6
# Validate all prompts exist
for f in chiron chiron-forge hermes athena apollo calliope; do
test -s prompts/$f.txt && echo "$f: OK" || echo "$f: MISSING"
done
# Validate all skills
./scripts/test-skill.sh --validate # Expected: all pass
# Run full validation
./scripts/validate-agents.sh # Expected: exit 0
```
### Final Checklist
- [x] All 6 agents defined in agents.json
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skills have valid SKILL.md with YAML frontmatter
- [x] validate-agents.sh passes
- [x] test-skill.sh --validate passes
- [x] No MCP configuration in repo
- [x] No inline prompts in agents.json
- [x] All agent names are drawn from Greek mythology (no conflicts with Oh My OpenCode)
---
# Memory System for AGENTS + Obsidian CODEX
## TL;DR
> **Quick Summary**: Build a dual-layer memory system equivalent to openclaw's — Mem0 for fast semantic search/auto-recall + Obsidian CODEX vault for human-readable, versioned knowledge. Memories are stored in both layers and cross-referenced via IDs.
>
> **Deliverables**:
> - New `skills/memory/SKILL.md` — Core orchestration skill (auto-capture, auto-recall, dual-layer sync)
> - New `80-memory/` folder in CODEX vault with category subfolders + memory template
> - Obsidian MCP server configuration (cyanheads/obsidian-mcp-server)
> - Updated skills (mem0-memory, obsidian), Apollo prompt, CODEX docs, user profile
>
> **Estimated Effort**: Medium (9 tasks across config/docs, no traditional code)
> **Parallel Execution**: YES — 4 waves
> **Critical Path**: Task 1 (vault infra) → Task 4 (memory skill) → Task 9 (validation)
---
## Context
### Original Request
Adapt openclaw's memory system for the opencode AGENTS repo, integrated with the Obsidian CODEX vault at `~/CODEX`. The vault should serve as a "second brain" for both the user AND AI agents.
### Interview Summary
**Key Discussions**:
- Analyzed openclaw's 3-layer memory architecture (SQLite+vectors builtin, memory-core plugin, memory-lancedb plugin with auto-capture/auto-recall)
- User confirmed Mem0 is available self-hosted at localhost:8000 — just needs spinning up
- User chose `80-memory/` as dedicated vault folder with category subfolders
- User chose auto+explicit capture (LLM extraction at session end + "remember this" commands)
- User chose agent QA only (no unit test infrastructure — repo is config/docs only)
- No Obsidian MCP server currently configured — plan to add cyanheads/obsidian-mcp-server
**Research Findings**:
- cyanheads/obsidian-mcp-server (363 stars) — Best MCP server: frontmatter management, vault cache, search with pagination, tag management
- GitHub Copilot's memory system: citation-based verification pattern (Phase 2 candidate)
- Production recommendation: dual-layer (operational memory + documented knowledge)
- Mem0 provides semantic search, user_id/agent_id/run_id scoping, metadata support, `/health` endpoint
- Auto-capture best practice: max 3 per session, LLM extraction > regex patterns
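A hedged sketch of what a dual-layer write could look like on the Mem0 side, with the cross-reference back to its Obsidian note carried in metadata. The `/memories` route and payload shape are assumptions based on Mem0's self-hosted REST API; adjust to the deployed version:

```shell
# Illustrative memory payload; field names follow Mem0's documented
# messages/user_id/metadata pattern, but verify against the running instance.
payload='{
  "messages": [{"role": "user", "content": "User prefers Nix-managed tooling"}],
  "user_id": "m3tam3re",
  "metadata": {"category": "preference", "obsidian_ref": "80-memory/preferences/nix-tooling.md"}
}'

# Validate the payload locally before sending
python3 -c 'import json,sys; json.loads(sys.argv[1])' "$payload" && echo "payload OK"

# Send to the self-hosted Mem0 instance; degrade gracefully if it is down
curl -sf -m 2 -X POST http://localhost:8000/memories \
  -H 'Content-Type: application/json' -d "$payload" \
  || echo "Mem0 unreachable, skipping (graceful degradation)"
```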
### Metis Review
**Identified Gaps** (addressed):
- 80-memory/ subfolders vs flat pattern: Resolved — follows `30-resources/` pattern (subfolders by TYPE), not `50-zettelkasten/` flat pattern
- Mem0 health check: Added prerequisite validation step
- Error handling undefined: Defined — Mem0 unavailable → skip, Obsidian unavailable → Mem0 only
- Deployment order: Defined — CODEX first → MCP config → skills → validation
- Scope creep risk: Locked down — citation verification, memory deletion/lifecycle, dashboards all Phase 2
- Agent role clarity: Defined — memory skill loadable by any agent, Apollo is primary memory specialist
---
## Work Objectives
### Core Objective
Build a dual-layer memory system for opencode agents that stores memories in Mem0 (semantic search, operational) AND the Obsidian CODEX vault (human-readable, versioned, wiki-linked). Equivalent in capability to openclaw's memory system.
### Concrete Deliverables
**AGENTS repo** (`~/p/AI/AGENTS`):
- `skills/memory/SKILL.md` — NEW: Core memory skill
- `skills/memory/references/mcp-config.md` — NEW: Obsidian MCP server config documentation
- `skills/mem0-memory/SKILL.md` — UPDATED: Add categories, dual-layer sync
- `skills/obsidian/SKILL.md` — UPDATED: Add 80-memory/ conventions
- `prompts/apollo.txt` — UPDATED: Add memory management responsibilities
- `context/profile.md` — UPDATED: Add memory system configuration
**CODEX vault** (`~/CODEX`):
- `80-memory/` — NEW: Folder with subfolders (preferences/, facts/, decisions/, entities/, other/)
- `templates/memory.md` — NEW: Memory note template
- `tag-taxonomy.md` — UPDATED: Add #memory/* tags
- `AGENTS.md` — UPDATED: Add 80-memory/ docs, folder decision tree, memory workflows
- `README.md` — UPDATED: Add 80-memory/ to folder structure
**Infrastructure** (Nix home-manager — outside AGENTS repo):
- Add cyanheads/obsidian-mcp-server to opencode.json MCP section
### Definition of Done
- [x] All 11 files created/updated as specified
- [x] `curl http://localhost:8000/health` returns 200 (Mem0 running)
- [~] `curl http://127.0.0.1:27124/vault-info` returns vault info (Obsidian REST API) — *Requires Obsidian desktop app to be open*
- [x] `./scripts/test-skill.sh --validate` passes for new/updated skills
- [x] 80-memory/ folder exists in CODEX vault with 5 subfolders
- [x] Memory template creates valid notes with correct frontmatter
### Must Have
- Dual-layer storage: every memory in Mem0 AND Obsidian
- Auto-capture at session end (LLM-based, max 3 per session)
- Explicit "remember this" command support
- Auto-recall: inject relevant memories before agent starts
- 5 categories: preference, fact, decision, entity, other
- Health checks before memory operations
- Cross-reference: mem0_id in Obsidian frontmatter, obsidian_ref in Mem0 metadata
- Error handling: graceful degradation when either layer unavailable
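The health-check and graceful-degradation requirements can be sketched as a pre-flight step run before any memory operation. The endpoints follow this plan (Mem0 at localhost:8000, Obsidian Local REST API at 127.0.0.1:27124); exact routes and timeouts are assumptions:

```shell
# Pre-flight: probe both layers, then pick a degradation mode.
mem0_up=false
obsidian_up=false

# -f fails on HTTP errors, -m 2 caps each probe at 2 seconds
curl -sf -m 2 http://localhost:8000/health >/dev/null 2>&1 && mem0_up=true
curl -sf -m 2 http://127.0.0.1:27124/ >/dev/null 2>&1 && obsidian_up=true

if [ "$mem0_up" = false ] && [ "$obsidian_up" = false ]; then
  echo "memory: both layers unavailable, skipping memory operations"
elif [ "$mem0_up" = false ]; then
  echo "memory: Mem0 unavailable, writing to Obsidian only"
elif [ "$obsidian_up" = false ]; then
  echo "memory: Obsidian unavailable, writing to Mem0 only"
else
  echo "memory: dual-layer OK"
fi
```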
### Must NOT Have (Guardrails)
- NO citation-based memory verification (Phase 2)
- NO memory expiration/lifecycle management (Phase 2)
- NO memory deletion/forget functionality (Phase 2)
- NO memory search UI or Obsidian dashboards (Phase 2)
- NO conflict resolution UI between layers (manual edit only)
- NO unit tests (repo has no test infrastructure — agent QA only)
- NO subfolders in 50-zettelkasten/ or 70-tasks/ (respect flat structure)
- NO new memory categories beyond the 5 defined
- NO modifications to existing Obsidian templates (only ADD memory.md)
- NO changes to agents.json (no new agents or agent config changes)
---
## Verification Strategy
> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
>
> ALL tasks MUST be verifiable WITHOUT any human action.
> Every criterion is verifiable by running a command or checking file existence.
### Test Decision
- **Infrastructure exists**: NO (config-only repo)
- **Automated tests**: None (agent QA only)
- **Framework**: N/A
### Agent-Executed QA Scenarios (MANDATORY — ALL tasks)
Verification tools by deliverable type:
| Type | Tool | How Agent Verifies |
|------|------|-------------------|
| Vault folders/files | Bash (ls, test -f) | Check existence, content |
| Skill YAML frontmatter | Bash (grep, python) | Parse and validate fields |
| Mem0 API | Bash (curl) | Send requests, parse JSON |
| Obsidian REST API | Bash (curl) | Read notes, check frontmatter |
| MCP server | Bash (npx) | Test server startup |
---
## Execution Strategy
### Parallel Execution Waves
```
Wave 1 (Start Immediately — no dependencies):
├── Task 1: CODEX vault memory infrastructure (folders, template, tags)
└── Task 3: Obsidian MCP server config documentation
Wave 2 (After Wave 1 — depends on vault structure existing):
├── Task 2: CODEX vault documentation updates (AGENTS.md, README.md)
├── Task 4: Create core memory skill (skills/memory/SKILL.md)
├── Task 5: Update Mem0 memory skill
└── Task 6: Update Obsidian skill
Wave 3 (After Wave 2 — depends on skill content for prompt/profile):
├── Task 7: Update Apollo agent prompt
└── Task 8: Update user context profile
Wave 4 (After all — final validation):
└── Task 9: End-to-end validation
Critical Path: Task 1 → Task 4 → Task 9
Parallel Speedup: ~50% faster than sequential
```
### Dependency Matrix
| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 2, 4, 5, 6 | 3 |
| 2 | 1 | 9 | 4, 5, 6 |
| 3 | None | 4 | 1 |
| 4 | 1, 3 | 7, 8, 9 | 5, 6 |
| 5 | 1 | 9 | 4, 6 |
| 6 | 1 | 9 | 4, 5 |
| 7 | 4 | 9 | 8 |
| 8 | 4 | 9 | 7 |
| 9 | ALL | None | None (final) |
### Agent Dispatch Summary
| Wave | Tasks | Recommended Agents |
|------|-------|-------------------|
| 1 | 1, 3 | task(category="quick", load_skills=["obsidian"], run_in_background=false) |
| 2 | 2, 4, 5, 6 | dispatch parallel: task(category="unspecified-high") for Task 4; task(category="quick") for 2, 5, 6 |
| 3 | 7, 8 | task(category="quick", run_in_background=false) |
| 4 | 9 | task(category="unspecified-low", run_in_background=false) |
---
## TODOs
- [x] 1. CODEX Vault Memory Infrastructure
**What to do**:
- Create `80-memory/` folder with 5 subfolders: `preferences/`, `facts/`, `decisions/`, `entities/`, `other/`
- Create each subfolder with a `.gitkeep` file so git tracks empty directories
- Create `templates/memory.md` — memory note template with frontmatter:
```yaml
---
type: memory
category: # preference | fact | decision | entity | other
mem0_id: # Mem0 memory ID (e.g., "mem_abc123")
source: explicit # explicit | auto-capture
importance: # critical | high | medium | low
created: <% tp.date.now("YYYY-MM-DD") %>
updated: <% tp.date.now("YYYY-MM-DD") %>
tags:
- memory
sync_targets: []
---
# Memory Title
## Content
<!-- The actual memory content -->
## Context
<!-- When/where this was learned, conversation context -->
## Related
<!-- Wiki links to related notes -->
```
- Update `tag-taxonomy.md` — add `#memory` tag category with subtags:
```
#memory
├── #memory/preference
├── #memory/fact
├── #memory/decision
├── #memory/entity
└── #memory/other
```
Include usage examples and definitions for each category
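The folder creation step above can be sketched as a one-off shell loop. This runs against a scratch directory rather than the real vault; swap `/tmp/codex-demo` for `~/CODEX` when executing for real.

```shell
VAULT=/tmp/codex-demo
for cat in preferences facts decisions entities other; do
  mkdir -p "$VAULT/80-memory/$cat"
  touch "$VAULT/80-memory/$cat/.gitkeep"   # keeps empty dirs tracked by git
done
ls "$VAULT/80-memory"
```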
**Must NOT do**:
- Do NOT create subfolders inside 50-zettelkasten/ or 70-tasks/
- Do NOT modify existing templates (only ADD memory.md)
- Do NOT use Templater syntax that doesn't match existing templates
**Recommended Agent Profile**:
- **Category**: `quick`
- Reason: Simple file creation, no complex logic
- **Skills**: [`obsidian`]
- `obsidian`: Vault conventions, frontmatter patterns, template structure
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 3)
- **Blocks**: Tasks 2, 4, 5, 6
- **Blocked By**: None
**References**:
**Pattern References**:
- `/home/m3tam3re/CODEX/30-resources/` — Subfolder-by-type pattern to follow (bookmarks/, literature/, meetings/, people/, recipes/)
- `/home/m3tam3re/CODEX/templates/task.md` — Template frontmatter pattern (type, status, created, updated, tags, sync_targets)
- `/home/m3tam3re/CODEX/templates/bookmark.md` — Simpler template example
**Documentation References**:
- `/home/m3tam3re/CODEX/AGENTS.md:22-27` — Frontmatter conventions (required fields: type, created, updated)
- `/home/m3tam3re/CODEX/AGENTS.md:163-176` — Template locations table (add memory row)
- `/home/m3tam3re/CODEX/tag-taxonomy.md:1-18` — Tag structure rules (max 3 levels, kebab-case)
**WHY Each Reference Matters**:
- `30-resources/` shows that subfolders-by-type is the established vault pattern for categorized content
- `task.md` template shows the exact frontmatter field set expected by the vault
- `tag-taxonomy.md` rules show the 3-level max hierarchy constraint for new tags
**Acceptance Criteria**:
**Agent-Executed QA Scenarios:**
```
Scenario: Verify 80-memory folder structure
Tool: Bash
Steps:
1. test -d /home/m3tam3re/CODEX/80-memory/preferences
2. test -d /home/m3tam3re/CODEX/80-memory/facts
3. test -d /home/m3tam3re/CODEX/80-memory/decisions
4. test -d /home/m3tam3re/CODEX/80-memory/entities
5. test -d /home/m3tam3re/CODEX/80-memory/other
Expected Result: All 5 directories exist (exit code 0 for each)
Evidence: Shell output captured
Scenario: Verify memory template exists with correct frontmatter
Tool: Bash
Steps:
1. test -f /home/m3tam3re/CODEX/templates/memory.md
2. grep "type: memory" /home/m3tam3re/CODEX/templates/memory.md
3. grep "category:" /home/m3tam3re/CODEX/templates/memory.md
4. grep "mem0_id:" /home/m3tam3re/CODEX/templates/memory.md
Expected Result: File exists and contains required frontmatter fields
Evidence: grep output captured
Scenario: Verify tag-taxonomy updated with memory tags
Tool: Bash
Steps:
1. grep "#memory" /home/m3tam3re/CODEX/tag-taxonomy.md
2. grep "#memory/preference" /home/m3tam3re/CODEX/tag-taxonomy.md
3. grep "#memory/fact" /home/m3tam3re/CODEX/tag-taxonomy.md
Expected Result: All memory tags present in taxonomy
Evidence: grep output captured
```
**Commit**: YES
- Message: `feat(vault): add 80-memory folder structure and memory template`
- Files: `80-memory/`, `templates/memory.md`, `tag-taxonomy.md`
- Repo: `~/CODEX`
---
- [x] 2. CODEX Vault Documentation Updates
**What to do**:
- Update `AGENTS.md`:
- Add `80-memory/` row to Folder Structure table (line ~11)
- Add `#### 80-memory` section in Folder Details (after 70-tasks section, ~line 161)
- Update Folder Decision Tree to include memory branch: `Is it a memory/learned fact? → YES → 80-memory/`
- Add Memory template row to Template Locations table (line ~165)
- Add Memory Workflows section (after Sync Workflow): create memory, retrieve memory, dual-layer sync
- Update `README.md`:
- Add `80-memory/` to folder structure diagram with subfolders
- Add `80-memory/` row to Folder Details section
- Add memory template to Templates table
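The new decision-tree branch might look like the fragment below. The wording is a suggestion only; match the exact `├─ YES →` formatting already used in AGENTS.md.

```
Is it a memory/learned fact (preference, decision, entity)?
├─ YES → 80-memory/<category>/
└─ NO  → continue to next question
```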
**Must NOT do**:
- Do NOT rewrite existing sections — only ADD new content
- Do NOT remove any existing folder/template documentation
**Recommended Agent Profile**:
- **Category**: `quick`
- Reason: Documentation additions to existing files, following established patterns
- **Skills**: [`obsidian`]
- `obsidian`: Vault documentation conventions
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 4, 5, 6)
- **Blocks**: Task 9
- **Blocked By**: Task 1 (needs folder structure to reference)
**References**:
**Pattern References**:
- `/home/m3tam3re/CODEX/AGENTS.md:110-161` — Existing Folder Details sections to follow pattern
- `/home/m3tam3re/CODEX/AGENTS.md:75-108` — Folder Decision Tree format
- `/home/m3tam3re/CODEX/README.md` — Folder structure diagram format
**WHY Each Reference Matters**:
- AGENTS.md folder details show the exact format: Purpose, Structure (flat/subfolders), Key trait, When to use, Naming convention
- Decision tree shows the exact `├─ YES →` format to follow
**Acceptance Criteria**:
```
Scenario: Verify AGENTS.md has 80-memory documentation
Tool: Bash
Steps:
1. grep "80-memory" /home/m3tam3re/CODEX/AGENTS.md
2. grep "Is it a memory" /home/m3tam3re/CODEX/AGENTS.md
3. grep "templates/memory.md" /home/m3tam3re/CODEX/AGENTS.md
Expected Result: All three patterns found
Evidence: grep output
Scenario: Verify README.md has 80-memory in structure
Tool: Bash
Steps:
1. grep "80-memory" /home/m3tam3re/CODEX/README.md
2. grep "preferences/" /home/m3tam3re/CODEX/README.md
Expected Result: Folder and subfolder documented
Evidence: grep output
```
**Commit**: YES
- Message: `docs(vault): add 80-memory documentation to AGENTS.md and README.md`
- Files: `AGENTS.md`, `README.md`
- Repo: `~/CODEX`
---
- [x] 3. Obsidian MCP Server Configuration Documentation
**What to do**:
- Create `skills/memory/references/mcp-config.md` documenting:
- cyanheads/obsidian-mcp-server configuration for opencode.json
- Required environment variables: `OBSIDIAN_API_KEY`, `OBSIDIAN_BASE_URL`, `OBSIDIAN_VERIFY_SSL`, `OBSIDIAN_ENABLE_CACHE`
- opencode.json MCP section snippet:
```json
"Obsidian-Vault": {
"command": ["npx", "obsidian-mcp-server"],
"environment": {
"OBSIDIAN_API_KEY": "<your-api-key>",
"OBSIDIAN_BASE_URL": "http://127.0.0.1:27123",
"OBSIDIAN_VERIFY_SSL": "false",
"OBSIDIAN_ENABLE_CACHE": "true"
},
"enabled": true,
"type": "local"
}
```
- Nix home-manager snippet showing how to add to `programs.opencode.settings.mcp`
- Note that this requires `home-manager switch` after adding
- Available MCP tools list: obsidian_read_note, obsidian_update_note, obsidian_global_search, obsidian_manage_frontmatter, obsidian_manage_tags, obsidian_list_notes, obsidian_delete_note, obsidian_search_replace
- How to get the API key from Obsidian: Settings → Local REST API plugin
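Before committing the snippet to the reference doc, it can be sanity-checked locally by writing the fragment to a file and confirming it parses as JSON (all values are placeholders):

```shell
cat > /tmp/mcp-snippet.json <<'EOF'
{
  "Obsidian-Vault": {
    "command": ["npx", "obsidian-mcp-server"],
    "environment": {
      "OBSIDIAN_API_KEY": "<your-api-key>",
      "OBSIDIAN_BASE_URL": "http://127.0.0.1:27123",
      "OBSIDIAN_VERIFY_SSL": "false",
      "OBSIDIAN_ENABLE_CACHE": "true"
    },
    "enabled": true,
    "type": "local"
  }
}
EOF
python3 -m json.tool /tmp/mcp-snippet.json > /dev/null && echo "valid JSON"
```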
**Must NOT do**:
- Do NOT directly modify `~/.config/opencode/opencode.json` (Nix-managed)
- Do NOT modify `agents/agents.json`
**Recommended Agent Profile**:
- **Category**: `quick`
- Reason: Creating a single reference doc
- **Skills**: [`obsidian`]
- `obsidian`: Obsidian REST API configuration knowledge
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 1)
- **Blocks**: Task 4
- **Blocked By**: None
**References**:
**Pattern References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:156-166` — Existing API reference pattern
- `/home/m3tam3re/.config/opencode/opencode.json:77-127` — Current MCP config format (Exa, Basecamp, etc.)
**External References**:
- GitHub: `https://github.com/cyanheads/obsidian-mcp-server` — Config docs, env vars, tool list
- npm: `npx obsidian-mcp-server` — Installation method
**WHY Each Reference Matters**:
- opencode.json MCP section shows exact JSON format needed (command array, environment, enabled, type)
- cyanheads repo shows required env vars and their defaults
**Acceptance Criteria**:
```
Scenario: Verify MCP config reference file exists
Tool: Bash
Steps:
1. test -f /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
2. grep "obsidian-mcp-server" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
3. grep "OBSIDIAN_API_KEY" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
4. grep "home-manager" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
Expected Result: File exists with MCP config, env vars, and Nix instructions
Evidence: grep output
```
**Commit**: YES (groups with Task 4)
- Message: `feat(memory): add core memory skill and MCP config reference`
- Files: `skills/memory/SKILL.md`, `skills/memory/references/mcp-config.md`
- Repo: `~/p/AI/AGENTS`
---
- [x] 4. Create Core Memory Skill
**What to do**:
- Create `skills/memory/SKILL.md` — the central orchestration skill for the dual-layer memory system
- YAML frontmatter:
```yaml
---
name: memory
description: "Dual-layer memory system (Mem0 + Obsidian CODEX). Use when: (1) storing information for future recall ('remember this'), (2) auto-capturing session insights, (3) recalling past decisions/preferences/facts, (4) injecting relevant context before tasks. Triggers: 'remember', 'recall', 'what do I know about', 'memory', session end."
compatibility: opencode
---
```
- Sections to include:
1. **Overview** — Dual-layer architecture (Mem0 operational + Obsidian documented)
2. **Prerequisites** — Mem0 running at localhost:8000, Obsidian MCP configured (reference mcp-config.md)
3. **Memory Categories** — 5 categories with definitions and examples:
- preference: Personal preferences (UI, workflow, communication style)
- fact: Objective information about user/work (role, tech stack, constraints)
- decision: Architectural/tool choices made (with rationale)
- entity: People, organizations, systems, concepts
- other: Everything else
4. **Workflow 1: Store Memory (Explicit)** — User says "remember X":
- Classify category
- POST to Mem0 `/memories` with user_id, metadata (category, source: "explicit")
- Create Obsidian note in `80-memory/<category>/` using memory template
- Cross-reference: mem0_id in Obsidian frontmatter, obsidian_ref in Mem0 metadata
5. **Workflow 2: Recall Memory** — User asks "what do I know about X":
- POST to Mem0 `/search` with query
- Return results with Obsidian note paths for reference
6. **Workflow 3: Auto-Capture (Session End)** — Automatic extraction:
- Scan conversation for memory-worthy content (preferences stated, decisions made, important facts)
- Select top 3 highest-value memories
- For each: store in Mem0 AND create Obsidian note (source: "auto-capture")
- Present to user: "I captured these memories: [list]. Confirm or reject?"
7. **Workflow 4: Auto-Recall (Session Start)** — Context injection:
- On session start, search Mem0 with user's first message
- If relevant memories found (score > 0.7), inject as `<relevant-memories>` context
- Limit to top 5 most relevant
8. **Error Handling** — Graceful degradation:
- Mem0 unavailable: `curl http://localhost:8000/health` fails → skip all memory ops, warn user
- Obsidian unavailable: Store in Mem0 only, log that Obsidian sync failed
- Both unavailable: Skip memory entirely, continue without memory features
9. **Integration** — How other skills/agents use memory:
- Load `memory` skill to access memory workflows
- Apollo is primary memory specialist
- Any agent can search/store via Mem0 REST API patterns in `mem0-memory` skill
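Workflow 1's store call can be sketched as below. The exact Mem0 request shape (`messages` vs. plain text, field names) should be taken from the `mem0-memory` skill; this only illustrates the metadata contract assumed by the plan.

```shell
cat > /tmp/mem0-store.json <<'EOF'
{
  "messages": [{"role": "user", "content": "Remember: I prefer dark mode"}],
  "user_id": "m3tam3re",
  "metadata": {"category": "preference", "source": "explicit"}
}
EOF
python3 -m json.tool /tmp/mem0-store.json > /dev/null && echo "payload OK"
# With Mem0 running locally, the actual call would be roughly:
#   curl -s -X POST http://localhost:8000/memories \
#     -H "Content-Type: application/json" -d @/tmp/mem0-store.json
```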
**Must NOT do**:
- Do NOT implement citation-based verification
- Do NOT implement memory deletion/forget
- Do NOT add memory expiration logic
- Do NOT create dashboards or search UI
**Recommended Agent Profile**:
- **Category**: `unspecified-high`
- Reason: Core deliverable requiring careful architecture documentation, must be comprehensive
- **Skills**: [`obsidian`, `mem0-memory`]
- `obsidian`: Vault conventions, template patterns, frontmatter standards
- `mem0-memory`: Mem0 REST API patterns, endpoint details
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 2, 5, 6)
- **Blocks**: Tasks 7, 8, 9
- **Blocked By**: Tasks 1, 3
**References**:
**Pattern References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md` — Full file: Mem0 REST API patterns, endpoint table, identity scopes, workflow patterns
- `/home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md` — Full file: Obsidian REST API patterns, create/read/update note workflows, frontmatter conventions
- `/home/m3tam3re/p/AI/AGENTS/skills/reflection/SKILL.md` — Skill structure pattern (overview, workflows, integration)
**API References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:13-21` — Quick Reference endpoint table
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:90-109` — Identity scopes (user_id, agent_id, run_id)
**Documentation References**:
- `/home/m3tam3re/CODEX/AGENTS.md:22-27` — Frontmatter conventions for vault notes
- `/home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md` — MCP server config (created in Task 3)
**External References**:
- OpenClaw reference: `/home/m3tam3re/p/AI/openclaw/extensions/memory-lancedb/index.ts` — Auto-capture regex patterns, auto-recall injection, importance scoring (use as inspiration; do not copy verbatim)
**WHY Each Reference Matters**:
- mem0-memory SKILL.md provides the exact API endpoints and patterns to reference in dual-layer sync workflows
- obsidian SKILL.md provides the vault file creation patterns (curl commands, path encoding)
- openclaw memory-lancedb shows the auto-capture/auto-recall architecture to adapt
**Acceptance Criteria**:
```
Scenario: Validate skill YAML frontmatter
Tool: Bash
Steps:
1. test -f /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "^name: memory$" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "^compatibility: opencode$" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "description:" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: Valid YAML frontmatter with name, description, compatibility
Evidence: grep output
Scenario: Verify skill contains all required workflows
Tool: Bash
Steps:
1. grep -c "## Workflow" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "Auto-Capture" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "Auto-Recall" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "Error Handling" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
5. grep "preference" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: At least 4 workflow sections, auto-capture, auto-recall, error handling, categories
Evidence: grep output
Scenario: Verify dual-layer sync pattern documented
Tool: Bash
Steps:
1. grep "mem0_id" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "obsidian_ref" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "localhost:8000" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: Cross-reference IDs and both layer endpoints documented
Evidence: grep output
```
**Commit**: YES (groups with Task 3)
- Message: `feat(memory): add core memory skill and MCP config reference`
- Files: `skills/memory/SKILL.md`, `skills/memory/references/mcp-config.md`
- Repo: `~/p/AI/AGENTS`
---
- [x] 5. Update Mem0 Memory Skill
**What to do**:
- Add "Memory Categories" section after Identity Scopes (line ~109):
- Table: category name, definition, Obsidian path, example
- Metadata pattern for categories: `{"category": "preference", "source": "explicit|auto-capture"}`
- Add "Dual-Layer Sync" section after Workflow Patterns:
- After storing to Mem0, also create Obsidian note in `80-memory/<category>/`
- Include mem0_id from response in Obsidian note frontmatter
- Include obsidian_ref path in Mem0 metadata via update
- Add "Health Check" workflow: Check `/health` before any memory operations
- Add "Error Handling" section: What to do when Mem0 is unavailable
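The cross-reference step can be illustrated with a mocked Mem0 response: pull out the memory ID and stamp it into the Obsidian note's frontmatter. The response field name `id` is an assumption; check the actual Mem0 response schema.

```shell
echo '{"id": "mem_abc123", "memory": "prefers dark mode"}' > /tmp/mem0-response.json
MEM0_ID=$(python3 -c "import json; print(json.load(open('/tmp/mem0-response.json'))['id'])")
cat > /tmp/prefers-dark-mode.md <<EOF
---
type: memory
category: preference
mem0_id: $MEM0_ID
source: explicit
---
# Prefers dark mode
EOF
grep "mem0_id: mem_abc123" /tmp/prefers-dark-mode.md
```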
**Must NOT do**:
- Do NOT delete existing content
- Do NOT change the YAML frontmatter description (triggers)
- Do NOT change existing API endpoint documentation
**Recommended Agent Profile**:
- **Category**: `quick`
- Reason: Adding sections to existing well-structured file
- **Skills**: [`mem0-memory`]
- `mem0-memory`: Existing skill patterns to extend
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 2, 4, 6)
- **Blocks**: Task 9
- **Blocked By**: Task 1
**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md` — Full file: current content to extend (preserve ALL existing content)
**Acceptance Criteria**:
```
Scenario: Verify categories added to mem0-memory skill
Tool: Bash
Steps:
1. grep "Memory Categories" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
2. grep "preference" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
3. grep "Dual-Layer" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
4. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
Expected Result: New sections present alongside existing content
Evidence: grep output
```
**Commit**: YES
- Message: `feat(mem0-memory): add memory categories and dual-layer sync patterns`
- Files: `skills/mem0-memory/SKILL.md`
- Repo: `~/p/AI/AGENTS`
---
- [x] 6. Update Obsidian Skill
**What to do**:
- Add "Memory Folder Conventions" section (after Best Practices, ~line 228):
- Document `80-memory/` structure with 5 subfolders
- Memory note naming: kebab-case (e.g., `prefers-dark-mode.md`)
- Required frontmatter fields for memory notes (type, category, mem0_id, etc.)
- Add "Memory Note Workflows" section:
- Create memory note: POST to vault REST API with memory template content
- Read memory note: GET with path encoding for `80-memory/` paths
- Search memories: Search within `80-memory/` path filter
- Update Integration table to include memory skill handoff
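Path encoding for `80-memory/` REST calls can be sketched like this; the endpoint in the comment is illustrative, and only the encoding step is the point (kebab-case names need no encoding, but entity names with spaces do):

```shell
NOTE="80-memory/entities/John Doe.md"
ENCODED=$(python3 -c "import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1]))" "$NOTE")
echo "$ENCODED" | tee /tmp/encoded.txt
# e.g. GET http://127.0.0.1:27123/vault/$ENCODED with the API key header
```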
**Must NOT do**:
- Do NOT change existing content or workflows
- Do NOT modify the YAML frontmatter
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`obsidian`]
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 9
- **Blocked By**: Task 1
**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md` — Full file: current content to extend
**Acceptance Criteria**:
```
Scenario: Verify memory conventions added to obsidian skill
Tool: Bash
Steps:
1. grep "Memory Folder" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
2. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
3. grep "mem0_id" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
Expected Result: Memory folder docs and frontmatter patterns present
Evidence: grep output
```
**Commit**: YES
- Message: `feat(obsidian): add memory folder conventions and workflows`
- Files: `skills/obsidian/SKILL.md`
- Repo: `~/p/AI/AGENTS`
---
- [x] 7. Update Apollo Agent Prompt
**What to do**:
- Add "Memory Management" to Core Responsibilities list (after item 4):
- Store memories in dual-layer system (Mem0 + Obsidian CODEX)
- Retrieve memories via semantic search (Mem0)
- Auto-capture session insights at session end (max 3, confirm with user)
- Handle explicit "remember this" requests
- Inject relevant memories into context on session start
- Add memory-related tools to Tool Usage section
- Add memory error handling to Edge Cases
**Must NOT do**:
- Do NOT remove existing responsibilities
- Do NOT change Apollo's identity or boundaries
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Task 8)
- **Blocks**: Task 9
- **Blocked By**: Task 4
**References**:
- `/home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt` — Full file (47 lines): current prompt to extend
**Acceptance Criteria**:
```
Scenario: Verify memory management added to Apollo prompt
Tool: Bash
Steps:
1. grep -i "memory" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt | wc -l
2. grep "Mem0" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt
3. grep "auto-capture" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt
Expected Result: Multiple memory references, Mem0 mentioned, auto-capture documented
Evidence: grep output
```
**Commit**: YES (groups with Task 8)
- Message: `feat(agents): add memory management to Apollo prompt and user profile`
- Files: `prompts/apollo.txt`, `context/profile.md`
- Repo: `~/p/AI/AGENTS`
---
- [x] 8. Update User Context Profile
**What to do**:
- Add "Memory System" section to `context/profile.md`:
- Mem0 endpoint: `http://localhost:8000`
- Mem0 user_id: `m3tam3re` (replace if the user's canonical ID differs)
- Obsidian vault path: `~/CODEX`
- Memory folder: `80-memory/`
- Auto-capture: enabled, max 3 per session
- Auto-recall: enabled, top 5 results, score threshold 0.7
- Memory categories: preference, fact, decision, entity, other
- Obsidian MCP server: cyanheads/obsidian-mcp-server (see skills/memory/references/mcp-config.md)
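A suggested shape for the new section (values mirror the list above; adjust to the profile's existing heading style):

```
## Memory System
- Mem0 endpoint: http://localhost:8000 (user_id: m3tam3re)
- Vault memory folder: ~/CODEX/80-memory/
- Auto-capture: enabled, max 3 memories per session (confirm with user)
- Auto-recall: enabled, top 5 results, score threshold 0.7
- Categories: preference | fact | decision | entity | other
- Obsidian MCP: cyanheads/obsidian-mcp-server (see skills/memory/references/mcp-config.md)
```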
**Must NOT do**:
- Do NOT remove existing profile content
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Task 7)
- **Blocks**: Task 9
- **Blocked By**: Task 4
**References**:
- `/home/m3tam3re/p/AI/AGENTS/context/profile.md` — Current profile to extend
**Acceptance Criteria**:
```
Scenario: Verify memory config in profile
Tool: Bash
Steps:
1. grep "Memory System" /home/m3tam3re/p/AI/AGENTS/context/profile.md
2. grep "localhost:8000" /home/m3tam3re/p/AI/AGENTS/context/profile.md
3. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/context/profile.md
4. grep "auto-capture" /home/m3tam3re/p/AI/AGENTS/context/profile.md
Expected Result: Memory system section with all config values
Evidence: grep output
```
**Commit**: YES (groups with Task 7)
- Message: `feat(agents): add memory management to Apollo prompt and user profile`
- Files: `prompts/apollo.txt`, `context/profile.md`
- Repo: `~/p/AI/AGENTS`
---
- [x] 9. End-to-End Validation
**What to do**:
- Verify ALL files exist and contain expected content
- Run skill validation: `./scripts/test-skill.sh memory`
- Test Mem0 availability: `curl http://localhost:8000/health`
- Test Obsidian REST API: `curl http://127.0.0.1:27124/vault-info`
- Verify CODEX vault structure: `ls -la ~/CODEX/80-memory/`
- Verify template: `head -20 ~/CODEX/templates/memory.md`
- Check all YAML frontmatter valid across new/updated skill files
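One dependency-free way to machine-check frontmatter validity is to confirm each file opens with `---` and has a closing `---` delimiter; this does not fully validate YAML (no stdlib YAML parser), only the delimiters, which covers the most common breakage:

```shell
cat > /tmp/skill-demo.md <<'EOF'
---
name: memory
compatibility: opencode
---
# Body
EOF
python3 - <<'EOF'
lines = open('/tmp/skill-demo.md').read().splitlines()
assert lines[0] == '---' and '---' in lines[1:], "frontmatter delimiters missing"
print("frontmatter delimiters OK")
EOF
```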
**Must NOT do**:
- Do NOT create automated test infrastructure
- Do NOT modify any files — validation only
**Recommended Agent Profile**:
- **Category**: `unspecified-low`
- Reason: Verification only, running commands and checking outputs
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Wave 4 (final, sequential)
- **Blocks**: None (final task)
- **Blocked By**: ALL tasks (1-8)
**Acceptance Criteria**:
```
Scenario: Full file existence check
Tool: Bash
Steps:
1. test -f ~/p/AI/AGENTS/skills/memory/SKILL.md
2. test -f ~/p/AI/AGENTS/skills/memory/references/mcp-config.md
3. test -d ~/CODEX/80-memory/preferences
4. test -f ~/CODEX/templates/memory.md
5. grep "80-memory" ~/CODEX/AGENTS.md
6. grep "#memory" ~/CODEX/tag-taxonomy.md
7. grep "80-memory" ~/CODEX/README.md
8. grep -i "memory" ~/p/AI/AGENTS/prompts/apollo.txt
9. grep "Memory System" ~/p/AI/AGENTS/context/profile.md
Expected Result: All checks pass (exit code 0)
Evidence: Shell output captured
Scenario: Mem0 health check
Tool: Bash
Preconditions: Mem0 server must be running
Steps:
1. curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/health
Expected Result: HTTP 200
Evidence: Status code captured
Note: If Mem0 is not running, this test will fail; start the Mem0 server first
Scenario: Obsidian REST API check
Tool: Bash
Preconditions: Obsidian desktop app must be running with Local REST API plugin
Steps:
1. curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:27124/vault-info
Expected Result: HTTP 200
Evidence: Status code captured
Note: Requires the Obsidian desktop app to be open. Port 27124 is the Local REST API plugin's HTTPS port; if the plain-HTTP request fails, retry with `curl -k https://127.0.0.1:27124/` or use the HTTP port 27123
Scenario: Skill validation
Tool: Bash
Steps:
1. cd ~/p/AI/AGENTS && ./scripts/test-skill.sh memory
Expected Result: Validation passes (no errors)
Evidence: Script output captured
```
**Commit**: NO (validation only, no file changes)
---
## Commit Strategy
| After Task | Message | Files | Repo | Verification |
|------------|---------|-------|------|--------------|
| 1 | `feat(vault): add 80-memory folder structure and memory template` | 80-memory/, templates/memory.md, tag-taxonomy.md | ~/CODEX | ls + grep |
| 2 | `docs(vault): add 80-memory documentation to AGENTS.md and README.md` | AGENTS.md, README.md | ~/CODEX | grep |
| 3+4 | `feat(memory): add core memory skill and MCP config reference` | skills/memory/SKILL.md, skills/memory/references/mcp-config.md | ~/p/AI/AGENTS | test-skill.sh |
| 5 | `feat(mem0-memory): add memory categories and dual-layer sync patterns` | skills/mem0-memory/SKILL.md | ~/p/AI/AGENTS | grep |
| 6 | `feat(obsidian): add memory folder conventions and workflows` | skills/obsidian/SKILL.md | ~/p/AI/AGENTS | grep |
| 7+8 | `feat(agents): add memory management to Apollo prompt and user profile` | prompts/apollo.txt, context/profile.md | ~/p/AI/AGENTS | grep |
**Note**: Two different git repos! CODEX and AGENTS commits are independent.
---
## Success Criteria
### Verification Commands
```bash
# CODEX vault structure
ls ~/CODEX/80-memory/ # Expected: preferences/ facts/ decisions/ entities/ other/
head -5 ~/CODEX/templates/memory.md # Expected: ---\ntype: memory
grep "#memory" ~/CODEX/tag-taxonomy.md # Expected: #memory/* tags
# AGENTS skill validation
cd ~/p/AI/AGENTS && ./scripts/test-skill.sh memory # Expected: pass
# Infrastructure (requires services running)
curl -s http://localhost:8000/health # Expected: 200
curl -s http://127.0.0.1:27124/vault-info # Expected: 200
```
### Final Checklist
- [x] All "Must Have" present (dual-layer, auto-capture, auto-recall, categories, health checks, error handling)
- [x] All "Must NOT Have" absent (no citation system, no deletion, no dashboards, no unit tests)
- [x] CODEX commits pushed (vault structure + docs)
- [x] AGENTS commits pushed (skills + prompts + profile)
- [x] User reminded to add Obsidian MCP to Nix config and run `home-manager switch`
- [x] User reminded to spin up Mem0 server before using memory features

File diff suppressed because it is too large


@@ -1,804 +0,0 @@
# Centralized Rules & Per-Project Context Injection System
## TL;DR
> **Quick Summary**: Create a `rules/` directory in the AGENTS repository containing modular AI coding rules (per-concern + per-language), deployed centrally via Home Manager. A `mkOpencodeRules` Nix helper function lives in the nixpkgs repo (following the existing `ports.nix` → `mkPortHelpers` pattern), generating per-project `opencode.json` via devShell activation.
>
> **Deliverables**:
> - 6 concern rule files (coding-style, naming, documentation, testing, git-workflow, project-structure)
> - 5 language/framework rule files (python, typescript, nix, shell, n8n)
> - `lib/opencode-rules.nix` in nixpkgs repo — `mkOpencodeRules` helper function
> - Updated `lib/default.nix` in nixpkgs repo — imports opencode-rules
> - Updated `opencode.nix` in nixos-config — deploys rules/ alongside existing skills
> - `rules/USAGE.md` — per-project adoption documentation
>
> **Repos Touched**: 3 (AGENTS, nixpkgs, nixos-config)
> **Estimated Effort**: Medium (11 rule files + 3 nix changes + 1 doc)
> **Parallel Execution**: YES — 4 waves
> **Critical Path**: T1-T3 (foundation) → T6-T16 (content) → T17 (verification)
---
## Context
### Original Request
User wants to streamline their Agent workflow by centrally managing language-specific and framework-specific coding rules in the AGENTS repository, while allowing project-specific overrides. Rules should be injected per-project using Nix flakes + direnv.
### Interview Summary
**Key Discussions**:
- **Loading strategy**: Always loaded (not lazy) — rules always in context when project activates
- **Composition mechanism**: Nix flake devShell — each project declares languages/frameworks needed
- **Rule granularity**: Per concern with separate language files for deep patterns
- **Override strategy**: Project-level AGENTS.md overrides central rules (OpenCode's native precedence)
- **opencode.json**: No project-specific one exists yet — devShell generates it entirely
- **Nix helper location**: Lives in `m3ta-nixpkgs` repo at `lib/opencode-rules.nix` (follows `ports.nix` pattern)
- **AGENTS repo stays pure content**: No Nix code — only markdown rule files
**Research Findings**:
- OpenCode `instructions` field in `opencode.json` loads external .md files as always-on context
- Anthropic guide: progressive disclosure, composability, 500-line max, use TOCs for long files
- Best practices: 100-200 lines per file, imperative language, micro-examples (correct/incorrect)
- Rule files benefit from sandwich principle: critical constraints at START and END
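A rule entry following these findings might look like the fragment below (content invented purely for illustration, not a final rule):

```
## Error handling

- Always return early on invalid input. Never nest the happy path.

Correct:
    if user is None:
        return None
    process(user)

Incorrect:
    if user is not None:
        process(user)
```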
### Metis Review
**Identified Gaps** (addressed):
- **Rule update strategy**: When rules change in AGENTS repo, projects run `nix flake update agents`. Standard Nix flow.
- **Multi-language projects**: `mkOpencodeRules { languages = [ "python" "typescript" ]; }` — list multiple.
- **Context window budget**: ~800-1300 lines total. Well under 1500-line budget.
- **Empty rules selection**: `mkOpencodeRules {}` loads only concern files (defaults to all 6).
### Architecture Decision: Nix Helper Location
**Decision**: `mkOpencodeRules` lives in **nixpkgs repo** (`/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`), NOT in AGENTS repo.
**Rationale**:
- nixpkgs already has `lib/ports.nix` → `mkPortHelpers` as an identical pattern
- nixpkgs is already consumed by all configs: `inputs.m3ta-nixpkgs.lib.${system}`
- AGENTS repo stays pure content (markdown + configs), no Nix code
- Projects already have `m3ta-nixpkgs` as a flake input — no new input needed for the helper
**Consumption pattern** (per-project):
```nix
let
m3taLib = inputs.m3ta-nixpkgs.lib.${system};
rules = m3taLib.opencode-rules.mkOpencodeRules {
agents = inputs.agents; # Non-flake input with rule content
languages = [ "python" ];
};
in pkgs.mkShell { shellHook = rules.shellHook; }
```
---
## Work Objectives
### Core Objective
Create a centralized, modular AI coding rules system managed in the AGENTS repo, with a Nix helper in nixpkgs for per-project injection via devShell + direnv.
### Concrete Deliverables
- `rules/concerns/{coding-style,naming,documentation,testing,git-workflow,project-structure}.md` — in AGENTS repo
- `rules/languages/{python,typescript,nix,shell}.md` — in AGENTS repo
- `rules/frameworks/n8n.md` — in AGENTS repo
- `rules/USAGE.md` — adoption documentation in AGENTS repo
- `lib/opencode-rules.nix` — in nixpkgs repo (`/home/m3tam3re/p/NIX/nixpkgs/`)
- Updated `lib/default.nix` — in nixpkgs repo (add import)
- Updated `opencode.nix` — in nixos-config repo (`/home/m3tam3re/p/NIX/nixos-config/home/features/coding/`)
### Definition of Done
- [ ] All 11 rule files exist and are under 250 lines each
- [ ] `lib/opencode-rules.nix` in nixpkgs exports `mkOpencodeRules` following `ports.nix` pattern
- [ ] `opencode.nix` deploys `rules/` to `~/.config/opencode/rules/`
- [ ] A project can use `m3taLib.opencode-rules.mkOpencodeRules` in devShell
### Must Have
- All rule files use imperative language ("Always use...", "Never...")
- Every rule includes micro-examples (correct vs incorrect, 2-3 lines each)
- Concern files are language-agnostic; language subsections are brief pointers
- Language files go deep into toolchain, idioms, anti-patterns
- `mkOpencodeRules` accepts: `{ agents, languages ? [], concerns ? [...], frameworks ? [], extraInstructions ? [] }`
- `mkOpencodeRules` follows `ports.nix` pattern: `{lib}: { mkOpencodeRules = ...}`
- shellHook creates `.opencode-rules` symlink + generates `opencode.json`
- Both `.opencode-rules` and `opencode.json` must be gitignored (documented in USAGE.md)
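A consuming project would typically pair the devShell with direnv and ignore the two generated artifacts. A minimal sketch, using the file names specified above (run here in a throwaway directory for illustration):

```shell
#!/usr/bin/env bash
# Per-project setup sketch: direnv entry point plus gitignore entries
# for the shellHook-generated artifacts.
set -euo pipefail
cd "$(mktemp -d)"   # stand-in for the consuming project root

echo 'use flake' > .envrc   # direnv loads the devShell (and its shellHook) on cd

# Ignore the symlink and the generated config
printf '%s\n' '.opencode-rules' 'opencode.json' >> .gitignore

cat .gitignore
```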
### Must NOT Have (Guardrails)
- Rule files MUST NOT exceed 250 lines
- Total loaded rules MUST NOT exceed 1500 lines for any realistic config
- Concern files MUST NOT contain language-specific implementation details
- MUST NOT put Nix code in AGENTS repo — AGENTS stays pure content
- MUST NOT add rule versioning, testing framework, or generator CLI
- MUST NOT create rules for docker, k8s, terraform — out of scope
- MUST NOT modify existing skills, agents, prompts, or commands
- MUST NOT use generic advice ("write clean code", "follow best practices")
---
## Verification Strategy (MANDATORY)
> **ZERO HUMAN INTERVENTION** — ALL verification is agent-executed. No exceptions.
### Test Decision
- **Infrastructure exists**: NO (config/documentation repos)
- **Automated tests**: NO
- **Framework**: none
### QA Policy
Every task MUST include agent-executed QA scenarios.
Evidence saved to `.sisyphus/evidence/task-{N}-{scenario-slug}.{ext}`.
| Deliverable Type | Verification Tool | Method |
|------------------|-------------------|--------|
| Markdown rule files | Bash (wc, grep) | Line count, micro-examples, imperative language |
| Nix expressions | Bash (nix eval) | Evaluate, check errors |
| Shell integration | Bash | Verify symlink + opencode.json generated |
| Cross-repo | Bash (grep) | Verify entries in correct files |
---
## Execution Strategy
### Parallel Execution Waves
```
Wave 1 (Foundation — 5 tasks, all parallel):
├── Task 1: Create rules/ directory structure in AGENTS repo [quick]
├── Task 2: Create lib/opencode-rules.nix in nixpkgs repo [quick]
├── Task 3: Update lib/default.nix in nixpkgs repo [quick]
├── Task 4: Update opencode.nix in nixos-config repo [quick]
└── Task 5: Create rules/USAGE.md in AGENTS repo [quick]
Wave 2 (Content — 11 rule files, all parallel):
├── Task 6: concerns/coding-style.md [writing]
├── Task 7: concerns/naming.md [writing]
├── Task 8: concerns/documentation.md [writing]
├── Task 9: concerns/testing.md [writing]
├── Task 10: concerns/git-workflow.md [writing]
├── Task 11: concerns/project-structure.md [writing]
├── Task 12: languages/python.md [writing]
├── Task 13: languages/typescript.md [writing]
├── Task 14: languages/nix.md [writing]
├── Task 15: languages/shell.md [writing]
└── Task 16: frameworks/n8n.md [writing]
Wave 3 (Verification):
└── Task 17: End-to-end integration test [deep]
Wave FINAL (Review — 4 parallel):
├── Task F1: Plan compliance audit (oracle)
├── Task F2: Code quality review (unspecified-high)
├── Task F3: Real manual QA (unspecified-high)
└── Task F4: Scope fidelity check (deep)
Critical Path: T1-T3 → T6-T16 (parallel) → T17 → F1-F4
Max Concurrent: 11 (Wave 2)
```
### Dependency Matrix
| Task | Depends On | Blocks | Wave |
|------|------------|--------|------|
| 1 | — | 5, 6-16, 17 | 1 |
| 2 | — | 3, 5, 17 | 1 |
| 3 | 2 | 17 | 1 |
| 4 | — | 17 | 1 |
| 5 | 1, 2 | 17 | 1 |
| 6-16 | 1 | 17 | 2 |
| 17 | 2-5, 6-16 | F1-F4 | 3 |
| F1-F4 | 17 | — | FINAL |
### Agent Dispatch Summary
| Wave | # Parallel | Tasks and Agent Category |
|------|------------|------------------------|
| 1 | **5** | T1-T5 → `quick` |
| 2 | **11** | T6-T16 → `writing` |
| 3 | **1** | T17 → `deep` |
| FINAL | **4** | F1 → `oracle`, F2,F3 → `unspecified-high`, F4 → `deep` |
---
## TODOs
- [x] 1. Create rules/ directory structure in AGENTS repo
**What to do**:
- Create directory structure in `/home/m3tam3re/p/AI/AGENTS/`: `rules/concerns/`, `rules/languages/`, `rules/frameworks/`
- Add `.gitkeep` files to each directory so they're tracked before content is added
- This is the CONTENT repo only — NO Nix code goes here
**Must NOT do**:
- Do not create any Nix files in AGENTS repo
- Do not create rule content files (those are Wave 2)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Tasks 2-5)
- **Blocks**: Tasks 5, 6-16, 17
- **Blocked By**: None
**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/` — existing directory structure pattern
**Acceptance Criteria**:
**QA Scenarios (MANDATORY):**
```
Scenario: Directory structure exists
Tool: Bash
Preconditions: None
Steps:
1. Run `ls /home/m3tam3re/p/AI/AGENTS/rules/concerns/.gitkeep /home/m3tam3re/p/AI/AGENTS/rules/languages/.gitkeep /home/m3tam3re/p/AI/AGENTS/rules/frameworks/.gitkeep`
Expected Result: All 3 .gitkeep files exist
Failure Indicators: "No such file or directory"
Evidence: .sisyphus/evidence/task-1-dirs.txt
Scenario: No Nix files in AGENTS repo rules/
Tool: Bash
Preconditions: Dirs created
Steps:
1. Run `find /home/m3tam3re/p/AI/AGENTS/rules/ -name '*.nix' | wc -l`
Expected Result: Count is 0
Failure Indicators: Count > 0
Evidence: .sisyphus/evidence/task-1-no-nix.txt
```
**Commit**: YES
- Message: `feat(rules): add rules directory structure`
- Files: `rules/concerns/.gitkeep`, `rules/languages/.gitkeep`, `rules/frameworks/.gitkeep`
---
- [x] 2. Create `lib/opencode-rules.nix` in nixpkgs repo
**What to do**:
- Create `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`
- Follow the EXACT pattern of `lib/ports.nix`: `{lib}: { mkOpencodeRules = ...; }`
- The function must accept: `{ agents, languages ? [], concerns ? [ "coding-style" "naming" "documentation" "testing" "git-workflow" "project-structure" ], frameworks ? [], extraInstructions ? [] }`
- `agents` parameter = the non-flake input (path to AGENTS repo in Nix store)
- It must return: `{ shellHook = "..."; instructions = [...]; }`
- `shellHook` must: (a) create `.opencode-rules` symlink to `${agents}/rules`, (b) generate `opencode.json` with `$schema` and `instructions` fields using `builtins.toJSON`
- `instructions` = list of paths relative to project root via `.opencode-rules/` symlink
- Include comprehensive Nix doc comments (matching ports.nix style)
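The selection-to-path mapping the function must implement can be sketched in shell terms (the real helper does this in Nix with `map` and list concatenation; the selections below mirror the QA scenario):

```shell
#!/usr/bin/env bash
# Sketch: how concerns/languages/frameworks selections become instruction paths.
set -euo pipefail
concerns=(coding-style naming documentation testing git-workflow project-structure)
languages=(python typescript)
frameworks=(n8n)

instructions=()
for c in "${concerns[@]}";   do instructions+=(".opencode-rules/concerns/$c.md"); done
for l in "${languages[@]}";  do instructions+=(".opencode-rules/languages/$l.md"); done
for f in "${frameworks[@]}"; do instructions+=(".opencode-rules/frameworks/$f.md"); done

printf '%s\n' "${instructions[@]}"   # 9 paths: 6 concerns + 2 languages + 1 framework
```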
**Must NOT do**:
- Do not deviate from ports.nix pattern
- Do not put any code in AGENTS repo
**Recommended Agent Profile**:
- **Category**: `quick`
- Reason: One Nix file following established pattern
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Tasks 1, 3-5)
- **Blocks**: Tasks 5, 17
- **Blocked By**: None
**References**:
**Pattern References**:
- `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix` — MUST follow this exact pattern: `{lib}: { mkPortHelpers = portsConfig: let ... in { ... }; }`
- `/home/m3tam3re/p/NIX/nixpkgs/lib/default.nix` — shows how lib modules are imported: `import ./ports.nix {inherit lib;}`
- `/home/m3tam3re/p/NIX/nixpkgs/flake.nix:73-77` — shows how lib is exposed: `lib = forAllSystems (system: ... import ./lib {lib = pkgs.lib;});`
**External References**:
- OpenCode rules docs: `https://opencode.ai/docs/rules/` — `instructions` field accepts relative paths
**WHY Each Reference Matters**:
- `ports.nix` is the canonical pattern for lib functions in this repo — `{lib}:` signature, doc comments, nested `let ... in`
- `default.nix` shows how the new module gets wired in
- `flake.nix` confirms how consumers access it: `m3ta-nixpkgs.lib.${system}.opencode-rules.mkOpencodeRules`
**Acceptance Criteria**:
**QA Scenarios (MANDATORY):**
```
Scenario: opencode-rules.nix evaluates without errors
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in builtins.attrNames lib' 2>&1`
Expected Result: Output contains "mkOpencodeRules"
Failure Indicators: "error:" in output
Evidence: .sisyphus/evidence/task-2-eval.txt
Scenario: mkOpencodeRules generates correct paths
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in (lib.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python" "typescript"]; frameworks = ["n8n"]; }).instructions'`
Expected Result: JSON array with 9 paths (6 concerns + 2 languages + 1 framework), all starting with ".opencode-rules/"
Failure Indicators: Wrong count, wrong prefix, error
Evidence: .sisyphus/evidence/task-2-paths.txt
Scenario: Default (empty languages) works
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in (lib.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; }).instructions'`
Expected Result: JSON array with 6 paths (concerns only)
Failure Indicators: Extra paths, error
Evidence: .sisyphus/evidence/task-2-defaults.txt
Scenario: shellHook generates valid JSON
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --raw --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in (lib.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; }).shellHook' | sh -c 'eval "$(cat)"' && python3 -m json.tool opencode.json`
Expected Result: Valid JSON output with "$schema" and "instructions" fields
Failure Indicators: JSON parse error, missing fields
Evidence: .sisyphus/evidence/task-2-json.txt
```
**Commit**: YES
- Message: `feat(lib): add opencode-rules helper for per-project rule injection`
- Files: `lib/opencode-rules.nix`
- Pre-commit: `nix eval --impure --expr '...'`
---
- [x] 3. Update `lib/default.nix` in nixpkgs repo
**What to do**:
- Add one line to `/home/m3tam3re/p/NIX/nixpkgs/lib/default.nix` to import opencode-rules:
`opencode-rules = import ./opencode-rules.nix {inherit lib;};`
- Place it after the existing `ports = import ./ports.nix {inherit lib;};` line
  - Remove the placeholder comment at line 10
**Must NOT do**:
- Do not modify the ports import
- Do not change the function signature `{lib}:`
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES (but logically pairs with Task 2)
- **Parallel Group**: Wave 1
- **Blocks**: Task 17
- **Blocked By**: Task 2 (opencode-rules.nix must exist first)
**References**:
- `/home/m3tam3re/p/NIX/nixpkgs/lib/default.nix:6-12` — current file content, add after line 8
**Acceptance Criteria**:
**QA Scenarios (MANDATORY):**
```
Scenario: default.nix imports opencode-rules
Tool: Bash
Preconditions: Both files updated
Steps:
1. Run `grep 'opencode-rules' /home/m3tam3re/p/NIX/nixpkgs/lib/default.nix`
Expected Result: Line shows `opencode-rules = import ./opencode-rules.nix {inherit lib;};`
Failure Indicators: No match
Evidence: .sisyphus/evidence/task-3-import.txt
Scenario: Full lib evaluates
Tool: Bash
Preconditions: Both files updated
Steps:
1. Run `nix eval --impure --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in builtins.attrNames m3taLib' 2>&1`
Expected Result: Output includes both "ports" and "opencode-rules"
Failure Indicators: Missing "opencode-rules" or error
Evidence: .sisyphus/evidence/task-3-full-lib.txt
```
**Commit**: YES (groups with Task 2)
- Message: `feat(lib): add opencode-rules helper for per-project rule injection`
- Files: `lib/default.nix`, `lib/opencode-rules.nix`
---
- [x] 4. Update opencode.nix in nixos-config repo
**What to do**:
- Add `rules/` deployment to `xdg.configFile` in `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix`
- Add entry: `"opencode/rules" = { source = "${inputs.agents}/rules"; recursive = true; };`
- Place it alongside existing entries for commands, context, prompts, skills (lines 2-18)
**Must NOT do**:
- Do not modify any existing entries
- Do not change agents, MCP, providers, or oh-my-opencode config
- Do not run `home-manager switch`
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1
- **Blocks**: Task 17
- **Blocked By**: None
**References**:
- `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix:2-18` — existing xdg.configFile entries
**Acceptance Criteria**:
**QA Scenarios (MANDATORY):**
```
Scenario: opencode.nix contains rules entry
Tool: Bash
Preconditions: File updated
Steps:
1. Run `grep -c 'opencode/rules' /home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix`
2. Run `grep -c 'opencode/commands\|opencode/context\|opencode/prompts\|opencode/skills' /home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix`
Expected Result: Rules count is 1, existing count is 4 (all preserved)
Failure Indicators: Count mismatch
Evidence: .sisyphus/evidence/task-4-opencode-nix.txt
```
**Commit**: YES
- Message: `feat(opencode): deploy rules/ to ~/.config/opencode/rules/ via home-manager`
- Files: `opencode.nix`
---
- [x] 5. Create `rules/USAGE.md` in AGENTS repo
**What to do**:
- Document how to use `mkOpencodeRules` in a project's `flake.nix`
- Show the nixpkgs consumption pattern: `m3taLib.opencode-rules.mkOpencodeRules { agents = inputs.agents; languages = ["python"]; }`
- Complete example `flake.nix` devShell snippet showing: `inputs.agents` + `inputs.m3ta-nixpkgs` + `mkOpencodeRules` + `shellHook`
- Document `.gitignore` additions: `.opencode-rules` and `opencode.json`
- Explain project-level `AGENTS.md` overrides
- Explain update flow: `nix flake update agents`
- Keep concise: max 100 lines
**Must NOT do**:
- Do not create a README.md (repo anti-pattern)
- Do not reference `rules/default.nix` — the helper lives in nixpkgs, not AGENTS
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []
**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1
- **Blocks**: Task 17
- **Blocked By**: Tasks 1, 2 (needs to reference both structures)
**References**:
- `/home/m3tam3re/p/AI/AGENTS/AGENTS.md` — repo documentation style (concise, code-heavy)
- `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix:1-42` — the doc comment style used for lib functions
- OpenCode rules docs: `https://opencode.ai/docs/rules/` — `instructions` field
**Acceptance Criteria**:
**QA Scenarios (MANDATORY):**
```
Scenario: USAGE.md has required content
Tool: Bash
Preconditions: File created
Steps:
1. Run `wc -l /home/m3tam3re/p/AI/AGENTS/rules/USAGE.md`
2. Run `grep -c 'm3ta-nixpkgs\|mkOpencodeRules\|gitignore\|AGENTS.md\|nix flake update' /home/m3tam3re/p/AI/AGENTS/rules/USAGE.md`
Expected Result: Under 100 lines, key terms >= 5
Failure Indicators: Over 100 lines or missing key concepts
Evidence: .sisyphus/evidence/task-5-usage.txt
```
**Commit**: YES (groups with T1)
- Message: `feat(rules): add rules directory structure and usage documentation`
- Files: `rules/USAGE.md`, `rules/concerns/.gitkeep`, `rules/languages/.gitkeep`, `rules/frameworks/.gitkeep`
---
- [x] 6. Create `rules/concerns/coding-style.md`
**What to do**:
- Write coding style rules: code formatting, patterns/anti-patterns, error handling, type safety, function design, DRY/SOLID
- Imperative language ("Always...", "Never...", "Prefer..."), micro-examples (`Correct:` / `Incorrect:`)
- Keep under 200 lines, sandwich principle (critical rules at start and end)
**Must NOT do**: No language-specific toolchain details, no generic advice ("write clean code"), max 200 lines
**Recommended Agent Profile**: `writing`, Skills: []
**Parallelization**: Wave 2, parallel with T7-T16. Blocks T17. Blocked by T1.
**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/skill-creator/SKILL.md` — documentation density example
- Awesome Cursorrules: `https://github.com/PatrickJS/awesome-cursorrules`
**Acceptance Criteria**:
```
Scenario: Quality check
Tool: Bash
Steps:
1. `wc -l` → under 200
2. `grep -c 'Correct:\|Incorrect:\|Always\|Never\|Prefer'` → >= 10
3. `grep -c '```'` → >= 6 (3+ example pairs)
4. `grep -ic 'write clean code\|follow best practices'` → 0
Evidence: .sisyphus/evidence/task-6-coding-style.txt
```
**Commit**: NO (groups with Wave 2 commit in T17)
---
- [x] 7. Create `rules/concerns/naming.md`
**What to do**:
- Naming conventions: files, variables, functions, classes, modules, constants
- Per-language table (Python=snake_case, TS=camelCase, Nix=camelCase, Shell=UPPER_SNAKE)
- Keep under 150 lines
**Must NOT do**: No toolchain details, max 150 lines
**Recommended Agent Profile**: `writing`, Skills: []
**Parallelization**: Wave 2. Blocks T17. Blocked by T1.
**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:58-62` — existing naming conventions
**Acceptance Criteria**:
```
Scenario: `wc -l` → under 150, `grep -c 'snake_case\|camelCase\|PascalCase\|UPPER_SNAKE'` → >= 4
Evidence: .sisyphus/evidence/task-7-naming.txt
```
**Commit**: NO
---
- [x] 8. Create `rules/concerns/documentation.md`
**What to do**: When to document, docstring formats, inline comment philosophy (WHY not WHAT), README standards. Under 150 lines.
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md` — repo's own style
**Acceptance Criteria**: `wc -l` < 150, `grep -c 'WHY\|WHAT\|Correct:\|Incorrect:'` >= 4
**Commit**: NO
---
- [x] 9. Create `rules/concerns/testing.md`
**What to do**: Arrange-act-assert, behavior vs implementation testing, mocking philosophy, coverage, TDD. Under 200 lines.
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:73-82` — existing test philosophy
**Acceptance Criteria**: `wc -l` < 200, `grep -ic 'arrange\|act\|assert\|mock\|behavior'` >= 4
**Commit**: NO
---
- [x] 10. Create `rules/concerns/git-workflow.md`
**What to do**: Conventional commits, branch naming, PR descriptions, squash vs merge. Under 120 lines.
**Recommended Agent Profile**: `writing`, Skills: [`git-master`]
**Parallelization**: Wave 2. Blocked by T1.
**References**: `https://www.conventionalcommits.org/en/v1.0.0/`
**Acceptance Criteria**: `wc -l` < 120, `grep -c 'feat\|fix\|refactor\|docs\|chore'` >= 5
**Commit**: NO
---
- [x] 11. Create `rules/concerns/project-structure.md`
**What to do**: Directory layout, module organization, entry points, config placement. Per-type: Python (src layout), TS (src/), Nix (modules/). Under 120 lines.
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:24-38` — repo structure
**Acceptance Criteria**: `wc -l` < 120
**Commit**: NO
---
- [x] 12. Create `rules/languages/python.md`
**What to do**:
- Deep Python patterns: `uv` (pkg mgmt), `ruff` (lint/fmt), `pyright` (types), `pytest` + `hypothesis`, Pydantic for data boundaries
- Idioms: comprehensions, context managers, generators, f-strings
- Anti-patterns: bare except, mutable defaults, global state, star imports
- Project setup: `pyproject.toml`, src layout
- Under 250 lines with micro-examples
**Must NOT do**: No general coding style (covered in concerns/), no Django/Flask/FastAPI, max 250 lines
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**References**:
- `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:60` — existing Python conventions (shebang, docstrings)
- Ruff docs: `https://docs.astral.sh/ruff/`, uv docs: `https://docs.astral.sh/uv/`
**Acceptance Criteria**: `wc -l` < 250, `grep -c 'ruff\|uv\|pytest\|pydantic\|pyright'` >= 4, `grep -c '```python'` >= 5, no "pythonic"/"best practice"
**Commit**: NO
---
- [x] 13. Create `rules/languages/typescript.md`
**What to do**:
- Strict mode (`strict: true`, `noUncheckedIndexedAccess`), discriminated unions, branded types, `satisfies`, `as const`
- Modern: `using`, `Promise.withResolvers()`, `Object.groupBy()`
- Toolchain: `bun`/`tsx`, `biome`/`eslint`
- Anti-patterns: `as any`, `@ts-ignore`, `!` assertion, `enum` (prefer union)
- Under 250 lines
**Must NOT do**: No React/Next.js, max 250 lines
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**Acceptance Criteria**: `wc -l` < 250, `grep -c 'strict\|as any\|ts-ignore\|discriminated\|satisfies'` >= 4, `grep -c '```ts'` >= 5
**Commit**: NO
---
- [x] 14. Create `rules/languages/nix.md`
**What to do**:
- Flake structure, module patterns (`{ config, lib, pkgs, ... }:`), `mkIf`/`mkMerge`
- Formatting: `alejandra`, naming: camelCase
- Anti-patterns: `with pkgs;`, `builtins.fetchTarball`, impure ops
- Home Manager patterns, overlays
- Under 200 lines
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**References**:
- `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix` — user's actual Nix style
- `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix` — well-structured Nix code example
**Acceptance Criteria**: `wc -l` < 200, `grep -c 'flake\|mkShell\|alejandra\|with pkgs\|overlay'` >= 4
**Commit**: NO
---
- [x] 15. Create `rules/languages/shell.md`
**What to do**: `set -euo pipefail`, shellcheck, quoting, local vars, POSIX portability, `#!/usr/bin/env bash`. Under 120 lines.
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:61`, `/home/m3tam3re/p/AI/AGENTS/scripts/test-skill.sh`
**Acceptance Criteria**: `wc -l` < 120, `grep -c 'set -euo pipefail\|shellcheck\|#!/usr/bin/env'` >= 2
**Commit**: NO
---
- [x] 16. Create `rules/frameworks/n8n.md`
**What to do**: Workflow design, node patterns, naming, Error Trigger, data patterns, security. Under 120 lines.
**Recommended Agent Profile**: `writing`
**Parallelization**: Wave 2. Blocked by T1.
**References**: n8n docs: `https://docs.n8n.io/`
**Acceptance Criteria**: `wc -l` < 120, `grep -c 'workflow\|node\|Error Trigger\|webhook\|credential'` >= 4
**Commit**: NO
---
- [x] 17. End-to-end integration test + commits
**What to do**:
1. Verify all 11 rule files exist and meet line count limits
2. Verify `lib/opencode-rules.nix` in nixpkgs evaluates correctly for: empty, single-lang, multi-lang, with-frameworks
3. Verify full lib import works: `m3taLib.opencode-rules.mkOpencodeRules`
4. Verify generated `opencode.json` is valid JSON with correct `instructions` paths
5. Verify all instruction paths resolve to real files in AGENTS repo rules/
6. Verify total context budget: all concerns + 1 language < 1500 lines
7. Verify `opencode.nix` has the rules deployment entry
8. Commit all Wave 2 rule files as a single commit in AGENTS repo
**Must NOT do**: Do not run `home-manager switch`, do not modify files, do not create test projects
**Recommended Agent Profile**: `deep`, Skills: [`git-master`]
**Parallelization**: Wave 3 (sequential). Blocks F1-F4. Blocked by T2-T5, T6-T16.
**References**:
- `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix` — Nix helper to evaluate
- `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix` — deployment config
**Acceptance Criteria**:
**QA Scenarios (MANDATORY):**
```
Scenario: All rule files exist and meet limits
Tool: Bash
Steps:
1. For each of 11 files: `wc -l` and verify under limit
Expected Result: All 11 files present, all under limits
Evidence: .sisyphus/evidence/task-17-inventory.txt
Scenario: Full lib integration
Tool: Bash
Steps:
1. Run `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in (m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python" "typescript" "nix" "shell"]; frameworks = ["n8n"]; }).instructions'`
Expected Result: JSON array with 11 paths (6 concerns + 4 langs + 1 framework)
Failure Indicators: Wrong count, error
Evidence: .sisyphus/evidence/task-17-full-integration.txt
Scenario: All paths resolve to real files
Tool: Bash
Steps:
1. For each path in instructions output: verify the corresponding file exists under `rules/`
Expected Result: All paths resolve, none missing
Evidence: .sisyphus/evidence/task-17-paths-resolve.txt
Scenario: Total context budget
Tool: Bash
Steps:
1. `cat /home/m3tam3re/p/AI/AGENTS/rules/concerns/*.md | wc -l`
2. `wc -l < /home/m3tam3re/p/AI/AGENTS/rules/languages/python.md`
3. Sum must be < 1500
Expected Result: Total under 1500
Evidence: .sisyphus/evidence/task-17-budget.txt
```
**Commit**: YES
- Message: `feat(rules): add initial rule files for all concerns, languages, and frameworks`
- Files: all `rules/**/*.md` files (11 total)
- Repo: AGENTS
---
## Final Verification Wave (MANDATORY — after ALL implementation tasks)
> 4 review agents run in PARALLEL. ALL must APPROVE. Rejection → fix → re-run.
- [x] F1. **Plan Compliance Audit** — `oracle`
For each "Must Have": verify implementation exists. For each "Must NOT Have": search for violations. Check evidence files. Compare deliverables across all 3 repos.
Output: `Must Have [N/N] | Must NOT Have [N/N] | Tasks [N/N] | VERDICT`
- [x] F2. **Code Quality Review** — `unspecified-high`
Rule files: no generic advice, has examples, consistent tone, under limits. Nix: valid syntax, correct paths, edge cases. USAGE.md: accurate.
Output: `Files [N clean/N issues] | VERDICT`
- [x] F3. **Real Manual QA** — `unspecified-high`
Run `nix eval` on opencode-rules.nix via full lib import with various configs. Verify JSON. Check rule content quality. Save to `.sisyphus/evidence/final-qa/`.
Output: `Scenarios [N/N pass] | VERDICT`
- [x] F4. **Scope Fidelity Check** — `deep`
For each task: "What to do" vs actual file. 1:1 match. No creep. Check "Must NOT do". Flag unaccounted changes across all 3 repos.
Output: `Tasks [N/N compliant] | Unaccounted [CLEAN/N files] | VERDICT`
---
## Commit Strategy
| After Task(s) | Repo | Message | Files |
|---------------|------|---------|-------|
| 1, 5 | AGENTS | `feat(rules): add rules directory structure and usage documentation` | `rules/USAGE.md`, `rules/{concerns,languages,frameworks}/.gitkeep` |
| 2, 3 | nixpkgs | `feat(lib): add opencode-rules helper for per-project rule injection` | `lib/opencode-rules.nix`, `lib/default.nix` |
| 4 | nixos-config | `feat(opencode): deploy rules/ to ~/.config/opencode/rules/` | `opencode.nix` |
| 17 | AGENTS | `feat(rules): add initial rule files for concerns, languages, and frameworks` | all `rules/**/*.md` (11 files) |
---
## Success Criteria
### Verification Commands
```bash
# All rule files exist (AGENTS repo)
ls rules/concerns/*.md rules/languages/*.md rules/frameworks/*.md
# Context budget
cat rules/concerns/*.md rules/languages/python.md | wc -l # Expected: < 1500
# Nix helper via full lib (nixpkgs)
nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /path/to/nixpkgs/lib {lib = pkgs.lib;}; in (m3taLib.opencode-rules.mkOpencodeRules { agents = /path/to/AGENTS; languages = ["python"]; }).instructions'
# opencode.nix has rules entry (nixos-config)
grep 'opencode/rules' /home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix
```
### Final Checklist
- [ ] All 11 rule files present and under line limits
- [ ] All rule files use imperative language with micro-examples
- [ ] `lib/opencode-rules.nix` in nixpkgs follows ports.nix pattern exactly
- [ ] `lib/default.nix` imports opencode-rules
- [ ] `opencode.nix` deploys rules/ alongside skills/commands/context/prompts
- [ ] `rules/USAGE.md` documents nixpkgs consumption pattern correctly
- [ ] No Nix code in AGENTS repo
- [ ] No existing files modified (except lib/default.nix +1 line, opencode.nix +3 lines)
- [ ] Total loaded context under 1500 lines for any realistic configuration

View File

@@ -1,15 +1,5 @@
# Opencode Skills Repository # Opencode Skills Repository
## MANDATORY: Use td for Task Management
Run td usage --new-session at conversation start (or after /clear). This tells you what to work on next.
Sessions are automatic (based on terminal/agent context). Optional:
- td session "name" to label the current session
- td session --new to force a new session in the same context
Use td usage -q after first read.
Configuration repository for Opencode Agent Skills, context files, and agent configurations. Deployed via Nix home-manager to `~/.config/opencode/`. Configuration repository for Opencode Agent Skills, context files, and agent configurations. Deployed via Nix home-manager to `~/.config/opencode/`.
## Quick Commands ## Quick Commands
@@ -22,21 +12,22 @@ Configuration repository for Opencode Agent Skills, context files, and agent con
# Skill creation # Skill creation
python3 skills/skill-creator/scripts/init_skill.py <name> --path skills/ python3 skills/skill-creator/scripts/init_skill.py <name> --path skills/
# Issue tracking (beads)
bd ready && bd create "title" && bd close <id> && bd sync
``` ```
## Directory Structure ## Directory Structure
``` ```
. .
├── skills/ # Agent skills (25 modules) ├── skills/ # Agent skills (15 modules)
│ └── skill-name/ │ └── skill-name/
│ ├── SKILL.md # Required: YAML frontmatter + workflows │ ├── SKILL.md # Required: YAML frontmatter + workflows
│ ├── scripts/ # Executable code (optional) │ ├── scripts/ # Executable code (optional)
│ ├── references/ # Domain docs (optional) │ ├── references/ # Domain docs (optional)
│ └── assets/ # Templates/files (optional) │ └── assets/ # Templates/files (optional)
├── rules/ # AI coding rules (languages, concerns, frameworks)
│ ├── languages/ # Python, TypeScript, Nix, Shell
│ ├── concerns/ # Testing, naming, documentation, etc.
│ └── frameworks/ # Framework-specific rules (n8n, etc.)
├── agents/ # Agent definitions (agents.json) ├── agents/ # Agent definitions (agents.json)
├── prompts/ # System prompts (chiron*.txt) ├── prompts/ # System prompts (chiron*.txt)
├── context/ # User profiles ├── context/ # User profiles
@@ -68,7 +59,7 @@ compatibility: opencode
## Anti-Patterns (CRITICAL) ## Anti-Patterns (CRITICAL)
**Frontend Design**: NEVER use generic AI aesthetics, NEVER converge on common choices **Frontend Design**: NEVER use generic AI aesthetics, NEVER converge on common choices
**Excalidraw**: NEVER use diamond shapes (broken arrows), NEVER use `label` property **Excalidraw**: NEVER use `label` property (use boundElements + text element pairs instead)
**Debugging**: NEVER fix just symptom, ALWAYS find root cause first **Debugging**: NEVER fix just symptom, ALWAYS find root cause first
**Excel**: ALWAYS respect existing template conventions over guidelines **Excel**: ALWAYS respect existing template conventions over guidelines
**Structure**: NEVER place scripts/docs outside scripts/references/ directories **Structure**: NEVER place scripts/docs outside scripts/references/ directories
@@ -87,27 +78,46 @@ compatibility: opencode
## Deployment

**Nix flake pattern**:
```nix
agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  inputs.nixpkgs.follows = "nixpkgs"; # Optional but recommended
};
```

**Exports:**
- `packages.skills-runtime` — composable runtime with all skill dependencies
- `devShells.default` — dev environment for working on skills

**Mapping** (via home-manager):
- `skills/`, `context/`, `commands/`, `prompts/` → symlinks
- `agents/agents.json` → embedded into config.json
- Agent changes: require `home-manager switch`
- Other changes: visible immediately
## Rules System
Centralized AI coding rules consumed via `mkOpencodeRules` from m3ta-nixpkgs:
```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
inherit agents;
languages = [ "python" "typescript" ];
frameworks = [ "n8n" ];
};
```
See `rules/USAGE.md` for full documentation.
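For reference, the generated `opencode.json` in a consuming project looks roughly like this — the `.opencode-rules/` prefix is the symlink the generated shellHook creates, and the exact entries depend on the `languages`/`frameworks` you select (the `frameworks/n8n.md` path below is assumed from the `rules/` layout):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "instructions": [
    ".opencode-rules/concerns/coding-style.md",
    ".opencode-rules/concerns/testing.md",
    ".opencode-rules/languages/python.md",
    ".opencode-rules/languages/typescript.md",
    ".opencode-rules/frameworks/n8n.md"
  ]
}
```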
## Notes for AI Agents

1. **Config-only repo** - No compilation, no build, manual validation only
2. **Skills are documentation** - Write for AI consumption, progressive disclosure
3. **Consistent structure** - All skills follow 4-level deep pattern (skills/name/ + optional subdirs)
4. **Cross-cutting concerns** - Standardized SKILL.md, workflow patterns, delegation rules
5. **Always push** - Session completion workflow: commit + git push
## Quality Gates
@@ -115,4 +125,5 @@ Before committing:
1. `./scripts/test-skill.sh --validate`
2. Python shebang + docstrings check
3. No extraneous files (README.md, CHANGELOG.md in skills/)
4. If skill has scripts with external dependencies → verify `flake.nix` is updated (see skill-creator Step 4)
5. Git status clean

README.md

@@ -8,7 +8,6 @@ This repository serves as a **personal AI operating system** - a collection of s
- **Productivity & Task Management** - PARA methodology, GTD workflows, project tracking
- **Knowledge Management** - Note-taking, research workflows, information organization
- **AI Development** - Tools for creating new skills and agent configurations
- **Memory & Context** - Persistent memory systems, conversation analysis
- **Document Processing** - PDF manipulation, spreadsheet handling, diagram generation
@@ -24,7 +23,7 @@ This repository serves as a **personal AI operating system** - a collection of s
│   └── profile.md           # Work style, PARA areas, preferences
├── commands/                # Custom command definitions
│   └── reflection.md
├── skills/                  # Opencode Agent Skills (15 skills)
│   ├── agent-development/   # Agent creation and configuration
│   ├── basecamp/            # Basecamp project management
│   ├── brainstorming/       # Ideation & strategic thinking
@@ -32,11 +31,8 @@ This repository serves as a **personal AI operating system** - a collection of s
│   ├── excalidraw/          # Architecture diagrams
│   ├── frontend-design/     # UI/UX design patterns
│   ├── memory/              # Persistent memory system
│   ├── obsidian/            # Obsidian vault management
│   ├── outline/             # Outline wiki integration
│   ├── pdf/                 # PDF manipulation toolkit
│   ├── prompt-engineering-patterns/ # Prompt patterns
│   ├── reflection/          # Conversation analysis
@@ -45,8 +41,12 @@ This repository serves as a **personal AI operating system** - a collection of s
│   └── xlsx/                # Spreadsheet handling
├── scripts/                 # Repository utility scripts
│   └── test-skill.sh        # Test skills without deploying
├── rules/                   # AI coding rules
│   ├── languages/           # Python, TypeScript, Nix, Shell
│   ├── concerns/            # Testing, naming, documentation
│   └── frameworks/          # Framework-specific rules (n8n)
├── flake.nix                # Nix flake: dev shell + skills-runtime export
├── .envrc                   # direnv config (use flake)
├── AGENTS.md                # Developer documentation
└── README.md                # This file
```
@@ -55,21 +55,26 @@ This repository serves as a **personal AI operating system** - a collection of s
### Prerequisites

- **Nix** with flakes enabled — for reproducible dependency management and deployment
- **direnv** (recommended) — auto-activates the development environment when entering the repo
- **Opencode** — AI coding assistant ([opencode.ai](https://opencode.ai))

### Installation

#### Option 1: Nix Flake (Recommended)

This repository is a **Nix flake** that exports:

- **`devShells.default`** — development environment for working on skills (activated via direnv)
- **`packages.skills-runtime`** — composable runtime with all skill script dependencies (Python packages + system tools)

**Consume in your system flake:**

```nix
# flake.nix
inputs.agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  inputs.nixpkgs.follows = "nixpkgs";
};

# In your home-manager module (e.g., opencode.nix)
@@ -85,7 +90,55 @@ programs.opencode.settings.agent = builtins.fromJSON
(builtins.readFile "${inputs.agents}/agents/agents.json");
```

**Deploy skills via home-manager:**
```nix
# home-manager module (e.g., opencode.nix)
{ inputs, system, ... }:
{
# Skill files — symlinked, changes visible immediately
xdg.configFile = {
"opencode/skills".source = "${inputs.agents}/skills";
"opencode/context".source = "${inputs.agents}/context";
"opencode/commands".source = "${inputs.agents}/commands";
"opencode/prompts".source = "${inputs.agents}/prompts";
};
# Agent config — embedded into config.json (requires home-manager switch)
programs.opencode.settings.agent = builtins.fromJSON
(builtins.readFile "${inputs.agents}/agents/agents.json");
# Skills runtime — ensures opencode always has script dependencies
home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
}
```
**Compose into project flakes** (so opencode has skill deps in any project):
```nix
# Any project's flake.nix
{
inputs.agents.url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
inputs.agents.inputs.nixpkgs.follows = "nixpkgs";
outputs = { self, nixpkgs, agents, ... }:
let
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
in {
devShells.${system}.default = pkgs.mkShell {
packages = [
# project-specific tools
pkgs.nodejs
# skill script dependencies
agents.packages.${system}.skills-runtime
];
};
};
}
```
Rebuild:
```bash
home-manager switch
@@ -151,25 +204,35 @@ compatibility: opencode
[Your skill instructions for Opencode]
```

### 3. Register Dependencies
If your skill includes scripts with external dependencies, add them to `flake.nix`:
```nix
# Python packages — add to pythonEnv:
# my-skill: my_script.py
some-python-package
# System tools — add to skills-runtime paths:
# my-skill: needed by my_script.py
pkgs.some-tool
```
Verify: `nix develop --command python3 -c "import some_package"`
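Beyond spot-checking a single import, a small helper like the following (illustrative, not part of the repo) can verify that every module a skill's scripts need resolves inside `nix develop`:

```python
#!/usr/bin/env python3
"""Check that a skill's Python dependencies resolve in the current environment."""
import importlib.util


def missing_deps(modules):
    """Return the subset of module names that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]


if __name__ == "__main__":
    # Hypothetical dependency list for a new skill -- swap in your own modules.
    print(missing_deps(["json", "yaml", "openpyxl"]))
```

An empty list means the flake's `pythonEnv` covers everything the skill imports.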
### 4. Validate the Skill
```bash
python3 skills/skill-creator/scripts/quick_validate.py skills/my-skill-name
```

### 5. Test the Skill
Test your skill without deploying via home-manager:
```bash
# Use the test script to validate and list skills
./scripts/test-skill.sh my-skill-name   # Validate specific skill
./scripts/test-skill.sh --list          # List all dev skills
./scripts/test-skill.sh --run           # Launch opencode with dev skills
```
The test script creates a temporary config directory with symlinks to this repo's skills, allowing you to test changes before committing.
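The idea behind the script can be sketched in a few lines (directory names assumed — check `scripts/test-skill.sh` for the exact layout it uses):

```shell
#!/usr/bin/env sh
# Build a throwaway config dir whose skills/ entry points at this repo.
tmp=$(mktemp -d)
mkdir -p "$tmp/opencode"
ln -s "$PWD/skills" "$tmp/opencode/skills"
echo "dev config at: $tmp/opencode"
# Then point opencode at it instead of ~/.config:
# XDG_CONFIG_HOME="$tmp" opencode
```

Because the skills are symlinked rather than copied, edits in the repo are picked up on the next run with no redeploy.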
## 📚 Available Skills

| Skill | Purpose | Status |
@@ -181,11 +244,8 @@ The test script creates a temporary config directory with symlinks to this repo'
| **excalidraw** | Architecture diagrams from codebase analysis | ✅ Active |
| **frontend-design** | Production-grade UI/UX with high design quality | ✅ Active |
| **memory** | SQLite-based persistent memory with hybrid search | ✅ Active |
| **obsidian** | Obsidian vault management via Local REST API | ✅ Active |
| **outline** | Outline wiki integration for team documentation | ✅ Active |
| **pdf** | PDF manipulation, extraction, creation, and forms | ✅ Active |
| **prompt-engineering-patterns** | Advanced prompt engineering techniques | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
@@ -213,7 +273,23 @@ The test script creates a temporary config directory with symlinks to this repo'
**Configuration**: `agents/agents.json` + `prompts/*.txt`

## 🛠️ Development
### Environment
The repository includes a Nix flake with a development shell. With [direnv](https://direnv.net/) installed, the environment activates automatically:
```bash
cd AGENTS/
# → direnv: loading .envrc
# → 🔧 AGENTS dev shell active — Python 3.13.x, jq-1.x
# All skill script dependencies are now available:
python3 -c "import pypdf, openpyxl, yaml" # ✔️
pdftoppm -v # ✔️
```
Without direnv, activate manually: `nix develop`
### Quality Gates
@@ -232,6 +308,7 @@ Before committing:
- **skills/skill-creator/SKILL.md** - Comprehensive skill creation guide
- **skills/skill-creator/references/workflows.md** - Workflow pattern library
- **skills/skill-creator/references/output-patterns.md** - Output formatting patterns
- **rules/USAGE.md** - AI coding rules integration guide
### Skill Design Principles
@@ -247,6 +324,7 @@ Before committing:
- **basecamp/** - MCP server integration with multiple tool categories
- **brainstorming/** - Framework-based ideation with Obsidian markdown save
- **memory/** - SQLite-based hybrid search implementation
- **excalidraw/** - Diagram generation with JSON templates and Python renderer
## 🔧 Customization
@@ -277,6 +355,21 @@ Edit `context/profile.md` to configure:
Create new command definitions in `commands/` directory following the pattern in `commands/reflection.md`.
### Add Project Rules
Use the rules system to inject AI coding rules into projects:
```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
inherit agents;
languages = [ "python" "typescript" ];
frameworks = [ "n8n" ];
};
```
See `rules/USAGE.md` for full documentation.
## 🌟 Use Cases

### Personal Productivity

flake.lock (generated)

@@ -0,0 +1,27 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1772479524,
"narHash": "sha256-u7nCaNiMjqvKpE+uZz9hE7pgXXTmm5yvdtFaqzSzUQI=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "4215e62dc2cd3bc705b0a423b9719ff6be378a43",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix

@@ -0,0 +1,68 @@
{
  description = "Opencode Agent Skills development environment & runtime";

  inputs = { nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; };

  outputs = { self, nixpkgs }:
    let
      supportedSystems = [ "x86_64-linux" "aarch64-linux" "aarch64-darwin" ];
      forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
    in {
      # Composable runtime for project flakes and home-manager.
      # Usage:
      #   home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   devShells.default = pkgs.mkShell {
      #     packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   };
      packages = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
          pythonEnv = pkgs.python3.withPackages (ps:
            with ps; [
              # skill-creator: quick_validate.py
              pyyaml
              # xlsx: recalc.py
              openpyxl
              # prompt-engineering-patterns: optimize-prompt.py
              numpy
              # pdf: multiple scripts
              pypdf
              pillow # PIL
              pdf2image
              # excalidraw: render_excalidraw.py
              playwright
            ]);
        in {
          skills-runtime = pkgs.buildEnv {
            name = "opencode-skills-runtime";
            paths = [
              pythonEnv
              pkgs.poppler-utils # pdf: pdftoppm/pdfinfo
              pkgs.jq # shell scripts
              pkgs.playwright-driver.browsers # excalidraw: chromium for rendering
            ];
          };
        });

      # Dev shell for working on this repo (wraps skills-runtime).
      devShells = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
        in {
          default = pkgs.mkShell {
            packages = [ self.packages.${system}.skills-runtime ];
            env.PLAYWRIGHT_BROWSERS_PATH = "${pkgs.playwright-driver.browsers}";
            shellHook = ''
              echo "🔧 AGENTS dev shell active — Python $(python3 --version 2>&1 | cut -d' ' -f2), $(jq --version)"
            '';
          };
        });
    };
}

skills/excalidraw/SKILL.md

@@ -1,266 +1,544 @@
---
name: excalidraw
description: "Create Excalidraw diagram JSON files that make visual arguments. Use when: (1) user wants to visualize workflows, architectures, or concepts, (2) creating system diagrams, (3) generating .excalidraw files. Triggers: excalidraw, diagram, visualize, architecture diagram, system diagram."
compatibility: opencode
---

# Excalidraw Diagram Creator

Generate `.excalidraw` JSON files that **argue visually**, not just display information.
## Customization
**All colors and brand-specific styles live in one file:** `references/color-palette.md`. Read it before generating any diagram and use it as the single source of truth for all color choices — shape fills, strokes, text colors, evidence artifact backgrounds, everything.
To make this skill produce diagrams in your own brand style, edit `color-palette.md`. Everything else in this file is universal design methodology and Excalidraw best practices.
---

## Core Philosophy

**Diagrams should ARGUE, not DISPLAY.**

A diagram isn't formatted text. It's a visual argument that shows relationships, causality, and flow that words alone can't express. The shape should BE the meaning.

**The Isomorphism Test**: If you removed all text, would the structure alone communicate the concept? If not, redesign.
**The Education Test**: Could someone learn something concrete from this diagram, or does it just label boxes? A good diagram teaches—it shows actual formats, real event names, concrete examples.
---

## Depth Assessment (Do This First)

Before designing, determine what level of detail this diagram needs:

### Simple/Conceptual Diagrams
Use abstract shapes when:
- Explaining a mental model or philosophy
- The audience doesn't need technical specifics
- The concept IS the abstraction (e.g., "separation of concerns")
### Comprehensive/Technical Diagrams

Use concrete examples when:
- Diagramming a real system, protocol, or architecture
- The diagram will be used to teach or explain (e.g., YouTube video)
- The audience needs to understand what things actually look like
- You're showing how multiple technologies integrate
**For technical diagrams, you MUST include evidence artifacts** (see below).

---

## Research Mandate (For Technical Diagrams)

**Before drawing anything technical, research the actual specifications.**

If you're diagramming a protocol, API, or framework:
1. Look up the actual JSON/data formats
2. Find the real event names, method names, or API endpoints
3. Understand how the pieces actually connect
4. Use real terminology, not generic placeholders
Bad: "Protocol" → "Frontend"
Good: "AG-UI streams events (RUN_STARTED, STATE_DELTA, A2UI_UPDATE)" → "CopilotKit renders via createA2UIMessageRenderer()"
**Research makes diagrams accurate AND educational.**
---
## Evidence Artifacts
Evidence artifacts are concrete examples that prove your diagram is accurate and help viewers learn. Include them in technical diagrams.
**Types of evidence artifacts** (choose what's relevant to your diagram):
| Artifact Type | When to Use | How to Render |
|---------------|-------------|---------------|
| **Code snippets** | APIs, integrations, implementation details | Dark rectangle + syntax-colored text (see color palette for evidence artifact colors) |
| **Data/JSON examples** | Data formats, schemas, payloads | Dark rectangle + colored text (see color palette) |
| **Event/step sequences** | Protocols, workflows, lifecycles | Timeline pattern (line + dots + labels) |
| **UI mockups** | Showing actual output/results | Nested rectangles mimicking real UI |
| **Real input content** | Showing what goes IN to a system | Rectangle with sample content visible |
| **API/method names** | Real function calls, endpoints | Use actual names from docs, not placeholders |
**Example**: For a diagram about a streaming protocol, you might show:
- The actual event names from the spec (not just "Event 1", "Event 2")
- A code snippet showing how to connect
- What the streamed data actually looks like
**Example**: For a diagram about a data transformation pipeline:
- Show sample input data (actual format, not "Input")
- Show sample output data (actual format, not "Output")
- Show intermediate states if relevant
The key principle: **show what things actually look like**, not just what they're called.
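As a sketch of how such an artifact is built in raw Excalidraw JSON (coordinates and colors illustrative — defer to `references/color-palette.md` for the real values), note the two-element pattern: a dark container rectangle plus a bound text element in code font (`fontFamily: 3`):

```json
[
  {
    "id": "evidence-bg",
    "type": "rectangle",
    "x": 100, "y": 100, "width": 380, "height": 120,
    "backgroundColor": "#1e1e1e",
    "fillStyle": "solid",
    "strokeColor": "#868e96",
    "boundElements": [{ "type": "text", "id": "evidence-text" }]
  },
  {
    "id": "evidence-text",
    "type": "text",
    "containerId": "evidence-bg",
    "x": 120, "y": 140,
    "text": "{ \"type\": \"STATE_DELTA\", \"delta\": [ ... ] }",
    "fontFamily": 3,
    "fontSize": 16,
    "strokeColor": "#69db7c"
  }
]
```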
---
## Multi-Zoom Architecture
Comprehensive diagrams operate at multiple zoom levels simultaneously. Think of it like a map that shows both the country borders AND the street names.
### Level 1: Summary Flow
A simplified overview showing the full pipeline or process at a glance. Often placed at the top or bottom of the diagram.
*Example*: `Input → Processing → Output` or `Client → Server → Database`
### Level 2: Section Boundaries
Labeled regions that group related components. These create visual "rooms" that help viewers understand what belongs together.
*Example*: Grouping by responsibility (Backend / Frontend), by phase (Setup / Execution / Cleanup), or by team (User / System / External)
### Level 3: Detail Inside Sections
Evidence artifacts, code snippets, and concrete examples within each section. This is where the educational value lives.
*Example*: Inside a "Backend" section, you might show the actual API response format, not just a box labeled "API Response"
**For comprehensive diagrams, aim to include all three levels.** The summary gives context, the sections organize, and the details teach.
### Bad vs Good
| Bad (Displaying) | Good (Arguing) |
|------------------|----------------|
| 5 equal boxes with labels | Each concept has a shape that mirrors its behavior |
| Card grid layout | Visual structure matches conceptual structure |
| Icons decorating text | Shapes that ARE the meaning |
| Same container for everything | Distinct visual vocabulary per concept |
| Everything in a box | Free-floating text with selective containers |
### Simple vs Comprehensive (Know Which You Need)
| Simple Diagram | Comprehensive Diagram |
|----------------|----------------------|
| Generic labels: "Input" → "Process" → "Output" | Specific: shows what the input/output actually looks like |
| Named boxes: "API", "Database", "Client" | Named boxes + examples of actual requests/responses |
| "Events" or "Messages" label | Timeline with real event/message names from the spec |
| "UI" or "Dashboard" rectangle | Mockup showing actual UI elements and content |
| ~30 seconds to explain | ~2-3 minutes of teaching content |
| Viewer learns the structure | Viewer learns the structure AND the details |
**Simple diagrams** are fine for abstract concepts, quick overviews, or when the audience already knows the details. **Comprehensive diagrams** are needed for technical architectures, tutorials, educational content, or when you want the diagram itself to teach.
---
## Container vs. Free-Floating Text
**Not every piece of text needs a shape around it.** Default to free-floating text. Add containers only when they serve a purpose.
| Use a Container When... | Use Free-Floating Text When... |
|------------------------|-------------------------------|
| It's the focal point of a section | It's a label or description |
| It needs visual grouping with other elements | It's supporting detail or metadata |
| Arrows need to connect to it | It describes something nearby |
| The shape itself carries meaning (decision diamond, etc.) | It's a section title, subtitle, or annotation |
| It represents a distinct "thing" in the system | It's a section title, subtitle, or annotation |
**Typography as hierarchy**: Use font size, weight, and color to create visual hierarchy without boxes. A 28px title doesn't need a rectangle around it.
**The container test**: For each boxed element, ask "Would this work as free-floating text?" If yes, remove the container.
---
## Design Process (Do This BEFORE Generating JSON)
### Step 0: Assess Depth Required
Before anything else, determine if this needs to be:
- **Simple/Conceptual**: Abstract shapes, labels, relationships (mental models, philosophies)
- **Comprehensive/Technical**: Concrete examples, code snippets, real data (systems, architectures, tutorials)
**If comprehensive**: Do research first. Look up actual specs, formats, event names, APIs.
### Step 1: Understand Deeply
Read the content. For each concept, ask:
- What does this concept **DO**? (not what IS it)
- What relationships exist between concepts?
- What's the core transformation or flow?
- **What would someone need to SEE to understand this?** (not just read about)
### Step 2: Map Concepts to Patterns
For each concept, find the visual pattern that mirrors its behavior:
| If the concept... | Use this pattern |
|-------------------|------------------|
| Spawns multiple outputs | **Fan-out** (radial arrows from center) |
| Combines inputs into one | **Convergence** (funnel, arrows merging) |
| Has hierarchy/nesting | **Tree** (lines + free-floating text) |
| Is a sequence of steps | **Timeline** (line + dots + free-floating labels) |
| Loops or improves continuously | **Spiral/Cycle** (arrow returning to start) |
| Is an abstract state or context | **Cloud** (overlapping ellipses) |
| Transforms input to output | **Assembly line** (before → process → after) |
| Compares two things | **Side-by-side** (parallel with contrast) |
| Separates into phases | **Gap/Break** (visual separation between sections) |
### Step 3: Ensure Variety
For multi-concept diagrams: **each major concept must use a different visual pattern**. No uniform cards or grids.
### Step 4: Sketch the Flow
Before JSON, mentally trace how the eye moves through the diagram. There should be a clear visual story.
### Step 5: Generate JSON
Only now create the Excalidraw elements. **See below for how to handle large diagrams.**
### Step 6: Render & Validate (MANDATORY)
After generating the JSON, you MUST run the render-view-fix loop until the diagram looks right. This is not optional — see the **Render & Validate** section below for the full process.
---
## Large / Comprehensive Diagram Strategy
**For comprehensive or technical diagrams, you MUST build the JSON one section at a time.** Do NOT attempt to generate the entire file in a single pass. This is a hard constraint — output token limits mean a comprehensive diagram easily exceeds capacity in one shot. Even if it didn't, generating everything at once leads to worse quality. Section-by-section is better in every way.
### The Section-by-Section Workflow
**Phase 1: Build each section**
1. **Create the base file** with the JSON wrapper (`type`, `version`, `appState`, `files`) and the first section of elements.
2. **Add one section per edit.** Each section gets its own dedicated pass — take your time with it. Think carefully about the layout, spacing, and how this section connects to what's already there.
3. **Use descriptive string IDs** (e.g., `"trigger_rect"`, `"arrow_fan_left"`) so cross-section references are readable.
4. **Namespace seeds by section** (e.g., section 1 uses 100xxx, section 2 uses 200xxx) to avoid collisions.
5. **Update cross-section bindings** as you go. When a new section's element needs to bind to an element from a previous section (e.g., an arrow connecting sections), edit the earlier element's `boundElements` array at the same time.
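A cross-section arrow might look like this sketch (IDs hypothetical); remember that both `trigger_rect` and `router_rect` must also list this arrow in their own `boundElements` arrays, or Excalidraw won't keep the connection live:

```json
{
  "id": "arrow_s1_to_s2",
  "type": "arrow",
  "x": 400, "y": 200, "width": 160, "height": 0,
  "points": [[0, 0], [160, 0]],
  "startBinding": { "elementId": "trigger_rect", "focus": 0, "gap": 8 },
  "endBinding": { "elementId": "router_rect", "focus": 0, "gap": 8 }
}
```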
**Phase 2: Review the whole**
After all sections are in place, read through the complete JSON and check:
- Are cross-section arrows bound correctly on both ends?
- Is the overall spacing balanced, or are some sections cramped while others have too much whitespace?
- Do IDs and bindings all reference elements that actually exist?
Fix any alignment or binding issues before rendering.
**Phase 3: Render & validate**
Now run the render-view-fix loop from the Render & Validate section. This is where you'll catch visual issues that aren't obvious from JSON — overlaps, clipping, imbalanced composition.
### Section Boundaries
Plan your sections around natural visual groupings from the diagram plan. A typical large diagram might split into:
- **Section 1**: Entry point / trigger
- **Section 2**: First decision or routing
- **Section 3**: Main content (hero section — may be the largest single section)
- **Section 4-N**: Remaining phases, outputs, etc.
Each section should be independently understandable: its elements, internal arrows, and any cross-references to adjacent sections.
### What NOT to Do
- **Don't generate the entire diagram in one response.** You will hit the output token limit and produce truncated, broken JSON. Even if the diagram is small enough to fit, splitting into sections produces better results.
- **Don't write a Python generator script.** The templating and coordinate math seem helpful but introduce a layer of indirection that makes debugging harder. Hand-crafted JSON with descriptive IDs is more maintainable.
---
## Visual Pattern Library
### Fan-Out (One-to-Many)
Central element with arrows radiating to multiple targets. Use for: sources, PRDs, root causes, central hubs.
```
□ → ○
``` ```
### Convergence (Many-to-One)
Multiple inputs merging through arrows to single output. Use for: aggregation, funnels, synthesis.
```
○ ↘
○ → □
○ ↗
```
### Tree (Hierarchy)
Parent-child branching with connecting lines and free-floating text (no boxes needed). Use for: file systems, org charts, taxonomies.
```
label
├── label
│ ├── label
│ └── label
└── label
```
Use `line` elements for the trunk and branches, free-floating text for labels.
### Spiral/Cycle (Continuous Loop)
Elements in sequence with arrow returning to start. Use for: feedback loops, iterative processes, evolution.
```
□ → □
↑ ↓
□ ← □
```
### Cloud (Abstract State)
Overlapping ellipses with varied sizes. Use for: context, memory, conversations, mental states.
### Assembly Line (Transformation)
Input → Process Box → Output with clear before/after. Use for: transformations, processing, conversion.
```
○○○ → [PROCESS] → □□□
chaos order
```
### Side-by-Side (Comparison)
Two parallel structures with visual contrast. Use for: before/after, options, trade-offs.
### Gap/Break (Separation)
Visual whitespace or barrier between sections. Use for: phase changes, context resets, boundaries.
### Lines as Structure
Use lines (type: `line`, not arrows) as primary structural elements instead of boxes:
- **Timelines**: Vertical or horizontal line with small dots (10-20px ellipses) at intervals, free-floating labels beside each dot
- **Tree structures**: Vertical trunk line + horizontal branch lines, with free-floating text labels (no boxes needed)
- **Dividers**: Thin dashed lines to separate sections
- **Flow spines**: A central line that elements relate to, rather than connecting boxes
```
Timeline: Tree:
●─── Label 1 │
│ ├── item
●─── Label 2 │ ├── sub
│ │ └── sub
●─── Label 3 └── item
```
Lines + free-floating text often create a cleaner result than boxes + contained text.
---
## Shape Meaning
Choose shape based on what it represents—or use no shape at all:
| Concept Type | Shape | Why |
|--------------|-------|-----|
| Labels, descriptions, details | **none** (free-floating text) | Typography creates hierarchy |
| Section titles, annotations | **none** (free-floating text) | Font size/weight is enough |
| Markers on a timeline | small `ellipse` (10-20px) | Visual anchor, not container |
| Start, trigger, input | `ellipse` | Soft, origin-like |
| End, output, result | `ellipse` | Completion, destination |
| Decision, condition | `diamond` | Classic decision symbol |
| Process, action, step | `rectangle` | Contained action |
| Abstract state, context | overlapping `ellipse` | Fuzzy, cloud-like |
| Hierarchy node | lines + text (no boxes) | Structure through lines |
**Rule**: Default to no container. Add shapes only when they carry meaning. Aim for <30% of text elements to be inside containers.
---
## Color as Meaning
Colors encode information, not decoration. Every color choice should come from `references/color-palette.md` — the semantic shape colors, text hierarchy colors, and evidence artifact colors are all defined there.
**Key principles:**
- Each semantic purpose (start, end, decision, AI, error, etc.) has a specific fill/stroke pair
- Free-floating text uses color for hierarchy (titles, subtitles, details — each at a different level)
- Evidence artifacts (code snippets, JSON examples) use their own dark background + colored text scheme
- Always pair a darker stroke with a lighter fill for contrast
**Do not invent new colors.** If a concept doesn't fit an existing semantic category, use Primary/Neutral or Secondary.
---
## Modern Aesthetics
For clean, professional diagrams:
### Roughness
- `roughness: 0` — Clean, crisp edges. Use for modern/technical diagrams.
- `roughness: 1` — Hand-drawn, organic feel. Use for brainstorming/informal diagrams.
**Default to 0** for most professional use cases.
### Stroke Width
- `strokeWidth: 1` — Thin, elegant. Good for lines, dividers, subtle connections.
- `strokeWidth: 2` — Standard. Good for shapes and primary arrows.
- `strokeWidth: 3` — Bold. Use sparingly for emphasis (main flow line, key connections).
### Opacity
**Always use `opacity: 100` for all elements.** Use color, size, and stroke width to create hierarchy instead of transparency.
### Small Markers Instead of Shapes
Instead of full shapes, use small dots (10-20px ellipses) as:
- Timeline markers
- Bullet points
- Connection nodes
- Visual anchors for free-floating text
---
## Layout Principles
### Hierarchy Through Scale
- **Hero**: 300×150 - visual anchor, most important
- **Primary**: 180×90
- **Secondary**: 120×60
- **Small**: 60×40
### Whitespace = Importance
The most important element has the most empty space around it (200px+).
### Flow Direction
Guide the eye: typically left→right or top→bottom for sequences, radial for hub-and-spoke.
### Connections Required
Position alone doesn't show relationships. If A relates to B, there must be an arrow.
---
## Text Rules
**CRITICAL**: The JSON `text` property contains ONLY readable words.
```json
{
  "id": "myElement1",
  "text": "Start",
  "originalText": "Start"
}
```
Settings: `fontSize: 16`, `fontFamily: 3`, `textAlign: "center"`, `verticalAlign: "middle"`
---
## JSON Structure
```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "https://excalidraw.com",
  "elements": [...],
  "appState": {
    "viewBackgroundColor": "#ffffff",
    "gridSize": 20
  },
  "files": {}
}
```
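If you assemble the file programmatically, the scaffold above maps to a small helper. A minimal Python sketch (the `new_document` name is illustrative, not part of any Excalidraw API):

```python
import json

def new_document(elements):
    """Wrap a list of element dicts in the standard Excalidraw file scaffold."""
    return {
        "type": "excalidraw",
        "version": 2,
        "source": "https://excalidraw.com",
        "elements": elements,
        "appState": {"viewBackgroundColor": "#ffffff", "gridSize": 20},
        "files": {},
    }

# An empty but valid .excalidraw document:
print(json.dumps(new_document([]), indent=2))
```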
## Element Templates
See `references/element-templates.md` for copy-paste JSON templates for each element type (text, line, dot, rectangle, arrow). Pull colors from `references/color-palette.md` based on each element's semantic purpose.
---
## Render & Validate (MANDATORY)
| Component | Background | Stroke | You cannot judge a diagram from JSON alone. After generating or editing the Excalidraw JSON, you MUST render it to PNG, view the image, and fix what you see — in a loop until it's right. This is a core part of the workflow, not a final check.
|-----------|------------|--------|
| Frontend | `#a5d8ff` | `#1971c2` |
| Backend/API | `#d0bfff` | `#7048e8` |
| Database | `#b2f2bb` | `#2f9e44` |
| Storage | `#ffec99` | `#f08c00` |
| AI/ML | `#e599f7` | `#9c36b5` |
| External APIs | `#ffc9c9` | `#e03131` |
| Orchestration | `#ffa8a8` | `#c92a2a` |
| Message Queue | `#fff3bf` | `#fab005` |
| Cache | `#ffe8cc` | `#fd7e14` |
| Users | `#e7f5ff` | `#1971c2` |
**Cloud-specific palettes:** See `references/colors.md` ### How to Render
Run the render script from the skill's `references/` directory:
```bash
python3 <skill-references-dir>/render_excalidraw.py <path-to-file.excalidraw>
```
This outputs a PNG next to the `.excalidraw` file. Then use the **Read tool** on the PNG to actually view it.
### The Loop
After generating the initial JSON, run this cycle:
**1. Render & View** — Run the render script, then Read the PNG.
**2. Audit against your original vision** — Before looking for bugs, compare the rendered result to what you designed in Steps 1-4. Ask:
- Does the visual structure match the conceptual structure you planned?
- Does each section use the pattern you intended (fan-out, convergence, timeline, etc.)?
- Does the eye flow through the diagram in the order you designed?
- Is the visual hierarchy correct — hero elements dominant, supporting elements smaller?
- For technical diagrams: are the evidence artifacts (code snippets, data examples) readable and properly placed?
**3. Check for visual defects:**
- Text clipped by or overflowing its container
- Text or shapes overlapping other elements
- Arrows crossing through elements instead of routing around them
- Arrows landing on the wrong element or pointing into empty space
- Labels floating ambiguously (not clearly anchored to what they describe)
- Uneven spacing between elements that should be evenly spaced
- Sections with too much whitespace next to sections that are too cramped
- Text too small to read at the rendered size
- Overall composition feels lopsided or unbalanced
**4. Fix** — Edit the JSON to address everything you found. Common fixes:
- Widen containers when text is clipped
- Adjust `x`/`y` coordinates to fix spacing and alignment
- Add intermediate waypoints to arrow `points` arrays to route around elements
- Reposition labels closer to the element they describe
- Resize elements to rebalance visual weight across sections
**5. Re-render & re-view** — Run the render script again and Read the new PNG.
**6. Repeat** — Keep cycling until the diagram passes both the vision check (Step 2) and the defect check (Step 3). Typically takes 2-4 iterations. Don't stop after one pass just because there are no critical bugs — if the composition could be better, improve it.
### When to Stop
The loop is done when:
- The rendered diagram matches the conceptual design from your planning steps
- No text is clipped, overlapping, or unreadable
- Arrows route cleanly and connect to the right elements
- Spacing is consistent and the composition is balanced
- You'd be comfortable showing it to someone without caveats
---
## Quality Checklist
### Depth & Evidence (Check First for Technical Diagrams)
1. **Research done**: Did you look up actual specs, formats, event names?
2. **Evidence artifacts**: Are there code snippets, JSON examples, or real data?
3. **Multi-zoom**: Does it have summary flow + section boundaries + detail?
4. **Concrete over abstract**: Real content shown, not just labeled boxes?
5. **Educational value**: Could someone learn something concrete from this?
### Conceptual
6. **Isomorphism**: Does each visual structure mirror its concept's behavior?
7. **Argument**: Does the diagram SHOW something text alone couldn't?
8. **Variety**: Does each major concept use a different visual pattern?
9. **No uniform containers**: Avoided card grids and equal boxes?
### Container Discipline
10. **Minimal containers**: Could any boxed element work as free-floating text instead?
11. **Lines as structure**: Are tree/timeline patterns using lines + text rather than boxes?
12. **Typography hierarchy**: Are font size and color creating visual hierarchy (reducing need for boxes)?
### Structural
13. **Connections**: Every relationship has an arrow or line
14. **Flow**: Clear visual path for the eye to follow
15. **Hierarchy**: Important elements are larger/more isolated
### Technical
16. **Text clean**: `text` contains only readable words
17. **Font**: `fontFamily: 3`
18. **Roughness**: `roughness: 0` for clean/modern (unless hand-drawn style requested)
19. **Opacity**: `opacity: 100` for all elements (no transparency)
20. **Container ratio**: <30% of text elements should be inside containers
### Visual Validation (Render Required)
21. **Rendered to PNG**: Diagram has been rendered and visually inspected
22. **No text overflow**: All text fits within its container
23. **No overlapping elements**: Shapes and text don't overlap unintentionally
24. **Even spacing**: Similar elements have consistent spacing
25. **Arrows land correctly**: Arrows connect to intended elements without crossing others
26. **Readable at export size**: Text is legible in the rendered PNG
27. **Balanced composition**: No large empty voids or overcrowded regions
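The mechanical items above (16, 17, 19, 20, plus duplicate IDs) can be linted before rendering. A hedged Python sketch, assuming `elements` is the parsed `elements` array; the `lint` helper is illustrative:

```python
def lint(elements):
    """Flag violations of the mechanical checklist items.

    Visual checks (overlap, spacing, balance) still require a render.
    """
    problems = []
    ids = [e["id"] for e in elements]
    if len(ids) != len(set(ids)):
        problems.append("duplicate element ids")
    texts = [e for e in elements if e["type"] == "text"]
    contained = [t for t in texts if t.get("containerId")]
    if texts and len(contained) / len(texts) >= 0.30:
        problems.append("container ratio >= 30%")
    for e in elements:
        if e.get("opacity", 100) != 100:
            problems.append(f"{e['id']}: opacity != 100")
        if e["type"] == "text" and e.get("fontFamily") != 3:
            problems.append(f"{e['id']}: fontFamily != 3")
    return problems
```

This does not replace the render loop; it only catches JSON-level slips early.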
---
## Output
- **Location:** `docs/architecture/` or user-specified
- **Filename:** Descriptive, e.g., `system-architecture.excalidraw`
- **Testing:** Open in https://excalidraw.com or VS Code extension
View File
@@ -1,288 +0,0 @@
# Arrow Routing Reference
Complete guide for creating elbow arrows with proper connections.
---
## Critical: Elbow Arrow Properties
Three required properties for 90-degree corners:
```json
{
"type": "arrow",
"roughness": 0, // Clean lines
"roundness": null, // Sharp corners (not curved)
"elbowed": true // Enables elbow mode
}
```
**Without these, arrows will be curved, not 90-degree elbows.**
---
## Edge Calculation Formulas
| Shape Type | Edge | Formula |
|------------|------|---------|
| Rectangle | Top | `(x + width/2, y)` |
| Rectangle | Bottom | `(x + width/2, y + height)` |
| Rectangle | Left | `(x, y + height/2)` |
| Rectangle | Right | `(x + width, y + height/2)` |
| Ellipse | Top | `(x + width/2, y)` |
| Ellipse | Bottom | `(x + width/2, y + height)` |
---
## Universal Arrow Routing Algorithm
```
FUNCTION createArrow(source, target, sourceEdge, targetEdge):
// Step 1: Get source edge point
sourcePoint = getEdgePoint(source, sourceEdge)
// Step 2: Get target edge point
targetPoint = getEdgePoint(target, targetEdge)
// Step 3: Calculate offsets
dx = targetPoint.x - sourcePoint.x
dy = targetPoint.y - sourcePoint.y
// Step 4: Determine routing pattern
IF sourceEdge == "bottom" AND targetEdge == "top":
IF abs(dx) < 10: // Nearly aligned
points = [[0, 0], [0, dy]]
ELSE: // Need L-shape
points = [[0, 0], [dx, 0], [dx, dy]]
ELSE IF sourceEdge == "right" AND targetEdge == "left":
IF abs(dy) < 10:
points = [[0, 0], [dx, 0]]
ELSE:
points = [[0, 0], [0, dy], [dx, dy]]
ELSE IF sourceEdge == targetEdge: // U-turn
clearance = 50
IF sourceEdge == "right":
points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
ELSE IF sourceEdge == "bottom":
points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
// Step 5: Calculate bounding box
width = max(abs(p[0]) for p in points)
height = max(abs(p[1]) for p in points)
RETURN {x: sourcePoint.x, y: sourcePoint.y, points, width, height}
FUNCTION getEdgePoint(shape, edge):
SWITCH edge:
"top": RETURN (shape.x + shape.width/2, shape.y)
"bottom": RETURN (shape.x + shape.width/2, shape.y + shape.height)
"left": RETURN (shape.x, shape.y + shape.height/2)
"right": RETURN (shape.x + shape.width, shape.y + shape.height/2)
```
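The pseudocode above translates directly to Python. This sketch covers the listed edge pairs plus a straight-segment fallback for combinations the pseudocode leaves unspecified (the fallback is an assumption, not part of the original algorithm):

```python
def get_edge_point(shape, edge):
    """Midpoint of one edge of a shape's bounding box."""
    x, y, w, h = shape["x"], shape["y"], shape["width"], shape["height"]
    return {
        "top": (x + w / 2, y),
        "bottom": (x + w / 2, y + h),
        "left": (x, y + h / 2),
        "right": (x + w, y + h / 2),
    }[edge]

def create_arrow(source, target, source_edge, target_edge, clearance=50):
    """Route an elbow arrow between two shape edges, per the algorithm above."""
    sx, sy = get_edge_point(source, source_edge)
    tx, ty = get_edge_point(target, target_edge)
    dx, dy = tx - sx, ty - sy
    if source_edge == "bottom" and target_edge == "top":
        points = [[0, 0], [0, dy]] if abs(dx) < 10 else [[0, 0], [dx, 0], [dx, dy]]
    elif source_edge == "right" and target_edge == "left":
        points = [[0, 0], [dx, 0]] if abs(dy) < 10 else [[0, 0], [0, dy], [dx, dy]]
    elif source_edge == target_edge == "right":
        points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
    elif source_edge == target_edge == "bottom":
        points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
    else:
        points = [[0, 0], [dx, dy]]  # fallback: single straight segment
    width = max(abs(p[0]) for p in points)
    height = max(abs(p[1]) for p in points)
    return {"x": sx, "y": sy, "points": points, "width": width, "height": height}
```

Running it on the worked examples below reproduces their numbers.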
---
## Arrow Patterns Reference
| Pattern | Points | Use Case |
|---------|--------|----------|
| Down | `[[0,0], [0,h]]` | Vertical connection |
| Right | `[[0,0], [w,0]]` | Horizontal connection |
| L-left-down | `[[0,0], [-w,0], [-w,h]]` | Go left, then down |
| L-right-down | `[[0,0], [w,0], [w,h]]` | Go right, then down |
| L-down-left | `[[0,0], [0,h], [-w,h]]` | Go down, then left |
| L-down-right | `[[0,0], [0,h], [w,h]]` | Go down, then right |
| S-shape | `[[0,0], [0,h1], [w,h1], [w,h2]]` | Navigate around obstacles |
| U-turn | `[[0,0], [w,0], [w,-h], [0,-h]]` | Callback/return arrows |
---
## Worked Examples
### Vertical Connection (Bottom to Top)
```
Source: x=500, y=200, width=180, height=90
Target: x=500, y=400, width=180, height=90
source_bottom = (500 + 180/2, 200 + 90) = (590, 290)
target_top = (500 + 180/2, 400) = (590, 400)
Arrow x = 590, y = 290
Distance = 400 - 290 = 110
Points = [[0, 0], [0, 110]]
```
### Fan-out (One to Many)
```
Orchestrator: x=570, y=400, width=140, height=80
Target: x=120, y=550, width=160, height=80
orchestrator_bottom = (570 + 140/2, 400 + 80) = (640, 480)
target_top = (120 + 160/2, 550) = (200, 550)
Arrow x = 640, y = 480
Horizontal offset = 200 - 640 = -440
Vertical offset = 550 - 480 = 70
Points = [[0, 0], [-440, 0], [-440, 70]] // Left first, then down
```
### U-turn (Callback)
```
Source: x=570, y=400, width=140, height=80
Target: x=550, y=270, width=180, height=90
Connection: Right of source -> Right of target
source_right = (570 + 140, 400 + 80/2) = (710, 440)
target_right = (550 + 180, 270 + 90/2) = (730, 315)
Arrow x = 710, y = 440
Vertical distance = 315 - 440 = -125
Final x offset = 730 - 710 = 20
Points = [[0, 0], [50, 0], [50, -125], [20, -125]]
// Right 50px (clearance), up 125px, left 30px
```
---
## Staggering Multiple Arrows
When N arrows leave from same edge, spread evenly:
```
FUNCTION getStaggeredPositions(shape, edge, numArrows):
positions = []
FOR i FROM 0 TO numArrows-1:
percentage = 0.2 + (0.6 * i / (numArrows - 1))
IF edge == "bottom" OR edge == "top":
x = shape.x + shape.width * percentage
y = (edge == "bottom") ? shape.y + shape.height : shape.y
ELSE:
x = (edge == "right") ? shape.x + shape.width : shape.x
y = shape.y + shape.height * percentage
positions.append({x, y})
RETURN positions
// Examples:
// 2 arrows: 20%, 80%
// 3 arrows: 20%, 50%, 80%
// 5 arrows: 20%, 35%, 50%, 65%, 80%
```
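In executable form, with a guard for the single-arrow case that the pseudocode's division by `numArrows - 1` would break on (placing one arrow at the 50% midpoint is an assumption):

```python
def staggered_positions(shape, edge, num_arrows):
    """Spread num_arrows start points between 20% and 80% of a shape edge."""
    positions = []
    for i in range(num_arrows):
        # One arrow sits at the midpoint; more are spread 20%..80%.
        pct = 0.5 if num_arrows == 1 else 0.2 + 0.6 * i / (num_arrows - 1)
        if edge in ("top", "bottom"):
            x = shape["x"] + shape["width"] * pct
            y = shape["y"] + (shape["height"] if edge == "bottom" else 0)
        else:
            x = shape["x"] + (shape["width"] if edge == "right" else 0)
            y = shape["y"] + shape["height"] * pct
        positions.append((x, y))
    return positions
```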
---
## Arrow Bindings
For better visual attachment, use `startBinding` and `endBinding`:
```json
{
"id": "arrow-workflow-convert",
"type": "arrow",
"x": 525,
"y": 420,
"width": 325,
"height": 125,
"points": [[0, 0], [-325, 0], [-325, 125]],
"roughness": 0,
"roundness": null,
"elbowed": true,
"startBinding": {
"elementId": "cloud-workflows",
"focus": 0,
"gap": 1,
"fixedPoint": [0.5, 1]
},
"endBinding": {
"elementId": "convert-pdf-service",
"focus": 0,
"gap": 1,
"fixedPoint": [0.5, 0]
},
"startArrowhead": null,
"endArrowhead": "arrow"
}
```
### fixedPoint Values
- Top center: `[0.5, 0]`
- Bottom center: `[0.5, 1]`
- Left center: `[0, 0.5]`
- Right center: `[1, 0.5]`
### Update Shape boundElements
```json
{
"id": "cloud-workflows",
"boundElements": [
{ "type": "text", "id": "cloud-workflows-text" },
{ "type": "arrow", "id": "arrow-workflow-convert" }
]
}
```
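The binding is only coherent when both sides agree: the arrow's `startBinding`/`endBinding` must be mirrored in the target shape's `boundElements`. A Python sketch that cross-checks this (the `unbound_arrows` helper is illustrative, not an Excalidraw API):

```python
def unbound_arrows(elements):
    """Return ids of arrows whose bindings are not mirrored in the target's boundElements."""
    by_id = {e["id"]: e for e in elements}
    bad = []
    for e in elements:
        if e["type"] != "arrow":
            continue
        for key in ("startBinding", "endBinding"):
            binding = e.get(key)
            if not binding:
                continue
            target = by_id.get(binding["elementId"])
            bound = (target or {}).get("boundElements") or []
            if not any(b.get("id") == e["id"] for b in bound):
                bad.append(e["id"])
    return bad
```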
---
## Bidirectional Arrows
For two-way data flows:
```json
{
"type": "arrow",
"startArrowhead": "arrow",
"endArrowhead": "arrow"
}
```
Arrowhead options: `null`, `"arrow"`, `"bar"`, `"dot"`, `"triangle"`
---
## Arrow Labels
Position standalone text near arrow midpoint:
```json
{
"id": "arrow-api-db-label",
"type": "text",
"x": 305, // Arrow x + offset
"y": 245, // Arrow midpoint
"text": "SQL",
"fontSize": 12,
"containerId": null,
"backgroundColor": "#ffffff"
}
```
**Positioning formula:**
- Vertical: `label.y = arrow.y + (total_height / 2)`
- Horizontal: `label.x = arrow.x + (total_width / 2)`
- L-shaped: Position at corner or longest segment midpoint
---
## Width/Height Calculation
Arrow `width` and `height` = bounding box of path:
```
points = [[0, 0], [-440, 0], [-440, 70]]
width = abs(-440) = 440
height = abs(70) = 70
points = [[0, 0], [50, 0], [50, -125], [20, -125]]
width = max(abs(50), abs(20)) = 50
height = abs(-125) = 125
```
View File
@@ -0,0 +1,67 @@
# Color Palette & Brand Style
**This is the single source of truth for all colors and brand-specific styles.** To customize diagrams for your own brand, edit this file — everything else in the skill is universal.
---
## Shape Colors (Semantic)
Colors encode meaning, not decoration. Each semantic purpose has a fill/stroke pair.
| Semantic Purpose | Fill | Stroke |
|------------------|------|--------|
| Primary/Neutral | `#3b82f6` | `#1e3a5f` |
| Secondary | `#60a5fa` | `#1e3a5f` |
| Tertiary | `#93c5fd` | `#1e3a5f` |
| Start/Trigger | `#fed7aa` | `#c2410c` |
| End/Success | `#a7f3d0` | `#047857` |
| Warning/Reset | `#fee2e2` | `#dc2626` |
| Decision | `#fef3c7` | `#b45309` |
| AI/LLM | `#ddd6fe` | `#6d28d9` |
| Inactive/Disabled | `#dbeafe` | `#1e40af` (use dashed stroke) |
| Error | `#fecaca` | `#b91c1c` |
**Rule**: Always pair a darker stroke with a lighter fill for contrast.
---
## Text Colors (Hierarchy)
Use color on free-floating text to create visual hierarchy without containers.
| Level | Color | Use For |
|-------|-------|---------|
| Title | `#1e40af` | Section headings, major labels |
| Subtitle | `#3b82f6` | Subheadings, secondary labels |
| Body/Detail | `#64748b` | Descriptions, annotations, metadata |
| On light fills | `#374151` | Text inside light-colored shapes |
| On dark fills | `#ffffff` | Text inside dark-colored shapes |
---
## Evidence Artifact Colors
Used for code snippets, data examples, and other concrete evidence inside technical diagrams.
| Artifact | Background | Text Color |
|----------|-----------|------------|
| Code snippet | `#1e293b` | Syntax-colored (language-appropriate) |
| JSON/data example | `#1e293b` | `#22c55e` (green) |
---
## Default Stroke & Line Colors
| Element | Color |
|---------|-------|
| Arrows | Use the stroke color of the source element's semantic purpose |
| Structural lines (dividers, trees, timelines) | Primary stroke (`#1e3a5f`) or Slate (`#64748b`) |
| Marker dots (fill + stroke) | Primary fill (`#3b82f6`) |
---
## Background
| Property | Value |
|----------|-------|
| Canvas background | `#ffffff` |
View File
@@ -1,91 +0,0 @@
# Color Palettes Reference
Color schemes for different platforms and component types.
---
## Default Palette (Platform-Agnostic)
| Component Type | Background | Stroke | Example |
|----------------|------------|--------|---------|
| Frontend/UI | `#a5d8ff` | `#1971c2` | Next.js, React apps |
| Backend/API | `#d0bfff` | `#7048e8` | API servers, processors |
| Database | `#b2f2bb` | `#2f9e44` | PostgreSQL, MySQL, MongoDB |
| Storage | `#ffec99` | `#f08c00` | Object storage, file systems |
| AI/ML Services | `#e599f7` | `#9c36b5` | ML models, AI APIs |
| External APIs | `#ffc9c9` | `#e03131` | Third-party services |
| Orchestration | `#ffa8a8` | `#c92a2a` | Workflows, schedulers |
| Validation | `#ffd8a8` | `#e8590c` | Validators, checkers |
| Network/Security | `#dee2e6` | `#495057` | VPC, IAM, firewalls |
| Classification | `#99e9f2` | `#0c8599` | Routers, classifiers |
| Users/Actors | `#e7f5ff` | `#1971c2` | User ellipses |
| Message Queue | `#fff3bf` | `#fab005` | Kafka, RabbitMQ, SQS |
| Cache | `#ffe8cc` | `#fd7e14` | Redis, Memcached |
| Monitoring | `#d3f9d8` | `#40c057` | Prometheus, Grafana |
---
## AWS Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute (EC2, Lambda, ECS) | `#ff9900` | `#cc7a00` |
| Storage (S3, EBS) | `#3f8624` | `#2d6119` |
| Database (RDS, DynamoDB) | `#3b48cc` | `#2d3899` |
| Networking (VPC, Route53) | `#8c4fff` | `#6b3dcc` |
| Security (IAM, KMS) | `#dd344c` | `#b12a3d` |
| Analytics (Kinesis, Athena) | `#8c4fff` | `#6b3dcc` |
| ML (SageMaker, Bedrock) | `#01a88d` | `#017d69` |
---
## Azure Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute | `#0078d4` | `#005a9e` |
| Storage | `#50e6ff` | `#3cb5cc` |
| Database | `#0078d4` | `#005a9e` |
| Networking | `#773adc` | `#5a2ca8` |
| Security | `#ff8c00` | `#cc7000` |
| AI/ML | `#50e6ff` | `#3cb5cc` |
---
## GCP Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute (GCE, Cloud Run) | `#4285f4` | `#3367d6` |
| Storage (GCS) | `#34a853` | `#2d8e47` |
| Database (Cloud SQL, Firestore) | `#ea4335` | `#c53929` |
| Networking | `#fbbc04` | `#d99e04` |
| AI/ML (Vertex AI) | `#9334e6` | `#7627b8` |
---
## Kubernetes Palette
| Component | Background | Stroke |
|-----------|------------|--------|
| Pod | `#326ce5` | `#2756b8` |
| Service | `#326ce5` | `#2756b8` |
| Deployment | `#326ce5` | `#2756b8` |
| ConfigMap/Secret | `#7f8c8d` | `#626d6e` |
| Ingress | `#00d4aa` | `#00a888` |
| Node | `#303030` | `#1a1a1a` |
| Namespace | `#f0f0f0` | `#c0c0c0` (dashed) |
---
## Diagram Type Suggestions
| Diagram Type | Recommended Layout | Key Elements |
|--------------|-------------------|--------------|
| Microservices | Vertical flow | Services, databases, queues, API gateway |
| Data Pipeline | Horizontal flow | Sources, transformers, sinks, storage |
| Event-Driven | Hub-and-spoke | Event bus center, producers/consumers |
| Kubernetes | Layered groups | Namespace boxes, pods inside deployments |
| CI/CD | Horizontal flow | Source -> Build -> Test -> Deploy -> Monitor |
| Network | Hierarchical | Internet -> LB -> VPC -> Subnets -> Instances |
| User Flow | Swimlanes | User actions, system responses, external calls |
View File
@@ -0,0 +1,182 @@
# Element Templates
Copy-paste JSON templates for each Excalidraw element type. The `strokeColor` and `backgroundColor` values are placeholders — always pull actual colors from `color-palette.md` based on the element's semantic purpose.
## Free-Floating Text (no container)
```json
{
"type": "text",
"id": "label1",
"x": 100, "y": 100,
"width": 200, "height": 25,
"text": "Section Title",
"originalText": "Section Title",
"fontSize": 20,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"strokeColor": "<title color from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 11111,
"version": 1,
"versionNonce": 22222,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"containerId": null,
"lineHeight": 1.25
}
```
## Line (structural, not arrow)
```json
{
"type": "line",
"id": "line1",
"x": 100, "y": 100,
"width": 0, "height": 200,
"strokeColor": "<structural line color from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 44444,
"version": 1,
"versionNonce": 55555,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"points": [[0, 0], [0, 200]]
}
```
## Small Marker Dot
```json
{
"type": "ellipse",
"id": "dot1",
"x": 94, "y": 94,
"width": 12, "height": 12,
"strokeColor": "<marker dot color from palette>",
"backgroundColor": "<marker dot color from palette>",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 66666,
"version": 1,
"versionNonce": 77777,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false
}
```
## Rectangle
```json
{
"type": "rectangle",
"id": "elem1",
"x": 100, "y": 100, "width": 180, "height": 90,
"strokeColor": "<stroke from palette based on semantic purpose>",
"backgroundColor": "<fill from palette based on semantic purpose>",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 12345,
"version": 1,
"versionNonce": 67890,
"isDeleted": false,
"groupIds": [],
"boundElements": [{"id": "text1", "type": "text"}],
"link": null,
"locked": false,
"roundness": {"type": 3}
}
```
## Text (centered in shape)
```json
{
"type": "text",
"id": "text1",
"x": 130, "y": 132,
"width": 120, "height": 25,
"text": "Process",
"originalText": "Process",
"fontSize": 16,
"fontFamily": 3,
"textAlign": "center",
"verticalAlign": "middle",
"strokeColor": "<text color — match parent shape's stroke or use 'on light/dark fills' from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 11111,
"version": 1,
"versionNonce": 22222,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"containerId": "elem1",
"lineHeight": 1.25
}
```
## Arrow
```json
{
"type": "arrow",
"id": "arrow1",
"x": 282, "y": 145, "width": 118, "height": 0,
"strokeColor": "<arrow color — typically matches source element's stroke from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 33333,
"version": 1,
"versionNonce": 44444,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"points": [[0, 0], [118, 0]],
"startBinding": {"elementId": "elem1", "focus": 0, "gap": 2},
"endBinding": {"elementId": "elem2", "focus": 0, "gap": 2},
"startArrowhead": null,
"endArrowhead": "arrow"
}
```
For curves: use 3+ points in `points` array.
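The rectangle and contained-text templates are linked in two directions: the shape's `boundElements` points at the text, and the text's `containerId` points back at the shape. A small Python sketch that wires the pair (the `bind_label` helper is illustrative, not an Excalidraw API):

```python
def bind_label(shape, label):
    """Create the two-way shape<->text binding the templates above rely on."""
    bound = shape.get("boundElements") or []
    shape["boundElements"] = bound + [{"id": label["id"], "type": "text"}]
    label["containerId"] = shape["id"]
    # Roughly center the label; Excalidraw recomputes exact layout on load.
    label["x"] = shape["x"] + (shape["width"] - label["width"]) / 2
    label["y"] = shape["y"] + (shape["height"] - label["height"]) / 2
```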
View File
@@ -1,381 +0,0 @@
# Complete Examples Reference
Full JSON examples showing proper element structure.
---
## 3-Tier Architecture Example
This is a REFERENCE showing JSON structure. Replace IDs, labels, positions, and colors based on discovered components.
```json
{
"type": "excalidraw",
"version": 2,
"source": "claude-code-excalidraw-skill",
"elements": [
{
"id": "user",
"type": "ellipse",
"x": 150,
"y": 50,
"width": 100,
"height": 60,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#e7f5ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 2 },
"seed": 1,
"version": 1,
"versionNonce": 1,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "user-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "user-text",
"type": "text",
"x": 175,
"y": 67,
"width": 50,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 2,
"version": 1,
"versionNonce": 2,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "User",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "user",
"originalText": "User",
"lineHeight": 1.25
},
{
"id": "frontend",
"type": "rectangle",
"x": 100,
"y": 180,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 3,
"version": 1,
"versionNonce": 3,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "frontend-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "frontend-text",
"type": "text",
"x": 105,
"y": 195,
"width": 190,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 4,
"version": 1,
"versionNonce": 4,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "Frontend\nNext.js",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "frontend",
"originalText": "Frontend\nNext.js",
"lineHeight": 1.25
},
{
"id": "database",
"type": "rectangle",
"x": 100,
"y": 330,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#2f9e44",
"backgroundColor": "#b2f2bb",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 5,
"version": 1,
"versionNonce": 5,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "database-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "database-text",
"type": "text",
"x": 105,
"y": 345,
"width": 190,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 6,
"version": 1,
"versionNonce": 6,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "Database\nPostgreSQL",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "database",
"originalText": "Database\nPostgreSQL",
"lineHeight": 1.25
},
{
"id": "arrow-user-frontend",
"type": "arrow",
"x": 200,
"y": 115,
"width": 0,
"height": 60,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 7,
"version": 1,
"versionNonce": 7,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"points": [[0, 0], [0, 60]],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": true
},
{
"id": "arrow-frontend-database",
"type": "arrow",
"x": 200,
"y": 265,
"width": 0,
"height": 60,
"angle": 0,
"strokeColor": "#2f9e44",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 8,
"version": 1,
"versionNonce": 8,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"points": [[0, 0], [0, 60]],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": true
}
],
"appState": {
"gridSize": 20,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
```
---
## Layout Patterns
### Vertical Flow (Most Common)
```
Grid positioning:
- Column width: 200-250px
- Row height: 130-150px
- Element size: 160-200px x 80-90px
- Spacing: 40-50px between elements
Row positions (y):
Row 0: 20 (title)
Row 1: 100 (users/entry points)
Row 2: 230 (frontend/gateway)
Row 3: 380 (orchestration)
Row 4: 530 (services)
Row 5: 680 (data layer)
Row 6: 830 (external services)
Column positions (x):
Col 0: 100
Col 1: 300
Col 2: 500
Col 3: 700
Col 4: 900
```
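The grid above can be expressed as a small lookup helper; this is an illustrative sketch, not part of the skill's scripts:

```python
def grid_position(row: int, col: int) -> tuple[int, int]:
    """Map a (row, col) cell to canvas x/y using the spacing above."""
    cols = [100, 300, 500, 700, 900]
    rows = [20, 100, 230, 380, 530, 680, 830]
    return cols[col], rows[row]

# Frontend/gateway tier (row 2), second column:
print(grid_position(2, 1))  # (300, 230)
```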
### Horizontal Flow (Pipelines)
```
Stage positions (x):
Stage 0: 100 (input/source)
Stage 1: 350 (transform 1)
Stage 2: 600 (transform 2)
Stage 3: 850 (transform 3)
Stage 4: 1100 (output/sink)
All stages at same y: 200
Arrows: "right" -> "left" connections
```
### Hub-and-Spoke
```
Center hub: x=500, y=350
8 positions at 45° increments:
N: (500, 150)
NE: (640, 210)
E: (700, 350)
SE: (640, 490)
S: (500, 550)
SW: (360, 490)
W: (300, 350)
NW: (360, 210)
```
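The spoke coordinates above are approximately a 200 px radius circle around the hub, with offsets snapped to a 10 px grid; a sketch of how they could be derived:

```python
import math

def spoke_positions(cx, cy, radius=200, n=8):
    """Positions at 45-degree steps around (cx, cy), starting north, snapped to 10 px."""
    out = []
    for i in range(n):
        angle = math.radians(90 - i * 360 / n)  # N, NE, E, ... going clockwise
        x = cx + radius * math.cos(angle)
        y = cy - radius * math.sin(angle)
        out.append((round(x / 10) * 10, round(y / 10) * 10))
    return out

print(spoke_positions(500, 350)[:3])  # [(500, 150), (640, 210), (700, 350)]
```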
---
## Complex Architecture Layout
```
Row 0: Title/Header (y: 20)
Row 1: Users/Clients (y: 80)
Row 2: Frontend/Gateway (y: 200)
Row 3: Orchestration (y: 350)
Row 4: Processing Services (y: 550)
Row 5: Data Layer (y: 680)
Row 6: External Services (y: 830)
Columns (x):
Col 0: 120
Col 1: 320
Col 2: 520
Col 3: 720
Col 4: 920
```
---
## Diagram Complexity Guidelines
| Complexity | Max Elements | Max Arrows | Approach |
|------------|-------------|------------|----------|
| Simple | 5-10 | 5-10 | Single file, no groups |
| Medium | 10-25 | 15-30 | Use grouping rectangles |
| Complex | 25-50 | 30-60 | Split into multiple diagrams |
| Very Complex | 50+ | 60+ | Multiple focused diagrams |
**When to split:**
- More than 50 elements
- Create: `architecture-overview.excalidraw`, `architecture-data-layer.excalidraw`
**When to use groups:**
- 3+ related services
- Same deployment unit
- Logical boundaries (VPC, Security Zone)
@@ -1,210 +0,0 @@
# Excalidraw JSON Format Reference
Complete reference for Excalidraw JSON structure and element types.
---
## File Structure
```json
{
"type": "excalidraw",
"version": 2,
"source": "claude-code-excalidraw-skill",
"elements": [],
"appState": {
"gridSize": 20,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
```
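As a sketch, the skeleton can be produced programmatically (function name is illustrative):

```python
import json

def empty_diagram() -> dict:
    """Return a minimal valid Excalidraw document with no elements."""
    return {
        "type": "excalidraw",
        "version": 2,
        "source": "claude-code-excalidraw-skill",
        "elements": [],
        "appState": {"gridSize": 20, "viewBackgroundColor": "#ffffff"},
        "files": {},
    }

print(json.dumps(empty_diagram(), indent=2))
```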
---
## Element Types
| Type | Use For | Arrow Reliability |
|------|---------|-------------------|
| `rectangle` | Services, components, databases, containers, orchestrators, decision points | Excellent |
| `ellipse` | Users, external systems, start/end points | Good |
| `text` | Labels inside shapes, titles, annotations | N/A |
| `arrow` | Data flow, connections, dependencies | N/A |
| `line` | Grouping boundaries, separators | N/A |
### BANNED: Diamond Shapes
**NEVER use `type: "diamond"` in generated diagrams.**
Diamond arrow connections are fundamentally broken in raw Excalidraw JSON:
- Excalidraw applies `roundness` to diamond vertices during rendering
- Visual edges appear offset from mathematical edge points
- No offset formula reliably compensates
- Arrows appear disconnected/floating
**Use styled rectangles instead** for visual distinction:
| Semantic Meaning | Rectangle Style |
|------------------|-----------------|
| Orchestrator/Hub | Coral (`#ffa8a8`/`#c92a2a`) + strokeWidth: 3 |
| Decision Point | Orange (`#ffd8a8`/`#e8590c`) + dashed stroke |
| Central Router | Larger size + bold color |
---
## Required Element Properties
Every element MUST have these properties:
```json
{
"id": "unique-id-string",
"type": "rectangle",
"x": 100,
"y": 100,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 1,
"version": 1,
"versionNonce": 1,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false
}
```
---
## Text Inside Shapes (Labels)
**Every labeled shape requires TWO elements:**
### Shape with boundElements
```json
{
"id": "{component-id}",
"type": "rectangle",
"x": 500,
"y": 200,
"width": 200,
"height": 90,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"boundElements": [{ "type": "text", "id": "{component-id}-text" }],
// ... other required properties
}
```
### Text with containerId
```json
{
"id": "{component-id}-text",
"type": "text",
"x": 505, // shape.x + 5
"y": 220, // shape.y + (shape.height - text.height) / 2
"width": 190, // shape.width - 10
"height": 50,
"text": "{Component Name}\n{Subtitle}",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"containerId": "{component-id}",
"originalText": "{Component Name}\n{Subtitle}",
"lineHeight": 1.25,
// ... other required properties
}
```
### DO NOT Use the `label` Property
The `label` property is for the JavaScript API, NOT raw JSON files:
```json
// WRONG - will show empty boxes
{ "type": "rectangle", "label": { "text": "My Label" } }
// CORRECT - requires TWO elements
// 1. Shape with boundElements reference
// 2. Separate text element with containerId
```
### Text Positioning
- Text `x` = shape `x` + 5
- Text `y` = shape `y` + (shape.height - text.height) / 2
- Text `width` = shape `width` - 10
- Use `\n` for multi-line labels
- Always use `textAlign: "center"` and `verticalAlign: "middle"`
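The positioning rules above reduce to a small helper; a sketch (names are illustrative):

```python
def text_geometry(shape_x, shape_y, shape_w, shape_h, text_h):
    """x, y, width for a bound text element, per the rules above."""
    return (
        shape_x + 5,                       # text x
        shape_y + (shape_h - text_h) / 2,  # vertically centered
        shape_w - 10,                      # text width
    )

print(text_geometry(500, 200, 200, 90, 50))  # (505, 220.0, 190)
```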
### ID Naming Convention
Always use pattern: `{shape-id}-text` for text element IDs.
---
## Dynamic ID Generation
IDs and labels are generated from codebase analysis:
| Discovered Component | Generated ID | Generated Label |
|---------------------|--------------|-----------------|
| Express API server | `express-api` | `"API Server\nExpress.js"` |
| PostgreSQL database | `postgres-db` | `"PostgreSQL\nDatabase"` |
| Redis cache | `redis-cache` | `"Redis\nCache Layer"` |
| S3 bucket for uploads | `s3-uploads` | `"S3 Bucket\nuploads/"` |
| Lambda function | `lambda-processor` | `"Lambda\nProcessor"` |
| React frontend | `react-frontend` | `"React App\nFrontend"` |
---
## Grouping with Dashed Rectangles
For logical groupings (namespaces, VPCs, pipelines):
```json
{
"id": "group-ai-pipeline",
"type": "rectangle",
"x": 100,
"y": 500,
"width": 1000,
"height": 280,
"strokeColor": "#9c36b5",
"backgroundColor": "transparent",
"strokeStyle": "dashed",
"roughness": 0,
"roundness": null,
"boundElements": null
}
```
Group labels are standalone text (no containerId) at top-left:
```json
{
"id": "group-ai-pipeline-label",
"type": "text",
"x": 120,
"y": 510,
"text": "AI Processing Pipeline (Cloud Run)",
"textAlign": "left",
"verticalAlign": "top",
"containerId": null
}
```
@@ -0,0 +1,71 @@
# Excalidraw JSON Schema
## Element Types
| Type | Use For |
|------|---------|
| `rectangle` | Processes, actions, components |
| `ellipse` | Entry/exit points, external systems |
| `diamond` | Decisions, conditionals |
| `arrow` | Connections between shapes |
| `text` | Labels inside shapes |
| `line` | Non-arrow connections |
| `frame` | Grouping containers |
## Common Properties
All elements share these:
| Property | Type | Description |
|----------|------|-------------|
| `id` | string | Unique identifier |
| `type` | string | Element type |
| `x`, `y` | number | Position in pixels |
| `width`, `height` | number | Size in pixels |
| `strokeColor` | string | Border color (hex) |
| `backgroundColor` | string | Fill color (hex or "transparent") |
| `fillStyle` | string | "solid", "hachure", "cross-hatch" |
| `strokeWidth` | number | 1, 2, or 4 |
| `strokeStyle` | string | "solid", "dashed", "dotted" |
| `roughness` | number | 0 (smooth), 1 (default), 2 (rough) |
| `opacity` | number | 0-100 |
| `seed` | number | Random seed for roughness |
## Text-Specific Properties
| Property | Description |
|----------|-------------|
| `text` | The display text |
| `originalText` | Same as text |
| `fontSize` | Size in pixels (16-20 recommended) |
| `fontFamily` | 3 for monospace (use this) |
| `textAlign` | "left", "center", "right" |
| `verticalAlign` | "top", "middle", "bottom" |
| `containerId` | ID of parent shape |
## Arrow-Specific Properties
| Property | Description |
|----------|-------------|
| `points` | Array of [x, y] coordinates |
| `startBinding` | Connection to start shape |
| `endBinding` | Connection to end shape |
| `startArrowhead` | null, "arrow", "bar", "dot", "triangle" |
| `endArrowhead` | null, "arrow", "bar", "dot", "triangle" |
## Binding Format
```json
{
"elementId": "shapeId",
"focus": 0,
"gap": 2
}
```
## Rectangle Roundness
Add for rounded corners:
```json
"roundness": { "type": 3 }
```
@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""Render Excalidraw JSON to PNG using Playwright + headless Chromium.
Usage:
python3 render_excalidraw.py <path-to-file.excalidraw> [--output path.png] [--scale 2] [--width 1920]
Dependencies (playwright, chromium) are provided by the Nix flake / direnv environment.
"""
from __future__ import annotations
import argparse
import json
import sys
from pathlib import Path
def validate_excalidraw(data: dict) -> list[str]:
    """Validate Excalidraw JSON structure. Returns list of errors (empty = valid)."""
    errors: list[str] = []
    if data.get("type") != "excalidraw":
        errors.append(f"Expected type 'excalidraw', got '{data.get('type')}'")
    if "elements" not in data:
        errors.append("Missing 'elements' array")
    elif not isinstance(data["elements"], list):
        errors.append("'elements' must be an array")
    elif len(data["elements"]) == 0:
        errors.append("'elements' array is empty — nothing to render")
    return errors


def compute_bounding_box(elements: list[dict]) -> tuple[float, float, float, float]:
    """Compute bounding box (min_x, min_y, max_x, max_y) across all elements."""
    min_x = float("inf")
    min_y = float("inf")
    max_x = float("-inf")
    max_y = float("-inf")
    for el in elements:
        if el.get("isDeleted"):
            continue
        x = el.get("x", 0)
        y = el.get("y", 0)
        w = el.get("width", 0)
        h = el.get("height", 0)
        # For arrows/lines, points array defines the shape relative to x,y
        if el.get("type") in ("arrow", "line") and "points" in el:
            for px, py in el["points"]:
                min_x = min(min_x, x + px)
                min_y = min(min_y, y + py)
                max_x = max(max_x, x + px)
                max_y = max(max_y, y + py)
        else:
            min_x = min(min_x, x)
            min_y = min(min_y, y)
            max_x = max(max_x, x + abs(w))
            max_y = max(max_y, y + abs(h))
    if min_x == float("inf"):
        return (0, 0, 800, 600)
    return (min_x, min_y, max_x, max_y)


def render(
    excalidraw_path: Path,
    output_path: Path | None = None,
    scale: int = 2,
    max_width: int = 1920,
) -> Path:
    """Render an .excalidraw file to PNG. Returns the output PNG path."""
    # Import playwright here so validation errors show before import errors
    try:
        from playwright.sync_api import sync_playwright
    except ImportError:
        print("ERROR: playwright not installed.", file=sys.stderr)
        print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
        sys.exit(1)

    # Read and validate
    raw = excalidraw_path.read_text(encoding="utf-8")
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        print(f"ERROR: Invalid JSON in {excalidraw_path}: {e}", file=sys.stderr)
        sys.exit(1)
    errors = validate_excalidraw(data)
    if errors:
        print("ERROR: Invalid Excalidraw file:", file=sys.stderr)
        for err in errors:
            print(f" - {err}", file=sys.stderr)
        sys.exit(1)

    # Compute viewport size from element bounding box
    elements = [e for e in data["elements"] if not e.get("isDeleted")]
    min_x, min_y, max_x, max_y = compute_bounding_box(elements)
    padding = 80
    diagram_w = max_x - min_x + padding * 2
    diagram_h = max_y - min_y + padding * 2
    # Cap viewport width, let height be natural
    vp_width = min(int(diagram_w), max_width)
    vp_height = max(int(diagram_h), 600)

    # Output path
    if output_path is None:
        output_path = excalidraw_path.with_suffix(".png")

    # Template path (same directory as this script)
    template_path = Path(__file__).parent / "render_template.html"
    if not template_path.exists():
        print(f"ERROR: Template not found at {template_path}", file=sys.stderr)
        sys.exit(1)
    template_url = template_path.as_uri()

    with sync_playwright() as p:
        try:
            browser = p.chromium.launch(headless=True)
        except Exception as e:
            if "Executable doesn't exist" in str(e) or "browserType.launch" in str(e):
                print("ERROR: Chromium not installed for Playwright.", file=sys.stderr)
                print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
                sys.exit(1)
            raise
        page = browser.new_page(
            viewport={"width": vp_width, "height": vp_height},
            device_scale_factor=scale,
        )
        # Load the template
        page.goto(template_url)
        # Wait for the ES module to load (imports from esm.sh)
        page.wait_for_function("window.__moduleReady === true", timeout=30000)
        # Inject the diagram data and render
        json_str = json.dumps(data)
        result = page.evaluate(f"window.renderDiagram({json_str})")
        if not result or not result.get("success"):
            error_msg = (
                result.get("error", "Unknown render error")
                if result
                else "renderDiagram returned null"
            )
            print(f"ERROR: Render failed: {error_msg}", file=sys.stderr)
            browser.close()
            sys.exit(1)
        # Wait for render completion signal
        page.wait_for_function("window.__renderComplete === true", timeout=15000)
        # Screenshot the SVG element
        svg_el = page.query_selector("#root svg")
        if svg_el is None:
            print("ERROR: No SVG element found after render.", file=sys.stderr)
            browser.close()
            sys.exit(1)
        svg_el.screenshot(path=str(output_path))
        browser.close()
    return output_path


def main() -> None:
    """Entry point for rendering Excalidraw JSON files to PNG."""
    parser = argparse.ArgumentParser(description="Render Excalidraw JSON to PNG")
    parser.add_argument("input", type=Path, help="Path to .excalidraw JSON file")
    parser.add_argument(
        "--output",
        "-o",
        type=Path,
        default=None,
        help="Output PNG path (default: same name with .png)",
    )
    parser.add_argument(
        "--scale", "-s", type=int, default=2, help="Device scale factor (default: 2)"
    )
    parser.add_argument(
        "--width",
        "-w",
        type=int,
        default=1920,
        help="Max viewport width (default: 1920)",
    )
    args = parser.parse_args()
    if not args.input.exists():
        print(f"ERROR: File not found: {args.input}", file=sys.stderr)
        sys.exit(1)
    png_path = render(args.input, args.output, args.scale, args.width)
    print(str(png_path))


if __name__ == "__main__":
    main()
@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body { background: #ffffff; overflow: hidden; }
    #root { display: inline-block; }
    #root svg { display: block; }
  </style>
</head>
<body>
  <div id="root"></div>
  <script type="module">
    import { exportToSvg } from "https://esm.sh/@excalidraw/excalidraw?bundle";

    window.renderDiagram = async function(jsonData) {
      try {
        const data = typeof jsonData === "string" ? JSON.parse(jsonData) : jsonData;
        const elements = data.elements || [];
        const appState = data.appState || {};
        const files = data.files || {};

        // Force white background in appState
        appState.viewBackgroundColor = appState.viewBackgroundColor || "#ffffff";
        appState.exportWithDarkMode = false;

        const svg = await exportToSvg({
          elements: elements,
          appState: {
            ...appState,
            exportBackground: true,
          },
          files: files,
        });

        // Clear any previous render
        const root = document.getElementById("root");
        root.innerHTML = "";
        root.appendChild(svg);

        window.__renderComplete = true;
        window.__renderError = null;
        return { success: true, width: svg.getAttribute("width"), height: svg.getAttribute("height") };
      } catch (err) {
        window.__renderComplete = true;
        window.__renderError = err.message;
        return { success: false, error: err.message };
      }
    };

    // Signal that the module is loaded and ready
    window.__moduleReady = true;
  </script>
</body>
</html>
@@ -1,182 +0,0 @@
# Validation Reference
Checklists, validation algorithms, and common bug fixes.
---
## Pre-Flight Validation Algorithm
Run BEFORE writing the file:
```
FUNCTION validateDiagram(elements):
    errors = []

    // 1. Validate shape-text bindings
    FOR each shape IN elements WHERE shape.boundElements != null:
        FOR each binding IN shape.boundElements:
            textElement = findById(elements, binding.id)
            IF textElement == null:
                errors.append("Shape {shape.id} references missing text {binding.id}")
            ELSE IF textElement.containerId != shape.id:
                errors.append("Text containerId doesn't match shape")

    // 2. Validate arrow connections
    FOR each arrow IN elements WHERE arrow.type == "arrow":
        sourceShape = findShapeNear(elements, arrow.x, arrow.y)
        IF sourceShape == null:
            errors.append("Arrow {arrow.id} doesn't start from shape edge")

        finalPoint = arrow.points[arrow.points.length - 1]
        endX = arrow.x + finalPoint[0]
        endY = arrow.y + finalPoint[1]
        targetShape = findShapeNear(elements, endX, endY)
        IF targetShape == null:
            errors.append("Arrow {arrow.id} doesn't end at shape edge")

        IF arrow.points.length > 2:
            IF arrow.elbowed != true:
                errors.append("Arrow {arrow.id} missing elbowed:true")
            IF arrow.roundness != null:
                errors.append("Arrow {arrow.id} should have roundness:null")

    // 3. Validate unique IDs
    ids = [el.id for el in elements]
    duplicates = findDuplicates(ids)
    IF duplicates.length > 0:
        errors.append("Duplicate IDs: {duplicates}")

    // 4. Validate bounding boxes
    FOR each arrow IN elements WHERE arrow.type == "arrow":
        maxX = max(abs(p[0]) for p in arrow.points)
        maxY = max(abs(p[1]) for p in arrow.points)
        IF arrow.width < maxX OR arrow.height < maxY:
            errors.append("Arrow {arrow.id} bounding box too small")

    RETURN errors

FUNCTION findShapeNear(elements, x, y, tolerance=15):
    FOR each shape IN elements WHERE shape.type IN ["rectangle", "ellipse"]:
        edges = [
            (shape.x + shape.width/2, shape.y),                 // top
            (shape.x + shape.width/2, shape.y + shape.height),  // bottom
            (shape.x, shape.y + shape.height/2),                // left
            (shape.x + shape.width, shape.y + shape.height/2)   // right
        ]
        FOR each edge IN edges:
            IF abs(edge.x - x) < tolerance AND abs(edge.y - y) < tolerance:
                RETURN shape
    RETURN null
```
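A runnable Python sketch of checks 2 and 3 above (the pseudocode remains the authoritative description; names mirror it):

```python
def find_shape_near(elements, x, y, tolerance=15):
    """First rectangle/ellipse whose edge midpoint is within tolerance of (x, y)."""
    for shape in elements:
        if shape.get("type") not in ("rectangle", "ellipse"):
            continue
        sx, sy, w, h = shape["x"], shape["y"], shape["width"], shape["height"]
        edges = [(sx + w / 2, sy), (sx + w / 2, sy + h),
                 (sx, sy + h / 2), (sx + w, sy + h / 2)]
        if any(abs(ex - x) < tolerance and abs(ey - y) < tolerance for ex, ey in edges):
            return shape
    return None

def duplicate_ids(elements):
    """IDs that appear more than once."""
    seen, dupes = set(), set()
    for el in elements:
        (dupes if el["id"] in seen else seen).add(el["id"])
    return sorted(dupes)

shapes = [{"id": "a", "type": "rectangle", "x": 100, "y": 100, "width": 200, "height": 80}]
print(find_shape_near(shapes, 200, 180)["id"])  # a  (bottom-edge midpoint)
print(duplicate_ids(shapes + shapes))           # ['a']
```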
---
## Checklists
### Before Generating
- [ ] Identified all components from codebase
- [ ] Mapped all connections/data flows
- [ ] Chose layout pattern (vertical, horizontal, hub-and-spoke)
- [ ] Selected color palette (default, AWS, Azure, K8s)
- [ ] Planned grid positions
- [ ] Created ID naming scheme
### During Generation
- [ ] Every labeled shape has BOTH shape AND text elements
- [ ] Shape has `boundElements: [{ "type": "text", "id": "{id}-text" }]`
- [ ] Text has `containerId: "{shape-id}"`
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`, `roughness: 0`
- [ ] Arrows have `startBinding` and `endBinding`
- [ ] No diamond shapes used
- [ ] Applied staggering formula for multiple arrows
### Arrow Validation (Every Arrow)
- [ ] Arrow `x,y` calculated from shape edge
- [ ] Final point offset = `targetEdge - sourceEdge`
- [ ] Arrow `width` = `max(abs(point[0]))`
- [ ] Arrow `height` = `max(abs(point[1]))`
- [ ] U-turn arrows have 40-60px clearance
### After Generation
- [ ] All `boundElements` IDs reference valid text elements
- [ ] All `containerId` values reference valid shapes
- [ ] All arrows start within 15px of shape edge
- [ ] All arrows end within 15px of shape edge
- [ ] No duplicate IDs
- [ ] Arrow bounding boxes match points
- [ ] File is valid JSON
---
## Common Bugs and Fixes
### Bug: Arrow appears disconnected/floating
**Cause**: Arrow `x,y` not calculated from shape edge.
**Fix**:
```
Rectangle bottom: arrow_x = shape.x + shape.width/2
arrow_y = shape.y + shape.height
```
### Bug: Arrow endpoint doesn't reach target
**Cause**: Final point offset calculated incorrectly.
**Fix**:
```
target_edge = (target.x + target.width/2, target.y)
offset_x = target_edge.x - arrow.x
offset_y = target_edge.y - arrow.y
Final point = [offset_x, offset_y]
```
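The same fix in Python, using an illustrative target shape:

```python
def endpoint_offset(arrow_x, arrow_y, target):
    """Offset from the arrow origin to the target's top-edge midpoint."""
    edge_x = target["x"] + target["width"] / 2
    edge_y = target["y"]
    return [edge_x - arrow_x, edge_y - arrow_y]

target = {"x": 100, "y": 330, "width": 200, "height": 80}
print(endpoint_offset(200, 265, target))  # [0.0, 65.0]
```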
### Bug: Multiple arrows from same source overlap
**Cause**: All arrows start from identical `x,y`.
**Fix**: Stagger start positions:
```
For 5 arrows from bottom edge:
arrow1.x = shape.x + shape.width * 0.2
arrow2.x = shape.x + shape.width * 0.35
arrow3.x = shape.x + shape.width * 0.5
arrow4.x = shape.x + shape.width * 0.65
arrow5.x = shape.x + shape.width * 0.8
```
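Equivalently, as a helper that generalizes the fractions above to n arrows (illustrative sketch):

```python
def staggered_starts(shape_x, shape_w, n=5):
    """Evenly staggered x positions between 20% and 80% of the edge (assumes n >= 2)."""
    fractions = [0.2 + i * 0.6 / (n - 1) for i in range(n)]
    return [round(shape_x + shape_w * f, 2) for f in fractions]

print(staggered_starts(100, 200))  # [140.0, 170.0, 200.0, 230.0, 260.0]
```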
### Bug: Callback arrow doesn't loop correctly
**Cause**: U-turn path lacks clearance.
**Fix**: Use 4-point path:
```
Points = [[0, 0], [clearance, 0], [clearance, -vert], [final_x, -vert]]
clearance = 40-60px
```
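A hedged JSON sketch of such a callback arrow (coordinates are illustrative; note the elbow properties):
```json
{
  "type": "arrow",
  "id": "arrow-callback",
  "x": 300, "y": 240, "width": 50, "height": 140,
  "roughness": 0,
  "roundness": null,
  "elbowed": true,
  "points": [[0, 0], [50, 0], [50, -140], [-10, -140]],
  "startArrowhead": null,
  "endArrowhead": "arrow"
}
```
Here `width`/`height` match the largest absolute offsets in `points` (50 and 140), and the 50 px first segment provides the clearance.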
### Bug: Labels don't appear inside shapes
**Cause**: Using `label` property instead of separate text element.
**Fix**: Create TWO elements:
1. Shape with `boundElements` referencing text
2. Text with `containerId` referencing shape
### Bug: Arrows are curved, not 90-degree
**Cause**: Missing elbow properties.
**Fix**: Add all three:
```json
{
"roughness": 0,
"roundness": null,
"elbowed": true
}
```
@@ -1,75 +0,0 @@
---
name: memory
description: "Persistent memory system for Opencode agents. SQLite-based hybrid search over Obsidian vault. Use when: (1) storing user preferences/decisions, (2) recalling past context, (3) searching knowledge base. Triggers: remember, recall, memory, store, preference."
compatibility: opencode
---
## Overview
opencode-memory is a SQLite-based hybrid memory system for Opencode agents. It indexes markdown files from your Obsidian vault (`~/CODEX/80-memory/`) and session transcripts, providing fast hybrid search (vector + keyword BM25).
## Architecture
- **Source of truth**: Markdown files at `~/CODEX/80-memory/`
- **Derived index**: SQLite at `~/.local/share/opencode-memory/index.db`
- **Hybrid search**: FTS5 (BM25) + vec0 (vector similarity)
- **Embeddings**: OpenAI text-embedding-3-small (1536 dimensions)
## Available Tools
### memory_search
Hybrid search over all indexed content (vault + sessions).
```
memory_search(query, maxResults?, source?)
```
- `query`: Search query (natural language)
- `maxResults`: Max results (default 6)
- `source`: Filter by "memory", "sessions", or "all"
### memory_store
Store new memory as markdown file in vault.
```
memory_store(content, title?, category?)
```
- `content`: Memory content to store
- `title`: Optional title (slugified for filename)
- `category`: "preferences", "facts", "decisions", "entities", "other"
### memory_get
Read specific file/lines from vault.
```
memory_get(filePath, startLine?, endLine?)
```
## Auto-Behaviors
- **Auto-recall**: On session.created, relevant memories are searched and injected
- **Auto-capture**: On session.idle, preferences/decisions are extracted and stored
- **Token budget**: Max 2000 tokens injected to respect context limits
## Workflows
### Recall information
Before answering about past work, preferences, or decisions:
1. Call `memory_search` with relevant query
2. Use `memory_get` to retrieve full context if needed
### Store new information
When user expresses preference or decision:
1. Call `memory_store` with content and category
## Vault Structure
```
~/CODEX/80-memory/
├── preferences/ # User preferences
├── facts/ # Factual knowledge
├── decisions/ # Design decisions
├── entities/ # People, projects, concepts
└── other/ # Uncategorized memories
```
@@ -1,54 +0,0 @@
# opencode-memory Deployment Guide
## Installation
### Option 1: Nix (Recommended)
Add to your Nix flake:
```nix
inputs.opencode-memory = {
url = "git+https://code.m3ta.dev/m3tam3re/opencode-memory";
flake = false;
};
```
### Option 2: npm
```bash
npm install -g @m3tam3re/opencode-memory
```
## Configuration
Add to `~/.config/opencode/opencode.json`:
```json
{
"plugins": [
"opencode-memory"
]
}
```
## Environment Variables
- `OPENAI_API_KEY`: Required for embeddings
## Vault Location
Default: `~/CODEX/80-memory/`
Override in plugin config if needed.
## Rebuild Index
```bash
bun run src/cli.ts --rebuild
```
## Verification
1. Start Opencode
2. Call `memory_search` with any query
3. Verify no errors in logs
@@ -1,109 +0,0 @@
# Obsidian MCP Server Configuration
## Overview
This document describes how to configure the [cyanheads/obsidian-mcp-server](https://github.com/cyanheads/obsidian-mcp-server) for use with Opencode. This MCP server enables AI agents to interact with the Obsidian vault via the Local REST API plugin.
## Prerequisites
1. **Obsidian Desktop App** - Must be running
2. **Local REST API Plugin** - Installed and enabled in Obsidian
3. **API Key** - Generated from plugin settings
## Environment Variables
| Variable | Description | Default | Required |
|----------|-------------|---------|----------|
| `OBSIDIAN_API_KEY` | API key from Local REST API plugin | - | Yes |
| `OBSIDIAN_BASE_URL` | Base URL for REST API | `http://127.0.0.1:27123` | No |
| `OBSIDIAN_VERIFY_SSL` | Verify SSL certificates | `false` | No |
| `OBSIDIAN_ENABLE_CACHE` | Enable vault caching | `true` | No |
## opencode.json Configuration
Add this to your `programs.opencode.settings.mcp` in your Nix home-manager config:
```json
"Obsidian-Vault": {
"command": ["npx", "obsidian-mcp-server"],
"environment": {
"OBSIDIAN_API_KEY": "<your-api-key>",
"OBSIDIAN_BASE_URL": "http://127.0.0.1:27123",
"OBSIDIAN_VERIFY_SSL": "false",
"OBSIDIAN_ENABLE_CACHE": "true"
},
"enabled": true,
"type": "local"
}
```
**Note**: Replace `<your-api-key>` with the key from Obsidian Settings → Local REST API.
## Nix Home-Manager Integration
In your NixOS/home-manager configuration:
```nix
programs.opencode.settings.mcp = {
# ... other MCP servers ...
"Obsidian-Vault" = {
command = ["npx" "obsidian-mcp-server"];
environment = {
OBSIDIAN_API_KEY = "<your-api-key>";
OBSIDIAN_BASE_URL = "http://127.0.0.1:27123";
OBSIDIAN_VERIFY_SSL = "false";
OBSIDIAN_ENABLE_CACHE = "true";
};
enabled = true;
type = "local";
};
};
```
After updating, run:
```bash
home-manager switch
```
## Getting the API Key
1. Open Obsidian Settings
2. Navigate to Community Plugins → Local REST API
3. Copy the API key shown in settings
4. Paste into your configuration
## Available MCP Tools
Once configured, these tools are available:
| Tool | Description |
|------|-------------|
| `obsidian_read_note` | Read a note's content |
| `obsidian_update_note` | Create or update a note |
| `obsidian_global_search` | Search the entire vault |
| `obsidian_manage_frontmatter` | Get/set frontmatter fields |
| `obsidian_manage_tags` | Add/remove tags |
| `obsidian_list_notes` | List notes in a folder |
| `obsidian_delete_note` | Delete a note |
| `obsidian_search_replace` | Search and replace in a note |
## Troubleshooting
### Server not responding
- Ensure Obsidian desktop app is running
- Check Local REST API plugin is enabled
- Verify API key matches
### Connection refused
- Check the base URL (default: `http://127.0.0.1:27123`)
- Some setups use port 27124 - check plugin settings
### npx not found
- Ensure Node.js is installed (`npx` ships with npm 5.2+)
- If `npx` is still missing, update npm: `npm install -g npm`
## References
- [cyanheads/obsidian-mcp-server GitHub](https://github.com/cyanheads/obsidian-mcp-server)
- [Obsidian Local REST API Plugin](https://github.com/czottmann/obsidian-local-rest-api)
@@ -1,108 +0,0 @@
---
name: msteams
description: "Microsoft Teams Graph API integration for team communication. Use when: (1) Managing teams and channels, (2) Sending/receiving channel messages, (3) Scheduling or managing meetings, (4) Handling chat conversations. Triggers: 'Teams', 'meeting', 'channel', 'team message', 'chat', 'Teams message'."
compatibility: opencode
---
# Microsoft Teams Integration
Microsoft Teams Graph API integration for managing team communication, channels, messages, meetings, and chat conversations via MCP tools.
## Core Capabilities
### Teams & Channels
- **List joined teams**: Retrieve all teams the user is a member of
- **Manage channels**: Create, list, and manage channels within teams
- **Team membership**: Add, remove, and update team members
### Channel Messages
- **Send messages**: Post messages to channels with rich text support
- **Retrieve messages**: List channel messages with filtering by date range
- **Message management**: Read and respond to channel communications
### Online Meetings
- **Schedule meetings**: Create online meetings with participants
- **Manage meetings**: Update meeting details and coordinates
- **Meeting access**: Retrieve join links and meeting information
- **Presence**: Check user presence and activity status
### Chat
- **Direct messages**: 1:1 chat conversations with users
- **Group chats**: Multi-person chat conversations
- **Chat messages**: Send and receive chat messages
## Common Workflows
### Send Channel Message
1. Identify target team and channel
2. Compose message content
3. Use MCP tool to send message to channel
Example:
```
"Post a message to the 'General' channel in 'Engineering' team about the deployment status"
```
### Schedule Meeting
1. Determine meeting participants
2. Set meeting time and duration
3. Create meeting title and description
4. Use MCP tool to create online meeting
Example:
```
"Schedule a meeting with @alice and @bob for Friday 2pm to discuss the project roadmap"
```
### List Channel Messages
1. Specify team and channel
2. Define date range (required for polling)
3. Retrieve and display messages
Example:
```
"Show me all messages in #general from the last week"
```
### Send Direct Message
1. Identify recipient user
2. Compose message
3. Use MCP chat tool to send message
Example:
```
"Send a message to @john asking if the PR review is complete"
```
## MCP Tool Categories
The MS Teams MCP server provides tool categories for:
- **Channels**: Team and channel management operations
- **Messages**: Channel message operations
- **Meetings**: Online meeting scheduling and management
- **Chat**: Direct and group chat operations
## Important Constraints
**Authentication**: Do NOT include Graph API authentication flows. The MCP server handles authentication configuration.
**Polling limits**: When retrieving messages, always specify a date range. Polling the same resource more than once per day is a violation of Microsoft APIs Terms of Use.
**Email overlap**: Do NOT overlap with Outlook email functionality. This skill focuses on Teams-specific communication (channels, chat, meetings), not email operations.
**File storage**: Files in channels are stored in SharePoint. Use SharePoint-specific operations for file management.
## Domain Boundaries
This skill integrates with **Hermes** (work communication agent). Hermes loads this skill when user requests:
- Teams-related operations
- Meeting scheduling or management
- Channel communication
- Teams chat conversations
For email operations, Hermes uses the **outlook** skill instead.


@@ -1,231 +0,0 @@
---
name: outlook
description: "Outlook Graph API integration for email, calendar, and contact management. Use when: (1) Reading or sending emails, (2) Managing inbox and folders, (3) Working with calendar events and appointments, (4) Managing contacts, (5) Organizing email messages. Triggers: 'email', 'Outlook', 'inbox', 'calendar', 'contact', 'message', 'folder', 'appointment', 'meeting'."
compatibility: opencode
---
# Outlook
Outlook Graph API integration for mail, calendar, and contact management via MCP. Enables comprehensive email workflows, calendar coordination, and contact organization.
## Overview
Outlook is Microsoft 365's messaging and communication hub. This skill provides access to:
- **Mail**: Send, receive, search, organize, and categorize messages
- **Calendar**: Create and manage events, appointments, and meetings
- **Contacts**: Manage contact information and relationships
- **Folders**: Organize messages into custom folder structures
## Core Workflows
### 1. Reading Emails
Retrieve messages from mailbox:
```
User: "Check my inbox for new messages from John"
AI: Use Outlook MCP to:
1. List messages in Inbox folder
2. Filter by sender = "John"
3. Display summary (subject, date, importance)
```
**Use search and sort** to efficiently find messages:
- Filter by folder (Inbox, Sent, Drafts, etc.)
- Sort by date, importance, or subject
- Search by sender, subject, or content
### 2. Sending Emails
Compose and send new messages:
```
User: "Send an email to sarah@company.com about the project update"
AI: Use Outlook MCP to:
1. Create message draft with recipient and subject
2. Add body content
3. Send the message
```
**Key options**:
- Add attachments
- Set importance (low, normal, high)
- Add recipients (to, cc, bcc)
- Use categories for organization
### 3. Replying and Forwarding
Respond to existing messages:
```
User: "Reply to the last email from Mike with thanks"
AI: Use Outlook MCP to:
1. Retrieve the most recent message from Mike
2. Create reply with appropriate body
3. Send the reply
```
**Supports**:
- Reply (to sender)
- Reply All (to all recipients)
- Forward (to new recipients)
### 4. Managing Folders
Organize messages into folder structure:
```
User: "Move all emails from vendor@supplier.com to Vendors folder"
AI: Use Outlook MCP to:
1. Search for messages from vendor@supplier.com
2. Move matching messages to Vendors folder
3. Confirm move operation
```
**Folder operations**:
- List folders
- Create new folders
- Move messages between folders
- Delete folders (careful!)
### 5. Calendar Events
Manage calendar and schedule meetings:
```
User: "Schedule a team meeting for Friday at 2pm"
AI: Use Outlook MCP to:
1. Create calendar event
2. Set date/time (Friday 2:00 PM)
3. Add attendees
4. Send meeting invitation
```
**Event details**:
- Subject and location
- Start/end times
- Attendees and required/optional status
- Meeting body/agenda
- Recurrence (for recurring meetings)
### 6. Managing Contacts
Create and maintain contact information:
```
User: "Add Jane Smith to contacts: jane@company.com, 555-1234"
AI: Use Outlook MCP to:
1. Create contact object
2. Add email address and phone number
3. Save to contacts
```
**Contact fields**:
- Name (first, last, display name)
- Email addresses (multiple)
- Phone numbers (multiple types)
- Company, job title
- Notes
## Advanced Features
### Message Organization
**Categories**: Tag messages with color-coded categories for enhanced organization
```
User: "Tag these project emails as 'Urgent' category"
AI: Use Outlook MCP to:
1. Retrieve specified messages
2. Assign category (e.g., "Urgent")
3. Confirm categorization
```
**Importance**: Mark messages as high, normal, or low importance
```
User: "Mark this message as high priority"
AI: Use Outlook MCP to update message importance flag
```
**Search**: Find messages by sender, subject, content, or date range
```
User: "Find all emails about Q4 budget from October"
AI: Use Outlook MCP to search with filters:
- Subject contains "budget"
- Date range: October
- Optionally filter by sender
```
### Email Intelligence
**Focused Inbox**: Access messages categorized as focused vs other
**Mail Tips**: Check recipient status before sending (auto-reply, full mailbox)
**MIME Support**: Handle email in MIME format for interoperability
## Integration with Other Skills
This skill focuses on Outlook-specific operations. For related functionality:
| Need | Skill | When to Use |
|------|-------|-------------|
| **Team project updates** | basecamp | "Update the Basecamp todo" |
| **Team channel messages** | msteams | "Post this in the Teams channel" |
| **Private notes about emails** | obsidian | "Save this to Obsidian" |
| **Drafting long-form emails** | calliope | "Help me write a professional email" |
| **Short quick messages** | outlook (this skill) | "Send a quick update" |
## Common Patterns
### Email Triage Workflow
1. **Scan inbox**: List messages sorted by date
2. **Categorize**: Assign categories based on content/urgency
3. **Action**: Reply, forward, or move to appropriate folder
4. **Track**: Flag for follow-up if needed
### Meeting Coordination
1. **Check availability**: Query calendar for conflicts
2. **Propose time**: Suggest multiple time options
3. **Create event**: Set up meeting with attendees
4. **Follow up**: Send reminder or agenda
### Project Communication
1. **Search thread**: Find all messages related to project
2. **Organize**: Move to project folder
3. **Categorize**: Tag with project category
4. **Summarize**: Extract key points if needed
## Quality Standards
- **Accurate recipient addressing**: Verify email addresses before sending
- **Clear subject lines**: Ensure subjects accurately reflect content
- **Appropriate categorization**: Use categories consistently
- **Folder hygiene**: Maintain organized folder structure
- **Respect privacy**: Do not share sensitive content indiscriminately
## Edge Cases
**Multiple mailboxes**: This skill supports primary and shared mailboxes, not archive mailboxes
**Large attachments**: Use appropriate attachment handling for large files
**Meeting conflicts**: Check calendar availability before scheduling
**Email limits**: Respect rate limits and sending quotas
**Deleted items**: Use caution with delete operations (consider archiving instead)
## Boundaries
- **Do NOT handle Teams-specific messaging** (the msteams skill's domain)
- **Do NOT handle Basecamp communication** (basecamp's domain)
- **Do NOT manage wiki documentation** (Athena's domain)
- **Do NOT access private Obsidian vaults** (Apollo's domain)
- **Do NOT write creative email content** (delegate to calliope for drafts)


@@ -79,6 +79,7 @@ Executable code (Python/Bash/etc.) for tasks that require deterministic reliabil
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Opencode for patching or environment-specific adjustments
- **Dependencies**: Scripts with external dependencies (Python packages, system tools) require those dependencies to be registered in the repository's `flake.nix`. See Step 4 for details.
##### References (`references/`)
@@ -302,6 +303,37 @@ To begin implementation, start with the reusable resources identified above: `sc
Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.
#### Register Dependencies in flake.nix
When scripts introduce external dependencies (Python packages or system tools), add them to the repository's `flake.nix`. Dependencies are defined once: Python packages in `pythonEnv`, system tools in the `paths` list of the `skills-runtime` buildEnv. This runtime is exported as `packages.${system}.skills-runtime` and consumed by project flakes and home-manager, ensuring opencode always has the correct environment regardless of which project it runs in.
**Python packages** — add to the `pythonEnv` block with a comment referencing the skill:
```nix
pythonEnv = pkgs.python3.withPackages (ps:
with ps; [
# <skill-name>: <script>.py
<package-name>
]);
```
**System tools** (e.g. `poppler-utils`, `ffmpeg`, `imagemagick`) — add to the `paths` list in the `skills-runtime` buildEnv:
```nix
skills-runtime = pkgs.buildEnv {
name = "opencode-skills-runtime";
paths = [
pythonEnv
# <skill-name>: needed by <script>
pkgs.<tool-name>
];
};
```
**Convention**: Each entry must include a comment with `# <skill-name>: <reason>` so dependencies remain traceable to their originating skill.
After adding dependencies, verify they resolve: `nix develop --command python3 -c "import <package>"`
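The same verification pattern covers system tools as well. A minimal sketch, assuming the flake's dev shell exposes the skills runtime and using `pypdf` and `pdftoppm` purely as placeholder names for a registered package and tool:

```shell
# Check a Python package resolves inside the flake's dev shell
# (pypdf is a placeholder -- use the package you registered in pythonEnv).
nix develop --command python3 -c "import pypdf; print('pypdf ok')"

# Check a system tool is on PATH inside the dev shell
# (pdftoppm comes from poppler-utils; substitute the tool you added to paths).
nix develop --command sh -c 'command -v pdftoppm'
```

If either command fails, the dependency was not picked up by the dev shell; re-check the `pythonEnv` and `paths` entries and rerun after `nix flake update` if the lock file is stale.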
Any example files and directories not needed for the skill should be deleted. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.
#### Update SKILL.md #### Update SKILL.md