Compare commits
7 Commits: 6fceea7460...master

| Author | SHA1 | Date |
|---|---|---|
| | 39ac89f388 | |
| | 1bc81fb38c | |
| | 1f1eabd1ed | |
| | 5b204c95e4 | |
| | 4e9da366e4 | |
| | 8910413315 | |
| | d475dde398 | |
.gitignore (vendored, +4)
@@ -8,3 +8,7 @@
 .sidecar-start.sh
 .sidecar-base
 .td-root
+
+# Nix / direnv
+.direnv/
+result
@@ -1,9 +0,0 @@
{
  "active_plan": "/home/m3tam3re/p/AI/AGENTS/.sisyphus/plans/opencode-memory.md",
  "started_at": "2026-02-14T04:43:37.746Z",
  "session_ids": [
    "ses_3a5a47a05ffeoNYfz2RARYsHX9"
  ],
  "plan_name": "opencode-memory",
  "agent": "atlas"
}
File diff suppressed because it is too large
@@ -1,28 +0,0 @@

## Task 5: Update Mem0 Memory Skill (2026-02-12)

### Decisions Made

1. **Section Placement**: Added new sections without disrupting existing content structure
   - "Memory Categories" after "Identity Scopes" (line ~109)
   - "Dual-Layer Sync" after "Workflow Patterns" (line ~138)
   - Extended "Health Check" section with Pre-Operation Check
   - "Error Handling" at end, before API Reference

2. **Content Structure**:
   - Memory Categories: 5-category classification with table format
   - Dual-Layer Sync: Complete sync pattern with bash example
   - Health Check: Added pre-operation verification
   - Error Handling: Comprehensive graceful degradation patterns

3. **Validation Approach**:
   - Used `./scripts/test-skill.sh --validate` for skill structure validation
   - All sections verified with grep commands
   - Commit and push completed successfully

### Success Patterns

- Edit tool works well for adding sections to existing markdown files
- Preserving existing content while adding new sections
- Using grep for verification of section additions
- `./scripts/test-skill.sh --validate` validates YAML frontmatter automatically
@@ -1,47 +0,0 @@

## Core Memory Skill Creation (2026-02-12)

**Task**: Create `skills/memory/SKILL.md` - dual-layer memory orchestration skill

**Pattern Identified**:
- Skill structure follows YAML frontmatter with required fields:
  - `name`: skill identifier
  - `description`: "Use when (X), triggers (Y)" pattern
  - `compatibility`: "opencode"
- Markdown structure: Overview, Prerequisites, Workflows, Error Handling, Integration, Quick Reference, See Also
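
A minimal frontmatter sketch, assuming only the three required fields listed above; the `description` value is illustrative, not the actual skill's text:

```yaml
---
name: memory
description: Use when storing or recalling agent memories, triggers on memory-related requests
compatibility: opencode
---
```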

**Verification Pattern**:
```bash
test -f <path> && echo "File exists"
grep "name: <skill>" <path>
grep "key-term" <path>
```

**Key Design Decision**:
- Central orchestration skill that references underlying implementation skills (mem0-memory, obsidian)
- 4 core workflows: Store, Recall, Auto-Capture, Auto-Recall
- Error handling with graceful degradation

## Apollo Agent Prompt Update (2026-02-12)

**Task**: Add memory management responsibilities to Apollo agent system prompt

**Edit Pattern**: Multiple targeted edits to a single file, preserving existing content
- Line number-based edits require precise matching of surrounding context
- Edit order: Core Responsibilities → Quality Standards → Tool Usage → Edge Cases
- Each edit inserts new bullet items without removing existing content

**Key Additions**:
1. Core Responsibilities: "Manage dual-layer memory system (Mem0 + Obsidian CODEX)"
2. Quality Standards: Memory storage, auto-capture, retrieval, categories
3. Tool Usage: Mem0 REST API (localhost:8000), Obsidian MCP integration
4. Edge Cases: Mem0 unavailable, Obsidian unavailable handling

**Verification Pattern**:
```bash
grep -c "memory" ~/p/AI/AGENTS/prompts/apollo.txt        # Count occurrences
grep "Mem0" ~/p/AI/AGENTS/prompts/apollo.txt             # Check specific term
grep -i "auto-capture" ~/p/AI/AGENTS/prompts/apollo.txt  # Case-insensitive
```

**Observation**: grep is case-sensitive by default - use -i for case-insensitive searches
@@ -1,120 +0,0 @@
# Opencode Memory Plugin — Learnings

## Session: ses_3a5a47a05ffeoNYfz2RARYsHX9
Started: 2026-02-14

### Architecture Decisions
- SQLite + FTS5 + vec0 replaces mem0+qdrant entirely
- Markdown at ~/CODEX/80-memory/ is source of truth
- SQLite DB at ~/.local/share/opencode-memory/index.db is derived index
- OpenAI text-embedding-3-small for embeddings (1536 dimensions)
- Hybrid search: 0.7 vector weight + 0.3 BM25 weight
- Chunking: 400 tokens, 80 overlap (tiktoken cl100k_base)
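
The 400-token / 80-overlap windowing above can be sketched as follows. This is an illustrative reconstruction, not the plugin's actual code: a naive whitespace tokenizer stands in for tiktoken's cl100k_base encoding, and the `Chunk` shape is assumed.

```typescript
// Sliding-window chunker: windows of `maxTokens`, each new window
// keeping the last `overlap` tokens of the previous one.
export interface Chunk {
  text: string;
  startToken: number;
  endToken: number;
}

export function chunkText(text: string, maxTokens = 400, overlap = 80): Chunk[] {
  // Naive whitespace tokenization stands in for tiktoken here.
  const tokens = text.split(/\s+/).filter(Boolean);
  const chunks: Chunk[] = [];
  const step = maxTokens - overlap; // advance by 320 with the defaults
  for (let start = 0; start < tokens.length; start += step) {
    const end = Math.min(start + maxTokens, tokens.length);
    chunks.push({
      text: tokens.slice(start, end).join(" "),
      startToken: start,
      endToken: end,
    });
    if (end === tokens.length) break; // last window reached the end
  }
  return chunks;
}
```

With the defaults, consecutive chunks share 80 tokens, so a fact that straddles a chunk boundary still appears whole in at least one chunk.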

### Key Patterns from Openclaw
- MemoryIndexManager pattern (1590 lines) — file watching, chunking, indexing
- Hybrid scoring with weighted combination
- Embedding cache by content_hash + model
- Two sources: "memory" (markdown files) + "sessions" (transcripts)
- Two tools: memory_search (hybrid query) + memory_get (read lines)

### Technical Stack
- Runtime: bun
- Test framework: bun test (TDD)
- SQLite: better-sqlite3 (synchronous API)
- Embeddings: openai npm package
- Chunking: tiktoken (cl100k_base encoding)
- File watching: chokidar
- Validation: zod (for tool schemas)

### Vec0 Extension Findings (Task 1)
- **vec0 extension**: NOT AVAILABLE - requires the vec0.so shared library, which is not present
- **Alternative solution**: sqlite-vec package (v0.1.7-alpha.2) successfully tested
- **Loading mechanism**: `sqliteVec.load(db)` loads the vector extension into the database
- **Test result**: Works with Node.js (better-sqlite3 native module compatible)
- **Note**: better-sqlite3 does NOT work with the Bun runtime (native module incompatibility)
- **Testing command**: `node -e "const Database = require('better-sqlite3'); const sqliteVec = require('sqlite-vec'); const db = new Database(':memory:'); sqliteVec.load(db); console.log('OK')"`

### Bun Runtime Limitations
- better-sqlite3 native module NOT compatible with Bun (ERR_DLOPEN_FAILED)
- Use Node.js for any code requiring better-sqlite3
- Alternative: the bun:sqlite API (similar API, but not the same library)

## Wave Progress
- Wave 1: IN PROGRESS (Task 1)
- Waves 2-6: PENDING

### Configuration Module Implementation (Task: Config Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied
- **Pattern**: Default config object + resolveConfig() function for merging
- **Path expansion**: `expandPath()` helper function handles `~` → `$HOME` expansion
- **Test coverage**: 10 tests covering defaults, overrides, path expansion, and config merging
- **TypeScript best practices**: Proper type exports from types.ts, type imports in config.ts
- **Defaults match openclaw**: chunking (400/80), search weights (0.7/0.3), minScore (0.35), maxResults (6)
- **Bun test framework**: Fast execution (~20ms for 10 tests), clean output
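
A hypothetical shape of the config module described above, with the openclaw-matching defaults (400/80, 0.7/0.3, 0.35, 6). Type and field names are assumptions; `home` is passed as a parameter here to stand in for reading `$HOME`:

```typescript
// Defaults plus a merge-and-expand resolver, as sketched from the notes above.
export interface MemoryConfig {
  chunkTokens: number;
  chunkOverlap: number;
  vectorWeight: number;
  textWeight: number;
  minScore: number;
  maxResults: number;
  memoryDir: string;
}

export const DEFAULT_CONFIG: MemoryConfig = {
  chunkTokens: 400,
  chunkOverlap: 80,
  vectorWeight: 0.7,
  textWeight: 0.3,
  minScore: 0.35,
  maxResults: 6,
  memoryDir: "~/CODEX/80-memory/",
};

// Expand a leading "~" into the given home directory.
export function expandPath(p: string, home: string): string {
  return p.startsWith("~") ? home + p.slice(1) : p;
}

// Shallow-merge overrides onto the defaults, then expand the memory path.
export function resolveConfig(
  overrides: Partial<MemoryConfig> = {},
  home = ""
): MemoryConfig {
  const merged = { ...DEFAULT_CONFIG, ...overrides };
  return { ...merged, memoryDir: expandPath(merged.memoryDir, home) };
}
```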

### Database Schema Implementation (Task 2)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for the db module
- **Schema tables**: meta, files, chunks, embedding_cache, chunks_fts (FTS5), chunks_vec (vec0)
- **WAL mode**: Enabled via `db.pragma('journal_mode = WAL')` for better concurrency
- **Foreign keys**: Enabled via `db.pragma('foreign_keys = ON')`
- **sqlite-vec integration**: Loaded via `sqliteVec.load(db)` for vector search capabilities
- **FTS5 virtual table**: External content table referencing chunks for full-text search
- **vec0 virtual table**: 1536-dimension float array for OpenAI text-embedding-3-small embeddings
- **Test execution**: Use Node.js with tsx for TypeScript execution (not the Bun runtime)
- **Buffer handling**: Float32Array must be converted to Buffer via `Buffer.from(array.buffer)` for SQLite binding
- **In-memory databases**: WAL mode returns 'memory' for :memory: DBs, 'wal' for file-based DBs
- **Test coverage**: 9 tests covering table creation, data insertion, FTS5, vec0, WAL mode, and clean closure
- **Error handling**: better-sqlite3 throws "The database connection is not open" for operations on closed DBs

### Node.js Test Execution
- **Issue**: better-sqlite3 is not compatible with the Bun runtime (native module)
- **Solution**: Use Node.js with tsx (TypeScript executor) for running tests
- **Command**: `npx tsx --test src/__tests__/db.test.ts`
- **node:test API**: Uses `describe`, `it`, `before`, `after` from the 'node:test' module
- **Assertions**: Use `assert` from the 'node:assert' module
- **Cleanup**: Use `after()` hooks for database cleanup, not `afterEach()` (node:test difference)

### Embedding Provider Implementation (Task: Embeddings Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for the embeddings module
- **Mock database**: Created an in-memory mock for testing, since better-sqlite3 is incompatible with Bun
- **Float32 precision**: embeddings stored/retrieved via Float32Array have limited precision (use toBeCloseTo in tests)
- **Cache implementation**: content_hash + model composite key in the embedding_cache table
- **Retry logic**: Exponential backoff (1s, 2s, 4s) for 429/500 errors, max 3 retries
- **Test coverage**: 11 tests covering embed(), embedBatch(), cache hits/misses, API failures, retries, buffer conversion
- **Helper functions**: embeddingToBuffer() and bufferToEmbedding() for Float32Array ↔ Buffer conversion
- **Bun spyOn**: Use mockClear() to reset the call count without replacing the mock implementation
- **Buffer size**: A Float32 embedding is stored as a Buffer with size = dimensions * 4 bytes
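
The two conversion helpers named above could look like this. A plausible sketch only - the real module's signatures may differ - but it illustrates the dimensions * 4 byte sizing and why float32 precision is limited (each component is truncated to 32 bits, hence `toBeCloseTo` in tests):

```typescript
import { Buffer } from "node:buffer";

// Pack an embedding into a Buffer for SQLite binding: 4 bytes per dimension.
export function embeddingToBuffer(embedding: number[]): Buffer {
  return Buffer.from(new Float32Array(embedding).buffer);
}

// Unpack a Buffer back into numbers; byteOffset matters because Node
// Buffers can be views into a larger shared ArrayBuffer.
export function bufferToEmbedding(buf: Buffer): number[] {
  return Array.from(
    new Float32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4)
  );
}
```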

### FTS5 BM25 Search Implementation (Task: FTS5 Search Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for the search module
- **buildFtsQuery()**: Extracts alphanumeric tokens via regex `/[A-Za-z0-9_]+/g`, quotes them, joins with AND
- **FTS5 escaping**: Tokens are quoted to handle special characters (e.g., `"term"`)
- **BM25 score normalization**: `bm25RankToScore(rank)` converts BM25 rank to a 0-1 score using `1 / (1 + normalized)`
- **FTS5 external content tables**: The schema uses `content='chunks', content_rowid='rowid'` but requires manual insertion into chunks_fts
- **Test data setup**: Must manually insert into chunks_fts after inserting into chunks (external content doesn't auto-populate)
- **BM25 ranking**: Results are ordered by the `rank` column (lower rank = better match for FTS5)
- **Error handling**: searchFTS catches SQL errors and returns an empty array (graceful degradation)
- **maxResults parameter**: Respected via the LIMIT clause in the SQL query
- **SearchResult interface**: Includes id, filePath, startLine, endLine, text, contentHash, source, score (all required)
- **Prefix matching**: FTS5 supports prefix queries via token matching (e.g., "test" matches "testing")
- **No matches**: Returns an empty array when the query has no valid tokens or no matches are found
- **Test coverage**: 7 tests covering basic search, exact keywords, partial words, no matches, ranking, maxResults, and metadata
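
The two helpers described above can be sketched as follows. The query builder follows the regex and AND-join described in the notes; the score normalization is one plausible reading of `1 / (1 + normalized)`, assuming the rank has already been normalized to a non-negative value where 0 is the best match - the actual normalization in the module may differ:

```typescript
// Build an FTS5 MATCH expression: extract alphanumeric tokens,
// quote each (so special characters are safe), join with AND.
export function buildFtsQuery(query: string): string {
  const tokens = query.match(/[A-Za-z0-9_]+/g) ?? [];
  return tokens.map((t) => `"${t}"`).join(" AND ");
}

// Map a non-negative rank (0 = best match) into a 0-1 score.
export function bm25RankToScore(rank: number): number {
  const normalized = Math.max(0, rank); // guard against negative input
  return 1 / (1 + normalized);
}
```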

### Hybrid Search Implementation (Task: Hybrid Search Combiner)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for hybrid search
- **Weighted scoring**: Combined score = vectorWeight * vectorScore + textWeight * textScore (default: 0.7/0.3)
- **Result merging**: Uses a Map<string, HybridSearchResult> to merge results by chunk ID, preventing duplicates
- **Dual-score tracking**: Each result tracks vectorScore and textScore separately, allowing for degraded modes
- **Graceful degradation**: Works FTS5-only (when vector search fails) or vector-only (when FTS5 fails)
- **minScore filtering**: Results below the minScore threshold are filtered out after score calculation
- **Score sorting**: Results sorted by combined score in descending order before applying the maxResults limit
- **Vector search fallback**: searchVector catches errors and returns an empty array, allowing FTS5-only operation
- **FTS5 query fallback**: searchFTS catches SQL errors and returns an empty array, allowing vector-only operation
- **Database cleanup**: beforeEach must delete from chunks_fts, chunks_vec, chunks, and files to avoid state bleed
- **Virtual table corruption**: Deleting from FTS5/vec0 virtual tables can cause corruption - use try/catch to recreate them
- **SearchResult type conflict**: SearchResult is imported from types.ts; don't re-export it in search.ts
- **Test isolation**: Virtual tables (chunks_fts, chunks_vec) must be cleared and potentially recreated between tests
- **Buffer conversion**: queryEmbedding is converted to a Buffer via Buffer.from(new Float32Array(array).buffer)
- **Debug logging**: a process.env.DEBUG_SEARCH flag enables detailed logging of FTS5 and vector search results
- **Test coverage**: 9 tests covering combination, weighting, minScore filtering, deduplication, sorting, maxResults, degraded modes (FTS5-only, vector-only), and custom weights
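
The merge-and-weight step described above can be sketched as follows. Types and the function name are illustrative, but the mechanics match the notes: merge by id via a Map, weight 0.7/0.3, filter by minScore, sort descending, truncate to maxResults. A result missing from one source keeps a 0 for that score, which is what makes the degraded (FTS5-only / vector-only) modes fall out naturally:

```typescript
interface Scored { id: string; score: number; }
interface HybridResult { id: string; vectorScore: number; textScore: number; score: number; }

export function combineResults(
  vector: Scored[],
  text: Scored[],
  { vectorWeight = 0.7, textWeight = 0.3, minScore = 0.35, maxResults = 6 } = {}
): HybridResult[] {
  // Merge by chunk id so a chunk found by both searches appears once.
  const merged = new Map<string, HybridResult>();
  for (const r of vector) {
    merged.set(r.id, { id: r.id, vectorScore: r.score, textScore: 0, score: 0 });
  }
  for (const r of text) {
    const existing = merged.get(r.id);
    if (existing) existing.textScore = r.score;
    else merged.set(r.id, { id: r.id, vectorScore: 0, textScore: r.score, score: 0 });
  }
  // Weight, filter, sort, truncate.
  const results = [...merged.values()];
  for (const r of results) {
    r.score = vectorWeight * r.vectorScore + textWeight * r.textScore;
  }
  return results
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults);
}
```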
@@ -1,748 +0,0 @@
# Agent Permissions Refinement

## TL;DR

> **Quick Summary**: Refine OpenCode agent permissions for Chiron (planning) and Chiron-Forge (build) to implement 2025 AI security best practices with the principle of least privilege, human-in-the-loop for critical actions, and explicit guardrails against permission bypass.

> **Deliverables**:
> - Updated `agents/agents.json` with refined permissions for Chiron and Chiron-Forge
> - Critical bug fix: Duplicate `external_directory` key in Chiron config
> - Enhanced secret blocking with additional patterns
> - Bash injection prevention rules
> - Git protection against secret commits and repo hijacking

> **Estimated Effort**: Medium
> **Parallel Execution**: NO - sequential changes to a single config file
> **Critical Path**: Fix duplicate key → Apply Chiron permissions → Apply Chiron-Forge permissions → Validate

---

## Context

### Original Request
User wants to refine agent permissions for:
- **Chiron**: Planning agent with read-only access, restricted to read-only subagents, no file editing, can create beads issues
- **Chiron-Forge**: Build agent with write access restricted to ~/p/**; git commits allowed but git push asks; package install commands ask
- **General**: Sane defaults that are secure but open enough for autonomous work

### Interview Summary
**Key Discussions**:
- Chiron: Read-only planning, no file editing, bash denied except for `bd *` commands, external_directory ~/p/** only, task permission to restrict subagents to explore/librarian/athena + chiron-forge for handoff
- Chiron-Forge: Write access restricted to ~/p/**, git commit allow / git push ask, package install commands ask, git config deny
- Workspace path: ~/p/** is a symlink to ~/projects/personal/** (just replacing the path reference)
- Bash security: Block all bash redirect patterns (echo >, cat >, tee, etc.)

**Research Findings**:
- OpenCode supports granular permission rules with wildcards, last-match-wins
- 2025 best practices: Principle of least privilege, tiered permissions (read-only auto, destructive ask, JIT privileges), human-in-the-loop for critical actions
- Security hardening: Block command injection vectors, prevent git secret commits, add comprehensive secret blocking patterns

### Metis Review
**Critical Issues Identified**:
1. **Duplicate `external_directory` key** in the Chiron config (lines 8-9 and 27) - the second key overrides the first, breaking the intended behavior
2. **Bash edit bypass**: Even with `edit: deny`, bash can write files via redirection (`echo "x" > file.txt`, `cat >`, `tee`)
3. **Git secret protection**: The agent could commit secrets (read .env, then git commit .env)
4. **Git config hijacking**: The agent could modify .git/config to push to an attacker-controlled repo
5. **Command injection**: Malicious content could execute via `$()`, backticks, `eval`, `source`
6. **Secret blocking incomplete**: Missing patterns for `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

**Guardrails Applied**:
- Fix the duplicate external_directory key (use a single object with a catch-all `"*": "ask"` after the specific rules)
- Add bash file write protection patterns (echo >, cat >, printf >, tee, > operators)
- Add git secret protection (`git add *.env*`: deny, `git commit *.env*`: deny)
- Add git config protection (`git config *`: deny for Chiron-Forge)
- Add bash injection prevention (`$(*`, backtick prefixes, `eval *`, `source *`)
- Expand secret blocking with additional patterns
- Add /run/agenix/* to the read deny list
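
An illustrative fragment of what these guardrails might look like as OpenCode permission rules. The exact wildcard spellings are assumptions, not the plan's final config; OpenCode evaluates rules last-match-wins, so the deny/ask patterns sit after the broad allow:

```json
{
  "permission": {
    "bash": {
      "*": "allow",
      "echo *>*": "deny",
      "cat *>*": "deny",
      "tee *": "deny",
      "eval *": "deny",
      "source *": "deny",
      "git add *.env*": "deny",
      "git commit *.env*": "deny",
      "git config *": "deny",
      "git push*": "ask"
    },
    "external_directory": {
      "~/p/**": "allow",
      "*": "ask"
    }
  }
}
```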

---

## Work Objectives

### Core Objective
Refine OpenCode agent permissions in `agents/agents.json` to implement security hardening based on 2025 AI agent best practices while maintaining autonomous workflow capabilities.

### Concrete Deliverables
- Updated `agents/agents.json` with:
  - Chiron: Read-only permissions, subagent restrictions, bash denial (except `bd *`), no file editing
  - Chiron-Forge: Write access scoped to ~/p/**, git commit allow / push ask, package install ask, git config deny
  - Both: Enhanced secret blocking, bash injection prevention, git secret protection

### Definition of Done
- [x] Permission configuration updated in `agents/agents.json`
- [x] JSON syntax valid (no duplicate keys, valid structure)
- [x] Workspace path validated (~/p/** exists and is correct)
- [x] Acceptance criteria tests pass (via manual verification)

### Must Have
- Chiron cannot edit files directly
- Chiron cannot write files via bash (redirects blocked)
- Chiron restricted to read-only subagents + chiron-forge for handoff
- Chiron-Forge can only write to ~/p/**
- Chiron-Forge cannot run git config
- Both agents block secret file reads
- Both agents prevent command injection
- Git operations cannot commit secrets
- No duplicate keys in the permission configuration

### Must NOT Have (Guardrails)
- **Edit bypass via bash**: No bash redirection patterns that allow file writes when `edit: deny`
- **Git secret commits**: No ability to git add/commit .env or credential files
- **Repo hijacking**: No git config modification allowed for Chiron-Forge
- **Command injection**: No `$()`, backticks, `eval`, or `source` execution via bash
- **Write scope escape**: Chiron-Forge cannot write outside ~/p/** without asking
- **Secret exfiltration**: No access to .env, .ssh, .gnupg, credentials, secrets, .pem, .key, /run/agenix
- **Unrestricted bash for Chiron**: Only `bd *` commands allowed

---

## Verification Strategy (MANDATORY)

> This is configuration work, not code development. Manual verification is required after deployment.

### Test Decision
- **Infrastructure exists**: YES (home-manager deployment)
- **User wants tests**: NO (manual-only verification)
- **Framework**: None

### Manual Verification Procedures

Each TODO includes EXECUTABLE verification procedures that users can run to validate changes.

**Verification Commands to Run After Deployment:**

1. **JSON Syntax Validation**:
   ```bash
   # Validate JSON structure and no duplicate keys
   jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
   # Expected: Exit code 0 (valid JSON)

   # Check for duplicate keys (manual review of chiron permission object)
   # Expected: Single external_directory key, no other duplicates
   ```

2. **Workspace Path Validation**:
   ```bash
   ls -la ~/p/ 2>&1
   # Expected: Directory exists, shows contents (likely symlink to ~/projects/personal/)
   ```

3. **After Deployment - Chiron Read-Only Test** (manual):
   - Have Chiron attempt to edit a test file
     - Expected: Permission denied with a clear error message
   - Have Chiron attempt to write via bash (`echo "test" > /tmp/test.txt`)
     - Expected: Permission denied
   - Have Chiron run the `bd ready` command
     - Expected: Command succeeds, returns JSON output with the issue list
   - Have Chiron attempt to invoke a build-capable subagent (sisyphus-junior)
     - Expected: Permission denied

4. **After Deployment - Chiron Workspace Access** (manual):
   - Have Chiron read a file within ~/p/**
     - Expected: Success, returns file contents
   - Have Chiron read a file outside ~/p/**
     - Expected: Permission denied or ask user
   - Have Chiron delegate to explore/librarian/athena
     - Expected: Success, subagent executes

5. **After Deployment - Chiron-Forge Write Access** (manual):
   - Have Chiron-Forge write a test file in a ~/p/** directory
     - Expected: Success, file created
   - Have Chiron-Forge attempt to write a file to /tmp
     - Expected: Ask user for approval
   - Have Chiron-Forge run `git add` and `git commit -m "test"`
     - Expected: Success, commit created without asking
   - Have Chiron-Forge attempt `git push`
     - Expected: Ask user for approval
   - Have Chiron-Forge attempt `git config`
     - Expected: Permission denied
   - Have Chiron-Forge attempt `npm install lodash`
     - Expected: Ask user for approval

6. **After Deployment - Secret Blocking Tests** (manual):
   - Attempt to read a .env file with both agents
     - Expected: Permission denied
   - Attempt to read /run/agenix/ with Chiron
     - Expected: Permission denied
   - Attempt to read .env.example (should be allowed)
     - Expected: Success

7. **After Deployment - Bash Injection Prevention** (manual):
   - Have the agent attempt `bash -c "$(cat /malicious)"`
     - Expected: Permission denied
   - Have the agent attempt `` bash -c "`cat /malicious`" ``
     - Expected: Permission denied
   - Have the agent attempt an `eval` command
     - Expected: Permission denied

8. **After Deployment - Git Secret Protection** (manual):
   - Have the agent attempt `git add .env`
     - Expected: Permission denied
   - Have the agent attempt `git commit .env`
     - Expected: Permission denied

9. **Deployment Verification**:
   ```bash
   # After home-manager switch, verify the config is embedded correctly
   cat ~/.config/opencode/config.json | jq '.agent.chiron.permission.external_directory'
   # Expected: Shows the ~/p/** rule, no duplicate keys

   # Verify agents load without errors
   # Expected: No startup errors when launching OpenCode
   ```

---

## Execution Strategy

### Parallel Execution Waves

> Single file, sequential changes - no parallelization possible.

```
Single-Threaded Execution:
Task 1: Fix duplicate external_directory key
Task 2: Apply Chiron permission updates
Task 3: Apply Chiron-Forge permission updates
Task 4: Validate configuration
```

### Dependency Matrix

| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|----------------------|
| 1 | None | 2, 3 | None (must start) |
| 2 | 1 | 4 | 3 |
| 3 | 1 | 4 | 2 |
| 4 | 2, 3 | None | None (validation) |

### Agent Dispatch Summary

| Task | Recommended Agent |
|------|-------------------|
| 1 | delegate_task(category="quick", load_skills=["git-master"]) |
| 2 | delegate_task(category="quick", load_skills=["git-master"]) |
| 3 | delegate_task(category="quick", load_skills=["git-master"]) |
| 4 | User (manual verification) |

---

## TODOs

> Implementation tasks for agent configuration changes. Each task MUST include acceptance criteria with executable verification.

- [x] 1. Fix Duplicate external_directory Key in Chiron Config

**What to do**:
- Remove the duplicate `external_directory` key from the Chiron permission object
- Consolidate into a single object with the specific rule + a catch-all `"*": "ask"`
- Replace `~/projects/personal/**` with `~/p/**` (symlink to the same directory)

**Must NOT do**:
- Leave duplicate keys (the second key overrides the first, breaking the config)
- Skip workspace path validation (verify ~/p/** exists)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple JSON edit, single file change, no complex logic
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (no investigation required)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: Tasks 2, 3 (they depend on a clean config)
- **Blocked By**: None (can start immediately)

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:1-135` - Current agent configuration structure (JSON format, permission object structure)
- `agents/agents.json:7-29` - Chiron permission object (current state with duplicate key)

**API/Type References** (contracts to implement against):
- OpenCode permission schema: `{"permission": {"bash": {...}, "edit": "...", "external_directory": {...}, "task": {...}}}`

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user decisions and requirements
- Metis analysis: Critical issue #1 - Duplicate external_directory key

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission system documentation (allow/ask/deny, wildcards, last-match-wins)
- OpenCode docs: https://opencode.ai/docs/agents/ - Agent configuration format

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json` - Target file to modify; shows the current structure and the duplicate key bug
- Interview draft - Contains all user decisions (~/p/** path, subagent restrictions, etc.)
- OpenCode permissions docs - Explains permission system mechanics (last-match-wins is critical for rule ordering)
- Metis analysis - Identifies the duplicate key bug that MUST be fixed

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0 (valid JSON)

# Verify a single external_directory key in the chiron permission object.
# Note: jq silently drops duplicate keys when parsing, so count them in the raw file:
grep -c '"external_directory"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: One occurrence per agent that defines it (no extras)

# Verify the workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows a directory listing (not "No such file or directory")
```

**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] external_directory key count output (no duplicates)
- [x] Workspace path ls output (shows directory exists)

**Commit**: NO (group with Tasks 2 and 3)

- [x] 2. Apply Chiron Permission Updates

**What to do**:
- Set `edit` to `"deny"` (a planning agent should not write files)
- Set `bash` permissions to deny all except `bd *`:
  ```json
  "bash": {
    "*": "deny",
    "bd *": "allow"
  }
  ```
- Set `external_directory` to `~/p/**` with a catch-all ask:
  ```json
  "external_directory": {
    "~/p/**": "allow",
    "*": "ask"
  }
  ```
- Add a `task` permission to restrict subagents:
  ```json
  "task": {
    "*": "deny",
    "explore": "allow",
    "librarian": "allow",
    "athena": "allow",
    "chiron-forge": "allow"
  }
  ```
- Add `/run/agenix/*` to the read deny list
- Add expanded secret blocking patterns: `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

**Must NOT do**:
- Allow bash file write operators (echo >, cat >, tee, etc.) - these blocks will be added in Task 3 for both agents
- Allow Chiron to invoke build-capable subagents beyond chiron-forge
- Skip the webfetch permission (should be "allow" for research capability)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from the draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in the draft)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 3)
- **Blocks**: Task 4
- **Blocked By**: Task 1
**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:11-24` - Current Chiron read permissions with secret blocking patterns
- `agents/agents.json:114-132` - Athena permission object (read-only subagent reference pattern)

**API/Type References** (contracts to implement against):
- OpenCode task permission schema: `{"task": {"agent-name": "allow"}}`

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron permission decisions
- Metis analysis: Guardrails #7, #8 - Secret blocking patterns, task permission implementation

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/agents/#task-permissions - Task permission documentation
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission level definitions and pattern matching

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:11-24` - Shows current secret blocking patterns to extend
- `agents/agents.json:114-132` - Shows read-only subagent pattern for reference (athena: deny bash, deny edit)
- Interview draft - Contains exact user requirements for Chiron permissions
- OpenCode task docs - Explains how to restrict subagent invocation via task permission

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs:
jq '.chiron.permission.edit' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.bash."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.bash."bd *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '.chiron.permission.task."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.task | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Contains ["*", "athena", "chiron-forge", "explore", "librarian"]

jq '.chiron.permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '.chiron.permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

jq '.chiron.permission.read."/run/agenix/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```

**Evidence to Capture**:
- [x] Edit permission value (should be "deny")
- [x] Bash wildcard permission (should be "deny")
- [x] Bash bd permission (should be "allow")
- [x] Task wildcard permission (should be "deny")
- [x] Task allowlist keys (should show 5 entries)
- [x] External directory ~/p/** permission (should be "allow")
- [x] External directory wildcard permission (should be "ask")
- [x] Read /run/agenix/* permission (should be "deny")

**Commit**: NO (group with Task 3)

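Taken together, the jq assertions above imply a Chiron permission object roughly like the following. This is an orientation sketch assembled from the asserted values, not the literal file contents:

```json
{
  "chiron": {
    "permission": {
      "edit": "deny",
      "webfetch": "allow",
      "bash": { "*": "deny", "bd *": "allow" },
      "task": {
        "*": "deny",
        "athena": "allow",
        "chiron-forge": "allow",
        "explore": "allow",
        "librarian": "allow"
      },
      "external_directory": { "~/p/**": "allow", "*": "ask" },
      "read": { "/run/agenix/*": "deny" }
    }
  }
}
```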
- [x] 3. Apply Chiron-Forge Permission Updates

**What to do**:
- Split `git *: "ask"` into granular rules:
  - Allow: `git add *`, `git commit *`, read-only commands (status, log, diff, branch, show, stash, remote)
  - Ask: `git push *`
  - Deny: `git config *`
- Change package managers from `"ask"` to granular rules:
  - Ask for installs: `npm install *`, `npm i *`, `npx *`, `pip install *`, `pip3 install *`, `uv *`, `bun install *`, `bun i *`, `bunx *`, `yarn install *`, `yarn add *`, `pnpm install *`, `pnpm add *`, `cargo install *`, `go install *`, `make install`
  - Allow other commands implicitly (let them use catch-all rules or existing allow patterns)
- Set `external_directory` to allow `~/p/**` with catch-all ask:

  ```json
  "external_directory": {
    "~/p/**": "allow",
    "*": "ask"
  }
  ```

- Add bash file write protection patterns (apply to both agents):

  ```json
  "bash": {
    "echo * > *": "deny",
    "cat * > *": "deny",
    "printf * > *": "deny",
    "tee": "deny",
    "*>*": "deny",
    ">*>*": "deny"
  }
  ```

- Add bash command injection prevention (apply to both agents):

  ```json
  "bash": {
    "$(*": "deny",
    "`*": "deny",
    "eval *": "deny",
    "source *": "deny"
  }
  ```

- Add git secret protection patterns (apply to both agents):

  ```json
  "bash": {
    "git add *.env*": "deny",
    "git commit *.env*": "deny",
    "git add *credentials*": "deny",
    "git add *secrets*": "deny"
  }
  ```

- Add expanded secret blocking patterns to read permission:
  - `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

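For orientation, the expanded read-permission patterns above would land as deny entries roughly like this (mapping each listed pattern to `"deny"` is the assumption here, consistent with the jq assertions in the acceptance criteria below):

```json
"read": {
  ".local/share/*": "deny",
  ".cache/*": "deny",
  "*.db": "deny",
  "*.keychain": "deny",
  "*.p12": "deny"
}
```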
**Must NOT do**:
- Remove existing bash deny rules for dangerous commands (dd, mkfs, fdisk, parted, eval, sudo, su, systemctl, etc.)
- Allow git config modifications
- Allow bash to write files via any method (must block all redirect patterns)
- Skip command injection prevention ($(), backticks, eval, source)

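The granular rules in this task rely on wildcard pattern matching; the OpenCode permissions docs referenced in this plan describe it as last-match-wins. As a mental model only, that resolution order can be sketched like this, using Python's `fnmatch` globbing as an approximation of the real matcher (the rule set mixes patterns from both agents purely for illustration):

```python
from fnmatch import fnmatch


def resolve(rules: dict[str, str], command: str, default: str = "ask") -> str:
    """Return the action of the last rule whose glob pattern matches the command."""
    action = default
    for pattern, rule_action in rules.items():  # dicts preserve insertion order
        if fnmatch(command, pattern):
            action = rule_action
    return action


# Patterns drawn from this plan; the catch-all comes first so later,
# more specific rules can override it.
rules = {
    "*": "deny",
    "bd *": "allow",
    "git push *": "ask",
    "git config *": "deny",
}

print(resolve(rules, "bd ready"))         # the bd exception wins over the catch-all
print(resolve(rules, "git push origin"))  # more specific rule overrides the deny
print(resolve(rules, "rm -rf /"))         # only the catch-all matches
```

The same model explains why the catch-all ordering matters: placing `"*": "deny"` after the exceptions would swallow them.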
**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in draft)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 2)
- **Blocks**: Task 4
- **Blocked By**: Task 1

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:37-103` - Current Chiron-Forge bash permissions (many explicit allow/ask/deny rules)
- `agents/agents.json:37-50` - Current Chiron-Forge read permissions with secret blocking

**API/Type References** (contracts to implement against):
- OpenCode permission schema: Same as Task 2

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron-Forge permission decisions
- Metis analysis: Guardrails #1-#6 - Bash edit bypass, git secret protection, command injection, git config protection

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission pattern matching (wildcards, last-match-wins)

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:37-103` - Shows current bash permission structure (many explicit rules) to extend with new patterns
- `agents/agents.json:37-50` - Shows current secret blocking to extend with additional patterns
- Interview draft - Contains exact user requirements for Chiron-Forge permissions
- Metis analysis - Provides bash injection prevention patterns and git protection rules

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs:
# (note: hyphenated keys must be quoted in jq, i.e. ."chiron-forge", not .chiron-forge)

# Verify git commit is allowed
jq '."chiron-forge".permission.bash."git commit *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

# Verify git push asks
jq '."chiron-forge".permission.bash."git push *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify git config is denied
jq '."chiron-forge".permission.bash."git config *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify npm install asks
jq '."chiron-forge".permission.bash."npm install *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify bash file write redirects are blocked
jq '."chiron-forge".permission.bash."echo * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."cat * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."tee"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify command injection is blocked
jq '."chiron-forge".permission.bash."$(*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."`*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify git secret protection
jq '."chiron-forge".permission.bash."git add *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."git commit *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify external_directory scope
jq '."chiron-forge".permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '."chiron-forge".permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify expanded secret blocking
jq '."chiron-forge".permission.read.".local/share/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.read.".cache/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.read."*.db"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
```

**Evidence to Capture**:
- [x] Git commit permission (should be "allow")
- [x] Git push permission (should be "ask")
- [x] Git config permission (should be "deny")
- [x] npm install permission (should be "ask")
- [x] bash redirect echo > permission (should be "deny")
- [x] bash redirect cat > permission (should be "deny")
- [x] bash tee permission (should be "deny")
- [x] bash $() injection permission (should be "deny")
- [x] bash backtick injection permission (should be "deny")
- [x] git add *.env* permission (should be "deny")
- [x] git commit *.env* permission (should be "deny")
- [x] external_directory ~/p/** permission (should be "allow")
- [x] external_directory wildcard permission (should be "ask")
- [x] read .local/share/* permission (should be "deny")
- [x] read .cache/* permission (should be "deny")
- [x] read *.db permission (should be "deny")

**Commit**: YES (groups with Tasks 1, 2, 3)
- Message: `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening`
- Files: `agents/agents.json`
- Pre-commit: `jq '.' agents/agents.json > /dev/null 2>&1` (validate JSON)

- [x] 4. Validate Configuration (Manual Verification)

**What to do**:
- Run JSON syntax validation: `jq '.' agents/agents.json`
- Verify no duplicate keys in configuration
- Verify workspace path exists: `ls -la ~/p/`
- Document manual verification procedure for post-deployment testing

**Must NOT do**:
- Skip workspace path validation
- Skip duplicate key verification
- Proceed to deployment without validation

**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple validation commands, documentation task
> **Skills**: git-master
- git-master: Git workflow for committing validation script or notes if needed
> **Skills Evaluated but Omitted**:
- research: Not needed (validation is straightforward)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: None (final validation task)
- **Blocked By**: Tasks 2, 3

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `AGENTS.md` - Repository documentation structure

**API/Type References** (contracts to implement against):
- N/A (validation task)

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user requirements
- Metis analysis: Guardrails #1-#6 - Validation requirements

**External References** (libraries and frameworks):
- N/A (validation task)

**WHY Each Reference Matters** (explain the relevance):
- Interview draft - Contains all requirements to validate against
- Metis analysis - Identifies specific validation steps (duplicate keys, workspace path, etc.)

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
```bash
# Agent runs:

# JSON syntax validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0

# Verify no duplicate external_directory keys
# (caveat: jq keeps only the last occurrence of a duplicate key, so this only
# confirms the parsed result; a literal duplicate in the file needs a
# parser-level check)
jq '.chiron.permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep external_directory | wc -l
# Assert: Output is "1"

jq '."chiron-forge".permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep external_directory | wc -l
# Assert: Output is "1"

# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")

# Verify all permission keys are valid
jq '.chiron.permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0

jq '."chiron-forge".permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
```

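Because `jq` parses JSON and keeps only the last occurrence of a duplicate key, a `keys`-based pipeline cannot catch a literal duplicate in the file. Detecting one requires hooking the parser; a small sketch (`duplicate_keys` is an illustrative helper, not an existing script in this repo):

```python
import json
from collections import Counter


def duplicate_keys(text: str) -> list[str]:
    """Keys that appear more than once within any single object of a JSON document."""
    found: list[str] = []

    def hook(pairs):
        # Called once per object with the raw (key, value) pairs,
        # before Python's dict collapses the duplicates.
        counts = Counter(key for key, _ in pairs)
        found.extend(key for key, n in counts.items() if n > 1)
        return dict(pairs)

    json.loads(text, object_pairs_hook=hook)
    return found


# A literal duplicate that plain `jq 'keys'` would silently collapse:
doc = '{"external_directory": "allow", "external_directory": "ask", "edit": "deny"}'
print(duplicate_keys(doc))  # ['external_directory']
```

Running `duplicate_keys(open('agents/agents.json').read())` covers the whole file, nested objects included.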
**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] Chiron external_directory key count (should be "1")
- [x] Chiron-Forge external_directory key count (should be "1")
- [x] Workspace path ls output (shows directory exists)
- [x] Chiron permission object validation (exit code 0)
- [x] Chiron-Forge permission object validation (exit code 0)

**Commit**: NO (validation only, no changes)

---

## Commit Strategy

| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2, 3 | `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening` | agents/agents.json | `jq '.' agents/agents.json > /dev/null` |
| 4 | N/A (validation only) | N/A | N/A |

---

## Success Criteria

### Verification Commands
```bash
# Pre-deployment validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0

# Duplicate key check (parsed result only; jq keeps the last duplicate key)
jq '.chiron.permission | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json | grep external_directory | wc -l
# Expected: 1

# Workspace path validation
ls -la ~/p/ 2>&1
# Expected: Directory listing

# Post-deployment (manual)
# Have Chiron attempt file edit → Expected: Permission denied
# Have Chiron run bd ready → Expected: Success
# Have Chiron-Forge git commit → Expected: Success
# Have Chiron-Forge git push → Expected: Ask user
# Have agent read .env → Expected: Permission denied
```

### Final Checklist
- [x] Duplicate `external_directory` key fixed
- [x] Chiron edit set to "deny"
- [x] Chiron bash denied except `bd *`
- [x] Chiron task permission restricts subagents (explore, librarian, athena, chiron-forge)
- [x] Chiron external_directory allows ~/p/** only
- [x] Chiron-Forge git commit allowed, git push asks
- [x] Chiron-Forge git config denied
- [x] Chiron-Forge package install commands ask
- [x] Chiron-Forge external_directory allows ~/p/**, asks others
- [x] Bash file write operators blocked (echo >, cat >, tee, etc.)
- [x] Bash command injection blocked ($(), backticks, eval, source)
- [x] Git secret protection added (git add/commit *.env* deny)
- [x] Expanded secret blocking patterns added (.local/share/*, .cache/*, *.db, *.keychain, *.p12)
- [x] /run/agenix/* blocked in read permissions
- [x] JSON syntax valid (jq validates)
- [x] No duplicate keys in configuration
- [x] Workspace path ~/p/** exists

---

# Chiron Personal Agent Framework

## TL;DR

> **Quick Summary**: Create an Oh-My-Opencode-style agent framework for personal productivity with Chiron as the orchestrator, 4 specialized subagents (Hermes, Athena, Apollo, Calliope), and 5 tool integration skills (Basecamp, Outline, MS Teams, Outlook, Obsidian).
>
> **Deliverables**:
> - 6 agent definitions in `agents.json`
> - 6 system prompt files in `prompts/`
> - 5 tool integration skills in `skills/`
> - Validation script extension in `scripts/`
>
> **Estimated Effort**: Medium
> **Parallel Execution**: YES - 3 waves
> **Critical Path**: Task 1 (agents.json) → Task 3-7 (prompts) → Task 9-13 (skills) → Task 14 (validation)
>
> **Status**: ✅ COMPLETE - All 14 main tasks + 6 verification items = 20/20 deliverables

---

## Context

### Original Request
Create an agent framework similar to Oh-My-Opencode but focused on personal productivity:
- Manage work tasks, appointments, projects via Basecamp, Outline, MS Teams, Outlook
- Manage private tasks and knowledge via Obsidian
- Greek mythology naming convention (avoiding Oh My OpenCode names)
- Main agent named "Chiron"

### Interview Summary
**Key Discussions**:
- **Chiron's Role**: Main orchestrator that delegates to specialized subagents
- **Agent Count**: Minimal (3-4 agents initially) + 2 primary agents
- **Domain Separation**: Separate work vs private agents with clear boundaries
- **Tool Priority**: All 4 work tools + Obsidian equally important
- **Basecamp MCP**: User confirmed working MCP at georgeantonopoulos/Basecamp-MCP-Server

**Research Findings**:
- Oh My OpenCode names to avoid: Sisyphus, Atlas, Prometheus, Hephaestus, Metis, Momus, Oracle, Librarian, Explore, Multimodal-Looker, Sisyphus-Junior
- MCP servers available for all work tools + Obsidian
- Protonmail requires custom IMAP/SMTP (deferred)
- Current repo has established skill patterns with SKILL.md + optional subdirectories

### Metis Review
**Identified Gaps** (addressed in plan):
- Delegation model clarified: Chiron uses Question tool for ambiguous requests
- Behavioral difference between Chiron and Chiron-Forge defined
- Executable acceptance criteria added for all tasks
- Edge cases documented in guardrails section
- MCP authentication assumed pre-configured by NixOS (explicit scope boundary)

---

## Work Objectives

### Core Objective
Create a personal productivity agent framework following Oh-My-Opencode patterns, enabling AI-assisted management of work and private life through specialized agents that integrate with existing tools.

### Concrete Deliverables
1. `agents/agents.json` - 6 agent definitions (2 primary, 4 subagent)
2. `prompts/chiron.txt` - Chiron (plan mode) system prompt
3. `prompts/chiron-forge.txt` - Chiron-Forge (build mode) system prompt
4. `prompts/hermes.txt` - Work communication agent prompt
5. `prompts/athena.txt` - Work knowledge agent prompt
6. `prompts/apollo.txt` - Private knowledge agent prompt
7. `prompts/calliope.txt` - Writing agent prompt
8. `skills/basecamp/SKILL.md` - Basecamp integration skill
9. `skills/outline/SKILL.md` - Outline wiki integration skill
10. `skills/msteams/SKILL.md` - MS Teams integration skill
11. `skills/outlook/SKILL.md` - Outlook email integration skill
12. `skills/obsidian/SKILL.md` - Obsidian integration skill
13. `scripts/validate-agents.sh` - Agent validation script

### Definition of Done
- [x] `python3 -c "import json; json.load(open('agents/agents.json'))"` → Exit 0
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skill directories have valid SKILL.md with YAML frontmatter
- [x] `./scripts/test-skill.sh --validate` passes for new skills
- [x] `./scripts/validate-agents.sh` passes

### Must Have
- All agents use Question tool for multi-choice decisions
- External prompt files (not inline in JSON)
- Follow existing skill structure patterns
- Greek naming convention for agents
- Clear separation between plan mode (Chiron) and build mode (Chiron-Forge)
- Skills provide tool-specific knowledge that agents load on demand

### Must NOT Have (Guardrails)
- **NO MCP server configuration** - Managed by NixOS, outside this repo
- **NO authentication handling** - Assume pre-configured MCP tools
- **NO cross-agent state sharing** - Each agent operates independently
- **NO new opencode commands** - Use existing command patterns only
- **NO generic "I'm an AI assistant" prompts** - Domain-specific responsibilities only
- **NO Protonmail integration** - Deferred to future phase
- **NO duplicate tool knowledge across skills** - Each skill focuses on ONE tool
- **NO scripts outside scripts/ directory**
- **NO model configuration changes** - Keep current `zai-coding-plan/glm-4.7`

---

## Verification Strategy (MANDATORY)

> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
>
> ALL tasks in this plan MUST be verifiable WITHOUT any human action.
> This is NOT conditional - it applies to EVERY task, regardless of test strategy.
>
> ### Test Decision
> - **Infrastructure exists**: YES (test-skill.sh)
> - **Automated tests**: Tests-after (validation scripts)
> - **Framework**: bash + python for validation
>
> ### Agent-Executed QA Scenarios (MANDATORY - ALL tasks)
>
> **Verification Tool by Deliverable Type**:
>
> | Type | Tool | How Agent Verifies |
> |------|------|-------------------|
> | **agents.json** | Bash (python/jq) | Parse JSON, validate structure, check required fields |
> | **Prompt files** | Bash (file checks) | File exists, non-empty, contains expected sections |
> | **SKILL.md files** | Bash (test-skill.sh) | YAML frontmatter valid, name matches directory |
> | **Validation scripts** | Bash | Script is executable, runs without error, produces expected output |

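As an illustration of the SKILL.md row in the table above, a frontmatter check could look like the following Python sketch. It assumes frontmatter delimited by `---` lines with a `name:` field matching the directory, per this repo's stated convention; `check_skill` is a hypothetical helper, not an existing script:

```python
import re
from pathlib import Path


def check_skill(skill_dir: str) -> bool:
    """SKILL.md exists, opens with YAML frontmatter, and its name matches the directory."""
    path = Path(skill_dir) / "SKILL.md"
    if not path.is_file():
        return False
    # Frontmatter: an opening ---, key/value lines, a closing --- on its own line.
    match = re.match(r"^---\n(.*?)\n---\n", path.read_text(), re.DOTALL)
    if not match:
        return False
    name = re.search(r"^name:\s*(\S+)", match.group(1), re.MULTILINE)
    return bool(name and name.group(1) == Path(skill_dir).name)
```

For example, `check_skill('skills/basecamp')` would pass only when the frontmatter declares `name: basecamp`.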
---

## Execution Strategy

### Parallel Execution Waves

```
Wave 1 (Start Immediately):
├── Task 1: Create agents.json configuration [no dependencies]
└── Task 2: Create prompts/ directory structure [no dependencies]

Wave 2 (After Wave 1):
├── Task 3: Chiron prompt [depends: 2]
├── Task 4: Chiron-Forge prompt [depends: 2]
├── Task 5: Hermes prompt [depends: 2]
├── Task 6: Athena prompt [depends: 2]
├── Task 7: Apollo prompt [depends: 2]
└── Task 8: Calliope prompt [depends: 2]

Wave 3 (Can parallel with Wave 2):
├── Task 9: Basecamp skill [no dependencies]
├── Task 10: Outline skill [no dependencies]
├── Task 11: MS Teams skill [no dependencies]
├── Task 12: Outlook skill [no dependencies]
└── Task 13: Obsidian skill [no dependencies]

Wave 4 (After Wave 2 + 3):
└── Task 14: Validation script [depends: 1, 3-8]

Critical Path: Task 1 → Task 2 → Tasks 3-8 → Task 14
Parallel Speedup: ~50% faster than sequential
```

### Dependency Matrix

| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 14 | 2, 9-13 |
| 2 | None | 3-8 | 1, 9-13 |
| 3-8 | 2 | 14 | Each other, 9-13 |
| 9-13 | None | None | Each other, 1-2 |
| 14 | 1, 3-8 | None | (final) |

### Agent Dispatch Summary

| Wave | Tasks | Recommended Category |
|------|-------|---------------------|
| 1 | 1, 2 | quick |
| 2 | 3-8 | quick (parallel) |
| 3 | 9-13 | quick (parallel) |
| 4 | 14 | quick |

---

## TODOs

### Wave 1: Foundation

- [x] 1. Create agents.json with 6 agent definitions

**What to do**:
- Update existing `agents/agents.json` to add all 6 agents
- Each agent needs: description, mode, model, prompt reference
- Primary agents: chiron, chiron-forge
- Subagents: hermes, athena, apollo, calliope
- All agents should have `question: "allow"` permission

**Must NOT do**:
- Do not add MCP server configuration
- Do not change model from current pattern
- Do not add inline prompts (use file references)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
  - `agent-development`: Provides agent configuration patterns and best practices

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 2)
- **Blocks**: Task 14
- **Blocked By**: None

**References**:
- `agents/agents.json:1-7` - Current chiron agent configuration pattern
- `skills/agent-development/SKILL.md:40-76` - JSON agent structure reference
- `skills/agent-development/SKILL.md:226-277` - Permissions system reference
- `skills/agent-development/references/opencode-agents-json-example.md` - Complete examples

**Acceptance Criteria**:

```
Scenario: agents.json is valid JSON with all 6 agents
Tool: Bash (python)
Steps:
1. python3 -c "import json; data = json.load(open('agents/agents.json')); print(len(data))"
2. Assert: Output is "6"
3. python3 -c "import json; data = json.load(open('agents/agents.json')); print(sorted(data.keys()))"
4. Assert: Output contains ['apollo', 'athena', 'calliope', 'chiron', 'chiron-forge', 'hermes']
Expected Result: JSON parses, all 6 agents present
Evidence: Command output captured

Scenario: Each agent has required fields
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name, agent in data.items():
    assert 'description' in agent, f'{name}: missing description'
    assert 'mode' in agent, f'{name}: missing mode'
    assert 'prompt' in agent, f'{name}: missing prompt'
print('All agents valid')
"
2. Assert: Output is "All agents valid"
Expected Result: All required fields present
Evidence: Validation output captured

Scenario: Primary agents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
assert data['chiron']['mode'] == 'primary'
assert data['chiron-forge']['mode'] == 'primary'
print('Primary modes correct')
"
Expected Result: Both primary agents have mode=primary
Evidence: Command output

Scenario: Subagents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name in ['hermes', 'athena', 'apollo', 'calliope']:
    assert data[name]['mode'] == 'subagent', f'{name}: wrong mode'
print('Subagent modes correct')
"
Expected Result: All subagents have mode=subagent
Evidence: Command output
```

**Commit**: YES
- Message: `feat(agents): add chiron agent framework with 6 agents`
- Files: `agents/agents.json`
- Pre-commit: `python3 -c "import json; json.load(open('agents/agents.json'))"`

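The four scenarios above could eventually fold into the `scripts/validate-agents.sh` deliverable (Task 14) as a single pass. A Python sketch of that structural validation follows; the function name and return convention are illustrative, not an existing script:

```python
import json

# Names and fields exactly as this plan specifies them.
REQUIRED_FIELDS = ("description", "mode", "prompt")
EXPECTED_AGENTS = {"chiron", "chiron-forge", "hermes", "athena", "apollo", "calliope"}
PRIMARY = {"chiron", "chiron-forge"}


def validate(path: str) -> list[str]:
    """Collect structural problems in an agents.json file; an empty list means valid."""
    problems: list[str] = []
    with open(path) as handle:
        data = json.load(handle)  # raises on invalid JSON, which is itself the syntax check
    missing = EXPECTED_AGENTS - data.keys()
    if missing:
        problems.append(f"missing agents: {sorted(missing)}")
    for name, agent in data.items():
        for field in REQUIRED_FIELDS:
            if field not in agent:
                problems.append(f"{name}: missing {field}")
        expected_mode = "primary" if name in PRIMARY else "subagent"
        if agent.get("mode") != expected_mode:
            problems.append(f"{name}: mode should be {expected_mode}")
    return problems
```

Printing the returned list and exiting non-zero when it is non-empty gives the same pass/fail behavior the scenarios assert one command at a time.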
---

- [x] 2. Create prompts directory structure

**What to do**:
- Create `prompts/` directory if not exists
- Directory will hold all agent system prompt files

**Must NOT do**:
- Do not create prompt files yet (done in Wave 2)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 1)
- **Blocks**: Tasks 3-8
- **Blocked By**: None

**References**:
- `skills/agent-development/SKILL.md:148-159` - Prompt file conventions

**Acceptance Criteria**:

```
Scenario: prompts directory exists
Tool: Bash
Steps:
1. test -d prompts && echo "exists" || echo "missing"
2. Assert: Output is "exists"
Expected Result: Directory created
Evidence: Command output
```

**Commit**: NO (groups with Task 1)

---

### Wave 2: Agent Prompts

- [x] 3. Create Chiron (Plan Mode) system prompt

**What to do**:
- Create `prompts/chiron.txt`
- Define Chiron as the main orchestrator in plan/analysis mode
- Include delegation logic to subagents (Hermes, Athena, Apollo, Calliope)
- Include Question tool usage for ambiguous requests
- Focus on: planning, analysis, guidance, delegation
- Permissions: read-only, no file modifications

**Must NOT do**:
- Do not allow write/edit operations
- Do not include execution responsibilities
- Do not overlap with Chiron-Forge's build capabilities

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
  - `agent-development`: System prompt design patterns

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 4-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-386` - System prompt design patterns
- `skills/agent-development/SKILL.md:397-415` - Prompt best practices
- `skills/agent-development/references/system-prompt-design.md` - Detailed prompt patterns

**Acceptance Criteria**:

```
Scenario: Chiron prompt file exists and is substantial
Tool: Bash
Steps:
1. test -f prompts/chiron.txt && echo "exists" || echo "missing"
2. Assert: Output is "exists"
3. wc -c < prompts/chiron.txt
4. Assert: Output is > 500 (substantial content)
Expected Result: File exists with meaningful content
Evidence: File size captured

Scenario: Chiron prompt contains orchestrator role
Tool: Bash (grep)
Steps:
1. grep -qi "orchestrat" prompts/chiron.txt && echo "found" || echo "missing"
2. Assert: Output is "found"
3. grep -qi "delegat" prompts/chiron.txt && echo "found" || echo "missing"
4. Assert: Output is "found"
Expected Result: Prompt describes orchestration and delegation
Evidence: grep output

Scenario: Chiron prompt references subagents
Tool: Bash (grep)
Steps:
1. grep -qi "hermes" prompts/chiron.txt && echo "found" || echo "missing"
2. grep -qi "athena" prompts/chiron.txt && echo "found" || echo "missing"
3. grep -qi "apollo" prompts/chiron.txt && echo "found" || echo "missing"
4. grep -qi "calliope" prompts/chiron.txt && echo "found" || echo "missing"
Expected Result: All 4 subagents mentioned
Evidence: grep outputs
```

**Commit**: YES (group with Tasks 4-8)
- Message: `feat(prompts): add chiron and subagent system prompts`
- Files: `prompts/*.txt`
- Pre-commit: `for f in prompts/*.txt; do test -s "$f" || exit 1; done`

---
|
||||
|
||||
- [x] 4. Create Chiron-Forge (Build Mode) system prompt

**What to do**:
- Create `prompts/chiron-forge.txt`
- Define as Chiron's execution/build counterpart
- Full write access for task execution
- Can modify files, run commands, complete tasks
- Still delegates to subagents for specialized domains
- Uses Question tool to confirm destructive operations

**Must NOT do**:
- Do not make it a planning-only agent (that's Chiron)
- Do not allow destructive operations without confirmation

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 3, 5-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:316-346` - Complete agent example with chiron/chiron-forge pattern
- `skills/agent-development/SKILL.md:253-277` - Permission patterns for bash commands

**Acceptance Criteria**:

```
Scenario: Chiron-Forge prompt file exists
Tool: Bash
Steps:
1. test -f prompts/chiron-forge.txt && wc -c < prompts/chiron-forge.txt
2. Assert: Output > 500
Expected Result: File exists with substantial content
Evidence: File size

Scenario: Chiron-Forge prompt emphasizes execution
Tool: Bash (grep)
Steps:
1. grep -qi "execut" prompts/chiron-forge.txt && echo "found" || echo "missing"
2. grep -qi "build" prompts/chiron-forge.txt && echo "found" || echo "missing"
Expected Result: Execution/build terminology present
Evidence: grep output
```

**Commit**: YES (groups with Task 3)

---
- [x] 5. Create Hermes (Work Communication) system prompt

**What to do**:
- Create `prompts/hermes.txt`
- Specialization: Basecamp tasks, Outlook email, MS Teams meetings
- Greek god of communication, messengers, quick tasks
- Uses Question tool for: which tool to use, clarifying recipients
- Focus on: task updates, email drafting, meeting scheduling

**Must NOT do**:
- Do not handle documentation (Athena's domain)
- Do not handle personal/private tools (Apollo's domain)
- Do not write long-form content (Calliope's domain)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Hermes prompt defines communication domain
Tool: Bash (grep)
Steps:
1. grep -qi "basecamp" prompts/hermes.txt && echo "found" || echo "missing"
2. grep -qi "outlook\|email" prompts/hermes.txt && echo "found" || echo "missing"
3. grep -qi "teams\|meeting" prompts/hermes.txt && echo "found" || echo "missing"
Expected Result: All 3 tools mentioned
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---
- [x] 6. Create Athena (Work Knowledge) system prompt

**What to do**:
- Create `prompts/athena.txt`
- Specialization: Outline wiki, documentation, knowledge organization
- Greek goddess of wisdom and strategic warfare
- Focus on: wiki search, knowledge retrieval, documentation updates
- Uses Question tool for: which document to update, clarifying search scope

**Must NOT do**:
- Do not handle communication (Hermes's domain)
- Do not handle private knowledge (Apollo's domain)
- Do not write creative content (Calliope's domain)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Athena prompt defines knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "outline" prompts/athena.txt && echo "found" || echo "missing"
2. grep -qi "wiki\|knowledge" prompts/athena.txt && echo "found" || echo "missing"
3. grep -qi "document" prompts/athena.txt && echo "found" || echo "missing"
Expected Result: Outline and knowledge terms present
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---
- [x] 7. Create Apollo (Private Knowledge) system prompt

**What to do**:
- Create `prompts/apollo.txt`
- Specialization: Obsidian vault, personal notes, private knowledge graph
- Greek god of knowledge, prophecy, and light
- Focus on: note search, personal task management, knowledge retrieval
- Uses Question tool for: clarifying which vault, which note

**Must NOT do**:
- Do not handle work tools (Hermes/Athena's domain)
- Do not expose personal data to work contexts
- Do not write long-form content (Calliope's domain)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Apollo prompt defines private knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "obsidian" prompts/apollo.txt && echo "found" || echo "missing"
2. grep -qi "personal\|private" prompts/apollo.txt && echo "found" || echo "missing"
3. grep -qi "note\|vault" prompts/apollo.txt && echo "found" || echo "missing"
Expected Result: Obsidian and personal knowledge terms present
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---
- [x] 8. Create Calliope (Writing) system prompt

**What to do**:
- Create `prompts/calliope.txt`
- Specialization: documentation writing, reports, meeting notes, prose
- Greek muse of epic poetry and eloquence
- Focus on: drafting documents, summarizing, writing assistance
- Uses Question tool for: clarifying tone, audience, format

**Must NOT do**:
- Do not manage tools directly (delegates to other agents for tool access)
- Do not handle short communication (Hermes's domain)
- Do not overlap with Athena's wiki management

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Calliope prompt defines writing domain
Tool: Bash (grep)
Steps:
1. grep -qi "writ" prompts/calliope.txt && echo "found" || echo "missing"
2. grep -qi "document" prompts/calliope.txt && echo "found" || echo "missing"
3. grep -qi "report\|summar" prompts/calliope.txt && echo "found" || echo "missing"
Expected Result: Writing and documentation terms present
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---
### Wave 3: Tool Integration Skills

- [x] 9. Create Basecamp integration skill

**What to do**:
- Create `skills/basecamp/SKILL.md`
- Document Basecamp MCP capabilities (63 tools from georgeantonopoulos/Basecamp-MCP-Server)
- Include: projects, todos, messages, card tables, campfire, webhooks
- Provide workflow examples for common operations
- Reference MCP tool names for agent use

**Must NOT do**:
- Do not include MCP server setup instructions (managed by Nix)
- Do not duplicate general project management advice
- Do not include authentication handling

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
  - `skill-creator`: Provides skill structure patterns and validation

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Tasks 10-13)
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- https://github.com/georgeantonopoulos/Basecamp-MCP-Server - MCP tool documentation

**Acceptance Criteria**:

```
Scenario: Basecamp skill has valid structure
Tool: Bash
Steps:
1. test -d skills/basecamp && echo "dir exists"
2. test -f skills/basecamp/SKILL.md && echo "file exists"
3. ./scripts/test-skill.sh --validate basecamp || echo "validation failed"
Expected Result: Directory and SKILL.md exist, validation passes
Evidence: Command outputs

Scenario: Basecamp skill has valid frontmatter
Tool: Bash (python)
Steps:
1. python3 -c "
import yaml
content = open('skills/basecamp/SKILL.md').read()
front = content.split('---')[1]
data = yaml.safe_load(front)
assert data['name'] == 'basecamp', 'name mismatch'
assert 'description' in data, 'missing description'
print('Valid')
"
Expected Result: YAML frontmatter valid with correct name
Evidence: Python output
```

**Commit**: YES
- Message: `feat(skills): add basecamp integration skill`
- Files: `skills/basecamp/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate basecamp`

---
- [x] 10. Create Outline wiki integration skill

**What to do**:
- Create `skills/outline/SKILL.md`
- Document Outline API capabilities
- Include: document CRUD, search, collections, sharing
- Provide workflow examples for knowledge management

**Must NOT do**:
- Do not include MCP server setup
- Do not duplicate general wiki-usage advice

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://www.getoutline.com/developers - Outline API documentation

**Acceptance Criteria**:

```
Scenario: Outline skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outline && test -f skills/outline/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outline || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add outline wiki integration skill`
- Files: `skills/outline/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outline`

---
- [x] 11. Create MS Teams integration skill

**What to do**:
- Create `skills/msteams/SKILL.md`
- Document MS Teams Graph API capabilities via MCP
- Include: channels, messages, meetings, chat
- Provide workflow examples for team communication

**Must NOT do**:
- Do not include Graph API authentication flows
- Do not overlap with Outlook email functionality

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/api/resources/teams-api-overview - Teams API

**Acceptance Criteria**:

```
Scenario: MS Teams skill has valid structure
Tool: Bash
Steps:
1. test -d skills/msteams && test -f skills/msteams/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate msteams || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add ms teams integration skill`
- Files: `skills/msteams/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate msteams`

---
- [x] 12. Create Outlook email integration skill

**What to do**:
- Create `skills/outlook/SKILL.md`
- Document Outlook Graph API capabilities via MCP
- Include: mail CRUD, calendar, contacts, folders
- Provide workflow examples for email management

**Must NOT do**:
- Do not include Graph API authentication
- Do not overlap with Teams functionality

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/outlook-mail-concept-overview - Outlook API

**Acceptance Criteria**:

```
Scenario: Outlook skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outlook && test -f skills/outlook/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outlook || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add outlook email integration skill`
- Files: `skills/outlook/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outlook`

---
- [x] 13. Create Obsidian integration skill

**What to do**:
- Create `skills/obsidian/SKILL.md`
- Document Obsidian Local REST API capabilities
- Include: vault operations, note CRUD, search, daily notes
- Reference `skills/brainstorming/references/obsidian-workflow.md` for patterns
- Provide workflow examples for personal knowledge management

**Must NOT do**:
- Do not include plugin installation
- Do not duplicate general note-taking advice

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- `skills/brainstorming/references/obsidian-workflow.md` - Existing Obsidian patterns
- https://coddingtonbear.github.io/obsidian-local-rest-api/ - Local REST API docs

**Acceptance Criteria**:

```
Scenario: Obsidian skill has valid structure
Tool: Bash
Steps:
1. test -d skills/obsidian && test -f skills/obsidian/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate obsidian || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add obsidian integration skill`
- Files: `skills/obsidian/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate obsidian`

---
### Wave 4: Validation

- [x] 14. Create agent validation script

**What to do**:
- Create `scripts/validate-agents.sh`
- Validate agents.json structure and required fields
- Verify all referenced prompt files exist
- Check prompt files are non-empty
- Integrate with existing test-skill.sh patterns
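The checks above can be sketched as a small script. This is a hypothetical sketch, not the committed `scripts/validate-agents.sh`: it assumes `agents.json` maps agent names to objects carrying a `prompt` path, which may differ from the real schema. The demo runs against throwaway fixtures in a temp directory so it is self-checking.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of validate-agents.sh; the real agents.json schema may differ.
set -eu

validate() {
  root="$1"
  # 1. agents.json must parse, and every entry must carry a prompt path
  python3 - "$root" <<'PY'
import json, os, sys
root = sys.argv[1]
agents = json.load(open(os.path.join(root, "agents/agents.json")))
for name, cfg in agents.items():
    prompt = os.path.join(root, cfg["prompt"])
    # 2. the referenced prompt file must exist and be non-empty
    assert os.path.isfile(prompt), f"{name}: missing {prompt}"
    assert os.path.getsize(prompt) > 0, f"{name}: empty {prompt}"
print(f"OK: {len(agents)} agents validated")
PY
}

# Demo against throwaway fixtures so the sketch runs anywhere
root="$(mktemp -d)"
mkdir -p "$root/agents" "$root/prompts"
printf '%s' '{"chiron": {"prompt": "prompts/chiron.txt"}}' > "$root/agents/agents.json"
printf '%s' 'You are Chiron, a planning orchestrator.' > "$root/prompts/chiron.txt"
validate "$root"
```

The demo prints `OK: 1 agents validated`; pointing `validate` at the real repo root exercises the same checks, schema permitting.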

**Must NOT do**:
- Do not require MCP servers for validation
- Do not perform functional agent testing (just structural)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential (Wave 4)
- **Blocks**: None
- **Blocked By**: Tasks 1, 3-8

**References**:
- `scripts/test-skill.sh` - Existing validation script pattern

**Acceptance Criteria**:

```
Scenario: Validation script is executable
Tool: Bash
Steps:
1. test -x scripts/validate-agents.sh && echo "executable" || echo "not executable"
2. Assert: Output is "executable"
Expected Result: Script has execute permission
Evidence: Command output

Scenario: Validation script runs successfully
Tool: Bash
Steps:
1. ./scripts/validate-agents.sh
2. Assert: Exit code is 0
Expected Result: All validations pass
Evidence: Script output

Scenario: Validation script catches missing files
Tool: Bash
Steps:
1. mv prompts/chiron.txt prompts/chiron.txt.bak
2. ./scripts/validate-agents.sh
3. Assert: Exit code is NOT 0
4. mv prompts/chiron.txt.bak prompts/chiron.txt
Expected Result: Script detects missing prompt file
Evidence: Error output
```

**Commit**: YES
- Message: `feat(scripts): add agent validation script`
- Files: `scripts/validate-agents.sh`
- Pre-commit: `./scripts/validate-agents.sh`

---
## Commit Strategy

| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2 | `feat(agents): add chiron agent framework with 6 agents` | agents/agents.json, prompts/ | `python3 -c "import json; json.load(open('agents/agents.json'))"` |
| 3-8 | `feat(prompts): add chiron and subagent system prompts` | prompts/*.txt | `for f in prompts/*.txt; do test -s "$f"; done` |
| 9 | `feat(skills): add basecamp integration skill` | skills/basecamp/ | `./scripts/test-skill.sh --validate basecamp` |
| 10 | `feat(skills): add outline wiki integration skill` | skills/outline/ | `./scripts/test-skill.sh --validate outline` |
| 11 | `feat(skills): add ms teams integration skill` | skills/msteams/ | `./scripts/test-skill.sh --validate msteams` |
| 12 | `feat(skills): add outlook email integration skill` | skills/outlook/ | `./scripts/test-skill.sh --validate outlook` |
| 13 | `feat(skills): add obsidian integration skill` | skills/obsidian/ | `./scripts/test-skill.sh --validate obsidian` |
| 14 | `feat(scripts): add agent validation script` | scripts/validate-agents.sh | `./scripts/validate-agents.sh` |

---
## Success Criteria

### Verification Commands
```bash
# Validate agents.json
python3 -c "import json; json.load(open('agents/agents.json'))" # Expected: exit 0

# Count agents
python3 -c "import json; print(len(json.load(open('agents/agents.json'))))" # Expected: 6

# Validate all prompts exist
for f in chiron chiron-forge hermes athena apollo calliope; do
  test -s prompts/$f.txt && echo "$f: OK" || echo "$f: MISSING"
done

# Validate all skills
./scripts/test-skill.sh --validate # Expected: all pass

# Run full validation
./scripts/validate-agents.sh # Expected: exit 0
```

### Final Checklist
- [x] All 6 agents defined in agents.json
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skills have valid SKILL.md with YAML frontmatter
- [x] validate-agents.sh passes
- [x] test-skill.sh --validate passes
- [x] No MCP configuration in repo
- [x] No inline prompts in agents.json
- [x] All agent names are Greek mythology (not conflicting with Oh My OpenCode)

---
# Memory System for AGENTS + Obsidian CODEX

## TL;DR

> **Quick Summary**: Build a dual-layer memory system equivalent to openclaw's — Mem0 for fast semantic search/auto-recall + Obsidian CODEX vault for human-readable, versioned knowledge. Memories are stored in both layers and cross-referenced via IDs.
>
> **Deliverables**:
> - New `skills/memory/SKILL.md` — Core orchestration skill (auto-capture, auto-recall, dual-layer sync)
> - New `80-memory/` folder in CODEX vault with category subfolders + memory template
> - Obsidian MCP server configuration (cyanheads/obsidian-mcp-server)
> - Updated skills (mem0-memory, obsidian), Apollo prompt, CODEX docs, user profile
>
> **Estimated Effort**: Medium (9 tasks across config/docs, no traditional code)
> **Parallel Execution**: YES — 4 waves
> **Critical Path**: Task 1 (vault infra) → Task 4 (memory skill) → Task 9 (validation)

---

## Context

### Original Request
Adapt openclaw's memory system for the opencode AGENTS repo, integrated with the Obsidian CODEX vault at `~/CODEX`. The vault should serve as a "second brain" for both the user AND AI agents.

### Interview Summary
**Key Discussions**:
- Analyzed openclaw's 3-layer memory architecture (SQLite+vectors builtin, memory-core plugin, memory-lancedb plugin with auto-capture/auto-recall)
- User confirmed Mem0 is available self-hosted at localhost:8000 — just needs spinning up
- User chose `80-memory/` as dedicated vault folder with category subfolders
- User chose auto+explicit capture (LLM extraction at session end + "remember this" commands)
- User chose agent QA only (no unit test infrastructure — repo is config/docs only)
- No Obsidian MCP server currently configured — plan to add cyanheads/obsidian-mcp-server

**Research Findings**:
- cyanheads/obsidian-mcp-server (363 stars) — Best MCP server: frontmatter management, vault cache, search with pagination, tag management
- GitHub Copilot's memory system: citation-based verification pattern (Phase 2 candidate)
- Production recommendation: dual-layer (operational memory + documented knowledge)
- Mem0 provides semantic search, user_id/agent_id/run_id scoping, metadata support, `/health` endpoint
- Auto-capture best practice: max 3 per session, LLM extraction > regex patterns
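The Mem0 capabilities listed above can be exercised from bash. This is a hedged sketch: the endpoint paths (`/health`, `/memories`, `/search`) and request shapes are assumptions based on a typical self-hosted Mem0 OSS server and must be verified against the instance actually running at localhost:8000. The `user_id`, content, and `obsidian_ref` values are illustrative.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of talking to a self-hosted Mem0 instance.
# Endpoint paths and payload shapes are assumptions; verify against the deployment.
MEM0="http://localhost:8000"

mem0_up() { curl -fsS --max-time 2 "$MEM0/health" >/dev/null 2>&1; }

remember() {
  # Store a memory scoped to a user, cross-referencing the Obsidian note it mirrors
  curl -fsS -X POST "$MEM0/memories" \
    -H 'Content-Type: application/json' \
    -d "{\"messages\":[{\"role\":\"user\",\"content\":\"$1\"}],
         \"user_id\":\"m3tam3re\",
         \"metadata\":{\"category\":\"$2\",\"obsidian_ref\":\"$3\"}}"
}

recall() {
  # Semantic search scoped to the same user
  curl -fsS -X POST "$MEM0/search" \
    -H 'Content-Type: application/json' \
    -d "{\"query\":\"$1\",\"user_id\":\"m3tam3re\"}"
}

if mem0_up; then
  remember "Prefers dark mode in all editors" "preference" "80-memory/preferences/dark-mode.md"
  recall "editor preferences"
else
  echo "Mem0 unreachable at $MEM0; skipping memory operations"
fi
```

When the service is down the sketch degrades to a skip message instead of failing, matching the error-handling stance this plan takes.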

### Metis Review
**Identified Gaps** (addressed):
- 80-memory/ subfolders vs flat pattern: Resolved — follows `30-resources/` pattern (subfolders by TYPE), not `50-zettelkasten/` flat pattern
- Mem0 health check: Added prerequisite validation step
- Error handling undefined: Defined — Mem0 unavailable → skip, Obsidian unavailable → Mem0 only
- Deployment order: Defined — CODEX first → MCP config → skills → validation
- Scope creep risk: Locked down — citation verification, memory deletion/lifecycle, dashboards all Phase 2
- Agent role clarity: Defined — memory skill loadable by any agent, Apollo is primary memory specialist

---

## Work Objectives

### Core Objective
Build a dual-layer memory system for opencode agents that stores memories in Mem0 (semantic search, operational) AND the Obsidian CODEX vault (human-readable, versioned, wiki-linked). Equivalent in capability to openclaw's memory system.

### Concrete Deliverables
**AGENTS repo** (`~/p/AI/AGENTS`):
- `skills/memory/SKILL.md` — NEW: Core memory skill
- `skills/memory/references/mcp-config.md` — NEW: Obsidian MCP server config documentation
- `skills/mem0-memory/SKILL.md` — UPDATED: Add categories, dual-layer sync
- `skills/obsidian/SKILL.md` — UPDATED: Add 80-memory/ conventions
- `prompts/apollo.txt` — UPDATED: Add memory management responsibilities
- `context/profile.md` — UPDATED: Add memory system configuration

**CODEX vault** (`~/CODEX`):
- `80-memory/` — NEW: Folder with subfolders (preferences/, facts/, decisions/, entities/, other/)
- `templates/memory.md` — NEW: Memory note template
- `tag-taxonomy.md` — UPDATED: Add #memory/* tags
- `AGENTS.md` — UPDATED: Add 80-memory/ docs, folder decision tree, memory workflows
- `README.md` — UPDATED: Add 80-memory/ to folder structure

**Infrastructure** (Nix home-manager — outside AGENTS repo):
- Add cyanheads/obsidian-mcp-server to opencode.json MCP section

### Definition of Done
- [x] All 11 files created/updated as specified
- [x] `curl http://localhost:8000/health` returns 200 (Mem0 running)
- [~] `curl http://127.0.0.1:27124/vault-info` returns vault info (Obsidian REST API) — *Requires Obsidian desktop app to be open*
- [x] `./scripts/test-skill.sh --validate` passes for new/updated skills
- [x] 80-memory/ folder exists in CODEX vault with 5 subfolders
- [x] Memory template creates valid notes with correct frontmatter

### Must Have
- Dual-layer storage: every memory in Mem0 AND Obsidian
- Auto-capture at session end (LLM-based, max 3 per session)
- Explicit "remember this" command support
- Auto-recall: inject relevant memories before agent starts
- 5 categories: preference, fact, decision, entity, other
- Health checks before memory operations
- Cross-reference: mem0_id in Obsidian frontmatter, obsidian_ref in Mem0 metadata
- Error handling: graceful degradation when either layer unavailable
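The dual-layer requirements above imply a vault-side write like the following sketch. It writes the note straight to the filesystem; the real flow may go through the Obsidian MCP server instead, and `mem_abc123` stands in for the ID a Mem0 add call would return. The demo uses a temp directory as the vault so it runs anywhere.

```shell
#!/usr/bin/env sh
# Minimal sketch of the vault half of a dual-layer write.
# Paths and the mem0 ID are illustrative; the real flow may use the MCP server.
set -eu

VAULT="${VAULT:-$(mktemp -d)}"   # stand-in for ~/CODEX in this demo
MEM_ID="mem_abc123"              # illustrative; really returned by the Mem0 add call

write_memory_note() {
  category="$1"; slug="$2"; content="$3"
  case "$category" in
    entity) folder="entities" ;;      # naive "+s" pluralization would give "entitys"
    *)      folder="${category}s" ;;
  esac
  note="$VAULT/80-memory/$folder/$slug.md"
  mkdir -p "$(dirname "$note")"
  cat > "$note" <<EOF
---
type: memory
category: $category
mem0_id: $MEM_ID
source: explicit
importance: medium
created: $(date +%Y-%m-%d)
updated: $(date +%Y-%m-%d)
tags:
  - memory
  - memory/$category
sync_targets: []
---

# $content

## Content
$content
EOF
  echo "$note"
}

note_path="$(write_memory_note preference dark-mode "Prefers dark mode in all editors")"
grep -q "mem0_id: $MEM_ID" "$note_path" && echo "cross-reference recorded"
```

The `mem0_id` line in the generated frontmatter is the vault-side half of the cross-reference; the Mem0-side half is the `obsidian_ref` metadata field.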

### Must NOT Have (Guardrails)
- NO citation-based memory verification (Phase 2)
- NO memory expiration/lifecycle management (Phase 2)
- NO memory deletion/forget functionality (Phase 2)
- NO memory search UI or Obsidian dashboards (Phase 2)
- NO conflict resolution UI between layers (manual edit only)
- NO unit tests (repo has no test infrastructure — agent QA only)
- NO subfolders in 50-zettelkasten/ or 70-tasks/ (respect flat structure)
- NO new memory categories beyond the 5 defined
- NO modifications to existing Obsidian templates (only ADD memory.md)
- NO changes to agents.json (no new agents or agent config changes)

---

## Verification Strategy

> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
>
> ALL tasks MUST be verifiable WITHOUT any human action.
> Every criterion is verifiable by running a command or checking file existence.

### Test Decision
- **Infrastructure exists**: NO (config-only repo)
- **Automated tests**: None (agent QA only)
- **Framework**: N/A

### Agent-Executed QA Scenarios (MANDATORY — ALL tasks)

Verification tools by deliverable type:

| Type | Tool | How Agent Verifies |
|------|------|-------------------|
| Vault folders/files | Bash (ls, test -f) | Check existence, content |
| Skill YAML frontmatter | Bash (grep, python) | Parse and validate fields |
| Mem0 API | Bash (curl) | Send requests, parse JSON |
| Obsidian REST API | Bash (curl) | Read notes, check frontmatter |
| MCP server | Bash (npx) | Test server startup |
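Since either service may be down while these checks run, the pre-operation health check can pick a degradation mode up front. A sketch: the URLs follow this plan's Definition of Done, and the Obsidian Local REST API normally also requires an Authorization header, omitted here for brevity.

```shell
#!/usr/bin/env sh
# Sketch of the pre-operation health check that selects a degradation mode.
# URLs follow this plan's Definition of Done; auth headers omitted.

up() { curl -fsS --max-time 2 "$1" >/dev/null 2>&1; }

memory_mode() {
  mem0=down;     up "http://localhost:8000/health"      && mem0=up
  obsidian=down; up "http://127.0.0.1:27124/vault-info" && obsidian=up

  if [ "$mem0" = up ] && [ "$obsidian" = up ]; then
    echo dual-layer    # both layers available: full sync
  elif [ "$mem0" = up ]; then
    echo mem0-only     # Obsidian unavailable: operate on Mem0 alone
  else
    echo skip          # Mem0 unavailable: skip memory operations entirely
  fi
}

echo "memory mode: $(memory_mode)"
```

With neither service reachable the sketch prints `memory mode: skip`, which is the behavior the Metis review defines for a missing Mem0.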

---

## Execution Strategy

### Parallel Execution Waves

```
Wave 1 (Start Immediately — no dependencies):
├── Task 1: CODEX vault memory infrastructure (folders, template, tags)
└── Task 3: Obsidian MCP server config documentation

Wave 2 (After Wave 1 — depends on vault structure existing):
├── Task 2: CODEX vault documentation updates (AGENTS.md, README.md)
├── Task 4: Create core memory skill (skills/memory/SKILL.md)
├── Task 5: Update Mem0 memory skill
└── Task 6: Update Obsidian skill

Wave 3 (After Wave 2 — depends on skill content for prompt/profile):
├── Task 7: Update Apollo agent prompt
└── Task 8: Update user context profile

Wave 4 (After all — final validation):
└── Task 9: End-to-end validation

Critical Path: Task 1 → Task 4 → Task 9
Parallel Speedup: ~50% faster than sequential
```

### Dependency Matrix

| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 2, 4, 5, 6 | 3 |
| 2 | 1 | 9 | 4, 5, 6 |
| 3 | None | 4 | 1 |
| 4 | 1, 3 | 7, 8, 9 | 5, 6 |
| 5 | 1 | 9 | 4, 6 |
| 6 | 1 | 9 | 4, 5 |
| 7 | 4 | 9 | 8 |
| 8 | 4 | 9 | 7 |
| 9 | ALL | None | None (final) |

### Agent Dispatch Summary

| Wave | Tasks | Recommended Agents |
|------|-------|-------------------|
| 1 | 1, 3 | task(category="quick", load_skills=["obsidian"], run_in_background=false) |
| 2 | 2, 4, 5, 6 | dispatch parallel: task(category="unspecified-high") for Task 4; task(category="quick") for 2, 5, 6 |
| 3 | 7, 8 | task(category="quick", run_in_background=false) |
| 4 | 9 | task(category="unspecified-low", run_in_background=false) |

---

## TODOs
- [x] 1. CODEX Vault Memory Infrastructure

**What to do**:
- Create `80-memory/` folder with 5 subfolders: `preferences/`, `facts/`, `decisions/`, `entities/`, `other/`
- Create each subfolder with a `.gitkeep` file so git tracks empty directories
- Create `templates/memory.md` — memory note template with frontmatter:
  ```yaml
  ---
  type: memory
  category: # preference | fact | decision | entity | other
  mem0_id: # Mem0 memory ID (e.g., "mem_abc123")
  source: explicit # explicit | auto-capture
  importance: # critical | high | medium | low
  created: <% tp.date.now("YYYY-MM-DD") %>
  updated: <% tp.date.now("YYYY-MM-DD") %>
  tags:
    - memory
  sync_targets: []
  ---

  # Memory Title

  ## Content
  <!-- The actual memory content -->

  ## Context
  <!-- When/where this was learned, conversation context -->

  ## Related
  <!-- Wiki links to related notes -->
  ```
- Update `tag-taxonomy.md` — add `#memory` tag category with subtags:
  ```
  #memory
  ├── #memory/preference
  ├── #memory/fact
  ├── #memory/decision
  ├── #memory/entity
  └── #memory/other
  ```
  Include usage examples and definitions for each category
|
||||
|
||||
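The folder scaffold above is mechanical enough to script. A minimal Python sketch (a hypothetical helper for illustration; plain `mkdir -p` plus `touch` works just as well):

```python
from pathlib import Path

CATEGORIES = ["preferences", "facts", "decisions", "entities", "other"]

def init_memory_tree(vault_root: str) -> list[Path]:
    """Create 80-memory/<category>/.gitkeep under the vault root."""
    created = []
    for cat in CATEGORIES:
        folder = Path(vault_root) / "80-memory" / cat
        folder.mkdir(parents=True, exist_ok=True)
        # .gitkeep makes git track the otherwise-empty directory
        keep = folder / ".gitkeep"
        keep.touch()
        created.append(keep)
    return created
```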
**Must NOT do**:
- Do NOT create subfolders inside 50-zettelkasten/ or 70-tasks/
- Do NOT modify existing templates (only ADD memory.md)
- Do NOT use Templater syntax that doesn't match existing templates

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Simple file creation, no complex logic
- **Skills**: [`obsidian`]
  - `obsidian`: Vault conventions, frontmatter patterns, template structure

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 3)
- **Blocks**: Tasks 2, 4, 5, 6
- **Blocked By**: None

**References**:

**Pattern References**:
- `/home/m3tam3re/CODEX/30-resources/` — Subfolder-by-type pattern to follow (bookmarks/, literature/, meetings/, people/, recipes/)
- `/home/m3tam3re/CODEX/templates/task.md` — Template frontmatter pattern (type, status, created, updated, tags, sync_targets)
- `/home/m3tam3re/CODEX/templates/bookmark.md` — Simpler template example

**Documentation References**:
- `/home/m3tam3re/CODEX/AGENTS.md:22-27` — Frontmatter conventions (required fields: type, created, updated)
- `/home/m3tam3re/CODEX/AGENTS.md:163-176` — Template locations table (add memory row)
- `/home/m3tam3re/CODEX/tag-taxonomy.md:1-18` — Tag structure rules (max 3 levels, kebab-case)

**WHY Each Reference Matters**:
- `30-resources/` shows that subfolders-by-type is the established vault pattern for categorized content
- `task.md` template shows the exact frontmatter field set expected by the vault
- `tag-taxonomy.md` rules show the 3-level max hierarchy constraint for new tags

**Acceptance Criteria**:

**Agent-Executed QA Scenarios:**

```
Scenario: Verify 80-memory folder structure
Tool: Bash
Steps:
1. test -d /home/m3tam3re/CODEX/80-memory/preferences
2. test -d /home/m3tam3re/CODEX/80-memory/facts
3. test -d /home/m3tam3re/CODEX/80-memory/decisions
4. test -d /home/m3tam3re/CODEX/80-memory/entities
5. test -d /home/m3tam3re/CODEX/80-memory/other
Expected Result: All 5 directories exist (exit code 0 for each)
Evidence: Shell output captured

Scenario: Verify memory template exists with correct frontmatter
Tool: Bash
Steps:
1. test -f /home/m3tam3re/CODEX/templates/memory.md
2. grep "type: memory" /home/m3tam3re/CODEX/templates/memory.md
3. grep "category:" /home/m3tam3re/CODEX/templates/memory.md
4. grep "mem0_id:" /home/m3tam3re/CODEX/templates/memory.md
Expected Result: File exists and contains required frontmatter fields
Evidence: grep output captured

Scenario: Verify tag-taxonomy updated with memory tags
Tool: Bash
Steps:
1. grep "#memory" /home/m3tam3re/CODEX/tag-taxonomy.md
2. grep "#memory/preference" /home/m3tam3re/CODEX/tag-taxonomy.md
3. grep "#memory/fact" /home/m3tam3re/CODEX/tag-taxonomy.md
Expected Result: All memory tags present in taxonomy
Evidence: grep output captured
```

**Commit**: YES
- Message: `feat(vault): add 80-memory folder structure and memory template`
- Files: `80-memory/`, `templates/memory.md`, `tag-taxonomy.md`
- Repo: `~/CODEX`

---

- [x] 2. CODEX Vault Documentation Updates

**What to do**:
- Update `AGENTS.md`:
  - Add `80-memory/` row to Folder Structure table (line ~11)
  - Add `#### 80-memory` section in Folder Details (after 70-tasks section, ~line 161)
  - Update Folder Decision Tree to include memory branch: `Is it a memory/learned fact? → YES → 80-memory/`
  - Add Memory template row to Template Locations table (line ~165)
  - Add Memory Workflows section (after Sync Workflow): create memory, retrieve memory, dual-layer sync
- Update `README.md`:
  - Add `80-memory/` to folder structure diagram with subfolders
  - Add `80-memory/` row to Folder Details section
  - Add memory template to Templates table

**Must NOT do**:
- Do NOT rewrite existing sections — only ADD new content
- Do NOT remove any existing folder/template documentation

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Documentation additions to existing files, following established patterns
- **Skills**: [`obsidian`]
  - `obsidian`: Vault documentation conventions

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 4, 5, 6)
- **Blocks**: Task 9
- **Blocked By**: Task 1 (needs folder structure to reference)

**References**:

**Pattern References**:
- `/home/m3tam3re/CODEX/AGENTS.md:110-161` — Existing Folder Details sections to follow pattern
- `/home/m3tam3re/CODEX/AGENTS.md:75-108` — Folder Decision Tree format
- `/home/m3tam3re/CODEX/README.md` — Folder structure diagram format

**WHY Each Reference Matters**:
- AGENTS.md folder details show the exact format: Purpose, Structure (flat/subfolders), Key trait, When to use, Naming convention
- Decision tree shows the exact `├─ YES →` format to follow

**Acceptance Criteria**:

```
Scenario: Verify AGENTS.md has 80-memory documentation
Tool: Bash
Steps:
1. grep "80-memory" /home/m3tam3re/CODEX/AGENTS.md
2. grep "Is it a memory" /home/m3tam3re/CODEX/AGENTS.md
3. grep "templates/memory.md" /home/m3tam3re/CODEX/AGENTS.md
Expected Result: All three patterns found
Evidence: grep output

Scenario: Verify README.md has 80-memory in structure
Tool: Bash
Steps:
1. grep "80-memory" /home/m3tam3re/CODEX/README.md
2. grep "preferences/" /home/m3tam3re/CODEX/README.md
Expected Result: Folder and subfolder documented
Evidence: grep output
```

**Commit**: YES
- Message: `docs(vault): add 80-memory documentation to AGENTS.md and README.md`
- Files: `AGENTS.md`, `README.md`
- Repo: `~/CODEX`

---

- [x] 3. Obsidian MCP Server Configuration Documentation

**What to do**:
- Create `skills/memory/references/mcp-config.md` documenting:
  - cyanheads/obsidian-mcp-server configuration for opencode.json
  - Required environment variables: `OBSIDIAN_API_KEY`, `OBSIDIAN_BASE_URL`, `OBSIDIAN_VERIFY_SSL`, `OBSIDIAN_ENABLE_CACHE`
  - opencode.json MCP section snippet:

```json
"Obsidian-Vault": {
  "command": ["npx", "obsidian-mcp-server"],
  "environment": {
    "OBSIDIAN_API_KEY": "<your-api-key>",
    "OBSIDIAN_BASE_URL": "http://127.0.0.1:27123",
    "OBSIDIAN_VERIFY_SSL": "false",
    "OBSIDIAN_ENABLE_CACHE": "true"
  },
  "enabled": true,
  "type": "local"
}
```

  - Nix home-manager snippet showing how to add to `programs.opencode.settings.mcp`
  - Note that this requires `home-manager switch` after adding
  - Available MCP tools list: obsidian_read_note, obsidian_update_note, obsidian_global_search, obsidian_manage_frontmatter, obsidian_manage_tags, obsidian_list_notes, obsidian_delete_note, obsidian_search_replace
  - How to get the API key from Obsidian: Settings → Local REST API plugin

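Before wiring the snippet into the Nix config, the entry's shape can be sanity-checked. A hypothetical Python check for illustration (field names mirror the JSON snippet above; this is not part of the plan's deliverables):

```python
# Required env vars taken from the opencode.json snippet above
REQUIRED_ENV = {"OBSIDIAN_API_KEY", "OBSIDIAN_BASE_URL",
                "OBSIDIAN_VERIFY_SSL", "OBSIDIAN_ENABLE_CACHE"}

def check_mcp_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry looks usable."""
    problems = []
    if entry.get("type") != "local":
        problems.append("type must be 'local' for a spawned MCP server")
    if not entry.get("command"):
        problems.append("command array is missing")
    env = entry.get("environment", {})
    missing = REQUIRED_ENV - set(env)
    problems.extend(f"missing env var: {v}" for v in sorted(missing))
    if env.get("OBSIDIAN_API_KEY", "").startswith("<"):
        problems.append("OBSIDIAN_API_KEY is still a placeholder")
    return problems
```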
**Must NOT do**:
- Do NOT directly modify `~/.config/opencode/opencode.json` (Nix-managed)
- Do NOT modify `agents/agents.json`

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Creating a single reference doc
- **Skills**: [`obsidian`]
  - `obsidian`: Obsidian REST API configuration knowledge

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 1)
- **Blocks**: Task 4
- **Blocked By**: None

**References**:

**Pattern References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:156-166` — Existing API reference pattern
- `/home/m3tam3re/.config/opencode/opencode.json:77-127` — Current MCP config format (Exa, Basecamp, etc.)

**External References**:
- GitHub: `https://github.com/cyanheads/obsidian-mcp-server` — Config docs, env vars, tool list
- npm: `npx obsidian-mcp-server` — Installation method

**WHY Each Reference Matters**:
- opencode.json MCP section shows exact JSON format needed (command array, environment, enabled, type)
- cyanheads repo shows required env vars and their defaults

**Acceptance Criteria**:

```
Scenario: Verify MCP config reference file exists
Tool: Bash
Steps:
1. test -f /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
2. grep "obsidian-mcp-server" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
3. grep "OBSIDIAN_API_KEY" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
4. grep "home-manager" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
Expected Result: File exists with MCP config, env vars, and Nix instructions
Evidence: grep output
```

**Commit**: YES (groups with Task 4)
- Message: `feat(memory): add core memory skill and MCP config reference`
- Files: `skills/memory/SKILL.md`, `skills/memory/references/mcp-config.md`
- Repo: `~/p/AI/AGENTS`

---

- [x] 4. Create Core Memory Skill

**What to do**:
- Create `skills/memory/SKILL.md` — the central orchestration skill for the dual-layer memory system
- YAML frontmatter:

```yaml
---
name: memory
description: "Dual-layer memory system (Mem0 + Obsidian CODEX). Use when: (1) storing information for future recall ('remember this'), (2) auto-capturing session insights, (3) recalling past decisions/preferences/facts, (4) injecting relevant context before tasks. Triggers: 'remember', 'recall', 'what do I know about', 'memory', session end."
compatibility: opencode
---
```

- Sections to include:
  1. **Overview** — Dual-layer architecture (Mem0 operational + Obsidian documented)
  2. **Prerequisites** — Mem0 running at localhost:8000, Obsidian MCP configured (reference mcp-config.md)
  3. **Memory Categories** — 5 categories with definitions and examples:
     - preference: Personal preferences (UI, workflow, communication style)
     - fact: Objective information about user/work (role, tech stack, constraints)
     - decision: Architectural/tool choices made (with rationale)
     - entity: People, organizations, systems, concepts
     - other: Everything else
  4. **Workflow 1: Store Memory (Explicit)** — User says "remember X":
     - Classify category
     - POST to Mem0 `/memories` with user_id, metadata (category, source: "explicit")
     - Create Obsidian note in `80-memory/<category>/` using memory template
     - Cross-reference: mem0_id in Obsidian frontmatter, obsidian_ref in Mem0 metadata
  5. **Workflow 2: Recall Memory** — User asks "what do I know about X":
     - POST to Mem0 `/search` with query
     - Return results with Obsidian note paths for reference
  6. **Workflow 3: Auto-Capture (Session End)** — Automatic extraction:
     - Scan conversation for memory-worthy content (preferences stated, decisions made, important facts)
     - Select top 3 highest-value memories
     - For each: store in Mem0 AND create Obsidian note (source: "auto-capture")
     - Present to user: "I captured these memories: [list]. Confirm or reject?"
  7. **Workflow 4: Auto-Recall (Session Start)** — Context injection:
     - On session start, search Mem0 with user's first message
     - If relevant memories found (score > 0.7), inject as `<relevant-memories>` context
     - Limit to top 5 most relevant
  8. **Error Handling** — Graceful degradation:
     - Mem0 unavailable: `curl http://localhost:8000/health` fails → skip all memory ops, warn user
     - Obsidian unavailable: Store in Mem0 only, log that Obsidian sync failed
     - Both unavailable: Skip memory entirely, continue without memory features
  9. **Integration** — How other skills/agents use memory:
     - Load `memory` skill to access memory workflows
     - Apollo is primary memory specialist
     - Any agent can search/store via Mem0 REST API patterns in `mem0-memory` skill

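Workflow 4's selection rule (score > 0.7, top 5) can be sketched concretely. This assumes Mem0 `/search` hits expose `score` and `memory` fields; the exact response shape should be verified against the mem0-memory skill:

```python
SCORE_THRESHOLD = 0.7  # from Workflow 4
MAX_INJECTED = 5

def select_memories(search_results: list[dict]) -> list[dict]:
    """Filter search hits per Workflow 4: keep score > 0.7, take top 5 by score."""
    relevant = [r for r in search_results if r.get("score", 0) > SCORE_THRESHOLD]
    relevant.sort(key=lambda r: r["score"], reverse=True)
    return relevant[:MAX_INJECTED]

def render_context(memories: list[dict]) -> str:
    """Wrap selected memories in the <relevant-memories> block described above."""
    if not memories:
        return ""
    lines = [f"- {m['memory']}" for m in memories]
    return "<relevant-memories>\n" + "\n".join(lines) + "\n</relevant-memories>"
```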
**Must NOT do**:
- Do NOT implement citation-based verification
- Do NOT implement memory deletion/forget
- Do NOT add memory expiration logic
- Do NOT create dashboards or search UI

**Recommended Agent Profile**:
- **Category**: `unspecified-high`
  - Reason: Core deliverable requiring careful architecture documentation, must be comprehensive
- **Skills**: [`obsidian`, `mem0-memory`]
  - `obsidian`: Vault conventions, template patterns, frontmatter standards
  - `mem0-memory`: Mem0 REST API patterns, endpoint details

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 2, 5, 6)
- **Blocks**: Tasks 7, 8, 9
- **Blocked By**: Tasks 1, 3

**References**:

**Pattern References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md` — Full file: Mem0 REST API patterns, endpoint table, identity scopes, workflow patterns
- `/home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md` — Full file: Obsidian REST API patterns, create/read/update note workflows, frontmatter conventions
- `/home/m3tam3re/p/AI/AGENTS/skills/reflection/SKILL.md` — Skill structure pattern (overview, workflows, integration)

**API References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:13-21` — Quick Reference endpoint table
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:90-109` — Identity scopes (user_id, agent_id, run_id)

**Documentation References**:
- `/home/m3tam3re/CODEX/AGENTS.md:22-27` — Frontmatter conventions for vault notes
- `/home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md` — MCP server config (created in Task 3)

**External References**:
- OpenClaw reference: `/home/m3tam3re/p/AI/openclaw/extensions/memory-lancedb/index.ts` — Auto-capture regex patterns, auto-recall injection, importance scoring (use as inspiration, not copy)

**WHY Each Reference Matters**:
- mem0-memory SKILL.md provides the exact API endpoints and patterns to reference in dual-layer sync workflows
- obsidian SKILL.md provides the vault file creation patterns (curl commands, path encoding)
- openclaw memory-lancedb shows the auto-capture/auto-recall architecture to adapt

**Acceptance Criteria**:

```
Scenario: Validate skill YAML frontmatter
Tool: Bash
Steps:
1. test -f /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "^name: memory$" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "^compatibility: opencode$" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "description:" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: Valid YAML frontmatter with name, description, compatibility
Evidence: grep output

Scenario: Verify skill contains all required workflows
Tool: Bash
Steps:
1. grep -c "## Workflow" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "Auto-Capture" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "Auto-Recall" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "Error Handling" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
5. grep "preference" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: At least 4 workflow sections, auto-capture, auto-recall, error handling, categories
Evidence: grep output

Scenario: Verify dual-layer sync pattern documented
Tool: Bash
Steps:
1. grep "mem0_id" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "obsidian_ref" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "localhost:8000" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: Cross-reference IDs and both layer endpoints documented
Evidence: grep output
```

**Commit**: YES (groups with Task 3)
- Message: `feat(memory): add core memory skill and MCP config reference`
- Files: `skills/memory/SKILL.md`, `skills/memory/references/mcp-config.md`
- Repo: `~/p/AI/AGENTS`

---

- [x] 5. Update Mem0 Memory Skill

**What to do**:
- Add "Memory Categories" section after Identity Scopes (line ~109):
  - Table: category name, definition, Obsidian path, example
  - Metadata pattern for categories: `{"category": "preference", "source": "explicit|auto-capture"}`
- Add "Dual-Layer Sync" section after Workflow Patterns:
  - After storing to Mem0, also create Obsidian note in `80-memory/<category>/`
  - Include mem0_id from response in Obsidian note frontmatter
  - Include obsidian_ref path in Mem0 metadata via update
- Add "Health Check" workflow: Check `/health` before any memory operations
- Add "Error Handling" section: What to do when Mem0 is unavailable

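The sync order above can be sketched end to end. Here `mem0_add`, `mem0_update`, and `obsidian_create` are hypothetical stand-ins for the REST calls documented in the mem0-memory and obsidian skills; the point is the ordering, Mem0 first, then the vault note, then the back-reference:

```python
def dual_layer_store(text: str, category: str, title: str,
                     mem0_add, mem0_update, obsidian_create) -> dict:
    """Sketch of the dual-layer sync order (injected callables, not a real API)."""
    # 1. Store in Mem0 with category metadata; the response yields the memory id
    mem0_id = mem0_add(text, {"category": category, "source": "explicit"})
    # 2. Mirror to the vault under 80-memory/<category>/, embedding mem0_id
    note_path = f"80-memory/{category}/{title}.md"
    obsidian_create(note_path,
                    frontmatter={"type": "memory", "category": category,
                                 "mem0_id": mem0_id},
                    body=text)
    # 3. Write the back-reference into Mem0 metadata
    mem0_update(mem0_id, {"obsidian_ref": note_path})
    return {"mem0_id": mem0_id, "obsidian_ref": note_path}
```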
**Must NOT do**:
- Do NOT delete existing content
- Do NOT change the YAML frontmatter description (triggers)
- Do NOT change existing API endpoint documentation

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Adding sections to existing well-structured file
- **Skills**: [`mem0-memory`]
  - `mem0-memory`: Existing skill patterns to extend

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 2, 4, 6)
- **Blocks**: Task 9
- **Blocked By**: Task 1

**References**:

- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md` — Full file: current content to extend (preserve ALL existing content)

**Acceptance Criteria**:

```
Scenario: Verify categories added to mem0-memory skill
Tool: Bash
Steps:
1. grep "Memory Categories" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
2. grep "preference" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
3. grep "Dual-Layer" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
4. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
Expected Result: New sections present alongside existing content
Evidence: grep output
```

**Commit**: YES
- Message: `feat(mem0-memory): add memory categories and dual-layer sync patterns`
- Files: `skills/mem0-memory/SKILL.md`
- Repo: `~/p/AI/AGENTS`

---

- [x] 6. Update Obsidian Skill

**What to do**:
- Add "Memory Folder Conventions" section (after Best Practices, ~line 228):
  - Document `80-memory/` structure with 5 subfolders
  - Memory note naming: kebab-case (e.g., `prefers-dark-mode.md`)
  - Required frontmatter fields for memory notes (type, category, mem0_id, etc.)
- Add "Memory Note Workflows" section:
  - Create memory note: POST to vault REST API with memory template content
  - Read memory note: GET with path encoding for `80-memory/` paths
  - Search memories: Search within `80-memory/` path filter
- Update Integration table to include memory skill handoff

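The kebab-case naming rule can be pinned down with a small helper (hypothetical, for illustration only):

```python
import re

def memory_note_name(title: str) -> str:
    """Kebab-case a memory title into a note filename, per the naming rule above."""
    # Lowercase, collapse any run of non-alphanumerics to a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{slug}.md"
```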
**Must NOT do**:
- Do NOT change existing content or workflows
- Do NOT modify the YAML frontmatter

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`obsidian`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 9
- **Blocked By**: Task 1

**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md` — Full file: current content to extend

**Acceptance Criteria**:

```
Scenario: Verify memory conventions added to obsidian skill
Tool: Bash
Steps:
1. grep "Memory Folder" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
2. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
3. grep "mem0_id" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
Expected Result: Memory folder docs and frontmatter patterns present
Evidence: grep output
```

**Commit**: YES
- Message: `feat(obsidian): add memory folder conventions and workflows`
- Files: `skills/obsidian/SKILL.md`
- Repo: `~/p/AI/AGENTS`

---

- [x] 7. Update Apollo Agent Prompt

**What to do**:
- Add "Memory Management" to Core Responsibilities list (after item 4):
  - Store memories in dual-layer system (Mem0 + Obsidian CODEX)
  - Retrieve memories via semantic search (Mem0)
  - Auto-capture session insights at session end (max 3, confirm with user)
  - Handle explicit "remember this" requests
  - Inject relevant memories into context on session start
- Add memory-related tools to Tool Usage section
- Add memory error handling to Edge Cases

**Must NOT do**:
- Do NOT remove existing responsibilities
- Do NOT change Apollo's identity or boundaries

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Task 8)
- **Blocks**: Task 9
- **Blocked By**: Task 4

**References**:
- `/home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt` — Full file (47 lines): current prompt to extend

**Acceptance Criteria**:

```
Scenario: Verify memory management added to Apollo prompt
Tool: Bash
Steps:
1. grep -i "memory" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt | wc -l
2. grep "Mem0" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt
3. grep "auto-capture" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt
Expected Result: Multiple memory references, Mem0 mentioned, auto-capture documented
Evidence: grep output
```

**Commit**: YES (groups with Task 8)
- Message: `feat(agents): add memory management to Apollo prompt and user profile`
- Files: `prompts/apollo.txt`, `context/profile.md`
- Repo: `~/p/AI/AGENTS`

---

- [x] 8. Update User Context Profile

**What to do**:
- Add "Memory System" section to `context/profile.md`:
  - Mem0 endpoint: `http://localhost:8000`
  - Mem0 user_id: `m3tam3re` (or whatever the user's ID should be)
  - Obsidian vault path: `~/CODEX`
  - Memory folder: `80-memory/`
  - Auto-capture: enabled, max 3 per session
  - Auto-recall: enabled, top 5 results, score threshold 0.7
  - Memory categories: preference, fact, decision, entity, other
  - Obsidian MCP server: cyanheads/obsidian-mcp-server (see skills/memory/references/mcp-config.md)

**Must NOT do**:
- Do NOT remove existing profile content

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Task 7)
- **Blocks**: Task 9
- **Blocked By**: Task 4

**References**:
- `/home/m3tam3re/p/AI/AGENTS/context/profile.md` — Current profile to extend

**Acceptance Criteria**:

```
Scenario: Verify memory config in profile
Tool: Bash
Steps:
1. grep "Memory System" /home/m3tam3re/p/AI/AGENTS/context/profile.md
2. grep "localhost:8000" /home/m3tam3re/p/AI/AGENTS/context/profile.md
3. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/context/profile.md
4. grep "auto-capture" /home/m3tam3re/p/AI/AGENTS/context/profile.md
Expected Result: Memory system section with all config values
Evidence: grep output
```

**Commit**: YES (groups with Task 7)
- Message: `feat(agents): add memory management to Apollo prompt and user profile`
- Files: `prompts/apollo.txt`, `context/profile.md`
- Repo: `~/p/AI/AGENTS`

---

- [x] 9. End-to-End Validation

**What to do**:
- Verify ALL files exist and contain expected content
- Run skill validation: `./scripts/test-skill.sh memory`
- Test Mem0 availability: `curl http://localhost:8000/health`
- Test Obsidian REST API: `curl http://127.0.0.1:27124/vault-info`
- Verify CODEX vault structure: `ls -la ~/CODEX/80-memory/`
- Verify template: `cat ~/CODEX/templates/memory.md | head -20`
- Check all YAML frontmatter valid across new/updated skill files

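The frontmatter check can be approximated without extra tooling. A rough sketch that assumes frontmatter is a `---`-delimited block of `key: value` lines (a real check would use a YAML parser):

```python
def frontmatter_valid(text: str,
                      required=("name", "description", "compatibility")) -> bool:
    """Rough check that a SKILL.md starts with frontmatter carrying required keys."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return False
    try:
        end = lines[1:].index("---") + 1  # closing delimiter
    except ValueError:
        return False
    keys = {line.split(":", 1)[0].strip() for line in lines[1:end] if ":" in line}
    return all(k in keys for k in required)
```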
**Must NOT do**:
- Do NOT create automated test infrastructure
- Do NOT modify any files — validation only

**Recommended Agent Profile**:
- **Category**: `unspecified-low`
  - Reason: Verification only, running commands and checking outputs
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Wave 4 (final, sequential)
- **Blocks**: None (final task)
- **Blocked By**: ALL tasks (1-8)

**Acceptance Criteria**:

```
Scenario: Full file existence check
Tool: Bash
Steps:
1. test -f ~/p/AI/AGENTS/skills/memory/SKILL.md
2. test -f ~/p/AI/AGENTS/skills/memory/references/mcp-config.md
3. test -d ~/CODEX/80-memory/preferences
4. test -f ~/CODEX/templates/memory.md
5. grep "80-memory" ~/CODEX/AGENTS.md
6. grep "#memory" ~/CODEX/tag-taxonomy.md
7. grep "80-memory" ~/CODEX/README.md
8. grep -i "memory" ~/p/AI/AGENTS/prompts/apollo.txt
9. grep "Memory System" ~/p/AI/AGENTS/context/profile.md
Expected Result: All checks pass (exit code 0)
Evidence: Shell output captured

Scenario: Mem0 health check
Tool: Bash
Preconditions: Mem0 server must be running
Steps:
1. curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/health
Expected Result: HTTP 200
Evidence: Status code captured
Note: If Mem0 not running, this test will fail — spin up Mem0 first

Scenario: Obsidian REST API check
Tool: Bash
Preconditions: Obsidian desktop app must be running with Local REST API plugin
Steps:
1. curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:27124/vault-info
Expected Result: HTTP 200
Evidence: Status code captured
Note: Requires Obsidian desktop app to be open

Scenario: Skill validation
Tool: Bash
Steps:
1. cd ~/p/AI/AGENTS && ./scripts/test-skill.sh memory
Expected Result: Validation passes (no errors)
Evidence: Script output captured
```

**Commit**: NO (validation only, no file changes)

---

## Commit Strategy

| After Task | Message | Files | Repo | Verification |
|------------|---------|-------|------|--------------|
| 1 | `feat(vault): add 80-memory folder structure and memory template` | 80-memory/, templates/memory.md, tag-taxonomy.md | ~/CODEX | ls + grep |
| 2 | `docs(vault): add 80-memory documentation to AGENTS.md and README.md` | AGENTS.md, README.md | ~/CODEX | grep |
| 3+4 | `feat(memory): add core memory skill and MCP config reference` | skills/memory/SKILL.md, skills/memory/references/mcp-config.md | ~/p/AI/AGENTS | test-skill.sh |
| 5 | `feat(mem0-memory): add memory categories and dual-layer sync patterns` | skills/mem0-memory/SKILL.md | ~/p/AI/AGENTS | grep |
| 6 | `feat(obsidian): add memory folder conventions and workflows` | skills/obsidian/SKILL.md | ~/p/AI/AGENTS | grep |
| 7+8 | `feat(agents): add memory management to Apollo prompt and user profile` | prompts/apollo.txt, context/profile.md | ~/p/AI/AGENTS | grep |

**Note**: Two different git repos! CODEX and AGENTS commits are independent.

---

## Success Criteria

### Verification Commands
```bash
# CODEX vault structure
ls ~/CODEX/80-memory/ # Expected: preferences/ facts/ decisions/ entities/ other/
cat ~/CODEX/templates/memory.md | head -5 # Expected: ---\ntype: memory
grep "#memory" ~/CODEX/tag-taxonomy.md # Expected: #memory/* tags

# AGENTS skill validation
cd ~/p/AI/AGENTS && ./scripts/test-skill.sh memory # Expected: pass

# Infrastructure (requires services running)
curl -s http://localhost:8000/health # Expected: 200
curl -s http://127.0.0.1:27124/vault-info # Expected: 200
```

### Final Checklist
- [x] All "Must Have" present (dual-layer, auto-capture, auto-recall, categories, health checks, error handling)
- [x] All "Must NOT Have" absent (no citation system, no deletion, no dashboards, no unit tests)
- [x] CODEX commits pushed (vault structure + docs)
- [x] AGENTS commits pushed (skills + prompts + profile)
- [x] User reminded to add Obsidian MCP to Nix config and run `home-manager switch`
- [x] User reminded to spin up Mem0 server before using memory features
File diff suppressed because it is too large (Load Diff)

AGENTS.md (49 changed lines)
@@ -1,15 +1,5 @@

# Opencode Skills Repository

## MANDATORY: Use td for Task Management

Run td usage --new-session at conversation start (or after /clear). This tells you what to work on next.

Sessions are automatic (based on terminal/agent context). Optional:
- td session "name" to label the current session
- td session --new to force a new session in the same context

Use td usage -q after first read.

Configuration repository for Opencode Agent Skills, context files, and agent configurations. Deployed via Nix home-manager to `~/.config/opencode/`.

## Quick Commands
@@ -22,21 +12,22 @@ Configuration repository for Opencode Agent Skills, context files, and agent con

```bash
# Skill creation
python3 skills/skill-creator/scripts/init_skill.py <name> --path skills/

# Issue tracking (beads)
bd ready && bd create "title" && bd close <id> && bd sync
```

## Directory Structure

```
.
├── skills/              # Agent skills (25 modules)
├── skills/              # Agent skills (15 modules)
│   └── skill-name/
│       ├── SKILL.md     # Required: YAML frontmatter + workflows
│       ├── scripts/     # Executable code (optional)
│       ├── references/  # Domain docs (optional)
│       └── assets/      # Templates/files (optional)
├── rules/               # AI coding rules (languages, concerns, frameworks)
│   ├── languages/       # Python, TypeScript, Nix, Shell
│   ├── concerns/        # Testing, naming, documentation, etc.
│   └── frameworks/      # Framework-specific rules (n8n, etc.)
├── agents/              # Agent definitions (agents.json)
├── prompts/             # System prompts (chiron*.txt)
├── context/             # User profiles
```
@@ -68,7 +59,7 @@ compatibility: opencode

## Anti-Patterns (CRITICAL)

**Frontend Design**: NEVER use generic AI aesthetics, NEVER converge on common choices
**Excalidraw**: NEVER use diamond shapes (broken arrows), NEVER use `label` property
**Excalidraw**: NEVER use `label` property (use boundElements + text element pairs instead)
**Debugging**: NEVER fix just symptom, ALWAYS find root cause first
**Excel**: ALWAYS respect existing template conventions over guidelines
**Structure**: NEVER place scripts/docs outside scripts/references/ directories
@@ -87,27 +78,46 @@ compatibility: opencode

## Deployment

**Nix pattern** (non-flake input):
**Nix flake pattern**:
```nix
agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  flake = false; # Files only, not a Nix flake
  inputs.nixpkgs.follows = "nixpkgs"; # Optional but recommended
};
```

**Exports:**
- `packages.skills-runtime` — composable runtime with all skill dependencies
- `devShells.default` — dev environment for working on skills

**Mapping** (via home-manager):
- `skills/`, `context/`, `commands/`, `prompts/` → symlinks
- `agents/agents.json` → embedded into config.json
- Agent changes: require `home-manager switch`
- Other changes: visible immediately

## Rules System

Centralized AI coding rules consumed via `mkOpencodeRules` from m3ta-nixpkgs:

```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
  inherit agents;
  languages = [ "python" "typescript" ];
  frameworks = [ "n8n" ];
};
```

See `rules/USAGE.md` for full documentation.

## Notes for AI Agents

1. **Config-only repo** - No compilation, no build, manual validation only
2. **Skills are documentation** - Write for AI consumption, progressive disclosure
3. **Consistent structure** - All skills follow 4-level deep pattern (skills/name/ + optional subdirs)
4. **Cross-cutting concerns** - Standardized SKILL.md, workflow patterns, delegation rules
5. **Always push** - Session completion workflow: commit + bd sync + git push
5. **Always push** - Session completion workflow: commit + git push

## Quality Gates

@@ -115,4 +125,5 @@ Before committing:

1. `./scripts/test-skill.sh --validate`
2. Python shebang + docstrings check
3. No extraneous files (README.md, CHANGELOG.md in skills/)
4. Git status clean
4. If skill has scripts with external dependencies → verify `flake.nix` is updated (see skill-creator Step 4)
5. Git status clean
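
Gate 3 ("no extraneous files") is simple enough to automate — a hypothetical sketch against a throwaway fixture tree, not the repo's actual tooling (the real validation lives in `test-skill.sh`):

```python
import tempfile
from pathlib import Path

EXTRANEOUS = {"README.md", "CHANGELOG.md"}

def find_extraneous(skills_dir: Path) -> list[Path]:
    """Return files inside skills/ that the quality gates forbid."""
    return [p for p in skills_dir.rglob("*") if p.name in EXTRANEOUS]

# Self-contained demo: build a fake skills/ tree with one violation.
with tempfile.TemporaryDirectory() as tmp:
    skill = Path(tmp) / "skills" / "demo-skill"
    skill.mkdir(parents=True)
    (skill / "SKILL.md").touch()
    (skill / "README.md").touch()  # violates gate 3
    offenders = find_extraneous(Path(tmp) / "skills")
    print([p.name for p in offenders])  # → ['README.md']
```

A check like this can run as a pre-commit step so violations are caught before `git status` is even inspected.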

**README.md** (276 lines changed)
@@ -8,36 +8,45 @@ This repository serves as a **personal AI operating system** - a collection of s

- **Productivity & Task Management** - PARA methodology, GTD workflows, project tracking
- **Knowledge Management** - Note-taking, research workflows, information organization
- **Communications** - Email management, meeting scheduling, follow-up tracking
- **AI Development** - Tools for creating new skills and agent configurations
- **Memory & Context** - Persistent memory systems, conversation analysis
- **Document Processing** - PDF manipulation, spreadsheet handling, diagram generation
- **Custom Workflows** - Domain-specific automation and specialized agents

## 📂 Repository Structure

```
.
├── agent/                   # Agent definitions (agents.json)
├── prompts/                 # Agent system prompts (chiron.txt, chiron-forge.txt)
├── agents/                  # Agent definitions (agents.json)
├── prompts/                 # Agent system prompts (chiron.txt, chiron-forge.txt, etc.)
├── context/                 # User profiles and preferences
│   └── profile.md           # Work style, PARA areas, preferences
├── command/                 # Custom command definitions
├── commands/                # Custom command definitions
│   └── reflection.md
├── skill/                   # Opencode Agent Skills (11+ skills)
│   ├── task-management/     # PARA-based productivity
│   ├── skill-creator/       # Meta-skill for creating skills
│   ├── reflection/          # Conversation analysis
│   ├── communications/      # Email & messaging
│   ├── calendar-scheduling/ # Time management
│   ├── mem0-memory/         # Persistent memory
│   ├── research/            # Investigation workflows
│   ├── knowledge-management/ # Note capture & organization
├── skills/                  # Opencode Agent Skills (15 skills)
│   ├── agent-development/   # Agent creation and configuration
│   ├── basecamp/            # Basecamp project management
│   ├── brainstorming/       # Ideation & strategic thinking
│   └── plan-writing/        # Project planning templates
│   ├── doc-translator/      # Documentation translation
│   ├── excalidraw/          # Architecture diagrams
│   ├── frontend-design/     # UI/UX design patterns
│   ├── memory/              # Persistent memory system
│   ├── obsidian/            # Obsidian vault management
│   ├── outline/             # Outline wiki integration
│   ├── pdf/                 # PDF manipulation toolkit
│   ├── prompt-engineering-patterns/ # Prompt patterns
│   ├── reflection/          # Conversation analysis
│   ├── skill-creator/       # Meta-skill for creating skills
│   ├── systematic-debugging/ # Debugging methodology
│   └── xlsx/                # Spreadsheet handling
├── scripts/                 # Repository utility scripts
│   └── test-skill.sh        # Test skills without deploying
├── .beads/                  # Issue tracking database
├── rules/                   # AI coding rules
│   ├── languages/           # Python, TypeScript, Nix, Shell
│   ├── concerns/            # Testing, naming, documentation
│   └── frameworks/          # Framework-specific rules (n8n)
├── flake.nix                # Nix flake: dev shell + skills-runtime export
├── .envrc                   # direnv config (use flake)
├── AGENTS.md                # Developer documentation
└── README.md                # This file
```

@@ -46,43 +55,96 @@ This repository serves as a **personal AI operating system** - a collection of s

### Prerequisites

- **Opencode** - AI coding assistant ([opencode.dev](https://opencode.ai))
- **Nix** (optional) - For declarative deployment via home-manager
- **Python 3** - For skill validation and creation scripts
- **Nix** with flakes enabled — for reproducible dependency management and deployment
- **direnv** (recommended) — auto-activates the development environment when entering the repo
- **Opencode** — AI coding assistant ([opencode.ai](https://opencode.ai))

### Installation

#### Option 1: Nix Flake (Recommended)

This repository is consumed as a **non-flake input** by your NixOS configuration:
This repository is a **Nix flake** that exports:

- **`devShells.default`** — development environment for working on skills (activated via direnv)
- **`packages.skills-runtime`** — composable runtime with all skill script dependencies (Python packages + system tools)

**Consume in your system flake:**

```nix
# In your flake.nix
# flake.nix
inputs.agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  flake = false; # Pure files, not a Nix flake
  inputs.nixpkgs.follows = "nixpkgs";
};

# In your home-manager module (e.g., opencode.nix)
xdg.configFile = {
  "opencode/skill".source = "${inputs.agents}/skill";
  "opencode/skills".source = "${inputs.agents}/skills";
  "opencode/context".source = "${inputs.agents}/context";
  "opencode/command".source = "${inputs.agents}/command";
  "opencode/commands".source = "${inputs.agents}/commands";
  "opencode/prompts".source = "${inputs.agents}/prompts";
};

# Agent config is embedded into config.json, not deployed as files
programs.opencode.settings.agent = builtins.fromJSON
  (builtins.readFile "${inputs.agents}/agent/agents.json");
  (builtins.readFile "${inputs.agents}/agents/agents.json");
```

Rebuild your system:
**Deploy skills via home-manager:**

```nix
# home-manager module (e.g., opencode.nix)
{ inputs, system, ... }:
{
  # Skill files — symlinked, changes visible immediately
  xdg.configFile = {
    "opencode/skills".source = "${inputs.agents}/skills";
    "opencode/context".source = "${inputs.agents}/context";
    "opencode/commands".source = "${inputs.agents}/commands";
    "opencode/prompts".source = "${inputs.agents}/prompts";
  };

  # Agent config — embedded into config.json (requires home-manager switch)
  programs.opencode.settings.agent = builtins.fromJSON
    (builtins.readFile "${inputs.agents}/agents/agents.json");

  # Skills runtime — ensures opencode always has script dependencies
  home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
}
```

**Compose into project flakes** (so opencode has skill deps in any project):

```nix
# Any project's flake.nix
{
  inputs.agents.url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  inputs.agents.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, nixpkgs, agents, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = [
          # project-specific tools
          pkgs.nodejs
          # skill script dependencies
          agents.packages.${system}.skills-runtime
        ];
      };
    };
}
```

Rebuild:

```bash
home-manager switch
```

**Note**: The `agent/` directory is NOT deployed as files. Instead, `agents.json` is read at Nix evaluation time and embedded into the opencode `config.json`.
**Note**: The `agents/` directory is NOT deployed as files. Instead, `agents.json` is read at Nix evaluation time and embedded into the opencode `config.json`.

#### Option 2: Manual Installation

@@ -92,8 +154,11 @@ Clone and symlink:

```bash
# Clone repository
git clone https://github.com/yourusername/AGENTS.git ~/AGENTS

# Create symlink to Opencode config directory
ln -s ~/AGENTS ~/.config/opencode
# Create symlinks to Opencode config directory
ln -s ~/AGENTS/skills ~/.config/opencode/skills
ln -s ~/AGENTS/context ~/.config/opencode/context
ln -s ~/AGENTS/commands ~/.config/opencode/commands
ln -s ~/AGENTS/prompts ~/.config/opencode/prompts
```

### Verify Installation

@@ -101,8 +166,8 @@ ln -s ~/AGENTS ~/.config/opencode

Check that Opencode can see your skills:

```bash
# Skills should be available at ~/.config/opencode/skill/
ls ~/.config/opencode/skill/
# Skills should be available at ~/.config/opencode/skills/
ls ~/.config/opencode/skills/
```

## 🎨 Creating Your First Skill

@@ -112,19 +177,19 @@ Skills are modular packages that extend Opencode with specialized knowledge and

### 1. Initialize a New Skill

```bash
python3 skill/skill-creator/scripts/init_skill.py my-skill-name --path skill/
python3 skills/skill-creator/scripts/init_skill.py my-skill-name --path skills/
```

This creates:

- `skill/my-skill-name/SKILL.md` - Main skill documentation
- `skill/my-skill-name/scripts/` - Executable code (optional)
- `skill/my-skill-name/references/` - Reference documentation (optional)
- `skill/my-skill-name/assets/` - Templates and files (optional)
- `skills/my-skill-name/SKILL.md` - Main skill documentation
- `skills/my-skill-name/scripts/` - Executable code (optional)
- `skills/my-skill-name/references/` - Reference documentation (optional)
- `skills/my-skill-name/assets/` - Templates and files (optional)

### 2. Edit the Skill

Open `skill/my-skill-name/SKILL.md` and customize:
Open `skills/my-skill-name/SKILL.md` and customize:

```yaml
---

@@ -139,68 +204,98 @@ compatibility: opencode

[Your skill instructions for Opencode]
```
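
What a frontmatter check looks like in spirit (a hypothetical sketch — the real logic lives in skill-creator's `quick_validate.py`, and the required-key list here is an assumption):

```python
REQUIRED_KEYS = ("name", "description")

def validate_frontmatter(skill_md: str) -> list[str]:
    """Return a list of problems with a SKILL.md's YAML frontmatter."""
    problems = []
    lines = skill_md.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing opening '---' frontmatter delimiter"]
    try:
        # Find the closing delimiter, skipping the opening one.
        end = lines[1:].index("---") + 1
    except ValueError:
        return ["missing closing '---' frontmatter delimiter"]
    header = "\n".join(lines[1:end])
    for key in REQUIRED_KEYS:
        if f"{key}:" not in header:
            problems.append(f"missing required key: {key}")
    return problems

sample = "---\nname: my-skill\ndescription: Does a thing\n---\n# Body\n"
print(validate_frontmatter(sample))  # → []
```

An empty problem list means the frontmatter is at least structurally plausible; the real validator does stricter YAML parsing.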

### 3. Validate the Skill
### 3. Register Dependencies

```bash
python3 skill/skill-creator/scripts/quick_validate.py skill/my-skill-name
```

If your skill includes scripts with external dependencies, add them to `flake.nix`:

```nix
# Python packages — add to pythonEnv:
# my-skill: my_script.py
some-python-package

# System tools — add to skills-runtime paths:
# my-skill: needed by my_script.py
pkgs.some-tool
```

### 4. Test the Skill
Verify: `nix develop --command python3 -c "import some_package"`

Test your skill without deploying via home-manager:
### 4. Validate the Skill

```bash
python3 skills/skill-creator/scripts/quick_validate.py skills/my-skill-name
```

### 5. Test the Skill

```bash
# Use the test script to validate and list skills
./scripts/test-skill.sh my-skill-name   # Validate specific skill
./scripts/test-skill.sh --list          # List all dev skills
./scripts/test-skill.sh --run           # Launch opencode with dev skills
```

The test script creates a temporary config directory with symlinks to this repo's skills, allowing you to test changes before committing.
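
The mechanism is roughly this — a hypothetical sketch of what the script does, not its actual shell code (the directory names are illustrative):

```python
import os
import tempfile
from pathlib import Path

def make_dev_config(repo_skills: Path) -> Path:
    """Create a throwaway opencode config dir whose skills/ points at the repo."""
    config_dir = Path(tempfile.mkdtemp(prefix="opencode-dev-"))
    os.symlink(repo_skills, config_dir / "skills")
    return config_dir

# Demo against a throwaway "repo" so the sketch is self-contained.
with tempfile.TemporaryDirectory() as repo:
    skills = Path(repo) / "skills"
    skills.mkdir()
    cfg = make_dev_config(skills)
    print((cfg / "skills").is_symlink())  # → True
```

Because the config directory only holds symlinks, edits in the repo are visible to the launched opencode instance immediately.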

## 📚 Available Skills

| Skill | Purpose | Status |
| ------------------------ | ------------------------------------------------------- | --------- |
| **task-management** | PARA-based productivity with Obsidian Tasks integration | ✅ Active |
| **skill-creator** | Guide for creating new Opencode skills | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
| **communications** | Email drafts, follow-ups, message management | ✅ Active |
| **calendar-scheduling** | Time blocking, meeting management | ✅ Active |
| **mem0-memory** | Persistent memory storage and retrieval | ✅ Active |
| **research** | Investigation workflows, source management | ✅ Active |
| **knowledge-management** | Note capture, knowledge organization | ✅ Active |
| --------------------------- | -------------------------------------------------------------- | ------------ |
| **agent-development** | Create and configure Opencode agents | ✅ Active |
| **basecamp** | Basecamp project & todo management via MCP | ✅ Active |
| **brainstorming** | General-purpose ideation with Obsidian save | ✅ Active |
| **plan-writing** | Project plans with templates (kickoff, tasks, risks) | ✅ Active |
| **brainstorming** | General-purpose ideation and strategic thinking | ✅ Active |
| **doc-translator** | Documentation translation to German/Czech with Outline publish | ✅ Active |
| **excalidraw** | Architecture diagrams from codebase analysis | ✅ Active |
| **frontend-design** | Production-grade UI/UX with high design quality | ✅ Active |
| **memory** | SQLite-based persistent memory with hybrid search | ✅ Active |
| **obsidian** | Obsidian vault management via Local REST API | ✅ Active |
| **outline** | Outline wiki integration for team documentation | ✅ Active |
| **pdf** | PDF manipulation, extraction, creation, and forms | ✅ Active |
| **prompt-engineering-patterns** | Advanced prompt engineering techniques | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
| **skill-creator** | Guide for creating new Opencode skills | ✅ Active |
| **systematic-debugging** | Debugging methodology for bugs and test failures | ✅ Active |
| **xlsx** | Spreadsheet creation, editing, and analysis | ✅ Active |

## 🤖 AI Agents

### Chiron - Personal Assistant
### Primary Agents

**Configuration**: `agent/agents.json` + `prompts/chiron.txt`
| Agent | Mode | Purpose |
| ------------------- | ------- | ---------------------------------------------------- |
| **Chiron** | Plan | Read-only analysis, planning, and guidance |
| **Chiron Forge** | Build | Full execution and task completion with safety |

Chiron is a personal AI assistant focused on productivity and task management. Named after the wise centaur from Greek mythology, Chiron provides:
### Subagents (Specialists)

- Task and project management guidance
- Daily and weekly review workflows
- Skill routing based on user intent
- Integration with productivity tools (Obsidian, ntfy, n8n)
| Agent | Domain | Purpose |
| ------------------- | ---------------- | ------------------------------------------ |
| **Hermes** | Communication | Basecamp, Outlook, MS Teams |
| **Athena** | Research | Outline wiki, documentation, knowledge |
| **Apollo** | Private Knowledge| Obsidian vault, personal notes |
| **Calliope** | Writing | Documentation, reports, prose |

**Modes**:
**Configuration**: `agents/agents.json` + `prompts/*.txt`

- **Chiron** (Plan Mode) - Read-only analysis and planning (`prompts/chiron.txt`)
- **Chiron-Forge** (Worker Mode) - Full write access with safety prompts (`prompts/chiron-forge.txt`)
## 🛠️ Development

**Triggers**: Personal productivity requests, task management, reviews, planning
### Environment

## 🛠️ Development Workflow
The repository includes a Nix flake with a development shell. With [direnv](https://direnv.net/) installed, the environment activates automatically:

```bash
cd AGENTS/
# → direnv: loading .envrc
# → 🔧 AGENTS dev shell active — Python 3.13.x, jq-1.x

# All skill script dependencies are now available:
python3 -c "import pypdf, openpyxl, yaml"   # ✔️
pdftoppm -v                                 # ✔️
```

Without direnv, activate manually: `nix develop`

### Quality Gates

Before committing:

1. **Validate skills**: `./scripts/test-skill.sh --validate` or `python3 skill/skill-creator/scripts/quick_validate.py skill/<name>`
1. **Validate skills**: `./scripts/test-skill.sh --validate` or `python3 skills/skill-creator/scripts/quick_validate.py skills/<name>`
2. **Test locally**: `./scripts/test-skill.sh --run` to launch opencode with dev skills
3. **Check formatting**: Ensure YAML frontmatter is valid
4. **Update docs**: Keep README and AGENTS.md in sync

@@ -210,9 +305,10 @@ Before committing:

### Essential Documentation

- **AGENTS.md** - Complete developer guide for AI agents
- **skill/skill-creator/SKILL.md** - Comprehensive skill creation guide
- **skill/skill-creator/references/workflows.md** - Workflow pattern library
- **skill/skill-creator/references/output-patterns.md** - Output formatting patterns
- **skills/skill-creator/SKILL.md** - Comprehensive skill creation guide
- **skills/skill-creator/references/workflows.md** - Workflow pattern library
- **skills/skill-creator/references/output-patterns.md** - Output formatting patterns
- **rules/USAGE.md** - AI coding rules integration guide

### Skill Design Principles

@@ -223,22 +319,26 @@ Before committing:

### Example Skills to Study

- **task-management/** - Full implementation with Obsidian Tasks integration
- **skill-creator/** - Meta-skill with bundled resources
- **reflection/** - Conversation analysis with rating system
- **basecamp/** - MCP server integration with multiple tool categories
- **brainstorming/** - Framework-based ideation with Obsidian markdown save
- **plan-writing/** - Template-driven document generation
- **memory/** - SQLite-based hybrid search implementation
- **excalidraw/** - Diagram generation with JSON templates and Python renderer

## 🔧 Customization

### Modify Agent Behavior

Edit `agent/agents.json` for agent definitions and `prompts/*.txt` for system prompts:
Edit `agents/agents.json` for agent definitions and `prompts/*.txt` for system prompts:

- `agent/agents.json` - Agent names, models, permissions
- `agents/agents.json` - Agent names, models, permissions
- `prompts/chiron.txt` - Chiron (Plan Mode) system prompt
- `prompts/chiron-forge.txt` - Chiron-Forge (Worker Mode) system prompt
- `prompts/chiron-forge.txt` - Chiron Forge (Build Mode) system prompt
- `prompts/hermes.txt` - Hermes (Communication) system prompt
- `prompts/athena.txt` - Athena (Research) system prompt
- `prompts/apollo.txt` - Apollo (Private Knowledge) system prompt
- `prompts/calliope.txt` - Calliope (Writing) system prompt

**Note**: Agent changes require `home-manager switch` to take effect (config is embedded, not symlinked).

@@ -253,7 +353,22 @@ Edit `context/profile.md` to configure:

### Add Custom Commands

Create new command definitions in `command/` directory following the pattern in `command/reflection.md`.
Create new command definitions in `commands/` directory following the pattern in `commands/reflection.md`.

### Add Project Rules

Use the rules system to inject AI coding rules into projects:

```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
  inherit agents;
  languages = [ "python" "typescript" ];
  frameworks = [ "n8n" ];
};
```

See `rules/USAGE.md` for full documentation.

## 🌟 Use Cases

@@ -309,15 +424,14 @@ This repository contains personal configurations and skills. Feel free to use th

## 🔗 Links

- [Opencode](https://opencode.dev) - AI coding assistant
- [Beads](https://github.com/steveyegge/beads) - AI-native issue tracking
- [PARA Method](https://fortelabs.com/blog/para/) - Productivity methodology
- [Obsidian](https://obsidian.md) - Knowledge management platform

## 🙋 Questions?

- Check `AGENTS.md` for detailed developer documentation
- Review existing skills in `skill/` for examples
- See `skill/skill-creator/SKILL.md` for skill creation guide
- Review existing skills in `skills/` for examples
- See `skills/skill-creator/SKILL.md` for skill creation guide

---

**flake.lock** (generated, new file, 27 lines)
@@ -0,0 +1,27 @@
{
  "nodes": {
    "nixpkgs": {
      "locked": {
        "lastModified": 1772479524,
        "narHash": "sha256-u7nCaNiMjqvKpE+uZz9hE7pgXXTmm5yvdtFaqzSzUQI=",
        "owner": "NixOS",
        "repo": "nixpkgs",
        "rev": "4215e62dc2cd3bc705b0a423b9719ff6be378a43",
        "type": "github"
      },
      "original": {
        "owner": "NixOS",
        "ref": "nixpkgs-unstable",
        "repo": "nixpkgs",
        "type": "github"
      }
    },
    "root": {
      "inputs": {
        "nixpkgs": "nixpkgs"
      }
    }
  },
  "root": "root",
  "version": 7
}

**flake.nix** (new file, 68 lines)
@@ -0,0 +1,68 @@
{
  description = "Opencode Agent Skills — development environment & runtime";

  inputs = { nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; };

  outputs = { self, nixpkgs }:
    let
      supportedSystems = [ "x86_64-linux" "aarch64-linux" "aarch64-darwin" ];
      forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
    in {
      # Composable runtime for project flakes and home-manager.
      # Usage:
      #   home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   devShells.default = pkgs.mkShell {
      #     packages = [ inputs.agents.packages.${system}.skills-runtime ];
      #   };
      packages = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};

          pythonEnv = pkgs.python3.withPackages (ps:
            with ps; [
              # skill-creator: quick_validate.py
              pyyaml

              # xlsx: recalc.py
              openpyxl

              # prompt-engineering-patterns: optimize-prompt.py
              numpy

              # pdf: multiple scripts
              pypdf
              pillow # PIL
              pdf2image

              # excalidraw: render_excalidraw.py
              playwright
            ]);
        in {
          skills-runtime = pkgs.buildEnv {
            name = "opencode-skills-runtime";
            paths = [
              pythonEnv
              pkgs.poppler-utils # pdf: pdftoppm/pdfinfo
              pkgs.jq # shell scripts
              pkgs.playwright-driver.browsers # excalidraw: chromium for rendering
            ];
          };
        });

      # Dev shell for working on this repo (wraps skills-runtime).
      devShells = forAllSystems (system:
        let
          pkgs = nixpkgs.legacyPackages.${system};
        in {
          default = pkgs.mkShell {
            packages = [ self.packages.${system}.skills-runtime ];

            env.PLAYWRIGHT_BROWSERS_PATH = "${pkgs.playwright-driver.browsers}";

            shellHook = ''
              echo "🔧 AGENTS dev shell active — Python $(python3 --version 2>&1 | cut -d' ' -f2), $(jq --version)"
            '';
          };
        });
    };
}

**rules/USAGE.md** (new file, 62 lines)
@@ -0,0 +1,62 @@
# Opencode Rules Usage

Add AI coding rules to your project via `mkOpencodeRules`.

## flake.nix Setup

```nix
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    m3ta-nixpkgs.url = "git+https://code.m3ta.dev/m3tam3re/nixpkgs";
    agents = {
      url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
      flake = false;
    };
  };

  outputs = { self, nixpkgs, m3ta-nixpkgs, agents, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
      m3taLib = m3ta-nixpkgs.lib.${system};
    in {
      devShells.${system}.default = let
        rules = m3taLib.opencode-rules.mkOpencodeRules {
          inherit agents;
          languages = [ "python" "typescript" ];
          frameworks = [ "n8n" ];
        };
      in pkgs.mkShell {
        shellHook = rules.shellHook;
      };
    };
}
```

## Parameters

- `agents` (required): Path to AGENTS repo flake input
- `languages` (optional): List of language names (e.g., `["python" "typescript"]`)
- `concerns` (optional): Rule categories (default: all standard concerns)
- `frameworks` (optional): List of framework names (e.g., `["n8n" "django"]`)
- `extraInstructions` (optional): Additional instruction file paths

## .gitignore

Add to your project's `.gitignore`:

```
.opencode-rules
opencode.json
```

## Project Overrides

Create `AGENTS.md` in your project root to override central rules. OpenCode applies project-level rules with precedence over central ones.

## Updating Rules

When central rules are updated:

```bash
nix flake update agents
```

**rules/concerns/coding-style.md** (new file, 163 lines)
@@ -0,0 +1,163 @@
|
||||
# Coding Style
|
||||
|
||||
## Critical Rules (MUST follow)
|
||||
|
||||
Always prioritize readability over cleverness. Never write code that requires mental gymnastics to understand.
|
||||
Always fail fast and explicitly. Never silently swallow errors or hide exceptions.
|
||||
Always keep functions under 20 lines. Never create monolithic functions that do multiple things.
|
||||
Always validate inputs at function boundaries. Never trust external data implicitly.
|
||||
|
||||
## Formatting
|
||||
|
||||
Prefer consistent indentation throughout the codebase. Never mix tabs and spaces.
|
||||
Prefer meaningful variable names over short abbreviations. Never use single letters except for loop counters.
|
||||
|
||||
### Correct:
|
||||
```lang
|
||||
const maxRetryAttempts = 3;
|
||||
const connectionTimeout = 5000;
|
||||
|
||||
for (let attempt = 1; attempt <= maxRetryAttempts; attempt++) {
|
||||
// process attempt
|
||||
}
|
||||
```
|
||||
|
||||
### Incorrect:
|
||||
```lang
|
||||
const m = 3;
|
||||
const t = 5000;
|
||||
|
||||
for (let i = 1; i <= m; i++) {
|
||||
// process attempt
|
||||
}
|
||||
```
|
||||
|
||||
## Patterns and Anti-Patterns
|
||||
|
||||
Never repeat yourself. Always extract duplicated logic into reusable functions.
|
||||
Prefer composition over inheritance. Never create deep inheritance hierarchies.
|
||||
Always use guard clauses to reduce nesting. Never write arrow-shaped code.
|
||||
|
||||
### Correct:
|
||||
```lang
|
||||
def process_user(user):
|
||||
if not user:
|
||||
return None
|
||||
if not user.is_active:
|
||||
return None
|
||||
return user.calculate_score()
|
||||
```
|
||||
|
||||
### Incorrect:
|
||||
```lang
|
||||
def process_user(user):
|
||||
if user:
|
||||
if user.is_active:
|
||||
return user.calculate_score()
|
||||
else:
|
||||
return None
|
||||
else:
|
||||
return None
|
||||
```
|
||||
|
||||
## Error Handling

Always handle specific exceptions. Never use broad catch-all exception handlers.

Always log error context, not just the error message. Never let errors vanish without a trace.

### Correct:

```python
try:
    data = fetch_resource(url)
    return parse_data(data)
except NetworkError as e:
    log_error(f"Network failed for {url}: {e}")
    raise
except ParseError as e:
    log_error(f"Parse failed for {url}: {e}")
    return fallback_data
```

### Incorrect:

```python
try:
    data = fetch_resource(url)
    return parse_data(data)
except Exception:
    pass
```
## Type Safety

Always use type annotations where supported. Never rely on implicit type coercion.

Prefer explicit type checks over duck typing for public APIs. Never assume type behavior.

### Correct:

```typescript
function calculateTotal(price: number, quantity: number): number {
  return price * quantity;
}
```

### Incorrect:

```typescript
function calculateTotal(price, quantity) {
  return price * quantity;
}
```
## Function Design

Always write pure functions when possible. Never mutate arguments unless required.

Always limit function parameters to 3 or fewer. Never pass objects to hide parameter complexity.

### Correct:

```python
def create_user(name: str, email: str) -> User:
    return User(name=name, email=email, created_at=now())
```

### Incorrect:

```python
def create_user(config: dict) -> User:
    return User(
        name=config['name'],
        email=config['email'],
        created_at=config['timestamp']
    )
```
## SOLID Principles

Never let classes depend on concrete implementations. Always depend on abstractions.

Always ensure classes are open for extension but closed for modification. Never change working code to add features.

Prefer many small interfaces over one large interface. Never force clients to depend on methods they don't use.

### Correct:

```typescript
interface MessageSender {
  send(message: Message): void;
}

class EmailSender implements MessageSender {
  send(message: Message): void {
    // implementation
  }
}

class NotificationService {
  constructor(private sender: MessageSender) {}
}
```

### Incorrect:

```typescript
class NotificationService {
  sendEmail(message: Message): void { }
  sendSMS(message: Message): void { }
  sendPush(message: Message): void { }
}
```
## Critical Rules (REPEAT)

Always write self-documenting code. Never rely on comments to explain complex logic.

Always refactor when you see code smells. Never let technical debt accumulate.

Always test edge cases explicitly. Never assume happy-path-only behavior.

Never commit commented-out code. Always remove it or restore it.
---

File: rules/concerns/documentation.md (new file, 149 lines)
# Documentation Rules

## When to Document

**Document public APIs**. Every public function, class, method, and module needs documentation. Users need to know how to use your code.

**Document complex logic**. Algorithms, state machines, and non-obvious implementations need explanations. Future readers will thank you.

**Document business rules**. Encode domain knowledge directly in comments. Don't make anyone reverse-engineer requirements from code.

**Document trade-offs**. When you choose between alternatives, explain why. Help future maintainers understand the decision context.

**Do NOT document obvious code**. Comments like `// get user` add noise. Delete them.
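As a minimal sketch of documenting a trade-off in place, assuming a hypothetical single-process app (the cache and names here are invented for illustration):

```python
# Trade-off: a plain dict cache is simpler than Redis and sufficient for a
# single-process app; revisit this choice if we ever run multiple workers.
_config_cache = {}

def get_config(key, loader):
    """Return a cached config value, loading it on first access."""
    if key not in _config_cache:
        _config_cache[key] = loader(key)
    return _config_cache[key]
```

The comment records why the simpler option was chosen and what would trigger revisiting it.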
## Docstring Formats

### Python (Google Style)

```python
def calculate_price(quantity: int, unit_price: float, discount: float = 0.0) -> float:
    """Calculate total price after discount.

    Args:
        quantity: Number of items ordered.
        unit_price: Price per item in USD.
        discount: Decimal discount rate (0.0 to 1.0).

    Returns:
        Final price in USD.

    Raises:
        ValueError: If quantity is negative.
    """
```

### JavaScript/TypeScript (JSDoc)

```javascript
/**
 * Validates user input against security rules.
 * @param {string} input - Raw user input from form.
 * @param {Object} rules - Validation constraints.
 * @param {number} rules.maxLength - Maximum allowed length.
 * @returns {boolean} True if input passes all rules.
 * @throws {ValidationError} If input violates security constraints.
 */
function validateInput(input, rules) {
  // ...
}
```

### Bash

```bash
#!/usr/bin/env bash
# Deploy application to production environment.
#
# Usage: ./deploy.sh [environment]
#
# Args:
#   environment: Target environment (staging|production). Default: staging.
#
# Exits:
#   0 on success, 1 on deployment failure.
```
## Inline Comments: WHY Not WHAT

**Incorrect:**
```python
# Iterate through all users
for user in users:
    # Check if user is active
    if user.active:
        # Increment counter
        count += 1
```

**Correct:**
```python
# Count only active users to calculate monthly revenue
for user in users:
    if user.active:
        count += 1
```

**Incorrect:**
```javascript
// Set timeout to 5000
setTimeout(() => {
  // Show error message
  alert('Error');
}, 5000);
```

**Correct:**
```javascript
// 5000ms delay prevents duplicate alerts during rapid retries
setTimeout(() => {
  alert('Error');
}, 5000);
```

**Incorrect:**
```bash
# Remove temporary files
rm -rf /tmp/app/*
```

**Correct:**
```bash
# Clear temp directory before batch import to prevent partial state
rm -rf /tmp/app/*
```

**Rule:** Describe the intent and context. Never describe what the code obviously does.
## README Standards

Every project needs a README at the top level.

**Required sections:**

1. **What it does** - One-sentence summary
2. **Installation** - Setup commands
3. **Usage** - Basic example
4. **Configuration** - Environment variables and settings
5. **Contributing** - How to contribute

**Example structure:**

````markdown
# Project Name

One-line description of what this project does.

## Installation

```bash
npm install
```

## Usage

```bash
npm start
```

## Configuration

Create `.env` file:

```
API_KEY=your_key_here
```

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md).
````

**Keep READMEs focused**. Link to separate docs for complex topics. Don't make the README a tutorial.
---

File: rules/concerns/git-workflow.md (new file, 118 lines)
# Git Workflow Rules

## Conventional Commits

Format: `<type>(<scope>): <subject>`

### Commit Types

- **feat**: New feature
  - `feat(auth): add OAuth2 login flow`
  - `feat(api): expose user endpoints`
- **fix**: Bug fix
  - `fix(payment): resolve timeout on Stripe calls`
  - `fix(ui): button not clickable on mobile`
- **refactor**: Code refactoring (no behavior change)
  - `refactor(utils): extract date helpers`
  - `refactor(api): simplify error handling`
- **docs**: Documentation only
  - `docs(readme): update installation steps`
  - `docs(api): add endpoint examples`
- **chore**: Maintenance tasks
  - `chore(deps): update Node to 20`
  - `chore(ci): add GitHub Actions workflow`
- **test**: Tests only
  - `test(auth): add unit tests for login`
  - `test(e2e): add checkout flow tests`
- **style**: Formatting, no logic change
  - `style: sort imports alphabetically`

### Commit Rules

- Subject max 72 chars
- Imperative mood ("add", not "added")
- No period at end
- Reference issues: `Closes #123`
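The rules above can be checked mechanically. This is a rough sketch only; the regex and helper name are invented here, and real projects typically enforce this with commitlint or a `commit-msg` hook instead:

```python
import re

# Matches <type>(<scope>): <subject> using the types listed above.
COMMIT_RE = re.compile(
    r"^(feat|fix|refactor|docs|chore|test|style)"  # type
    r"(\([a-z0-9-]+\))?"                           # optional (scope)
    r": \S.*$"                                     # non-empty subject
)

def is_valid_subject(line: str) -> bool:
    """Apply the commit rules: format, max length, no trailing period."""
    if len(line) > 72 or line.endswith("."):
        return False
    return COMMIT_RE.match(line) is not None
```

For example, `feat(auth): add OAuth2 login flow` passes, while `added stuff` and `feat: add thing.` fail.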
## Branch Naming

Pattern: `<type>/<short-description>`

### Branch Types

- `feature/add-user-dashboard`
- `feature/enable-dark-mode`
- `fix/login-redirect-loop`
- `fix/payment-timeout-error`
- `refactor/extract-user-service`
- `refactor/simplify-auth-flow`
- `hotfix/security-vulnerability`

### Branch Rules

- Lowercase and hyphens
- Max 50 chars
- Delete after merge
## Pull Requests

### PR Title

Follow Conventional Commit format:

- `feat: add user dashboard`
- `fix: resolve login redirect loop`

### PR Description

```markdown
## What
Brief description

## Why
Reason for change

## How
Implementation approach

## Testing
Steps performed

## Checklist
- [ ] Tests pass
- [ ] Code reviewed
- [ ] Docs updated
```
## Merge Strategy

### Squash Merge

- Many small commits
- One cohesive feature
- Clean history

### Merge Commit

- Preserve commit history
- Distinct milestones
- Detailed history preferred

### When to Rebase

- Before opening PR
- Resolving conflicts
- Keeping current with main
## General Rules

- Pull latest from main before starting
- Write atomic commits
- Run tests before pushing
- Request peer review before merge
- Never force push to main/master
---

File: rules/concerns/naming.md (new file, 105 lines)
# Naming Conventions

Use consistent naming across all code. Follow language-specific conventions.

## Language Reference

| Type | Python | TypeScript | Nix | Shell |
|------|--------|------------|-----|-------|
| Variables | snake_case | camelCase | camelCase | UPPER_SNAKE |
| Functions | snake_case | camelCase | camelCase | lower_case |
| Classes | PascalCase | PascalCase | - | - |
| Constants | UPPER_SNAKE | UPPER_SNAKE | camelCase | UPPER_SNAKE |
| Files | snake_case | camelCase | hyphen-case | hyphen-case |
| Modules | snake_case | camelCase | - | - |
## General Rules

**Files**: Use hyphen-case for documentation, snake_case for Python, camelCase for TypeScript. Names should describe content.

**Variables**: Use descriptive names. Avoid single letters except loop counters. No Hungarian notation.

**Functions**: Use a verb-noun pattern. The name describes what it does, not how it does it.

**Classes**: Use PascalCase with descriptive nouns. Avoid abbreviations.

**Constants**: Use UPPER_SNAKE with descriptive names. Group related constants.
## Examples

Python:
```python
# Variables
user_name = "alice"
is_authenticated = True

# Functions
def get_user_data(user_id):
    pass

# Classes
class UserProfile:
    pass

# Constants
MAX_RETRIES = 3
API_ENDPOINT = "https://api.example.com"
```

TypeScript:
```typescript
// Variables
const userName = "alice";
const isAuthenticated = true;

// Functions
function getUserData(userId: string): User | null {
  return null;
}

// Classes
class UserProfile {
  private name: string;
}

// Constants
const MAX_RETRIES = 3;
const API_ENDPOINT = "https://api.example.com";
```

Nix:
```nix
# Variables
let
  userName = "alice";
  isAuthenticated = true;
in
# ...
```

Shell:
```bash
# Variables
USER_NAME="alice"
IS_AUTHENTICATED=true

# Functions
get_user_data() {
  echo "Getting data"
}

# Constants
MAX_RETRIES=3
API_ENDPOINT="https://api.example.com"
```
## File Naming

Use these patterns consistently. No exceptions.

- Skills: `hyphen-case`
- Python: `snake_case.py`
- TypeScript: `camelCase.ts` or `hyphen-case.ts`
- Nix: `hyphen-case.nix`
- Shell: `hyphen-case.sh`
- Markdown: `UPPERCASE.md` or `sentence-case.md`
---

File: rules/concerns/project-structure.md (new file, 82 lines)
# Project Structure

## Python

Use the src layout for all projects. Place application code in `src/<project>/`, tests in `tests/`.

```
project/
├── src/myproject/
│   ├── __init__.py
│   ├── main.py          # Entry point
│   └── core/
│       └── module.py
├── tests/
│   ├── __init__.py
│   └── test_module.py
├── pyproject.toml       # Config
├── README.md
└── .gitignore
```

**Rules:**
- One module per file
- `__init__.py` in every package
- Entry point in `src/myproject/main.py`
- Config in root: `pyproject.toml`, `requirements.txt`
## TypeScript

Use `src/` for source, `dist/` for build output.

```
project/
├── src/
│   ├── index.ts         # Entry point
│   ├── core/
│   │   └── module.ts
│   └── types.ts
├── tests/
│   └── module.test.ts
├── package.json         # Config
├── tsconfig.json
└── README.md
```

**Rules:**
- One module per file
- Index exports from `src/index.ts`
- Entry point in `src/index.ts`
- Config in root: `package.json`, `tsconfig.json`
## Nix

Use `modules/` for NixOS modules, `pkgs/` for packages.

```
nix-config/
├── modules/
│   ├── default.nix      # Module list
│   └── my-service.nix
├── pkgs/
│   └── my-package/
│       └── default.nix
├── flake.nix            # Entry point
├── flake.lock
└── README.md
```

**Rules:**
- One module per file in `modules/`
- One package per directory in `pkgs/`
- Entry point in `flake.nix`
- Config in root: `flake.nix`, `shell.nix`
## General

- Use kebab-case (hyphens) for directory and file names
- Config files in project root
- Tests separate from source
- Docs in root: README.md, CHANGELOG.md
- Hidden configs: .env, .gitignore
---

File: rules/concerns/tdd.md (new file, 476 lines)
# Test-Driven Development (Strict Enforcement)

## Critical Rules (MUST follow)

**NEVER write production code without a failing test first.**

**ALWAYS follow the red-green-refactor cycle. No exceptions.**

**NEVER skip the refactor step. Code quality is mandatory.**

**ALWAYS commit after green; never commit red tests.**

---
## The Red-Green-Refactor Cycle

### Phase 1: Red (Write Failing Test)

The test MUST fail for the right reason—not a syntax error or missing import.

```python
# CORRECT: Test fails because the behavior doesn't exist yet
def test_calculate_discount_for_premium_members():
    user = User(tier="premium")
    cart = Cart(items=[Item(price=100)])

    discount = calculate_discount(user, cart)

    assert discount == 10  # Fails: calculate_discount not implemented


# INCORRECT: Test fails for the wrong reason (will pass accidentally)
def test_calculate_discount():
    discount = calculate_discount()  # Fails: missing required args
    assert discount is not None
```

**Red Phase Checklist:**
- [ ] Test describes ONE behavior
- [ ] Test name clearly states expected outcome
- [ ] Test fails for the intended reason
- [ ] Error message is meaningful

### Phase 2: Green (Write Minimum Code)

Write the MINIMUM code to make the test pass. Do not implement future features.

```python
# CORRECT: Minimum implementation
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0


# INCORRECT: Over-engineering for future needs
def calculate_discount(user, cart):
    discounts = {
        "premium": 10,
        "gold": 15,    # Not tested
        "silver": 5,   # Not tested
        "basic": 0     # Not tested
    }
    return discounts.get(user.tier, 0)
```

**Green Phase Checklist:**
- [ ] Code makes the test pass
- [ ] No extra functionality added
- [ ] Code may be ugly (refactor comes next)
- [ ] All existing tests still pass

### Phase 3: Refactor (Improve Code Quality)

Refactor ONLY when all tests are green. Make small, incremental changes.

```python
# BEFORE (green but messy)
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0


# AFTER (refactored)
DISCOUNT_RATES = {"premium": 0.10}


def calculate_discount(user, cart):
    rate = DISCOUNT_RATES.get(user.tier, 0)
    return int(cart.total * rate)
```

**Refactor Phase Checklist:**
- [ ] All tests still pass after each change
- [ ] One refactoring at a time
- [ ] Commit if a significant improvement was made
- [ ] No behavior changes (tests remain green)

---
## Enforcement Rules

### 1. Test-First Always

```python
# WRONG: Code first, test later
class PaymentProcessor:
    def process(self, amount):
        return self.gateway.charge(amount)

# Then write the test... (TOO LATE!)


# CORRECT: Test first
def test_process_payment_charges_gateway():
    mock_gateway = MockGateway()
    processor = PaymentProcessor(gateway=mock_gateway)

    processor.process(100)

    assert mock_gateway.charged_amount == 100
```

### 2. No Commented-Out Tests

```python
# WRONG: Commented test hides failing behavior
# def test_refund_processing():
#     # TODO: fix this later
#     assert False


# CORRECT: Use skip with a reason
@pytest.mark.skip(reason="Refund flow not yet implemented")
def test_refund_processing():
    assert False
```

### 3. Commit Hygiene

```bash
# WRONG: Committing with failing tests
git commit -m "WIP: adding payment"
# Tests fail in CI

# CORRECT: Only commit green
git commit -m "Add payment processing"
# All tests pass locally and in CI
```

---
## AI-Assisted TDD Patterns

### Pattern 1: Explicit Test Request

When working with AI assistants, request tests explicitly:

```
CORRECT PROMPT:
"Write a failing test for calculating user discounts based on tier.
Then implement the minimum code to make it pass."

INCORRECT PROMPT:
"Implement a discount calculator with tier support."
```

### Pattern 2: Verification Request

After AI generates code, verify test coverage:

```
PROMPT:
"The code you wrote for calculate_discount is missing tests.
First, show me a failing test for the edge case where the cart is empty.
Then make it pass with minimum code."
```

### Pattern 3: Refactor Request

Request refactoring as a separate step:

```
CORRECT:
"Refactor calculate_discount to use a lookup table.
Run tests after each change."

INCORRECT:
"Refactor and add new features at the same time."
```

### Pattern 4: Red-Green-Refactor in Prompts

Structure AI prompts to follow the cycle:

```
PROMPT TEMPLATE:
"Phase 1 (Red): Write a test that [describes behavior].
The test should fail because [reason].
Show me the failing test output.

Phase 2 (Green): Write the minimum code to pass this test.
No extra features.

Phase 3 (Refactor): Review the code. Suggest improvements.
I'll approve before you apply changes."
```

### AI Anti-Patterns to Avoid

```python
# ANTI-PATTERN: AI generates code without tests
# User: "Create a user authentication system"
# AI generates 200 lines of code with no tests

# CORRECT APPROACH:
# User: "Let's build authentication with TDD.
#        First, write a failing test for successful login."


# ANTI-PATTERN: AI generates tests after implementation
# User: "Write tests for this code"
# AI writes tests that pass trivially (not TDD)

# CORRECT APPROACH:
# User: "I need a new feature. Write the failing test first."
```

---
## Legacy Code Strategy

### 1. Characterization Tests First

Before modifying legacy code, capture existing behavior:

```python
def test_legacy_calculate_price_characterization():
    """
    This test documents existing behavior, not desired behavior.
    Do not change expected values without understanding the impact.
    """
    # Given: Current production inputs
    order = Order(items=[Item(price=100, quantity=2)])

    # When: Execute legacy code
    result = legacy_calculate_price(order)

    # Then: Capture ACTUAL output (even if wrong)
    assert result == 215  # Includes mystery 7.5% surcharge
```

### 2. Strangler Fig Pattern

```python
# Step 1: Write a test for the new behavior
def test_calculate_price_with_new_algorithm():
    order = Order(items=[Item(price=100, quantity=2)])
    result = calculate_price_v2(order)
    assert result == 200  # No mystery surcharge

# Step 2: Implement the new code with TDD
def calculate_price_v2(order):
    return sum(item.price * item.quantity for item in order.items)

# Step 3: Route new requests to the new code
def calculate_price(order):
    if order.use_new_pricing:
        return calculate_price_v2(order)
    return legacy_calculate_price(order)

# Step 4: Gradually migrate, removing the legacy path
```

### 3. Safe Refactoring Sequence

```python
# 1. Add characterization tests
# 2. Extract method (tests stay green)
# 3. Add unit tests for the extracted method
# 4. Refactor the extracted method with TDD
# 5. Inline or delete the old method
```

---
## Integration Test TDD

### Outside-In (London School)

```python
# 1. Write an acceptance test (fails end-to-end)
def test_user_can_complete_purchase():
    user = create_user()
    add_item_to_cart(user, item)

    result = complete_purchase(user)

    assert result.status == "success"
    assert user.has_receipt()

# 2. Drop down to a unit test for the first component
def test_cart_calculates_total():
    cart = Cart()
    cart.add(Item(price=100))

    assert cart.total == 100

# 3. Implement with TDD, working inward
```

### Contract Testing

```python
# Provider contract test
def test_payment_api_contract():
    """External services must match this contract."""
    response = client.post("/payments", json={
        "amount": 100,
        "currency": "USD"
    })

    assert response.status_code == 201
    assert "transaction_id" in response.json()

# Consumer contract test
def test_payment_gateway_contract():
    """We expect the gateway to return transaction IDs."""
    mock_gateway = MockPaymentGateway()
    mock_gateway.expect_charge(amount=100).and_return(
        transaction_id="tx_123"
    )

    result = process_payment(mock_gateway, amount=100)

    assert result.transaction_id == "tx_123"
```

---
## Refactoring Rules

### Rule 1: Refactor Only When Green

```python
# WRONG: Refactoring with a failing test
def test_new_feature():
    assert False  # Failing

def existing_code():
    # Refactoring here is DANGEROUS
    pass


# CORRECT: All tests pass before refactoring
def existing_code():
    # Safe to refactor now
    pass
```

### Rule 2: One Refactoring at a Time

```python
# WRONG: Multiple refactorings at once
def process_order(order):
    # Changed: variable name
    # Changed: extracted method
    # Changed: added caching
    # Which broke it? Who knows.
    pass


# CORRECT: One change, test, commit
# Commit 1: Rename variable
# Commit 2: Extract method
# Commit 3: Add caching
```

### Rule 3: Baby Steps

```python
# WRONG: Large refactoring
# Before: 500-line monolith
# After: 10 new classes
# Risk: Too high

# CORRECT: Extract one method at a time
# Step 1: Extract calculate_total (commit)
# Step 2: Extract validate_items (commit)
# Step 3: Extract apply_discounts (commit)
```

---
## Test Quality Gates

### Pre-Commit Hooks

```bash
#!/bin/bash
# .git/hooks/pre-commit

# Run fast unit tests
uv run pytest tests/unit -x -q || exit 1

# Check the test coverage threshold
uv run pytest --cov=src --cov-fail-under=80 || exit 1
```

### CI/CD Requirements

```yaml
# .github/workflows/test.yml
- name: Run Tests
  run: |
    pytest --cov=src --cov-report=xml --cov-fail-under=80

- name: Check Test Quality
  run: |
    # Fail if new code lacks tests
    diff-cover coverage.xml --fail-under=80
```

### Code Review Checklist

```markdown
## TDD Verification
- [ ] New code has corresponding tests
- [ ] Tests were written FIRST (check commit order)
- [ ] Each test tests ONE behavior
- [ ] Test names describe the scenario
- [ ] No commented-out or skipped tests without a reason
- [ ] Coverage maintained or improved
```

---
## When TDD Is Not Appropriate

TDD may be skipped ONLY for:

### 1. Exploratory Prototypes

```python
# prototype.py - Delete after learning
# No tests needed for throwaway exploration
def quick_test_api():
    response = requests.get("https://api.example.com")
    print(response.json())
```

### 2. One-Time Scripts

```python
# migrate_data.py - Run once, discard
# Tests would cost more than the value provided
```

### 3. Trivial Changes

```python
# Typo fix or comment change
# No behavior change = no new test needed
```

**If unsure, write the test.**

---
## Quick Reference

| Phase | Rule | Check |
|----------|------------------------------------|-----------------------------|
| Red | Write failing test first | Test fails for right reason |
| Green | Write minimum code to pass | No extra features |
| Refactor | Improve code while tests are green | Run tests after each change |
| Commit | Only commit green tests | All tests pass in CI |

## TDD Mantra

```
Red. Green. Refactor. Commit. Repeat.

No test = No code.
No green = No commit.
No refactor = Technical debt.
```
---

File: rules/concerns/testing.md (new file, 134 lines)
# Testing Rules

## Arrange-Act-Assert Pattern

Structure every test in three distinct phases:

```python
# Arrange: Set up the test data and conditions
user = User(name="Alice", role="admin")
session = create_test_session(user.id)

# Act: Execute the behavior under test
result = grant_permission(session, "read_documents")

# Assert: Verify the expected outcome
assert result.granted is True
assert result.permissions == ["read_documents"]
```

Never mix phases. Comment each phase clearly for complex setups. Keep the Act phase to one line if possible.
## Behavior vs Implementation Testing

Test behavior, not implementation details:

```python
# GOOD: Tests the observable behavior
def test_user_can_login():
    response = login("alice@example.com", "password123")
    assert response.status_code == 200
    assert "session_token" in response.cookies

# BAD: Tests internal implementation
def test_login_sets_database_flag():
    login("alice@example.com", "password123")
    user = User.get(email="alice@example.com")
    assert user._logged_in_flag is True  # Private field
```

Focus on inputs and outputs. Test public contracts. Refactor internals freely without breaking tests.
## Mocking Philosophy
|
||||
|
||||
Mock external dependencies, not internal code:
|
||||
|
||||
```python
|
||||
# GOOD: Mock external services
|
||||
@patch("requests.post")
|
||||
def test_sends_notification_to_slack(mock_post):
|
||||
send_notification("Build complete!")
|
||||
mock_post.assert_called_once_with(
|
||||
"https://slack.com/api/chat.postMessage",
|
||||
json={"text": "Build complete!"}
|
||||
)
|
||||
|
||||
# BAD: Mock internal methods
|
||||
@patch("NotificationService._format_message")
|
||||
def test_notification_formatting(mock_format):
|
||||
# Don't mock private methods
|
||||
send_notification("Build complete!")
|
||||
```
|
||||
|
||||
Mock when:

- Dependency is slow (database, network, file system)
- Dependency is unreliable (external APIs)
- Dependency is expensive (third-party services)

Don't mock when:

- Testing the dependency itself
- The dependency is fast and stable
- The mock becomes more complex than the real implementation

## Coverage Expectations

Write tests for:

- Critical business logic (aim for 90%+)
- Edge cases and error paths (aim for 80%+)
- Public APIs and contracts (aim for 100%)

Don't obsess over:

- Trivial getters/setters
- Generated code
- One-line wrappers

Coverage is a floor, not a ceiling. A test suite at 100% coverage that doesn't verify behavior is worthless.

## Test-Driven Development

Follow the red-green-refactor cycle:

1. Red: Write a failing test for the new behavior
2. Green: Write the minimum code to pass
3. Refactor: Improve the code while tests stay green

Write tests first for new features. Write tests after for bug fixes. Never refactor without tests.
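A minimal sketch of one turn of the cycle (the `slugify` function and its spec are hypothetical):

```python
import re

# Red: this test fails first, because slugify does not exist yet.
# Green: the implementation below is the minimum that makes it pass.
def slugify(title: str) -> str:
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

def test_slugify_lowercases_and_joins_with_hyphens():
    assert slugify("Hello, World!") == "hello-world"

test_slugify_lowercases_and_joins_with_hyphens()
# Refactor: tidy internals (e.g. precompile the regex) while this stays green.
```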

## Test Organization

Group tests by feature or behavior, not by file structure. Name tests to describe the scenario:

```python
class TestUserAuthentication:
    def test_valid_credentials_succeeds(self):
        pass

    def test_invalid_credentials_fails(self):
        pass

    def test_locked_account_fails(self):
        pass
```

Each test should stand alone. Avoid shared state between tests. Use fixtures or setup methods to reduce duplication.
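For example, a pytest fixture can hand each test a fresh object instead of shared state (the `Account` class is a hypothetical illustration):

```python
import pytest

class Account:
    def __init__(self, balance: int = 0):
        self.balance = balance

    def deposit(self, amount: int) -> None:
        self.balance += amount

@pytest.fixture
def account():
    # A new Account per test: no state leaks between tests
    return Account(balance=100)

def test_deposit_increases_balance(account):
    account.deposit(50)
    assert account.balance == 150

def test_fresh_account_per_test(account):
    # Unaffected by the deposit in the other test
    assert account.balance == 100
```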

## Test Data

Use realistic test data that reflects production scenarios:

```python
# GOOD: Realistic values
user = User(
    email="alice@example.com",
    name="Alice Smith",
    age=28
)

# BAD: Placeholder values
user = User(
    email="test@test.com",
    name="Test User",
    age=999
)
```

Avoid magic strings and numbers. Use named constants for expected values that change often, so updates happen in one place.
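A small sketch of that (constant names are illustrative):

```python
# Named constants: one place to update when the business rule changes
MAX_LOGIN_ATTEMPTS = 3

def remaining_attempts(used: int) -> int:
    return MAX_LOGIN_ATTEMPTS - used

# The assertion reads as intent, not as an unexplained literal
assert remaining_attempts(1) == MAX_LOGIN_ATTEMPTS - 1
```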

rules/frameworks/n8n.md (new file, 42 lines)

# n8n Workflow Automation Rules

## Workflow Design
- Start with a clear trigger: Webhook, Schedule, or Event source
- Keep workflows under 20 nodes for maintainability
- Group related logic with sub-workflows
- Use the "Switch" node for conditional branching
- Add "Wait" nodes between rate-limited API calls

## Node Naming
- Use verb-based names: `Fetch Users`, `Transform Data`, `Send Email`
- Prefix data nodes: `Get_`, `Set_`, `Update_`
- Prefix conditionals: `Check_`, `If_`, `When_`
- Prefix actions: `Send_`, `Create_`, `Delete_`
- Add a version suffix to API nodes: `API_v1_Users`

## Error Handling
- Always add an Error Trigger node
- Route errors to a "Notify Failure" branch
- Log error details: `$json.error.message`, `$json.node.name`
- Send alerts on critical failures
- Enable "Continue On Fail" for non-essential nodes

## Data Flow
- Use "Set" nodes to normalize output structure
- Reference previous nodes: `{{ $json.field }}`
- Use the "Merge" node to combine multiple data sources
- Apply a "Code" node for complex transformations
- Clean data before sending it to external APIs

## Credential Security
- Store all secrets in the n8n credentials manager
- Never hardcode API keys or tokens
- Use environment-specific credential sets
- Rotate credentials regularly
- Limit credential scope to the minimum required permissions

## Testing
- Test each node independently with "Execute Node"
- Verify the data structure at each step
- Mock external dependencies during development
- Log workflow execution for debugging

rules/languages/.gitkeep (new file, empty)

rules/languages/nix.md (new file, 129 lines)

# Nix Code Conventions

## Formatting

- Use `alejandra` for formatting
- `camelCase` for variables, `PascalCase` for types
- 2-space indentation (alejandra default)
- No trailing whitespace

## Flake Structure

```nix
{
  description = "Description here";
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
  };
  outputs = { self, nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in {
        packages.default = pkgs.hello;
        devShells.default = pkgs.mkShell {
          buildInputs = [ pkgs.hello ];
        };
      }
    );
}
```

## Module Patterns

Standard module function signature:

```nix
{ config, lib, pkgs, ... }:
{
  options.myService.enable = lib.mkEnableOption "my service";
  config = lib.mkIf config.myService.enable {
    services.myService.enable = true;
  };
}
```

## Conditionals and Merging

- Use `mkIf` for conditional config
- Use `mkMerge` to combine multiple config sets
- Use `mkOptionDefault` for defaults that can be overridden

```nix
config = lib.mkMerge [
  (lib.mkIf cfg.enable { ... })
  (lib.mkIf cfg.extraConfig { ... })
];
```

## Anti-Patterns (AVOID)

### `with pkgs;`
Bad: pollutes the namespace and makes it hard to trace where names come from
```nix
{ pkgs, ... }:
{
  packages = with pkgs; [ vim git ];
}
```

Good: explicit references
```nix
{ pkgs, ... }:
{
  packages = [ pkgs.vim pkgs.git ];
}
```

### `builtins.fetchTarball`
Use flake inputs instead. An unpinned `fetchTarball` is not reproducible.

### Impure operations
Avoid `import <nixpkgs>` in flakes. Always use inputs.

### `builtins.getAttr` / `builtins.hasAttr`
Use `lib.attrByPath` or `lib.optionalAttrs` instead.

## Home Manager Patterns

```nix
{ config, pkgs, lib, ... }:
{
  home.packages = [ pkgs.ripgrep pkgs.fd ];
  programs.zsh.enable = true;
  xdg.configFile."myapp/config".text = "...";
}
```

## Overlays

```nix
{ config, lib, pkgs, ... }:
let
  myOverlay = final: prev: {
    myPackage = prev.myPackage.overrideAttrs (old: { ... });
  };
in
{
  nixpkgs.overlays = [ myOverlay ];
}
```

## Imports and References

- Use flake inputs for dependencies
- `lib` is always available in modules
- Reference packages via `pkgs.packageName`
- Use `callPackage` for complex package definitions

## File Organization

```
flake.nix        # Entry point
modules/         # NixOS modules
  services/
    my-service.nix
overlays/        # Package overrides
  default.nix
```

rules/languages/python.md (new file, 224 lines)

# Python Language Rules

## Toolchain

### Package Management (uv)
```bash
uv init my-project --package
uv add numpy pandas
uv add --dev pytest ruff pyright hypothesis
uv run python -m pytest
uv lock --upgrade-package numpy
```

### Linting & Formatting (ruff)
```toml
[tool.ruff]
line-length = 100
target-version = "py311"

[tool.ruff.lint]
select = ["E", "F", "W", "I", "N", "UP"]
ignore = ["E501"]

[tool.ruff.format]
quote-style = "double"
```

### Type Checking (pyright)
```toml
[tool.pyright]
typeCheckingMode = "strict"
reportMissingTypeStubs = true
reportUnknownMemberType = true
```

### Testing (pytest + hypothesis)
```python
import pytest
from hypothesis import given, strategies as st


@given(st.integers(), st.integers())
def test_addition_commutative(a, b):
    assert a + b == b + a


@pytest.fixture
def user_data():
    return {"name": "Alice", "age": 30}


def test_user_creation(user_data):
    user = User(**user_data)
    assert user.name == "Alice"
```

### Data Validation (Pydantic)
```python
from pydantic import BaseModel, Field, field_validator


class User(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    age: int = Field(ge=0, le=150)
    email: str

    @field_validator('email')
    @classmethod
    def email_must_contain_at(cls, v):
        if '@' not in v:
            raise ValueError('must contain @')
        return v
```

## Idioms

### Comprehensions
```python
# List comprehension
squares = [x**2 for x in range(10) if x % 2 == 0]

# Dict comprehension
word_counts = {word: text.count(word) for word in unique_words}

# Set comprehension
unique_chars = {char for char in text if char.isalpha()}
```

### Context Managers
```python
import time
from contextlib import contextmanager

# Built-in context managers
with open('file.txt', 'r') as f:
    content = f.read()


# Custom context manager
@contextmanager
def timer():
    start = time.time()
    yield
    print(f"Elapsed: {time.time() - start:.2f}s")
```

### Generators
```python
def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b


def read_lines(file_path):
    with open(file_path) as f:
        for line in f:
            yield line.strip()
```

### F-strings
```python
name = "Alice"
age = 30
price = 19.99

# Basic interpolation
msg = f"Name: {name}, Age: {age}"

# Expression evaluation
msg = f"Next year: {age + 1}"

# Format specs
msg = f"Price: ${price:.2f}"
msg = f"Hex: {0xFF:X}"
```

## Anti-Patterns

### Bare Except
```python
# AVOID: Catches all exceptions, including SystemExit
try:
    risky_operation()
except:
    pass

# USE: Catch specific exceptions
try:
    risky_operation()
except ValueError as e:
    log_error(e)
except KeyError as e:
    log_error(e)
```

### Mutable Defaults
```python
# AVOID: Default argument created once, shared across calls
def append_item(item, items=[]):
    items.append(item)
    return items

# USE: None as sentinel
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```

### Global State
```python
# AVOID: Global mutable state
counter = 0

def increment():
    global counter
    counter += 1

# USE: Class-based state
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
```

### Star Imports
```python
# AVOID: Pollutes namespace, unclear origins
from module import *

# USE: Explicit imports
from module import specific_function, MyClass
import module as m
```

## Project Setup

### pyproject.toml Structure
```toml
[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "pydantic>=2.0",
    "httpx>=0.25",
]

[project.optional-dependencies]
dev = ["pytest", "ruff", "pyright", "hypothesis"]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```

### src Layout
```
my-project/
├── pyproject.toml
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        └── utils/
            ├── __init__.py
            └── helpers.py
```

rules/languages/shell.md (new file, 100 lines)

# Shell Scripting Rules

## Shebang

Always use `#!/usr/bin/env bash` for portability. Never hardcode `/bin/bash`.

```bash
#!/usr/bin/env bash
```

## Strict Mode

Enable strict mode in every script.

```bash
#!/usr/bin/env bash
set -euo pipefail
```

- `-e`: Exit on error
- `-u`: Error on unset variables
- `-o pipefail`: Fail a pipeline if any command in it fails (the pipeline's exit status is that of the rightmost failing command)

## Shellcheck

Run shellcheck on all scripts before committing.

```bash
shellcheck script.sh
```

## Quoting

Quote all variable expansions and command substitutions. Use arrays instead of word-splitting strings.

```bash
# Good
"${var}"
files=("file1.txt" "file2.txt")
for f in "${files[@]}"; do
  process "$f"
done

# Bad
$var
files="file1.txt file2.txt"
for f in $files; do
  process $f
done
```

## Functions

Define functions with parentheses and use `local` for variables.

```bash
my_function() {
  local result
  result=$(some_command)
  echo "$result"
}
```

## Command Substitution

Use `$()`, not backticks; it nests cleanly.

```bash
# Good
output=$(ls "$dir")

# Bad
output=`ls $dir`
```

## POSIX Portability

Write POSIX-compliant scripts when targeting `/bin/sh`:

- Use `printf` instead of `echo -e`
- Avoid bashisms such as `[[`, `((`, and `&>` in sh scripts; reserve them for bash

## Error Handling

Use `trap` for cleanup.

```bash
cleanup() {
  rm -f /tmp/lockfile
}
trap cleanup EXIT
```

## Readability

- Use 2-space indentation
- Limit lines to 80 characters
- Add comments for non-obvious logic
- Separate sections with blank lines

rules/languages/typescript.md (new file, 150 lines)

# TypeScript Patterns

## Strict tsconfig

Always enable strict mode and key safety options:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true
  }
}
```

## Discriminated Unions

Use discriminated unions for exhaustive type safety:

```ts
type Result =
  | { success: true; data: string }
  | { success: false; error: Error };

function handleResult(result: Result): string {
  if (result.success) {
    return result.data;
  }
  throw result.error;
}
```

## Branded Types

Prevent type confusion with nominal branding:

```ts
type UserId = string & { readonly __brand: "UserId" };
type Email = string & { readonly __brand: "Email" };

function createUserId(id: string): UserId {
  return id as UserId;
}

function sendEmail(email: Email, userId: UserId) {}
```

## satisfies Operator

Use `satisfies` for type-safe object literal inference:

```ts
const config = {
  port: 3000,
  host: "localhost",
} satisfies {
  port: number;
  host: string;
  debug?: boolean;
};

config.port; // number
config.host; // string
```

## as const Assertions

Freeze literal types with `as const`:

```ts
const routes = {
  home: "/",
  about: "/about",
  contact: "/contact",
} as const;

type Route = typeof routes[keyof typeof routes];
```

## Modern Features

```ts
// Promise.withResolvers()
const { promise, resolve, reject } = Promise.withResolvers<string>();

// Object.groupBy()
const users = [
  { name: "Alice", role: "admin" },
  { name: "Bob", role: "user" },
];
const grouped = Object.groupBy(users, u => u.role);

// await using for async disposables
class Resource implements AsyncDisposable {
  async [Symbol.asyncDispose]() {
    await this.cleanup();
  }
  private async cleanup() { /* release handles */ }
}
async function withResource() {
  await using r = new Resource();
}
```

## Toolchain

Prefer modern tooling:
- Runtime: `bun` or `tsx` (no `tsc` for execution)
- Linting: `biome` (preferred) or `eslint`
- Formatting: `biome` (built-in) or `prettier`

## Anti-Patterns

Avoid these TypeScript patterns:

```ts
// NEVER use as any
const data = response as any;

// NEVER use @ts-ignore
// @ts-ignore
const value = unknownFunction();

// NEVER use ! assertion (non-null)
const element = document.querySelector("#foo")!;

// NEVER use enum (prefer a union)
enum Status { Active, Inactive } // ❌

// Prefer a const object or union
type Status = "Active" | "Inactive"; // ✅
const Status = { Active: "Active", Inactive: "Inactive" } as const; // ✅
```

## Indexed Access Safety

With `noUncheckedIndexedAccess`, handle undefined:

```ts
const arr: string[] = ["a", "b"];
const item = arr[0]; // string | undefined

const item2 = arr.at(0); // string | undefined

const map = new Map<string, number>();
const value = map.get("key"); // number | undefined
```
@@ -1,266 +1,544 @@
---
name: excalidraw
description: "Create Excalidraw diagram JSON files that make visual arguments. Use when: (1) user wants to visualize workflows, architectures, or concepts, (2) creating system diagrams, (3) generating .excalidraw files. Triggers: excalidraw, diagram, visualize, architecture diagram, system diagram."
compatibility: opencode
---

# Excalidraw Diagram Creator

Generate `.excalidraw` JSON files that **argue visually**, not just display information.

## Customization

**All colors and brand-specific styles live in one file:** `references/color-palette.md`. Read it before generating any diagram and use it as the single source of truth for all color choices: shape fills, strokes, text colors, evidence artifact backgrounds, everything.

To make this skill produce diagrams in your own brand style, edit `color-palette.md`. Everything else in this file is universal design methodology and Excalidraw best practices.

---

## Core Philosophy

**Diagrams should ARGUE, not DISPLAY.**

A diagram isn't formatted text. It's a visual argument that shows relationships, causality, and flow that words alone can't express. The shape should BE the meaning.

**The Isomorphism Test**: If you removed all text, would the structure alone communicate the concept? If not, redesign.

**The Education Test**: Could someone learn something concrete from this diagram, or does it just label boxes? A good diagram teaches: it shows actual formats, real event names, concrete examples.

---

## Depth Assessment (Do This First)

Before designing, determine what level of detail this diagram needs:

### Simple/Conceptual Diagrams
Use abstract shapes when:
- Explaining a mental model or philosophy
- The audience doesn't need technical specifics
- The concept IS the abstraction (e.g., "separation of concerns")

### Comprehensive/Technical Diagrams
Use concrete examples when:
- Diagramming a real system, protocol, or architecture
- The diagram will be used to teach or explain (e.g., a YouTube video)
- The audience needs to understand what things actually look like
- You're showing how multiple technologies integrate

**For technical diagrams, you MUST include evidence artifacts** (see below).

---

## Research Mandate (For Technical Diagrams)

**Before drawing anything technical, research the actual specifications.**

If you're diagramming a protocol, API, or framework:
1. Look up the actual JSON/data formats
2. Find the real event names, method names, or API endpoints
3. Understand how the pieces actually connect
4. Use real terminology, not generic placeholders

Bad: "Protocol" → "Frontend"
Good: "AG-UI streams events (RUN_STARTED, STATE_DELTA, A2UI_UPDATE)" → "CopilotKit renders via createA2UIMessageRenderer()"

**Research makes diagrams accurate AND educational.**

---

## Evidence Artifacts

Evidence artifacts are concrete examples that prove your diagram is accurate and help viewers learn. Include them in technical diagrams.

**Types of evidence artifacts** (choose what's relevant to your diagram):

| Artifact Type | When to Use | How to Render |
|---------------|-------------|---------------|
| **Code snippets** | APIs, integrations, implementation details | Dark rectangle + syntax-colored text (see color palette for evidence artifact colors) |
| **Data/JSON examples** | Data formats, schemas, payloads | Dark rectangle + colored text (see color palette) |
| **Event/step sequences** | Protocols, workflows, lifecycles | Timeline pattern (line + dots + labels) |
| **UI mockups** | Showing actual output/results | Nested rectangles mimicking real UI |
| **Real input content** | Showing what goes IN to a system | Rectangle with sample content visible |
| **API/method names** | Real function calls, endpoints | Use actual names from docs, not placeholders |

**Example**: For a diagram about a streaming protocol, you might show:
- The actual event names from the spec (not just "Event 1", "Event 2")
- A code snippet showing how to connect
- What the streamed data actually looks like

**Example**: For a diagram about a data transformation pipeline:
- Show sample input data (actual format, not "Input")
- Show sample output data (actual format, not "Output")
- Show intermediate states if relevant

The key principle: **show what things actually look like**, not just what they're called.

---
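Concretely, a "Data/JSON example" artifact is just ordinary Excalidraw elements: a dark rectangle plus a text element bound to it via `boundElements`/`containerId`. A minimal Python sketch of that element pair (coordinates, sizes, and colors are illustrative, not prescribed by the format):

```python
import json

def json_artifact(artifact_id: str, sample_text: str, x: int, y: int) -> list[dict]:
    """A 'Data/JSON example' evidence artifact: dark rectangle + bound text."""
    return [
        {
            "id": artifact_id,
            "type": "rectangle",
            "x": x, "y": y, "width": 320, "height": 120,
            "backgroundColor": "#1e1e1e",  # dark card, per the color palette file
            # The shape lists its bound label...
            "boundElements": [{"type": "text", "id": f"{artifact_id}-text"}],
        },
        {
            "id": f"{artifact_id}-text",
            "type": "text",
            "x": x + 16, "y": y + 16,
            "text": sample_text,
            # ...and the label points back at its container.
            "containerId": artifact_id,
        },
    ]

payload = '{"type": "STATE_DELTA", "delta": [...]}'
print(json.dumps(json_artifact("evt_payload", payload, 400, 240), indent=2))
```

The same two-element pattern applies to any labeled shape, which is why descriptive IDs pay off when sections reference each other.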

## Multi-Zoom Architecture

Comprehensive diagrams operate at multiple zoom levels simultaneously. Think of it like a map that shows both the country borders AND the street names.

### Level 1: Summary Flow
A simplified overview showing the full pipeline or process at a glance. Often placed at the top or bottom of the diagram.

*Example*: `Input → Processing → Output` or `Client → Server → Database`

### Level 2: Section Boundaries
Labeled regions that group related components. These create visual "rooms" that help viewers understand what belongs together.

*Example*: Grouping by responsibility (Backend / Frontend), by phase (Setup / Execution / Cleanup), or by team (User / System / External)

### Level 3: Detail Inside Sections
Evidence artifacts, code snippets, and concrete examples within each section. This is where the educational value lives.

*Example*: Inside a "Backend" section, you might show the actual API response format, not just a box labeled "API Response"

**For comprehensive diagrams, aim to include all three levels.** The summary gives context, the sections organize, and the details teach.

### Bad vs Good

| Bad (Displaying) | Good (Arguing) |
|------------------|----------------|
| 5 equal boxes with labels | Each concept has a shape that mirrors its behavior |
| Card grid layout | Visual structure matches conceptual structure |
| Icons decorating text | Shapes that ARE the meaning |
| Same container for everything | Distinct visual vocabulary per concept |
| Everything in a box | Free-floating text with selective containers |

### Simple vs Comprehensive (Know Which You Need)

| Simple Diagram | Comprehensive Diagram |
|----------------|----------------------|
| Generic labels: "Input" → "Process" → "Output" | Specific: shows what the input/output actually looks like |
| Named boxes: "API", "Database", "Client" | Named boxes + examples of actual requests/responses |
| "Events" or "Messages" label | Timeline with real event/message names from the spec |
| "UI" or "Dashboard" rectangle | Mockup showing actual UI elements and content |
| ~30 seconds to explain | ~2-3 minutes of teaching content |
| Viewer learns the structure | Viewer learns the structure AND the details |

**Simple diagrams** are fine for abstract concepts, quick overviews, or when the audience already knows the details. **Comprehensive diagrams** are needed for technical architectures, tutorials, educational content, or when you want the diagram itself to teach.

---

## Container vs. Free-Floating Text

**Not every piece of text needs a shape around it.** Default to free-floating text. Add containers only when they serve a purpose.

| Use a Container When... | Use Free-Floating Text When... |
|------------------------|-------------------------------|
| It's the focal point of a section | It's a label or description |
| It needs visual grouping with other elements | It's supporting detail or metadata |
| Arrows need to connect to it | It describes something nearby |
| The shape itself carries meaning (decision diamond, etc.) | It's a section title, subtitle, or annotation |
| It represents a distinct "thing" in the system | |

**Typography as hierarchy**: Use font size, weight, and color to create visual hierarchy without boxes. A 28px title doesn't need a rectangle around it.

**The container test**: For each boxed element, ask "Would this work as free-floating text?" If yes, remove the container.

---

## Design Process (Do This BEFORE Generating JSON)

### Step 0: Assess Depth Required
Before anything else, determine if this needs to be:
- **Simple/Conceptual**: Abstract shapes, labels, relationships (mental models, philosophies)
- **Comprehensive/Technical**: Concrete examples, code snippets, real data (systems, architectures, tutorials)

**If comprehensive**: Do research first. Look up actual specs, formats, event names, APIs.

### Step 1: Understand Deeply
Read the content. For each concept, ask:
- What does this concept **DO**? (not what IS it)
- What relationships exist between concepts?
- What's the core transformation or flow?
- **What would someone need to SEE to understand this?** (not just read about it)

### Step 2: Map Concepts to Patterns
For each concept, find the visual pattern that mirrors its behavior:

| If the concept... | Use this pattern |
|-------------------|------------------|
| Spawns multiple outputs | **Fan-out** (radial arrows from center) |
| Combines inputs into one | **Convergence** (funnel, arrows merging) |
| Has hierarchy/nesting | **Tree** (lines + free-floating text) |
| Is a sequence of steps | **Timeline** (line + dots + free-floating labels) |
| Loops or improves continuously | **Spiral/Cycle** (arrow returning to start) |
| Is an abstract state or context | **Cloud** (overlapping ellipses) |
| Transforms input to output | **Assembly line** (before → process → after) |
| Compares two things | **Side-by-side** (parallel with contrast) |
| Separates into phases | **Gap/Break** (visual separation between sections) |

### Step 3: Ensure Variety
For multi-concept diagrams: **each major concept must use a different visual pattern**. No uniform cards or grids.

### Step 4: Sketch the Flow
Before JSON, mentally trace how the eye moves through the diagram. There should be a clear visual story.

### Step 5: Generate JSON
Only now create the Excalidraw elements. **See below for how to handle large diagrams.**

### Step 6: Render & Validate (MANDATORY)
After generating the JSON, you MUST run the render-view-fix loop until the diagram looks right. This is not optional; see the **Render & Validate** section below for the full process.

---
|
||||
## Large / Comprehensive Diagram Strategy

**For comprehensive or technical diagrams, you MUST build the JSON one section at a time.** Do NOT attempt to generate the entire file in a single pass. This is a hard constraint — output token limits mean a comprehensive diagram easily exceeds capacity in one shot. Even if it didn't, generating everything at once leads to worse quality. Section-by-section is better in every way.

### The Section-by-Section Workflow

**Phase 1: Build each section**

1. **Create the base file** with the JSON wrapper (`type`, `version`, `appState`, `files`) and the first section of elements.
2. **Add one section per edit.** Each section gets its own dedicated pass — take your time with it. Think carefully about the layout, spacing, and how this section connects to what's already there.
3. **Use descriptive string IDs** (e.g., `"trigger_rect"`, `"arrow_fan_left"`) so cross-section references are readable.
4. **Namespace seeds by section** (e.g., section 1 uses 100xxx, section 2 uses 200xxx) to avoid collisions.
5. **Update cross-section bindings** as you go. When a new section's element needs to bind to an element from a previous section (e.g., an arrow connecting sections), edit the earlier element's `boundElements` array at the same time.

**Phase 2: Review the whole**

After all sections are in place, read through the complete JSON and check:

- Are cross-section arrows bound correctly on both ends?
- Is the overall spacing balanced, or are some sections cramped while others have too much whitespace?
- Do IDs and bindings all reference elements that actually exist?

Fix any alignment or binding issues before rendering.

**Phase 3: Render & validate**

Now run the render-view-fix loop from the Render & Validate section. This is where you'll catch visual issues that aren't obvious from JSON — overlaps, clipping, imbalanced composition.

### Section Boundaries

Plan your sections around natural visual groupings from the diagram plan. A typical large diagram might split into:

- **Section 1**: Entry point / trigger
- **Section 2**: First decision or routing
- **Section 3**: Main content (hero section — may be the largest single section)
- **Section 4-N**: Remaining phases, outputs, etc.

Each section should be independently understandable: its elements, internal arrows, and any cross-references to adjacent sections.

### What NOT to Do

- **Don't generate the entire diagram in one response.** You will hit the output token limit and produce truncated, broken JSON. Even if the diagram is small enough to fit, splitting into sections produces better results.
- **Don't write a Python generator script.** The templating and coordinate math seem helpful but introduce a layer of indirection that makes debugging harder. Hand-crafted JSON with descriptive IDs is more maintainable.

---
## Visual Pattern Library

### Fan-Out (One-to-Many)
Central element with arrows radiating to multiple targets. Use for: sources, PRDs, root causes, central hubs.
```
      ○
    ↗
□ →   ○
    ↘
      ○
```

### Convergence (Many-to-One)
Multiple inputs merging through arrows to single output. Use for: aggregation, funnels, synthesis.
```
○ ↘
○ → □
○ ↗
```

### Tree (Hierarchy)
Parent-child branching with connecting lines and free-floating text (no boxes needed). Use for: file systems, org charts, taxonomies.
```
label
├── label
│   ├── label
│   └── label
└── label
```
Use `line` elements for the trunk and branches, free-floating text for labels.

### Spiral/Cycle (Continuous Loop)
Elements in sequence with an arrow returning to the start. Use for: feedback loops, iterative processes, evolution.
```
□ → □
↑   ↓
□ ← □
```

### Cloud (Abstract State)
Overlapping ellipses with varied sizes. Use for: context, memory, conversations, mental states.

### Assembly Line (Transformation)
Input → Process Box → Output with clear before/after. Use for: transformations, processing, conversion.
```
○○○ → [PROCESS] → □□□
chaos              order
```

### Side-by-Side (Comparison)
Two parallel structures with visual contrast. Use for: before/after, options, trade-offs.

### Gap/Break (Separation)
Visual whitespace or a barrier between sections. Use for: phase changes, context resets, boundaries.

### Lines as Structure
Use lines (type: `line`, not arrows) as primary structural elements instead of boxes:
- **Timelines**: Vertical or horizontal line with small dots (10-20px ellipses) at intervals, free-floating labels beside each dot
- **Tree structures**: Vertical trunk line + horizontal branch lines, with free-floating text labels (no boxes needed)
- **Dividers**: Thin dashed lines to separate sections
- **Flow spines**: A central line that elements relate to, rather than connecting boxes

```
Timeline:            Tree:
●─── Label 1         │
│                    ├── item
●─── Label 2         │   ├── sub
│                    │   └── sub
●─── Label 3         └── item
```

Lines plus free-floating text often produce a cleaner result than boxes with contained text.
---

## Shape Meaning

Choose a shape based on what it represents—or use no shape at all:

| Concept Type | Shape | Why |
|--------------|-------|-----|
| Labels, descriptions, details | **none** (free-floating text) | Typography creates hierarchy |
| Section titles, annotations | **none** (free-floating text) | Font size/weight is enough |
| Markers on a timeline | small `ellipse` (10-20px) | Visual anchor, not container |
| Start, trigger, input | `ellipse` | Soft, origin-like |
| End, output, result | `ellipse` | Completion, destination |
| Decision, condition | `diamond` | Classic decision symbol |
| Process, action, step | `rectangle` | Contained action |
| Abstract state, context | overlapping `ellipse` | Fuzzy, cloud-like |
| Hierarchy node | lines + text (no boxes) | Structure through lines |

**Rule**: Default to no container. Add shapes only when they carry meaning. Aim for <30% of text elements to be inside containers.

---

## Color as Meaning

Colors encode information, not decoration. Every color choice should come from `references/color-palette.md` — the semantic shape colors, text hierarchy colors, and evidence artifact colors are all defined there.

**Key principles:**
- Each semantic purpose (start, end, decision, AI, error, etc.) has a specific fill/stroke pair
- Free-floating text uses color for hierarchy (titles, subtitles, details — each at a different level)
- Evidence artifacts (code snippets, JSON examples) use their own dark background + colored text scheme
- Always pair a darker stroke with a lighter fill for contrast

**Do not invent new colors.** If a concept doesn't fit an existing semantic category, use Primary/Neutral or Secondary.
## Modern Aesthetics

For clean, professional diagrams:

### Roughness
- `roughness: 0` — Clean, crisp edges. Use for modern/technical diagrams.
- `roughness: 1` — Hand-drawn, organic feel. Use for brainstorming/informal diagrams.

**Default to 0** for most professional use cases.

### Stroke Width
- `strokeWidth: 1` — Thin, elegant. Good for lines, dividers, subtle connections.
- `strokeWidth: 2` — Standard. Good for shapes and primary arrows.
- `strokeWidth: 3` — Bold. Use sparingly for emphasis (main flow line, key connections).

### Opacity
**Always use `opacity: 100` for all elements.** Use color, size, and stroke width to create hierarchy instead of transparency.

### Small Markers Instead of Shapes
Instead of full shapes, use small dots (10-20px ellipses) as:
- Timeline markers
- Bullet points
- Connection nodes
- Visual anchors for free-floating text

---

## Layout Principles

### Hierarchy Through Scale
- **Hero**: 300×150 - visual anchor, most important
- **Primary**: 180×90
- **Secondary**: 120×60
- **Small**: 60×40

### Whitespace = Importance
The most important element has the most empty space around it (200px+).

### Flow Direction
Guide the eye: typically left→right or top→bottom for sequences, radial for hub-and-spoke.

### Connections Required
Position alone doesn't show relationships. If A relates to B, there must be an arrow.
---

## Text Rules

**CRITICAL**: The JSON `text` property contains ONLY readable words.

```json
{
  "id": "myElement1",
  "type": "text",
  "text": "Start",
  "originalText": "Start"
}
```
Settings: `fontSize: 16`, `fontFamily: 3`, `textAlign: "center"`, `verticalAlign: "middle"`

### Arrow Edge Calculations

Arrows must start/end at shape edges, not centers:

| Edge | Formula |
|------|---------|
| Top | `(x + width/2, y)` |
| Bottom | `(x + width/2, y + height)` |
| Left | `(x, y + height/2)` |
| Right | `(x + width, y + height/2)` |

**Detailed arrow routing:** See `references/arrows.md`

---
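The edge formulas above translate directly into a small helper. A minimal Python sketch — the `shape` dict mirrors an element's `x`/`y`/`width`/`height` fields, and the function name is illustrative:

```python
def edge_point(shape, edge):
    """Return the (x, y) midpoint of a shape's edge, per the table above."""
    x, y = shape["x"], shape["y"]
    w, h = shape["width"], shape["height"]
    return {
        "top": (x + w / 2, y),
        "bottom": (x + w / 2, y + h),
        "left": (x, y + h / 2),
        "right": (x + w, y + h / 2),
    }[edge]

# Matches the worked example later in this skill: bottom edge of a 180x90 box.
box = {"x": 500, "y": 200, "width": 180, "height": 90}
assert edge_point(box, "bottom") == (590.0, 290.0)
```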
|
||||
## Element Types

| Type | Use For |
|------|---------|
| `rectangle` | Services, databases, containers, orchestrators |
| `ellipse` | Users, external systems, start/end points |
| `text` | Labels inside shapes, titles, annotations |
| `arrow` | Data flow, connections, dependencies |
| `line` | Grouping boundaries, separators |

**Full JSON format:** See `references/json-format.md`

---
## Workflow

### Step 1: Analyze Codebase

Discover components by looking for:

| Codebase Type | What to Look For |
|---------------|------------------|
| Monorepo | `packages/*/package.json`, workspace configs |
| Microservices | `docker-compose.yml`, k8s manifests |
| IaC | Terraform/Pulumi resource definitions |
| Backend API | Route definitions, controllers, DB models |
| Frontend | Component hierarchy, API calls |

**Use tools:**
- `Glob` → `**/package.json`, `**/Dockerfile`, `**/*.tf`
- `Grep` → `app.get`, `@Controller`, `CREATE TABLE`
- `Read` → README, config files, entry points

### Step 2: Plan Layout

**Vertical flow (most common):**
```
Row 1: Users/Entry points (y: 100)
Row 2: Frontend/Gateway (y: 230)
Row 3: Orchestration (y: 380)
Row 4: Services (y: 530)
Row 5: Data layer (y: 680)

Columns: x = 100, 300, 500, 700, 900
Element size: 160-200px x 80-90px
```

**Other patterns:** See `references/examples.md`
### Step 3: Generate Elements

For each component:
1. Create shape with unique `id`
2. Add `boundElements` referencing text
3. Create text with `containerId`
4. Choose color based on type

**Color palettes:** See `references/colors.md`
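The shape/text pairing in steps 1-3 can be sketched as a helper that emits both elements with their reciprocal references. A minimal sketch — `make_labeled_box` and its default styling values are illustrative, not part of the skill; only the listed keys mirror real Excalidraw element fields:

```python
def make_labeled_box(el_id, label, x, y, w=180, h=90,
                     fill="#d0bfff", stroke="#7048e8"):
    """Return a rectangle and its bound text element as two linked dicts."""
    shape = {
        "id": el_id, "type": "rectangle",
        "x": x, "y": y, "width": w, "height": h,
        "backgroundColor": fill, "strokeColor": stroke,
        "roughness": 0, "opacity": 100,
        # The shape must reference its label...
        "boundElements": [{"type": "text", "id": f"{el_id}-text"}],
    }
    text = {
        "id": f"{el_id}-text", "type": "text",
        "x": x + 10, "y": y + h / 2 - 10,
        "text": label, "originalText": label,
        "fontSize": 16, "fontFamily": 3,
        "textAlign": "center", "verticalAlign": "middle",
        # ...and the label must reference its container.
        "containerId": el_id,
    }
    return shape, text

shape, text = make_labeled_box("api", "Backend API", 300, 230)
assert text["containerId"] == shape["id"]
```

Both dicts go into the top-level `elements` array; forgetting either side of the link is the most common cause of invisible labels.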
### Step 4: Add Connections

For each relationship:
1. Calculate source edge point
2. Plan elbow route (avoid overlaps)
3. Create arrow with `points` array
4. Match stroke color to destination type

**Arrow patterns:** See `references/arrows.md`

### Step 5: Add Grouping (Optional)

For logical groupings:
- Large transparent rectangle with `strokeStyle: "dashed"`
- Standalone text label at top-left

### Step 6: Validate and Write

Run validation before writing. Save to `docs/` or user-specified path.

**Validation checklist:** See `references/validation.md`

---
## JSON Structure

```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "https://excalidraw.com",
  "elements": [...],
  "appState": {
    "viewBackgroundColor": "#ffffff",
    "gridSize": 20
  },
  "files": {}
}
```

## Quick Arrow Reference

**Straight down:**
```json
{ "points": [[0, 0], [0, 110]], "x": 590, "y": 290 }
```

**L-shape (left then down):**
```json
{ "points": [[0, 0], [-325, 0], [-325, 125]], "x": 525, "y": 420 }
```
**U-turn (callback):**
```json
{ "points": [[0, 0], [50, 0], [50, -125], [20, -125]], "x": 710, "y": 440 }
```

**Arrow width/height** = bounding box of points:
```
points [[0,0], [-440,0], [-440,70]] → width=440, height=70
```

**Multiple arrows from same edge** - stagger positions:
```
5 arrows: 20%, 35%, 50%, 65%, 80% across edge width
```

## Element Templates

See `references/element-templates.md` for copy-paste JSON templates for each element type (text, line, dot, rectangle, arrow). Pull colors from `references/color-palette.md` based on each element's semantic purpose.

---
## Default Color Palette

| Component | Background | Stroke |
|-----------|------------|--------|
| Frontend | `#a5d8ff` | `#1971c2` |
| Backend/API | `#d0bfff` | `#7048e8` |
| Database | `#b2f2bb` | `#2f9e44` |
| Storage | `#ffec99` | `#f08c00` |
| AI/ML | `#e599f7` | `#9c36b5` |
| External APIs | `#ffc9c9` | `#e03131` |
| Orchestration | `#ffa8a8` | `#c92a2a` |
| Message Queue | `#fff3bf` | `#fab005` |
| Cache | `#ffe8cc` | `#fd7e14` |
| Users | `#e7f5ff` | `#1971c2` |

**Cloud-specific palettes:** See `references/colors.md`

---

## Render & Validate (MANDATORY)

You cannot judge a diagram from JSON alone. After generating or editing the Excalidraw JSON, you MUST render it to PNG, view the image, and fix what you see — in a loop until it's right. This is a core part of the workflow, not a final check.

### How to Render
Run the render script from the skill's `references/` directory:

```bash
python3 <skill-references-dir>/render_excalidraw.py <path-to-file.excalidraw>
```

This outputs a PNG next to the `.excalidraw` file. Then use the **Read tool** on the PNG to actually view it.

### The Loop

After generating the initial JSON, run this cycle:

**1. Render & View** — Run the render script, then Read the PNG.

**2. Audit against your original vision** — Before looking for bugs, compare the rendered result to what you designed in Steps 1-4. Ask:
- Does the visual structure match the conceptual structure you planned?
- Does each section use the pattern you intended (fan-out, convergence, timeline, etc.)?
- Does the eye flow through the diagram in the order you designed?
- Is the visual hierarchy correct — hero elements dominant, supporting elements smaller?
- For technical diagrams: are the evidence artifacts (code snippets, data examples) readable and properly placed?

**3. Check for visual defects:**
- Text clipped by or overflowing its container
- Text or shapes overlapping other elements
- Arrows crossing through elements instead of routing around them
- Arrows landing on the wrong element or pointing into empty space
- Labels floating ambiguously (not clearly anchored to what they describe)
- Uneven spacing between elements that should be evenly spaced
- Sections with too much whitespace next to sections that are too cramped
- Text too small to read at the rendered size
- Overall composition feels lopsided or unbalanced

**4. Fix** — Edit the JSON to address everything you found. Common fixes:
- Widen containers when text is clipped
- Adjust `x`/`y` coordinates to fix spacing and alignment
- Add intermediate waypoints to arrow `points` arrays to route around elements
- Reposition labels closer to the element they describe
- Resize elements to rebalance visual weight across sections

**5. Re-render & re-view** — Run the render script again and Read the new PNG.

**6. Repeat** — Keep cycling until the diagram passes both the vision check (Step 2) and the defect check (Step 3). Typically takes 2-4 iterations. Don't stop after one pass just because there are no critical bugs — if the composition could be better, improve it.

### When to Stop

The loop is done when:
- The rendered diagram matches the conceptual design from your planning steps
- No text is clipped, overlapping, or unreadable
- Arrows route cleanly and connect to the right elements
- Spacing is consistent and the composition is balanced
- You'd be comfortable showing it to someone without caveats
---

## Quality Checklist

### Depth & Evidence (Check First for Technical Diagrams)
1. **Research done**: Did you look up actual specs, formats, event names?
2. **Evidence artifacts**: Are there code snippets, JSON examples, or real data?
3. **Multi-zoom**: Does it have summary flow + section boundaries + detail?
4. **Concrete over abstract**: Real content shown, not just labeled boxes?
5. **Educational value**: Could someone learn something concrete from this?

### Conceptual
6. **Isomorphism**: Does each visual structure mirror its concept's behavior?
7. **Argument**: Does the diagram SHOW something text alone couldn't?
8. **Variety**: Does each major concept use a different visual pattern?
9. **No uniform containers**: Avoided card grids and equal boxes?

### Container Discipline
10. **Minimal containers**: Could any boxed element work as free-floating text instead?
11. **Lines as structure**: Are tree/timeline patterns using lines + text rather than boxes?
12. **Typography hierarchy**: Are font size and color creating visual hierarchy (reducing the need for boxes)?

### Structural
13. **Connections**: Every relationship has an arrow or line
14. **Flow**: Clear visual path for the eye to follow
15. **Hierarchy**: Important elements are larger/more isolated

### Technical
16. **Text clean**: `text` contains only readable words
17. **Font**: `fontFamily: 3`
18. **Roughness**: `roughness: 0` for clean/modern (unless hand-drawn style requested)
19. **Opacity**: `opacity: 100` for all elements (no transparency)
20. **Container ratio**: <30% of text elements should be inside containers

### Visual Validation (Render Required)
21. **Rendered to PNG**: Diagram has been rendered and visually inspected
22. **No text overflow**: All text fits within its container
23. **No overlapping elements**: Shapes and text don't overlap unintentionally
24. **Even spacing**: Similar elements have consistent spacing
25. **Arrows land correctly**: Arrows connect to intended elements without crossing others
26. **Readable at export size**: Text is legible in the rendered PNG
27. **Balanced composition**: No large empty voids or overcrowded regions

---

## Quick Validation Checklist

Before writing the file:
- [ ] Every shape with label has boundElements + text element
- [ ] Text elements have containerId matching shape
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`
- [ ] Arrow x,y = source shape edge point
- [ ] Arrow final point offset reaches target edge
- [ ] No duplicate IDs

**Full validation algorithm:** See `references/validation.md`

---

## Common Issues

| Issue | Fix |
|-------|-----|
| Labels don't appear | Use TWO elements (shape + text), not `label` property |
| Arrows curved | Add `elbowed: true`, `roundness: null`, `roughness: 0` |
| Arrows floating | Calculate x,y from shape edge, not center |
| Arrows overlapping | Stagger start positions across edge |

**Detailed bug fixes:** See `references/validation.md`

---

## Reference Files

| File | Contents |
|------|----------|
| `references/json-format.md` | Element types, required properties, text bindings |
| `references/arrows.md` | Routing algorithm, patterns, bindings, staggering |
| `references/colors.md` | Default, AWS, Azure, GCP, K8s palettes |
| `references/examples.md` | Complete JSON examples, layout patterns |
| `references/validation.md` | Checklists, validation algorithm, bug fixes |

---

## Output

- **Location:** `docs/architecture/` or user-specified
- **Filename:** Descriptive, e.g., `system-architecture.excalidraw`
- **Testing:** Open in https://excalidraw.com or the VS Code extension
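Several of the mechanical items in the checklists above (duplicate IDs, containerId/boundElements reciprocity, elbow properties) can be automated. A minimal sketch, assuming the elements are plain dicts parsed from the `.excalidraw` file:

```python
def validate(elements):
    """Check a few mechanical items from the checklists above."""
    problems = []
    ids = [el["id"] for el in elements]
    if len(ids) != len(set(ids)):
        problems.append("duplicate IDs")
    by_id = {el["id"]: el for el in elements}
    for el in elements:
        # Bound text must point at a real container, and the container back at it.
        if el.get("type") == "text" and el.get("containerId"):
            container = by_id.get(el["containerId"])
            if container is None:
                problems.append(f"{el['id']}: containerId points nowhere")
            elif not any(b.get("id") == el["id"]
                         for b in container.get("boundElements") or []):
                problems.append(f"{el['id']}: container does not bind it back")
        # Multi-point arrows need the elbow properties to render with 90° corners.
        if el.get("type") == "arrow" and len(el.get("points", [])) > 2:
            if not el.get("elbowed") or el.get("roundness") is not None:
                problems.append(f"{el['id']}: multi-point arrow not elbowed")
    return problems
```

Run it on the parsed `elements` array before the render step; an empty list means these particular checks pass.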
# Arrow Routing Reference

Complete guide for creating elbow arrows with proper connections.

---

## Critical: Elbow Arrow Properties

Three required properties for 90-degree corners:

```json
{
  "type": "arrow",
  "roughness": 0,    // Clean lines
  "roundness": null, // Sharp corners (not curved)
  "elbowed": true    // Enables elbow mode
}
```

**Without these, arrows will be curved, not 90-degree elbows.**

---

## Edge Calculation Formulas

| Shape Type | Edge | Formula |
|------------|------|---------|
| Rectangle | Top | `(x + width/2, y)` |
| Rectangle | Bottom | `(x + width/2, y + height)` |
| Rectangle | Left | `(x, y + height/2)` |
| Rectangle | Right | `(x + width, y + height/2)` |
| Ellipse | Top | `(x + width/2, y)` |
| Ellipse | Bottom | `(x + width/2, y + height)` |

---
## Universal Arrow Routing Algorithm

```
FUNCTION createArrow(source, target, sourceEdge, targetEdge):
    // Step 1: Get source edge point
    sourcePoint = getEdgePoint(source, sourceEdge)

    // Step 2: Get target edge point
    targetPoint = getEdgePoint(target, targetEdge)

    // Step 3: Calculate offsets
    dx = targetPoint.x - sourcePoint.x
    dy = targetPoint.y - sourcePoint.y

    // Step 4: Determine routing pattern
    IF sourceEdge == "bottom" AND targetEdge == "top":
        IF abs(dx) < 10:  // Nearly aligned
            points = [[0, 0], [0, dy]]
        ELSE:  // Need L-shape
            points = [[0, 0], [dx, 0], [dx, dy]]

    ELSE IF sourceEdge == "right" AND targetEdge == "left":
        IF abs(dy) < 10:
            points = [[0, 0], [dx, 0]]
        ELSE:
            points = [[0, 0], [0, dy], [dx, dy]]

    ELSE IF sourceEdge == targetEdge:  // U-turn
        clearance = 50
        IF sourceEdge == "right":
            points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
        ELSE IF sourceEdge == "bottom":
            points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]

    // Step 5: Calculate bounding box
    width = max(abs(p[0]) for p in points)
    height = max(abs(p[1]) for p in points)

    RETURN {x: sourcePoint.x, y: sourcePoint.y, points, width, height}

FUNCTION getEdgePoint(shape, edge):
    SWITCH edge:
        "top":    RETURN (shape.x + shape.width/2, shape.y)
        "bottom": RETURN (shape.x + shape.width/2, shape.y + shape.height)
        "left":   RETURN (shape.x, shape.y + shape.height/2)
        "right":  RETURN (shape.x + shape.width, shape.y + shape.height/2)
```
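The pseudocode above can be sketched in Python. This version covers only the cases the pseudocode names (bottom→top, right→left, and a right-side U-turn), plus a default L-shape fallback for anything else — the fallback is an assumption, not part of the algorithm above:

```python
def get_edge_point(shape, edge):
    x, y, w, h = shape["x"], shape["y"], shape["width"], shape["height"]
    return {"top": (x + w / 2, y), "bottom": (x + w / 2, y + h),
            "left": (x, y + h / 2), "right": (x + w, y + h / 2)}[edge]

def create_arrow(source, target, source_edge, target_edge):
    """Elbow-route an arrow between two shape edges, per the pseudocode above."""
    sx, sy = get_edge_point(source, source_edge)
    tx, ty = get_edge_point(target, target_edge)
    dx, dy = tx - sx, ty - sy
    if source_edge == "bottom" and target_edge == "top":
        points = [[0, 0], [0, dy]] if abs(dx) < 10 else [[0, 0], [dx, 0], [dx, dy]]
    elif source_edge == "right" and target_edge == "left":
        points = [[0, 0], [dx, 0]] if abs(dy) < 10 else [[0, 0], [0, dy], [dx, dy]]
    elif source_edge == target_edge == "right":
        c = 50  # clearance before turning back
        points = [[0, 0], [c, 0], [c, dy], [dx, dy]]
    else:
        points = [[0, 0], [dx, 0], [dx, dy]]  # assumed default: L-shape
    width = max(abs(p[0]) for p in points)
    height = max(abs(p[1]) for p in points)
    return {"x": sx, "y": sy, "points": points, "width": width, "height": height}

# Reproduces the vertical worked example below: (590, 290), points [[0,0],[0,110]].
src = {"x": 500, "y": 200, "width": 180, "height": 90}
dst = {"x": 500, "y": 400, "width": 180, "height": 90}
assert create_arrow(src, dst, "bottom", "top")["points"] == [[0, 0], [0, 110.0]]
```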
|
||||
---

## Arrow Patterns Reference

| Pattern | Points | Use Case |
|---------|--------|----------|
| Down | `[[0,0], [0,h]]` | Vertical connection |
| Right | `[[0,0], [w,0]]` | Horizontal connection |
| L-left-down | `[[0,0], [-w,0], [-w,h]]` | Go left, then down |
| L-right-down | `[[0,0], [w,0], [w,h]]` | Go right, then down |
| L-down-left | `[[0,0], [0,h], [-w,h]]` | Go down, then left |
| L-down-right | `[[0,0], [0,h], [w,h]]` | Go down, then right |
| S-shape | `[[0,0], [0,h1], [w,h1], [w,h2]]` | Navigate around obstacles |
| U-turn | `[[0,0], [w,0], [w,-h], [0,-h]]` | Callback/return arrows |

---

## Worked Examples

### Vertical Connection (Bottom to Top)

```
Source: x=500, y=200, width=180, height=90
Target: x=500, y=400, width=180, height=90

source_bottom = (500 + 180/2, 200 + 90) = (590, 290)
target_top = (500 + 180/2, 400) = (590, 400)

Arrow x = 590, y = 290
Distance = 400 - 290 = 110
Points = [[0, 0], [0, 110]]
```
### Fan-out (One to Many)

```
Orchestrator: x=570, y=400, width=140, height=80
Target: x=120, y=550, width=160, height=80

orchestrator_bottom = (570 + 140/2, 400 + 80) = (640, 480)
target_top = (120 + 160/2, 550) = (200, 550)

Arrow x = 640, y = 480
Horizontal offset = 200 - 640 = -440
Vertical offset = 550 - 480 = 70

Points = [[0, 0], [-440, 0], [-440, 70]]  // Left first, then down
```

### U-turn (Callback)

```
Source: x=570, y=400, width=140, height=80
Target: x=550, y=270, width=180, height=90
Connection: Right of source -> Right of target

source_right = (570 + 140, 400 + 80/2) = (710, 440)
target_right = (550 + 180, 270 + 90/2) = (730, 315)

Arrow x = 710, y = 440
Vertical distance = 315 - 440 = -125
Final x offset = 730 - 710 = 20

Points = [[0, 0], [50, 0], [50, -125], [20, -125]]
// Right 50px (clearance), up 125px, left 30px
```

---
## Staggering Multiple Arrows

When N arrows leave from the same edge, spread them evenly:

```
FUNCTION getStaggeredPositions(shape, edge, numArrows):
    positions = []
    FOR i FROM 0 TO numArrows-1:
        percentage = 0.2 + (0.6 * i / (numArrows - 1))

        IF edge == "bottom" OR edge == "top":
            x = shape.x + shape.width * percentage
            y = (edge == "bottom") ? shape.y + shape.height : shape.y
        ELSE:
            x = (edge == "right") ? shape.x + shape.width : shape.x
            y = shape.y + shape.height * percentage

        positions.append({x, y})
    RETURN positions

// Examples:
// 2 arrows: 20%, 80%
// 3 arrows: 20%, 50%, 80%
// 5 arrows: 20%, 35%, 50%, 65%, 80%
```
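A Python sketch of the same function. Note one addition: the pseudocode divides by `numArrows - 1`, which fails for a single arrow, so this version anchors a lone arrow at the midpoint:

```python
def staggered_positions(shape, edge, num_arrows):
    """Spread arrow anchor points between 20% and 80% of the edge."""
    positions = []
    for i in range(num_arrows):
        # Single arrow: anchor at the midpoint instead of dividing by zero.
        pct = 0.5 if num_arrows == 1 else 0.2 + 0.6 * i / (num_arrows - 1)
        if edge in ("top", "bottom"):
            x = shape["x"] + shape["width"] * pct
            y = shape["y"] + (shape["height"] if edge == "bottom" else 0)
        else:
            x = shape["x"] + (shape["width"] if edge == "right" else 0)
            y = shape["y"] + shape["height"] * pct
        positions.append((x, y))
    return positions

hub = {"x": 0, "y": 0, "width": 100, "height": 50}
# Three arrows off the bottom edge land at 20%, 50%, 80% of the width.
assert [round(x, 6) for x, _ in staggered_positions(hub, "bottom", 3)] == [20.0, 50.0, 80.0]
```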
|
||||
---

## Arrow Bindings

For better visual attachment, use `startBinding` and `endBinding`:

```json
{
  "id": "arrow-workflow-convert",
  "type": "arrow",
  "x": 525,
  "y": 420,
  "width": 325,
  "height": 125,
  "points": [[0, 0], [-325, 0], [-325, 125]],
  "roughness": 0,
  "roundness": null,
  "elbowed": true,
  "startBinding": {
    "elementId": "cloud-workflows",
    "focus": 0,
    "gap": 1,
    "fixedPoint": [0.5, 1]
  },
  "endBinding": {
    "elementId": "convert-pdf-service",
    "focus": 0,
    "gap": 1,
    "fixedPoint": [0.5, 0]
  },
  "startArrowhead": null,
  "endArrowhead": "arrow"
}
```

### fixedPoint Values

- Top center: `[0.5, 0]`
- Bottom center: `[0.5, 1]`
- Left center: `[0, 0.5]`
- Right center: `[1, 0.5]`

### Update Shape boundElements

```json
{
  "id": "cloud-workflows",
  "boundElements": [
    { "type": "text", "id": "cloud-workflows-text" },
    { "type": "arrow", "id": "arrow-workflow-convert" }
  ]
}
```

---
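Keeping the bindings and the reciprocal `boundElements` entries in sync is easy to forget. A minimal helper sketch — `bind_arrow` is illustrative, and the `focus`/`gap` defaults are copied from the example above:

```python
def bind_arrow(arrow, start_el, end_el,
               start_fixed=(0.5, 1), end_fixed=(0.5, 0)):
    """Wire up startBinding/endBinding and the reciprocal boundElements entries."""
    arrow["startBinding"] = {"elementId": start_el["id"], "focus": 0,
                             "gap": 1, "fixedPoint": list(start_fixed)}
    arrow["endBinding"] = {"elementId": end_el["id"], "focus": 0,
                           "gap": 1, "fixedPoint": list(end_fixed)}
    # Both shapes must list the arrow back in their boundElements arrays.
    for el in (start_el, end_el):
        el.setdefault("boundElements", []).append(
            {"type": "arrow", "id": arrow["id"]})

arrow = {"id": "arrow-workflow-convert", "type": "arrow"}
src = {"id": "cloud-workflows"}
dst = {"id": "convert-pdf-service"}
bind_arrow(arrow, src, dst)
assert {"type": "arrow", "id": "arrow-workflow-convert"} in src["boundElements"]
```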
|
||||
## Bidirectional Arrows

For two-way data flows:

```json
{
  "type": "arrow",
  "startArrowhead": "arrow",
  "endArrowhead": "arrow"
}
```

Arrowhead options: `null`, `"arrow"`, `"bar"`, `"dot"`, `"triangle"`

---

## Arrow Labels

Position standalone text near the arrow midpoint:

```json
{
  "id": "arrow-api-db-label",
  "type": "text",
  "x": 305,  // Arrow x + offset
  "y": 245,  // Arrow midpoint
  "text": "SQL",
  "fontSize": 12,
  "containerId": null,
  "backgroundColor": "#ffffff"
}
```

**Positioning formula:**
- Vertical: `label.y = arrow.y + (total_height / 2)`
- Horizontal: `label.x = arrow.x + (total_width / 2)`
- L-shaped: Position at corner or longest segment midpoint

---
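The positioning formulas can be sketched as one helper that uses the center of the points' bounding box — this generalizes the vertical and horizontal formulas above; the corner placement for L-shapes is not handled:

```python
def label_position(arrow):
    """Place a standalone label near the arrow's midpoint (bounding-box center)."""
    xs = [p[0] for p in arrow["points"]]
    ys = [p[1] for p in arrow["points"]]
    return (arrow["x"] + (min(xs) + max(xs)) / 2,
            arrow["y"] + (min(ys) + max(ys)) / 2)

# A straight-down arrow: label sits halfway along its height.
arrow = {"x": 290, "y": 190, "points": [[0, 0], [0, 110]]}
assert label_position(arrow) == (290.0, 245.0)
```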
|
||||
## Width/Height Calculation

Arrow `width` and `height` = bounding box of the path:

```
points = [[0, 0], [-440, 0], [-440, 70]]
width = abs(-440) = 440
height = abs(70) = 70

points = [[0, 0], [50, 0], [50, -125], [20, -125]]
width = max(abs(50), abs(20)) = 50
height = abs(-125) = 125
```
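The calculation above as a short sketch:

```python
def arrow_bbox(points):
    """Width/height = extent of the elbow path's bounding box."""
    width = max(abs(px) for px, _ in points)
    height = max(abs(py) for _, py in points)
    return width, height

# Both examples from the reference above.
assert arrow_bbox([[0, 0], [-440, 0], [-440, 70]]) == (440, 70)
assert arrow_bbox([[0, 0], [50, 0], [50, -125], [20, -125]]) == (50, 125)
```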
# Color Palette & Brand Style

**This is the single source of truth for all colors and brand-specific styles.** To customize diagrams for your own brand, edit this file — everything else in the skill is universal.

---

## Shape Colors (Semantic)

Colors encode meaning, not decoration. Each semantic purpose has a fill/stroke pair.

| Semantic Purpose | Fill | Stroke |
|------------------|------|--------|
| Primary/Neutral | `#3b82f6` | `#1e3a5f` |
| Secondary | `#60a5fa` | `#1e3a5f` |
| Tertiary | `#93c5fd` | `#1e3a5f` |
| Start/Trigger | `#fed7aa` | `#c2410c` |
| End/Success | `#a7f3d0` | `#047857` |
| Warning/Reset | `#fee2e2` | `#dc2626` |
| Decision | `#fef3c7` | `#b45309` |
| AI/LLM | `#ddd6fe` | `#6d28d9` |
| Inactive/Disabled | `#dbeafe` | `#1e40af` (use dashed stroke) |
| Error | `#fecaca` | `#b91c1c` |

**Rule**: Always pair a darker stroke with a lighter fill for contrast.

---

## Text Colors (Hierarchy)

Use color on free-floating text to create visual hierarchy without containers.

| Level | Color | Use For |
|-------|-------|---------|
| Title | `#1e40af` | Section headings, major labels |
| Subtitle | `#3b82f6` | Subheadings, secondary labels |
| Body/Detail | `#64748b` | Descriptions, annotations, metadata |
| On light fills | `#374151` | Text inside light-colored shapes |
| On dark fills | `#ffffff` | Text inside dark-colored shapes |

---

## Evidence Artifact Colors

Used for code snippets, data examples, and other concrete evidence inside technical diagrams.

| Artifact | Background | Text Color |
|----------|-----------|------------|
| Code snippet | `#1e293b` | Syntax-colored (language-appropriate) |
| JSON/data example | `#1e293b` | `#22c55e` (green) |

---

## Default Stroke & Line Colors

| Element | Color |
|---------|-------|
| Arrows | Use the stroke color of the source element's semantic purpose |
| Structural lines (dividers, trees, timelines) | Primary stroke (`#1e3a5f`) or Slate (`#64748b`) |
| Marker dots (fill + stroke) | Primary fill (`#3b82f6`) |

---

## Background

| Property | Value |
|----------|-------|
| Canvas background | `#ffffff` |
@@ -1,91 +0,0 @@
# Color Palettes Reference

Color schemes for different platforms and component types.

---

## Default Palette (Platform-Agnostic)

| Component Type | Background | Stroke | Example |
|----------------|------------|--------|---------|
| Frontend/UI | `#a5d8ff` | `#1971c2` | Next.js, React apps |
| Backend/API | `#d0bfff` | `#7048e8` | API servers, processors |
| Database | `#b2f2bb` | `#2f9e44` | PostgreSQL, MySQL, MongoDB |
| Storage | `#ffec99` | `#f08c00` | Object storage, file systems |
| AI/ML Services | `#e599f7` | `#9c36b5` | ML models, AI APIs |
| External APIs | `#ffc9c9` | `#e03131` | Third-party services |
| Orchestration | `#ffa8a8` | `#c92a2a` | Workflows, schedulers |
| Validation | `#ffd8a8` | `#e8590c` | Validators, checkers |
| Network/Security | `#dee2e6` | `#495057` | VPC, IAM, firewalls |
| Classification | `#99e9f2` | `#0c8599` | Routers, classifiers |
| Users/Actors | `#e7f5ff` | `#1971c2` | User ellipses |
| Message Queue | `#fff3bf` | `#fab005` | Kafka, RabbitMQ, SQS |
| Cache | `#ffe8cc` | `#fd7e14` | Redis, Memcached |
| Monitoring | `#d3f9d8` | `#40c057` | Prometheus, Grafana |

---

## AWS Palette

| Service Category | Background | Stroke |
|------------------|------------|--------|
| Compute (EC2, Lambda, ECS) | `#ff9900` | `#cc7a00` |
| Storage (S3, EBS) | `#3f8624` | `#2d6119` |
| Database (RDS, DynamoDB) | `#3b48cc` | `#2d3899` |
| Networking (VPC, Route53) | `#8c4fff` | `#6b3dcc` |
| Security (IAM, KMS) | `#dd344c` | `#b12a3d` |
| Analytics (Kinesis, Athena) | `#8c4fff` | `#6b3dcc` |
| ML (SageMaker, Bedrock) | `#01a88d` | `#017d69` |

---

## Azure Palette

| Service Category | Background | Stroke |
|------------------|------------|--------|
| Compute | `#0078d4` | `#005a9e` |
| Storage | `#50e6ff` | `#3cb5cc` |
| Database | `#0078d4` | `#005a9e` |
| Networking | `#773adc` | `#5a2ca8` |
| Security | `#ff8c00` | `#cc7000` |
| AI/ML | `#50e6ff` | `#3cb5cc` |

---

## GCP Palette

| Service Category | Background | Stroke |
|------------------|------------|--------|
| Compute (GCE, Cloud Run) | `#4285f4` | `#3367d6` |
| Storage (GCS) | `#34a853` | `#2d8e47` |
| Database (Cloud SQL, Firestore) | `#ea4335` | `#c53929` |
| Networking | `#fbbc04` | `#d99e04` |
| AI/ML (Vertex AI) | `#9334e6` | `#7627b8` |

---

## Kubernetes Palette

| Component | Background | Stroke |
|-----------|------------|--------|
| Pod | `#326ce5` | `#2756b8` |
| Service | `#326ce5` | `#2756b8` |
| Deployment | `#326ce5` | `#2756b8` |
| ConfigMap/Secret | `#7f8c8d` | `#626d6e` |
| Ingress | `#00d4aa` | `#00a888` |
| Node | `#303030` | `#1a1a1a` |
| Namespace | `#f0f0f0` | `#c0c0c0` (dashed) |

---

## Diagram Type Suggestions

| Diagram Type | Recommended Layout | Key Elements |
|--------------|--------------------|--------------|
| Microservices | Vertical flow | Services, databases, queues, API gateway |
| Data Pipeline | Horizontal flow | Sources, transformers, sinks, storage |
| Event-Driven | Hub-and-spoke | Event bus center, producers/consumers |
| Kubernetes | Layered groups | Namespace boxes, pods inside deployments |
| CI/CD | Horizontal flow | Source -> Build -> Test -> Deploy -> Monitor |
| Network | Hierarchical | Internet -> LB -> VPC -> Subnets -> Instances |
| User Flow | Swimlanes | User actions, system responses, external calls |
182 skills/excalidraw/references/element-templates.md (new file)
@@ -0,0 +1,182 @@
# Element Templates

Copy-paste JSON templates for each Excalidraw element type. The `strokeColor` and `backgroundColor` values are placeholders — always pull actual colors from `color-palette.md` based on the element's semantic purpose.

## Free-Floating Text (no container)
```json
{
  "type": "text",
  "id": "label1",
  "x": 100, "y": 100,
  "width": 200, "height": 25,
  "text": "Section Title",
  "originalText": "Section Title",
  "fontSize": 20,
  "fontFamily": 3,
  "textAlign": "left",
  "verticalAlign": "top",
  "strokeColor": "<title color from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 1,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 11111,
  "version": 1,
  "versionNonce": 22222,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "containerId": null,
  "lineHeight": 1.25
}
```

## Line (structural, not arrow)
```json
{
  "type": "line",
  "id": "line1",
  "x": 100, "y": 100,
  "width": 0, "height": 200,
  "strokeColor": "<structural line color from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 44444,
  "version": 1,
  "versionNonce": 55555,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "points": [[0, 0], [0, 200]]
}
```

## Small Marker Dot
```json
{
  "type": "ellipse",
  "id": "dot1",
  "x": 94, "y": 94,
  "width": 12, "height": 12,
  "strokeColor": "<marker dot color from palette>",
  "backgroundColor": "<marker dot color from palette>",
  "fillStyle": "solid",
  "strokeWidth": 1,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 66666,
  "version": 1,
  "versionNonce": 77777,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false
}
```

## Rectangle
```json
{
  "type": "rectangle",
  "id": "elem1",
  "x": 100, "y": 100, "width": 180, "height": 90,
  "strokeColor": "<stroke from palette based on semantic purpose>",
  "backgroundColor": "<fill from palette based on semantic purpose>",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 12345,
  "version": 1,
  "versionNonce": 67890,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": [{"id": "text1", "type": "text"}],
  "link": null,
  "locked": false,
  "roundness": {"type": 3}
}
```

## Text (centered in shape)
```json
{
  "type": "text",
  "id": "text1",
  "x": 130, "y": 132,
  "width": 120, "height": 25,
  "text": "Process",
  "originalText": "Process",
  "fontSize": 16,
  "fontFamily": 3,
  "textAlign": "center",
  "verticalAlign": "middle",
  "strokeColor": "<text color — match parent shape's stroke or use 'on light/dark fills' from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 1,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 11111,
  "version": 1,
  "versionNonce": 22222,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "containerId": "elem1",
  "lineHeight": 1.25
}
```

## Arrow
```json
{
  "type": "arrow",
  "id": "arrow1",
  "x": 282, "y": 145, "width": 118, "height": 0,
  "strokeColor": "<arrow color — typically matches source element's stroke from palette>",
  "backgroundColor": "transparent",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 0,
  "opacity": 100,
  "angle": 0,
  "seed": 33333,
  "version": 1,
  "versionNonce": 44444,
  "isDeleted": false,
  "groupIds": [],
  "boundElements": null,
  "link": null,
  "locked": false,
  "points": [[0, 0], [118, 0]],
  "startBinding": {"elementId": "elem1", "focus": 0, "gap": 2},
  "endBinding": {"elementId": "elem2", "focus": 0, "gap": 2},
  "startArrowhead": null,
  "endArrowhead": "arrow"
}
```

For curves: use 3+ points in the `points` array.
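The Rectangle and centered-text templates above always travel as a pair. A minimal Python sketch of wiring the two together (the helper name is hypothetical):

```python
def bind_label(shape, text):
    """Cross-link a shape dict and its label dict, per the templates above:
    the shape lists the text in boundElements, the text points back via containerId."""
    shape["boundElements"] = [{"id": text["id"], "type": "text"}]
    text["containerId"] = shape["id"]
    return shape, text

# Minimal stand-ins for the full templates:
shape = {"id": "elem1", "type": "rectangle", "boundElements": None}
label = {"id": "elem1-text", "type": "text", "containerId": None}
bind_label(shape, label)
print(shape["boundElements"])  # [{'id': 'elem1-text', 'type': 'text'}]
print(label["containerId"])    # elem1
```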
@@ -1,381 +0,0 @@
# Complete Examples Reference

Full JSON examples showing proper element structure.

---

## 3-Tier Architecture Example

This is a REFERENCE showing JSON structure. Replace IDs, labels, positions, and colors based on discovered components.

```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "claude-code-excalidraw-skill",
  "elements": [
    {
      "id": "user",
      "type": "ellipse",
      "x": 150,
      "y": 50,
      "width": 100,
      "height": 60,
      "angle": 0,
      "strokeColor": "#1971c2",
      "backgroundColor": "#e7f5ff",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 1,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": { "type": 2 },
      "seed": 1,
      "version": 1,
      "versionNonce": 1,
      "isDeleted": false,
      "boundElements": [{ "type": "text", "id": "user-text" }],
      "updated": 1,
      "link": null,
      "locked": false
    },
    {
      "id": "user-text",
      "type": "text",
      "x": 175,
      "y": 67,
      "width": 50,
      "height": 25,
      "angle": 0,
      "strokeColor": "#1e1e1e",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 1,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 2,
      "version": 1,
      "versionNonce": 2,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "text": "User",
      "fontSize": 16,
      "fontFamily": 1,
      "textAlign": "center",
      "verticalAlign": "middle",
      "baseline": 14,
      "containerId": "user",
      "originalText": "User",
      "lineHeight": 1.25
    },
    {
      "id": "frontend",
      "type": "rectangle",
      "x": 100,
      "y": 180,
      "width": 200,
      "height": 80,
      "angle": 0,
      "strokeColor": "#1971c2",
      "backgroundColor": "#a5d8ff",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 1,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": { "type": 3 },
      "seed": 3,
      "version": 1,
      "versionNonce": 3,
      "isDeleted": false,
      "boundElements": [{ "type": "text", "id": "frontend-text" }],
      "updated": 1,
      "link": null,
      "locked": false
    },
    {
      "id": "frontend-text",
      "type": "text",
      "x": 105,
      "y": 195,
      "width": 190,
      "height": 50,
      "angle": 0,
      "strokeColor": "#1e1e1e",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 1,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 4,
      "version": 1,
      "versionNonce": 4,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "text": "Frontend\nNext.js",
      "fontSize": 16,
      "fontFamily": 1,
      "textAlign": "center",
      "verticalAlign": "middle",
      "baseline": 14,
      "containerId": "frontend",
      "originalText": "Frontend\nNext.js",
      "lineHeight": 1.25
    },
    {
      "id": "database",
      "type": "rectangle",
      "x": 100,
      "y": 330,
      "width": 200,
      "height": 80,
      "angle": 0,
      "strokeColor": "#2f9e44",
      "backgroundColor": "#b2f2bb",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 1,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": { "type": 3 },
      "seed": 5,
      "version": 1,
      "versionNonce": 5,
      "isDeleted": false,
      "boundElements": [{ "type": "text", "id": "database-text" }],
      "updated": 1,
      "link": null,
      "locked": false
    },
    {
      "id": "database-text",
      "type": "text",
      "x": 105,
      "y": 345,
      "width": 190,
      "height": 50,
      "angle": 0,
      "strokeColor": "#1e1e1e",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 1,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 6,
      "version": 1,
      "versionNonce": 6,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "text": "Database\nPostgreSQL",
      "fontSize": 16,
      "fontFamily": 1,
      "textAlign": "center",
      "verticalAlign": "middle",
      "baseline": 14,
      "containerId": "database",
      "originalText": "Database\nPostgreSQL",
      "lineHeight": 1.25
    },
    {
      "id": "arrow-user-frontend",
      "type": "arrow",
      "x": 200,
      "y": 115,
      "width": 0,
      "height": 60,
      "angle": 0,
      "strokeColor": "#1971c2",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 7,
      "version": 1,
      "versionNonce": 7,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "points": [[0, 0], [0, 60]],
      "lastCommittedPoint": null,
      "startBinding": null,
      "endBinding": null,
      "startArrowhead": null,
      "endArrowhead": "arrow",
      "elbowed": true
    },
    {
      "id": "arrow-frontend-database",
      "type": "arrow",
      "x": 200,
      "y": 265,
      "width": 0,
      "height": 60,
      "angle": 0,
      "strokeColor": "#2f9e44",
      "backgroundColor": "transparent",
      "fillStyle": "solid",
      "strokeWidth": 2,
      "strokeStyle": "solid",
      "roughness": 0,
      "opacity": 100,
      "groupIds": [],
      "frameId": null,
      "roundness": null,
      "seed": 8,
      "version": 1,
      "versionNonce": 8,
      "isDeleted": false,
      "boundElements": null,
      "updated": 1,
      "link": null,
      "locked": false,
      "points": [[0, 0], [0, 60]],
      "lastCommittedPoint": null,
      "startBinding": null,
      "endBinding": null,
      "startArrowhead": null,
      "endArrowhead": "arrow",
      "elbowed": true
    }
  ],
  "appState": {
    "gridSize": 20,
    "viewBackgroundColor": "#ffffff"
  },
  "files": {}
}
```

---

## Layout Patterns

### Vertical Flow (Most Common)

```
Grid positioning:
- Column width: 200-250px
- Row height: 130-150px
- Element size: 160-200px x 80-90px
- Spacing: 40-50px between elements

Row positions (y):
Row 0: 20  (title)
Row 1: 100 (users/entry points)
Row 2: 230 (frontend/gateway)
Row 3: 380 (orchestration)
Row 4: 530 (services)
Row 5: 680 (data layer)
Row 6: 830 (external services)

Column positions (x):
Col 0: 100
Col 1: 300
Col 2: 500
Col 3: 700
Col 4: 900
```

### Horizontal Flow (Pipelines)

```
Stage positions (x):
Stage 0: 100  (input/source)
Stage 1: 350  (transform 1)
Stage 2: 600  (transform 2)
Stage 3: 850  (transform 3)
Stage 4: 1100 (output/sink)

All stages at same y: 200
Arrows: "right" -> "left" connections
```

### Hub-and-Spoke

```
Center hub: x=500, y=350
8 positions at 45° increments:
N:  (500, 150)
NE: (640, 210)
E:  (700, 350)
SE: (640, 490)
S:  (500, 550)
SW: (360, 490)
W:  (300, 350)
NW: (360, 210)
```
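The grid and ring coordinates above can be generated rather than hand-placed. A sketch using the row/column values listed (helper names are illustrative):

```python
import math

ROW_YS = (20, 100, 230, 380, 530, 680, 830)  # vertical-flow row bands above

def grid_position(row, col, origin_x=100, col_width=200):
    """(x, y) of a vertical-flow grid cell."""
    return origin_x + col * col_width, ROW_YS[row]

def spoke_positions(cx, cy, radius, n=8):
    """n points around a hub, starting at north, in 360/n-degree steps."""
    step = 360 / n
    return [
        (round(cx + radius * math.cos(math.radians(-90 + i * step))),
         round(cy + radius * math.sin(math.radians(-90 + i * step))))
        for i in range(n)
    ]

print(grid_position(2, 1))                # (300, 230), i.e. Col 1 / Row 2
print(spoke_positions(500, 350, 200)[0])  # (500, 150), the N spoke
```

The hub-and-spoke table above rounds the diagonal spokes to friendly numbers (640 instead of 641), so treat generated values as starting points.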

---

## Complex Architecture Layout

```
Row 0: Title/Header         (y: 20)
Row 1: Users/Clients        (y: 80)
Row 2: Frontend/Gateway     (y: 200)
Row 3: Orchestration        (y: 350)
Row 4: Processing Services  (y: 550)
Row 5: Data Layer           (y: 680)
Row 6: External Services    (y: 830)

Columns (x):
Col 0: 120
Col 1: 320
Col 2: 520
Col 3: 720
Col 4: 920
```

---

## Diagram Complexity Guidelines

| Complexity | Max Elements | Max Arrows | Approach |
|------------|--------------|------------|----------|
| Simple | 5-10 | 5-10 | Single file, no groups |
| Medium | 10-25 | 15-30 | Use grouping rectangles |
| Complex | 25-50 | 30-60 | Split into multiple diagrams |
| Very Complex | 50+ | 60+ | Multiple focused diagrams |

**When to split:**
- More than 50 elements
- Create: `architecture-overview.excalidraw`, `architecture-data-layer.excalidraw`

**When to use groups:**
- 3+ related services
- Same deployment unit
- Logical boundaries (VPC, Security Zone)
@@ -1,210 +0,0 @@
# Excalidraw JSON Format Reference

Complete reference for Excalidraw JSON structure and element types.

---

## File Structure

```json
{
  "type": "excalidraw",
  "version": 2,
  "source": "claude-code-excalidraw-skill",
  "elements": [],
  "appState": {
    "gridSize": 20,
    "viewBackgroundColor": "#ffffff"
  },
  "files": {}
}
```

---

## Element Types

| Type | Use For | Arrow Reliability |
|------|---------|-------------------|
| `rectangle` | Services, components, databases, containers, orchestrators, decision points | Excellent |
| `ellipse` | Users, external systems, start/end points | Good |
| `text` | Labels inside shapes, titles, annotations | N/A |
| `arrow` | Data flow, connections, dependencies | N/A |
| `line` | Grouping boundaries, separators | N/A |

### BANNED: Diamond Shapes

**NEVER use `type: "diamond"` in generated diagrams.**

Diamond arrow connections are fundamentally broken in raw Excalidraw JSON:
- Excalidraw applies `roundness` to diamond vertices during rendering
- Visual edges appear offset from mathematical edge points
- No offset formula reliably compensates
- Arrows appear disconnected/floating

**Use styled rectangles instead** for visual distinction:

| Semantic Meaning | Rectangle Style |
|------------------|-----------------|
| Orchestrator/Hub | Coral (`#ffa8a8`/`#c92a2a`) + strokeWidth: 3 |
| Decision Point | Orange (`#ffd8a8`/`#e8590c`) + dashed stroke |
| Central Router | Larger size + bold color |

---

## Required Element Properties

Every element MUST have these properties:

```json
{
  "id": "unique-id-string",
  "type": "rectangle",
  "x": 100,
  "y": 100,
  "width": 200,
  "height": 80,
  "angle": 0,
  "strokeColor": "#1971c2",
  "backgroundColor": "#a5d8ff",
  "fillStyle": "solid",
  "strokeWidth": 2,
  "strokeStyle": "solid",
  "roughness": 1,
  "opacity": 100,
  "groupIds": [],
  "frameId": null,
  "roundness": { "type": 3 },
  "seed": 1,
  "version": 1,
  "versionNonce": 1,
  "isDeleted": false,
  "boundElements": null,
  "updated": 1,
  "link": null,
  "locked": false
}
```

---

## Text Inside Shapes (Labels)

**Every labeled shape requires TWO elements:**

### Shape with boundElements

```json
{
  "id": "{component-id}",
  "type": "rectangle",
  "x": 500,
  "y": 200,
  "width": 200,
  "height": 90,
  "strokeColor": "#1971c2",
  "backgroundColor": "#a5d8ff",
  "boundElements": [{ "type": "text", "id": "{component-id}-text" }],
  // ... other required properties
}
```

### Text with containerId

```json
{
  "id": "{component-id}-text",
  "type": "text",
  "x": 505, // shape.x + 5
  "y": 220, // shape.y + (shape.height - text.height) / 2
  "width": 190, // shape.width - 10
  "height": 50,
  "text": "{Component Name}\n{Subtitle}",
  "fontSize": 16,
  "fontFamily": 1,
  "textAlign": "center",
  "verticalAlign": "middle",
  "containerId": "{component-id}",
  "originalText": "{Component Name}\n{Subtitle}",
  "lineHeight": 1.25,
  // ... other required properties
}
```

### DO NOT Use the `label` Property

The `label` property is for the JavaScript API, NOT raw JSON files:

```json
// WRONG - will show empty boxes
{ "type": "rectangle", "label": { "text": "My Label" } }

// CORRECT - requires TWO elements:
// 1. Shape with boundElements reference
// 2. Separate text element with containerId
```

### Text Positioning

- Text `x` = shape `x` + 5
- Text `y` = shape `y` + (shape.height - text.height) / 2
- Text `width` = shape `width` - 10
- Use `\n` for multi-line labels
- Always use `textAlign: "center"` and `verticalAlign: "middle"`
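The positioning rules above as a tiny Python helper (the name is hypothetical; the 50px text height matches the example elements):

```python
def center_label(shape, text_height=50):
    """Geometry for a text element centered inside `shape`, per the rules above."""
    return {
        "x": shape["x"] + 5,
        "y": shape["y"] + (shape["height"] - text_height) / 2,
        "width": shape["width"] - 10,
        "height": text_height,
    }

# For the 200x90 rectangle at (500, 200) shown above:
print(center_label({"x": 500, "y": 200, "width": 200, "height": 90}))
# {'x': 505, 'y': 220.0, 'width': 190, 'height': 50}
```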
### ID Naming Convention

Always use the pattern `{shape-id}-text` for text element IDs.

---

## Dynamic ID Generation

IDs and labels are generated from codebase analysis:

| Discovered Component | Generated ID | Generated Label |
|----------------------|--------------|-----------------|
| Express API server | `express-api` | `"API Server\nExpress.js"` |
| PostgreSQL database | `postgres-db` | `"PostgreSQL\nDatabase"` |
| Redis cache | `redis-cache` | `"Redis\nCache Layer"` |
| S3 bucket for uploads | `s3-uploads` | `"S3 Bucket\nuploads/"` |
| Lambda function | `lambda-processor` | `"Lambda\nProcessor"` |
| React frontend | `react-frontend` | `"React App\nFrontend"` |

---

## Grouping with Dashed Rectangles

For logical groupings (namespaces, VPCs, pipelines):

```json
{
  "id": "group-ai-pipeline",
  "type": "rectangle",
  "x": 100,
  "y": 500,
  "width": 1000,
  "height": 280,
  "strokeColor": "#9c36b5",
  "backgroundColor": "transparent",
  "strokeStyle": "dashed",
  "roughness": 0,
  "roundness": null,
  "boundElements": null
}
```

Group labels are standalone text (no `containerId`) at the top-left:

```json
{
  "id": "group-ai-pipeline-label",
  "type": "text",
  "x": 120,
  "y": 510,
  "text": "AI Processing Pipeline (Cloud Run)",
  "textAlign": "left",
  "verticalAlign": "top",
  "containerId": null
}
```
71 skills/excalidraw/references/json-schema.md (new file)
@@ -0,0 +1,71 @@
# Excalidraw JSON Schema

## Element Types

| Type | Use For |
|------|---------|
| `rectangle` | Processes, actions, components |
| `ellipse` | Entry/exit points, external systems |
| `diamond` | Decisions, conditionals |
| `arrow` | Connections between shapes |
| `text` | Labels inside shapes |
| `line` | Non-arrow connections |
| `frame` | Grouping containers |

## Common Properties

All elements share these:

| Property | Type | Description |
|----------|------|-------------|
| `id` | string | Unique identifier |
| `type` | string | Element type |
| `x`, `y` | number | Position in pixels |
| `width`, `height` | number | Size in pixels |
| `strokeColor` | string | Border color (hex) |
| `backgroundColor` | string | Fill color (hex or "transparent") |
| `fillStyle` | string | "solid", "hachure", "cross-hatch" |
| `strokeWidth` | number | 1, 2, or 4 |
| `strokeStyle` | string | "solid", "dashed", "dotted" |
| `roughness` | number | 0 (smooth), 1 (default), 2 (rough) |
| `opacity` | number | 0-100 |
| `seed` | number | Random seed for roughness |

## Text-Specific Properties

| Property | Description |
|----------|-------------|
| `text` | The display text |
| `originalText` | Same as `text` |
| `fontSize` | Size in pixels (16-20 recommended) |
| `fontFamily` | 3 for monospace (use this) |
| `textAlign` | "left", "center", "right" |
| `verticalAlign` | "top", "middle", "bottom" |
| `containerId` | ID of parent shape |

## Arrow-Specific Properties

| Property | Description |
|----------|-------------|
| `points` | Array of [x, y] coordinates |
| `startBinding` | Connection to start shape |
| `endBinding` | Connection to end shape |
| `startArrowhead` | null, "arrow", "bar", "dot", "triangle" |
| `endArrowhead` | null, "arrow", "bar", "dot", "triangle" |

## Binding Format

```json
{
  "elementId": "shapeId",
  "focus": 0,
  "gap": 2
}
```

## Rectangle Roundness

Add for rounded corners:

```json
"roundness": { "type": 3 }
```
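A minimal sketch that wraps elements in the top-level file structure used elsewhere in this skill (the `source` string is carried over from those examples, not mandated by the schema):

```python
import json

def minimal_excalidraw(elements):
    """Wrap a list of element dicts in a serializable .excalidraw file skeleton."""
    return {
        "type": "excalidraw",
        "version": 2,
        "source": "claude-code-excalidraw-skill",
        "elements": elements,
        "appState": {"gridSize": 20, "viewBackgroundColor": "#ffffff"},
        "files": {},
    }

doc = minimal_excalidraw([{"id": "elem1", "type": "rectangle",
                           "x": 100, "y": 100, "width": 200, "height": 80}])
text = json.dumps(doc, indent=2)  # ready to write out with a .excalidraw suffix
```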
205 skills/excalidraw/references/render_excalidraw.py (new file)
@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""Render Excalidraw JSON to PNG using Playwright + headless Chromium.

Usage:
    python3 render_excalidraw.py <path-to-file.excalidraw> [--output path.png] [--scale 2] [--width 1920]

Dependencies (playwright, chromium) are provided by the Nix flake / direnv environment.
"""

from __future__ import annotations

import argparse
import json
import sys
from pathlib import Path


def validate_excalidraw(data: dict) -> list[str]:
    """Validate Excalidraw JSON structure. Returns list of errors (empty = valid)."""
    errors: list[str] = []

    if data.get("type") != "excalidraw":
        errors.append(f"Expected type 'excalidraw', got '{data.get('type')}'")

    if "elements" not in data:
        errors.append("Missing 'elements' array")
    elif not isinstance(data["elements"], list):
        errors.append("'elements' must be an array")
    elif len(data["elements"]) == 0:
        errors.append("'elements' array is empty — nothing to render")

    return errors


def compute_bounding_box(elements: list[dict]) -> tuple[float, float, float, float]:
    """Compute bounding box (min_x, min_y, max_x, max_y) across all elements."""
    min_x = float("inf")
    min_y = float("inf")
    max_x = float("-inf")
    max_y = float("-inf")

    for el in elements:
        if el.get("isDeleted"):
            continue
        x = el.get("x", 0)
        y = el.get("y", 0)
        w = el.get("width", 0)
        h = el.get("height", 0)

        # For arrows/lines, the points array defines the shape relative to x,y
        if el.get("type") in ("arrow", "line") and "points" in el:
            for px, py in el["points"]:
                min_x = min(min_x, x + px)
                min_y = min(min_y, y + py)
                max_x = max(max_x, x + px)
                max_y = max(max_y, y + py)
        else:
            min_x = min(min_x, x)
            min_y = min(min_y, y)
            max_x = max(max_x, x + abs(w))
            max_y = max(max_y, y + abs(h))

    if min_x == float("inf"):
        return (0, 0, 800, 600)

    return (min_x, min_y, max_x, max_y)


def render(
    excalidraw_path: Path,
    output_path: Path | None = None,
    scale: int = 2,
    max_width: int = 1920,
) -> Path:
    """Render an .excalidraw file to PNG. Returns the output PNG path."""
    # Import playwright here so validation errors show before import errors
    try:
        from playwright.sync_api import sync_playwright
    except ImportError:
        print("ERROR: playwright not installed.", file=sys.stderr)
        print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
        sys.exit(1)

    # Read and validate
    raw = excalidraw_path.read_text(encoding="utf-8")
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        print(f"ERROR: Invalid JSON in {excalidraw_path}: {e}", file=sys.stderr)
        sys.exit(1)

    errors = validate_excalidraw(data)
    if errors:
        print("ERROR: Invalid Excalidraw file:", file=sys.stderr)
        for err in errors:
            print(f"  - {err}", file=sys.stderr)
        sys.exit(1)

    # Compute viewport size from element bounding box
    elements = [e for e in data["elements"] if not e.get("isDeleted")]
    min_x, min_y, max_x, max_y = compute_bounding_box(elements)
    padding = 80
    diagram_w = max_x - min_x + padding * 2
    diagram_h = max_y - min_y + padding * 2

    # Cap viewport width, let height be natural
    vp_width = min(int(diagram_w), max_width)
    vp_height = max(int(diagram_h), 600)

    # Output path
    if output_path is None:
        output_path = excalidraw_path.with_suffix(".png")

    # Template path (same directory as this script)
    template_path = Path(__file__).parent / "render_template.html"
    if not template_path.exists():
        print(f"ERROR: Template not found at {template_path}", file=sys.stderr)
        sys.exit(1)

    template_url = template_path.as_uri()

    with sync_playwright() as p:
        try:
            browser = p.chromium.launch(headless=True)
        except Exception as e:
            if "Executable doesn't exist" in str(e) or "browserType.launch" in str(e):
                print("ERROR: Chromium not installed for Playwright.", file=sys.stderr)
|
||||
print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
raise
|
||||
|
||||
page = browser.new_page(
|
||||
viewport={"width": vp_width, "height": vp_height},
|
||||
device_scale_factor=scale,
|
||||
)
|
||||
|
||||
# Load the template
|
||||
page.goto(template_url)
|
||||
|
||||
# Wait for the ES module to load (imports from esm.sh)
|
||||
page.wait_for_function("window.__moduleReady === true", timeout=30000)
|
||||
|
||||
# Inject the diagram data and render
|
||||
json_str = json.dumps(data)
|
||||
result = page.evaluate(f"window.renderDiagram({json_str})")
|
||||
|
||||
if not result or not result.get("success"):
|
||||
error_msg = (
|
||||
result.get("error", "Unknown render error")
|
||||
if result
|
||||
else "renderDiagram returned null"
|
||||
)
|
||||
print(f"ERROR: Render failed: {error_msg}", file=sys.stderr)
|
||||
browser.close()
|
||||
sys.exit(1)
|
||||
|
||||
# Wait for render completion signal
|
||||
page.wait_for_function("window.__renderComplete === true", timeout=15000)
|
||||
|
||||
# Screenshot the SVG element
|
||||
svg_el = page.query_selector("#root svg")
|
||||
if svg_el is None:
|
||||
print("ERROR: No SVG element found after render.", file=sys.stderr)
|
||||
browser.close()
|
||||
sys.exit(1)
|
||||
|
||||
svg_el.screenshot(path=str(output_path))
|
||||
browser.close()
|
||||
|
||||
return output_path
|
||||
|
||||
|
||||
def main() -> None:
|
||||
"""Entry point for rendering Excalidraw JSON files to PNG."""
|
||||
parser = argparse.ArgumentParser(description="Render Excalidraw JSON to PNG")
|
||||
parser.add_argument("input", type=Path, help="Path to .excalidraw JSON file")
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
"-o",
|
||||
type=Path,
|
||||
default=None,
|
||||
help="Output PNG path (default: same name with .png)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--scale", "-s", type=int, default=2, help="Device scale factor (default: 2)"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--width",
|
||||
"-w",
|
||||
type=int,
|
||||
default=1920,
|
||||
help="Max viewport width (default: 1920)",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
if not args.input.exists():
|
||||
print(f"ERROR: File not found: {args.input}", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
png_path = render(args.input, args.output, args.scale, args.width)
|
||||
print(str(png_path))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
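The bounding-box logic above can be sanity-checked standalone. The sketch below mirrors `compute_bounding_box` (arrows/lines extend the box via their relative `points`; other shapes use `width`/`height`); the element dicts are minimal illustrations, not the full Excalidraw schema:

```python
def bbox(elements):
    # Mirrors compute_bounding_box: arrows/lines extend the box via their
    # points (relative to x, y); other shapes use width/height.
    min_x = min_y = float("inf")
    max_x = max_y = float("-inf")
    for el in elements:
        if el.get("isDeleted"):
            continue
        x, y = el.get("x", 0), el.get("y", 0)
        if el.get("type") in ("arrow", "line") and "points" in el:
            for px, py in el["points"]:
                min_x, min_y = min(min_x, x + px), min(min_y, y + py)
                max_x, max_y = max(max_x, x + px), max(max_y, y + py)
        else:
            min_x, min_y = min(min_x, x), min(min_y, y)
            max_x = max(max_x, x + abs(el.get("width", 0)))
            max_y = max(max_y, y + abs(el.get("height", 0)))
    # Empty input falls back to a default 800x600 canvas
    return (0, 0, 800, 600) if min_x == float("inf") else (min_x, min_y, max_x, max_y)

box = {"type": "rectangle", "x": 100, "y": 100, "width": 200, "height": 80}
arrow = {"type": "arrow", "x": 300, "y": 140, "points": [[0, 0], [120, 0]]}
print(bbox([box, arrow]))  # (100, 100, 420, 180)
```

Note that the arrow's 120px horizontal run extends `max_x` to 420 even though the rectangle ends at 300, which is exactly why the script sizes the viewport from the combined box rather than from shapes alone.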
57
skills/excalidraw/references/render_template.html
Normal file
@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body { background: #ffffff; overflow: hidden; }
    #root { display: inline-block; }
    #root svg { display: block; }
  </style>
</head>
<body>
  <div id="root"></div>

  <script type="module">
    import { exportToSvg } from "https://esm.sh/@excalidraw/excalidraw?bundle";

    window.renderDiagram = async function(jsonData) {
      try {
        const data = typeof jsonData === "string" ? JSON.parse(jsonData) : jsonData;
        const elements = data.elements || [];
        const appState = data.appState || {};
        const files = data.files || {};

        // Force white background in appState
        appState.viewBackgroundColor = appState.viewBackgroundColor || "#ffffff";
        appState.exportWithDarkMode = false;

        const svg = await exportToSvg({
          elements: elements,
          appState: {
            ...appState,
            exportBackground: true,
          },
          files: files,
        });

        // Clear any previous render
        const root = document.getElementById("root");
        root.innerHTML = "";
        root.appendChild(svg);

        window.__renderComplete = true;
        window.__renderError = null;
        return { success: true, width: svg.getAttribute("width"), height: svg.getAttribute("height") };
      } catch (err) {
        window.__renderComplete = true;
        window.__renderError = err.message;
        return { success: false, error: err.message };
      }
    };

    // Signal that the module is loaded and ready
    window.__moduleReady = true;
  </script>
</body>
</html>
@@ -1,182 +0,0 @@
# Validation Reference

Checklists, validation algorithms, and common bug fixes.

---

## Pre-Flight Validation Algorithm

Run BEFORE writing the file:

```
FUNCTION validateDiagram(elements):
  errors = []

  // 1. Validate shape-text bindings
  FOR each shape IN elements WHERE shape.boundElements != null:
    FOR each binding IN shape.boundElements:
      textElement = findById(elements, binding.id)
      IF textElement == null:
        errors.append("Shape {shape.id} references missing text {binding.id}")
      ELSE IF textElement.containerId != shape.id:
        errors.append("Text containerId doesn't match shape")

  // 2. Validate arrow connections
  FOR each arrow IN elements WHERE arrow.type == "arrow":
    sourceShape = findShapeNear(elements, arrow.x, arrow.y)
    IF sourceShape == null:
      errors.append("Arrow {arrow.id} doesn't start from shape edge")

    finalPoint = arrow.points[arrow.points.length - 1]
    endX = arrow.x + finalPoint[0]
    endY = arrow.y + finalPoint[1]
    targetShape = findShapeNear(elements, endX, endY)
    IF targetShape == null:
      errors.append("Arrow {arrow.id} doesn't end at shape edge")

    IF arrow.points.length > 2:
      IF arrow.elbowed != true:
        errors.append("Arrow {arrow.id} missing elbowed:true")
      IF arrow.roundness != null:
        errors.append("Arrow {arrow.id} should have roundness:null")

  // 3. Validate unique IDs
  ids = [el.id for el in elements]
  duplicates = findDuplicates(ids)
  IF duplicates.length > 0:
    errors.append("Duplicate IDs: {duplicates}")

  // 4. Validate bounding boxes
  FOR each arrow IN elements WHERE arrow.type == "arrow":
    maxX = max(abs(p[0]) for p in arrow.points)
    maxY = max(abs(p[1]) for p in arrow.points)
    IF arrow.width < maxX OR arrow.height < maxY:
      errors.append("Arrow {arrow.id} bounding box too small")

  RETURN errors

FUNCTION findShapeNear(elements, x, y, tolerance=15):
  FOR each shape IN elements WHERE shape.type IN ["rectangle", "ellipse"]:
    edges = [
      (shape.x + shape.width/2, shape.y),                 // top
      (shape.x + shape.width/2, shape.y + shape.height),  // bottom
      (shape.x, shape.y + shape.height/2),                // left
      (shape.x + shape.width, shape.y + shape.height/2)   // right
    ]
    FOR each edge IN edges:
      IF abs(edge.x - x) < tolerance AND abs(edge.y - y) < tolerance:
        RETURN shape
  RETURN null
```
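The `findShapeNear` pseudocode above translates almost line-for-line to Python. A minimal sketch (the shape dicts are illustrative, not the full Excalidraw schema):

```python
def find_shape_near(elements, x, y, tolerance=15):
    """Return the first rectangle/ellipse whose edge midpoint is within tolerance of (x, y)."""
    for shape in elements:
        if shape.get("type") not in ("rectangle", "ellipse"):
            continue
        sx, sy = shape["x"], shape["y"]
        w, h = shape["width"], shape["height"]
        # Edge midpoints: top, bottom, left, right
        edges = [
            (sx + w / 2, sy),
            (sx + w / 2, sy + h),
            (sx, sy + h / 2),
            (sx + w, sy + h / 2),
        ]
        if any(abs(ex - x) < tolerance and abs(ey - y) < tolerance for ex, ey in edges):
            return shape
    return None

box = {"type": "rectangle", "x": 100, "y": 100, "width": 200, "height": 80}
print(find_shape_near([box], 200, 180) is box)  # True: bottom edge midpoint is (200, 180)
```

The 15px tolerance matches the checklist items below ("arrows start/end within 15px of shape edge").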
---

## Checklists

### Before Generating

- [ ] Identified all components from codebase
- [ ] Mapped all connections/data flows
- [ ] Chose layout pattern (vertical, horizontal, hub-and-spoke)
- [ ] Selected color palette (default, AWS, Azure, K8s)
- [ ] Planned grid positions
- [ ] Created ID naming scheme

### During Generation

- [ ] Every labeled shape has BOTH shape AND text elements
- [ ] Shape has `boundElements: [{ "type": "text", "id": "{id}-text" }]`
- [ ] Text has `containerId: "{shape-id}"`
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`, `roughness: 0`
- [ ] Arrows have `startBinding` and `endBinding`
- [ ] No diamond shapes used
- [ ] Applied staggering formula for multiple arrows

### Arrow Validation (Every Arrow)

- [ ] Arrow `x,y` calculated from shape edge
- [ ] Final point offset = `targetEdge - sourceEdge`
- [ ] Arrow `width` = `max(abs(point[0]))`
- [ ] Arrow `height` = `max(abs(point[1]))`
- [ ] U-turn arrows have 40-60px clearance

### After Generation

- [ ] All `boundElements` IDs reference valid text elements
- [ ] All `containerId` values reference valid shapes
- [ ] All arrows start within 15px of shape edge
- [ ] All arrows end within 15px of shape edge
- [ ] No duplicate IDs
- [ ] Arrow bounding boxes match points
- [ ] File is valid JSON

---

## Common Bugs and Fixes

### Bug: Arrow appears disconnected/floating

**Cause**: Arrow `x,y` not calculated from shape edge.

**Fix**:
```
Rectangle bottom: arrow_x = shape.x + shape.width/2
                  arrow_y = shape.y + shape.height
```

### Bug: Arrow endpoint doesn't reach target

**Cause**: Final point offset calculated incorrectly.

**Fix**:
```
target_edge = (target.x + target.width/2, target.y)
offset_x = target_edge.x - arrow.x
offset_y = target_edge.y - arrow.y
Final point = [offset_x, offset_y]
```

### Bug: Multiple arrows from same source overlap

**Cause**: All arrows start from identical `x,y`.

**Fix**: Stagger start positions:
```
For 5 arrows from bottom edge:
arrow1.x = shape.x + shape.width * 0.2
arrow2.x = shape.x + shape.width * 0.35
arrow3.x = shape.x + shape.width * 0.5
arrow4.x = shape.x + shape.width * 0.65
arrow5.x = shape.x + shape.width * 0.8
```

### Bug: Callback arrow doesn't loop correctly

**Cause**: U-turn path lacks clearance.

**Fix**: Use 4-point path:
```
Points = [[0, 0], [clearance, 0], [clearance, -vert], [final_x, -vert]]
clearance = 40-60px
```

### Bug: Labels don't appear inside shapes

**Cause**: Using `label` property instead of separate text element.

**Fix**: Create TWO elements:
1. Shape with `boundElements` referencing text
2. Text with `containerId` referencing shape

### Bug: Arrows are curved, not 90-degree

**Cause**: Missing elbow properties.

**Fix**: Add all three:
```json
{
  "roughness": 0,
  "roundness": null,
  "elbowed": true
}
```
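The staggering fix generalizes to any arrow count: spread the start points evenly across the middle 60% of the edge (the 0.2 to 0.8 band used in the 5-arrow example). A minimal sketch:

```python
def stagger_positions(shape_x, shape_width, n):
    """Evenly space n arrow start x-coordinates across the 0.2-0.8 band of an edge."""
    if n == 1:
        return [shape_x + shape_width * 0.5]  # single arrow: edge midpoint
    step = 0.6 / (n - 1)  # fractional spacing between consecutive arrows
    return [shape_x + shape_width * (0.2 + i * step) for i in range(n)]
```

For `n=5` on a 100px-wide shape at x=0 this reproduces the fractions in the fix above: 0.2, 0.35, 0.5, 0.65, 0.8.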
@@ -1,75 +0,0 @@
---
name: memory
description: "Persistent memory system for Opencode agents. SQLite-based hybrid search over Obsidian vault. Use when: (1) storing user preferences/decisions, (2) recalling past context, (3) searching knowledge base. Triggers: remember, recall, memory, store, preference."
compatibility: opencode
---

## Overview

opencode-memory is a SQLite-based hybrid memory system for Opencode agents. It indexes markdown files from your Obsidian vault (`~/CODEX/80-memory/`) and session transcripts, providing fast hybrid search (vector + keyword BM25).

## Architecture

- **Source of truth**: Markdown files at `~/CODEX/80-memory/`
- **Derived index**: SQLite at `~/.local/share/opencode-memory/index.db`
- **Hybrid search**: FTS5 (BM25) + vec0 (vector similarity)
- **Embeddings**: OpenAI text-embedding-3-small (1536 dimensions)

## Available Tools

### memory_search
Hybrid search over all indexed content (vault + sessions).

```
memory_search(query, maxResults?, source?)
```

- `query`: Search query (natural language)
- `maxResults`: Max results (default 6)
- `source`: Filter by "memory", "sessions", or "all"

### memory_store
Store a new memory as a markdown file in the vault.

```
memory_store(content, title?, category?)
```

- `content`: Memory content to store
- `title`: Optional title (slugified for the filename)
- `category`: "preferences", "facts", "decisions", "entities", "other"

### memory_get
Read a specific file (or line range) from the vault.

```
memory_get(filePath, startLine?, endLine?)
```

## Auto-Behaviors

- **Auto-recall**: On session.created, relevant memories are searched and injected
- **Auto-capture**: On session.idle, preferences/decisions are extracted and stored
- **Token budget**: Max 2000 tokens injected to respect context limits

## Workflows

### Recall information
Before answering about past work, preferences, or decisions:
1. Call `memory_search` with a relevant query
2. Use `memory_get` to retrieve full context if needed

### Store new information
When the user expresses a preference or decision:
1. Call `memory_store` with content and category

## Vault Structure

```
~/CODEX/80-memory/
├── preferences/   # User preferences
├── facts/         # Factual knowledge
├── decisions/     # Design decisions
├── entities/      # People, projects, concepts
└── other/         # Uncategorized memories
```
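Hybrid search means merging two ranked lists: FTS5's BM25 results and vec0's nearest-neighbor results. The plugin's actual merge strategy isn't shown here; reciprocal rank fusion is one common way to combine such lists, sketched below for illustration:

```python
def rrf_merge(bm25_ids, vector_ids, k=60, max_results=6):
    """Merge two ranked ID lists with reciprocal rank fusion: score = sum of 1/(k + rank)."""
    scores = {}
    for ranked in (bm25_ids, vector_ids):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Documents appearing high in BOTH lists accumulate the largest scores
    return sorted(scores, key=scores.get, reverse=True)[:max_results]

print(rrf_merge(["a", "b", "c"], ["b", "d"]))  # "b" ranks first: it appears in both lists
```

The `k=60` constant is the conventional RRF damping factor; `max_results=6` mirrors the `memory_search` default above.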
@@ -1,54 +0,0 @@
# opencode-memory Deployment Guide

## Installation

### Option 1: Nix (Recommended)

Add to your Nix flake:

```nix
inputs.opencode-memory = {
  url = "git+https://code.m3ta.dev/m3tam3re/opencode-memory";
  flake = false;
};
```

### Option 2: npm

```bash
npm install -g @m3tam3re/opencode-memory
```

## Configuration

Add to `~/.config/opencode/opencode.json`:

```json
{
  "plugins": [
    "opencode-memory"
  ]
}
```

## Environment Variables

- `OPENAI_API_KEY`: Required for embeddings

## Vault Location

Default: `~/CODEX/80-memory/`

Override in the plugin config if needed.

## Rebuild Index

```bash
bun run src/cli.ts --rebuild
```

## Verification

1. Start Opencode
2. Call `memory_search` with any query
3. Verify there are no errors in the logs
@@ -1,109 +0,0 @@
# Obsidian MCP Server Configuration

## Overview

This document describes how to configure the [cyanheads/obsidian-mcp-server](https://github.com/cyanheads/obsidian-mcp-server) for use with Opencode. This MCP server enables AI agents to interact with the Obsidian vault via the Local REST API plugin.

## Prerequisites

1. **Obsidian Desktop App** - Must be running
2. **Local REST API Plugin** - Installed and enabled in Obsidian
3. **API Key** - Generated from plugin settings

## Environment Variables

| Variable | Description | Default | Required |
|----------|-------------|---------|----------|
| `OBSIDIAN_API_KEY` | API key from Local REST API plugin | - | Yes |
| `OBSIDIAN_BASE_URL` | Base URL for REST API | `http://127.0.0.1:27123` | No |
| `OBSIDIAN_VERIFY_SSL` | Verify SSL certificates | `false` | No |
| `OBSIDIAN_ENABLE_CACHE` | Enable vault caching | `true` | No |

## opencode.json Configuration

Add this to `programs.opencode.settings.mcp` in your Nix home-manager config:

```json
"Obsidian-Vault": {
  "command": ["npx", "obsidian-mcp-server"],
  "environment": {
    "OBSIDIAN_API_KEY": "<your-api-key>",
    "OBSIDIAN_BASE_URL": "http://127.0.0.1:27123",
    "OBSIDIAN_VERIFY_SSL": "false",
    "OBSIDIAN_ENABLE_CACHE": "true"
  },
  "enabled": true,
  "type": "local"
}
```

**Note**: Replace `<your-api-key>` with the key from Obsidian Settings → Local REST API.

## Nix Home-Manager Integration

In your NixOS/home-manager configuration:

```nix
programs.opencode.settings.mcp = {
  # ... other MCP servers ...

  "Obsidian-Vault" = {
    command = ["npx" "obsidian-mcp-server"];
    environment = {
      OBSIDIAN_API_KEY = "<your-api-key>";
      OBSIDIAN_BASE_URL = "http://127.0.0.1:27123";
      OBSIDIAN_VERIFY_SSL = "false";
      OBSIDIAN_ENABLE_CACHE = "true";
    };
    enabled = true;
    type = "local";
  };
};
```

After updating, run:
```bash
home-manager switch
```

## Getting the API Key

1. Open Obsidian Settings
2. Navigate to Community Plugins → Local REST API
3. Copy the API key shown in settings
4. Paste it into your configuration

## Available MCP Tools

Once configured, these tools are available:

| Tool | Description |
|------|-------------|
| `obsidian_read_note` | Read a note's content |
| `obsidian_update_note` | Create or update a note |
| `obsidian_global_search` | Search the entire vault |
| `obsidian_manage_frontmatter` | Get/set frontmatter fields |
| `obsidian_manage_tags` | Add/remove tags |
| `obsidian_list_notes` | List notes in a folder |
| `obsidian_delete_note` | Delete a note |
| `obsidian_search_replace` | Search and replace in a note |

## Troubleshooting

### Server not responding
- Ensure the Obsidian desktop app is running
- Check that the Local REST API plugin is enabled
- Verify the API key matches

### Connection refused
- Check the base URL (default: `http://127.0.0.1:27123`)
- Some setups use port 27124 - check the plugin settings

### npx not found
- Ensure Node.js is installed; npx ships with npm 5.2+
- Update npm if npx is still missing

## References

- [cyanheads/obsidian-mcp-server GitHub](https://github.com/cyanheads/obsidian-mcp-server)
- [Obsidian Local REST API Plugin](https://github.com/czottmann/obsidian-local-rest-api)
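A quick way to debug "server not responding" is to hit the REST API base URL directly with the same key the MCP server will use. The helper below only builds the request (the `Authorization: Bearer` header scheme follows the plugin's convention; send it with `urllib.request.urlopen` while Obsidian is running):

```python
from urllib.request import Request

def build_health_request(base_url="http://127.0.0.1:27123", api_key="<your-api-key>"):
    """Build a GET request against the Local REST API base URL as a connectivity check."""
    # A 2xx response from this URL confirms the plugin is reachable
    # before wiring up the MCP server configuration.
    return Request(base_url + "/", headers={"Authorization": f"Bearer {api_key}"})

req = build_health_request(api_key="abc123")
print(req.full_url)  # http://127.0.0.1:27123/
```

If this request is refused, fix the base URL or port before touching the Nix/opencode configuration; the MCP server cannot work without it.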
@@ -1,108 +0,0 @@
---
name: msteams
description: "Microsoft Teams Graph API integration for team communication. Use when: (1) Managing teams and channels, (2) Sending/receiving channel messages, (3) Scheduling or managing meetings, (4) Handling chat conversations. Triggers: 'Teams', 'meeting', 'channel', 'team message', 'chat', 'Teams message'."
compatibility: opencode
---

# Microsoft Teams Integration

Microsoft Teams Graph API integration for managing team communication, channels, messages, meetings, and chat conversations via MCP tools.

## Core Capabilities

### Teams & Channels
- **List joined teams**: Retrieve all teams the user is a member of
- **Manage channels**: Create, list, and manage channels within teams
- **Team membership**: Add, remove, and update team members

### Channel Messages
- **Send messages**: Post messages to channels with rich text support
- **Retrieve messages**: List channel messages, filtered by date range
- **Message management**: Read and respond to channel communications

### Online Meetings
- **Schedule meetings**: Create online meetings with participants
- **Manage meetings**: Update meeting details and coordinates
- **Meeting access**: Retrieve join links and meeting information
- **Presence**: Check user presence and activity status

### Chat
- **Direct messages**: 1:1 chat conversations with users
- **Group chats**: Multi-person chat conversations
- **Chat messages**: Send and receive chat messages

## Common Workflows

### Send Channel Message

1. Identify the target team and channel
2. Compose the message content
3. Use the MCP tool to send the message to the channel

Example:
```
"Post a message to the 'General' channel in 'Engineering' team about the deployment status"
```

### Schedule Meeting

1. Determine meeting participants
2. Set meeting time and duration
3. Create meeting title and description
4. Use the MCP tool to create the online meeting

Example:
```
"Schedule a meeting with @alice and @bob for Friday 2pm to discuss the project roadmap"
```

### List Channel Messages

1. Specify the team and channel
2. Define a date range (required for polling)
3. Retrieve and display the messages

Example:
```
"Show me all messages in #general from the last week"
```

### Send Direct Message

1. Identify the recipient
2. Compose the message
3. Use the MCP chat tool to send it

Example:
```
"Send a message to @john asking if the PR review is complete"
```

## MCP Tool Categories

The MS Teams MCP server provides tool categories for:

- **Channels**: Team and channel management operations
- **Messages**: Channel message operations
- **Meetings**: Online meeting scheduling and management
- **Chat**: Direct and group chat operations

## Important Constraints

**Authentication**: Do NOT include Graph API authentication flows. The MCP server handles authentication configuration.

**Polling limits**: When retrieving messages, always specify a date range. Polling the same resource more than once per day violates the Microsoft APIs Terms of Use.

**Email overlap**: Do NOT overlap with Outlook email functionality. This skill focuses on Teams-specific communication (channels, chat, meetings), not email operations.

**File storage**: Files in channels are stored in SharePoint. Use SharePoint-specific operations for file management.

## Domain Boundaries

This skill integrates with **Hermes** (work communication agent). Hermes loads this skill when the user requests:
- Teams-related operations
- Meeting scheduling or management
- Channel communication
- Teams chat conversations

For email operations, Hermes uses the **outlook** skill instead.
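Since message polling requires a date range, the retrieval step typically reduces a request like "the last week" to an OData-style `$filter` window on `createdDateTime`. A minimal sketch (the property name follows Graph's message schema, but verify against the parameters your MCP server actually exposes):

```python
from datetime import datetime, timedelta, timezone

def last_week_filter(now=None):
    """Build a Graph-style $filter clause covering the last 7 days in UTC."""
    now = now or datetime.now(timezone.utc)
    start = now - timedelta(days=7)
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # ISO 8601 with explicit Z suffix
    return f"createdDateTime ge {start.strftime(fmt)} and createdDateTime le {now.strftime(fmt)}"
```

Passing an explicit window like this keeps each retrieval bounded, which is what the once-per-day polling constraint above is protecting.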
@@ -1,231 +0,0 @@
|
||||
---
|
||||
name: outlook
|
||||
description: "Outlook Graph API integration for email, calendar, and contact management. Use when: (1) Reading or sending emails, (2) Managing inbox and folders, (3) Working with calendar events and appointments, (4) Managing contacts, (5) Organizing email messages. Triggers: 'email', 'Outlook', 'inbox', 'calendar', 'contact', 'message', 'folder', 'appointment', 'meeting'."
|
||||
compatibility: opencode
|
||||
---
|
||||
|
||||
# Outlook
|
||||
|
||||
Outlook Graph API integration for mail, calendar, and contact management via MCP. Enables comprehensive email workflows, calendar coordination, and contact organization.
|
||||
|
||||
## Overview
|
||||
|
||||
Outlook is Microsoft 365's messaging and communication hub. This skill provides access to:
|
||||
|
||||
- **Mail**: Send, receive, search, organize, and categorize messages
|
||||
- **Calendar**: Create and manage events, appointments, and meetings
|
||||
- **Contacts**: Manage contact information and relationships
|
||||
- **Folders**: Organize messages into custom folder structures
|
||||
|
||||
## Core Workflows
|
||||
|
||||
### 1. Reading Emails
|
||||
|
||||
Retrieve messages from mailbox:
|
||||
|
||||
```
|
||||
User: "Check my inbox for new messages from John"
|
||||
|
||||
AI: Use Outlook MCP to:
|
||||
1. List messages in Inbox folder
|
||||
2. Filter by sender = "John"
|
||||
3. Display summary (subject, date, importance)
|
||||
```
|
||||
|
||||
**Use search and sort** to efficiently find messages:
|
||||
- Filter by folder (Inbox, Sent, Drafts, etc.)
|
||||
- Sort by date, importance, or subject
|
||||
- Search by sender, subject, or content
|
||||
|
||||
### 2. Sending Emails
|
||||
|
||||
Compose and send new messages:
|
||||
|
||||
```
|
||||
User: "Send an email to sarah@company.com about the project update"
|
||||
|
||||
AI: Use Outlook MCP to:
|
||||
1. Create message draft with recipient and subject
|
||||
2. Add body content
|
||||
3. Send the message
|
||||
```
|
||||
|
||||
**Key options**:
|
||||
- Add attachments
|
||||
- Set importance (low, normal, high)
|
||||
- Add recipients (to, cc, bcc)
|
||||
- Use categories for organization
|
||||
|
||||
### 3. Replying and Forwarding
|
||||
|
||||
Respond to existing messages:
|
||||
|
||||
```
|
||||
User: "Reply to the last email from Mike with thanks"
|
||||
|
||||
AI: Use Outlook MCP to:
|
||||
1. Retrieve the most recent message from Mike
|
||||
2. Create reply with appropriate body
|
||||
3. Send the reply
|
||||
```
|
||||
|
||||
**Supports**:
|
||||
- Reply (to sender)
|
||||
- Reply All (to all recipients)
|
||||
- Forward (to new recipients)
|
||||
|
||||
### 4. Managing Folders
|
||||
|
||||
Organize messages into folder structure:
|
||||
|
||||
```
|
||||
User: "Move all emails from vendor@supplier.com to Vendors folder"
|
||||
|
||||
AI: Use Outlook MCP to:
|
||||
1. Search for messages from vendor@supplier.com
|
||||
2. Move matching messages to Vendors folder
|
||||
3. Confirm move operation
|
||||
```
|
||||
|
||||
**Folder operations**:
|
||||
- List folders
|
||||
- Create new folders
|
||||
- Move messages between folders
|
||||
- Delete folders (careful!)
|
||||
|
||||
### 5. Calendar Events
|
||||
|
||||
Manage calendar and schedule meetings:
|
||||
|
||||
```
|
||||
User: "Schedule a team meeting for Friday at 2pm"
|
||||
|
||||
AI: Use Outlook MCP to:
|
||||
1. Create calendar event
|
||||
2. Set date/time (Friday 2:00 PM)
|
||||
3. Add attendees
|
||||
4. Send meeting invitation
|
||||
```
|
||||
|
||||
**Event details**:
|
||||
- Subject and location
|
||||
- Start/end times
|
||||
- Attendees and required/optional status
|
||||
- Meeting body/agenda
|
||||
- Recurrence (for recurring meetings)
|
||||
|
||||
### 6. Managing Contacts
|
||||
|
||||
Create and maintain contact information:
|
||||
|
||||
```
|
||||
User: "Add Jane Smith to contacts: jane@company.com, 555-1234"
|
||||
|
||||
AI: Use Outlook MCP to:
|
||||
1. Create contact object
|
||||
2. Add email address and phone number
|
||||
3. Save to contacts
|
||||
```
|
||||
|
||||
**Contact fields**:
|
||||
- Name (first, last, display name)
|
||||
- Email addresses (multiple)
|
||||
- Phone numbers (multiple types)
|
||||
- Company, job title
|
||||
- Notes
|
||||
|
||||
## Advanced Features
|
||||
|
||||
### Message Organization
|
||||
|
||||
**Categories**: Tag messages with color-coded categories for enhanced organization
|
||||
```
|
||||
User: "Tag these project emails as 'Urgent' category"
|
||||
|
||||
AI: Use Outlook MCP to:
|
||||
1. Retrieve specified messages
|
||||
2. Assign category (e.g., "Urgent")
|
||||
3. Confirm categorization
|
||||
```
|
||||
|
||||
**Importance**: Mark messages as high, normal, or low importance
|
||||
```
|
||||
User: "Mark this message as high priority"
|
||||
|
||||
AI: Use Outlook MCP to update message importance flag
|
||||
```
|
||||
|
||||
**Search**: Find messages by sender, subject, content, or date range
|
||||
```
|
||||
User: "Find all emails about Q4 budget from October"
|
||||
|
||||
AI: Use Outlook MCP to search with filters:
|
||||
- Subject contains "budget"
|
||||
- Date range: October
|
||||
- Optionally filter by sender
|
||||
```
|
||||
|
||||
### Email Intelligence
|
||||
|
||||
**Focused Inbox**: Access messages categorized as focused vs other
|
||||
**Mail Tips**: Check recipient status before sending (auto-reply, full mailbox)
|
||||
**MIME Support**: Handle email in MIME format for interoperability
|
||||
|
||||
## Integration with Other Skills
|
||||
|
||||
This skill focuses on Outlook-specific operations. For related functionality:
|
||||
|
||||
| Need | Skill | When to Use |
|
||||
|------|-------|-------------|
|
||||
| **Team project updates** | basecamp | "Update the Basecamp todo" |
|
||||
| **Team channel messages** | msteams | "Post this in the Teams channel" |
|
||||
| **Private notes about emails** | obsidian | "Save this to Obsidian" |
|
||||
| **Drafting long-form emails** | calliope | "Help me write a professional email" |
|
||||
| **Short quick messages** | hermes (this skill) | "Send a quick update" |
|
||||
|
||||
## Common Patterns
|
||||
|
||||
### Email Triage Workflow
|
||||
|
||||
1. **Scan inbox**: List messages sorted by date
|
||||
2. **Categorize**: Assign categories based on content/urgency
|
||||
3. **Action**: Reply, forward, or move to appropriate folder
|
||||
4. **Track**: Flag for follow-up if needed
|
||||
|
||||
### Meeting Coordination
|
||||
|
||||
1. **Check availability**: Query calendar for conflicts
|
||||
2. **Propose time**: Suggest multiple time options
|
||||
3. **Create event**: Set up meeting with attendees
|
||||
4. **Follow up**: Send reminder or agenda
|
||||
|
||||
### Project Communication
|
||||
|
||||
1. **Search thread**: Find all messages related to project
|
||||
2. **Organize**: Move to project folder
|
||||
3. **Categorize**: Tag with project category
|
||||
4. **Summarize**: Extract key points if needed

## Quality Standards

- **Accurate recipient addressing**: Verify email addresses before sending
- **Clear subject lines**: Ensure subjects accurately reflect content
- **Appropriate categorization**: Use categories consistently
- **Folder hygiene**: Maintain an organized folder structure
- **Respect privacy**: Do not share sensitive content indiscriminately

## Edge Cases

**Multiple mailboxes**: This skill supports primary and shared mailboxes, not archive mailboxes
**Large attachments**: Use appropriate attachment handling for large files
**Meeting conflicts**: Check calendar availability before scheduling
**Email limits**: Respect rate limits and sending quotas
**Deleted items**: Use caution with delete operations (consider archiving instead)

## Boundaries

- **Do NOT handle Teams-specific messaging** (msteams's domain)
- **Do NOT handle Basecamp communication** (basecamp's domain)
- **Do NOT manage wiki documentation** (Athena's domain)
- **Do NOT access private Obsidian vaults** (Apollo's domain)
- **Do NOT write creative email content** (delegate to calliope for drafts)

@@ -79,6 +79,7 @@ Executable code (Python/Bash/etc.) for tasks that require deterministic reliabil

- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token-efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Opencode for patching or environment-specific adjustments
- **Dependencies**: Scripts with external dependencies (Python packages, system tools) require those dependencies to be registered in the repository's `flake.nix`. See Step 4 for details.

##### References (`references/`)

@@ -302,6 +303,37 @@ To begin implementation, start with the reusable resources identified above: `sc

Added scripts must be tested by actually running them, to confirm there are no bugs and that the output matches what is expected. If there are many similar scripts, testing a representative sample is enough to build confidence that they all work while keeping time to completion in check.

#### Register Dependencies in flake.nix

When scripts introduce external dependencies (Python packages or system tools), add them to the repository's `flake.nix`. Dependencies are defined once, in `pythonEnv` (Python packages) or the `paths` list (system tools) inside the `skills-runtime` buildEnv. This runtime is exported as `packages.${system}.skills-runtime` and consumed by project flakes and home-manager, ensuring opencode always has the correct environment regardless of which project it runs in.

**Python packages**: add to the `pythonEnv` block with a comment referencing the skill:

```nix
pythonEnv = pkgs.python3.withPackages (ps:
  with ps; [
    # <skill-name>: <script>.py
    <package-name>
  ]);
```

**System tools** (e.g. `poppler-utils`, `ffmpeg`, `imagemagick`): add to the `paths` list in the `skills-runtime` buildEnv:

```nix
skills-runtime = pkgs.buildEnv {
  name = "opencode-skills-runtime";
  paths = [
    pythonEnv
    # <skill-name>: needed by <script>
    pkgs.<tool-name>
  ];
};
```

**Convention**: Each entry must include a comment in the form `# <skill-name>: <reason>` so dependencies remain traceable to their originating skill.

After adding dependencies, verify they resolve: `nix develop --command python3 -c "import <package>"`
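Beyond a one-off `python3 -c` check, a stdlib-only helper (hypothetical, not part of the repository) can report every missing module at once when run inside `nix develop`:

```python
import importlib.util

def missing_modules(names):
    """Return the subset of top-level module names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Substitute the skill's real package list here; these stdlib names are
# placeholders so the sketch runs anywhere.
print(missing_modules(["json", "csv", "no_such_module"]))  # → ['no_such_module']
```

Using `find_spec` avoids actually importing each package, so a heavy dependency list can be verified quickly.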

Any example files and directories not needed for the skill should be deleted. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.

#### Update SKILL.md