docs: update AGENTS.md and README.md for rules system, remove beads
- Add rules/ directory documentation to both files
- Update skill count from 25 to 15 modules
- Remove beads references (issue tracking removed)
- Update skills list with current active skills
- Document flake.nix as proper Nix flake (not flake=false)
- Add rules system integration section
- Clean up sisyphus planning artifacts
- Remove deprecated skills (memory, msteams, outlook)
@@ -1,9 +0,0 @@
{
  "active_plan": "/home/m3tam3re/p/AI/AGENTS/.sisyphus/plans/rules-system.md",
  "started_at": "2026-02-17T17:50:08.922Z",
  "session_ids": [
    "ses_393691db2ffe4YZvieMFehJe54"
  ],
  "plan_name": "rules-system",
  "agent": "atlas"
}
@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/wsqzf0z3hg8mhpq484f24fm72qp4k6sg-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\"]}\nOPENCODE_EOF\n"}
@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/4li05383sgf4z0l6bxv8hmvgs600y56x-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\"]}\nOPENCODE_EOF\n"}
@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/languages/typescript.md",".opencode-rules/languages/nix.md",".opencode-rules/languages/shell.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/qzsdn3m85qwarpd43x8k28sja40r21p7-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\",\".opencode-rules/languages/typescript.md\",\".opencode-rules/languages/nix.md\",\".opencode-rules/languages/shell.md\"]}\nOPENCODE_EOF\n"}
@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/frameworks/n8n.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/55brjhy9m1vcgrnd100vmwf9bycjpzpi-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\",\".opencode-rules/frameworks/n8n.md\"]}\nOPENCODE_EOF\n"}
@@ -1 +0,0 @@
{"instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/custom.md"],"shellHook":"# Create/update symlink to AGENTS rules directory\nln -sfn /nix/store/r8yfirsyyii9x05qd5kfdvzcqv7sx6az-AGENTS/rules .opencode-rules\n\n# Generate opencode.json configuration file\ncat > opencode.json <<'OPENCODE_EOF'\n{\"$schema\":\"https://opencode.ai/config.json\",\"instructions\":[\".opencode-rules/concerns/coding-style.md\",\".opencode-rules/concerns/naming.md\",\".opencode-rules/concerns/documentation.md\",\".opencode-rules/concerns/testing.md\",\".opencode-rules/concerns/git-workflow.md\",\".opencode-rules/concerns/project-structure.md\",\".opencode-rules/languages/python.md\",\".opencode-rules/custom.md\"]}\nOPENCODE_EOF\n"}
@@ -1,153 +0,0 @@
# Opencode Rules Nix Module - Manual QA Results

## Test Summary
Date: 2026-02-17
Module: `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`
Test Type: Manual QA (nix eval)

---

## Scenario Results

### Scenario 1: Empty Config (Defaults Only)
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; }'`

**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 6 (default concerns only)

**Expected Instructions**:
1. `.opencode-rules/concerns/coding-style.md`
2. `.opencode-rules/concerns/naming.md`
3. `.opencode-rules/concerns/documentation.md`
4. `.opencode-rules/concerns/testing.md`
5. `.opencode-rules/concerns/git-workflow.md`
6. `.opencode-rules/concerns/project-structure.md`

---
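The scenario checks above (valid JSON, `$schema` present, `instructions` present, expected count) can be automated once the generated `opencode.json` content is in hand; a minimal sketch, assuming the config is available as a string (the helper name is illustrative, not part of the QA run):

```python
import json

def check_opencode_config(raw: str, expected_count: int) -> list[str]:
    """Validate a generated opencode.json against the QA criteria above."""
    problems = []
    try:
        cfg = json.loads(raw)
    except json.JSONDecodeError:
        return ["invalid JSON"]
    if "$schema" not in cfg:
        problems.append("missing $schema")
    instructions = cfg.get("instructions")
    if not isinstance(instructions, list):
        problems.append("missing instructions list")
    elif len(instructions) != expected_count:
        problems.append(f"expected {expected_count} instructions, got {len(instructions)}")
    return problems

# Scenario 1 shape: defaults only -> 6 concern rules, no problems reported
raw = json.dumps({
    "$schema": "https://opencode.ai/config.json",
    "instructions": [f".opencode-rules/concerns/{c}.md" for c in
                     ["coding-style", "naming", "documentation",
                      "testing", "git-workflow", "project-structure"]],
})
print(check_opencode_config(raw, 6))  # -> []
```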
### Scenario 2: Single Language (Python)
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; }'`

**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 7 (6 concerns + 1 language)

**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`

---
### Scenario 3: Multi-Language
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python" "typescript" "nix" "shell"]; }'`

**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 10 (6 concerns + 4 languages)

**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`
- `.opencode-rules/languages/typescript.md`
- `.opencode-rules/languages/nix.md`
- `.opencode-rules/languages/shell.md`

---
### Scenario 4: With Frameworks
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; frameworks = ["n8n"]; }'`

**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 8 (6 concerns + 1 language + 1 framework)

**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`
- `.opencode-rules/frameworks/n8n.md`

---
### Scenario 5: Extra Instructions
**Command**: `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; extraInstructions = [".opencode-rules/custom.md"]; }'`

**Results**:
- ✅ Valid JSON output
- ✅ Has `$schema` field in embedded opencode.json
- ✅ Has `instructions` field
- ✅ Correct instruction count: 8 (6 concerns + 1 language + 1 custom)

**Expected Instructions**:
- All 6 default concerns
- `.opencode-rules/languages/python.md`
- `.opencode-rules/custom.md`

---
## Content Quality Spot Checks

### 1. coding-style.md (Concern Rule)
**Assessment**: ✅ High Quality
- Clear critical rules with "Always/Never" directives
- Good vs. bad code examples
- Comprehensive coverage: formatting, patterns, error handling, type safety, function design, SOLID
- Well-structured sections

### 2. python.md (Language Rule)
**Assessment**: ✅ High Quality
- Modern toolchain recommendations (uv, ruff, pyright, pytest, hypothesis)
- Common idioms with practical examples
- Anti-patterns with explanations
- Project setup structure
- Clear, actionable code snippets

### 3. n8n.md (Framework Rule)
**Assessment**: ✅ High Quality
- Concise workflow design principles
- Clear naming conventions
- Error handling patterns
- Security best practices
- Actionable testing guidelines

---
## Issues Encountered

### Socket File Issue
**Issue**: `nix eval` failed with `error: file '/home/m3tam3re/p/AI/AGENTS/.beads/bd.sock' has an unsupported type`

**Workaround**: Temporarily moved the `.beads` directory outside the AGENTS tree during testing

**Root Cause**: Nix copies the `agents` path recursively into the store and encounters unsupported socket files (Unix domain sockets)

**Recommendation**: Consider adding `.beads` to `.gitignore` and excluding it from path evaluation if possible, or document this limitation for users

---
## Final Verdict

```
Scenarios [5/5 pass] | VERDICT: OKAY
```

### Summary
- All 5 test scenarios executed successfully
- All JSON outputs are valid and properly structured
- All embedded `opencode.json` configurations have the required `$schema` and `instructions` fields
- Instruction counts match expected values for each scenario
- Rule content quality is high across concern, language, and framework rules
- Shell hook properly generates the symlink and configuration file

### Notes
- Socket file issue requires a workaround (documented)
- Module correctly handles default concerns, multiple languages, frameworks, and custom instructions
- Code examples in rules are clear and actionable
@@ -1,6 +0,0 @@
=== Context Budget ===
Concerns: 751
Python: 224
Total (concerns + python): 975
Limit: 1500
RESULT: PASS (under 1500)
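The budget check is plain arithmetic over the per-category line counts reported above; a one-line sketch using those numbers:

```python
# Line counts from the context-budget report above
concerns, python_rules, limit = 751, 224, 1500
total = concerns + python_rules
print(f"RESULT: {'PASS' if total < limit else 'FAIL'} ({total} vs limit {limit})")
# -> RESULT: PASS (975 vs limit 1500)
```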
@@ -1 +0,0 @@
[".opencode-rules/concerns/coding-style.md",".opencode-rules/concerns/naming.md",".opencode-rules/concerns/documentation.md",".opencode-rules/concerns/testing.md",".opencode-rules/concerns/git-workflow.md",".opencode-rules/concerns/project-structure.md",".opencode-rules/languages/python.md",".opencode-rules/languages/typescript.md",".opencode-rules/languages/nix.md",".opencode-rules/languages/shell.md",".opencode-rules/frameworks/n8n.md"]
@@ -1,16 +0,0 @@
=== Task 17 Integration Test ===

File Line Counts:
 163 /home/m3tam3re/p/AI/AGENTS/rules/concerns/coding-style.md
 149 /home/m3tam3re/p/AI/AGENTS/rules/concerns/documentation.md
 118 /home/m3tam3re/p/AI/AGENTS/rules/concerns/git-workflow.md
 105 /home/m3tam3re/p/AI/AGENTS/rules/concerns/naming.md
  82 /home/m3tam3re/p/AI/AGENTS/rules/concerns/project-structure.md
 134 /home/m3tam3re/p/AI/AGENTS/rules/concerns/testing.md
 129 /home/m3tam3re/p/AI/AGENTS/rules/languages/nix.md
 224 /home/m3tam3re/p/AI/AGENTS/rules/languages/python.md
 100 /home/m3tam3re/p/AI/AGENTS/rules/languages/shell.md
 150 /home/m3tam3re/p/AI/AGENTS/rules/languages/typescript.md
  42 /home/m3tam3re/p/AI/AGENTS/rules/frameworks/n8n.md
1396 total
RESULT: All 11 files present
@@ -1,13 +0,0 @@
=== Path Resolution Check ===
OK: rules/concerns/coding-style.md exists
OK: rules/concerns/naming.md exists
OK: rules/concerns/documentation.md exists
OK: rules/concerns/testing.md exists
OK: rules/concerns/git-workflow.md exists
OK: rules/concerns/project-structure.md exists
OK: rules/languages/python.md exists
OK: rules/languages/typescript.md exists
OK: rules/languages/nix.md exists
OK: rules/languages/shell.md exists
OK: rules/frameworks/n8n.md exists
RESULT: All paths resolve
File diff suppressed because it is too large
@@ -1,28 +0,0 @@
## Task 5: Update Mem0 Memory Skill (2026-02-12)

### Decisions Made

1. **Section Placement**: Added new sections without disrupting existing content structure
   - "Memory Categories" after "Identity Scopes" (line ~109)
   - "Dual-Layer Sync" after "Workflow Patterns" (line ~138)
   - Extended "Health Check" section with Pre-Operation Check
   - "Error Handling" at end, before API Reference

2. **Content Structure**:
   - Memory Categories: 5-category classification with table format
   - Dual-Layer Sync: complete sync pattern with bash example
   - Health Check: added pre-operation verification
   - Error Handling: comprehensive graceful degradation patterns

3. **Validation Approach**:
   - Used `./scripts/test-skill.sh --validate` for skill structure validation
   - All sections verified with grep commands
   - Commit and push completed successfully

### Success Patterns

- Edit tool works well for adding sections to existing markdown files
- Preserving existing content while adding new sections
- Using grep for verification of section additions
- `./scripts/test-skill.sh --validate` validates YAML frontmatter automatically
@@ -1,47 +0,0 @@
## Core Memory Skill Creation (2026-02-12)

**Task**: Create `skills/memory/SKILL.md` - dual-layer memory orchestration skill

**Pattern Identified**:
- Skill structure follows YAML frontmatter with required fields:
  - `name`: skill identifier
  - `description`: Use when (X), triggers (Y) pattern
  - `compatibility`: "opencode"
- Markdown structure: Overview, Prerequisites, Workflows, Error Handling, Integration, Quick Reference, See Also

**Verification Pattern**:
```bash
test -f <path> && echo "File exists"
grep "name: <skill>" <path>
grep "key-term" <path>
```

**Key Design Decision**:
- Central orchestration skill that references underlying implementation skills (mem0-memory, obsidian)
- 4 core workflows: Store, Recall, Auto-Capture, Auto-Recall
- Error handling with graceful degradation
## Apollo Agent Prompt Update (2026-02-12)

**Task**: Add memory management responsibilities to the Apollo agent system prompt

**Edit Pattern**: Multiple targeted edits to a single file, preserving existing content
- Line-number-based edits require precise matching of surrounding context
- Edit order: Core Responsibilities → Quality Standards → Tool Usage → Edge Cases
- Each edit inserts new bullet items without removing existing content

**Key Additions**:
1. Core Responsibilities: "Manage dual-layer memory system (Mem0 + Obsidian CODEX)"
2. Quality Standards: memory storage, auto-capture, retrieval, categories
3. Tool Usage: Mem0 REST API (localhost:8000), Obsidian MCP integration
4. Edge Cases: handling for Mem0 unavailable, Obsidian unavailable

**Verification Pattern**:
```bash
grep -c "memory" ~/p/AI/AGENTS/prompts/apollo.txt        # Count occurrences
grep "Mem0" ~/p/AI/AGENTS/prompts/apollo.txt             # Check specific term
grep -i "auto-capture" ~/p/AI/AGENTS/prompts/apollo.txt  # Case-insensitive
```

**Observation**: grep is case-sensitive by default - use -i for case-insensitive searches
@@ -1,120 +0,0 @@
# Opencode Memory Plugin — Learnings

## Session: ses_3a5a47a05ffeoNYfz2RARYsHX9
Started: 2026-02-14

### Architecture Decisions
- SQLite + FTS5 + vec0 replaces mem0+qdrant entirely
- Markdown at ~/CODEX/80-memory/ is the source of truth
- SQLite DB at ~/.local/share/opencode-memory/index.db is a derived index
- OpenAI text-embedding-3-small for embeddings (1536 dimensions)
- Hybrid search: 0.7 vector weight + 0.3 BM25 weight
- Chunking: 400 tokens, 80 overlap (tiktoken cl100k_base)
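The chunking parameters above imply a sliding window with a stride of 320 tokens (400 - 80); a minimal sketch of the window math over token IDs (illustrative only — the plugin chunks tiktoken output in TypeScript):

```python
def chunk_tokens(tokens, size=400, overlap=80):
    """Split a token list into windows of `size` tokens,
    each overlapping the previous window by `overlap` tokens."""
    stride = size - overlap  # 320 with the defaults above
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break  # last window already reaches the end
    return chunks

chunks = chunk_tokens(list(range(1000)))
print([len(c) for c in chunks])  # -> [400, 400, 360]
print(chunks[1][0])              # -> 320 (second window starts one stride in)
```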
### Key Patterns from Openclaw
- MemoryIndexManager pattern (1590 lines) — file watching, chunking, indexing
- Hybrid scoring with weighted combination
- Embedding cache keyed by content_hash + model
- Two sources: "memory" (markdown files) + "sessions" (transcripts)
- Two tools: memory_search (hybrid query) + memory_get (read lines)

### Technical Stack
- Runtime: bun
- Test framework: bun test (TDD)
- SQLite: better-sqlite3 (synchronous API)
- Embeddings: openai npm package
- Chunking: tiktoken (cl100k_base encoding)
- File watching: chokidar
- Validation: zod (for tool schemas)
### Vec0 Extension Findings (Task 1)
- **vec0 extension**: NOT AVAILABLE - requires a vec0.so shared library that is not present
- **Alternative solution**: sqlite-vec package (v0.1.7-alpha.2) successfully tested
- **Loading mechanism**: `sqliteVec.load(db)` loads the vector extension into the database
- **Test result**: works with Node.js (better-sqlite3 native module compatible)
- **Note**: better-sqlite3 does NOT work with the Bun runtime (native module incompatibility)
- **Testing command**: `node -e "const Database = require('better-sqlite3'); const sqliteVec = require('sqlite-vec'); const db = new Database(':memory:'); sqliteVec.load(db); console.log('OK')"`

### Bun Runtime Limitations
- better-sqlite3 native module NOT compatible with Bun (ERR_DLOPEN_FAILED)
- Use Node.js for any code requiring better-sqlite3
- Alternative: the bun:sqlite API (similar API, but not the same library)
## Wave Progress
- Wave 1: IN PROGRESS (Task 1)
- Waves 2-6: PENDING

### Configuration Module Implementation (Task: Config Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied
- **Pattern**: default config object + resolveConfig() function for merging
- **Path expansion**: `expandPath()` helper handles `~` → `$HOME` expansion
- **Test coverage**: 10 tests covering defaults, overrides, path expansion, and config merging
- **TypeScript best practices**: proper type exports from types.ts, type imports in config.ts
- **Defaults match openclaw**: chunking (400/80), search weights (0.7/0.3), minScore (0.35), maxResults (6)
- **Bun test framework**: fast execution (~20ms for 10 tests), clean output
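The `~` → `$HOME` expansion rule can be sketched in a few lines (the plugin's `expandPath()` is TypeScript; this python stand-in only illustrates the behavior):

```python
import os

def expand_path(path: str) -> str:
    """Expand a leading '~' to the user's home directory,
    mirroring the expandPath() helper described above."""
    home = os.environ.get("HOME", "")
    if path == "~" or path.startswith("~/"):
        return home + path[1:]
    return path  # paths without a leading '~' pass through unchanged

os.environ["HOME"] = "/home/demo"  # fixed for a deterministic demo
print(expand_path("~/CODEX/80-memory"))  # -> /home/demo/CODEX/80-memory
```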
### Database Schema Implementation (Task 2)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for the db module
- **Schema tables**: meta, files, chunks, embedding_cache, chunks_fts (FTS5), chunks_vec (vec0)
- **WAL mode**: enabled via `db.pragma('journal_mode = WAL')` for better concurrency
- **Foreign keys**: enabled via `db.pragma('foreign_keys = ON')`
- **sqlite-vec integration**: loaded via `sqliteVec.load(db)` for vector search capabilities
- **FTS5 virtual table**: external content table referencing chunks for full-text search
- **vec0 virtual table**: 1536-dimension float array for OpenAI text-embedding-3-small embeddings
- **Test execution**: use Node.js with tsx for TypeScript execution (not the Bun runtime)
- **Buffer handling**: a Float32Array must be converted to a Buffer via `Buffer.from(array.buffer)` for SQLite binding
- **In-memory databases**: WAL mode returns 'memory' for :memory: DBs, 'wal' for file-based DBs
- **Test coverage**: 9 tests covering table creation, data insertion, FTS5, vec0, WAL mode, and clean closure
- **Error handling**: better-sqlite3 throws "The database connection is not open" for operations on closed DBs

### Node.js Test Execution
- **Issue**: better-sqlite3 not compatible with the Bun runtime (native module)
- **Solution**: use Node.js with tsx (TypeScript executor) for running tests
- **Command**: `npx tsx --test src/__tests__/db.test.ts`
- **node:test API**: uses `describe`, `it`, `before`, `after` from the 'node:test' module
- **Assertions**: use `assert` from the 'node:assert' module
- **Cleanup**: use `after()` hooks for database cleanup, not `afterEach()` (node:test difference)
### Embedding Provider Implementation (Task: Embeddings Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for the embeddings module
- **Mock database**: created an in-memory mock for testing, since better-sqlite3 is incompatible with Bun
- **Float32 precision**: embeddings stored/retrieved via Float32Array have limited precision (use toBeCloseTo in tests)
- **Cache implementation**: content_hash + model composite key in the embedding_cache table
- **Retry logic**: exponential backoff (1s, 2s, 4s) for 429/500 errors, max 3 retries
- **Test coverage**: 11 tests covering embed(), embedBatch(), cache hits/misses, API failures, retries, buffer conversion
- **Helper functions**: embeddingToBuffer() and bufferToEmbedding() for Float32Array ↔ Buffer conversion
- **Bun spyOn**: use mockClear() to reset the call count without replacing the mock implementation
- **Buffer size**: a Float32 embedding is stored as a Buffer of size = dimensions * 4 bytes
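The Float32 round-trip, the dimensions * 4 size rule, and the precision caveat can be shown in a language-neutral sketch (these helpers are illustrative stand-ins for embeddingToBuffer()/bufferToEmbedding(), not the plugin's TypeScript code):

```python
import struct

def embedding_to_bytes(embedding: list[float]) -> bytes:
    # Pack as little-endian float32: 4 bytes per dimension
    return struct.pack(f"<{len(embedding)}f", *embedding)

def bytes_to_embedding(buf: bytes) -> list[float]:
    return list(struct.unpack(f"<{len(buf) // 4}f", buf))

vec = [0.1, -0.25, 0.5]
buf = embedding_to_bytes(vec)
print(len(buf))  # -> 12 (3 dimensions * 4 bytes)
round_trip = bytes_to_embedding(buf)
# float32 has limited precision: 0.1 does not survive exactly,
# which is why the tests above use toBeCloseTo
print(abs(round_trip[0] - 0.1) < 1e-6)  # -> True
```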
### FTS5 BM25 Search Implementation (Task: FTS5 Search Module)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for the search module
- **buildFtsQuery()**: extracts alphanumeric tokens via the regex `/[A-Za-z0-9_]+/g`, quotes them, joins with AND
- **FTS5 escaping**: tokens are quoted to handle special characters (e.g., `"term"`)
- **BM25 score normalization**: `bm25RankToScore(rank)` converts a BM25 rank to a 0-1 score using `1 / (1 + normalized)`
- **FTS5 external content tables**: the schema uses `content='chunks', content_rowid='rowid'` but requires manual insertion into chunks_fts
- **Test data setup**: must manually insert into chunks_fts after inserting into chunks (external content doesn't auto-populate)
- **BM25 ranking**: results are ordered by the `rank` column (lower rank = better match for FTS5)
- **Error handling**: searchFTS catches SQL errors and returns an empty array (graceful degradation)
- **maxResults parameter**: respected via the LIMIT clause in the SQL query
- **SearchResult interface**: includes id, filePath, startLine, endLine, text, contentHash, source, score (all required)
- **Prefix matching**: FTS5 supports prefix queries automatically via token matching (e.g., "test" matches "testing")
- **No matches**: returns an empty array when the query has no valid tokens or no matches are found
- **Test coverage**: 7 tests covering basic search, exact keywords, partial words, no matches, ranking, maxResults, and metadata
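A python sketch of the two helpers described above (the plugin's versions are TypeScript; the normalization here is an assumption matching only the `1 / (1 + normalized)` shape the note records):

```python
import re

def build_fts_query(query: str) -> str:
    """Tokenize on [A-Za-z0-9_]+, quote each token, AND-join."""
    tokens = re.findall(r"[A-Za-z0-9_]+", query)
    return " AND ".join(f'"{t}"' for t in tokens)

def bm25_rank_to_score(rank: float) -> float:
    """FTS5 bm25() ranks are negative (lower = better match);
    map the magnitude to a 0-1 score of the 1 / (1 + normalized) shape."""
    normalized = abs(rank)
    return 1.0 / (1.0 + normalized)

print(build_fts_query("hybrid search, FTS5!"))  # -> "hybrid" AND "search" AND "FTS5"
```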
### Hybrid Search Implementation (Task: Hybrid Search Combiner)
- **TDD approach**: RED-GREEN-REFACTOR cycle successfully applied for hybrid search
- **Weighted scoring**: combined score = vectorWeight * vectorScore + textWeight * textScore (default: 0.7/0.3)
- **Result merging**: uses a Map<string, HybridSearchResult> to merge results by chunk ID, preventing duplicates
- **Dual-score tracking**: each result tracks vectorScore and textScore separately, allowing for degraded modes
- **Graceful degradation**: works FTS5-only (vector search fails) or vector-only (FTS5 fails)
- **minScore filtering**: results below the minScore threshold are filtered out after score calculation
- **Score sorting**: results sorted by combined score in descending order before applying the maxResults limit
- **Vector search fallback**: searchVector catches errors and returns an empty array, allowing FTS5-only operation
- **FTS5 query fallback**: searchFTS catches SQL errors and returns an empty array, allowing vector-only operation
- **Database cleanup**: beforeEach must delete from chunks_fts, chunks_vec, chunks, and files to avoid state bleed
- **Virtual table corruption**: deleting from FTS5/vec0 virtual tables can cause corruption - use try/catch to recreate
- **SearchResult type conflict**: SearchResult is imported from types.ts; don't re-export it in search.ts
- **Test isolation**: virtual tables (chunks_fts, chunks_vec) must be cleared and potentially recreated between tests
- **Buffer conversion**: the query embedding is converted to a Buffer via Buffer.from(new Float32Array(array).buffer)
- **Debug logging**: the process.env.DEBUG_SEARCH flag enables detailed logging of FTS5 and vector search results
- **Test coverage**: 9 tests covering combination, weighting, minScore filtering, deduplication, sorting, maxResults, degraded modes (FTS5-only, vector-only), and custom weights
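The merge-and-score logic above can be sketched as follows (names and sample scores are illustrative; the plugin's combiner is TypeScript):

```python
def combine_results(vector_hits, text_hits, vector_weight=0.7,
                    text_weight=0.3, min_score=0.35, max_results=6):
    """Merge vector and FTS5 hits by chunk id, weight the scores,
    filter by min_score, and return the top max_results."""
    merged = {}  # chunk id -> {"vector": score, "text": score}
    for cid, score in vector_hits:
        merged.setdefault(cid, {"vector": 0.0, "text": 0.0})["vector"] = score
    for cid, score in text_hits:
        merged.setdefault(cid, {"vector": 0.0, "text": 0.0})["text"] = score
    scored = [
        (cid, vector_weight * s["vector"] + text_weight * s["text"])
        for cid, s in merged.items()
    ]
    scored = [r for r in scored if r[1] >= min_score]       # minScore filter
    scored.sort(key=lambda r: r[1], reverse=True)           # best first
    return scored[:max_results]                             # maxResults limit

# A chunk found by both searches outranks single-source hits,
# and degraded mode works with either input list empty.
hits = combine_results([("a", 0.9), ("b", 0.6)], [("a", 0.8), ("c", 0.7)])
print([cid for cid, _ in hits])  # -> ['a', 'b']  ('c' fell below minScore)
```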
@@ -1,60 +0,0 @@
# Rules System - Learnings

## 2026-02-17T17:50 Session Start

### Architecture Pattern
- Nix helper lives in the nixpkgs repo (not AGENTS) - follows the ports.nix pattern
- AGENTS repo stays pure content (markdown rule files only)
- Pattern: `{lib}: { mkOpencodeRules = ...; }`

### Key Files
- nixpkgs: `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix` (reference pattern)
- nixos-config: `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix` (deployment)
- AGENTS: `rules/` directory (content)

### mkOpencodeRules Signature
```nix
mkOpencodeRules {
  agents = inputs.agents;  # Non-flake input path
  languages = [ "python" "typescript" ];
  concerns ? [ "coding-style" "naming" "documentation" "testing" "git-workflow" "project-structure" ];
  frameworks ? [ "n8n" ];
  extraInstructions ? [];
}
```
### Consumption Pattern
```nix
let
  m3taLib = inputs.m3ta-nixpkgs.lib.${system};
  rules = m3taLib.opencode-rules.mkOpencodeRules {
    agents = inputs.agents;
    languages = [ "python" ];
  };
in pkgs.mkShell { shellHook = rules.shellHook; }
```

### Wave 1: Directory Structure (2026-02-17T18:54)
- Successfully created the rules/ directory with subdirectories: concerns/, languages/, frameworks/
- Added .gitkeep files to each subdirectory (git needs at least one file to track empty directories)
- Pattern reference: followed the skills/ directory structure convention
- USAGE.md already existed in rules/ (created by a previous wave)
- AGENTS repo stays pure content - no Nix files added (as planned)
- Verification: ls confirms all three .gitkeep files exist in the proper locations
### Wave 2: Nix Helper Implementation (2026-02-17T19:02)
- Successfully created `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`
- Followed the ports.nix pattern EXACTLY: `{lib}: { mkOpencodeRules = ...; }`
- Function signature: `{ agents, languages ? [], concerns ? [...], frameworks ? [], extraInstructions ? [] }`
- Returns: `{ shellHook, instructions }`
- Instructions list built using map functions for each category (concerns, languages, frameworks, extra)
- ShellHook creates the symlink `.opencode-rules` → `${agents}/rules` and generates `opencode.json` with `$schema`
- JSON generation uses `builtins.toJSON opencodeConfig` where opencodeConfig = `{ "$schema" = "..."; inherit instructions; }`
- Comprehensive doc comments added matching the ports.nix style (multi-line comments with usage examples)
- All paths relative to the project root via the `.opencode-rules/` prefix
- Verification passed:
  - `nix eval --impure` shows the file loads and exposes `mkOpencodeRules`
  - The function returns `{ instructions, shellHook }`
  - The instructions list builds correctly (concerns + languages + frameworks + extra)
  - `nix-instantiate --parse` validates the syntax
  - The shellHook contains both symlink creation and JSON generation (heredoc pattern)
@@ -1,748 +0,0 @@
# Agent Permissions Refinement

## TL;DR

> **Quick Summary**: Refine OpenCode agent permissions for Chiron (planning) and Chriton-Forge (build) to implement 2025 AI security best practices with the principle of least privilege, human-in-the-loop for critical actions, and explicit guardrails against permission bypass.

> **Deliverables**:
> - Updated `agents/agents.json` with refined permissions for Chiron and Chriton-Forge
> - Critical bug fix: duplicate `external_directory` key in the Chiron config
> - Enhanced secret blocking with additional patterns
> - Bash injection prevention rules
> - Git protection against secret commits and repo hijacking

> **Estimated Effort**: Medium
> **Parallel Execution**: NO - sequential changes to a single config file
> **Critical Path**: Fix duplicate key → Apply Chiron permissions → Apply Chriton-Forge permissions → Validate

---
## Context

### Original Request

User wants to refine agent permissions for:
- **Chiron**: Planning agent with read-only access, restricted to read-only subagents, no file editing, can create beads issues
- **Chiron-Forge**: Build agent with write access restricted to ~/p/**, git commits allowed but git push asks, package install commands ask
- **General**: Sane defaults that are secure but open enough for autonomous work

### Interview Summary

**Key Discussions**:
- Chiron: Read-only planning, no file editing, bash denied except for `bd *` commands, external_directory ~/p/** only, task permission to restrict subagents to explore/librarian/athena + chiron-forge for handoff
- Chiron-Forge: Write access restricted to ~/p/**, git commit allow / git push ask, package install commands ask, git config deny
- Workspace path: ~/p/** is a symlink to ~/projects/personal/** (just replacing the path reference)
- Bash security: Block all bash redirect patterns (echo >, cat >, tee, etc.)

**Research Findings**:
- OpenCode supports granular permission rules with wildcards, last-match-wins
- 2025 best practices: principle of least privilege, tiered permissions (read-only auto, destructive ask, JIT privileges), human-in-the-loop for critical actions
- Security hardening: block command injection vectors, prevent git secret commits, add comprehensive secret blocking patterns

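The last-match-wins evaluation noted in the findings can be sketched in shell. This is a hedged illustration using shell globs, not OpenCode's actual matcher; the default decision of `ask` is also an assumption:

```shell
# Hypothetical sketch: evaluate permission rules in order, keeping the
# decision of the LAST pattern that matches (last-match-wins).
# Rules are passed as alternating PATTERN DECISION arguments.
decide() {
  local cmd="$1" decision="ask"  # assumed default when nothing matches
  shift
  while [ "$#" -ge 2 ]; do
    case "$cmd" in
      $1) decision="$2" ;;  # later matches overwrite earlier ones
    esac
    shift 2
  done
  printf '%s\n' "$decision"
}

decide "git push origin main" "git *" allow "git push *" ask   # → ask
decide "git log --oneline"    "git *" allow "git push *" ask   # → allow
```

Because the last matching rule wins, the broad `git *` allow is overridden by the more specific `git push *` ask when both match, which is why rule ordering matters in the config.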
### Metis Review

**Critical Issues Identified**:
1. **Duplicate `external_directory` key** in Chiron config (lines 8-9 and 27) - the second key overrides the first, breaking intended behavior
2. **Bash edit bypass**: Even with `edit: deny`, bash can write files via redirection (`echo "x" > file.txt`, `cat >`, `tee`)
3. **Git secret protection**: Agent could commit secrets (read .env, then git commit .env)
4. **Git config hijacking**: Agent could modify .git/config to push to an attacker-controlled repo
5. **Command injection**: Malicious content could execute via `$()`, backticks, `eval`, `source`
6. **Secret blocking incomplete**: Missing patterns for `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

**Guardrails Applied**:
- Fix duplicate external_directory key (use a single object with catch-all `"*": "ask"` after specific rules)
- Add bash file write protection patterns (echo >, cat >, printf >, tee, > operators)
- Add git secret protection (`git add *.env*`: deny, `git commit *.env*`: deny)
- Add git config protection (`git config *`: deny for Chiron-Forge)
- Add bash injection prevention (`$(*`, `` `*``, `eval *`, `source *`)
- Expand secret blocking with additional patterns
- Add /run/agenix/* to read deny list

---

## Work Objectives

### Core Objective

Refine OpenCode agent permissions in `agents/agents.json` to implement security hardening based on 2025 AI agent best practices while maintaining autonomous workflow capabilities.

### Concrete Deliverables

- Updated `agents/agents.json` with:
  - Chiron: read-only permissions, subagent restrictions, bash denial (except `bd *`), no file editing
  - Chiron-Forge: write access scoped to ~/p/**, git commit allow / push ask, package install ask, git config deny
  - Both: enhanced secret blocking, bash injection prevention, git secret protection

### Definition of Done

- [x] Permission configuration updated in `agents/agents.json`
- [x] JSON syntax valid (no duplicate keys, valid structure)
- [x] Workspace path validated (~/p/** exists and is correct)
- [x] Acceptance criteria tests pass (via manual verification)

### Must Have

- Chiron cannot edit files directly
- Chiron cannot write files via bash (redirects blocked)
- Chiron restricted to read-only subagents + chiron-forge for handoff
- Chiron-Forge can only write to ~/p/**
- Chiron-Forge cannot run git config
- Both agents block secret file reads
- Both agents prevent command injection
- Git operations cannot commit secrets
- No duplicate keys in permission configuration

### Must NOT Have (Guardrails)

- **Edit bypass via bash**: No bash redirection patterns that allow file writes when `edit: deny`
- **Git secret commits**: No ability to git add/commit .env or credential files
- **Repo hijacking**: No git config modification allowed for Chiron-Forge
- **Command injection**: No `$()`, backticks, `eval`, `source` execution via bash
- **Write scope escape**: Chiron-Forge cannot write outside ~/p/** without asking
- **Secret exfiltration**: No access to .env, .ssh, .gnupg, credentials, secrets, .pem, .key, /run/agenix
- **Unrestricted bash for Chiron**: Only `bd *` commands allowed

---

## Verification Strategy (MANDATORY)

> This is configuration work, not code development. Manual verification is required after deployment.

### Test Decision

- **Infrastructure exists**: YES (home-manager deployment)
- **User wants tests**: NO (manual-only verification)
- **Framework**: None

### Manual Verification Procedures

Each TODO includes EXECUTABLE verification procedures that users can run to validate changes.

**Verification Commands to Run After Deployment:**

1. **JSON Syntax Validation**:
```bash
# Validate JSON structure
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0 (valid JSON)

# Check for duplicate keys (manual review of the chiron permission object;
# note that jq silently keeps the last duplicate, so it cannot detect them)
# Expected: Single external_directory key, no other duplicates
```
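Because jq keeps only the last occurrence of a duplicate key, a scripted check needs a parser that sees the raw key pairs. A minimal sketch, assuming `python3` is available on the machine:

```shell
# Hypothetical helper: exit non-zero if a JSON file contains duplicate
# object keys, which jq cannot detect (it keeps the last occurrence).
check_json_dupes() {
  python3 - "$1" <<'PY'
import json, sys

def reject_dupes(pairs):
    # Called once per JSON object with its raw (key, value) pairs,
    # so duplicates inside nested objects are caught too.
    keys = [k for k, _ in pairs]
    dupes = sorted({k for k in keys if keys.count(k) > 1})
    if dupes:
        raise SystemExit("duplicate keys: %s" % ", ".join(dupes))
    return dict(pairs)

with open(sys.argv[1]) as f:
    json.load(f, object_pairs_hook=reject_dupes)
print("no duplicate keys")
PY
}
```

Run it as `check_json_dupes agents/agents.json`; a non-zero exit names the offending keys.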

2. **Workspace Path Validation**:
```bash
ls -la ~/p/ 2>&1
# Expected: Directory exists, shows contents (likely symlink to ~/projects/personal/)
```

3. **After Deployment - Chiron Read-Only Test** (manual):
- Have Chiron attempt to edit a test file
  - Expected: Permission denied with clear error message
- Have Chiron attempt to write via bash (`echo "test" > /tmp/test.txt`)
  - Expected: Permission denied
- Have Chiron run the `bd ready` command
  - Expected: Command succeeds, returns JSON output with issue list
- Have Chiron attempt to invoke a build-capable subagent (sisyphus-junior)
  - Expected: Permission denied

4. **After Deployment - Chiron Workspace Access** (manual):
- Have Chiron read a file within ~/p/**
  - Expected: Success, returns file contents
- Have Chiron read a file outside ~/p/**
  - Expected: Permission denied or ask user
- Have Chiron delegate to explore/librarian/athena
  - Expected: Success, subagent executes

5. **After Deployment - Chiron-Forge Write Access** (manual):
- Have Chiron-Forge write a test file in a ~/p/** directory
  - Expected: Success, file created
- Have Chiron-Forge attempt to write a file to /tmp
  - Expected: Ask user for approval
- Have Chiron-Forge run `git add` and `git commit -m "test"`
  - Expected: Success, commit created without asking
- Have Chiron-Forge attempt `git push`
  - Expected: Ask user for approval
- Have Chiron-Forge attempt `git config`
  - Expected: Permission denied
- Have Chiron-Forge attempt `npm install lodash`
  - Expected: Ask user for approval

6. **After Deployment - Secret Blocking Tests** (manual):
- Attempt to read a .env file with both agents
  - Expected: Permission denied
- Attempt to read /run/agenix/ with Chiron
  - Expected: Permission denied
- Attempt to read .env.example (should be allowed)
  - Expected: Success

7. **After Deployment - Bash Injection Prevention** (manual):
- Have the agent attempt `bash -c "$(cat /malicious)"`
  - Expected: Permission denied
- Have the agent attempt ``bash -c "`cat /malicious`"``
  - Expected: Permission denied
- Have the agent attempt an `eval` command
  - Expected: Permission denied

8. **After Deployment - Git Secret Protection** (manual):
- Have the agent attempt `git add .env`
  - Expected: Permission denied
- Have the agent attempt `git commit .env`
  - Expected: Permission denied

9. **Deployment Verification**:
```bash
# After home-manager switch, verify config is embedded correctly
jq '.agent.chiron.permission.external_directory' ~/.config/opencode/config.json
# Expected: Shows the ~/p/** rule, no duplicate keys

# Verify agents load without errors
# Expected: No startup errors when launching OpenCode
```

---

## Execution Strategy

### Parallel Execution Waves

> Single file, sequential changes - no parallelization possible.

```
Single-Threaded Execution:
Task 1: Fix duplicate external_directory key
Task 2: Apply Chiron permission updates
Task 3: Apply Chiron-Forge permission updates
Task 4: Validate configuration
```

### Dependency Matrix

| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 2, 3 | None (must start) |
| 2 | 1 | 4 | 3 |
| 3 | 1 | 4 | 2 |
| 4 | 2, 3 | None | None (validation) |

### Agent Dispatch Summary

| Task | Recommended Agent |
|------|-----------------|
| 1 | delegate_task(category="quick", load_skills=["git-master"]) |
| 2 | delegate_task(category="quick", load_skills=["git-master"]) |
| 3 | delegate_task(category="quick", load_skills=["git-master"]) |
| 4 | User (manual verification) |

---

## TODOs

> Implementation tasks for agent configuration changes. Each task MUST include acceptance criteria with executable verification.

- [x] 1. Fix Duplicate external_directory Key in Chiron Config

**What to do**:
- Remove the duplicate `external_directory` key from the Chiron permission object
- Consolidate into a single object with the specific rule + catch-all `"*": "ask"`
- Replace `~/projects/personal/**` with `~/p/**` (symlink to the same directory)

**Must NOT do**:
- Leave duplicate keys (the second key overrides the first, breaking the config)
- Skip workspace path validation (verify ~/p/** exists)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple JSON edit, single file change, no complex logic
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (no investigation required)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: Tasks 2, 3 (depends on clean config)
- **Blocked By**: None (can start immediately)

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:1-135` - Current agent configuration structure (JSON format, permission object structure)
- `agents/agents.json:7-29` - Chiron permission object (current state with duplicate key)

**API/Type References** (contracts to implement against):
- OpenCode permission schema: `{"permission": {"bash": {...}, "edit": "...", "external_directory": {...}, "task": {...}}}`

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user decisions and requirements
- Metis analysis: Critical issue #1 - Duplicate external_directory key

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission system documentation (allow/ask/deny, wildcards, last-match-wins)
- OpenCode docs: https://opencode.ai/docs/agents/ - Agent configuration format

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json` - Target file to modify; shows current structure and the duplicate key bug
- Interview draft - Contains all user decisions (~/p/** path, subagent restrictions, etc.)
- OpenCode permissions docs - Explains permission system mechanics (last-match-wins is critical for rule ordering)
- Metis analysis - Identifies the duplicate key bug that MUST be fixed

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
\`\`\`bash
# Agent runs:
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0 (valid JSON)

# Verify a single external_directory key in the chiron permission object.
# Note: jq collapses duplicate keys, so count occurrences in the raw text.
grep -c '"external_directory"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Count matches the number of agents that define the key (no extras)

# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")
\`\`\`

**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] external_directory raw-text count (one per agent that defines it)
- [x] Workspace path ls output (shows directory exists)

**Commit**: NO (group with Tasks 2 and 3)

- [x] 2. Apply Chiron Permission Updates

**What to do**:
- Set `edit` to `"deny"` (planning agent should not write files)
- Set `bash` permissions to deny all except `bd *`:
  ```json
  "bash": {
    "*": "deny",
    "bd *": "allow"
  }
  ```
- Set `external_directory` to `~/p/**` with a catch-all ask:
  ```json
  "external_directory": {
    "~/p/**": "allow",
    "*": "ask"
  }
  ```
- Add a `task` permission to restrict subagents:
  ```json
  "task": {
    "*": "deny",
    "explore": "allow",
    "librarian": "allow",
    "athena": "allow",
    "chiron-forge": "allow"
  }
  ```
- Add `/run/agenix/*` to the read deny list
- Add expanded secret blocking patterns: `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

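The task allowlist above could also be applied mechanically with `jq` rather than hand-editing. This is a hedged sketch: a sample config is created inline so the snippet is self-contained, whereas in the real repo it would run against `agents/agents.json`:

```shell
# Hypothetical: set the Chiron task allowlist with jq, writing through a
# temp file because jq cannot edit in place.
cfg="$(mktemp)"
printf '{"chiron": {"permission": {"edit": "deny"}}}' > "$cfg"

tmp="$(mktemp)"
jq '.chiron.permission.task = {
      "*": "deny",
      "explore": "allow",
      "librarian": "allow",
      "athena": "allow",
      "chiron-forge": "allow"
    }' "$cfg" > "$tmp" && mv "$tmp" "$cfg"

jq -r '.chiron.permission.task."chiron-forge"' "$cfg"   # → allow
```

Generating the object with jq avoids the hand-edit mistakes (duplicate keys, trailing commas) this plan is trying to fix.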
**Must NOT do**:
- Allow bash file write operators (echo >, cat >, tee, etc.) - will add in Task 3 for both agents
- Allow Chiron to invoke build-capable subagents beyond chiron-forge
- Skip webfetch permission (should be "allow" for research capability)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in draft)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 3)
- **Blocks**: Task 4
- **Blocked By**: Task 1

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:11-24` - Current Chiron read permissions with secret blocking patterns
- `agents/agents.json:114-132` - Athena permission object (read-only subagent reference pattern)

**API/Type References** (contracts to implement against):
- OpenCode task permission schema: `{"task": {"agent-name": "allow"}}`

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron permission decisions
- Metis analysis: Guardrails #7, #8 - Secret blocking patterns, task permission implementation

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/agents/#task-permissions - Task permission documentation
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission level definitions and pattern matching

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:11-24` - Shows current secret blocking patterns to extend
- `agents/agents.json:114-132` - Shows read-only subagent pattern for reference (athena: deny bash, deny edit)
- Interview draft - Contains exact user requirements for Chiron permissions
- OpenCode task docs - Explains how to restrict subagent invocation via task permission

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
\`\`\`bash
# Agent runs:
jq '.chiron.permission.edit' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.bash."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.bash."bd *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '.chiron.permission.task."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '.chiron.permission.task | keys' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Contains ["*", "athena", "chiron-forge", "explore", "librarian"]

jq '.chiron.permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '.chiron.permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

jq '.chiron.permission.read."/run/agenix/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
\`\`\`

**Evidence to Capture**:
- [x] Edit permission value (should be "deny")
- [x] Bash wildcard permission (should be "deny")
- [x] Bash bd permission (should be "allow")
- [x] Task wildcard permission (should be "deny")
- [x] Task allowlist keys (should show 5 entries)
- [x] External directory ~/p/** permission (should be "allow")
- [x] External directory wildcard permission (should be "ask")
- [x] Read /run/agenix/* permission (should be "deny")

**Commit**: NO (group with Task 3)

- [x] 3. Apply Chiron-Forge Permission Updates

**What to do**:
- Split `git *: "ask"` into granular rules:
  - Allow: `git add *`, `git commit *`, read-only commands (status, log, diff, branch, show, stash, remote)
  - Ask: `git push *`
  - Deny: `git config *`
- Change package managers from `"ask"` to granular rules:
  - Ask for installs: `npm install *`, `npm i *`, `npx *`, `pip install *`, `pip3 install *`, `uv *`, `bun install *`, `bun i *`, `bunx *`, `yarn install *`, `yarn add *`, `pnpm install *`, `pnpm add *`, `cargo install *`, `go install *`, `make install`
  - Allow other commands implicitly (let them use catch-all rules or existing allow patterns)
- Set `external_directory` to allow `~/p/**` with a catch-all ask:
  ```json
  "external_directory": {
    "~/p/**": "allow",
    "*": "ask"
  }
  ```
- Add bash file write protection patterns (apply to both agents):
  ```json
  "bash": {
    "echo * > *": "deny",
    "cat * > *": "deny",
    "printf * > *": "deny",
    "tee": "deny",
    "*>*": "deny",
    ">*>*": "deny"
  }
  ```
- Add bash command injection prevention (apply to both agents):
  ```json
  "bash": {
    "$(*": "deny",
    "`*": "deny",
    "eval *": "deny",
    "source *": "deny"
  }
  ```
- Add git secret protection patterns (apply to both agents):
  ```json
  "bash": {
    "git add *.env*": "deny",
    "git commit *.env*": "deny",
    "git add *credentials*": "deny",
    "git add *secrets*": "deny"
  }
  ```
- Add expanded secret blocking patterns to the read permission:
  - `.local/share/*`, `.cache/*`, `*.db`, `*.keychain`, `*.p12`

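The write-redirect deny patterns can be smoke-tested locally with shell globs. The glob semantics here are an assumption about how OpenCode matches patterns, so treat this as an illustration rather than the real enforcement path:

```shell
# Hypothetical checker: does a command string match any of the
# write-redirect deny patterns listed in the config above?
blocks_write() {
  local cmd="$1" pat
  for pat in 'echo * > *' 'cat * > *' 'printf * > *' 'tee' '*>*'; do
    case "$cmd" in
      $pat) return 0 ;;  # matched a deny pattern
    esac
  done
  return 1  # no deny pattern matched
}

blocks_write 'echo secret > /tmp/x' && echo blocked   # → blocked
blocks_write 'ls -la' || echo allowed                 # → allowed
```

Running candidate commands through a checker like this before deployment helps confirm the glob list catches redirects without swallowing ordinary read-only commands.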
**Must NOT do**:
- Remove existing bash deny rules for dangerous commands (dd, mkfs, fdisk, parted, eval, sudo, su, systemctl, etc.)
- Allow git config modifications
- Allow bash to write files via any method (must block all redirect patterns)
- Skip command injection prevention ($(), backticks, eval, source)

**Recommended Agent Profile**:
> **Category**: quick
- Reason: JSON configuration update, follows clear specifications from draft
> **Skills**: git-master
- git-master: Git workflow for committing changes
> **Skills Evaluated but Omitted**:
- research: Not needed (all requirements documented in draft)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Task 2)
- **Blocks**: Task 4
- **Blocked By**: Task 1

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `agents/agents.json:37-103` - Current Chiron-Forge bash permissions (many explicit allow/ask/deny rules)
- `agents/agents.json:37-50` - Current Chiron-Forge read permissions with secret blocking

**API/Type References** (contracts to implement against):
- OpenCode permission schema: Same as Task 2

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - Chiron-Forge permission decisions
- Metis analysis: Guardrails #1-#6 - Bash edit bypass, git secret protection, command injection, git config protection

**External References** (libraries and frameworks):
- OpenCode docs: https://opencode.ai/docs/permissions/ - Permission pattern matching (wildcards, last-match-wins)

**WHY Each Reference Matters** (explain the relevance):
- `agents/agents.json:37-103` - Shows current bash permission structure (many explicit rules) to extend with new patterns
- `agents/agents.json:37-50` - Shows current secret blocking to extend with additional patterns
- Interview draft - Contains exact user requirements for Chiron-Forge permissions
- Metis analysis - Provides bash injection prevention patterns and git protection rules

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
\`\`\`bash
# Agent runs (note: hyphenated keys need the `."chiron-forge"` quoting in jq):

# Verify git commit is allowed
jq '."chiron-forge".permission.bash."git commit *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

# Verify git push asks
jq '."chiron-forge".permission.bash."git push *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify git config is denied
jq '."chiron-forge".permission.bash."git config *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify npm install asks
jq '."chiron-forge".permission.bash."npm install *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify bash file write redirects are blocked
jq '."chiron-forge".permission.bash."echo * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."cat * > *"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."tee"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify command injection is blocked
jq '."chiron-forge".permission.bash."$(*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."`*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify git secret protection
jq '."chiron-forge".permission.bash."git add *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.bash."git commit *.env*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

# Verify external_directory scope
jq '."chiron-forge".permission.external_directory."~/p/**"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "allow"

jq '."chiron-forge".permission.external_directory."*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "ask"

# Verify expanded secret blocking
jq '."chiron-forge".permission.read.".local/share/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.read.".cache/*"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"

jq '."chiron-forge".permission.read."*.db"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Output is "deny"
\`\`\`

**Evidence to Capture**:
- [x] Git commit permission (should be "allow")
- [x] Git push permission (should be "ask")
- [x] Git config permission (should be "deny")
- [x] npm install permission (should be "ask")
- [x] bash redirect echo > permission (should be "deny")
- [x] bash redirect cat > permission (should be "deny")
- [x] bash tee permission (should be "deny")
- [x] bash $() injection permission (should be "deny")
- [x] bash backtick injection permission (should be "deny")
- [x] git add *.env* permission (should be "deny")
- [x] git commit *.env* permission (should be "deny")
- [x] external_directory ~/p/** permission (should be "allow")
- [x] external_directory wildcard permission (should be "ask")
- [x] read .local/share/* permission (should be "deny")
- [x] read .cache/* permission (should be "deny")
- [x] read *.db permission (should be "deny")

**Commit**: YES (groups Tasks 1, 2, and 3)
- Message: `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening`
- Files: `agents/agents.json`
- Pre-commit: `jq '.' agents/agents.json > /dev/null 2>&1` (validate JSON)

- [x] 4. Validate Configuration (Manual Verification)

**What to do**:
- Run JSON syntax validation: `jq '.' agents/agents.json`
- Verify no duplicate keys in the configuration
- Verify the workspace path exists: `ls -la ~/p/`
- Document the manual verification procedure for post-deployment testing

**Must NOT do**:
- Skip workspace path validation
- Skip duplicate key verification
- Proceed to deployment without validation

**Recommended Agent Profile**:
> **Category**: quick
- Reason: Simple validation commands, documentation task
> **Skills**: git-master
- git-master: Git workflow for committing validation script or notes if needed
> **Skills Evaluated but Omitted**:
- research: Not needed (validation is straightforward)
- librarian: Not needed (no external docs needed)

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential
- **Blocks**: None (final validation task)
- **Blocked By**: Tasks 2, 3

**References** (CRITICAL - Be Exhaustive):

**Pattern References** (existing code to follow):
- `AGENTS.md` - Repository documentation structure

**API/Type References** (contracts to implement against):
- N/A (validation task)

**Documentation References** (specs and requirements):
- Interview draft: `.sisyphus/drafts/agent-permissions-refinement.md` - All user requirements
- Metis analysis: Guardrails #1-#6 - Validation requirements

**External References** (libraries and frameworks):
- N/A (validation task)

**WHY Each Reference Matters** (explain the relevance):
- Interview draft - Contains all requirements to validate against
- Metis analysis - Identifies specific validation steps (duplicate keys, workspace path, etc.)

**Acceptance Criteria**:

> **CRITICAL: AGENT-EXECUTABLE VERIFICATION ONLY**

**Automated Verification (config validation)**:
\`\`\`bash
# Agent runs:

# JSON syntax validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0

# Verify no duplicate external_directory keys.
# jq collapses duplicates, so count occurrences in the raw text.
grep -c '"external_directory"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Assert: Count matches the number of agents that define the key (no extras)

# Verify workspace path exists
ls -la ~/p/ 2>&1 | head -1
# Assert: Shows directory listing (not "No such file or directory")

# Verify all permission keys are valid
jq '.chiron.permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0

jq '."chiron-forge".permission' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Assert: Exit code 0
\`\`\`

**Evidence to Capture**:
- [x] jq validation output (exit code 0)
- [x] Chiron external_directory raw-text count (no duplicates)
- [x] Chiron-Forge external_directory raw-text count (no duplicates)
- [x] Workspace path ls output (shows directory exists)
- [x] Chiron permission object validation (exit code 0)
- [x] Chiron-Forge permission object validation (exit code 0)

**Commit**: NO (validation only, no changes)

---

## Commit Strategy

| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2, 3 | `chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening` | agents/agents.json | `jq '.' agents/agents.json > /dev/null` |
| 4 | N/A (validation only) | N/A | N/A |

---

## Success Criteria

### Verification Commands
```bash
# Pre-deployment validation
jq '.' /home/m3tam3re/p/AI/AGENTS/agents/agents.json > /dev/null 2>&1
# Expected: Exit code 0

# Duplicate key check (jq collapses duplicates, so grep the raw text)
grep -c '"external_directory"' /home/m3tam3re/p/AI/AGENTS/agents/agents.json
# Expected: One occurrence per agent that defines the key

# Workspace path validation
ls -la ~/p/ 2>&1
# Expected: Directory listing

# Post-deployment (manual)
# Have Chiron attempt a file edit → Expected: Permission denied
# Have Chiron run bd ready → Expected: Success
# Have Chiron-Forge git commit → Expected: Success
# Have Chiron-Forge git push → Expected: Ask user
# Have an agent read .env → Expected: Permission denied
```
|
||||
|
||||
### Final Checklist
|
||||
- [x] Duplicate `external_directory` key fixed
- [x] Chiron edit set to "deny"
- [x] Chiron bash denied except `bd *`
- [x] Chiron task permission restricts subagents (explore, librarian, athena, chiron-forge)
- [x] Chiron external_directory allows ~/p/** only
- [x] Chiron-Forge git commit allowed, git push asks
- [x] Chiron-Forge git config denied
- [x] Chiron-Forge package install commands ask
- [x] Chiron-Forge external_directory allows ~/p/**, asks for others
- [x] Bash file write operators blocked (echo >, cat >, tee, etc.)
- [x] Bash command injection blocked ($(), backticks, eval, source)
- [x] Git secret protection added (git add/commit *.env* deny)
- [x] Expanded secret blocking patterns added (.local/share/*, .cache/*, *.db, *.keychain, *.p12)
- [x] /run/agenix/* blocked in read permissions
- [x] JSON syntax valid (jq validates)
- [x] No duplicate keys in configuration
- [x] Workspace path ~/p/** exists
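
A caveat on the last two checklist items: `jq` parses a document containing duplicate keys without complaint and simply keeps the last value, so `jq '.'` alone cannot confirm "No duplicate keys in configuration". A stricter check can hook into Python's JSON parser; the snippet below is a sketch, and the sample document is illustrative rather than the real agents.json:

```python
import json
from collections import Counter

def reject_duplicate_keys(pairs):
    # json.loads (like jq) silently keeps only the last value for a
    # repeated key, so hook into object parsing to detect duplicates.
    counts = Counter(key for key, _ in pairs)
    duplicates = sorted(key for key, count in counts.items() if count > 1)
    if duplicates:
        raise ValueError(f"duplicate keys: {duplicates}")
    return dict(pairs)

# The shape of the bug this plan fixes: a repeated external_directory key.
bad = '{"external_directory": {"~/p/**": "allow"}, "external_directory": {"*": "ask"}}'
try:
    json.loads(bad, object_pairs_hook=reject_duplicate_keys)
except ValueError as exc:
    print(exc)  # duplicate keys: ['external_directory']
```

Running this over `agents/agents.json` (via `json.load(f, object_pairs_hook=reject_duplicate_keys)`) would fail loudly on the regression instead of silently passing.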
@@ -1,977 +0,0 @@
# Chiron Personal Agent Framework

## TL;DR

> **Quick Summary**: Create an Oh-My-Opencode-style agent framework for personal productivity with Chiron as the orchestrator, 4 specialized subagents (Hermes, Athena, Apollo, Calliope), and 5 tool integration skills (Basecamp, Outline, MS Teams, Outlook, Obsidian).
>
> **Deliverables**:
> - 6 agent definitions in `agents.json`
> - 6 system prompt files in `prompts/`
> - 5 tool integration skills in `skills/`
> - Validation script extension in `scripts/`
>
> **Estimated Effort**: Medium
> **Parallel Execution**: YES - 4 waves
> **Critical Path**: Task 1 (agents.json) → Tasks 3-8 (prompts) → Tasks 9-13 (skills) → Task 14 (validation)
>
> **Status**: ✅ COMPLETE - All 14 main tasks + 6 verification items = 20/20 deliverables

---

## Context

### Original Request
Create an agent framework similar to Oh-My-Opencode but focused on personal productivity:
- Manage work tasks, appointments, projects via Basecamp, Outline, MS Teams, Outlook
- Manage private tasks and knowledge via Obsidian
- Greek mythology naming convention (avoiding Oh My OpenCode names)
- Main agent named "Chiron"

### Interview Summary
**Key Discussions**:
- **Chiron's Role**: Main orchestrator that delegates to specialized subagents
- **Agent Count**: Minimal (3-4 agents initially) + 2 primary agents
- **Domain Separation**: Separate work vs private agents with clear boundaries
- **Tool Priority**: All 4 work tools + Obsidian equally important
- **Basecamp MCP**: User confirmed a working MCP at georgeantonopoulos/Basecamp-MCP-Server

**Research Findings**:
- Oh My OpenCode names to avoid: Sisyphus, Atlas, Prometheus, Hephaestus, Metis, Momus, Oracle, Librarian, Explore, Multimodal-Looker, Sisyphus-Junior
- MCP servers available for all work tools + Obsidian
- Protonmail requires custom IMAP/SMTP (deferred)
- Current repo has established skill patterns with SKILL.md + optional subdirectories

### Metis Review
**Identified Gaps** (addressed in plan):
- Delegation model clarified: Chiron uses the Question tool for ambiguous requests
- Behavioral difference between Chiron and Chiron-Forge defined
- Executable acceptance criteria added for all tasks
- Edge cases documented in the guardrails section
- MCP authentication assumed pre-configured by NixOS (explicit scope boundary)

---

## Work Objectives

### Core Objective
Create a personal productivity agent framework following Oh-My-Opencode patterns, enabling AI-assisted management of work and private life through specialized agents that integrate with existing tools.

### Concrete Deliverables
1. `agents/agents.json` - 6 agent definitions (2 primary, 4 subagent)
2. `prompts/chiron.txt` - Chiron (plan mode) system prompt
3. `prompts/chiron-forge.txt` - Chiron-Forge (build mode) system prompt
4. `prompts/hermes.txt` - Work communication agent prompt
5. `prompts/athena.txt` - Work knowledge agent prompt
6. `prompts/apollo.txt` - Private knowledge agent prompt
7. `prompts/calliope.txt` - Writing agent prompt
8. `skills/basecamp/SKILL.md` - Basecamp integration skill
9. `skills/outline/SKILL.md` - Outline wiki integration skill
10. `skills/msteams/SKILL.md` - MS Teams integration skill
11. `skills/outlook/SKILL.md` - Outlook email integration skill
12. `skills/obsidian/SKILL.md` - Obsidian integration skill
13. `scripts/validate-agents.sh` - Agent validation script

### Definition of Done
- [x] `python3 -c "import json; json.load(open('agents/agents.json'))"` → Exit 0
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skill directories have valid SKILL.md with YAML frontmatter
- [x] `./scripts/test-skill.sh --validate` passes for new skills
- [x] `./scripts/validate-agents.sh` passes

### Must Have
- All agents use the Question tool for multi-choice decisions
- External prompt files (not inline in JSON)
- Follow existing skill structure patterns
- Greek naming convention for agents
- Clear separation between plan mode (Chiron) and build mode (Chiron-Forge)
- Skills provide tool-specific knowledge that agents load on demand

### Must NOT Have (Guardrails)
- **NO MCP server configuration** - Managed by NixOS, outside this repo
- **NO authentication handling** - Assume pre-configured MCP tools
- **NO cross-agent state sharing** - Each agent operates independently
- **NO new opencode commands** - Use existing command patterns only
- **NO generic "I'm an AI assistant" prompts** - Domain-specific responsibilities only
- **NO Protonmail integration** - Deferred to future phase
- **NO duplicate tool knowledge across skills** - Each skill focuses on ONE tool
- **NO scripts outside scripts/ directory**
- **NO model configuration changes** - Keep current `zai-coding-plan/glm-4.7`

---

## Verification Strategy (MANDATORY)

> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
>
> ALL tasks in this plan MUST be verifiable WITHOUT any human action.
> This is NOT conditional - it applies to EVERY task, regardless of test strategy.
>
> ### Test Decision
> - **Infrastructure exists**: YES (test-skill.sh)
> - **Automated tests**: Tests-after (validation scripts)
> - **Framework**: bash + python for validation
>
> ### Agent-Executed QA Scenarios (MANDATORY - ALL tasks)
>
> **Verification Tool by Deliverable Type**:
>
> | Type | Tool | How Agent Verifies |
> |------|------|-------------------|
> | **agents.json** | Bash (python/jq) | Parse JSON, validate structure, check required fields |
> | **Prompt files** | Bash (file checks) | File exists, non-empty, contains expected sections |
> | **SKILL.md files** | Bash (test-skill.sh) | YAML frontmatter valid, name matches directory |
> | **Validation scripts** | Bash | Script is executable, runs without error, produces expected output |

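As a sketch of what the "Validation scripts" row could boil down to for agents.json, the structural check is a few lines of Python. The required field names follow Task 1's requirements; the sample input is illustrative, not the repository's actual configuration:

```python
import json

REQUIRED_FIELDS = ("description", "mode", "prompt")

def validate_agents(raw: str) -> int:
    """Parse agents JSON and return the agent count, or raise ValueError."""
    data = json.loads(raw)
    for name, agent in data.items():
        for field in REQUIRED_FIELDS:
            if field not in agent:
                raise ValueError(f"{name}: missing {field}")
    return len(data)

# Illustrative input only; the real file is agents/agents.json.
sample = '{"chiron": {"description": "Planner", "mode": "primary", "prompt": "{file:prompts/chiron.txt}"}}'
print(validate_agents(sample))  # → 1
```

`scripts/validate-agents.sh` could wrap a check like this via `python3 -`, mirroring the acceptance-criteria commands used in Task 1.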
---

## Execution Strategy

### Parallel Execution Waves

```
Wave 1 (Start Immediately):
├── Task 1: Create agents.json configuration [no dependencies]
└── Task 2: Create prompts/ directory structure [no dependencies]

Wave 2 (After Wave 1):
├── Task 3: Chiron prompt [depends: 2]
├── Task 4: Chiron-Forge prompt [depends: 2]
├── Task 5: Hermes prompt [depends: 2]
├── Task 6: Athena prompt [depends: 2]
├── Task 7: Apollo prompt [depends: 2]
└── Task 8: Calliope prompt [depends: 2]

Wave 3 (Can parallel with Wave 2):
├── Task 9: Basecamp skill [no dependencies]
├── Task 10: Outline skill [no dependencies]
├── Task 11: MS Teams skill [no dependencies]
├── Task 12: Outlook skill [no dependencies]
└── Task 13: Obsidian skill [no dependencies]

Wave 4 (After Wave 2 + 3):
└── Task 14: Validation script [depends: 1, 3-8]

Critical Path: Task 1 → Task 2 → Tasks 3-8 → Task 14
Parallel Speedup: ~50% faster than sequential
```

### Dependency Matrix

| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 14 | 2, 9-13 |
| 2 | None | 3-8 | 1, 9-13 |
| 3-8 | 2 | 14 | Each other, 9-13 |
| 9-13 | None | None | Each other, 1-2 |
| 14 | 1, 3-8 | None | (final) |

### Agent Dispatch Summary

| Wave | Tasks | Recommended Category |
|------|-------|---------------------|
| 1 | 1, 2 | quick |
| 2 | 3-8 | quick (parallel) |
| 3 | 9-13 | quick (parallel) |
| 4 | 14 | quick |

---

## TODOs

### Wave 1: Foundation

- [x] 1. Create agents.json with 6 agent definitions

**What to do**:
- Update existing `agents/agents.json` to add all 6 agents
- Each agent needs: description, mode, model, prompt reference
- Primary agents: chiron, chiron-forge
- Subagents: hermes, athena, apollo, calliope
- All agents should have `question: "allow"` permission

**Must NOT do**:
- Do not add MCP server configuration
- Do not change model from current pattern
- Do not add inline prompts (use file references)

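Under these constraints, a single entry in `agents.json` might look like the sketch below. The field set follows the bullets above and the model string is the one the guardrails say to keep; the prompt-reference syntax and permission shape are assumptions to be checked against the agent-development references:

```json
{
  "chiron": {
    "description": "Main orchestrator: plans, analyzes, and delegates to subagents",
    "mode": "primary",
    "model": "zai-coding-plan/glm-4.7",
    "prompt": "{file:prompts/chiron.txt}",
    "permission": {
      "question": "allow"
    }
  }
}
```
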
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
  - `agent-development`: Provides agent configuration patterns and best practices

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 2)
- **Blocks**: Task 14
- **Blocked By**: None

**References**:
- `agents/agents.json:1-7` - Current chiron agent configuration pattern
- `skills/agent-development/SKILL.md:40-76` - JSON agent structure reference
- `skills/agent-development/SKILL.md:226-277` - Permissions system reference
- `skills/agent-development/references/opencode-agents-json-example.md` - Complete examples

**Acceptance Criteria**:

```
Scenario: agents.json is valid JSON with all 6 agents
Tool: Bash (python)
Steps:
1. python3 -c "import json; data = json.load(open('agents/agents.json')); print(len(data))"
2. Assert: Output is "6"
3. python3 -c "import json; data = json.load(open('agents/agents.json')); print(sorted(data.keys()))"
4. Assert: Output contains ['apollo', 'athena', 'calliope', 'chiron', 'chiron-forge', 'hermes']
Expected Result: JSON parses, all 6 agents present
Evidence: Command output captured

Scenario: Each agent has required fields
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name, agent in data.items():
    assert 'description' in agent, f'{name}: missing description'
    assert 'mode' in agent, f'{name}: missing mode'
    assert 'prompt' in agent, f'{name}: missing prompt'
print('All agents valid')
"
2. Assert: Output is "All agents valid"
Expected Result: All required fields present
Evidence: Validation output captured

Scenario: Primary agents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
assert data['chiron']['mode'] == 'primary'
assert data['chiron-forge']['mode'] == 'primary'
print('Primary modes correct')
"
Expected Result: Both primary agents have mode=primary
Evidence: Command output

Scenario: Subagents have correct mode
Tool: Bash (python)
Steps:
1. python3 -c "
import json
data = json.load(open('agents/agents.json'))
for name in ['hermes', 'athena', 'apollo', 'calliope']:
    assert data[name]['mode'] == 'subagent', f'{name}: wrong mode'
print('Subagent modes correct')
"
Expected Result: All subagents have mode=subagent
Evidence: Command output
```

**Commit**: YES
- Message: `feat(agents): add chiron agent framework with 6 agents`
- Files: `agents/agents.json`
- Pre-commit: `python3 -c "import json; json.load(open('agents/agents.json'))"`

---

- [x] 2. Create prompts directory structure

**What to do**:
- Create `prompts/` directory if it does not exist
- Directory will hold all agent system prompt files

**Must NOT do**:
- Do not create prompt files yet (done in Wave 2)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 1)
- **Blocks**: Tasks 3-8
- **Blocked By**: None

**References**:
- `skills/agent-development/SKILL.md:148-159` - Prompt file conventions

**Acceptance Criteria**:

```
Scenario: prompts directory exists
Tool: Bash
Steps:
1. test -d prompts && echo "exists" || echo "missing"
2. Assert: Output is "exists"
Expected Result: Directory created
Evidence: Command output
```

**Commit**: NO (groups with Task 1)

---

### Wave 2: Agent Prompts

- [x] 3. Create Chiron (Plan Mode) system prompt

**What to do**:
- Create `prompts/chiron.txt`
- Define Chiron as the main orchestrator in plan/analysis mode
- Include delegation logic to subagents (Hermes, Athena, Apollo, Calliope)
- Include Question tool usage for ambiguous requests
- Focus on: planning, analysis, guidance, delegation
- Permissions: read-only, no file modifications

**Must NOT do**:
- Do not allow write/edit operations
- Do not include execution responsibilities
- Do not overlap with Chiron-Forge's build capabilities

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]
  - `agent-development`: System prompt design patterns

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 4-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-386` - System prompt design patterns
- `skills/agent-development/SKILL.md:397-415` - Prompt best practices
- `skills/agent-development/references/system-prompt-design.md` - Detailed prompt patterns

**Acceptance Criteria**:

```
Scenario: Chiron prompt file exists and is substantial
Tool: Bash
Steps:
1. test -f prompts/chiron.txt && echo "exists" || echo "missing"
2. Assert: Output is "exists"
3. wc -c < prompts/chiron.txt
4. Assert: Output is > 500 (substantial content)
Expected Result: File exists with meaningful content
Evidence: File size captured

Scenario: Chiron prompt contains orchestrator role
Tool: Bash (grep)
Steps:
1. grep -qi "orchestrat" prompts/chiron.txt && echo "found" || echo "missing"
2. Assert: Output is "found"
3. grep -qi "delegat" prompts/chiron.txt && echo "found" || echo "missing"
4. Assert: Output is "found"
Expected Result: Prompt describes orchestration and delegation
Evidence: grep output

Scenario: Chiron prompt references subagents
Tool: Bash (grep)
Steps:
1. grep -qi "hermes" prompts/chiron.txt && echo "found" || echo "missing"
2. grep -qi "athena" prompts/chiron.txt && echo "found" || echo "missing"
3. grep -qi "apollo" prompts/chiron.txt && echo "found" || echo "missing"
4. grep -qi "calliope" prompts/chiron.txt && echo "found" || echo "missing"
Expected Result: All 4 subagents mentioned
Evidence: grep outputs
```

**Commit**: YES (group with Tasks 4-8)
- Message: `feat(prompts): add chiron and subagent system prompts`
- Files: `prompts/*.txt`
- Pre-commit: `for f in prompts/*.txt; do test -s "$f" || exit 1; done`

---

- [x] 4. Create Chiron-Forge (Build Mode) system prompt

**What to do**:
- Create `prompts/chiron-forge.txt`
- Define as Chiron's execution/build counterpart
- Full write access for task execution
- Can modify files, run commands, complete tasks
- Still delegates to subagents for specialized domains
- Uses the Question tool to confirm destructive operations

**Must NOT do**:
- Do not make it a planning-only agent (that's Chiron)
- Do not allow destructive operations without confirmation

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 3, 5-8)
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:316-346` - Complete agent example with chiron/chiron-forge pattern
- `skills/agent-development/SKILL.md:253-277` - Permission patterns for bash commands

**Acceptance Criteria**:

```
Scenario: Chiron-Forge prompt file exists
Tool: Bash
Steps:
1. test -f prompts/chiron-forge.txt && wc -c < prompts/chiron-forge.txt
2. Assert: Output > 500
Expected Result: File exists with substantial content
Evidence: File size

Scenario: Chiron-Forge prompt emphasizes execution
Tool: Bash (grep)
Steps:
1. grep -qi "execut" prompts/chiron-forge.txt && echo "found" || echo "missing"
2. grep -qi "build" prompts/chiron-forge.txt && echo "found" || echo "missing"
Expected Result: Execution/build terminology present
Evidence: grep output
```

**Commit**: YES (groups with Task 3)

---

- [x] 5. Create Hermes (Work Communication) system prompt

**What to do**:
- Create `prompts/hermes.txt`
- Specialization: Basecamp tasks, Outlook email, MS Teams meetings
- Greek god of communication, messengers, quick tasks
- Uses Question tool for: which tool to use, clarifying recipients
- Focus on: task updates, email drafting, meeting scheduling

**Must NOT do**:
- Do not handle documentation (Athena's domain)
- Do not handle personal/private tools (Apollo's domain)
- Do not write long-form content (Calliope's domain)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Hermes prompt defines communication domain
Tool: Bash (grep)
Steps:
1. grep -qi "basecamp" prompts/hermes.txt && echo "found" || echo "missing"
2. grep -qi "outlook\|email" prompts/hermes.txt && echo "found" || echo "missing"
3. grep -qi "teams\|meeting" prompts/hermes.txt && echo "found" || echo "missing"
Expected Result: All 3 tools mentioned
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---

- [x] 6. Create Athena (Work Knowledge) system prompt

**What to do**:
- Create `prompts/athena.txt`
- Specialization: Outline wiki, documentation, knowledge organization
- Greek goddess of wisdom and strategic warfare
- Focus on: wiki search, knowledge retrieval, documentation updates
- Uses Question tool for: which document to update, clarifying search scope

**Must NOT do**:
- Do not handle communication (Hermes's domain)
- Do not handle private knowledge (Apollo's domain)
- Do not write creative content (Calliope's domain)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Athena prompt defines knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "outline" prompts/athena.txt && echo "found" || echo "missing"
2. grep -qi "wiki\|knowledge" prompts/athena.txt && echo "found" || echo "missing"
3. grep -qi "document" prompts/athena.txt && echo "found" || echo "missing"
Expected Result: Outline and knowledge terms present
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---

- [x] 7. Create Apollo (Private Knowledge) system prompt

**What to do**:
- Create `prompts/apollo.txt`
- Specialization: Obsidian vault, personal notes, private knowledge graph
- Greek god of knowledge, prophecy, and light
- Focus on: note search, personal task management, knowledge retrieval
- Uses Question tool for: clarifying which vault, which note

**Must NOT do**:
- Do not handle work tools (Hermes/Athena's domain)
- Do not expose personal data to work contexts
- Do not write long-form content (Calliope's domain)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Apollo prompt defines private knowledge domain
Tool: Bash (grep)
Steps:
1. grep -qi "obsidian" prompts/apollo.txt && echo "found" || echo "missing"
2. grep -qi "personal\|private" prompts/apollo.txt && echo "found" || echo "missing"
3. grep -qi "note\|vault" prompts/apollo.txt && echo "found" || echo "missing"
Expected Result: Obsidian and personal knowledge terms present
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---

- [x] 8. Create Calliope (Writing) system prompt

**What to do**:
- Create `prompts/calliope.txt`
- Specialization: documentation writing, reports, meeting notes, prose
- Greek muse of epic poetry and eloquence
- Focus on: drafting documents, summarizing, writing assistance
- Uses Question tool for: clarifying tone, audience, format

**Must NOT do**:
- Do not manage tools directly (delegates to other agents for tool access)
- Do not handle short communication (Hermes's domain)
- Do not overlap with Athena's wiki management

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`agent-development`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 14
- **Blocked By**: Task 2

**References**:
- `skills/agent-development/SKILL.md:349-378` - Standard prompt structure

**Acceptance Criteria**:

```
Scenario: Calliope prompt defines writing domain
Tool: Bash (grep)
Steps:
1. grep -qi "writ" prompts/calliope.txt && echo "found" || echo "missing"
2. grep -qi "document" prompts/calliope.txt && echo "found" || echo "missing"
3. grep -qi "report\|summar" prompts/calliope.txt && echo "found" || echo "missing"
Expected Result: Writing and documentation terms present
Evidence: grep outputs
```

**Commit**: YES (groups with Task 3)

---

### Wave 3: Tool Integration Skills

- [x] 9. Create Basecamp integration skill

**What to do**:
- Create `skills/basecamp/SKILL.md`
- Document Basecamp MCP capabilities (63 tools from georgeantonopoulos/Basecamp-MCP-Server)
- Include: projects, todos, messages, card tables, campfire, webhooks
- Provide workflow examples for common operations
- Reference MCP tool names for agent use

**Must NOT do**:
- Do not include MCP server setup instructions (managed by Nix)
- Do not duplicate general project management advice
- Do not include authentication handling

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]
  - `skill-creator`: Provides skill structure patterns and validation

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Tasks 10-13)
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- https://github.com/georgeantonopoulos/Basecamp-MCP-Server - MCP tool documentation

**Acceptance Criteria**:

```
Scenario: Basecamp skill has valid structure
Tool: Bash
Steps:
1. test -d skills/basecamp && echo "dir exists"
2. test -f skills/basecamp/SKILL.md && echo "file exists"
3. ./scripts/test-skill.sh --validate basecamp || echo "validation failed"
Expected Result: Directory and SKILL.md exist, validation passes
Evidence: Command outputs

Scenario: Basecamp skill has valid frontmatter
Tool: Bash (python)
Steps:
1. python3 -c "
import yaml
content = open('skills/basecamp/SKILL.md').read()
front = content.split('---')[1]
data = yaml.safe_load(front)
assert data['name'] == 'basecamp', 'name mismatch'
assert 'description' in data, 'missing description'
print('Valid')
"
Expected Result: YAML frontmatter valid with correct name
Evidence: Python output
```

**Commit**: YES
- Message: `feat(skills): add basecamp integration skill`
- Files: `skills/basecamp/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate basecamp`

---

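The frontmatter check above implies a minimal SKILL.md opening roughly like the following sketch; the description wording is illustrative, and the exact frontmatter schema should be taken from skill-creator:

```markdown
---
name: basecamp
description: Basecamp MCP integration - projects, todos, messages, card tables
---

# Basecamp Integration
```
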
- [x] 10. Create Outline wiki integration skill

**What to do**:
- Create `skills/outline/SKILL.md`
- Document Outline API capabilities
- Include: document CRUD, search, collections, sharing
- Provide workflow examples for knowledge management

**Must NOT do**:
- Do not include MCP server setup
- Do not duplicate wiki concepts

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://www.getoutline.com/developers - Outline API documentation

**Acceptance Criteria**:

```
Scenario: Outline skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outline && test -f skills/outline/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outline || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add outline wiki integration skill`
- Files: `skills/outline/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outline`

---

- [x] 11. Create MS Teams integration skill

**What to do**:
- Create `skills/msteams/SKILL.md`
- Document MS Teams Graph API capabilities via MCP
- Include: channels, messages, meetings, chat
- Provide workflow examples for team communication

**Must NOT do**:
- Do not include Graph API authentication flows
- Do not overlap with Outlook email functionality

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/api/resources/teams-api-overview - Teams API

**Acceptance Criteria**:

```
Scenario: MS Teams skill has valid structure
Tool: Bash
Steps:
1. test -d skills/msteams && test -f skills/msteams/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate msteams || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add ms teams integration skill`
- Files: `skills/msteams/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate msteams`

---

- [x] 12. Create Outlook email integration skill

**What to do**:
- Create `skills/outlook/SKILL.md`
- Document Outlook Graph API capabilities via MCP
- Include: mail CRUD, calendar, contacts, folders
- Provide workflow examples for email management

**Must NOT do**:
- Do not include Graph API authentication
- Do not overlap with Teams functionality

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- https://learn.microsoft.com/en-us/graph/outlook-mail-concept-overview - Outlook API

**Acceptance Criteria**:

```
Scenario: Outlook skill has valid structure
Tool: Bash
Steps:
1. test -d skills/outlook && test -f skills/outlook/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate outlook || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add outlook email integration skill`
- Files: `skills/outlook/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate outlook`

---

- [x] 13. Create Obsidian integration skill

**What to do**:
- Create `skills/obsidian/SKILL.md`
- Document Obsidian Local REST API capabilities
- Include: vault operations, note CRUD, search, daily notes
- Reference `skills/brainstorming/references/obsidian-workflow.md` for patterns
- Provide workflow examples for personal knowledge management

**Must NOT do**:
- Do not include plugin installation
- Do not duplicate general note-taking advice

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`skill-creator`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3
- **Blocks**: None
- **Blocked By**: None

**References**:
- `skills/skill-creator/SKILL.md` - Skill creation patterns
- `skills/brainstorming/SKILL.md` - Example skill structure
- `skills/brainstorming/references/obsidian-workflow.md` - Existing Obsidian patterns
- https://coddingtonbear.github.io/obsidian-local-rest-api/ - Local REST API docs

**Acceptance Criteria**:

```
Scenario: Obsidian skill has valid structure
Tool: Bash
Steps:
1. test -d skills/obsidian && test -f skills/obsidian/SKILL.md && echo "exists"
2. ./scripts/test-skill.sh --validate obsidian || echo "failed"
Expected Result: Valid skill structure
Evidence: Command output
```

**Commit**: YES
- Message: `feat(skills): add obsidian integration skill`
- Files: `skills/obsidian/SKILL.md`
- Pre-commit: `./scripts/test-skill.sh --validate obsidian`

---

### Wave 4: Validation

- [x] 14. Create agent validation script

**What to do**:
- Create `scripts/validate-agents.sh`
- Validate agents.json structure and required fields
- Verify all referenced prompt files exist
- Check prompt files are non-empty
- Integrate with existing test-skill.sh patterns

**Must NOT do**:
- Do not require MCP servers for validation
- Do not perform functional agent testing (just structural)

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Sequential (Wave 4)
- **Blocks**: None
- **Blocked By**: Tasks 1, 3-8

**References**:
- `scripts/test-skill.sh` - Existing validation script pattern

**Acceptance Criteria**:

```
Scenario: Validation script is executable
Tool: Bash
Steps:
1. test -x scripts/validate-agents.sh && echo "executable" || echo "not executable"
2. Assert: Output is "executable"
Expected Result: Script has execute permission
Evidence: Command output

Scenario: Validation script runs successfully
Tool: Bash
Steps:
1. ./scripts/validate-agents.sh
2. Assert: Exit code is 0
Expected Result: All validations pass
Evidence: Script output

Scenario: Validation script catches missing files
Tool: Bash
Steps:
1. mv prompts/chiron.txt prompts/chiron.txt.bak
2. ./scripts/validate-agents.sh
3. Assert: Exit code is NOT 0
4. mv prompts/chiron.txt.bak prompts/chiron.txt
Expected Result: Script detects missing prompt file
Evidence: Error output
```
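
A minimal sketch of what such a script could look like. The `prompt` field name and the exact shape of `agents/agents.json` (a top-level JSON array) are assumptions, since the plan does not pin down the schema; the demo at the bottom runs against a temporary fixture so the sketch is self-contained.

```shell
#!/usr/bin/env sh
# validate-agents.sh (sketch): structural checks only — no MCP, no functional testing.
set -eu

validate() {
  agents_json="$1"; prompts_dir="$2"
  # Check 1: agents.json must parse as JSON
  python3 -c "import json,sys; json.load(open(sys.argv[1]))" "$agents_json" || {
    echo "FAIL: invalid JSON"; return 1; }
  # Check 2: every referenced prompt file must exist and be non-empty
  for p in $(python3 -c "import json,sys; print('\n'.join(a['prompt'] for a in json.load(open(sys.argv[1]))))" "$agents_json"); do
    [ -s "$prompts_dir/$p" ] || { echo "FAIL: missing or empty $p"; return 1; }
  done
  echo "OK"
}

# Self-contained demo against a temporary fixture
tmp=$(mktemp -d)
mkdir -p "$tmp/prompts"
echo "You are Chiron." > "$tmp/prompts/chiron.txt"
printf '[{"name":"chiron","prompt":"chiron.txt"}]' > "$tmp/agents.json"
validate "$tmp/agents.json" "$tmp/prompts"
rm -r "$tmp"
```

In the real script the fixture section would be replaced by `validate agents/agents.json prompts`, so a non-zero exit code satisfies the "catches missing files" scenario above.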

**Commit**: YES
- Message: `feat(scripts): add agent validation script`
- Files: `scripts/validate-agents.sh`
- Pre-commit: `./scripts/validate-agents.sh`

---

## Commit Strategy

| After Task | Message | Files | Verification |
|------------|---------|-------|--------------|
| 1, 2 | `feat(agents): add chiron agent framework with 6 agents` | agents/agents.json, prompts/ | `python3 -c "import json; json.load(open('agents/agents.json'))"` |
| 3-8 | `feat(prompts): add chiron and subagent system prompts` | prompts/*.txt | `for f in prompts/*.txt; do test -s "$f"; done` |
| 9 | `feat(skills): add basecamp integration skill` | skills/basecamp/ | `./scripts/test-skill.sh --validate basecamp` |
| 10 | `feat(skills): add outline wiki integration skill` | skills/outline/ | `./scripts/test-skill.sh --validate outline` |
| 11 | `feat(skills): add ms teams integration skill` | skills/msteams/ | `./scripts/test-skill.sh --validate msteams` |
| 12 | `feat(skills): add outlook email integration skill` | skills/outlook/ | `./scripts/test-skill.sh --validate outlook` |
| 13 | `feat(skills): add obsidian integration skill` | skills/obsidian/ | `./scripts/test-skill.sh --validate obsidian` |
| 14 | `feat(scripts): add agent validation script` | scripts/validate-agents.sh | `./scripts/validate-agents.sh` |

---

## Success Criteria

### Verification Commands
```bash
# Validate agents.json
python3 -c "import json; json.load(open('agents/agents.json'))" # Expected: exit 0

# Count agents
python3 -c "import json; print(len(json.load(open('agents/agents.json'))))" # Expected: 6

# Validate all prompts exist
for f in chiron chiron-forge hermes athena apollo calliope; do
  test -s prompts/$f.txt && echo "$f: OK" || echo "$f: MISSING"
done

# Validate all skills
./scripts/test-skill.sh --validate # Expected: all pass

# Run full validation
./scripts/validate-agents.sh # Expected: exit 0
```

### Final Checklist
- [x] All 6 agents defined in agents.json
- [x] All 6 prompt files exist and are non-empty
- [x] All 5 skills have valid SKILL.md with YAML frontmatter
- [x] validate-agents.sh passes
- [x] test-skill.sh --validate passes
- [x] No MCP configuration in repo
- [x] No inline prompts in agents.json
- [x] All agent names are Greek mythology (not conflicting with Oh My OpenCode)

@@ -1,897 +0,0 @@

# Memory System for AGENTS + Obsidian CODEX

## TL;DR

> **Quick Summary**: Build a dual-layer memory system equivalent to openclaw's — Mem0 for fast semantic search/auto-recall + Obsidian CODEX vault for human-readable, versioned knowledge. Memories are stored in both layers and cross-referenced via IDs.
>
> **Deliverables**:
> - New `skills/memory/SKILL.md` — Core orchestration skill (auto-capture, auto-recall, dual-layer sync)
> - New `80-memory/` folder in CODEX vault with category subfolders + memory template
> - Obsidian MCP server configuration (cyanheads/obsidian-mcp-server)
> - Updated skills (mem0-memory, obsidian), Apollo prompt, CODEX docs, user profile
>
> **Estimated Effort**: Medium (9 tasks across config/docs, no traditional code)
> **Parallel Execution**: YES — 4 waves
> **Critical Path**: Task 1 (vault infra) → Task 4 (memory skill) → Task 9 (validation)

---

## Context

### Original Request
Adapt openclaw's memory system for the opencode AGENTS repo, integrated with the Obsidian CODEX vault at `~/CODEX`. The vault should serve as a "second brain" for both the user AND AI agents.

### Interview Summary
**Key Discussions**:
- Analyzed openclaw's 3-layer memory architecture (SQLite+vectors builtin, memory-core plugin, memory-lancedb plugin with auto-capture/auto-recall)
- User confirmed Mem0 is available self-hosted at localhost:8000 — just needs spinning up
- User chose `80-memory/` as dedicated vault folder with category subfolders
- User chose auto+explicit capture (LLM extraction at session end + "remember this" commands)
- User chose agent QA only (no unit test infrastructure — repo is config/docs only)
- No Obsidian MCP server currently configured — plan to add cyanheads/obsidian-mcp-server

**Research Findings**:
- cyanheads/obsidian-mcp-server (363 stars) — Best MCP server: frontmatter management, vault cache, search with pagination, tag management
- GitHub Copilot's memory system: citation-based verification pattern (Phase 2 candidate)
- Production recommendation: dual-layer (operational memory + documented knowledge)
- Mem0 provides semantic search, user_id/agent_id/run_id scoping, metadata support, `/health` endpoint
- Auto-capture best practice: max 3 per session, LLM extraction > regex patterns

### Metis Review
**Identified Gaps** (addressed):
- 80-memory/ subfolders vs flat pattern: Resolved — follows `30-resources/` pattern (subfolders by TYPE), not `50-zettelkasten/` flat pattern
- Mem0 health check: Added prerequisite validation step
- Error handling undefined: Defined — Mem0 unavailable → skip, Obsidian unavailable → Mem0 only
- Deployment order: Defined — CODEX first → MCP config → skills → validation
- Scope creep risk: Locked down — citation verification, memory deletion/lifecycle, dashboards all Phase 2
- Agent role clarity: Defined — memory skill loadable by any agent, Apollo is primary memory specialist

---

## Work Objectives

### Core Objective
Build a dual-layer memory system for opencode agents that stores memories in Mem0 (semantic search, operational) AND the Obsidian CODEX vault (human-readable, versioned, wiki-linked). Equivalent in capability to openclaw's memory system.

### Concrete Deliverables
**AGENTS repo** (`~/p/AI/AGENTS`):
- `skills/memory/SKILL.md` — NEW: Core memory skill
- `skills/memory/references/mcp-config.md` — NEW: Obsidian MCP server config documentation
- `skills/mem0-memory/SKILL.md` — UPDATED: Add categories, dual-layer sync
- `skills/obsidian/SKILL.md` — UPDATED: Add 80-memory/ conventions
- `prompts/apollo.txt` — UPDATED: Add memory management responsibilities
- `context/profile.md` — UPDATED: Add memory system configuration

**CODEX vault** (`~/CODEX`):
- `80-memory/` — NEW: Folder with subfolders (preferences/, facts/, decisions/, entities/, other/)
- `templates/memory.md` — NEW: Memory note template
- `tag-taxonomy.md` — UPDATED: Add #memory/* tags
- `AGENTS.md` — UPDATED: Add 80-memory/ docs, folder decision tree, memory workflows
- `README.md` — UPDATED: Add 80-memory/ to folder structure

**Infrastructure** (Nix home-manager — outside AGENTS repo):
- Add cyanheads/obsidian-mcp-server to opencode.json MCP section

### Definition of Done
- [x] All 11 files created/updated as specified
- [x] `curl http://localhost:8000/health` returns 200 (Mem0 running)
- [~] `curl http://127.0.0.1:27124/vault-info` returns vault info (Obsidian REST API) — *requires the Obsidian desktop app to be open*
- [x] `./scripts/test-skill.sh --validate` passes for new/updated skills
- [x] 80-memory/ folder exists in CODEX vault with 5 subfolders
- [x] Memory template creates valid notes with correct frontmatter

### Must Have
- Dual-layer storage: every memory in Mem0 AND Obsidian
- Auto-capture at session end (LLM-based, max 3 per session)
- Explicit "remember this" command support
- Auto-recall: inject relevant memories before agent starts
- 5 categories: preference, fact, decision, entity, other
- Health checks before memory operations
- Cross-reference: mem0_id in Obsidian frontmatter, obsidian_ref in Mem0 metadata
- Error handling: graceful degradation when either layer unavailable

### Must NOT Have (Guardrails)
- NO citation-based memory verification (Phase 2)
- NO memory expiration/lifecycle management (Phase 2)
- NO memory deletion/forget functionality (Phase 2)
- NO memory search UI or Obsidian dashboards (Phase 2)
- NO conflict resolution UI between layers (manual edit only)
- NO unit tests (repo has no test infrastructure — agent QA only)
- NO subfolders in 50-zettelkasten/ or 70-tasks/ (respect flat structure)
- NO new memory categories beyond the 5 defined
- NO modifications to existing Obsidian templates (only ADD memory.md)
- NO changes to agents.json (no new agents or agent config changes)

---

## Verification Strategy

> **UNIVERSAL RULE: ZERO HUMAN INTERVENTION**
>
> ALL tasks MUST be verifiable WITHOUT any human action.
> Every criterion is verifiable by running a command or checking file existence.

### Test Decision
- **Infrastructure exists**: NO (config-only repo)
- **Automated tests**: None (agent QA only)
- **Framework**: N/A

### Agent-Executed QA Scenarios (MANDATORY — ALL tasks)

Verification tools by deliverable type:

| Type | Tool | How Agent Verifies |
|------|------|-------------------|
| Vault folders/files | Bash (ls, test -f) | Check existence, content |
| Skill YAML frontmatter | Bash (grep, python) | Parse and validate fields |
| Mem0 API | Bash (curl) | Send requests, parse JSON |
| Obsidian REST API | Bash (curl) | Read notes, check frontmatter |
| MCP server | Bash (npx) | Test server startup |
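
As one concrete instance of the "Bash (grep, python)" row, a frontmatter check could parse the block between the two `---` markers and assert the required fields. This is a hypothetical sketch without external dependencies; the required field set is taken from the memory skill spec later in this plan:

```python
# Minimal SKILL.md frontmatter validator sketch (no YAML library needed).
REQUIRED = {"name", "description", "compatibility"}

def frontmatter_fields(text: str) -> set:
    """Return the top-level keys of the YAML frontmatter block, or an empty set."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return set()                        # no frontmatter at all
    fields = set()
    for line in lines[1:]:
        if line.strip() == "---":           # closing marker: block is complete
            return fields
        if line and not line.startswith((" ", "\t")) and ":" in line:
            fields.add(line.split(":", 1)[0].strip())
    return set()                            # never closed -> treat as invalid

sample = """---
name: memory
description: "Dual-layer memory system (Mem0 + Obsidian CODEX)."
compatibility: opencode
---
# Memory
"""

missing = REQUIRED - frontmatter_fields(sample)
print("OK" if not missing else f"MISSING: {sorted(missing)}")  # → OK
```

In a QA scenario the agent would read the real `SKILL.md` with `open(path).read()` instead of the inline sample and fail the check when `missing` is non-empty.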

---

## Execution Strategy

### Parallel Execution Waves

```
Wave 1 (Start Immediately — no dependencies):
├── Task 1: CODEX vault memory infrastructure (folders, template, tags)
└── Task 3: Obsidian MCP server config documentation

Wave 2 (After Wave 1 — depends on vault structure existing):
├── Task 2: CODEX vault documentation updates (AGENTS.md, README.md)
├── Task 4: Create core memory skill (skills/memory/SKILL.md)
├── Task 5: Update Mem0 memory skill
└── Task 6: Update Obsidian skill

Wave 3 (After Wave 2 — depends on skill content for prompt/profile):
├── Task 7: Update Apollo agent prompt
└── Task 8: Update user context profile

Wave 4 (After all — final validation):
└── Task 9: End-to-end validation

Critical Path: Task 1 → Task 4 → Task 9
Parallel Speedup: ~50% faster than sequential
```

### Dependency Matrix

| Task | Depends On | Blocks | Can Parallelize With |
|------|------------|--------|---------------------|
| 1 | None | 2, 4, 5, 6 | 3 |
| 2 | 1 | 9 | 4, 5, 6 |
| 3 | None | 4 | 1 |
| 4 | 1, 3 | 7, 8, 9 | 5, 6 |
| 5 | 1 | 9 | 4, 6 |
| 6 | 1 | 9 | 4, 5 |
| 7 | 4 | 9 | 8 |
| 8 | 4 | 9 | 7 |
| 9 | ALL | None | None (final) |

### Agent Dispatch Summary

| Wave | Tasks | Recommended Agents |
|------|-------|-------------------|
| 1 | 1, 3 | task(category="quick", load_skills=["obsidian"], run_in_background=false) |
| 2 | 2, 4, 5, 6 | dispatch parallel: task(category="unspecified-high") for Task 4; task(category="quick") for 2, 5, 6 |
| 3 | 7, 8 | task(category="quick", run_in_background=false) |
| 4 | 9 | task(category="unspecified-low", run_in_background=false) |

---

## TODOs

- [x] 1. CODEX Vault Memory Infrastructure

**What to do**:
- Create `80-memory/` folder with 5 subfolders: `preferences/`, `facts/`, `decisions/`, `entities/`, `other/`
- Create each subfolder with a `.gitkeep` file so git tracks empty directories
- Create `templates/memory.md` — memory note template with frontmatter:

```yaml
---
type: memory
category: # preference | fact | decision | entity | other
mem0_id: # Mem0 memory ID (e.g., "mem_abc123")
source: explicit # explicit | auto-capture
importance: # critical | high | medium | low
created: <% tp.date.now("YYYY-MM-DD") %>
updated: <% tp.date.now("YYYY-MM-DD") %>
tags:
- memory
sync_targets: []
---

# Memory Title

## Content
<!-- The actual memory content -->

## Context
<!-- When/where this was learned, conversation context -->

## Related
<!-- Wiki links to related notes -->
```

- Update `tag-taxonomy.md` — add a `#memory` tag category with subtags, including usage examples and definitions for each category:

```
#memory
├── #memory/preference
├── #memory/fact
├── #memory/decision
├── #memory/entity
└── #memory/other
```

**Must NOT do**:
- Do NOT create subfolders inside 50-zettelkasten/ or 70-tasks/
- Do NOT modify existing templates (only ADD memory.md)
- Do NOT use Templater syntax that doesn't match existing templates

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Simple file creation, no complex logic
- **Skills**: [`obsidian`]
  - `obsidian`: Vault conventions, frontmatter patterns, template structure

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 3)
- **Blocks**: Tasks 2, 4, 5, 6
- **Blocked By**: None

**References**:

**Pattern References**:
- `/home/m3tam3re/CODEX/30-resources/` — Subfolder-by-type pattern to follow (bookmarks/, literature/, meetings/, people/, recipes/)
- `/home/m3tam3re/CODEX/templates/task.md` — Template frontmatter pattern (type, status, created, updated, tags, sync_targets)
- `/home/m3tam3re/CODEX/templates/bookmark.md` — Simpler template example

**Documentation References**:
- `/home/m3tam3re/CODEX/AGENTS.md:22-27` — Frontmatter conventions (required fields: type, created, updated)
- `/home/m3tam3re/CODEX/AGENTS.md:163-176` — Template locations table (add memory row)
- `/home/m3tam3re/CODEX/tag-taxonomy.md:1-18` — Tag structure rules (max 3 levels, kebab-case)

**WHY Each Reference Matters**:
- `30-resources/` shows that subfolders-by-type is the established vault pattern for categorized content
- `task.md` template shows the exact frontmatter field set expected by the vault
- `tag-taxonomy.md` rules show the 3-level max hierarchy constraint for new tags

**Acceptance Criteria**:

**Agent-Executed QA Scenarios:**

```
Scenario: Verify 80-memory folder structure
Tool: Bash
Steps:
1. test -d /home/m3tam3re/CODEX/80-memory/preferences
2. test -d /home/m3tam3re/CODEX/80-memory/facts
3. test -d /home/m3tam3re/CODEX/80-memory/decisions
4. test -d /home/m3tam3re/CODEX/80-memory/entities
5. test -d /home/m3tam3re/CODEX/80-memory/other
Expected Result: All 5 directories exist (exit code 0 for each)
Evidence: Shell output captured

Scenario: Verify memory template exists with correct frontmatter
Tool: Bash
Steps:
1. test -f /home/m3tam3re/CODEX/templates/memory.md
2. grep "type: memory" /home/m3tam3re/CODEX/templates/memory.md
3. grep "category:" /home/m3tam3re/CODEX/templates/memory.md
4. grep "mem0_id:" /home/m3tam3re/CODEX/templates/memory.md
Expected Result: File exists and contains required frontmatter fields
Evidence: grep output captured

Scenario: Verify tag-taxonomy updated with memory tags
Tool: Bash
Steps:
1. grep "#memory" /home/m3tam3re/CODEX/tag-taxonomy.md
2. grep "#memory/preference" /home/m3tam3re/CODEX/tag-taxonomy.md
3. grep "#memory/fact" /home/m3tam3re/CODEX/tag-taxonomy.md
Expected Result: All memory tags present in taxonomy
Evidence: grep output captured
```

**Commit**: YES
- Message: `feat(vault): add 80-memory folder structure and memory template`
- Files: `80-memory/`, `templates/memory.md`, `tag-taxonomy.md`
- Repo: `~/CODEX`

---

- [x] 2. CODEX Vault Documentation Updates

**What to do**:
- Update `AGENTS.md`:
  - Add `80-memory/` row to the Folder Structure table (line ~11)
  - Add a `#### 80-memory` section in Folder Details (after the 70-tasks section, ~line 161)
  - Update the Folder Decision Tree to include a memory branch: `Is it a memory/learned fact? → YES → 80-memory/`
  - Add a Memory template row to the Template Locations table (line ~165)
  - Add a Memory Workflows section (after Sync Workflow): create memory, retrieve memory, dual-layer sync
- Update `README.md`:
  - Add `80-memory/` to the folder structure diagram with subfolders
  - Add an `80-memory/` row to the Folder Details section
  - Add the memory template to the Templates table

**Must NOT do**:
- Do NOT rewrite existing sections — only ADD new content
- Do NOT remove any existing folder/template documentation

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Documentation additions to existing files, following established patterns
- **Skills**: [`obsidian`]
  - `obsidian`: Vault documentation conventions

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 4, 5, 6)
- **Blocks**: Task 9
- **Blocked By**: Task 1 (needs folder structure to reference)

**References**:

**Pattern References**:
- `/home/m3tam3re/CODEX/AGENTS.md:110-161` — Existing Folder Details sections to follow pattern
- `/home/m3tam3re/CODEX/AGENTS.md:75-108` — Folder Decision Tree format
- `/home/m3tam3re/CODEX/README.md` — Folder structure diagram format

**WHY Each Reference Matters**:
- AGENTS.md folder details show the exact format: Purpose, Structure (flat/subfolders), Key trait, When to use, Naming convention
- Decision tree shows the exact `├─ YES →` format to follow

**Acceptance Criteria**:

```
Scenario: Verify AGENTS.md has 80-memory documentation
Tool: Bash
Steps:
1. grep "80-memory" /home/m3tam3re/CODEX/AGENTS.md
2. grep "Is it a memory" /home/m3tam3re/CODEX/AGENTS.md
3. grep "templates/memory.md" /home/m3tam3re/CODEX/AGENTS.md
Expected Result: All three patterns found
Evidence: grep output

Scenario: Verify README.md has 80-memory in structure
Tool: Bash
Steps:
1. grep "80-memory" /home/m3tam3re/CODEX/README.md
2. grep "preferences/" /home/m3tam3re/CODEX/README.md
Expected Result: Folder and subfolder documented
Evidence: grep output
```

**Commit**: YES
- Message: `docs(vault): add 80-memory documentation to AGENTS.md and README.md`
- Files: `AGENTS.md`, `README.md`
- Repo: `~/CODEX`

---

- [x] 3. Obsidian MCP Server Configuration Documentation

**What to do**:
- Create `skills/memory/references/mcp-config.md` documenting:
  - cyanheads/obsidian-mcp-server configuration for opencode.json
  - Required environment variables: `OBSIDIAN_API_KEY`, `OBSIDIAN_BASE_URL`, `OBSIDIAN_VERIFY_SSL`, `OBSIDIAN_ENABLE_CACHE`
  - opencode.json MCP section snippet:

    ```json
    "Obsidian-Vault": {
      "command": ["npx", "obsidian-mcp-server"],
      "environment": {
        "OBSIDIAN_API_KEY": "<your-api-key>",
        "OBSIDIAN_BASE_URL": "http://127.0.0.1:27123",
        "OBSIDIAN_VERIFY_SSL": "false",
        "OBSIDIAN_ENABLE_CACHE": "true"
      },
      "enabled": true,
      "type": "local"
    }
    ```

  - A Nix home-manager snippet showing how to add it to `programs.opencode.settings.mcp`
  - A note that this requires `home-manager switch` after adding
  - Available MCP tools list: obsidian_read_note, obsidian_update_note, obsidian_global_search, obsidian_manage_frontmatter, obsidian_manage_tags, obsidian_list_notes, obsidian_delete_note, obsidian_search_replace
  - How to get the API key from Obsidian: Settings → Local REST API plugin
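
The Nix side could look roughly like the following — a sketch assuming the opencode home-manager module exposes the MCP config under `programs.opencode.settings.mcp` as described above; the attribute names should be checked against the module's actual options before use:

```nix
# home-manager fragment (sketch) — run `home-manager switch` to apply
programs.opencode.settings.mcp."Obsidian-Vault" = {
  command = [ "npx" "obsidian-mcp-server" ];
  environment = {
    OBSIDIAN_API_KEY = "<your-api-key>";
    OBSIDIAN_BASE_URL = "http://127.0.0.1:27123";
    OBSIDIAN_VERIFY_SSL = "false";
    OBSIDIAN_ENABLE_CACHE = "true";
  };
  enabled = true;
  type = "local";
};
```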

**Must NOT do**:
- Do NOT directly modify `~/.config/opencode/opencode.json` (Nix-managed)
- Do NOT modify `agents/agents.json`

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Creating a single reference doc
- **Skills**: [`obsidian`]
  - `obsidian`: Obsidian REST API configuration knowledge

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Task 1)
- **Blocks**: Task 4
- **Blocked By**: None

**References**:

**Pattern References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:156-166` — Existing API reference pattern
- `/home/m3tam3re/.config/opencode/opencode.json:77-127` — Current MCP config format (Exa, Basecamp, etc.)

**External References**:
- GitHub: `https://github.com/cyanheads/obsidian-mcp-server` — Config docs, env vars, tool list
- npm: `npx obsidian-mcp-server` — Installation method

**WHY Each Reference Matters**:
- opencode.json MCP section shows the exact JSON format needed (command array, environment, enabled, type)
- cyanheads repo shows the required env vars and their defaults

**Acceptance Criteria**:

```
Scenario: Verify MCP config reference file exists
Tool: Bash
Steps:
1. test -f /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
2. grep "obsidian-mcp-server" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
3. grep "OBSIDIAN_API_KEY" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
4. grep "home-manager" /home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md
Expected Result: File exists with MCP config, env vars, and Nix instructions
Evidence: grep output
```

**Commit**: YES (groups with Task 4)
- Message: `feat(memory): add core memory skill and MCP config reference`
- Files: `skills/memory/SKILL.md`, `skills/memory/references/mcp-config.md`
- Repo: `~/p/AI/AGENTS`

---

- [x] 4. Create Core Memory Skill

**What to do**:
- Create `skills/memory/SKILL.md` — the central orchestration skill for the dual-layer memory system
- YAML frontmatter:

  ```yaml
  ---
  name: memory
  description: "Dual-layer memory system (Mem0 + Obsidian CODEX). Use when: (1) storing information for future recall ('remember this'), (2) auto-capturing session insights, (3) recalling past decisions/preferences/facts, (4) injecting relevant context before tasks. Triggers: 'remember', 'recall', 'what do I know about', 'memory', session end."
  compatibility: opencode
  ---
  ```

- Sections to include:
  1. **Overview** — Dual-layer architecture (Mem0 operational + Obsidian documented)
  2. **Prerequisites** — Mem0 running at localhost:8000, Obsidian MCP configured (reference mcp-config.md)
  3. **Memory Categories** — 5 categories with definitions and examples:
     - preference: Personal preferences (UI, workflow, communication style)
     - fact: Objective information about user/work (role, tech stack, constraints)
     - decision: Architectural/tool choices made (with rationale)
     - entity: People, organizations, systems, concepts
     - other: Everything else
  4. **Workflow 1: Store Memory (Explicit)** — User says "remember X":
     - Classify category
     - POST to Mem0 `/memories` with user_id, metadata (category, source: "explicit")
     - Create Obsidian note in `80-memory/<category>/` using the memory template
     - Cross-reference: mem0_id in Obsidian frontmatter, obsidian_ref in Mem0 metadata
  5. **Workflow 2: Recall Memory** — User asks "what do I know about X":
     - POST to Mem0 `/search` with the query
     - Return results with Obsidian note paths for reference
  6. **Workflow 3: Auto-Capture (Session End)** — Automatic extraction:
     - Scan the conversation for memory-worthy content (preferences stated, decisions made, important facts)
     - Select the top 3 highest-value memories
     - For each: store in Mem0 AND create an Obsidian note (source: "auto-capture")
     - Present to user: "I captured these memories: [list]. Confirm or reject?"
  7. **Workflow 4: Auto-Recall (Session Start)** — Context injection:
     - On session start, search Mem0 with the user's first message
     - If relevant memories are found (score > 0.7), inject them as `<relevant-memories>` context
     - Limit to the top 5 most relevant
  8. **Error Handling** — Graceful degradation:
     - Mem0 unavailable: `curl http://localhost:8000/health` fails → skip all memory ops, warn user
     - Obsidian unavailable: Store in Mem0 only, log that Obsidian sync failed
     - Both unavailable: Skip memory entirely, continue without memory features
  9. **Integration** — How other skills/agents use memory:
     - Load the `memory` skill to access memory workflows
     - Apollo is the primary memory specialist
     - Any agent can search/store via the Mem0 REST API patterns in the `mem0-memory` skill
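
To make the dual-layer store concrete, the payloads the two layers would exchange can be sketched as plain dicts, with no network calls. The Obsidian path and frontmatter fields mirror the Task 1 template; the exact Mem0 `/memories` body shape (`messages`, `user_id`, `metadata`) is an assumption to be checked against the `mem0-memory` skill:

```python
# Sketch of the dual-layer store for "remember X" — builds payloads only.
from datetime import date

# Category -> vault subfolder (per the 80-memory/ layout in Task 1)
FOLDERS = {"preference": "preferences", "fact": "facts", "decision": "decisions",
           "entity": "entities", "other": "other"}

def build_mem0_payload(content: str, category: str, note_path: str) -> dict:
    """Body for POST http://localhost:8000/memories (field names assumed)."""
    return {
        "messages": [{"role": "user", "content": content}],
        "user_id": "m3tam3re",
        "metadata": {
            "category": category,        # one of the 5 defined categories
            "source": "explicit",
            "obsidian_ref": note_path,   # cross-reference into the vault
        },
    }

def build_obsidian_note(title: str, content: str, category: str, mem0_id: str):
    """Return (vault path, note body) following templates/memory.md."""
    today = date.today().isoformat()
    path = f"80-memory/{FOLDERS[category]}/{title}.md"
    body = (
        f"---\ntype: memory\ncategory: {category}\nmem0_id: {mem0_id}\n"
        f"source: explicit\ncreated: {today}\nupdated: {today}\n"
        f"tags:\n- memory\n- memory/{category}\n---\n\n# {title}\n\n"
        f"## Content\n{content}\n"
    )
    return path, body

path, note = build_obsidian_note("Prefers dark mode", "User prefers dark mode UIs.",
                                 "preference", "mem_abc123")
payload = build_mem0_payload("User prefers dark mode UIs.", "preference", path)
print(path)  # → 80-memory/preferences/Prefers dark mode.md
```

The skill itself would send `payload` via curl to Mem0, write `note` via the Obsidian MCP/REST layer, then patch `mem0_id` once Mem0 returns the real ID — preserving the cross-reference in both directions.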
|
||||
|
||||
**Must NOT do**:
|
||||
- Do NOT implement citation-based verification
|
||||
- Do NOT implement memory deletion/forget
|
||||
- Do NOT add memory expiration logic
|
||||
- Do NOT create dashboards or search UI
|
||||
|
||||
**Recommended Agent Profile**:
- **Category**: `unspecified-high`
  - Reason: Core deliverable requiring careful architecture documentation, must be comprehensive
- **Skills**: [`obsidian`, `mem0-memory`]
  - `obsidian`: Vault conventions, template patterns, frontmatter standards
  - `mem0-memory`: Mem0 REST API patterns, endpoint details

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 2, 5, 6)
- **Blocks**: Tasks 7, 8, 9
- **Blocked By**: Tasks 1, 3
**References**:

**Pattern References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md` — Full file: Mem0 REST API patterns, endpoint table, identity scopes, workflow patterns
- `/home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md` — Full file: Obsidian REST API patterns, create/read/update note workflows, frontmatter conventions
- `/home/m3tam3re/p/AI/AGENTS/skills/reflection/SKILL.md` — Skill structure pattern (overview, workflows, integration)

**API References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:13-21` — Quick Reference endpoint table
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md:90-109` — Identity scopes (user_id, agent_id, run_id)

**Documentation References**:
- `/home/m3tam3re/CODEX/AGENTS.md:22-27` — Frontmatter conventions for vault notes
- `/home/m3tam3re/p/AI/AGENTS/skills/memory/references/mcp-config.md` — MCP server config (created in Task 3)

**External References**:
- OpenClaw reference: `/home/m3tam3re/p/AI/openclaw/extensions/memory-lancedb/index.ts` — Auto-capture regex patterns, auto-recall injection, importance scoring (use as inspiration, not a copy)

**WHY Each Reference Matters**:
- mem0-memory SKILL.md provides the exact API endpoints and patterns to reference in dual-layer sync workflows
- obsidian SKILL.md provides the vault file creation patterns (curl commands, path encoding)
- openclaw memory-lancedb shows the auto-capture/auto-recall architecture to adapt

**Acceptance Criteria**:
```
Scenario: Validate skill YAML frontmatter
Tool: Bash
Steps:
1. test -f /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "^name: memory$" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "^compatibility: opencode$" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "description:" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: Valid YAML frontmatter with name, description, compatibility
Evidence: grep output

Scenario: Verify skill contains all required workflows
Tool: Bash
Steps:
1. grep -c "## Workflow" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "Auto-Capture" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "Auto-Recall" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "Error Handling" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
5. grep "preference" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: At least 4 workflow sections, plus auto-capture, auto-recall, error handling, and categories
Evidence: grep output

Scenario: Verify dual-layer sync pattern documented
Tool: Bash
Steps:
1. grep "mem0_id" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
2. grep "obsidian_ref" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
3. grep "localhost:8000" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
4. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/memory/SKILL.md
Expected Result: Cross-reference IDs and both layer endpoints documented
Evidence: grep output
```
**Commit**: YES (groups with Task 3)
- Message: `feat(memory): add core memory skill and MCP config reference`
- Files: `skills/memory/SKILL.md`, `skills/memory/references/mcp-config.md`
- Repo: `~/p/AI/AGENTS`

---
- [x] 5. Update Mem0 Memory Skill

**What to do**:
- Add a "Memory Categories" section after Identity Scopes (line ~109):
  - Table: category name, definition, Obsidian path, example
  - Metadata pattern for categories: `{"category": "preference", "source": "explicit|auto-capture"}`
- Add a "Dual-Layer Sync" section after Workflow Patterns:
  - After storing to Mem0, also create an Obsidian note in `80-memory/<category>/`
  - Include mem0_id from the response in the Obsidian note frontmatter
  - Include the obsidian_ref path in Mem0 metadata via update
- Add a "Health Check" workflow: check `/health` before any memory operations
- Add an "Error Handling" section: what to do when Mem0 is unavailable
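The dual-layer cross-reference can be sketched as two small builders: the Obsidian frontmatter carrying `mem0_id`, and the metadata patch writing `obsidian_ref` back to Mem0. Field names follow the plan; the exact frontmatter layout and the plural folder names are assumptions.

```python
# Sketch of the dual-layer cross-reference after Mem0 returns an id.

def build_note(category, slug, mem0_id):
    """Build the Obsidian vault path and frontmatter for a memory note."""
    path = f"80-memory/{category}s/{slug}.md"  # e.g. 80-memory/preferences/
    frontmatter = "\n".join([
        "---",
        "type: memory",
        f"category: {category}",
        f"mem0_id: {mem0_id}",
        "---",
    ])
    return path, frontmatter

def mem0_metadata_patch(path):
    """Metadata update sent back to Mem0 so each layer points at the other."""
    return {"metadata": {"obsidian_ref": path}}

path, fm = build_note("preference", "prefers-dark-mode", "mem-123")
print(path)
print(mem0_metadata_patch(path)["metadata"]["obsidian_ref"])
```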
**Must NOT do**:
- Do NOT delete existing content
- Do NOT change the YAML frontmatter description (triggers)
- Do NOT change existing API endpoint documentation

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: Adding sections to an existing well-structured file
- **Skills**: [`mem0-memory`]
  - `mem0-memory`: Existing skill patterns to extend

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2 (with Tasks 2, 4, 6)
- **Blocks**: Task 9
- **Blocked By**: Task 1
**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md` — Full file: current content to extend (preserve ALL existing content)

**Acceptance Criteria**:
```
Scenario: Verify categories added to mem0-memory skill
Tool: Bash
Steps:
1. grep "Memory Categories" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
2. grep "preference" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
3. grep "Dual-Layer" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
4. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/mem0-memory/SKILL.md
Expected Result: New sections present alongside existing content
Evidence: grep output
```
**Commit**: YES
- Message: `feat(mem0-memory): add memory categories and dual-layer sync patterns`
- Files: `skills/mem0-memory/SKILL.md`
- Repo: `~/p/AI/AGENTS`

---
- [x] 6. Update Obsidian Skill

**What to do**:
- Add a "Memory Folder Conventions" section (after Best Practices, ~line 228):
  - Document the `80-memory/` structure with 5 subfolders
  - Memory note naming: kebab-case (e.g., `prefers-dark-mode.md`)
  - Required frontmatter fields for memory notes (type, category, mem0_id, etc.)
- Add a "Memory Note Workflows" section:
  - Create memory note: POST to the vault REST API with memory template content
  - Read memory note: GET with path encoding for `80-memory/` paths
  - Search memories: search within an `80-memory/` path filter
- Update the Integration table to include the memory skill handoff
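The path-encoding detail matters because memory note paths contain `/` separators and may contain spaces. A minimal sketch of building a vault URL, assuming the Local REST API's `/vault/<path>` endpoint shape described in the obsidian skill:

```python
# Sketch of URL construction for Obsidian Local REST API note operations.
from urllib.parse import quote

BASE = "http://127.0.0.1:27124"

def note_url(vault_path):
    # Percent-encode the path but keep the `/` separators intact.
    return f"{BASE}/vault/{quote(vault_path, safe='/')}"

print(note_url("80-memory/preferences/prefers dark mode.md"))
```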
**Must NOT do**:
- Do NOT change existing content or workflows
- Do NOT modify the YAML frontmatter

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: [`obsidian`]

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 2
- **Blocks**: Task 9
- **Blocked By**: Task 1
**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md` — Full file: current content to extend

**Acceptance Criteria**:
```
Scenario: Verify memory conventions added to obsidian skill
Tool: Bash
Steps:
1. grep "Memory Folder" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
2. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
3. grep "mem0_id" /home/m3tam3re/p/AI/AGENTS/skills/obsidian/SKILL.md
Expected Result: Memory folder docs and frontmatter patterns present
Evidence: grep output
```
**Commit**: YES
- Message: `feat(obsidian): add memory folder conventions and workflows`
- Files: `skills/obsidian/SKILL.md`
- Repo: `~/p/AI/AGENTS`

---
- [x] 7. Update Apollo Agent Prompt

**What to do**:
- Add "Memory Management" to the Core Responsibilities list (after item 4):
  - Store memories in the dual-layer system (Mem0 + Obsidian CODEX)
  - Retrieve memories via semantic search (Mem0)
  - Auto-capture session insights at session end (max 3, confirm with user)
  - Handle explicit "remember this" requests
  - Inject relevant memories into context on session start
- Add memory-related tools to the Tool Usage section
- Add memory error handling to Edge Cases
**Must NOT do**:
- Do NOT remove existing responsibilities
- Do NOT change Apollo's identity or boundaries

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Task 8)
- **Blocks**: Task 9
- **Blocked By**: Task 4

**References**:
- `/home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt` — Full file (47 lines): current prompt to extend
**Acceptance Criteria**:

```
Scenario: Verify memory management added to Apollo prompt
Tool: Bash
Steps:
1. grep -i "memory" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt | wc -l
2. grep "Mem0" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt
3. grep "auto-capture" /home/m3tam3re/p/AI/AGENTS/prompts/apollo.txt
Expected Result: Multiple memory references, Mem0 mentioned, auto-capture documented
Evidence: grep output
```
**Commit**: YES (groups with Task 8)
- Message: `feat(agents): add memory management to Apollo prompt and user profile`
- Files: `prompts/apollo.txt`, `context/profile.md`
- Repo: `~/p/AI/AGENTS`

---
- [x] 8. Update User Context Profile

**What to do**:
- Add a "Memory System" section to `context/profile.md`:
  - Mem0 endpoint: `http://localhost:8000`
  - Mem0 user_id: `m3tam3re` (or whatever the user's ID should be)
  - Obsidian vault path: `~/CODEX`
  - Memory folder: `80-memory/`
  - Auto-capture: enabled, max 3 per session
  - Auto-recall: enabled, top 5 results, score threshold 0.7
  - Memory categories: preference, fact, decision, entity, other
  - Obsidian MCP server: cyanheads/obsidian-mcp-server (see skills/memory/references/mcp-config.md)
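One possible shape for that section, collecting the values above (a sketch of the profile markdown, not the final wording):

```markdown
## Memory System

- Mem0 endpoint: `http://localhost:8000`
- Mem0 user_id: `m3tam3re`
- Obsidian vault: `~/CODEX`, memory folder `80-memory/`
- Auto-capture: enabled, max 3 per session
- Auto-recall: enabled, top 5 results, score threshold 0.7
- Categories: preference, fact, decision, entity, other
- Obsidian MCP server: cyanheads/obsidian-mcp-server
  (see `skills/memory/references/mcp-config.md`)
```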
**Must NOT do**:
- Do NOT remove existing profile content

**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 3 (with Task 7)
- **Blocks**: Task 9
- **Blocked By**: Task 4

**References**:
- `/home/m3tam3re/p/AI/AGENTS/context/profile.md` — Current profile to extend
**Acceptance Criteria**:

```
Scenario: Verify memory config in profile
Tool: Bash
Steps:
1. grep "Memory System" /home/m3tam3re/p/AI/AGENTS/context/profile.md
2. grep "localhost:8000" /home/m3tam3re/p/AI/AGENTS/context/profile.md
3. grep "80-memory" /home/m3tam3re/p/AI/AGENTS/context/profile.md
4. grep "auto-capture" /home/m3tam3re/p/AI/AGENTS/context/profile.md
Expected Result: Memory System section with all config values
Evidence: grep output
```
**Commit**: YES (groups with Task 7)
- Message: `feat(agents): add memory management to Apollo prompt and user profile`
- Files: `prompts/apollo.txt`, `context/profile.md`
- Repo: `~/p/AI/AGENTS`

---
- [x] 9. End-to-End Validation

**What to do**:
- Verify ALL files exist and contain the expected content
- Run skill validation: `./scripts/test-skill.sh memory`
- Test Mem0 availability: `curl http://localhost:8000/health`
- Test the Obsidian REST API: `curl http://127.0.0.1:27124/vault-info`
- Verify CODEX vault structure: `ls -la ~/CODEX/80-memory/`
- Verify the template: `cat ~/CODEX/templates/memory.md | head -20`
- Check that YAML frontmatter is valid across all new/updated skill files
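The frontmatter check can be sketched as extracting the block between the leading `---` markers and confirming the required keys exist. A naive key scan stands in for a real YAML parser here, an assumption made to stay dependency-free.

```python
# Sketch of the frontmatter validity check for SKILL.md files.

REQUIRED = {"name", "description", "compatibility"}

def frontmatter_keys(text):
    """Return top-level keys of the leading YAML frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return set()
    keys = set()
    for line in lines[1:]:
        if line.strip() == "---":
            return keys
        if ":" in line and not line.startswith(" "):
            keys.add(line.split(":", 1)[0].strip())
    return set()  # no closing marker: treat as invalid

skill = ("---\n"
         "name: memory\n"
         "description: Dual-layer memory workflows\n"
         "compatibility: opencode\n"
         "---\n"
         "# Memory\n")
missing = REQUIRED - frontmatter_keys(skill)
print("valid" if not missing else f"missing: {sorted(missing)}")
```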
**Must NOT do**:
- Do NOT create automated test infrastructure
- Do NOT modify any files — validation only

**Recommended Agent Profile**:
- **Category**: `unspecified-low`
  - Reason: Verification only, running commands and checking outputs
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: NO
- **Parallel Group**: Wave 4 (final, sequential)
- **Blocks**: None (final task)
- **Blocked By**: ALL tasks (1-8)
**Acceptance Criteria**:

```
Scenario: Full file existence check
Tool: Bash
Steps:
1. test -f ~/p/AI/AGENTS/skills/memory/SKILL.md
2. test -f ~/p/AI/AGENTS/skills/memory/references/mcp-config.md
3. test -d ~/CODEX/80-memory/preferences
4. test -f ~/CODEX/templates/memory.md
5. grep "80-memory" ~/CODEX/AGENTS.md
6. grep "#memory" ~/CODEX/tag-taxonomy.md
7. grep "80-memory" ~/CODEX/README.md
8. grep -i "memory" ~/p/AI/AGENTS/prompts/apollo.txt
9. grep "Memory System" ~/p/AI/AGENTS/context/profile.md
Expected Result: All checks pass (exit code 0)
Evidence: Shell output captured

Scenario: Mem0 health check
Tool: Bash
Preconditions: Mem0 server must be running
Steps:
1. curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/health
Expected Result: HTTP 200
Evidence: Status code captured
Note: If Mem0 is not running, this test will fail — spin up Mem0 first

Scenario: Obsidian REST API check
Tool: Bash
Preconditions: Obsidian desktop app must be running with the Local REST API plugin
Steps:
1. curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:27124/vault-info
Expected Result: HTTP 200
Evidence: Status code captured
Note: Requires the Obsidian desktop app to be open

Scenario: Skill validation
Tool: Bash
Steps:
1. cd ~/p/AI/AGENTS && ./scripts/test-skill.sh memory
Expected Result: Validation passes (no errors)
Evidence: Script output captured
```
**Commit**: NO (validation only, no file changes)

---
## Commit Strategy

| After Task | Message | Files | Repo | Verification |
|------------|---------|-------|------|--------------|
| 1 | `feat(vault): add 80-memory folder structure and memory template` | 80-memory/, templates/memory.md, tag-taxonomy.md | ~/CODEX | ls + grep |
| 2 | `docs(vault): add 80-memory documentation to AGENTS.md and README.md` | AGENTS.md, README.md | ~/CODEX | grep |
| 3+4 | `feat(memory): add core memory skill and MCP config reference` | skills/memory/SKILL.md, skills/memory/references/mcp-config.md | ~/p/AI/AGENTS | test-skill.sh |
| 5 | `feat(mem0-memory): add memory categories and dual-layer sync patterns` | skills/mem0-memory/SKILL.md | ~/p/AI/AGENTS | grep |
| 6 | `feat(obsidian): add memory folder conventions and workflows` | skills/obsidian/SKILL.md | ~/p/AI/AGENTS | grep |
| 7+8 | `feat(agents): add memory management to Apollo prompt and user profile` | prompts/apollo.txt, context/profile.md | ~/p/AI/AGENTS | grep |

**Note**: Two different git repos! CODEX and AGENTS commits are independent.

---
## Success Criteria

### Verification Commands

```bash
# CODEX vault structure
ls ~/CODEX/80-memory/                   # Expected: preferences/ facts/ decisions/ entities/ other/
cat ~/CODEX/templates/memory.md | head -5  # Expected: ---\ntype: memory
grep "#memory" ~/CODEX/tag-taxonomy.md  # Expected: #memory/* tags

# AGENTS skill validation
cd ~/p/AI/AGENTS && ./scripts/test-skill.sh memory  # Expected: pass

# Infrastructure (requires services running)
curl -s http://localhost:8000/health       # Expected: 200
curl -s http://127.0.0.1:27124/vault-info  # Expected: 200
```

### Final Checklist
- [x] All "Must Have" present (dual-layer, auto-capture, auto-recall, categories, health checks, error handling)
- [x] All "Must NOT Have" absent (no citation system, no deletion, no dashboards, no unit tests)
- [x] CODEX commits pushed (vault structure + docs)
- [x] AGENTS commits pushed (skills + prompts + profile)
- [x] User reminded to add Obsidian MCP to the Nix config and run `home-manager switch`
- [x] User reminded to spin up the Mem0 server before using memory features
# Centralized Rules & Per-Project Context Injection System

## TL;DR

> **Quick Summary**: Create a `rules/` directory in the AGENTS repository containing modular AI coding rules (per-concern + per-language), deployed centrally via Home Manager. A `mkOpencodeRules` Nix helper function lives in the nixpkgs repo (following the existing `ports.nix` → `mkPortHelpers` pattern), generating per-project `opencode.json` via devShell activation.
>
> **Deliverables**:
> - 6 concern rule files (coding-style, naming, documentation, testing, git-workflow, project-structure)
> - 5 language/framework rule files (python, typescript, nix, shell, n8n)
> - `lib/opencode-rules.nix` in nixpkgs repo — `mkOpencodeRules` helper function
> - Updated `lib/default.nix` in nixpkgs repo — imports opencode-rules
> - Updated `opencode.nix` in nixos-config — deploys rules/ alongside existing skills
> - `rules/USAGE.md` — per-project adoption documentation
>
> **Repos Touched**: 3 (AGENTS, nixpkgs, nixos-config)
> **Estimated Effort**: Medium (11 rule files + 3 Nix changes + 1 doc)
> **Parallel Execution**: YES — 4 waves
> **Critical Path**: T1-T3 (foundation) → T6-T16 (content) → T17 (verification)
---

## Context

### Original Request
User wants to streamline their agent workflow by centrally managing language-specific and framework-specific coding rules in the AGENTS repository, while allowing project-specific overrides. Rules should be injected per-project using Nix flakes + direnv.

### Interview Summary
**Key Discussions**:
- **Loading strategy**: Always loaded (not lazy) — rules are always in context when a project activates
- **Composition mechanism**: Nix flake devShell — each project declares the languages/frameworks it needs
- **Rule granularity**: Per concern, with separate language files for deep patterns
- **Override strategy**: Project-level AGENTS.md overrides central rules (OpenCode's native precedence)
- **opencode.json**: No project-specific one exists yet — the devShell generates it entirely
- **Nix helper location**: Lives in the `m3ta-nixpkgs` repo at `lib/opencode-rules.nix` (follows the `ports.nix` pattern)
- **AGENTS repo stays pure content**: No Nix code — only markdown rule files
**Research Findings**:
- The OpenCode `instructions` field in `opencode.json` loads external .md files as always-on context
- Anthropic guide: progressive disclosure, composability, 500-line max, use TOCs for long files
- Best practices: 100-200 lines per file, imperative language, micro-examples (correct/incorrect)
- Rule files benefit from the sandwich principle: critical constraints at START and END

### Metis Review
**Identified Gaps** (addressed):
- **Rule update strategy**: When rules change in the AGENTS repo, projects run `nix flake update agents`. Standard Nix flow.
- **Multi-language projects**: `mkOpencodeRules { languages = [ "python" "typescript" ]; }` — list multiple.
- **Context window budget**: ~800-1300 lines total. Well under the 1500-line budget.
- **Empty rules selection**: `mkOpencodeRules {}` loads only concern files (defaults to all 6).
### Architecture Decision: Nix Helper Location
**Decision**: `mkOpencodeRules` lives in the **nixpkgs repo** (`/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`), NOT in the AGENTS repo.

**Rationale**:
- nixpkgs already has `lib/ports.nix` → `mkPortHelpers` as an identical pattern
- nixpkgs is already consumed by all configs: `inputs.m3ta-nixpkgs.lib.${system}`
- The AGENTS repo stays pure content (markdown + configs), no Nix code
- Projects already have `m3ta-nixpkgs` as a flake input — no new input needed for the helper

**Consumption pattern** (per-project):
```nix
let
  m3taLib = inputs.m3ta-nixpkgs.lib.${system};
  rules = m3taLib.opencode-rules.mkOpencodeRules {
    agents = inputs.agents; # Non-flake input with rule content
    languages = [ "python" ];
  };
in
  pkgs.mkShell { shellHook = rules.shellHook; }
```
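With direnv in the loop, activation then needs only a minimal `.envrc` in the project root (assuming nix-direnv is installed; a sketch, not part of the plan's deliverables):

```shell
# .envrc — enter the flake devShell on cd, which runs the rules shellHook
use flake
```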
---

## Work Objectives

### Core Objective
Create a centralized, modular AI coding rules system managed in the AGENTS repo, with a Nix helper in nixpkgs for per-project injection via devShell + direnv.
### Concrete Deliverables
- `rules/concerns/{coding-style,naming,documentation,testing,git-workflow,project-structure}.md` — in AGENTS repo
- `rules/languages/{python,typescript,nix,shell}.md` — in AGENTS repo
- `rules/frameworks/n8n.md` — in AGENTS repo
- `rules/USAGE.md` — adoption documentation in AGENTS repo
- `lib/opencode-rules.nix` — in nixpkgs repo (`/home/m3tam3re/p/NIX/nixpkgs/`)
- Updated `lib/default.nix` — in nixpkgs repo (add import)
- Updated `opencode.nix` — in nixos-config repo (`/home/m3tam3re/p/NIX/nixos-config/home/features/coding/`)

### Definition of Done
- [ ] All 11 rule files exist and are under 250 lines each
- [ ] `lib/opencode-rules.nix` in nixpkgs exports `mkOpencodeRules` following the `ports.nix` pattern
- [ ] `opencode.nix` deploys `rules/` to `~/.config/opencode/rules/`
- [ ] A project can use `m3taLib.opencode-rules.mkOpencodeRules` in a devShell
### Must Have
- All rule files use imperative language ("Always use...", "Never...")
- Every rule includes micro-examples (correct vs incorrect, 2-3 lines each)
- Concern files are language-agnostic; language subsections are brief pointers
- Language files go deep into toolchain, idioms, anti-patterns
- `mkOpencodeRules` accepts: `{ agents, languages ? [], concerns ? [...], frameworks ? [], extraInstructions ? [] }`
- `mkOpencodeRules` follows the `ports.nix` pattern: `{lib}: { mkOpencodeRules = ...; }`
- The shellHook creates a `.opencode-rules` symlink + generates `opencode.json`
- Both `.opencode-rules` and `opencode.json` must be gitignored (documented in USAGE.md)
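For concreteness, a generated `opencode.json` for a Python project would look like this (matching the shellHook output the helper produces):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "instructions": [
    ".opencode-rules/concerns/coding-style.md",
    ".opencode-rules/concerns/naming.md",
    ".opencode-rules/concerns/documentation.md",
    ".opencode-rules/concerns/testing.md",
    ".opencode-rules/concerns/git-workflow.md",
    ".opencode-rules/concerns/project-structure.md",
    ".opencode-rules/languages/python.md"
  ]
}
```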
### Must NOT Have (Guardrails)
- Rule files MUST NOT exceed 250 lines
- Total loaded rules MUST NOT exceed 1500 lines for any realistic config
- Concern files MUST NOT contain language-specific implementation details
- MUST NOT put Nix code in the AGENTS repo — AGENTS stays pure content
- MUST NOT add rule versioning, a testing framework, or a generator CLI
- MUST NOT create rules for docker, k8s, terraform — out of scope
- MUST NOT modify existing skills, agents, prompts, or commands
- MUST NOT use generic advice ("write clean code", "follow best practices")

---
## Verification Strategy (MANDATORY)

> **ZERO HUMAN INTERVENTION** — ALL verification is agent-executed. No exceptions.

### Test Decision
- **Infrastructure exists**: NO (config/documentation repos)
- **Automated tests**: NO
- **Framework**: none

### QA Policy
Every task MUST include agent-executed QA scenarios.
Evidence is saved to `.sisyphus/evidence/task-{N}-{scenario-slug}.{ext}`.
| Deliverable Type | Verification Tool | Method |
|------------------|-------------------|--------|
| Markdown rule files | Bash (wc, grep) | Line count, micro-examples, imperative language |
| Nix expressions | Bash (nix eval) | Evaluate, check errors |
| Shell integration | Bash | Verify symlink + opencode.json generated |
| Cross-repo | Bash (grep) | Verify entries in correct files |

---
## Execution Strategy

### Parallel Execution Waves
```
Wave 1 (Foundation — 5 tasks, all parallel):
├── Task 1: Create rules/ directory structure in AGENTS repo [quick]
├── Task 2: Create lib/opencode-rules.nix in nixpkgs repo [quick]
├── Task 3: Update lib/default.nix in nixpkgs repo [quick]
├── Task 4: Update opencode.nix in nixos-config repo [quick]
└── Task 5: Create rules/USAGE.md in AGENTS repo [quick]

Wave 2 (Content — 11 rule files, all parallel):
├── Task 6: concerns/coding-style.md [writing]
├── Task 7: concerns/naming.md [writing]
├── Task 8: concerns/documentation.md [writing]
├── Task 9: concerns/testing.md [writing]
├── Task 10: concerns/git-workflow.md [writing]
├── Task 11: concerns/project-structure.md [writing]
├── Task 12: languages/python.md [writing]
├── Task 13: languages/typescript.md [writing]
├── Task 14: languages/nix.md [writing]
├── Task 15: languages/shell.md [writing]
└── Task 16: frameworks/n8n.md [writing]

Wave 3 (Verification):
└── Task 17: End-to-end integration test [deep]

Wave FINAL (Review — 4 parallel):
├── Task F1: Plan compliance audit (oracle)
├── Task F2: Code quality review (unspecified-high)
├── Task F3: Real manual QA (unspecified-high)
└── Task F4: Scope fidelity check (deep)

Critical Path: T1-T3 → T6-T16 (parallel) → T17 → F1-F4
Max Concurrent: 11 (Wave 2)
```
### Dependency Matrix

| Task | Depends On | Blocks | Wave |
|------|------------|--------|------|
| 1 | — | 5, 6-16, 17 | 1 |
| 2, 3 | — | 17 | 1 |
| 4 | — | 17 | 1 |
| 5 | 1, 2 | 17 | 1 |
| 6-16 | 1 | 17 | 2 |
| 17 | 2-5, 6-16 | F1-F4 | 3 |
| F1-F4 | 17 | — | FINAL |
### Agent Dispatch Summary

| Wave | # Parallel | Tasks and Agent Category |
|------|------------|--------------------------|
| 1 | **5** | T1-T5 → `quick` |
| 2 | **11** | T6-T16 → `writing` |
| 3 | **1** | T17 → `deep` |
| FINAL | **4** | F1 → `oracle`, F2, F3 → `unspecified-high`, F4 → `deep` |

---
## TODOs

- [x] 1. Create rules/ directory structure in AGENTS repo

**What to do**:
- Create the directory structure in `/home/m3tam3re/p/AI/AGENTS/`: `rules/concerns/`, `rules/languages/`, `rules/frameworks/`
- Add `.gitkeep` files to each directory so they're tracked before content is added
- This is the CONTENT repo only — NO Nix code goes here

**Must NOT do**:
- Do not create any Nix files in the AGENTS repo
- Do not create rule content files (those are Wave 2)
**Recommended Agent Profile**:
- **Category**: `quick`
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Tasks 2-5)
- **Blocks**: Tasks 5, 6-16, 17
- **Blocked By**: None

**References**:
- `/home/m3tam3re/p/AI/AGENTS/skills/` — existing directory structure pattern

**Acceptance Criteria**:

**QA Scenarios (MANDATORY):**
```
Scenario: Directory structure exists
Tool: Bash
Preconditions: None
Steps:
1. Run `ls /home/m3tam3re/p/AI/AGENTS/rules/concerns/.gitkeep /home/m3tam3re/p/AI/AGENTS/rules/languages/.gitkeep /home/m3tam3re/p/AI/AGENTS/rules/frameworks/.gitkeep`
Expected Result: All 3 .gitkeep files exist
Failure Indicators: "No such file or directory"
Evidence: .sisyphus/evidence/task-1-dirs.txt

Scenario: No Nix files in AGENTS repo rules/
Tool: Bash
Preconditions: Dirs created
Steps:
1. Run `find /home/m3tam3re/p/AI/AGENTS/rules/ -name '*.nix' | wc -l`
Expected Result: Count is 0
Failure Indicators: Count > 0
Evidence: .sisyphus/evidence/task-1-no-nix.txt
```
**Commit**: YES
- Message: `feat(rules): add rules directory structure`
- Files: `rules/concerns/.gitkeep`, `rules/languages/.gitkeep`, `rules/frameworks/.gitkeep`

---
- [x] 2. Create `lib/opencode-rules.nix` in nixpkgs repo

**What to do**:
- Create `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix`
- Follow the EXACT pattern of `lib/ports.nix`: `{lib}: { mkOpencodeRules = ...; }`
- The function must accept: `{ agents, languages ? [], concerns ? [ "coding-style" "naming" "documentation" "testing" "git-workflow" "project-structure" ], frameworks ? [], extraInstructions ? [] }`
- `agents` parameter = the non-flake input (path to the AGENTS repo in the Nix store)
- It must return: `{ shellHook = "..."; instructions = [...]; }`
- `shellHook` must: (a) create a `.opencode-rules` symlink to `${agents}/rules`, (b) generate `opencode.json` with `$schema` and `instructions` fields using `builtins.toJSON`
- `instructions` = list of paths relative to the project root via the `.opencode-rules/` symlink
- Include comprehensive Nix doc comments (matching ports.nix style)
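The contract above can be sketched in Nix as follows. This is an illustration of the expected shape, assuming the `ports.nix` module style; the body is not the final implementation.

```nix
# lib/opencode-rules.nix — sketch of the mkOpencodeRules contract
{lib}: {
  mkOpencodeRules = {
    agents,
    languages ? [],
    concerns ? ["coding-style" "naming" "documentation" "testing" "git-workflow" "project-structure"],
    frameworks ? [],
    extraInstructions ? [],
  }: let
    # Map rule names to paths reachable through the project-local symlink.
    toPaths = dir: names: map (n: ".opencode-rules/${dir}/${n}.md") names;
    instructions =
      toPaths "concerns" concerns
      ++ toPaths "languages" languages
      ++ toPaths "frameworks" frameworks
      ++ extraInstructions;
    config = builtins.toJSON {
      "$schema" = "https://opencode.ai/config.json";
      inherit instructions;
    };
  in {
    inherit instructions;
    shellHook = ''
      # Create/update the symlink to the AGENTS rules directory
      ln -sfn ${agents}/rules .opencode-rules
      # Generate the opencode.json configuration file
      cat > opencode.json <<'OPENCODE_EOF'
      ${config}
      OPENCODE_EOF
    '';
  };
}
```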
**Must NOT do**:
- Do not deviate from the ports.nix pattern
- Do not put any code in the AGENTS repo

**Recommended Agent Profile**:
- **Category**: `quick`
  - Reason: One Nix file following an established pattern
- **Skills**: []

**Parallelization**:
- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1 (with Tasks 1, 3-5)
- **Blocks**: Tasks 5, 17
- **Blocked By**: None
**References**:

**Pattern References**:
- `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix` — MUST follow this exact pattern: `{lib}: { mkPortHelpers = portsConfig: let ... in { ... }; }`
- `/home/m3tam3re/p/NIX/nixpkgs/lib/default.nix` — shows how lib modules are imported: `import ./ports.nix {inherit lib;}`
- `/home/m3tam3re/p/NIX/nixpkgs/flake.nix:73-77` — shows how lib is exposed: `lib = forAllSystems (system: ... import ./lib {lib = pkgs.lib;});`

**External References**:
- OpenCode rules docs: `https://opencode.ai/docs/rules/` — the `instructions` field accepts relative paths

**WHY Each Reference Matters**:
- `ports.nix` is the canonical pattern for lib functions in this repo — `{lib}:` signature, doc comments, nested `let ... in`
- `default.nix` shows how the new module gets wired in
- `flake.nix` confirms how consumers access it: `m3ta-nixpkgs.lib.${system}.opencode-rules.mkOpencodeRules`

**Acceptance Criteria**:

**QA Scenarios (MANDATORY):**
```
Scenario: opencode-rules.nix evaluates without errors
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in builtins.attrNames lib' 2>&1`
Expected Result: Output contains "mkOpencodeRules"
Failure Indicators: "error:" in output
Evidence: .sisyphus/evidence/task-2-eval.txt

Scenario: mkOpencodeRules generates correct paths
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in (lib.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python" "typescript"]; frameworks = ["n8n"]; }).instructions'`
Expected Result: JSON array with 9 paths (6 concerns + 2 languages + 1 framework), all starting with ".opencode-rules/"
Failure Indicators: Wrong count, wrong prefix, error
Evidence: .sisyphus/evidence/task-2-paths.txt

Scenario: Default (empty languages) works
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in (lib.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; }).instructions'`
Expected Result: JSON array with 6 paths (concerns only)
Failure Indicators: Extra paths, error
Evidence: .sisyphus/evidence/task-2-defaults.txt

Scenario: shellHook generates valid JSON
Tool: Bash
Preconditions: File created
Steps:
1. Run `nix eval --impure --raw --expr 'let pkgs = import <nixpkgs> {}; lib = (import /home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix {lib = pkgs.lib;}); in (lib.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python"]; }).shellHook' | sh -c 'eval "$(cat)"' && python3 -m json.tool opencode.json`
Expected Result: Valid JSON output with "$schema" and "instructions" fields
Failure Indicators: JSON parse error, missing fields
Evidence: .sisyphus/evidence/task-2-json.txt
|
||||
```
|
||||
|
||||
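The JSON-validation scenario can be sketched as a standalone check; the sample file below simply stands in for what the generated shellHook would produce:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the shellHook output: a minimal opencode.json
cat > opencode.json <<'OPENCODE_EOF'
{"$schema":"https://opencode.ai/config.json","instructions":[".opencode-rules/concerns/coding-style.md",".opencode-rules/languages/python.md"]}
OPENCODE_EOF

# Fails (non-zero exit) on invalid JSON, mirroring the QA scenario
python3 -m json.tool opencode.json > /dev/null

# Both required fields must be present
grep -q '"\$schema"' opencode.json
grep -q '"instructions"' opencode.json
echo "opencode.json valid"
```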
**Commit**: YES

- Message: `feat(lib): add opencode-rules helper for per-project rule injection`
- Files: `lib/opencode-rules.nix`
- Pre-commit: `nix eval --impure --expr '...'`

---

- [x] 3. Update `lib/default.nix` in nixpkgs repo

**What to do**:

- Add one line to `/home/m3tam3re/p/NIX/nixpkgs/lib/default.nix` to import opencode-rules:
  `opencode-rules = import ./opencode-rules.nix {inherit lib;};`
- Place it after the existing `ports = import ./ports.nix {inherit lib;};` line
- Remove the placeholder comment at line 10
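After the one-line addition, `lib/default.nix` would have roughly this shape; this is a sketch, and only the `opencode-rules` line is prescribed by the task:

```nix
# lib/default.nix -- assumed shape after the change; surrounding
# lines are inferred, only the opencode-rules import is specified
{lib}: {
  ports = import ./ports.nix {inherit lib;};
  opencode-rules = import ./opencode-rules.nix {inherit lib;};
}
```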
**Must NOT do**:

- Do not modify the ports import
- Do not change the function signature `{lib}:`

**Recommended Agent Profile**:

- **Category**: `quick`
- **Skills**: []

**Parallelization**:

- **Can Run In Parallel**: YES (but logically pairs with Task 2)
- **Parallel Group**: Wave 1
- **Blocks**: Task 17
- **Blocked By**: Task 2 (opencode-rules.nix must exist first)

**References**:

- `/home/m3tam3re/p/NIX/nixpkgs/lib/default.nix:6-12` — current file content, add after line 8

**Acceptance Criteria**:

**QA Scenarios (MANDATORY):**

```
Scenario: default.nix imports opencode-rules
Tool: Bash
Preconditions: Both files updated
Steps:
1. Run `grep 'opencode-rules' /home/m3tam3re/p/NIX/nixpkgs/lib/default.nix`
Expected Result: Line shows `opencode-rules = import ./opencode-rules.nix {inherit lib;};`
Failure Indicators: No match
Evidence: .sisyphus/evidence/task-3-import.txt

Scenario: Full lib evaluates
Tool: Bash
Preconditions: Both files updated
Steps:
1. Run `nix eval --impure --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in builtins.attrNames m3taLib' 2>&1`
Expected Result: Output includes both "ports" and "opencode-rules"
Failure Indicators: Missing "opencode-rules" or error
Evidence: .sisyphus/evidence/task-3-full-lib.txt
```

**Commit**: YES (groups with Task 2)

- Message: `feat(lib): add opencode-rules helper for per-project rule injection`
- Files: `lib/default.nix`, `lib/opencode-rules.nix`

---

- [x] 4. Update opencode.nix in nixos-config repo

**What to do**:

- Add `rules/` deployment to `xdg.configFile` in `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix`
- Add entry: `"opencode/rules" = { source = "${inputs.agents}/rules"; recursive = true; };`
- Place it alongside existing entries for commands, context, prompts, skills (lines 2-18)
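In context, the new attribute would sit alongside the existing ones, roughly like this (existing entries elided; only the `opencode/rules` attribute comes from the task):

```nix
# opencode.nix (excerpt) -- placement sketch; existing entries elided
xdg.configFile = {
  # ...existing commands/context/prompts/skills entries stay untouched...
  "opencode/rules" = {
    source = "${inputs.agents}/rules";
    recursive = true;
  };
};
```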
**Must NOT do**:

- Do not modify any existing entries
- Do not change agents, MCP, providers, or oh-my-opencode config
- Do not run `home-manager switch`

**Recommended Agent Profile**:

- **Category**: `quick`
- **Skills**: []

**Parallelization**:

- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1
- **Blocks**: Task 17
- **Blocked By**: None

**References**:

- `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix:2-18` — existing xdg.configFile entries

**Acceptance Criteria**:

**QA Scenarios (MANDATORY):**

```
Scenario: opencode.nix contains rules entry
Tool: Bash
Preconditions: File updated
Steps:
1. Run `grep -c 'opencode/rules' /home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix`
2. Run `grep -c 'opencode/commands\|opencode/context\|opencode/prompts\|opencode/skills' /home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix`
Expected Result: Rules count is 1, existing count is 4 (all preserved)
Failure Indicators: Count mismatch
Evidence: .sisyphus/evidence/task-4-opencode-nix.txt
```

**Commit**: YES

- Message: `feat(opencode): deploy rules/ to ~/.config/opencode/rules/ via home-manager`
- Files: `opencode.nix`

---

- [x] 5. Create `rules/USAGE.md` in AGENTS repo

**What to do**:

- Document how to use `mkOpencodeRules` in a project's `flake.nix`
- Show the nixpkgs consumption pattern: `m3taLib.opencode-rules.mkOpencodeRules { agents = inputs.agents; languages = ["python"]; }`
- Complete example `flake.nix` devShell snippet showing: `inputs.agents` + `inputs.m3ta-nixpkgs` + `mkOpencodeRules` + `shellHook`
- Document `.gitignore` additions: `.opencode-rules` and `opencode.json`
- Explain project-level `AGENTS.md` overrides
- Explain the update flow: `nix flake update agents`
- Keep concise: max 100 lines
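The devShell snippet USAGE.md should document might look like this sketch; the input URLs are placeholders, and the lib access path follows the pattern noted for flake.nix above:

```nix
# flake.nix (consumer project) -- usage sketch; the input URLs are
# placeholders, not the real repository locations
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    agents.url = "github:example/AGENTS"; # placeholder
    m3ta-nixpkgs.url = "github:example/nixpkgs"; # placeholder
  };

  outputs = {nixpkgs, agents, m3ta-nixpkgs, ...}: let
    system = "x86_64-linux";
    pkgs = nixpkgs.legacyPackages.${system};
    rules = m3ta-nixpkgs.lib.${system}.opencode-rules.mkOpencodeRules {
      inherit agents;
      languages = ["python"];
    };
  in {
    devShells.${system}.default = pkgs.mkShell {
      # Symlinks .opencode-rules and regenerates opencode.json on entry
      shellHook = rules.shellHook;
    };
  };
}
```

Pair this with the documented `.gitignore` additions (`.opencode-rules`, `opencode.json`) and refresh rules via `nix flake update agents`.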
**Must NOT do**:

- Do not create a README.md (repo anti-pattern)
- Do not reference `rules/default.nix` — the helper lives in nixpkgs, not AGENTS

**Recommended Agent Profile**:

- **Category**: `quick`
- **Skills**: []

**Parallelization**:

- **Can Run In Parallel**: YES
- **Parallel Group**: Wave 1
- **Blocks**: Task 17
- **Blocked By**: Tasks 1, 2 (needs to reference both structures)

**References**:

- `/home/m3tam3re/p/AI/AGENTS/AGENTS.md` — repo documentation style (concise, code-heavy)
- `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix:1-42` — the doc comment style used for lib functions
- OpenCode rules docs: `https://opencode.ai/docs/rules/` — `instructions` field

**Acceptance Criteria**:

**QA Scenarios (MANDATORY):**

```
Scenario: USAGE.md has required content
Tool: Bash
Preconditions: File created
Steps:
1. Run `wc -l /home/m3tam3re/p/AI/AGENTS/rules/USAGE.md`
2. Run `grep -c 'm3ta-nixpkgs\|mkOpencodeRules\|gitignore\|AGENTS.md\|nix flake update' /home/m3tam3re/p/AI/AGENTS/rules/USAGE.md`
Expected Result: Under 100 lines, key terms >= 5
Failure Indicators: Over 100 lines or missing key concepts
Evidence: .sisyphus/evidence/task-5-usage.txt
```

**Commit**: YES (groups with T1)

- Message: `feat(rules): add rules directory structure and usage documentation`
- Files: `rules/USAGE.md`, `rules/concerns/.gitkeep`, `rules/languages/.gitkeep`, `rules/frameworks/.gitkeep`

---
- [x] 6. Create `rules/concerns/coding-style.md`

**What to do**:

- Write coding style rules: code formatting, patterns/anti-patterns, error handling, type safety, function design, DRY/SOLID
- Imperative language ("Always...", "Never...", "Prefer..."), micro-examples (`Correct:` / `Incorrect:`)
- Keep under 200 lines, sandwich principle (critical rules at start and end)

**Must NOT do**: No language-specific toolchain details, no generic advice ("write clean code"), max 200 lines

**Recommended Agent Profile**: `writing`, Skills: []

**Parallelization**: Wave 2, parallel with T7-T16. Blocks T17. Blocked by T1.

**References**:

- `/home/m3tam3re/p/AI/AGENTS/skills/skill-creator/SKILL.md` — documentation density example
- Awesome Cursorrules: `https://github.com/PatrickJS/awesome-cursorrules`

**Acceptance Criteria**:

```
Scenario: Quality check
Tool: Bash
Steps:
1. `wc -l` → under 200
2. `grep -c 'Correct:\|Incorrect:\|Always\|Never\|Prefer'` → >= 10
3. `grep -c '```'` → >= 6 (3+ example pairs)
4. `grep -ic 'write clean code\|follow best practices'` → 0
Evidence: .sisyphus/evidence/task-6-coding-style.txt
```

**Commit**: NO (groups with Wave 2 commit in T17)

---
- [x] 7. Create `rules/concerns/naming.md`

**What to do**:

- Naming conventions: files, variables, functions, classes, modules, constants
- Per-language table (Python=snake_case, TS=camelCase, Nix=camelCase, Shell=UPPER_SNAKE)
- Keep under 150 lines

**Must NOT do**: No toolchain details, max 150 lines

**Recommended Agent Profile**: `writing`, Skills: []

**Parallelization**: Wave 2. Blocks T17. Blocked by T1.

**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:58-62` — existing naming conventions

**Acceptance Criteria**:

```
Scenario: `wc -l` → under 150, `grep -c 'snake_case\|camelCase\|PascalCase\|UPPER_SNAKE'` → >= 4
Evidence: .sisyphus/evidence/task-7-naming.txt
```

**Commit**: NO

---
- [x] 8. Create `rules/concerns/documentation.md`

**What to do**: When to document, docstring formats, inline comment philosophy (WHY not WHAT), README standards. Under 150 lines.

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md` — repo's own style

**Acceptance Criteria**: `wc -l` < 150, `grep -c 'WHY\|WHAT\|Correct:\|Incorrect:'` >= 4

**Commit**: NO

---
- [x] 9. Create `rules/concerns/testing.md`

**What to do**: Arrange-act-assert, behavior vs implementation testing, mocking philosophy, coverage, TDD. Under 200 lines.

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:73-82` — existing test philosophy

**Acceptance Criteria**: `wc -l` < 200, `grep -ic 'arrange\|act\|assert\|mock\|behavior'` >= 4

**Commit**: NO

---
- [x] 10. Create `rules/concerns/git-workflow.md`

**What to do**: Conventional commits, branch naming, PR descriptions, squash vs merge. Under 120 lines.

**Recommended Agent Profile**: `writing`, Skills: [`git-master`]

**Parallelization**: Wave 2. Blocked by T1.

**References**: `https://www.conventionalcommits.org/en/v1.0.0/`

**Acceptance Criteria**: `wc -l` < 120, `grep -c 'feat\|fix\|refactor\|docs\|chore'` >= 5

**Commit**: NO

---
- [x] 11. Create `rules/concerns/project-structure.md`

**What to do**: Directory layout, module organization, entry points, config placement. Per-type: Python (src layout), TS (src/), Nix (modules/). Under 120 lines.

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:24-38` — repo structure

**Acceptance Criteria**: `wc -l` < 120

**Commit**: NO

---
- [x] 12. Create `rules/languages/python.md`

**What to do**:

- Deep Python patterns: `uv` (pkg mgmt), `ruff` (lint/fmt), `pyright` (types), `pytest` + `hypothesis`, Pydantic for data boundaries
- Idioms: comprehensions, context managers, generators, f-strings
- Anti-patterns: bare except, mutable defaults, global state, star imports
- Project setup: `pyproject.toml`, src layout
- Under 250 lines with micro-examples

**Must NOT do**: No general coding style (covered in concerns/), no Django/Flask/FastAPI, max 250 lines

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**References**:

- `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:60` — existing Python conventions (shebang, docstrings)
- Ruff docs: `https://docs.astral.sh/ruff/`, uv docs: `https://docs.astral.sh/uv/`

**Acceptance Criteria**: `wc -l` < 250, `grep -c 'ruff\|uv\|pytest\|pydantic\|pyright'` >= 4, `grep -c '```python'` >= 5, no "pythonic"/"best practice"

**Commit**: NO

---
- [x] 13. Create `rules/languages/typescript.md`

**What to do**:

- Strict mode (`strict: true`, `noUncheckedIndexedAccess`), discriminated unions, branded types, `satisfies`, `as const`
- Modern: `using`, `Promise.withResolvers()`, `Object.groupBy()`
- Toolchain: `bun`/`tsx`, `biome`/`eslint`
- Anti-patterns: `as any`, `@ts-ignore`, `!` assertion, `enum` (prefer union)
- Under 250 lines

**Must NOT do**: No React/Next.js, max 250 lines

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**Acceptance Criteria**: `wc -l` < 250, `grep -c 'strict\|as any\|ts-ignore\|discriminated\|satisfies'` >= 4, `grep -c '```ts'` >= 5

**Commit**: NO

---
- [x] 14. Create `rules/languages/nix.md`

**What to do**:

- Flake structure, module patterns (`{ config, lib, pkgs, ... }:`), `mkIf`/`mkMerge`
- Formatting: `alejandra`, naming: camelCase
- Anti-patterns: `with pkgs;`, `builtins.fetchTarball`, impure ops
- Home Manager patterns, overlays
- Under 200 lines

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**References**:

- `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix` — user's actual Nix style
- `/home/m3tam3re/p/NIX/nixpkgs/lib/ports.nix` — well-structured Nix code example

**Acceptance Criteria**: `wc -l` < 200, `grep -c 'flake\|mkShell\|alejandra\|with pkgs\|overlay'` >= 4

**Commit**: NO

---
- [x] 15. Create `rules/languages/shell.md`

**What to do**: `set -euo pipefail`, shellcheck, quoting, local vars, POSIX portability, `#!/usr/bin/env bash`. Under 120 lines.

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**References**: `/home/m3tam3re/p/AI/AGENTS/AGENTS.md:61`, `/home/m3tam3re/p/AI/AGENTS/scripts/test-skill.sh`

**Acceptance Criteria**: `wc -l` < 120, `grep -c 'set -euo pipefail\|shellcheck\|#!/usr/bin/env'` >= 2

**Commit**: NO

---
- [x] 16. Create `rules/frameworks/n8n.md`

**What to do**: Workflow design, node patterns, naming, Error Trigger, data patterns, security. Under 120 lines.

**Recommended Agent Profile**: `writing`

**Parallelization**: Wave 2. Blocked by T1.

**References**: n8n docs: `https://docs.n8n.io/`

**Acceptance Criteria**: `wc -l` < 120, `grep -c 'workflow\|node\|Error Trigger\|webhook\|credential'` >= 4

**Commit**: NO

---
- [x] 17. End-to-end integration test + commits

**What to do**:

1. Verify all 11 rule files exist and meet line count limits
2. Verify `lib/opencode-rules.nix` in nixpkgs evaluates correctly for: empty, single-lang, multi-lang, with-frameworks
3. Verify full lib import works: `m3taLib.opencode-rules.mkOpencodeRules`
4. Verify generated `opencode.json` is valid JSON with correct `instructions` paths
5. Verify all instruction paths resolve to real files in AGENTS repo rules/
6. Verify total context budget: all concerns + 1 language < 1500 lines
7. Verify `opencode.nix` has the rules deployment entry
8. Commit all Wave 2 rule files as a single commit in AGENTS repo

**Must NOT do**: Do not run `home-manager switch`, do not modify files, do not create test projects

**Recommended Agent Profile**: `deep`, Skills: [`git-master`]

**Parallelization**: Wave 3 (sequential). Blocks F1-F4. Blocked by T2-T5, T6-T16.

**References**:

- `/home/m3tam3re/p/NIX/nixpkgs/lib/opencode-rules.nix` — Nix helper to evaluate
- `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix` — deployment config

**Acceptance Criteria**:

**QA Scenarios (MANDATORY):**

```
Scenario: All rule files exist and meet limits
Tool: Bash
Steps:
1. For each of 11 files: `wc -l` and verify under limit
Expected Result: All 11 files present, all under limits
Evidence: .sisyphus/evidence/task-17-inventory.txt

Scenario: Full lib integration
Tool: Bash
Steps:
1. Run `nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /home/m3tam3re/p/NIX/nixpkgs/lib {lib = pkgs.lib;}; in (m3taLib.opencode-rules.mkOpencodeRules { agents = /home/m3tam3re/p/AI/AGENTS; languages = ["python" "typescript" "nix" "shell"]; frameworks = ["n8n"]; }).instructions'`
Expected Result: JSON array with 11 paths (6 concerns + 4 langs + 1 framework)
Failure Indicators: Wrong count, error
Evidence: .sisyphus/evidence/task-17-full-integration.txt

Scenario: All paths resolve to real files
Tool: Bash
Steps:
1. For each path in instructions output: verify the corresponding file exists under `rules/`
Expected Result: All paths resolve, none missing
Evidence: .sisyphus/evidence/task-17-paths-resolve.txt

Scenario: Total context budget
Tool: Bash
Steps:
1. `cat /home/m3tam3re/p/AI/AGENTS/rules/concerns/*.md | wc -l`
2. `wc -l < /home/m3tam3re/p/AI/AGENTS/rules/languages/python.md`
3. Sum must be < 1500
Expected Result: Total under 1500
Evidence: .sisyphus/evidence/task-17-budget.txt
```

**Commit**: YES

- Message: `feat(rules): add initial rule files for all concerns, languages, and frameworks`
- Files: all `rules/**/*.md` files (11 total)
- Repo: AGENTS

---
## Final Verification Wave (MANDATORY — after ALL implementation tasks)

> 4 review agents run in PARALLEL. ALL must APPROVE. Rejection → fix → re-run.

- [x] F1. **Plan Compliance Audit** — `oracle`
  For each "Must Have": verify implementation exists. For each "Must NOT Have": search for violations. Check evidence files. Compare deliverables across all 3 repos.
  Output: `Must Have [N/N] | Must NOT Have [N/N] | Tasks [N/N] | VERDICT`

- [x] F2. **Code Quality Review** — `unspecified-high`
  Rule files: no generic advice, has examples, consistent tone, under limits. Nix: valid syntax, correct paths, edge cases. USAGE.md: accurate.
  Output: `Files [N clean/N issues] | VERDICT`

- [x] F3. **Real Manual QA** — `unspecified-high`
  Run `nix eval` on opencode-rules.nix via full lib import with various configs. Verify JSON. Check rule content quality. Save to `.sisyphus/evidence/final-qa/`.
  Output: `Scenarios [N/N pass] | VERDICT`

- [x] F4. **Scope Fidelity Check** — `deep`
  For each task: "What to do" vs actual file. 1:1 match. No creep. Check "Must NOT do". Flag unaccounted changes across all 3 repos.
  Output: `Tasks [N/N compliant] | Unaccounted [CLEAN/N files] | VERDICT`

---
## Commit Strategy

| After Task(s) | Repo | Message | Files |
|---------------|------|---------|-------|
| 1, 5 | AGENTS | `feat(rules): add rules directory structure and usage documentation` | `rules/USAGE.md`, `rules/{concerns,languages,frameworks}/.gitkeep` |
| 2, 3 | nixpkgs | `feat(lib): add opencode-rules helper for per-project rule injection` | `lib/opencode-rules.nix`, `lib/default.nix` |
| 4 | nixos-config | `feat(opencode): deploy rules/ to ~/.config/opencode/rules/` | `opencode.nix` |
| 17 | AGENTS | `feat(rules): add initial rule files for concerns, languages, and frameworks` | all `rules/**/*.md` (11 files) |

---
## Success Criteria

### Verification Commands

```bash
# All rule files exist (AGENTS repo)
ls rules/concerns/*.md rules/languages/*.md rules/frameworks/*.md

# Context budget
cat rules/concerns/*.md rules/languages/python.md | wc -l  # Expected: < 1500

# Nix helper via full lib (nixpkgs)
nix eval --impure --json --expr 'let pkgs = import <nixpkgs> {}; m3taLib = import /path/to/nixpkgs/lib {lib = pkgs.lib;}; in (m3taLib.opencode-rules.mkOpencodeRules { agents = /path/to/AGENTS; languages = ["python"]; }).instructions'

# opencode.nix has rules entry (nixos-config)
grep 'opencode/rules' /home/m3tam3re/p/NIX/nixos-config/home/features/coding/opencode.nix
```

### Final Checklist

- [ ] All 11 rule files present and under line limits
- [ ] All rule files use imperative language with micro-examples
- [ ] `lib/opencode-rules.nix` in nixpkgs follows ports.nix pattern exactly
- [ ] `lib/default.nix` imports opencode-rules
- [ ] `opencode.nix` deploys rules/ alongside skills/commands/context/prompts
- [ ] `rules/USAGE.md` documents nixpkgs consumption pattern correctly
- [ ] No Nix code in AGENTS repo
- [ ] No existing files modified (except lib/default.nix +1 line, opencode.nix +3 lines)
- [ ] Total loaded context under 1500 lines for any realistic configuration