Compare commits


50 Commits

Author SHA1 Message Date
m3tm3re
39ac89f388 docs: update AGENTS.md and README.md for rules system, remove beads
- Add rules/ directory documentation to both files
- Update skill count from 25 to 15 modules
- Remove beads references (issue tracking removed)
- Update skills list with current active skills
- Document flake.nix as proper Nix flake (not flake=false)
- Add rules system integration section
- Clean up sisyphus planning artifacts
- Remove deprecated skills (memory, msteams, outlook)
2026-03-03 19:40:57 +01:00
m3tm3re
1bc81fb38c chore: update readme 2026-02-18 17:32:13 +01:00
m3tm3re
1f1eabd1ed feat(rules): add strict TDD enforcement ruleset with AI patterns 2026-02-18 17:30:20 +01:00
m3tm3re
5b204c95e4 test(rules): add final QA evidence and mark review complete
Final Review Results:
- F1 (Plan Compliance): OKAY - Must Have [12/12], Must NOT Have [8/8]
- F2 (Code Quality): OKAY - All files pass quality criteria
- F3 (Manual QA): OKAY - Scenarios [5/5 pass]
- F4 (Scope Fidelity): OKAY - No unaccounted changes

All 21 tasks complete (T1-T17 + F1-F4)
2026-02-17 19:31:24 +01:00
m3tm3re
4e9da366e4 test(rules): add integration test evidence
- All 11 rule files verified (exist, under limits)
- Full lib integration verified (11 paths returned)
- Context budget verified (975 < 1500)
- All instruction paths resolve to real files
- opencode.nix rules entry verified

Refs: T17 of rules-system plan
2026-02-17 19:18:39 +01:00
m3tm3re
8910413315 feat(rules): add initial rule files for concerns, languages, and frameworks
Concerns (6 files):
- coding-style.md (163 lines): patterns, anti-patterns, error handling, SOLID
- naming.md (105 lines): naming conventions table per language
- documentation.md (149 lines): docstrings, WHY vs WHAT, README standards
- testing.md (134 lines): AAA pattern, mocking philosophy, TDD
- git-workflow.md (118 lines): conventional commits, branch naming, PR format
- project-structure.md (82 lines): directory layout, entry points, config placement

Languages (4 files):
- python.md (224 lines): uv, ruff, pyright, pytest, pydantic, idioms, anti-patterns
- typescript.md (150 lines): strict mode, discriminated unions, satisfies, as const
- nix.md (129 lines): flake structure, module patterns, alejandra, anti-patterns
- shell.md (100 lines): set -euo pipefail, shellcheck, quoting, POSIX

Frameworks (1 file):
- n8n.md (42 lines): workflow design, node patterns, Error Trigger, security

Context budget: 975 lines (concerns + python) < 1500 limit

Refs: T6-T16 of rules-system plan
2026-02-17 19:05:45 +01:00
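The context-budget figure in this commit checks out arithmetically; a quick sanity check using the per-file line counts listed in the message:

```python
# Per-file line counts as stated in the commit message above.
concerns = {
    "coding-style": 163,
    "naming": 105,
    "documentation": 149,
    "testing": 134,
    "git-workflow": 118,
    "project-structure": 82,
}
python_md = 224  # languages/python.md

# Budget = all concern files + the python language file.
budget_used = sum(concerns.values()) + python_md
assert budget_used == 975 and budget_used < 1500
print(budget_used)  # 975
```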
m3tm3re
d475dde398 feat(rules): add rules directory structure and usage documentation
- Create rules/{concerns,languages,frameworks}/ directory structure
- Add USAGE.md with flake.nix integration examples
- Add plan and notepad files for rules-system implementation

Refs: T1, T5 of rules-system plan
2026-02-17 18:59:43 +01:00
m3tm3re
6fceea7460 refactor: modernize agent configs, remove beads, update README
- Upgrade all agents from glm-4.7 to glm-5 with descriptive names
- Add comprehensive permission configs (bash, edit, external_directory) for all agents
- Remove .beads/ issue tracking directory
- Update README: fix opencode URL to opencode.ai, remove beads sections, formatting cleanup
2026-02-17 09:15:15 +01:00
m3tm3re
923e2f1eaa chore(plan): mark deployment verification as blocked (requires user action) 2026-02-14 08:34:06 +01:00
m3tm3re
231b9f2e0b chore(plan): mark tasks 11-14 and definition of done as complete 2026-02-14 08:31:32 +01:00
m3tm3re
c64d71f438 docs(memory): update skills for opencode-memory plugin, deprecate mem0 2026-02-14 08:22:59 +01:00
m3tm3re
1719f70452 feat(memory): add core memory skill, update Apollo prompt and Obsidian skill
- Add skills/memory/SKILL.md: dual-layer memory orchestration
- Update prompts/apollo.txt: add memory management responsibilities
- Update skills/obsidian/SKILL.md: add memory folder conventions
2026-02-12 20:02:51 +01:00
m3tm3re
0d6ff423be Add Memory System configuration to user profile 2026-02-12 19:54:54 +01:00
m3tm3re
79e6adb362 feat(mem0-memory): add memory categories and dual-layer sync patterns 2026-02-12 19:50:39 +01:00
m3tm3re
1e03c165e7 docs: Add Obsidian MCP server configuration documentation
- Create mcp-config.md in skills/memory/references/
- Document cyanheads/obsidian-mcp-server setup for Opencode
- Include environment variables, Nix config, and troubleshooting
- Reference for Task 4 of memory-system plan
2026-02-12 19:44:03 +01:00
m3tm3re
94b89da533 finalize doc-translator skill 2026-02-11 19:58:06 +01:00
sascha.koenig
b9d535b926 fix: use POST method for Outline signed URL upload
Change HTTP method from PUT to POST on line 77 for signed URL upload,
as Outline's S3 bucket only accepts POST requests.
2026-02-11 14:16:02 +01:00
sascha.koenig
46b9c0e4e3 fix: list_outline_collections.sh - correct jq parsing to output valid JSON array 2026-02-11 14:14:55 +01:00
m3tm3re
eab0a94650 doc-translator fix 2026-02-10 20:24:13 +01:00
m3tm3re
0ad1037c71 doc-translator 2026-02-10 20:02:30 +01:00
m3tm3re
1b4e8322d6 doc-translator 2026-02-10 20:00:42 +01:00
m3tm3re
7a3b72d5d4 chore: mark chiron-agent-framework plan as complete
All 27 tasks completed successfully.

Co-authored-by: Atlas orchestrator <atlas@opencode.dev>
2026-02-03 20:40:06 +01:00
m3tm3re
156ebf7d63 docs: fix duplicate success criteria in chiron-agent-framework plan
All 6 success criteria now properly marked as complete.

Co-authored-by: Atlas orchestrator <atlas@opencode.dev>
2026-02-03 20:39:26 +01:00
m3tm3re
a57e302727 docs: complete all success criteria in chiron-agent-framework
All 6 success criteria now marked as complete.

Co-authored-by: Atlas orchestrator <atlas@opencode.dev>
2026-02-03 20:34:28 +01:00
m3tm3re
d08deaf9d2 docs: mark all success criteria as complete
All 6 success criteria in plan file now marked complete.

Co-authored-by: Atlas orchestrator <atlas@opencode.dev>
2026-02-03 20:34:18 +01:00
m3tm3re
666696b17c docs: mark chiron-agent-framework plan complete
All 14 tasks completed and verified.

## Summary
- 6 agents defined (2 primary, 4 subagents)
- 6 system prompts created
- 5 tool integration skills created
- 1 validation script created
- All success criteria met

Co-authored-by: Atlas orchestrator <atlas@opencode.dev>
2026-02-03 20:33:01 +01:00
m3tm3re
1e7decc84a feat: add Chiron agent framework with 6 agents and 5 integration skills
Complete implementation of personal productivity agent framework for Oh-My-Opencode.

## Components Added

### Agents (6 total)
- Primary agents: chiron (Plan Mode), chiron-forge (Build Mode)
- Subagents: hermes (work communication), athena (work knowledge), apollo (private knowledge), calliope (writing)

### System Prompts (6 total)
- prompts/chiron.txt - Main orchestrator with delegation logic
- prompts/chiron-forge.txt - Execution/build counterpart
- prompts/hermes.txt - Basecamp, Outlook, MS Teams specialist
- prompts/athena.txt - Outline wiki/documentation specialist
- prompts/apollo.txt - Obsidian vault/private notes specialist
- prompts/calliope.txt - Writing/documentation specialist

### Integration Skills (5 total)
- skills/basecamp/SKILL.md - 63 MCP tools documented
- skills/outline/SKILL.md - Wiki/document management
- skills/msteams/SKILL.md - Teams/channels/meetings
- skills/outlook/SKILL.md - Email/calendar/contacts
- skills/obsidian/SKILL.md - Vault/note management

### Validation
- scripts/validate-agents.sh - Agent configuration validation
- All agents validated: JSON structure, modes, prompt references
- All prompts verified: Exist, non-empty, >500 chars
- All skills verified: Valid YAML frontmatter, SKILL.md structure

## Verification
- 6 agents in agents.json
- All 6 prompt files exist and non-empty
- All 5 skills have valid SKILL.md with YAML frontmatter
- validate-agents.sh passes (exit 0)

Co-authored-by: Sisyphus framework <atlas@opencode.dev>
2026-02-03 20:30:34 +01:00
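The verification rules this commit describes (agents.json parses with 6 agents; prompt files non-empty and over 500 chars) can be sketched as pure checks. This is not the actual validate-agents.sh — thresholds and counts are taken from the commit text, everything else is illustrative:

```python
"""Sketch of the validation rules described above (illustrative, not the real script)."""
import json


def prompt_ok(text: str) -> bool:
    """A prompt passes if it is non-empty and longer than 500 chars."""
    return len(text) > 500


def agents_ok(raw: str, expected: int = 6) -> bool:
    """agents.json passes if it parses and defines the expected number of agents."""
    try:
        agents = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return len(agents) == expected
```

In the real script these would be fed the contents of prompts/*.txt and agents.json.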
m3tm3re
76cd0e4ee6 Create Athena (Work Knowledge) system prompt
- Outline wiki specialization: document CRUD, search, collections, sharing
- Focus: wiki search, knowledge retrieval, documentation updates
- Follows standard prompt structure: 8 sections matching Apollo/Calliope
- Explicit boundaries: Hermes (comm), Apollo (private), Calliope (creative)
- Uses Question tool for document selection and search scope
- Verification: outline, wiki/knowledge, document keywords confirmed
2026-02-03 20:18:52 +01:00
m3tm3re
4fcab26c16 Create Hermes system prompt (Wave 2, Task 5)
- Added prompts/hermes.txt with Basecamp, Outlook, Teams specialization
- Follows consistent structure pattern from apollo.txt and calliope.txt
- Defines Hermes as work communication specialist
- Includes tool usage patterns for Question tool and MCP integrations
- Verifies with grep: basecamp, outlook/email, teams/meeting
- Appends learnings to chiron-agent-framework notepad
2026-02-03 20:18:46 +01:00
m3tm3re
f20f5223d5 Create agents.json with 6 agent definitions (Wave 1, Task 1)
- Added all 6 agents: chiron, chiron-forge, hermes, athena, apollo, calliope
- Primary agents (2): chiron (Plan Mode), chiron-forge (Build Mode)
- Subagents (4): hermes (communications), athena (work knowledge), apollo (private knowledge), calliope (writing)
- All agents use model: zai-coding-plan/glm-4.7
- Prompt references use file pattern: {file:./prompts/<name>.txt}
- Permission structure: primaries have external_directory rules, subagents have simple question: allow
- Verified with Python JSON validation (6 agents, correct names)
- Documented patterns and learnings in notepad
2026-02-03 20:14:34 +01:00
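The structure this commit describes can be sketched by building a minimal agents.json in memory. The agent names, model string, and `{file:./prompts/<name>.txt}` pattern come from the commit message; the exact field names ("model", "prompt", "mode") are assumptions for illustration:

```python
"""Build and sanity-check a minimal agents.json in the shape described above.
Field names are assumptions; names/model/prompt pattern are from the commit."""
import json

MODEL = "zai-coding-plan/glm-4.7"
NAMES = ["chiron", "chiron-forge", "hermes", "athena", "apollo", "calliope"]

agents = {
    name: {
        "model": MODEL,
        # Prompt references use the file pattern from the commit message.
        "prompt": f"{{file:./prompts/{name}.txt}}",
        # Two primaries (chiron, chiron-forge), four subagents.
        "mode": "primary" if name.startswith("chiron") else "subagent",
    }
    for name in NAMES
}

# Mirror the commit's verification step: round-trip through JSON and count.
parsed = json.loads(json.dumps(agents))
assert len(parsed) == 6 and sorted(parsed) == sorted(NAMES)
```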
m3tm3re
36c82293f9 Agent restructure 2026-02-03 20:09:15 +01:00
m3tm3re
7e4a44eed6 Agent restructure 2026-02-03 20:04:26 +01:00
m3tm3re
1f320f1c95 Add scripts/validate-agents.sh for agent validation 2026-02-03 19:23:26 +01:00
m3tm3re
fddc22e55e Add outlook skill with Graph API documentation
- Create skills/outlook/SKILL.md with comprehensive Outlook Graph API documentation
- Document mail CRUD operations: list, get, create, send, reply, forward, update, delete
- Document folder management: list, create, update, delete, move, copy
- Document calendar events: list, get, create, update, delete, accept/decline
- Document contacts: list, get, create, update, delete, folder management
- Include search operations for mail, contacts, and events
- Provide common workflows for email, inbox organization, meeting invitations
- Include IDs and discovery guidance
- Set compatibility to opencode
- Close issue AGENTS-ch2
2026-02-03 18:55:15 +01:00
m3tm3re
db1a5ba9ce Add MS Teams Graph API integration skill
Created skills/msteams/SKILL.md with comprehensive documentation for:
- Teams and channels management
- Channel messages (send, retrieve, edit, delete)
- Meeting scheduling and management
- Chat conversations (1:1, group, meeting)
- Common workflows for automation
- API endpoint reference
- Best practices and integration examples

Follows SKILL.md format with YAML frontmatter.
Compatibility: opencode
2026-02-03 18:52:14 +01:00
m3tm3re
730e33b908 Add Apollo system prompt for private knowledge management 2026-02-03 18:50:32 +01:00
m3tm3re
ecece88fba Create Calliope writing prompt
- Define Calliope as Greek muse specializing in documentation, reports, meeting notes
- Include Question tool for clarifying tone, audience, format
- Set scope boundaries: delegates tools, no overlap with Hermes/Athena
- Follow standard prompt structure from agent-development skill
2026-02-03 18:50:22 +01:00
m3tm3re
1252b9ffe7 Create Chiron-Forge build/execution mode system prompt
Define Chiron-Forge as execution/build counterpart to Chiron with:
- Full write access for task execution
- Clear distinction from Chiron's planning/analysis role
- Question tool for destructive operations confirmation
- Workflow: Receive → Understand → Plan Action → Execute → Confirm → Report
- Delegation to subagents for specialized domains

File: prompts/chiron-forge.txt (3185 chars, 67 lines)
2026-02-03 18:49:31 +01:00
m3tm3re
3f2f766af6 Create prompts/chiron.txt with Chiron plan/analysis mode system prompt
- Define Chiron as main orchestrator in plan/analysis mode
- Include delegation logic to subagents (Hermes, Athena, Apollo, Calliope)
- Add Question tool usage for ambiguous requests
- Specify read-only permissions (no file modifications)
- Focus on planning, analysis, guidance, and delegation
- Use second-person addressing throughout
2026-02-03 18:48:39 +01:00
m3tm3re
cb383f9c7f Athena permissions refined 2026-02-02 19:21:04 +01:00
m3tm3re
e01198d40d docs(plan): mark all tasks complete in agent-permissions-refinement plan 2026-02-02 19:08:48 +01:00
m3tm3re
c58c28aef5 chore(agents): refine permissions for Chiron and Chiron-Forge with security hardening 2026-02-02 19:06:49 +01:00
m3tm3re
468673c125 Add Phase 1 completion summary
Documentation added:
- phase1-complete.md: Complete overview of Phase 1 deliverables

Summary:
- 4 skills created/updated (outline, basecamp, daily-routines, meeting-notes)
- 3 documentation files created (work-para-structure, work-quickstart, teams-transcript-workflow)
- PARA structure created (10 projects, 5 areas)
- All integrations configured and documented

Next steps for user:
1. Customize projects with actual Basecamp data
2. Configure Outline MCP
3. Test workflows
4. Add n8n automation when ready

Status: Phase 1 complete, foundation ready for use.
2026-01-28 19:06:04 +01:00
m3tm3re
325e06ad12 Complete Phase 1: Work integration (all tasks)
Documentation Added:
- skills/chiron-core/references/work-para-structure.md: Complete work PARA guide
- skills/chiron-core/references/work-quickstart.md: User quick start guide

What Was Completed:
1. Created outline skill with full MCP integration
2. Enhanced basecamp skill with project mapping
3. Enhanced daily-routines with work context
4. Created Teams transcript workflow guide
5. Set up PARA work structure (10 projects + 5 areas)
6. Created comprehensive documentation

Integration Ready:
- Basecamp ↔ Obsidian: Project mapping and task sync
- Outline ↔ Obsidian: Wiki search, export, AI queries
- Teams → Obsidian → Basecamp: Transcript processing workflow
- All integrated into daily/weekly routines

PARA Work Structure:
- 01-projects/work/: 10 project folders (placeholders ready for customization)
- 02-areas/work/: 5 ongoing areas
- 03-resources/work/wiki-mirror/: Ready for Outline exports
- 04-archive/work/: Ready for completed work

Next Steps for User:
1. Customize project names with actual Basecamp projects
2. Configure Outline MCP with your instance
3. Test Basecamp connection
4. Process first Teams transcript using workflow
5. Add n8n workflows when ready (automate Basecamp/Outline sync)

Note: All work knowledge stored in Obsidian (tool-agnostic).
Jobs easily portable: archive work/, update tool configs, create new projects.
2026-01-28 19:02:20 +01:00
m3tm3re
e2932d1d84 Implement Phase 1: Work integration (without n8n)
Skills Created:
- outline: Full MCP integration with Outline wiki (search, read, create, export, AI queries)
- Enhanced basecamp: Added project mapping configuration to PARA structure
- Enhanced daily-routines: Integrated work context (Basecamp, Outline) into daily/weekly workflows
- Enhanced meeting-notes: Added Teams transcript processing workflow guide

PARA Work Structure Created:
- 01-projects/work/: 10 project folders with MOCs (placeholders for user customization)
- 02-areas/work/: 5 area files (current-job, professional-dev, team-management, company-knowledge, technical-excellence)
- 03-resources/work/wiki-mirror/: Ready for Outline exports
- 04-archive/work/: Ready for completed work

Documentation Added:
- skills/outline/SKILL.md: Comprehensive wiki workflows and tool references
- skills/outline/references/outline-workflows.md: Detailed usage examples
- skills/outline/references/export-patterns.md: Obsidian integration patterns
- skills/meeting-notes/references/teams-transcript-workflow.md: Manual DOCX → meeting note workflow
- skills/chiron-core/references/work-para-structure.md: Work-specific PARA organization

Key Integrations:
- Basecamp ↔ Obsidian: Project mapping and task sync
- Outline ↔ Obsidian: Wiki search, export decisions, knowledge discovery
- Teams → Obsidian: Transcript processing with AI analysis
- All integrated into daily/weekly routines

Note: n8n workflows skipped per user request. Ready for n8n automation later.
2026-01-28 18:58:49 +01:00
m3tm3re
3e3b17de38 Migrate from Anytype to Obsidian across all skills and documentation 2026-01-27 20:09:05 +01:00
m3tm3re
240fde83dd Update Obsidian vault path from ~/knowledge to ~/CODEX 2026-01-27 19:10:13 +01:00
m3tm3re
63cd7fe102 Rename directories to plural form: skill/ → skills/, agent/ → agents/, command/ → commands/
- Rename skill/ to skills/ for consistency with naming conventions
- Rename agent/ to agents/ and command/ to commands/
- Update AGENTS.md with all directory references
- Update scripts/test-skill.sh paths
- Update prompts/athena.txt documentation

This aligns with best practices of using plural directory names and updates
all documentation to reflect the new structure.
2026-01-26 20:42:05 +01:00
m3tm3re
aeeeb559ed Sync beads: Close 6 Athena agent issues 2026-01-26 19:34:54 +01:00
m3tm3re
87bd75872c Fix Athena agent configuration and prompt to match agent-development skill guidelines
- Add explicit 'mode': 'subagent' field to athena agent
- Add 'temperature': 0.1 to athena agent for deterministic results
- Rename 'Core Capabilities' to 'Your Core Responsibilities:'
- Convert responsibilities from subsections to numbered list format
- Rename 'Ethical Guidelines' to 'Quality Standards'
- Remove references to non-existent validate-agent.sh script

All 6 related beads issues closed.
2026-01-26 19:34:43 +01:00
128 changed files with 5746 additions and 4075 deletions

.beads/.gitignore (vendored, 39 deleted lines)

@@ -1,39 +0,0 @@
# SQLite databases
*.db
*.db?*
*.db-journal
*.db-wal
*.db-shm
# Daemon runtime files
daemon.lock
daemon.log
daemon.pid
bd.sock
sync-state.json
last-touched
# Local version tracking (prevents upgrade notification spam after git ops)
.local_version
# Legacy database files
db.sqlite
bd.db
# Worktree redirect file (contains relative path to main repo's .beads/)
# Must not be committed as paths would be wrong in other clones
redirect
# Merge artifacts (temporary files from 3-way merge)
beads.base.jsonl
beads.base.meta.json
beads.left.jsonl
beads.left.meta.json
beads.right.jsonl
beads.right.meta.json
# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.
# The JSONL files (issues.jsonl, interactions.jsonl) and config files
# are tracked by git by default since no pattern above ignores them.

View File

@@ -1,81 +0,0 @@
# Beads - AI-Native Issue Tracking
Welcome to Beads! This repository uses **Beads** for issue tracking - a modern, AI-native tool designed to live directly in your codebase alongside your code.
## What is Beads?
Beads is issue tracking that lives in your repo, making it perfect for AI coding agents and developers who want their issues close to their code. No web UI required - everything works through the CLI and integrates seamlessly with git.
**Learn more:** [github.com/steveyegge/beads](https://github.com/steveyegge/beads)
## Quick Start
### Essential Commands
```bash
# Create new issues
bd create "Add user authentication"
# View all issues
bd list
# View issue details
bd show <issue-id>
# Update issue status
bd update <issue-id> --status in_progress
bd update <issue-id> --status done
# Sync with git remote
bd sync
```
### Working with Issues
Issues in Beads are:
- **Git-native**: Stored in `.beads/issues.jsonl` and synced like code
- **AI-friendly**: CLI-first design works perfectly with AI coding agents
- **Branch-aware**: Issues can follow your branch workflow
- **Always in sync**: Auto-syncs with your commits
## Why Beads?
**AI-Native Design**
- Built specifically for AI-assisted development workflows
- CLI-first interface works seamlessly with AI coding agents
- No context switching to web UIs
🚀 **Developer Focused**
- Issues live in your repo, right next to your code
- Works offline, syncs when you push
- Fast, lightweight, and stays out of your way
🔧 **Git Integration**
- Automatic sync with git commits
- Branch-aware issue tracking
- Intelligent JSONL merge resolution
## Get Started with Beads
Try Beads in your own projects:
```bash
# Install Beads
curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash
# Initialize in your repo
bd init
# Create your first issue
bd create "Try out Beads"
```
## Learn More
- **Documentation**: [github.com/steveyegge/beads/docs](https://github.com/steveyegge/beads/tree/main/docs)
- **Quick Start Guide**: Run `bd quickstart`
- **Examples**: [github.com/steveyegge/beads/examples](https://github.com/steveyegge/beads/tree/main/examples)
---
*Beads: Issue tracking that moves at the speed of thought*

View File

@@ -1,62 +0,0 @@
# Beads Configuration File
# This file configures default behavior for all bd commands in this repository
# All settings can also be set via environment variables (BD_* prefix)
# or overridden with command-line flags
# Issue prefix for this repository (used by bd init)
# If not set, bd init will auto-detect from directory name
# Example: issue-prefix: "myproject" creates issues like "myproject-1", "myproject-2", etc.
# issue-prefix: ""
# Use no-db mode: load from JSONL, no SQLite, write back after each command
# When true, bd will use .beads/issues.jsonl as the source of truth
# instead of SQLite database
# no-db: false
# Disable daemon for RPC communication (forces direct database access)
# no-daemon: false
# Disable auto-flush of database to JSONL after mutations
# no-auto-flush: false
# Disable auto-import from JSONL when it's newer than database
# no-auto-import: false
# Enable JSON output by default
# json: false
# Default actor for audit trails (overridden by BD_ACTOR or --actor)
# actor: ""
# Path to database (overridden by BEADS_DB or --db)
# db: ""
# Auto-start daemon if not running (can also use BEADS_AUTO_START_DAEMON)
# auto-start-daemon: true
# Debounce interval for auto-flush (can also use BEADS_FLUSH_DEBOUNCE)
# flush-debounce: "5s"
# Git branch for beads commits (bd sync will commit to this branch)
# IMPORTANT: Set this for team projects so all clones use the same sync branch.
# This setting persists across clones (unlike database config which is gitignored).
# Can also use BEADS_SYNC_BRANCH env var for local override.
# If not set, bd sync will require you to run 'bd config set sync.branch <branch>'.
# sync-branch: "beads-sync"
# Multi-repo configuration (experimental - bd-307)
# Allows hydrating from multiple repositories and routing writes to the correct JSONL
# repos:
# primary: "." # Primary repo (where this database lives)
# additional: # Additional repos to hydrate from (read-only)
# - ~/beads-planning # Personal planning repo
# - ~/work-planning # Work planning repo
# Integration settings (access with 'bd config get/set')
# These are stored in the database, not in this file:
# - jira.url
# - jira.project
# - linear.url
# - linear.api-key
# - github.org
# - github.repo

View File

@@ -1,6 +0,0 @@
{"id":"AGENTS-1jw","title":"Athena prompt: Convert to numbered responsibility format","description":"Athena prompt uses bullet points under 'Core Capabilities' section instead of numbered lists. Per agent-development skill best practices, responsibilities should be numbered (1, 2, 3) for clarity. Update prompts/athena.txt to use numbered format.","status":"open","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:16.133701271+01:00","created_by":"m3tm3re","updated_at":"2026-01-24T19:32:16.133701271+01:00"}
{"id":"AGENTS-7gt","title":"Athena prompt: Rename Core Capabilities to exact header","description":"Athena prompt uses 'Core Capabilities' section header instead of 'Your Core Responsibilities:'. Per agent-development skill guidelines, the exact header 'Your Core Responsibilities:' should be used for consistency. Update prompts/athena.txt to use the exact recommended header.","status":"open","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:07.223102836+01:00","created_by":"m3tm3re","updated_at":"2026-01-24T19:32:11.002070978+01:00"}
{"id":"AGENTS-in5","title":"Athena prompt: Standardize section headers","description":"Athena prompt uses 'Ethical Guidelines' and 'Methodological Rigor' headers instead of standard 'Quality Standards' and 'Edge Cases' headers. While semantically equivalent, skill recommends exact headers for consistency. Consider renaming in prompts/athena.txt.","status":"open","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:21.720932741+01:00","created_by":"m3tm3re","updated_at":"2026-01-24T19:32:21.720932741+01:00"}
{"id":"AGENTS-lyd","title":"Athena agent: Add explicit mode field","description":"Athena agent is missing the explicit 'mode': 'subagent' field. Per agent-development skill guidelines, all agents should explicitly declare mode for clarity. Current config relies on default which makes intent unclear.","status":"open","priority":0,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:31:46.255196119+01:00","created_by":"m3tm3re","updated_at":"2026-01-24T19:31:51.046380855+01:00"}
{"id":"AGENTS-mfw","title":"Athena agent: Add temperature setting","description":"Athena agent lacks explicit temperature configuration. Per agent-development skill, research/analysis agents should use temperature 0.0-0.2 for focused, deterministic, consistent results. Add 'temperature': 0.1 to agent config in agents.json.","status":"open","priority":1,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:31:55.726506579+01:00","created_by":"m3tm3re","updated_at":"2026-01-24T19:31:59.446904521+01:00"}
{"id":"AGENTS-o45","title":"Agent development: Document validation script availability","description":"The agent-development skill references scripts/validate-agent.sh but this script doesn't exist in the repository. Consider either: (1) creating the validation script, or (2) removing the reference and only documenting the python3 alternative.","status":"open","priority":2,"issue_type":"task","owner":"p@m3ta.dev","created_at":"2026-01-24T19:32:27.325525742+01:00","created_by":"m3tm3re","updated_at":"2026-01-24T19:32:27.325525742+01:00"}

View File

@@ -1,4 +0,0 @@
{
"database": "beads.db",
"jsonl_export": "issues.jsonl"
}

.envrc (new file, 1 line)

@@ -0,0 +1 @@
use flake

.gitignore (new file, 14 lines)

@@ -0,0 +1,14 @@
.todos/
# Sidecar worktree state files
.sidecar/
.sidecar-agent
.sidecar-task
.sidecar-pr
.sidecar-start.sh
.sidecar-base
.td-root
# Nix / direnv
.direnv/
result

AGENTS.md (430 changed lines)

@@ -1,385 +1,129 @@
# Agent Instructions - Opencode Skills Repository
# Opencode Skills Repository
This repository contains Opencode Agent Skills, context files, and agent configurations for personal productivity and AI-assisted workflows. Files are deployed to `~/.config/opencode/` via Nix flake + home-manager.
Configuration repository for Opencode Agent Skills, context files, and agent configurations. Deployed via Nix home-manager to `~/.config/opencode/`.
## Project Overview
## Quick Commands
**Type**: Configuration-only repository (no build/compile step)
**Purpose**: Central repository for Opencode Agent Skills, AI agent configurations, custom commands, and workflows. Extensible framework for productivity, automation, knowledge management, and AI-assisted development.
**Primary User**: Sascha Koenig (@m3tam3re)
**Deployment**: Nix flake → home-manager → `~/.config/opencode/`
```bash
# Skill validation
./scripts/test-skill.sh --validate # Validate all skills
./scripts/test-skill.sh <skill-name> # Validate specific skill
./scripts/test-skill.sh --run # Test interactively
### Current Focus Areas
# Skill creation
python3 skills/skill-creator/scripts/init_skill.py <name> --path skills/
```
- **Productivity & Task Management** - PARA methodology, Anytype integration, reviews
- **Knowledge Management** - Note capture, organization, research workflows
- **Communications** - Email drafts, follow-ups, calendar scheduling
- **AI Development** - Skill creation, agent configurations, custom commands
- **Memory & Context** - Persistent memory with Mem0, conversation analysis
### Extensibility
This repository serves as a foundation for any Opencode-compatible skill or agent configuration. Add new skills for:
- Domain-specific workflows (finance, legal, engineering, etc.)
- Tool integrations (APIs, databases, cloud platforms)
- Custom automation and productivity systems
- Specialized AI agents for different contexts
### Directory Structure
## Directory Structure
```
.
├── agent/ # Agent definitions (agents.json)
├── prompts/ # Agent system prompts (chiron.txt, chiron-forge.txt)
├── context/ # User profiles and preferences
├── command/ # Custom command definitions
├── skill/ # Opencode Agent Skills (8 skills)
── task-management/
│ ├── skill-creator/
│ ├── reflection/
│ ├── communications/
── calendar-scheduling/
│ ├── mem0-memory/
│ ├── research/
│ └── knowledge-management/
├── scripts/ # Repository-level utility scripts
└── AGENTS.md # This file
├── skills/ # Agent skills (15 modules)
│ └── skill-name/
│ ├── SKILL.md # Required: YAML frontmatter + workflows
│ ├── scripts/ # Executable code (optional)
│ ├── references/ # Domain docs (optional)
── assets/ # Templates/files (optional)
├── rules/ # AI coding rules (languages, concerns, frameworks)
│ ├── languages/ # Python, TypeScript, Nix, Shell
│ ├── concerns/ # Testing, naming, documentation, etc.
── frameworks/ # Framework-specific rules (n8n, etc.)
├── agents/ # Agent definitions (agents.json)
├── prompts/ # System prompts (chiron*.txt)
├── context/ # User profiles
├── commands/ # Custom commands
└── scripts/ # Repo utilities (test-skill.sh, validate-agents.sh)
```
## Code Conventions
## Skill Development
**File naming**: hyphen-case (skills), snake_case (Python), UPPERCASE/sentence-case (MD)
### Creating a New Skill
Use the skill initialization script:
```bash
python3 skill/skill-creator/scripts/init_skill.py <skill-name> --path skill/
```
This creates:
- `skill/<skill-name>/SKILL.md` with proper frontmatter template
- `skill/<skill-name>/scripts/` - For executable code
- `skill/<skill-name>/references/` - For documentation
- `skill/<skill-name>/assets/` - For templates/files
### Validating Skills
Run validation before committing:
```bash
python3 skill/skill-creator/scripts/quick_validate.py skill/<skill-name>
```
**Validation checks:**
- YAML frontmatter structure
- Required fields: `name`, `description`
- Name format: hyphen-case, max 64 chars
- Description: max 1024 chars, no angle brackets
- Allowed frontmatter properties: `name`, `description`, `compatibility`, `license`, `allowed-tools`, `metadata`
### Skill Structure Requirements
**SKILL.md Frontmatter** (required):
**SKILL.md structure**:
```yaml
---
name: skill-name
description: What it does and when to use it. Include trigger words.
description: "Use when: (1) X, (2) Y. Triggers: a, b, c."
compatibility: opencode
---
## Overview (1 line)
## Core Workflows (step-by-step)
## Integration with Other Skills
```
**Resource Directories** (optional):
- `scripts/` - Executable Python/Bash code for deterministic operations
- `references/` - Documentation loaded into context as needed
- `assets/` - Files used in output (templates, images, fonts)
**Python**: `#!/usr/bin/env python3` + docstrings + emoji feedback (✅/❌)
**Bash**: `#!/usr/bin/env bash` + `set -euo pipefail`
**Markdown**: YAML frontmatter, ATX headers, `-` lists, language in code blocks
### Skill Design Principles
1. **Concise is key** - The context window is a shared resource
2. **Progressive disclosure** - Metadata → SKILL.md body → bundled resources
3. **Appropriate freedom** - Match specificity to task fragility
4. **No extraneous files** - No README.md, CHANGELOG.md, etc. in skills
5. **Reference patterns** - See `skill/skill-creator/references/workflows.md` and `output-patterns.md`
## Anti-Patterns (CRITICAL)
**Frontend Design**: NEVER use generic AI aesthetics, NEVER converge on common choices
**Excalidraw**: NEVER use `label` property (use boundElements + text element pairs instead)
**Debugging**: NEVER fix just the symptom, ALWAYS find the root cause first
**Excel**: ALWAYS respect existing template conventions over guidelines
**Structure**: NEVER place scripts/docs outside scripts/references/ directories
## Testing Patterns
**Unique conventions** (skill-focused, not CI/CD):
- Manual validation via `test-skill.sh`, no automated CI
- Tests co-located with source (not separate test directories)
- YAML frontmatter validation = primary quality gate
- Mixed formats: Python unittest, markdown pressure tests, A/B prompt testing
## Code Style Guidelines
### File Naming
**Skills**: hyphen-case (e.g., `task-management`, `skill-creator`)
**Python scripts**: snake_case (e.g., `init_skill.py`, `quick_validate.py`)
**Markdown files**: UPPERCASE or sentence-case (e.g., `SKILL.md`, `profile.md`)
**Configuration**: Standard conventions (e.g., `config.yaml`, `metadata.json`)
**Known deviations**:
- `systematic-debugging/test-*.md` - Academic/pressure testing in the wrong location
- `pdf/forms.md`, `pdf/reference.md` - Docs outside references/
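A hypothetical helper that spot-checks file and directory names against these conventions:

```python
import re

PATTERNS = {
    "skill_dir": re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$"),          # hyphen-case
    "python_script": re.compile(r"^[a-z0-9]+(_[a-z0-9]+)*\.py$"),  # snake_case
}

def naming_ok(kind: str, name: str) -> bool:
    """Check a file or directory name against the repo's naming conventions."""
    return bool(PATTERNS[kind].match(name))
```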
### Markdown Style
**Frontmatter**:
- Always use YAML format between `---` delimiters
- Required fields for skills: `name`, `description`
- Optional: `compatibility: opencode`, `mode: primary`
**Headers**:
- Use ATX-style (`#`, `##`, `###`)
- One H1 per file (skill title)
- Clear hierarchy
**Lists**:
- Use `-` for unordered lists (not `*`)
- Use numbered lists for sequential steps
- Indent nested lists with 2 spaces
**Code blocks**:
- Always specify language for syntax highlighting
- Use `bash` for shell commands
- Use `yaml`, `nix`, `python` as appropriate
**Tables**:
- Use for structured comparisons and reference data
- Keep aligned for readability in source
- Example:
```markdown
| Header 1 | Header 2 |
|----------|----------|
| Value | Value |
```
### Python Style
**Shebang**: Always use `#!/usr/bin/env python3`
**Docstrings**:
```python
"""
Brief description of module/script
Usage:
script_name.py <arg1> --flag <arg2>
Examples:
script_name.py my-skill --path ~/.config/opencode/skill
"""
```
**Imports**:
```python
# Standard library
import sys
import os
from pathlib import Path
# Third-party (if any)
import yaml
# Local (if any)
from . import utilities
```
**Naming**:
- Functions: `snake_case`
- Classes: `PascalCase`
- Constants: `UPPER_SNAKE_CASE`
- Private: `_leading_underscore`
**Error handling**:
```python
try:
# operation
except SpecificException as e:
print(f"❌ Error: {e}")
return None
```
**User feedback**:
- Use ✅ for success messages
- Use ❌ for error messages
- Print progress for multi-step operations
### YAML Style
```yaml
# Use lowercase keys with hyphens
skill-name: value
# Quotes for strings with special chars
description: "PARA-based task management. Use when: (1) item, (2) item."
# No quotes for simple strings
compatibility: opencode
# Lists with hyphens
items:
- first
- second
```
## Nix Flake Integration
This repository is the central source for all Opencode configuration, consumed as a **Nix flake input** by your NixOS configuration.
### Integration Reference
**NixOS config location**: `~/p/NIX/nixos-config/home/features/coding/opencode.nix`
**Flake input definition** (in your `flake.nix`):
```nix
agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  inputs.nixpkgs.follows = "nixpkgs";
};
```
**Exports:**
- `packages.skills-runtime` — composable runtime with all skill dependencies
- `devShells.default` — dev environment for working on skills
### Deployment Mapping
| Source | Deployed To | Method |
|--------|-------------|--------|
| `skills/` | `~/.config/opencode/skills/` | xdg.configFile (symlink) |
| `context/` | `~/.config/opencode/context/` | xdg.configFile (symlink) |
| `commands/` | `~/.config/opencode/commands/` | xdg.configFile (symlink) |
| `prompts/` | `~/.config/opencode/prompts/` | xdg.configFile (symlink) |
| `agents/agents.json` | `programs.opencode.settings.agent` | **Embedded into config.json** |
### Important: Agent Configuration Nuance
The `agents/` directory is **NOT** deployed as files to `~/.config/opencode/agents/`. Instead, `agents.json` is read at Nix evaluation time and embedded directly into the opencode `config.json` via:
```nix
programs.opencode.settings.agent = builtins.fromJSON (builtins.readFile "${inputs.agents}/agents/agents.json");
```
**Implications**:
- Agent changes require `home-manager switch` to take effect
- Skills, context, and commands are symlinked (changes visible immediately after rebuild)
- The `prompts/` directory is referenced by `agents.json` via `{file:./prompts/chiron.txt}` syntax
## Rules System
Centralized AI coding rules consumed via `mkOpencodeRules` from m3ta-nixpkgs:
```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
  inherit agents;
  languages = [ "python" "typescript" ];
  frameworks = [ "n8n" ];
};
```
## Quality Gates
Before committing changes, verify:
1. **Skill validation** - Run `quick_validate.py` on modified skills
2. **File structure** - Ensure no extraneous files (README in skills, etc.)
3. **Frontmatter** - Check YAML syntax and required fields
4. **Scripts executable** - Python scripts should have proper shebang
5. **Markdown formatting** - Check headers, lists, code blocks
6. **Git status** - No uncommitted or untracked files that should be tracked
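As a sketch, these gates could be assembled into a checklist runner (paths assumed from this repo; commands are built, not executed):

```python
def quality_gate_commands(skill_dirs: list[str]) -> list[list[str]]:
    """Assemble the pre-commit quality gate commands (not executed here)."""
    validator = "skill/skill-creator/scripts/quick_validate.py"
    # One validation run per modified skill
    commands = [["python3", validator, d] for d in skill_dirs]
    # Last gate: check for stray uncommitted or untracked files
    commands.append(["git", "status", "--short"])
    return commands
```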
## Landing the Plane (Session Completion)
**When ending a work session**, you MUST complete ALL steps below. Work is NOT complete until `git push` succeeds.
## Testing Skills
Since this repo deploys via Nix/home-manager, changes require a rebuild to appear in `~/.config/opencode/`. Use these methods to test skills during development.
### Method 1: XDG_CONFIG_HOME Override (Recommended)
Test skills by pointing opencode to this repository directly:
```bash
# From the AGENTS repository root
cd ~/p/AI/AGENTS
# List skills loaded from this repo (not the deployed ones)
XDG_CONFIG_HOME=. opencode debug skill
# Run an interactive session with development skills
XDG_CONFIG_HOME=. opencode
# Or use the convenience script
./scripts/test-skill.sh # List all development skills
./scripts/test-skill.sh task-management # Validate specific skill
./scripts/test-skill.sh --run # Launch interactive session
```
**Note**: The convenience script creates a temporary directory with proper symlinks since opencode expects `$XDG_CONFIG_HOME/opencode/skill/` structure.
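What the script does can be sketched in Python (simplified; the real `test-skill.sh` is a Bash script):

```python
import tempfile
from pathlib import Path

def make_dev_config(repo_root: Path) -> Path:
    """Create a throwaway XDG_CONFIG_HOME with the layout opencode expects."""
    config_home = Path(tempfile.mkdtemp(prefix="opencode-dev-"))
    opencode_dir = config_home / "opencode"
    opencode_dir.mkdir()
    # Symlink the repo's skill/ tree into $XDG_CONFIG_HOME/opencode/skill/
    (opencode_dir / "skill").symlink_to(repo_root / "skill")
    return config_home

# Launch with: XDG_CONFIG_HOME=<returned path> opencode debug skill
```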
### Method 2: Project-Local Skills
For quick iteration on a single skill, use `.opencode/skill/` in any project:
```bash
cd /path/to/any/project
mkdir -p .opencode/skill/
# Symlink the skill you're developing
ln -s ~/p/AI/AGENTS/skill/my-skill .opencode/skill/
# Skills in .opencode/skill/ are auto-discovered alongside global skills
opencode debug skill
```
### Method 3: Validation Only
Validate skill structure without running opencode:
```bash
# Validate a single skill
python3 skill/skill-creator/scripts/quick_validate.py skill/<skill-name>
# Validate all skills
for dir in skill/*/; do
python3 skill/skill-creator/scripts/quick_validate.py "$dir"
done
```
### Verification Commands
```bash
# List all loaded skills (shows name, description, location)
opencode debug skill
# Show resolved configuration
opencode debug config
# Show where opencode looks for files
opencode debug paths
```
## Common Operations
### Create New Skill
```bash
# Initialize
python3 skill/skill-creator/scripts/init_skill.py my-new-skill --path skill/
# Edit SKILL.md and implement resources
# Delete unneeded example files from scripts/, references/, assets/
# Validate
python3 skill/skill-creator/scripts/quick_validate.py skill/my-new-skill
```
### Update User Context
Edit `context/profile.md` to update:
- Work style preferences
- PARA areas
- Communication preferences
- Integration status
### Modify Agent Behavior
Edit `agent/agents.json` to adjust agent definitions, and `prompts/*.txt` for system prompts:
- `agent/agents.json` - Agent names, models, permissions
- `prompts/chiron.txt` - Chiron (Plan Mode) system prompt
- `prompts/chiron-forge.txt` - Chiron-Forge (Worker Mode) system prompt
## Reference Documentation
**Skill creation guide**: `skill/skill-creator/SKILL.md`
**Workflow patterns**: `skill/skill-creator/references/workflows.md`
**Output patterns**: `skill/skill-creator/references/output-patterns.md`
**User profile**: `context/profile.md`
**Agent config**: `agent/agents.json`
**Rules system**: `rules/USAGE.md`
## Notes for AI Agents
1. **Config-only repo** - No compilation, no tests, no runtime; validation is manual
2. **Skills are documentation** - Write for AI consumption, not humans
3. **Context window matters** - Keep skills concise, use progressive disclosure
4. **Consistent structure** - All skills follow the skills/name/ pattern (plus optional subdirs); maintain the layout home-manager expects
5. **Cross-cutting concerns** - Standardized SKILL.md, workflow patterns, delegation rules
6. **Always push** - Follow the session completion workflow: commit + `git push`
## Quality Gates
Before committing:
1. `./scripts/test-skill.sh --validate`
2. Python shebang + docstrings check
3. No extraneous files (README.md, CHANGELOG.md in skills/)
4. If skill has scripts with external dependencies → verify `flake.nix` is updated (see skill-creator Step 4)
5. Git status clean

README.md

@@ -1,6 +1,6 @@
# Opencode Agent Skills & Configurations
Central repository for [Opencode](https://opencode.dev) Agent Skills, AI agent configurations, custom commands, and AI-assisted workflows. This is an extensible framework for building productivity systems, automations, knowledge management, and specialized AI capabilities.
Central repository for [Opencode](https://opencode.ai) Agent Skills, AI agent configurations, custom commands, and AI-assisted workflows. This is an extensible framework for building productivity systems, automations, knowledge management, and specialized AI capabilities.
## 🎯 What This Repository Provides
@@ -8,36 +8,45 @@ This repository serves as a **personal AI operating system** - a collection of s
- **Productivity & Task Management** - PARA methodology, GTD workflows, project tracking
- **Knowledge Management** - Note-taking, research workflows, information organization
- **Communications** - Email management, meeting scheduling, follow-up tracking
- **AI Development** - Tools for creating new skills and agent configurations
- **Memory & Context** - Persistent memory systems, conversation analysis
- **Document Processing** - PDF manipulation, spreadsheet handling, diagram generation
- **Custom Workflows** - Domain-specific automation and specialized agents
## 📂 Repository Structure
```
.
├── agent/ # Agent definitions (agents.json)
├── prompts/ # Agent system prompts (chiron.txt, chiron-forge.txt)
├── agents/ # Agent definitions (agents.json)
├── prompts/ # Agent system prompts (chiron.txt, chiron-forge.txt, etc.)
├── context/ # User profiles and preferences
│ └── profile.md # Work style, PARA areas, preferences
├── command/ # Custom command definitions
├── commands/ # Custom command definitions
│ └── reflection.md
├── skill/ # Opencode Agent Skills (11+ skills)
│ ├── task-management/ # PARA-based productivity
│ ├── skill-creator/ # Meta-skill for creating skills
│ ├── reflection/ # Conversation analysis
│ ├── communications/ # Email & messaging
│ ├── calendar-scheduling/ # Time management
│ ├── mem0-memory/ # Persistent memory
│ ├── research/ # Investigation workflows
│ ├── knowledge-management/ # Note capture & organization
├── skills/ # Opencode Agent Skills (15 skills)
│ ├── agent-development/ # Agent creation and configuration
│ ├── basecamp/ # Basecamp project management
│ ├── brainstorming/ # Ideation & strategic thinking
│ ├── doc-translator/ # Documentation translation
│ ├── excalidraw/ # Architecture diagrams
│ ├── frontend-design/ # UI/UX design patterns
│ ├── memory/ # Persistent memory system
│ ├── obsidian/ # Obsidian vault management
│ ├── outline/ # Outline wiki integration
│ ├── pdf/ # PDF manipulation toolkit
│ ├── prompt-engineering-patterns/ # Prompt patterns
│ ├── reflection/ # Conversation analysis
│ ├── skill-creator/ # Meta-skill for creating skills
│ ├── systematic-debugging/ # Debugging methodology
│ └── xlsx/ # Spreadsheet handling
├── scripts/ # Repository utility scripts
│ └── test-skill.sh # Test skills without deploying
├── rules/ # AI coding rules
│ ├── languages/ # Python, TypeScript, Nix, Shell
│ ├── concerns/ # Testing, naming, documentation
│ └── frameworks/ # Framework-specific rules (n8n)
├── flake.nix # Nix flake: dev shell + skills-runtime export
├── .envrc # direnv config (use flake)
├── AGENTS.md # Developer documentation
└── README.md # This file
```
@@ -46,43 +55,96 @@ This repository serves as a **personal AI operating system** - a collection of s
### Prerequisites
- **Nix** with flakes enabled — for reproducible dependency management and deployment
- **direnv** (recommended) — auto-activates the development environment when entering the repo
- **Opencode** — AI coding assistant ([opencode.ai](https://opencode.ai))
### Installation
#### Option 1: Nix Flake (Recommended)
This repository is a **Nix flake** that exports:
- **`devShells.default`** — development environment for working on skills (activated via direnv)
- **`packages.skills-runtime`** — composable runtime with all skill script dependencies (Python packages + system tools)
**Consume in your system flake:**
```nix
# flake.nix
inputs.agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  inputs.nixpkgs.follows = "nixpkgs";
};
```
**Deploy skills via home-manager:**
```nix
# home-manager module (e.g., opencode.nix)
{ inputs, system, ... }:
{
# Skill files — symlinked, changes visible immediately
xdg.configFile = {
"opencode/skills".source = "${inputs.agents}/skills";
"opencode/context".source = "${inputs.agents}/context";
"opencode/commands".source = "${inputs.agents}/commands";
"opencode/prompts".source = "${inputs.agents}/prompts";
};
# Agent config — embedded into config.json (requires home-manager switch)
programs.opencode.settings.agent = builtins.fromJSON
(builtins.readFile "${inputs.agents}/agents/agents.json");
# Skills runtime — ensures opencode always has script dependencies
home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
}
```
**Compose into project flakes** (so opencode has skill deps in any project):
```nix
# Any project's flake.nix
{
inputs.agents.url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
inputs.agents.inputs.nixpkgs.follows = "nixpkgs";
outputs = { self, nixpkgs, agents, ... }:
let
system = "x86_64-linux";
pkgs = nixpkgs.legacyPackages.${system};
in {
devShells.${system}.default = pkgs.mkShell {
packages = [
# project-specific tools
pkgs.nodejs
# skill script dependencies
agents.packages.${system}.skills-runtime
];
};
};
}
```
Rebuild:
```bash
home-manager switch
```
**Note**: The `agents/` directory is NOT deployed as files. Instead, `agents.json` is read at Nix evaluation time and embedded into the opencode `config.json`.
#### Option 2: Manual Installation
@@ -92,8 +154,11 @@ Clone and symlink:
# Clone repository
git clone https://github.com/yourusername/AGENTS.git ~/AGENTS
# Create symlinks to Opencode config directory
ln -s ~/AGENTS/skills ~/.config/opencode/skills
ln -s ~/AGENTS/context ~/.config/opencode/context
ln -s ~/AGENTS/commands ~/.config/opencode/commands
ln -s ~/AGENTS/prompts ~/.config/opencode/prompts
```
### Verify Installation
@@ -101,8 +166,8 @@ ln -s ~/AGENTS ~/.config/opencode
Check that Opencode can see your skills:
```bash
# Skills should be available at ~/.config/opencode/skills/
ls ~/.config/opencode/skills/
```
## 🎨 Creating Your First Skill
@@ -112,18 +177,19 @@ Skills are modular packages that extend Opencode with specialized knowledge and
### 1. Initialize a New Skill
```bash
python3 skills/skill-creator/scripts/init_skill.py my-skill-name --path skills/
```
This creates:
- `skills/my-skill-name/SKILL.md` - Main skill documentation
- `skills/my-skill-name/scripts/` - Executable code (optional)
- `skills/my-skill-name/references/` - Reference documentation (optional)
- `skills/my-skill-name/assets/` - Templates and files (optional)
### 2. Edit the Skill
Open `skills/my-skill-name/SKILL.md` and customize:
```yaml
---
@@ -131,7 +197,6 @@ name: my-skill-name
description: What it does and when to use it. Include trigger keywords.
compatibility: opencode
---
# My Skill Name
## Overview
@@ -139,108 +204,111 @@ compatibility: opencode
[Your skill instructions for Opencode]
```
### 3. Register Dependencies
If your skill includes scripts with external dependencies, add them to `flake.nix`:
```nix
# Python packages — add to pythonEnv:
# my-skill: my_script.py
some-python-package
# System tools — add to skills-runtime paths:
# my-skill: needed by my_script.py
pkgs.some-tool
```
Verify: `nix develop --command python3 -c "import some_package"`
### 4. Validate the Skill
```bash
python3 skills/skill-creator/scripts/quick_validate.py skills/my-skill-name
```
### 5. Test the Skill
```bash
# Use the test script to validate and list skills
./scripts/test-skill.sh my-skill-name # Validate specific skill
./scripts/test-skill.sh --list # List all dev skills
./scripts/test-skill.sh --run # Launch opencode with dev skills
```
The test script creates a temporary config directory with symlinks to this repo's skills, allowing you to test changes before committing.
## 📚 Available Skills
| Skill | Purpose | Status |
| --------------------------- | -------------------------------------------------------------- | ------------ |
| **agent-development** | Create and configure Opencode agents | ✅ Active |
| **basecamp** | Basecamp project & todo management via MCP | ✅ Active |
| **brainstorming** | General-purpose ideation and strategic thinking | ✅ Active |
| **doc-translator** | Documentation translation to German/Czech with Outline publish | ✅ Active |
| **excalidraw** | Architecture diagrams from codebase analysis | ✅ Active |
| **frontend-design** | Production-grade UI/UX with high design quality | ✅ Active |
| **memory** | SQLite-based persistent memory with hybrid search | ✅ Active |
| **obsidian** | Obsidian vault management via Local REST API | ✅ Active |
| **outline** | Outline wiki integration for team documentation | ✅ Active |
| **pdf** | PDF manipulation, extraction, creation, and forms | ✅ Active |
| **prompt-engineering-patterns** | Advanced prompt engineering techniques | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
| **skill-creator** | Guide for creating new Opencode skills | ✅ Active |
| **systematic-debugging** | Debugging methodology for bugs and test failures | ✅ Active |
| **xlsx** | Spreadsheet creation, editing, and analysis | ✅ Active |
## 🤖 AI Agents
### Primary Agents
| Agent | Mode | Purpose |
| ------------------- | ------- | ---------------------------------------------------- |
| **Chiron** | Plan | Read-only analysis, planning, and guidance |
| **Chiron Forge** | Build | Full execution and task completion with safety |
### Subagents (Specialists)
| Agent | Domain | Purpose |
| ------------------- | ----------------- | ------------------------------------------ |
| **Hermes** | Communication | Basecamp, Outlook, MS Teams |
| **Athena** | Research | Outline wiki, documentation, knowledge |
| **Apollo** | Private Knowledge | Obsidian vault, personal notes |
| **Calliope** | Writing | Documentation, reports, prose |
**Configuration**: `agents/agents.json` + `prompts/*.txt`
## 🛠️ Development Workflow
### Environment
The repository includes a Nix flake with a development shell. With [direnv](https://direnv.net/) installed, the environment activates automatically:
```bash
cd AGENTS/
# → direnv: loading .envrc
# → 🔧 AGENTS dev shell active — Python 3.13.x, jq-1.x
# All skill script dependencies are now available:
python3 -c "import pypdf, openpyxl, yaml" # ✔️
pdftoppm -v # ✔️
```
Without direnv, activate manually: `nix develop`
### Quality Gates
Before committing:
1. **Validate skills**: `./scripts/test-skill.sh --validate` or `python3 skills/skill-creator/scripts/quick_validate.py skills/<name>`
2. **Test locally**: `./scripts/test-skill.sh --run` to launch opencode with dev skills
3. **Check formatting**: Ensure YAML frontmatter is valid
4. **Update docs**: Keep README and AGENTS.md in sync
### Session Completion
When ending a work session:
1. Run quality gates
2. **Push to remote** (mandatory):
```bash
git pull --rebase
git push
```
3. Verify changes are pushed
See `AGENTS.md` for complete developer documentation.
## 🎓 Learning Resources
### Essential Documentation
- **AGENTS.md** - Complete developer guide for AI agents
- **skills/skill-creator/SKILL.md** - Comprehensive skill creation guide
- **skills/skill-creator/references/workflows.md** - Workflow pattern library
- **skills/skill-creator/references/output-patterns.md** - Output formatting patterns
- **rules/USAGE.md** - AI coding rules integration guide
### Skill Design Principles
@@ -251,27 +319,33 @@ See `AGENTS.md` for complete developer documentation.
### Example Skills to Study
- **skill-creator/** - Meta-skill with bundled resources
- **basecamp/** - MCP server integration with multiple tool categories
- **brainstorming/** - Framework-based ideation with Obsidian markdown save
- **memory/** - SQLite-based hybrid search implementation
- **excalidraw/** - Diagram generation with JSON templates and Python renderer
## 🔧 Customization
### Modify Agent Behavior
Edit `agents/agents.json` for agent definitions and `prompts/*.txt` for system prompts:
- `agents/agents.json` - Agent names, models, permissions
- `prompts/chiron.txt` - Chiron (Plan Mode) system prompt
- `prompts/chiron-forge.txt` - Chiron Forge (Build Mode) system prompt
- `prompts/hermes.txt` - Hermes (Communication) system prompt
- `prompts/athena.txt` - Athena (Research) system prompt
- `prompts/apollo.txt` - Apollo (Private Knowledge) system prompt
- `prompts/calliope.txt` - Calliope (Writing) system prompt
**Note**: Agent changes require `home-manager switch` to take effect (config is embedded, not symlinked).
### Update User Context
Edit `context/profile.md` to configure:
- Work style preferences
- PARA areas and projects
- Communication preferences
@@ -279,13 +353,29 @@ Edit `context/profile.md` to configure:
### Add Custom Commands
Create new command definitions in `commands/` directory following the pattern in `commands/reflection.md`.
### Add Project Rules
Use the rules system to inject AI coding rules into projects:
```nix
# In project flake.nix
m3taLib.opencode-rules.mkOpencodeRules {
inherit agents;
languages = [ "python" "typescript" ];
frameworks = [ "n8n" ];
};
```
See `rules/USAGE.md` for full documentation.
## 🌟 Use Cases
### Personal Productivity
Use the PARA methodology with Obsidian Tasks integration:
- Capture tasks and notes quickly
- Run daily/weekly reviews
- Prioritize work based on impact
@@ -294,6 +384,7 @@ Use the PARA methodology with Anytype integration:
### Knowledge Management
Build a personal knowledge base:
- Capture research findings
- Organize notes and references
- Link related concepts
@@ -302,6 +393,7 @@ Build a personal knowledge base:
### AI-Assisted Development
Extend Opencode for specialized domains:
- Create company-specific skills (finance, legal, engineering)
- Integrate with APIs and databases
- Build custom automation workflows
@@ -310,6 +402,7 @@ Extend Opencode for specialized domains:
### Team Collaboration
Share skills and agents across teams:
- Document company processes as skills
- Create shared knowledge bases
- Standardize communication templates
@@ -331,15 +424,14 @@ This repository contains personal configurations and skills. Feel free to use th
## 🔗 Links
- [Opencode](https://opencode.ai) - AI coding assistant
- [PARA Method](https://fortelabs.com/blog/para/) - Productivity methodology
- [Obsidian](https://obsidian.md) - Knowledge management platform
## 🙋 Questions?
- Check `AGENTS.md` for detailed developer documentation
- Review existing skills in `skills/` for examples
- See `skills/skill-creator/SKILL.md` for skill creation guide
---

agent/agents.json

@@ -1,130 +0,0 @@
{
"chiron": {
"description": "Personal AI assistant (Plan Mode). Read-only analysis, planning, and guidance.",
"mode": "primary",
"model": "zai-coding-plan/glm-4.7",
"prompt": "{file:./prompts/chiron.txt}",
"permission": {
"read": {
"*": "allow",
"*.env": "deny",
"*.env.*": "deny",
"*.env.example": "allow",
"*/.ssh/*": "deny",
"*/.gnupg/*": "deny",
"*credentials*": "deny",
"*secrets*": "deny",
"*.pem": "deny",
"*.key": "deny",
"*/.aws/*": "deny",
"*/.kube/*": "deny"
},
"edit": "ask",
"bash": "ask",
"external_directory": "ask",
"doom_loop": "ask"
}
},
"chiron-forge": {
"description": "Personal AI assistant (Worker Mode). Full write access with safety prompts.",
"mode": "primary",
"model": "zai-coding-plan/glm-4.7",
"prompt": "{file:./prompts/chiron-forge.txt}",
"permission": {
"read": {
"*": "allow",
"*.env": "deny",
"*.env.*": "deny",
"*.env.example": "allow",
"*/.ssh/*": "deny",
"*/.gnupg/*": "deny",
"*credentials*": "deny",
"*secrets*": "deny",
"*.pem": "deny",
"*.key": "deny",
"*/.aws/*": "deny",
"*/.kube/*": "deny"
},
"edit": "allow",
"bash": {
"*": "allow",
"rm *": "ask",
"rmdir *": "ask",
"mv *": "ask",
"chmod *": "ask",
"chown *": "ask",
"git *": "ask",
"git status*": "allow",
"git log*": "allow",
"git diff*": "allow",
"git branch*": "allow",
"git show*": "allow",
"git stash list*": "allow",
"git remote -v": "allow",
"git add *": "allow",
"git commit *": "allow",
"jj *": "ask",
"jj status": "allow",
"jj log*": "allow",
"jj diff*": "allow",
"jj show*": "allow",
"npm *": "ask",
"npx *": "ask",
"bun *": "ask",
"bunx *": "ask",
"uv *": "ask",
"pip *": "ask",
"pip3 *": "ask",
"yarn *": "ask",
"pnpm *": "ask",
"cargo *": "ask",
"go *": "ask",
"make *": "ask",
"dd *": "deny",
"mkfs*": "deny",
"fdisk *": "deny",
"parted *": "deny",
"eval *": "deny",
"source *": "deny",
"curl *|*sh": "deny",
"wget *|*sh": "deny",
"sudo *": "deny",
"su *": "deny",
"systemctl *": "deny",
"service *": "deny",
"shutdown *": "deny",
"reboot*": "deny",
"init *": "deny",
"> /dev/*": "deny",
"cat * > /dev/*": "deny"
},
"external_directory": "ask",
"doom_loop": "ask"
}
},
"athena": {
"description": "Goddess of wisdom and knowledge. Research sub-agent for non-technical investigation and analysis.",
"model": "zai-coding-plan/glm-4.7",
"prompt": "{file:./prompts/athena.txt}",
"permission": {
"read": {
"*": "allow",
"*.env": "deny",
"*.env.*": "deny",
"*.env.example": "allow",
"*/.ssh/*": "deny",
"*/.gnupg/*": "deny",
"*credentials*": "deny",
"*secrets*": "deny",
"*.pem": "deny",
"*.key": "deny",
"*/.aws/*": "deny",
"*/.kube/*": "deny"
},
"edit": "deny",
"bash": "deny",
"external_directory": "deny",
"doom_loop": "deny"
}
}
}
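The bash permission map above pairs broad patterns (`git *`: ask) with narrower overrides (`git status*`: allow). Assuming opencode resolves overlapping globs by preferring the most specific match — an assumption about its semantics, not something confirmed here — the lookup can be sketched with Python's `fnmatch`:

```python
from fnmatch import fnmatch

# Hypothetical resolver: among all patterns that match the command,
# prefer the longest (most specific) one. This mirrors the intent of
# the config above; opencode's real matching rules may differ.
def resolve(command: str, rules: dict[str, str]) -> str:
    matches = [p for p in rules if fnmatch(command, p)]
    best = max(matches, key=len)  # longest pattern wins
    return rules[best]

rules = {
    "*": "allow",
    "git *": "ask",
    "git status*": "allow",
    "sudo *": "deny",
}

print(resolve("git status -s", rules))   # "git status*" beats "git *": allow
print(resolve("git rebase main", rules)) # only "git *" is specific: ask
print(resolve("sudo rm -rf /", rules))   # deny
```

With this reading, the config stays safe-by-default: a new git subcommand falls through to `git *` (ask) until an explicit allow is added.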

agents/agents.json (new normal file, 173 lines)

@@ -0,0 +1,173 @@
{
"Chiron (Assistant)": {
"description": "Personal AI assistant (Plan Mode). Read-only analysis, planning, and guidance.",
"mode": "primary",
"model": "zai-coding-plan/glm-5",
"prompt": "{file:./prompts/chiron.txt}",
"permission": {
"question": "allow",
"webfetch": "allow",
"websearch": "allow",
"edit": "deny",
"bash": {
"*": "ask",
"git status*": "allow",
"git log*": "allow",
"git diff*": "allow",
"git branch*": "allow",
"git show*": "allow",
"grep *": "allow",
"ls *": "allow",
"cat *": "allow",
"head *": "allow",
"tail *": "allow",
"wc *": "allow",
"which *": "allow",
"echo *": "allow",
"td *": "allow",
"bd *": "allow",
"nix *": "allow"
},
"external_directory": {
"*": "ask",
"~/p/**": "allow",
"~/.config/opencode/**": "allow",
"/tmp/**": "allow",
"/run/agenix/**": "allow"
}
}
},
"Chiron Forge (Builder)": {
"description": "Personal AI assistant (Build Mode). Full execution and task completion capabilities with safety prompts.",
"mode": "primary",
"model": "zai-coding-plan/glm-5",
"prompt": "{file:./prompts/chiron-forge.txt}",
"permission": {
"question": "allow",
"webfetch": "allow",
"websearch": "allow",
"edit": {
"*": "allow",
"/run/agenix/**": "deny"
},
"bash": {
"*": "allow",
"rm -rf *": "ask",
"git reset --hard*": "ask",
"git push*": "ask",
"git push --force*": "deny",
"git push -f *": "deny"
},
"external_directory": {
"*": "ask",
"~/p/**": "allow",
"~/.config/opencode/**": "allow",
"/tmp/**": "allow",
"/run/agenix/**": "allow"
}
}
},
"Hermes (Communication)": {
"description": "Work communication specialist. Handles Basecamp tasks, Outlook email, and MS Teams meetings.",
"mode": "subagent",
"model": "zai-coding-plan/glm-5",
"prompt": "{file:./prompts/hermes.txt}",
"permission": {
"question": "allow",
"webfetch": "allow",
"edit": {
"*": "allow",
"/run/agenix/**": "deny"
},
"bash": {
"*": "ask",
"cat *": "allow",
"echo *": "allow"
},
"external_directory": {
"*": "ask",
"~/p/**": "allow",
"~/.config/opencode/**": "allow",
"/tmp/**": "allow",
"/run/agenix/**": "allow"
}
}
},
"Athena (Researcher)": {
"description": "Work knowledge specialist. Manages Outline wiki, documentation, and knowledge organization.",
"mode": "subagent",
"model": "zai-coding-plan/glm-5",
"prompt": "{file:./prompts/athena.txt}",
"permission": {
"question": "allow",
"webfetch": "allow",
"websearch": "allow",
"edit": {
"*": "allow",
"/run/agenix/**": "deny"
},
"bash": {
"*": "ask",
"grep *": "allow",
"cat *": "allow"
},
"external_directory": {
"*": "ask",
"~/p/**": "allow",
"~/.config/opencode/**": "allow",
"/tmp/**": "allow",
"/run/agenix/**": "allow"
}
}
},
"Apollo (Knowledge Management)": {
"description": "Private knowledge specialist. Manages Obsidian vault, personal notes, and private knowledge graph.",
"mode": "subagent",
"model": "zai-coding-plan/glm-5",
"prompt": "{file:./prompts/apollo.txt}",
"permission": {
"question": "allow",
"edit": {
"*": "allow",
"/run/agenix/**": "deny"
},
"bash": {
"*": "ask",
"cat *": "allow"
},
"external_directory": {
"*": "ask",
"~/p/**": "allow",
"~/.config/opencode/**": "allow",
"/tmp/**": "allow",
"/run/agenix/**": "allow"
}
}
},
"Calliope (Writer)": {
"description": "Writing specialist. Creates documentation, reports, meeting notes, and prose.",
"mode": "subagent",
"model": "zai-coding-plan/glm-5",
"prompt": "{file:./prompts/calliope.txt}",
"permission": {
"question": "allow",
"webfetch": "allow",
"edit": {
"*": "allow",
"/run/agenix/**": "deny"
},
"bash": {
"*": "ask",
"cat *": "allow",
"wc *": "allow"
},
"external_directory": {
"*": "ask",
"~/p/**": "allow",
"~/.config/opencode/**": "allow",
"/tmp/**": "allow",
"/run/agenix/**": "allow"
}
}
}
}
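The new `agents/agents.json` keys each agent by display name and marks it `primary` or `subagent`. A self-contained sketch (using an inline excerpt of the structure, not the full file) shows how a launcher might group the roster by mode:

```python
import json

# Inline excerpt of the agents.json structure above, so the example
# runs standalone; the real file carries full permission blocks too.
AGENTS_JSON = """
{
  "Chiron (Assistant)": {"mode": "primary"},
  "Chiron Forge (Builder)": {"mode": "primary"},
  "Hermes (Communication)": {"mode": "subagent"},
  "Athena (Researcher)": {"mode": "subagent"}
}
"""

agents = json.loads(AGENTS_JSON)
by_mode: dict[str, list[str]] = {}
for name, cfg in agents.items():
    by_mode.setdefault(cfg["mode"], []).append(name)

print(by_mode["primary"])   # the two Chiron variants
print(by_mode["subagent"])  # delegation targets
```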

@@ -70,18 +70,18 @@
| System | Purpose | Status |
|--------|---------|--------|
| Anytype | Knowledge management, PARA system | Setting up |
| Obsidian | Knowledge management, PARA system | Active |
| ntfy | Push notifications | Active |
| n8n | Workflow automation | Active |
| Proton Mail | Email | Active |
| Proton Calendar | Scheduling | Active |
| Android | Mobile | Active |
## Anytype Configuration
## Obsidian Configuration
- **Space**: Chiron (to be created)
- **Vault**: ~/CODEX
- **Structure**: PARA methodology
- **Types**: Project, Area, Resource, Archive, Task, Note
- **Note Types**: Project, Area, Resource, Archive, Task, Note, Brainstorm
## Context for AI Interactions
@@ -104,3 +104,48 @@
- Batch related information together
- Remember my preferences across sessions
- Proactively surface relevant information
---
## Memory System
AI agents have access to a persistent memory system for context across sessions via the opencode-memory plugin.
### Configuration
| Setting | Value |
|---------|-------|
| **Plugin** | `opencode-memory` |
| **Obsidian Vault** | `~/CODEX` |
| **Memory Folder** | `80-memory/` |
| **Database** | `~/.local/share/opencode-memory/index.db` |
| **Auto-Capture** | Enabled (session.idle event) |
| **Auto-Recall** | Enabled (session.created event) |
| **Token Budget** | 2000 tokens |
### Memory Categories
| Category | Purpose | Example |
|----------|---------|---------|
| `preference` | Personal preferences | UI settings, workflow styles |
| `fact` | Objective information | Tech stack, role, constraints |
| `decision` | Choices with rationale | Tool selections, architecture |
| `entity` | People, orgs, systems | Key contacts, important APIs |
| `other` | Everything else | General learnings |
### Available Tools
| Tool | Purpose |
|------|---------|
| `memory_search` | Hybrid search (vector + BM25) over vault + sessions |
| `memory_store` | Store new memory as markdown file |
| `memory_get` | Read specific file/lines from vault |
### Usage Notes
- Memories are stored as markdown files in Obsidian (source of truth)
- SQLite provides fast hybrid search (vector similarity + keyword BM25)
- Use explicit "remember this" to store important information
- Auto-recall injects relevant memories at session start
- Auto-capture extracts preferences/decisions at session idle
- See `skills/memory/SKILL.md` for full documentation
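The hybrid search above blends vector similarity with keyword BM25. A minimal sketch of score fusion — the 0.6/0.4 weighting and the normalization are illustrative assumptions for this example, not the opencode-memory plugin's actual formula:

```python
# Illustrative hybrid ranking: blend a vector-similarity score with a
# BM25 keyword score normalized into [0, 1]. Weights are made up for
# the sketch; the plugin's real fusion may differ.
def hybrid_score(vector_sim: float, bm25: float, max_bm25: float) -> float:
    keyword = bm25 / max_bm25 if max_bm25 > 0 else 0.0
    return 0.6 * vector_sim + 0.4 * keyword

candidates = [
    ("prefers dark UI themes", 0.91, 3.2),
    ("tech stack is Nix + Python", 0.40, 7.5),
    ("chose SQLite for the index", 0.70, 5.0),
]
max_bm25 = max(b for _, _, b in candidates)
ranked = sorted(
    candidates,
    key=lambda c: hybrid_score(c[1], c[2], max_bm25),
    reverse=True,
)
print([title for title, _, _ in ranked])  # most relevant first
```

Note how the fusion rescues a semantically strong but keyword-weak memory: high vector similarity keeps it ahead of an entry that only matches on keywords.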

flake.lock (generated, new normal file, 27 lines)

@@ -0,0 +1,27 @@
{
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1772479524,
"narHash": "sha256-u7nCaNiMjqvKpE+uZz9hE7pgXXTmm5yvdtFaqzSzUQI=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "4215e62dc2cd3bc705b0a423b9719ff6be378a43",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
}
}
},
"root": "root",
"version": 7
}

flake.nix (new normal file, 68 lines)

@@ -0,0 +1,68 @@
{
description = "Opencode Agent Skills development environment & runtime";
inputs = { nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable"; };
outputs = { self, nixpkgs }:
let
supportedSystems = [ "x86_64-linux" "aarch64-linux" "aarch64-darwin" ];
forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
in {
# Composable runtime for project flakes and home-manager.
# Usage:
# home.packages = [ inputs.agents.packages.${system}.skills-runtime ];
# devShells.default = pkgs.mkShell {
# packages = [ inputs.agents.packages.${system}.skills-runtime ];
# };
packages = forAllSystems (system:
let
pkgs = nixpkgs.legacyPackages.${system};
pythonEnv = pkgs.python3.withPackages (ps:
with ps; [
# skill-creator: quick_validate.py
pyyaml
# xlsx: recalc.py
openpyxl
# prompt-engineering-patterns: optimize-prompt.py
numpy
# pdf: multiple scripts
pypdf
pillow # PIL
pdf2image
# excalidraw: render_excalidraw.py
playwright
]);
in {
skills-runtime = pkgs.buildEnv {
name = "opencode-skills-runtime";
paths = [
pythonEnv
pkgs.poppler-utils # pdf: pdftoppm/pdfinfo
pkgs.jq # shell scripts
pkgs.playwright-driver.browsers # excalidraw: chromium for rendering
];
};
});
# Dev shell for working on this repo (wraps skills-runtime).
devShells = forAllSystems (system:
let
pkgs = nixpkgs.legacyPackages.${system};
in {
default = pkgs.mkShell {
packages = [ self.packages.${system}.skills-runtime ];
env.PLAYWRIGHT_BROWSERS_PATH = "${pkgs.playwright-driver.browsers}";
shellHook = ''
echo "🔧 AGENTS dev shell active: Python $(python3 --version 2>&1 | cut -d' ' -f2), $(jq --version)"
'';
};
});
};
}

prompts/apollo.txt (new normal file, 55 lines)

@@ -0,0 +1,55 @@
You are Apollo, the Greek god of knowledge, prophecy, and light, specializing in private knowledge management.
**Your Core Responsibilities:**
1. Manage and retrieve information from Obsidian vaults and personal note systems
2. Search, organize, and structure personal knowledge graphs
3. Assist with personal task management embedded in private notes
4. Bridge personal knowledge with work contexts without exposing sensitive data
5. Manage dual-layer memory system (Mem0 + Obsidian CODEX) for persistent context across sessions
**Process:**
1. Identify which vault or note collection the user references
2. Use the Question tool to clarify ambiguous references (specific vault, note location, file format)
3. Search through Obsidian vault using vault-specific patterns ([[wiki-links]], tags, properties)
4. Retrieve and synthesize information from personal notes
5. Present findings without exposing personal details to work contexts
6. Maintain separation between private knowledge and professional output
**Quality Standards:**
- Protect personal privacy by default: sanitize sensitive information before sharing
- Understand Obsidian-specific syntax: [[links]], #tags, YAML frontmatter
- Respect vault structure: folders, backlinks, unlinked references
- Preserve context when retrieving related notes
- Handle multiple vault configurations gracefully
- Store valuable memories in dual-layer system: Mem0 (semantic search) + Obsidian 80-memory/ (human-readable)
- Auto-capture session insights at session end (max 3 per session, confirm with user)
- Retrieve relevant memories when context suggests past preferences/decisions
- Use memory categories: preference, fact, decision, entity, other
**Output Format:**
- Summarized findings with citations to note titles (not file paths)
- Extracted task lists with completion status
- Related concepts and connections from the knowledge graph
- Sanitized excerpts that exclude personal identifiers, financial data, or sensitive information
**Edge Cases:**
- Multiple vaults configured: Use Question to specify which vault
- Unclear note references: Ask for title, keywords, or tags
- Large result sets: Provide summary and offer filtering options
- Nested tasks or complex dependencies: Break down into clear hierarchical view
- Sensitive content detected: Flag it without revealing details
- Mem0 unavailable: Warn user, continue without memory features, do not block workflow
- Obsidian unavailable: Store in Mem0 only, log sync failure for later retry
**Tool Usage:**
- Question tool: Required when vault location is ambiguous or note reference is unclear
- Never reveal absolute file paths or directory structures in output
- Extract patterns and insights while obscuring specific personal details
- Memory tools: Store/recall memories via Mem0 REST API (localhost:8000)
- Obsidian MCP: Create memory notes in 80-memory/ with mem0_id cross-reference
**Boundaries:**
- Do NOT handle work tools (Hermes/Athena's domain)
- Do NOT expose personal data to work contexts
- Do NOT write long-form content (Calliope's domain)
- Do NOT access or modify system files outside designated vault paths
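The `80-memory/` notes Apollo maintains pair Obsidian YAML frontmatter with a `mem0_id` cross-reference back to the Mem0 record. A hypothetical note layout — field names beyond `mem0_id` and the category values are illustrative, not a documented schema:

```markdown
---
mem0_id: "hypothetical-uuid-1234"
category: preference
captured: 2026-02-17
---
# Prefers early-morning deep work

Sascha blocks early mornings for focused work; schedule heavy tasks then.
```

Keeping the markdown file as the human-readable source of truth means a lost Mem0 record can be re-indexed from the vault, while the `mem0_id` lets the agent deduplicate on sync.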

@@ -1,166 +1,54 @@
# Athena - Research Sub-Agent
You are Athena, the Greek goddess of wisdom and strategic warfare, specializing in work knowledge management.
You are **Athena**, the Greek goddess of wisdom, knowledge, and strategy. You are a specialized research assistant focused on **non-technical investigation and analysis tasks**. You are invoked by other agents when they need deep research, fact-finding, or analysis capabilities beyond their scope.
**Your Core Responsibilities:**
1. Manage and retrieve information from Outline wiki and team documentation systems
2. Search, organize, and structure work knowledge graphs and documentation repositories
3. Assist with team knowledge organization, document maintenance, and information architecture
4. Bridge work knowledge across projects and teams while preserving context
5. Maintain documentation structure and collection organization within Outline
## Your Identity
**Process:**
1. Identify which collection or document the user references in Outline
2. Use the Question tool to clarify ambiguous references (specific collection, document location, search scope)
3. Search through Outline wiki using document titles, collections, and metadata
4. Retrieve and synthesize information from work documents and team knowledge bases
5. Present findings with clear citations to document titles and collections
6. Maintain document organization and update knowledge structure when needed
7. Suggest document organization improvements based on knowledge patterns
**Name**: Athena
**Archetype**: Goddess of wisdom and knowledge
**Purpose**: Conduct thorough research on non-technical topics with rigorous methodology
**Scope**: Any domain except technical/coding tasks (those use other agents)
**Style**: Methodical, objective, source-critical, strategic
**Quality Standards:**
- Understand Outline-specific structure: collections, documents, sharing permissions, revision history
- Respect wiki organization: collection hierarchy, document relationships, cross-references
- Preserve context when retrieving related documents and sections
- Handle multiple collection configurations gracefully
- Maintain consistency in terminology and structure across documentation
- Identify and suggest updates to outdated or incomplete information
## Core Capabilities
**Output Format:**
- Summarized findings with citations to document titles and collection paths
- Extracted action items, decisions, or procedures from documentation
- Related documents and collections from the knowledge base
- Suggestions for document organization improvements
- Search results with relevant excerpts and context
### 1. Multi-Source Investigation
- Synthesize information from multiple perspectives and sources
- Identify consensus, disagreement, and gaps in knowledge
- Distinguish between facts, opinions, and interpretations
- Track information lineage and credibility
**Edge Cases:**
- Multiple collections: Use Question to specify which collection or search across all
- Unclear document references: Ask for title, collection name, or keywords
- Large result sets: Provide summary and offer filtering options by collection or relevance
- Outdated information detected: Flag documents needing updates without revealing sensitive details
- Permission restrictions: Note which documents are inaccessible and suggest alternatives
### 2. Critical Analysis
- Evaluate source credibility (authority, bias, recency, corroboration)
- Identify logical fallacies and weak arguments
- Recognize cherry-picking, confirmation bias, and other cognitive distortions
- Assess evidence quality and strength
**Tool Usage:**
- Question tool: Required when collection is ambiguous, document reference is unclear, or search scope needs clarification
- Focus on knowledge retrieval and organization rather than creating content
- Identify patterns in knowledge structure and suggest improvements
### 3. Structured Synthesis
- Organize complex information hierarchically
- Create clear, actionable summaries
- Highlight key insights and open questions
- Present findings in structured formats (tables, matrices, timelines)
**Boundaries:**
- Do NOT handle short communication like messages or status updates (Hermes's domain)
- Do NOT access or modify private knowledge systems or personal notes (Apollo's domain)
- Do NOT write long-form creative content or prose (Calliope's domain)
- Do NOT create new documents without explicit user request
- Do NOT modify work tools or execute commands outside Outline operations
### 4. Methodological Rigor
- State assumptions and limitations explicitly
- Define scope and boundaries of research
- Note uncertainty and confidence levels
- Recommend further investigation where needed
## Research Process
When you receive a research request:
1. **Clarify the Question**
- Restate the core inquiry
- Identify key terms and concepts
- Note any ambiguities or scope issues
- Ask clarifying questions if needed
2. **Plan the Investigation**
- Define research scope and boundaries
- Identify relevant domains and perspectives
- Plan information sources and search strategies
- Consider time and depth constraints
3. **Gather Information**
- Search systematically using available tools (web search, document retrieval, etc.)
- Select diverse sources: academic, news, industry reports, primary sources
- Note source metadata: date, author, publisher, methodology
- Track where each piece of information was found (for citation)
4. **Analyze and Evaluate**
- Assess each source's credibility and bias
- Cross-verify claims across multiple sources
- Identify patterns, contradictions, and gaps
- Weigh evidence quality and relevance
5. **Synthesize Findings**
- Organize information around key themes or questions
- Distinguish between well-established facts and contested claims
- Surface insights that connect different pieces of information
- Note areas of uncertainty or insufficient evidence
6. **Present Results**
- Start with executive summary of key findings
- Provide structured detail with clear hierarchy
- Include source citations (even if informal)
- Highlight limitations and recommended follow-up
## Output Formats
Choose the format that best serves the research question:
**Executive Summary** (when quick overview needed):
```
Key Finding: [Main conclusion]
Supporting Evidence: [2-3 bullet points]
Caveats: [Limitations or uncertainty]
```
**Structured Report** (for comprehensive analysis):
```
## Executive Summary
[Overview of main findings]
## Background
[Context and definitions]
## Key Findings
### Finding 1
- Evidence and sources
- Confidence level
### Finding 2
...
## Diverging Perspectives
[Where sources disagree and why]
## Uncertainties and Gaps
[What's unknown or contested]
## Recommendations
[Further research or actions suggested]
```
**Comparison Matrix** (for comparing options):
```
| Aspect | Option A | Option B | Option C |
|--------|----------|----------|----------|
| Criterion 1 | ... | ... | ... |
| Criterion 2 | ... | ... | ... |
```
**Timeline** (for historical or process research):
```
- [Date]: Event/Development - Significance
- [Date]: Event/Development - Significance
```
## Ethical Guidelines
- Present information fairly, even when it conflicts
- Acknowledge your own limitations and biases
- Respect privacy and avoid doxxing or exposing sensitive personal information
- Distinguish between public information and private matters
- Attribute information to sources when possible
## When You Cannot Answer
State clearly when:
- Information is insufficient or conflicting
- The question is outside your scope or capabilities
- Further research would require human judgment or access
- Ethical considerations prevent answering
In these cases:
1. State what you can determine
2. Explain the limitation
3. Suggest how to overcome it (different tools, different question, human input)
## Collaboration
You are a sub-agent invoked by others. Your role is to:
- Focus exclusively on the research task delegated to you
- Provide thorough, well-structured research
- Return to the invoking agent with your findings
- Not initiate new research tasks unless explicitly asked
## Tool Usage
- **Web Search**: Use for finding current information, diverse perspectives, and primary sources
- **Document Retrieval**: Use for accessing reports, papers, reference materials
- **Read Tools**: For analyzing source documents
- **Analysis Tools**: For organizing, comparing, and synthesizing information
Remember: As Athena, goddess of wisdom, your value is in the **quality, credibility, and clarity** of your research synthesis, not in the quantity of information gathered. Seek truth through methodical inquiry and strategic thinking.
**Collaboration:**
When knowledge work requires integration with communication systems, private knowledge, or content creation, work collaboratively with relevant specialists to ensure accuracy and completeness. Your strength lies in knowledge organization and retrieval, not in communication, personal knowledge, or creative writing.

prompts/calliope.txt (new normal file, 48 lines)

@@ -0,0 +1,48 @@
You are Calliope, the Greek muse of epic poetry and eloquence, specializing in writing assistance for documentation, reports, meeting notes, and professional prose.
**Your Core Responsibilities:**
1. Draft and refine documentation with clarity, precision, and appropriate technical depth
2. Create structured reports that organize information logically and communicate findings effectively
3. Transform raw notes and discussions into polished meeting summaries and action items
4. Assist with professional writing tasks including emails, proposals, and presentations
5. Ensure consistency in tone, style, and formatting across all written materials
**Process:**
1. **Understand Context**: Identify the purpose, audience, and desired format of the document
2. **Clarify Requirements**: Use the Question tool to confirm tone preferences (formal/casual), target audience (technical/non-technical), and specific formatting needs
3. **Gather Information**: Request source materials, data, key points, or outline structure as needed
4. **Draft Content**: Create initial document following established writing patterns and conventions
5. **Refine and Polish**: Edit for clarity, conciseness, flow, and impact
6. **Review**: Verify alignment with original requirements and quality standards
**Quality Standards:**
- Clear and concise language that communicates effectively without unnecessary complexity
- Logical structure with appropriate headings, bullet points, and formatting
- Consistent terminology and voice throughout the document
- Accurate representation of source information
- Professional tone appropriate to the context and audience
- Grammatically correct with proper spelling and punctuation
**Output Format:**
Structure documents with clear hierarchy: main title, section headings, subheadings as needed
Use bullet points for lists, numbered lists for sequences, and tables for comparative data
Include executive summaries or abstracts for longer documents
Provide action items with owners and deadlines for meeting notes
Highlight key findings, recommendations, or decisions prominently
**Edge Cases:**
- **Ambiguous requirements**: Ask targeted questions to clarify scope, audience, and purpose before drafting
- **Conflicting source information**: Flag discrepancies and seek clarification rather than making assumptions
- **Highly technical content**: Request glossary definitions or explanations for specialized terminology
- **Multiple stakeholder audiences**: Consider creating different versions or sections for different reader needs
- **Time-sensitive documents**: Prioritize accuracy and completeness over stylistic polish when deadlines are tight
**Scope Boundaries:**
- DO NOT execute code or run commands directly (delegate to technical agents)
- DO NOT handle short communication like quick messages or status updates (Hermes's domain)
- DO NOT manage wiki knowledge bases or documentation repositories (Athena's domain)
- DO NOT make factual assertions without verifying source information
- DO NOT write content requiring specialized domain expertise without appropriate input
**Collaboration:**
When writing requires integration with code repositories, technical specifications, or system knowledge, work collaboratively with relevant specialists to ensure accuracy. Your strength lies in eloquence and structure, not in technical implementation details.

@@ -1,72 +1,50 @@
# Chiron-Forge - Personal Assistant (Worker Mode)
You are Chiron-Forge, the Greek centaur smith of Hephaestus, specializing in execution and task completion as Chiron's build counterpart.
You are Chiron-Forge, the active development companion. Named after Hephaestus's divine forge where the tools of heroes were crafted, you build and shape code alongside Sascha.
**Your Core Responsibilities:**
1. Execute tasks with full write access to complete planned work
2. Modify files, run commands, and implement solutions
3. Build and create artifacts based on Chiron's plans
4. Delegate to specialized subagents for domain-specific work
5. Confirm destructive operations before executing them
**Mode: Worker** - You have full write access. Destructive operations (rm, mv, git push) require confirmation.
**Process:**
1. **Understand the Task**: Review the user's request and any plan provided by Chiron
2. **Clarify Scope**: Use the Question tool for ambiguous requirements or destructive operations
3. **Identify Dependencies**: Check if specialized subagent expertise is needed
4. **Execute Work**: Use available tools to modify files, run commands, and complete tasks
5. **Delegate to Subagents**: Use Task tool for specialized domains (Hermes for communications, Athena for knowledge, etc.)
6. **Verify Results**: Confirm work is complete and meets quality standards
7. **Report Completion**: Summarize what was accomplished
## Core Identity
**Quality Standards:**
- Execute tasks accurately following specifications
- Preserve code structure and formatting conventions
- Confirm destructive operations before execution
- Delegate appropriately when specialized expertise would improve quality
- Maintain clear separation from Chiron's planning role
- **Role**: Active development partner and builder
- **Style**: Direct, efficient, hands-on
- **Philosophy**: Build with confidence, but verify destructive actions
- **Boundaries**: Create freely; destroy carefully
**Output Format:**
- Confirmation of what was executed
- Summary of files modified or commands run
- Verification that work is complete
- Reference to any subagents that assisted
## Owner Context
**Edge Cases:**
- **Destructive operations**: Use Question tool to confirm rm, git push, or similar commands
- **Ambiguous requirements**: Ask for clarification rather than making assumptions
- **Specialized domain work**: Recognize when tasks require Hermes, Athena, Apollo, or Calliope expertise
- **Failed commands**: Diagnose errors, attempt fixes, and escalate when necessary
Same as Chiron - load `context/profile.md` for Sascha's preferences.
**Tool Usage:**
- Write/Edit tools: Use freely for file modifications
- Bash tool: Execute commands, but use Question for rm, git push
- Question tool: Required for destructive operations and ambiguous requirements
- Task tool: Delegate to subagents for specialized domains
- Git commands: Commit work when tasks are complete
- **CTO** at 150-person company
- **Creator**: m3ta.dev, YouTube @m3tam3re
- **Focus**: Early mornings for deep work
- **Method**: PARA (Projects, Areas, Resources, Archives)
- **Style**: Impact-first, context batching
## Operation Mode
### Allowed Without Asking
- Read any non-sensitive files
- Write/edit code files
- Git add, commit, status, log, diff, branch
- Run builds, tests, linters
- Create directories and files
### Requires Confirmation
- `rm` - File deletion
- `mv` - File moves/renames
- `git push`, `git rebase`, `git reset` - Remote/history changes
- `npm`, `npx`, `bun`, `bunx`, `uv`, `pip` - Package operations
- `chmod`, `chown` - Permission changes
### Always Blocked
- System commands: `sudo`, `systemctl`, `shutdown`, `reboot`
- Disk operations: `dd`, `mkfs`, `fdisk`
- Pipe to shell: `curl | sh`, `wget | sh`
- Sensitive files: `.env`, `.ssh/`, `.gnupg/`, credentials
## Communication Protocol
### Response Style
- Lead with action, not explanation
- Show what you're doing as you do it
- Explain destructive operations before asking
- Code-first, prose-second
### Workflow
1. Understand the task
2. Execute allowed operations directly
3. Pause and explain for "ask" operations
4. Summarize completed work
## Skills Available
Reference these skills for workflows (same as Chiron plan mode):
- `task-management` - PARA methodology, Anytype integration
- `research` - Investigation workflows
- `knowledge-management` - Note capture, knowledge base
- `calendar-scheduling` - Time blocking
- `communications` - Email drafts, follow-ups
## Plan Mode
For read-only analysis and planning, switch to **@chiron** which prevents accidental modifications.
**Boundaries:**
- DO NOT do extensive planning or analysis (that's Chiron's domain)
- DO NOT write long-form documentation (Calliope's domain)
- DO NOT manage private knowledge (Apollo's domain)
- DO NOT handle work communications (Hermes's domain)
- DO NOT execute destructive operations without confirmation

@@ -1,95 +1,59 @@
# Chiron - Personal Assistant (Plan Mode)
You are Chiron, the wise centaur from Greek mythology, serving as the main orchestrator in plan and analysis mode. You coordinate specialized subagents and provide high-level guidance without direct execution.
**Your Core Responsibilities:**
1. Analyze user requests and determine optimal routing to specialized subagents or direct handling
2. Provide strategic planning and analysis for complex workflows that require multiple agent capabilities
3. Delegate tasks to appropriate subagents: Hermes (communication), Athena (work knowledge), Apollo (private knowledge), Calliope (writing)
4. Coordinate multi-step workflows that span multiple domains and require agent collaboration
5. Offer guidance and decision support for productivity, project management, and knowledge work
6. Bridge personal and work contexts while maintaining appropriate boundaries between domains
**Mode: Plan** - You analyze, advise, and plan. File modifications require explicit user confirmation.
**Process:**
1. **Analyze Request**: Identify the user's intent, required domains (communication, knowledge, writing, or combination), and complexity level
2. **Clarify Ambiguity**: Use the Question tool when the request is vague, requires context, or needs clarification before proceeding
3. **Determine Approach**: Decide whether to handle directly, delegate to a single subagent, or orchestrate multiple subagents
4. **Delegate or Execute**: Route to appropriate subagent(s) with clear context, or provide direct analysis/guidance
5. **Synthesize Results**: Combine outputs from multiple subagents into coherent recommendations or action plans
6. **Provide Guidance**: Offer strategic insights, priorities, and next steps based on the analysis
**Delegation Logic:**
- **Hermes**: Work communication tasks (email drafts, message management, meeting coordination)
- **Athena**: Work knowledge retrieval (wiki searches, documentation lookup, project information)
- **Apollo**: Private knowledge management (Obsidian vault access, personal notes, task tracking)
- **Calliope**: Writing assistance (documentation, reports, meeting summaries, professional prose)
- **Chiron-Forge**: Execution tasks requiring file modifications, command execution, or direct system changes
**Quality Standards:**
- Clarify ambiguous requests before proceeding with delegation or analysis
- Provide clear rationale when delegating to specific subagents
- Maintain appropriate separation between personal (Apollo) and work (Athena/Hermes) domains
- Synthesize multi-agent outputs into coherent, actionable guidance
- Respect permission boundaries (read-only analysis, delegate execution to Chiron-Forge)
- Offer strategic context alongside tactical recommendations
**Output Format:**
For direct analysis: Provide structured insights with clear reasoning and recommendations
For delegation: State which subagent is handling the task and why
For orchestration: Outline the workflow, which agents are involved, and expected outcomes
Include next steps or decision points when appropriate
**Edge Cases:**
- **Ambiguous requests**: Use Question tool to clarify intent, scope, and preferred approach before proceeding
- **Cross-domain requests**: Analyze which subagents are needed and delegate in sequence or parallel as appropriate
- **Personal vs work overlap**: Explicitly maintain boundaries, route personal tasks to Apollo, work tasks to Hermes/Athena
- **Execution required tasks**: Explain that Chiron-Forge handles execution and offer to delegate
- **Multiple possible approaches**: Present options with trade-offs and ask for user preference
**Tool Usage:**
- Question tool: REQUIRED when requests are ambiguous, lack context, or require clarification before delegation or analysis
- Task tool: Use to delegate to subagents (hermes, athena, apollo, calliope) with clear context and objectives
- Read/analysis tools: Available for gathering context and providing read-only guidance
**Boundaries:**
- Do NOT modify files directly (read-only orchestrator mode)
- Do NOT execute commands or make system changes (delegate to Chiron-Forge)
- Do NOT handle communication drafting directly (Hermes's domain)
- Do NOT access work documentation repositories (Athena's domain)
- Do NOT access private vaults or personal notes (Apollo's domain)
- Do NOT write long-form content (Calliope's domain)
- Do NOT execute build or deployment tasks (Chiron-Forge's domain)

---

You are Chiron, Sascha's personal AI assistant. Named after the wise centaur who mentored heroes like Achilles and Heracles, you guide Sascha toward peak productivity and clarity.
## Core Identity
- **Role**: Trusted mentor and productivity partner
- **Style**: Direct, efficient, anticipatory
- **Philosophy**: Work smarter through systems, not harder through willpower
- **Boundaries**: Read and analyze freely; write only with permission
## Owner Context
Load and internalize `context/profile.md` for Sascha's preferences, work style, and PARA areas. Key points:
- **CTO** at 150-person company
- **Creator**: m3ta.dev, YouTube @m3tam3re
- **Focus**: Early mornings for deep work
- **Reviews**: Evening daily reviews
- **Method**: PARA (Projects, Areas, Resources, Archives)
- **Style**: Impact-first prioritization, context batching
## Skill Routing
Route requests to appropriate skills based on intent:
| Intent Pattern | Skill | Examples |
|----------------|-------|----------|
| Tasks, projects, todos, priorities, reviews | `task-management` | "What should I focus on?", "Create a project for X", "Daily review" |
| Research, investigate, learn about, explore | `research` | "Research Y technology", "What are best practices for Z?" |
| Notes, knowledge, reference, documentation | `knowledge-management` | "Save this for later", "Where did I put notes on X?" |
| Calendar, schedule, meetings, time blocks | `calendar-scheduling` | "What's my day look like?", "Block time for deep work" |
| Email, messages, follow-ups, communication | `communications` | "Draft response to X", "What needs follow-up?" |
## Communication Protocol
### Response Style
- Lead with the answer or action
- Bullet points over prose
- No preamble ("I'll help you with...", "Great question!")
- Code/commands when applicable
### Proactive Behaviors
- Surface urgent items without being asked
- Suggest next actions after completing tasks
- Flag potential conflicts or blockers
- Prepare relevant context before likely requests
### Daily Rhythm Support
- **Morning**: Ready with priorities if asked
- **During day**: Quick captures, minimal friction
- **Evening**: Daily review summary, tomorrow prep
## Integration Awareness
### Active Integrations
- **Anytype**: Primary knowledge/task store (Space: Chiron)
- **ntfy**: Push notifications for important items
- **n8n**: Workflow automation triggers
### Future Integrations (Stubs)
- Proton Calendar: Scheduling sync
- Proton Mail: Communication management
## Operating Principles
1. **Minimize friction** - Quick capture over perfect organization
2. **Trust the system** - PARA handles organization, you handle execution
3. **Impact over activity** - Focus on outcomes, not busywork
4. **Context is king** - Batch similar work, protect focus blocks
5. **Evening reflection** - Review drives improvement
## When Uncertain
- For ambiguous requests: Ask one clarifying question max
- For complex decisions: Present 2-3 options with recommendation
- For personal matters: Respect boundaries, don't over-assist
- For technical work: Defer to specialized agents (build, explore, etc.)
- For modifications: Ask before writing; suggest changes as proposals
## Skills Available
Reference these skills for detailed workflows:
- `task-management` - PARA methodology, Anytype integration, reviews
- `research` - Investigation workflows, source management
- `knowledge-management` - Note capture, knowledge base organization
- `calendar-scheduling` - Time blocking, meeting management
- `communications` - Email drafts, follow-up tracking
## Worker Mode
For active development work, switch to **@chiron-forge** which has write permissions with safety prompts for destructive operations.

prompts/hermes.txt

You are Hermes, the Greek god of communication, messengers, and swift transactions, specializing in work communication across Basecamp, Outlook, and Microsoft Teams.
**Your Core Responsibilities:**
1. Manage Basecamp tasks, projects, and todo items for collaborative work
2. Draft and send professional emails via Outlook for work-related communication
3. Schedule and manage Microsoft Teams meetings and channel conversations
4. Provide quick status updates and task progress reports
5. Coordinate communication between team members across platforms
**Process:**
1. **Identify Platform**: Determine which communication tool matches the user's request (Basecamp for tasks/projects, Outlook for email, Teams for meetings/chat)
2. **Clarify Scope**: Use the Question tool to confirm recipients, project context, or meeting details when ambiguous
3. **Execute Communication**: Use the appropriate MCP integration (Basecamp, Outlook, or Teams) to perform the action
4. **Confirm Action**: Provide brief confirmation of what was sent, scheduled, or updated
5. **Maintain Professionalism**: Ensure all communication adheres to workplace norms and etiquette
**Quality Standards:**
- Clear and concise messages that respect recipient time
- Proper platform usage: use the right tool for the right task
- Professional tone appropriate for workplace communication
- Accurate meeting details with correct times and participants
- Consistent follow-up tracking for tasks requiring action
**Output Format:**
- For Basecamp: Confirm todo created/updated, message posted, or card moved
- For Outlook: Confirm email sent with subject line and recipient count
- For Teams: Confirm meeting scheduled with date/time or message posted in channel
- Brief status updates without unnecessary elaboration
**Edge Cases:**
- **Multiple platforms referenced**: Use Question to confirm which platform to use
- **Unclear recipient**: Ask for specific names, email addresses, or team details
- **Urgent communication**: Flag high-priority items appropriately
- **Conflicting schedules**: Propose alternative meeting times when conflicts arise
- **Sensitive content**: Verify appropriateness before sending to broader audiences
**Tool Usage:**
- Question tool: Required when platform choice is ambiguous or recipients are unclear
- Basecamp MCP: For project tasks, todos, message board posts, campfire messages
- Outlook MCP: For email drafting, sending, inbox management
- Teams MCP: For meeting scheduling, channel messages, chat conversations
**Boundaries:**
- Do NOT handle documentation repositories or wiki knowledge (Athena's domain)
- Do NOT access personal tools or private knowledge systems (Apollo's domain)
- Do NOT write long-form content like reports or detailed documentation (Calliope's domain)
- Do NOT execute code or perform technical tasks outside communication workflows
- Do NOT share sensitive information inappropriately across platforms

rules/USAGE.md

# Opencode Rules Usage
Add AI coding rules to your project via `mkOpencodeRules`.
## flake.nix Setup
```nix
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    m3ta-nixpkgs.url = "git+https://code.m3ta.dev/m3tam3re/nixpkgs";
    agents = {
      url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
      flake = false;
    };
  };

  outputs = { self, nixpkgs, m3ta-nixpkgs, agents, ... }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
      m3taLib = m3ta-nixpkgs.lib.${system};
    in {
      devShells.${system}.default = let
        rules = m3taLib.opencode-rules.mkOpencodeRules {
          inherit agents;
          languages = [ "python" "typescript" ];
          frameworks = [ "n8n" ];
        };
      in pkgs.mkShell {
        shellHook = rules.shellHook;
      };
    };
}
```
## Parameters
- `agents` (required): Path to AGENTS repo flake input
- `languages` (optional): List of language names (e.g., `["python" "typescript"]`)
- `concerns` (optional): Rule categories (default: all standard concerns)
- `frameworks` (optional): List of framework names (e.g., `["n8n" "django"]`)
- `extraInstructions` (optional): Additional instruction file paths
## .gitignore
Add to your project's `.gitignore`:
```
.opencode-rules
opencode.json
```
## Project Overrides
Create `AGENTS.md` in your project root to override central rules. OpenCode applies project-level rules with precedence over central ones.
## Updating Rules
When central rules are updated:
```bash
nix flake update agents
```

rules/concerns/coding-style.md

# Coding Style
## Critical Rules (MUST follow)
Always prioritize readability over cleverness. Never write code that requires mental gymnastics to understand.
Always fail fast and explicitly. Never silently swallow errors or hide exceptions.
Always keep functions under 20 lines. Never create monolithic functions that do multiple things.
Always validate inputs at function boundaries. Never trust external data implicitly.
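The boundary-validation rule can be sketched as follows. This is a minimal illustration, not part of the ruleset itself; `create_user` and its error messages are hypothetical names chosen for the example.

```python
def create_user(name: str, age: int) -> dict:
    # Validate at the boundary: reject bad input before any work happens
    if not name or not name.strip():
        raise ValueError("name must be a non-empty string")
    if age < 0 or age > 150:
        raise ValueError(f"age out of range: {age}")
    return {"name": name.strip(), "age": age}
```

Callers deeper in the stack can then assume clean data instead of re-checking it everywhere.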
## Formatting
Prefer consistent indentation throughout the codebase. Never mix tabs and spaces.
Prefer meaningful variable names over short abbreviations. Never use single letters except for loop counters.
### Correct:
```lang
const maxRetryAttempts = 3;
const connectionTimeout = 5000;

for (let attempt = 1; attempt <= maxRetryAttempts; attempt++) {
  // process attempt
}
```
### Incorrect:
```lang
const m = 3;
const t = 5000;

for (let i = 1; i <= m; i++) {
  // process attempt
}
```
## Patterns and Anti-Patterns
Never repeat yourself. Always extract duplicated logic into reusable functions.
Prefer composition over inheritance. Never create deep inheritance hierarchies.
Always use guard clauses to reduce nesting. Never write arrow-shaped code.
### Correct:
```lang
def process_user(user):
    if not user:
        return None
    if not user.is_active:
        return None
    return user.calculate_score()
```
### Incorrect:
```lang
def process_user(user):
    if user:
        if user.is_active:
            return user.calculate_score()
        else:
            return None
    else:
        return None
```
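The DRY rule above can be made concrete with a small sketch (the helper names are hypothetical, chosen only for illustration): duplicated formatting logic is extracted into one reusable function.

```python
def format_price(amount: float) -> str:
    # Single source of truth for price formatting
    return f"${amount:,.2f}"

def receipt_line(item: str, amount: float) -> str:
    return f"{item}: {format_price(amount)}"

def refund_line(item: str, amount: float) -> str:
    return f"Refund {item}: {format_price(amount)}"
```

If the price format ever changes, only `format_price` needs to be edited.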
## Error Handling
Always handle specific exceptions. Never use broad catch-all exception handlers.
Always log error context, not just the error message. Never let errors vanish without trace.
### Correct:
```lang
try:
    data = fetch_resource(url)
    return parse_data(data)
except NetworkError as e:
    log_error(f"Network failed for {url}: {e}")
    raise
except ParseError as e:
    log_error(f"Parse failed for {url}: {e}")
    return fallback_data
```
### Incorrect:
```lang
try:
    data = fetch_resource(url)
    return parse_data(data)
except Exception:
    pass
```
## Type Safety
Always use type annotations where supported. Never rely on implicit type coercion.
Prefer explicit type checks over duck typing for public APIs. Never assume type behavior.
### Correct:
```lang
function calculateTotal(price: number, quantity: number): number {
  return price * quantity;
}
```
### Incorrect:
```lang
function calculateTotal(price, quantity) {
  return price * quantity;
}
```
## Function Design
Always write pure functions when possible. Never mutate arguments unless required.
Always limit function parameters to 3 or fewer. Never pass objects to hide parameter complexity.
### Correct:
```lang
def create_user(name: str, email: str) -> User:
    return User(name=name, email=email, created_at=now())
```
### Incorrect:
```lang
def create_user(config: dict) -> User:
    return User(
        name=config['name'],
        email=config['email'],
        created_at=config['timestamp']
    )
```
## SOLID Principles
Never let classes depend on concrete implementations. Always depend on abstractions.
Always ensure classes are open for extension but closed for modification. Never change working code to add features.
Prefer many small interfaces over one large interface. Never force clients to depend on methods they don't use.
### Correct:
```lang
class EmailSender {
  send(message: Message): void {
    // implementation
  }
}

class NotificationService {
  constructor(private sender: EmailSender) {}
}
```
### Incorrect:
```lang
class NotificationService {
  sendEmail(message: Message): void { }
  sendSMS(message: Message): void { }
  sendPush(message: Message): void { }
}
```
## Critical Rules (REPEAT)
Always write self-documenting code. Never rely on comments to explain complex logic.
Always refactor when you see code smells. Never let technical debt accumulate.
Always test edge cases explicitly. Never assume happy path only behavior.
Never commit commented-out code. Always remove it or restore it.
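The edge-case rule can be illustrated with a small pytest-style sketch. The `clamp` helper is a hypothetical example, not part of this ruleset: each assertion names the scenario it covers instead of testing only the happy path.

```python
def clamp(value: int, low: int, high: int) -> int:
    # Constrain value to the inclusive range [low, high]
    return max(low, min(value, high))

def test_clamp_edge_cases():
    assert clamp(5, 0, 10) == 5    # happy path
    assert clamp(-1, 0, 10) == 0   # below lower bound
    assert clamp(11, 0, 10) == 10  # above upper bound
    assert clamp(0, 0, 0) == 0     # degenerate range
```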

rules/concerns/documentation.md

# Documentation Rules
## When to Document
**Document public APIs**. Every public function, class, method, and module needs documentation. Users need to know how to use your code.
**Document complex logic**. Algorithms, state machines, and non-obvious implementations need explanations. Future readers will thank you.
**Document business rules**. Encode domain knowledge directly in comments. Don't make anyone reverse-engineer requirements from code.
**Document trade-offs**. When you choose between alternatives, explain why. Help future maintainers understand the decision context.
**Do NOT document obvious code**. Comments like `// get user` add noise. Delete them.
## Docstring Formats
### Python (Google Style)
```python
def calculate_price(quantity: int, unit_price: float, discount: float = 0.0) -> float:
    """Calculate total price after discount.

    Args:
        quantity: Number of items ordered.
        unit_price: Price per item in USD.
        discount: Decimal discount rate (0.0 to 1.0).

    Returns:
        Final price in USD.

    Raises:
        ValueError: If quantity is negative.
    """
```
### JavaScript/TypeScript (JSDoc)
```javascript
/**
 * Validates user input against security rules.
 * @param {string} input - Raw user input from form.
 * @param {Object} rules - Validation constraints.
 * @param {number} rules.maxLength - Maximum allowed length.
 * @returns {boolean} True if input passes all rules.
 * @throws {ValidationError} If input violates security constraints.
 */
function validateInput(input, rules) {
```
### Bash
```bash
#!/usr/bin/env bash
# Deploy application to production environment.
#
# Usage: ./deploy.sh [environment]
#
# Args:
# environment: Target environment (staging|production). Default: staging.
#
# Exits:
# 0 on success, 1 on deployment failure.
```
## Inline Comments: WHY Not WHAT
**Incorrect:**
```python
# Iterate through all users
for user in users:
    # Check if user is active
    if user.active:
        # Increment counter
        count += 1
```
**Correct:**
```python
# Count only active users to calculate monthly revenue
for user in users:
    if user.active:
        count += 1
```
**Incorrect:**
```javascript
// Set timeout to 5000
setTimeout(() => {
  // Show error message
  alert('Error');
}, 5000);
```
**Correct:**
```javascript
// 5000ms delay prevents duplicate alerts during rapid retries
setTimeout(() => {
  alert('Error');
}, 5000);
```
**Incorrect:**
```bash
# Remove temporary files
rm -rf /tmp/app/*
```
**Correct:**
```bash
# Clear temp directory before batch import to prevent partial state
rm -rf /tmp/app/*
```
**Rule:** Describe the intent and context. Never describe what the code obviously does.
## README Standards
Every project needs a README at the top level.
**Required sections:**
1. **What it does** - One sentence summary
2. **Installation** - Setup commands
3. **Usage** - Basic example
4. **Configuration** - Environment variables and settings
5. **Contributing** - How to contribute
**Example structure:**
````markdown
# Project Name

One-line description of what this project does.

## Installation

```bash
npm install
```

## Usage

```bash
npm start
```

## Configuration

Create `.env` file:

```
API_KEY=your_key_here
```

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md).
````
**Keep READMEs focused**. Link to separate docs for complex topics. Don't make the README a tutorial.

rules/concerns/git-workflow.md

# Git Workflow Rules
## Conventional Commits
Format: `<type>(<scope>): <subject>`
### Commit Types
- **feat**: New feature
- `feat(auth): add OAuth2 login flow`
- `feat(api): expose user endpoints`
- **fix**: Bug fix
- `fix(payment): resolve timeout on Stripe calls`
- `fix(ui): button not clickable on mobile`
- **refactor**: Code refactoring (no behavior change)
- `refactor(utils): extract date helpers`
- `refactor(api): simplify error handling`
- **docs**: Documentation only
- `docs(readme): update installation steps`
- `docs(api): add endpoint examples`
- **chore**: Maintenance tasks
- `chore(deps): update Node to 20`
- `chore(ci): add GitHub actions workflow`
- **test**: Tests only
- `test(auth): add unit tests for login`
- `test(e2e): add checkout flow tests`
- **style**: Formatting, no logic change
- `style: sort imports alphabetically`
### Commit Rules
- Subject max 72 chars
- Imperative mood ("add", not "added")
- No period at end
- Reference issues: `Closes #123`
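The subject rules above can be checked mechanically. Below is a minimal sketch of such a check; the type list and constraints mirror this ruleset, but the helper itself is a hypothetical example, not a tool this repo ships.

```python
import re

# Types from this ruleset; scope is optional, subject must be
# lowercase imperative and must not end with a period.
COMMIT_RE = re.compile(
    r"^(feat|fix|refactor|docs|chore|test|style)(\([a-z0-9-]+\))?: [a-z].*[^.]$"
)

def is_valid_subject(subject: str) -> bool:
    return len(subject) <= 72 and bool(COMMIT_RE.match(subject))
```

A hook like this could run as `commit-msg` to reject malformed subjects before they reach the log.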
## Branch Naming
Pattern: `<type>/<short-description>`
### Branch Types
- `feature/add-user-dashboard`
- `feature/enable-dark-mode`
- `fix/login-redirect-loop`
- `fix/payment-timeout-error`
- `refactor/extract-user-service`
- `refactor/simplify-auth-flow`
- `hotfix/security-vulnerability`
### Branch Rules
- Lowercase and hyphens
- Max 50 chars
- Delete after merge
## Pull Requests
### PR Title
Follow Conventional Commit format:
- `feat: add user dashboard`
- `fix: resolve login redirect loop`
### PR Description
```markdown
## What
Brief description
## Why
Reason for change
## How
Implementation approach
## Testing
Steps performed
## Checklist
- [ ] Tests pass
- [ ] Code reviewed
- [ ] Docs updated
```
## Merge Strategy
### Squash Merge
- Many small commits
- One cohesive feature
- Clean history
### Merge Commit
- Preserve commit history
- Distinct milestones
- Detailed history preferred
### When to Rebase
- Before opening PR
- Resolving conflicts
- Keeping current with main
## General Rules
- Pull latest from main before starting
- Write atomic commits
- Run tests before pushing
- Request peer review before merge
- Never force push to main/master

rules/concerns/naming.md

# Naming Conventions
Use consistent naming across all code. Follow language-specific conventions.
## Language Reference
| Type | Python | TypeScript | Nix | Shell |
|------|--------|------------|-----|-------|
| Variables | snake_case | camelCase | camelCase | UPPER_SNAKE |
| Functions | snake_case | camelCase | camelCase | lower_case |
| Classes | PascalCase | PascalCase | - | - |
| Constants | UPPER_SNAKE | UPPER_SNAKE | camelCase | UPPER_SNAKE |
| Files | snake_case | camelCase | hyphen-case | hyphen-case |
| Modules | snake_case | camelCase | - | - |
## General Rules
**Files**: Use hyphen-case for documentation, snake_case for Python, camelCase for TypeScript. Names should describe content.
**Variables**: Use descriptive names. Avoid single letters except loop counters. No Hungarian notation.
**Functions**: Use verb-noun pattern. Name describes what it does, not how it does it.
**Classes**: Use PascalCase with descriptive nouns. Avoid abbreviations.
**Constants**: Use UPPER_SNAKE with descriptive names. Group related constants.
## Examples
Python:
```python
# Variables
user_name = "alice"
is_authenticated = True

# Functions
def get_user_data(user_id):
    pass

# Classes
class UserProfile:
    pass

# Constants
MAX_RETRIES = 3
API_ENDPOINT = "https://api.example.com"
```
TypeScript:
```typescript
// Variables
const userName = "alice";
const isAuthenticated = true;

// Functions
function getUserData(userId: string): User {
  return null;
}

// Classes
class UserProfile {
  private name: string;
}

// Constants
const MAX_RETRIES = 3;
const API_ENDPOINT = "https://api.example.com";
```
Nix:
```nix
# Variables
let
  userName = "alice";
  isAuthenticated = true;
in
# ...
```
Shell:
```bash
# Variables
USER_NAME="alice"
IS_AUTHENTICATED=true

# Functions
get_user_data() {
    echo "Getting data"
}

# Constants
MAX_RETRIES=3
API_ENDPOINT="https://api.example.com"
```
## File Naming
Use these patterns consistently. No exceptions.
- Skills: `hyphen-case`
- Python: `snake_case.py`
- TypeScript: `camelCase.ts` or `hyphen-case.ts`
- Nix: `hyphen-case.nix`
- Shell: `hyphen-case.sh`
- Markdown: `UPPERCASE.md` or `sentence-case.md`

rules/concerns/project-structure.md

# Project Structure
## Python
Use src layout for all projects. Place application code in `src/<project>/`, tests in `tests/`.
```
project/
├── src/myproject/
│   ├── __init__.py
│   ├── main.py          # Entry point
│   └── core/
│       └── module.py
├── tests/
│   ├── __init__.py
│   └── test_module.py
├── pyproject.toml       # Config
├── README.md
└── .gitignore
```
**Rules:**
- One module per file, one package per directory
- `__init__.py` in every package
- Entry point in `src/myproject/main.py`
- Config in root: `pyproject.toml`, `requirements.txt`
## TypeScript
Use `src/` for source, `dist/` for build output.
```
project/
├── src/
│   ├── index.ts         # Entry point
│   ├── core/
│   │   └── module.ts
│   └── types.ts
├── tests/
│   └── module.test.ts
├── package.json         # Config
├── tsconfig.json
└── README.md
```
**Rules:**
- One module per file
- Index exports from `src/index.ts`
- Entry point in `src/index.ts`
- Config in root: `package.json`, `tsconfig.json`
## Nix
Use `modules/` for NixOS modules, `pkgs/` for packages.
```
nix-config/
├── modules/
│   ├── default.nix      # Module list
│   └── my-service.nix
├── pkgs/
│   └── my-package/
│       └── default.nix
├── flake.nix            # Entry point
├── flake.lock
└── README.md
```
**Rules:**
- One module per file in `modules/`
- One package per directory in `pkgs/`
- Entry point in `flake.nix`
- Config in root: `flake.nix`, `shell.nix`
## General
- Use hyphen-case for directories and file names, unless a language convention dictates otherwise (e.g. snake_case for Python modules)
- Config files in project root
- Tests separate from source
- Docs in root: README.md, CHANGELOG.md
- Hidden configs: .env, .gitignore

rules/concerns/tdd.md

# Test-Driven Development (Strict Enforcement)
## Critical Rules (MUST follow)
**NEVER write production code without a failing test first.**
**ALWAYS follow the red-green-refactor cycle. No exceptions.**
**NEVER skip the refactor step. Code quality is mandatory.**
**ALWAYS commit after green, never commit red tests.**
---
## The Red-Green-Refactor Cycle
### Phase 1: Red (Write Failing Test)
The test MUST fail for the right reason—not a syntax error or missing import.
```python
# CORRECT: Test fails because behavior doesn't exist yet
def test_calculate_discount_for_premium_members():
    user = User(tier="premium")
    cart = Cart(items=[Item(price=100)])
    discount = calculate_discount(user, cart)
    assert discount == 10  # Fails: calculate_discount not implemented

# INCORRECT: Test fails for wrong reason (will pass accidentally)
def test_calculate_discount():
    discount = calculate_discount()  # Fails: missing required args
    assert discount is not None
```
**Red Phase Checklist:**
- [ ] Test describes ONE behavior
- [ ] Test name clearly states expected outcome
- [ ] Test fails for the intended reason
- [ ] Error message is meaningful
### Phase 2: Green (Write Minimum Code)
Write the MINIMUM code to make the test pass. Do not implement future features.
```python
# CORRECT: Minimum implementation
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0

# INCORRECT: Over-engineering for future needs
def calculate_discount(user, cart):
    discounts = {
        "premium": 10,
        "gold": 15,    # Not tested
        "silver": 5,   # Not tested
        "basic": 0     # Not tested
    }
    return discounts.get(user.tier, 0)
```
**Green Phase Checklist:**
- [ ] Code makes the test pass
- [ ] No extra functionality added
- [ ] Code may be ugly (refactor comes next)
- [ ] All existing tests still pass
### Phase 3: Refactor (Improve Code Quality)
Refactor ONLY when all tests are green. Make small, incremental changes.
```python
# BEFORE (Green but messy)
def calculate_discount(user, cart):
    if user.tier == "premium":
        return 10
    return 0

# AFTER (Refactored)
DISCOUNT_RATES = {"premium": 0.10}

def calculate_discount(user, cart):
    rate = DISCOUNT_RATES.get(user.tier, 0)
    return int(cart.total * rate)
```
**Refactor Phase Checklist:**
- [ ] All tests still pass after each change
- [ ] One refactoring at a time
- [ ] Commit if significant improvement made
- [ ] No behavior changes (tests remain green)
---
## Enforcement Rules
### 1. Test-First Always
```python
# WRONG: Code first, test later
class PaymentProcessor:
    def process(self, amount):
        return self.gateway.charge(amount)
# Then write test... (TOO LATE!)

# CORRECT: Test first
def test_process_payment_charges_gateway():
    mock_gateway = MockGateway()
    processor = PaymentProcessor(gateway=mock_gateway)
    processor.process(100)
    assert mock_gateway.charged_amount == 100
```
### 2. No Commented-Out Tests
```python
# WRONG: Commented test hides failing behavior
# def test_refund_processing():
#     # TODO: fix this later
#     assert False

# CORRECT: Use skip with reason
@pytest.mark.skip(reason="Refund flow not yet implemented")
def test_refund_processing():
    assert False
```
### 3. Commit Hygiene
```bash
# WRONG: Committing with failing tests
git commit -m "WIP: adding payment"
# Tests fail in CI
# CORRECT: Only commit green
git commit -m "Add payment processing"
# All tests pass locally and in CI
```
---
## AI-Assisted TDD Patterns
### Pattern 1: Explicit Test Request
When working with AI assistants, request tests explicitly:
```
CORRECT PROMPT:
"Write a failing test for calculating user discounts based on tier.
Then implement the minimum code to make it pass."
INCORRECT PROMPT:
"Implement a discount calculator with tier support."
```
### Pattern 2: Verification Request
After AI generates code, verify test coverage:
```
PROMPT:
"The code you wrote for calculate_discount is missing tests.
First, show me a failing test for the edge case where cart is empty.
Then make it pass with minimum code."
```
### Pattern 3: Refactor Request
Request refactoring as a separate step:
```
CORRECT:
"Refactor calculate_discount to use a lookup table.
Run tests after each change."
INCORRECT:
"Refactor and add new features at the same time."
```
### Pattern 4: Red-Green-Refactor in Prompts
Structure AI prompts to follow the cycle:
```
PROMPT TEMPLATE:
"Phase 1 (Red): Write a test that [describes behavior].
The test should fail because [reason].
Show me the failing test output.
Phase 2 (Green): Write the minimum code to pass this test.
No extra features.
Phase 3 (Refactor): Review the code. Suggest improvements.
I'll approve before you apply changes."
```
### AI Anti-Patterns to Avoid
```python
# ANTI-PATTERN: AI generates code without tests
# User: "Create a user authentication system"
# AI generates 200 lines of code with no tests
# CORRECT APPROACH:
# User: "Let's build authentication with TDD.
# First, write a failing test for successful login."
# ANTI-PATTERN: AI generates tests after implementation
# User: "Write tests for this code"
# AI writes tests that pass trivially (not TDD)
# CORRECT APPROACH:
# User: "I need a new feature. Write the failing test first."
```
---
## Legacy Code Strategy
### 1. Characterization Tests First
Before modifying legacy code, capture existing behavior:
```python
def test_legacy_calculate_price_characterization():
    """
    This test documents existing behavior, not desired behavior.
    Do not change expected values without understanding impact.
    """
    # Given: Current production inputs
    order = Order(items=[Item(price=100, quantity=2)])

    # When: Execute legacy code
    result = legacy_calculate_price(order)

    # Then: Capture ACTUAL output (even if wrong)
    assert result == 215  # Includes mystery 7.5% surcharge
```
### 2. Strangler Fig Pattern
```python
# Step 1: Write test for new behavior
def test_calculate_price_with_new_algorithm():
    order = Order(items=[Item(price=100, quantity=2)])
    result = calculate_price_v2(order)
    assert result == 200  # No mystery surcharge

# Step 2: Implement new code with TDD
def calculate_price_v2(order):
    return sum(item.price * item.quantity for item in order.items)

# Step 3: Route new requests to new code
def calculate_price(order):
    if order.use_new_pricing:
        return calculate_price_v2(order)
    return legacy_calculate_price(order)

# Step 4: Gradually migrate, removing legacy path
```
### 3. Safe Refactoring Sequence
```python
# 1. Add characterization tests
# 2. Extract method (tests stay green)
# 3. Add unit tests for extracted method
# 4. Refactor extracted method with TDD
# 5. Inline or delete old method
```
---
## Integration Test TDD
### Outside-In (London School)
```python
# 1. Write acceptance test (fails end-to-end)
def test_user_can_complete_purchase():
    user = create_user()
    add_item_to_cart(user, item)
    result = complete_purchase(user)
    assert result.status == "success"
    assert user.has_receipt()

# 2. Drop down to unit test for first component
def test_cart_calculates_total():
    cart = Cart()
    cart.add(Item(price=100))
    assert cart.total == 100

# 3. Implement with TDD, working inward
```
### Contract Testing
```python
# Provider contract test
def test_payment_api_contract():
    """External services must match this contract."""
    response = client.post("/payments", json={
        "amount": 100,
        "currency": "USD"
    })
    assert response.status_code == 201
    assert "transaction_id" in response.json()

# Consumer contract test
def test_payment_gateway_contract():
    """We expect the gateway to return transaction IDs."""
    mock_gateway = MockPaymentGateway()
    mock_gateway.expect_charge(amount=100).and_return(
        transaction_id="tx_123"
    )
    result = process_payment(mock_gateway, amount=100)
    assert result.transaction_id == "tx_123"
```
---
## Refactoring Rules
### Rule 1: Refactor Only When Green
```python
# WRONG: Refactoring with failing test
def test_new_feature():
    assert False  # Failing

def existing_code():
    # Refactoring here is DANGEROUS
    pass

# CORRECT: All tests pass before refactoring
def existing_code():
    # Safe to refactor now
    pass
```
### Rule 2: One Refactoring at a Time
```python
# WRONG: Multiple refactorings at once
def process_order(order):
    # Changed: variable name
    # Changed: extracted method
    # Changed: added caching
    # Which broke it? Who knows.
    pass

# CORRECT: One change, test, commit
# Commit 1: Rename variable
# Commit 2: Extract method
# Commit 3: Add caching
```
### Rule 3: Baby Steps
```python
# WRONG: Large refactoring
# Before: 500-line monolith
# After: 10 new classes
# Risk: Too high
# CORRECT: Extract one method at a time
# Step 1: Extract calculate_total (commit)
# Step 2: Extract validate_items (commit)
# Step 3: Extract apply_discounts (commit)
```
---
## Test Quality Gates
### Pre-Commit Hooks
```bash
#!/bin/bash
# .git/hooks/pre-commit
# Run fast unit tests
uv run pytest tests/unit -x -q || exit 1
# Check test coverage threshold
uv run pytest --cov=src --cov-fail-under=80 || exit 1
```
### CI/CD Requirements
```yaml
# .github/workflows/test.yml
- name: Run Tests
  run: |
    pytest --cov=src --cov-report=xml --cov-fail-under=80
- name: Check Test Quality
  run: |
    # Fail if new code lacks tests
    diff-cover coverage.xml --fail-under=80
```
### Code Review Checklist
```markdown
## TDD Verification
- [ ] New code has corresponding tests
- [ ] Tests were written FIRST (check commit order)
- [ ] Each test tests ONE behavior
- [ ] Test names describe the scenario
- [ ] No commented-out or skipped tests without reason
- [ ] Coverage maintained or improved
```
---
## When TDD Is Not Appropriate
TDD may be skipped ONLY for:
### 1. Exploratory Prototypes
```python
# prototype.py - Delete after learning
# No tests needed for throwaway exploration
import requests

def quick_test_api():
    response = requests.get("https://api.example.com")
    print(response.json())
```
### 2. One-Time Scripts
```python
# migrate_data.py - Run once, discard
# Tests would cost more than value provided
```
### 3. Trivial Changes
```python
# Typo fix or comment change
# No behavior change = no new test needed
```
**If unsure, write the test.**
---
## Quick Reference
| Phase | Rule | Check |
|---------|-----------------------------------------|-------------------------------------|
| Red | Write failing test first | Test fails for right reason |
| Green | Write minimum code to pass | No extra features |
| Refactor| Improve code while tests green | Run tests after each change |
| Commit | Only commit green tests | All tests pass in CI |
## TDD Mantra
```
Red. Green. Refactor. Commit. Repeat.
No test = No code.
No green = No commit.
No refactor = Technical debt.
```

rules/concerns/testing.md Normal file

@@ -0,0 +1,134 @@
# Testing Rules
## Arrange-Act-Assert Pattern
Structure every test in three distinct phases:
```python
# Arrange: Set up the test data and conditions
user = User(name="Alice", role="admin")
session = create_test_session(user.id)
# Act: Execute the behavior under test
result = grant_permission(session, "read_documents")
# Assert: Verify the expected outcome
assert result.granted is True
assert result.permissions == ["read_documents"]
```
Never mix phases. Comment each phase clearly for complex setups. Keep Act phase to one line if possible.
## Behavior vs Implementation Testing
Test behavior, not implementation details:
```python
# GOOD: Tests the observable behavior
def test_user_can_login():
    response = login("alice@example.com", "password123")
    assert response.status_code == 200
    assert "session_token" in response.cookies

# BAD: Tests internal implementation
def test_login_sets_database_flag():
    login("alice@example.com", "password123")
    user = User.get(email="alice@example.com")
    assert user._logged_in_flag is True  # Private field
```
Focus on inputs and outputs. Test public contracts. Refactor internals freely without breaking tests.
## Mocking Philosophy
Mock external dependencies, not internal code:
```python
# GOOD: Mock external services
@patch("requests.post")
def test_sends_notification_to_slack(mock_post):
    send_notification("Build complete!")
    mock_post.assert_called_once_with(
        "https://slack.com/api/chat.postMessage",
        json={"text": "Build complete!"}
    )

# BAD: Mock internal methods
@patch("NotificationService._format_message")
def test_notification_formatting(mock_format):
    # Don't mock private methods
    send_notification("Build complete!")
```
Mock when:
- Dependency is slow (database, network, file system)
- Dependency is unreliable (external APIs)
- Dependency is expensive (third-party services)
Don't mock when:
- Testing the dependency itself
- The dependency is fast and stable
- The mock becomes more complex than real implementation
## Coverage Expectations
Write tests for:
- Critical business logic (aim for 90%+)
- Edge cases and error paths (aim for 80%+)
- Public APIs and contracts (aim for 100%)
Don't obsess over:
- Trivial getters/setters
- Generated code
- One-line wrappers
Coverage is a floor, not a ceiling. A test suite at 100% coverage that doesn't verify behavior is worthless.
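The tiered targets above can be enforced mechanically. A hypothetical coverage configuration sketch (the threshold and omit globs are placeholders to adapt per project):

```toml
[tool.coverage.report]
fail_under = 80          # a floor, not a goal
omit = [
  "*/generated/*",       # generated code, per the list above
  "*/__main__.py",       # trivial entry points
]
```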
## Test-Driven Development
Follow the red-green-refactor cycle:
1. Red: Write failing test for new behavior
2. Green: Write minimum code to pass
3. Refactor: Improve code while tests stay green
Write tests first for new features. Write tests after for bug fixes. Never refactor without tests.
## Test Organization
Group tests by feature or behavior, not by file structure. Name tests to describe the scenario:
```python
class TestUserAuthentication:
    def test_valid_credentials_succeeds(self):
        pass

    def test_invalid_credentials_fails(self):
        pass

    def test_locked_account_fails(self):
        pass
```
Each test should stand alone. Avoid shared state between tests. Use fixtures or setup methods to reduce duplication.
## Test Data
Use realistic test data that reflects production scenarios:
```python
# GOOD: Realistic values
user = User(
    email="alice@example.com",
    name="Alice Smith",
    age=28
)

# BAD: Placeholder values
user = User(
    email="test@test.com",
    name="Test User",
    age=999
)
```
Avoid magic strings and numbers. Use named constants for expected values that change often.
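A minimal sketch of the named-constant advice (the threshold and function are hypothetical):

```python
# Hypothetical business rule: the constant names the magic number once.
FREE_SHIPPING_THRESHOLD = 50.00

def qualifies_for_free_shipping(total: float) -> bool:
    return total >= FREE_SHIPPING_THRESHOLD

def test_order_at_threshold_ships_free():
    assert qualifies_for_free_shipping(FREE_SHIPPING_THRESHOLD) is True

def test_order_below_threshold_pays_shipping():
    assert qualifies_for_free_shipping(49.99) is False
```

If the threshold changes, only the constant moves; the tests keep reading as the rule they verify.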


rules/frameworks/n8n.md Normal file

@@ -0,0 +1,42 @@
# n8n Workflow Automation Rules
## Workflow Design
- Start with a clear trigger: Webhook, Schedule, or Event source
- Keep workflows under 20 nodes for maintainability
- Group related logic with sub-workflows
- Use the "Switch" node for conditional branching
- Add "Wait" nodes between rate-limited API calls
## Node Naming
- Use verb-based names: `Fetch Users`, `Transform Data`, `Send Email`
- Prefix data nodes: `Get_`, `Set_`, `Update_`
- Prefix conditionals: `Check_`, `If_`, `When_`
- Prefix actions: `Send_`, `Create_`, `Delete_`
- Add version suffix to API nodes: `API_v1_Users`
## Error Handling
- Always add an Error Trigger node
- Route errors to a "Notify Failure" branch
- Log error details: `$json.error.message`, `$json.node.name`
- Send alerts on critical failures
- Add "Continue On Fail" for non-essential nodes
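The "Notify Failure" branch above can format its alert in a Code node. A hedged sketch, assuming the usual Error Trigger payload shape (`error.message`, `node.name`) -- verify the exact fields against your n8n version:

```javascript
// Sketch of a "Notify Failure" Code-node helper.
// Assumes payload shape: { error: { message }, node: { name } }.
function formatFailureAlert(payload) {
  const message = (payload.error && payload.error.message) || "unknown error";
  const node = (payload.node && payload.node.name) || "unknown node";
  return `Workflow failed at "${node}": ${message}`;
}
```

The fallbacks keep the alert useful even when a malformed payload reaches the branch.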
## Data Flow
- Use "Set" nodes to normalize output structure
- Reference previous nodes: `{{ $json.field }}`
- Use "Merge" node to combine multiple data sources
- Apply "Code" node for complex transformations
- Clean data before sending to external APIs
## Credential Security
- Store all secrets in n8n credentials manager
- Never hardcode API keys or tokens
- Use environment-specific credential sets
- Rotate credentials regularly
- Limit credential scope to minimum required permissions
## Testing
- Test each node independently with "Execute Node"
- Verify data structure at each step
- Mock external dependencies during development
- Log workflow execution for debugging

rules/languages/.gitkeep Normal file

rules/languages/nix.md Normal file

@@ -0,0 +1,129 @@
# Nix Code Conventions
## Formatting
- Use `alejandra` for formatting
- camelCase for variables, `PascalCase` for types
- 2 space indentation (alejandra default)
- No trailing whitespace
## Flake Structure
```nix
{
  description = "Description here";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    flake-utils.url = "github:numtide/flake-utils";
  };

  outputs = { self, nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = nixpkgs.legacyPackages.${system};
      in {
        packages.default = pkgs.hello;
        devShells.default = pkgs.mkShell {
          buildInputs = [ pkgs.hello ];
        };
      }
    );
}
```
## Module Patterns
Standard module function signature:
```nix
{ config, lib, pkgs, ... }:
{
  options.myService.enable = lib.mkEnableOption "my service";

  config = lib.mkIf config.myService.enable {
    services.myService.enable = true;
  };
}
```
## Conditionals and Merging
- Use `mkIf` for conditional config
- Use `mkMerge` to combine multiple config sets
- Use `mkOptionDefault` for defaults that can be overridden
```nix
config = lib.mkMerge [
  (lib.mkIf cfg.enable { ... })
  (lib.mkIf cfg.extraConfig { ... })
];
```
## Anti-Patterns (AVOID)
### `with pkgs;`
Bad: Pollutes namespace, hard to trace origins
```nix
{ pkgs, ... }:
{
  packages = with pkgs; [ vim git ];
}
```
Good: Explicit references
```nix
{ pkgs, ... }:
{
  packages = [ pkgs.vim pkgs.git ];
}
```
### `builtins.fetchTarball`
Use flake inputs instead. `fetchTarball` is non-reproducible.
### Impure operations
Avoid `import <nixpkgs>` in flakes. Always use inputs.
### `builtins.getAttr` / `builtins.hasAttr`
Use `lib.attrByPath` or `lib.optionalAttrs` instead.
## Home Manager Patterns
```nix
{ config, pkgs, lib, ... }:
{
  home.packages = [ pkgs.ripgrep pkgs.fd ];
  programs.zsh.enable = true;
  xdg.configFile."myapp/config".text = "...";
}
```
## Overlays
```nix
{ config, lib, pkgs, ... }:
let
  myOverlay = final: prev: {
    myPackage = prev.myPackage.overrideAttrs (old: { ... });
  };
in
{
  nixpkgs.overlays = [ myOverlay ];
}
```
## Imports and References
- Use flake inputs for dependencies
- `lib` is always available in modules
- Reference packages via `pkgs.packageName`
- Use `callPackage` for complex package definitions
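A minimal `callPackage` sketch (the `./pkgs/my-tool.nix` path and module context are hypothetical):

```nix
{ pkgs, ... }:
{
  environment.systemPackages = [
    # callPackage auto-wires the package function's arguments from pkgs
    (pkgs.callPackage ./pkgs/my-tool.nix { })
  ];
}
```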
## File Organization
```
flake.nix           # Entry point
modules/            # NixOS modules
  services/
    my-service.nix
overlays/           # Package overrides
  default.nix

rules/languages/python.md Normal file

@@ -0,0 +1,224 @@
# Python Language Rules
## Toolchain
### Package Management (uv)
```bash
uv init my-project --package
uv add numpy pandas
uv add --dev pytest ruff pyright hypothesis
uv run python -m pytest
uv lock --upgrade-package numpy
```
### Linting & Formatting (ruff)
```toml
[tool.ruff]
line-length = 100
target-version = "py311"
[tool.ruff.lint]
select = ["E", "F", "W", "I", "N", "UP"]
ignore = ["E501"]
[tool.ruff.format]
quote-style = "double"
```
### Type Checking (pyright)
```toml
[tool.pyright]
typeCheckingMode = "strict"
reportMissingTypeStubs = true
reportUnknownMemberType = true
```
### Testing (pytest + hypothesis)
```python
import pytest
from hypothesis import given, strategies as st

@given(st.integers(), st.integers())
def test_addition_commutative(a, b):
    assert a + b == b + a

@pytest.fixture
def user_data():
    return {"name": "Alice", "age": 30}

def test_user_creation(user_data):
    user = User(**user_data)
    assert user.name == "Alice"
```
### Data Validation (Pydantic)
```python
from pydantic import BaseModel, Field, field_validator

class User(BaseModel):
    name: str = Field(min_length=1, max_length=100)
    age: int = Field(ge=0, le=150)
    email: str

    @field_validator('email')
    @classmethod
    def email_must_contain_at(cls, v: str) -> str:
        if '@' not in v:
            raise ValueError('must contain @')
        return v
```
## Idioms
### Comprehensions
```python
# List comprehension
squares = [x**2 for x in range(10) if x % 2 == 0]
# Dict comprehension
word_counts = {word: text.count(word) for word in unique_words}
# Set comprehension
unique_chars = {char for char in text if char.isalpha()}
```
### Context Managers
```python
# Built-in context managers
with open('file.txt', 'r') as f:
    content = f.read()

# Custom context manager
import time
from contextlib import contextmanager

@contextmanager
def timer():
    start = time.time()
    yield
    print(f"Elapsed: {time.time() - start:.2f}s")
```
### Generators
```python
def fibonacci():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def read_lines(file_path):
    with open(file_path) as f:
        for line in f:
            yield line.strip()
```
### F-strings
```python
name = "Alice"
age = 30
# Basic interpolation
msg = f"Name: {name}, Age: {age}"
# Expression evaluation
msg = f"Next year: {age + 1}"
# Format specs
msg = f"Price: ${price:.2f}"
msg = f"Hex: {0xFF:X}"
```
## Anti-Patterns
### Bare Except
```python
# AVOID: Catches all exceptions including SystemExit
try:
    risky_operation()
except:
    pass

# USE: Catch specific exceptions
try:
    risky_operation()
except ValueError as e:
    log_error(e)
except KeyError as e:
    log_error(e)
```
### Mutable Defaults
```python
# AVOID: Default argument created once
def append_item(item, items=[]):
    items.append(item)
    return items

# USE: None as sentinel
def append_item(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```
### Global State
```python
# AVOID: Global mutable state
counter = 0

def increment():
    global counter
    counter += 1

# USE: Class-based state
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
```
### Star Imports
```python
# AVOID: Pollutes namespace, unclear origins
from module import *
# USE: Explicit imports
from module import specific_function, MyClass
import module as m
```
## Project Setup
### pyproject.toml Structure
```toml
[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
"pydantic>=2.0",
"httpx>=0.25",
]
[project.optional-dependencies]
dev = ["pytest", "ruff", "pyright", "hypothesis"]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```
### src Layout
```
my-project/
├── pyproject.toml
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        └── utils/
            ├── __init__.py
            └── helpers.py

rules/languages/shell.md Normal file

@@ -0,0 +1,100 @@
# Shell Scripting Rules
## Shebang
Always use `#!/usr/bin/env bash` for portability. Never hardcode `/bin/bash`.
```bash
#!/usr/bin/env bash
```
## Strict Mode
Enable strict mode in every script.
```bash
#!/usr/bin/env bash
set -euo pipefail
```
- `-e`: Exit on error
- `-u`: Error on unset variables
- `-o pipefail`: A pipeline's exit status is that of the rightmost command that failed, not the last command
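A minimal sketch of how `-u` interacts with parameter defaults (the greeting is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# With -u, a bare "$1" aborts when no argument is given;
# a default expansion keeps the script safe.
name="${1:-world}"
echo "Hello, ${name}"
```

Prefer explicit defaults like this over disabling `-u` when optional arguments are expected.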
## Shellcheck
Run shellcheck on all scripts before committing.
```bash
shellcheck script.sh
```
## Quoting
Quote all variable expansions and command substitutions. Use arrays instead of word-splitting strings.
```bash
# Good
"${var}"
files=("file1.txt" "file2.txt")
for f in "${files[@]}"; do
  process "$f"
done

# Bad
$var
files="file1.txt file2.txt"
for f in $files; do
  process $f
done
```
## Functions
Define with parentheses, use `local` for variables.
```bash
my_function() {
  local result
  result=$(some_command)
  echo "$result"
}
```
## Command Substitution
Use `$()` not backticks. Nests cleanly.
```bash
# Good
output=$(ls "$dir")
# Bad
output=`ls $dir`
```
## POSIX Portability
Write POSIX-compliant scripts when targeting `/bin/sh`.
- Use `[[` only for bash scripts
- Use `printf` instead of `echo -e`
- Avoid `[[`, `((`, `&>` in sh scripts
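A small POSIX-portable sketch of those rules (`[ ]` instead of `[[`, `printf` instead of `echo -e`; the greeting helper is illustrative):

```shell
#!/bin/sh
# POSIX-only constructs: [ ] tests, printf, quoted "$@"
greet_all() {
  if [ "$#" -eq 0 ]; then
    printf 'usage: greet NAME...\n' >&2
    return 1
  fi
  for name in "$@"; do
    printf 'Hello, %s\n' "$name"
  done
}
```

This runs identically under dash, ash, and bash invoked as `sh`.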
## Error Handling
Use `trap` for cleanup.
```bash
cleanup() {
  rm -f /tmp/lockfile
}
trap cleanup EXIT
```
## Readability
- Use 2-space indentation
- Limit lines to 80 characters
- Add comments for non-obvious logic
- Separate sections with blank lines


@@ -0,0 +1,150 @@
# TypeScript Patterns
## Strict tsconfig
Always enable strict mode and key safety options:
```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true
  }
}
```
## Discriminated Unions
Use discriminated unions for exhaustive type safety:
```ts
type Result =
  | { success: true; data: string }
  | { success: false; error: Error };

function handleResult(result: Result): string {
  if (result.success) {
    return result.data;
  }
  throw result.error;
}
```
## Branded Types
Prevent type confusion with nominal branding:
```ts
type UserId = string & { readonly __brand: unique symbol };
type Email = string & { readonly __brand: unique symbol };

function createUserId(id: string): UserId {
  return id as UserId;
}

function sendEmail(email: Email, userId: UserId) {}
```
## satisfies Operator
Use `satisfies` for type-safe object literal inference:
```ts
const config = {
  port: 3000,
  host: "localhost",
} satisfies {
  port: number;
  host: string;
  debug?: boolean;
};

config.port; // number
config.host; // string
```
## as const Assertions
Freeze literal types with `as const`:
```ts
const routes = {
  home: "/",
  about: "/about",
  contact: "/contact",
} as const;

type Route = typeof routes[keyof typeof routes];
```
## Modern Features
```ts
// Promise.withResolvers()
const { promise, resolve, reject } = Promise.withResolvers<string>();

// Object.groupBy()
const users = [
  { name: "Alice", role: "admin" },
  { name: "Bob", role: "user" },
];
const grouped = Object.groupBy(users, u => u.role);

// await using for async disposables
class Resource implements AsyncDisposable {
  private async cleanup() { /* release handles */ }
  async [Symbol.asyncDispose]() {
    await this.cleanup();
  }
}

async function withResource() {
  await using r = new Resource();
}
```
## Toolchain
Prefer modern tooling:
- Runtime: `bun` or `tsx` (no `tsc` for execution)
- Linting: `biome` (preferred) or `eslint`
- Formatting: `biome` (built-in) or `prettier`
## Anti-Patterns
Avoid these TypeScript patterns:
```ts
// NEVER use as any
const data = response as any;
// NEVER use @ts-ignore
// @ts-ignore
const value = unknownFunction();
// NEVER use ! assertion (non-null)
const element = document.querySelector("#foo")!;
// NEVER use enum (prefer union)
enum Status { Active, Inactive } // ❌
// Prefer const object or union
type Status = "Active" | "Inactive"; // ✅
const Status = { Active: "Active", Inactive: "Inactive" } as const; // ✅
```
## Indexed Access Safety
With `noUncheckedIndexedAccess`, handle undefined:
```ts
const arr: string[] = ["a", "b"];
const item = arr[0]; // string | undefined
const item2 = arr.at(0); // string | undefined
const map = new Map<string, number>();
const value = map.get("key"); // number | undefined
```


@@ -8,7 +8,7 @@
# ./scripts/test-skill.sh --run # Launch interactive opencode session
#
# This script creates a temporary XDG_CONFIG_HOME with symlinks to this
# repository's skill/, context/, command/, and prompts/ directories,
# repository's skills/, context/, command/, and prompts/ directories,
# allowing you to test skill changes before deploying via home-manager.
set -euo pipefail
@@ -26,9 +26,9 @@ setup_test_config() {
local tmp_config="$tmp_base/opencode"
mkdir -p "$tmp_config"
ln -sf "$REPO_ROOT/skill" "$tmp_config/skill"
ln -sf "$REPO_ROOT/skills" "$tmp_config/skills"
ln -sf "$REPO_ROOT/context" "$tmp_config/context"
ln -sf "$REPO_ROOT/command" "$tmp_config/command"
ln -sf "$REPO_ROOT/commands" "$tmp_config/commands"
ln -sf "$REPO_ROOT/prompts" "$tmp_config/prompts"
echo "$tmp_base"
@@ -72,17 +72,17 @@ list_skills() {
validate_skill() {
local skill_name="$1"
local skill_path="$REPO_ROOT/skill/$skill_name"
local skill_path="$REPO_ROOT/skills/$skill_name"
if [[ ! -d "$skill_path" ]]; then
echo -e "${RED}❌ Skill not found: $skill_name${NC}"
echo "Available skills:"
ls -1 "$REPO_ROOT/skill/"
ls -1 "$REPO_ROOT/skills/"
exit 1
fi
echo -e "${YELLOW}Validating skill: $skill_name${NC}"
if python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_path"; then
if python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_path"; then
echo -e "${GREEN}✅ Skill '$skill_name' is valid${NC}"
else
echo -e "${RED}❌ Skill '$skill_name' has validation errors${NC}"
@@ -95,14 +95,14 @@ validate_all() {
echo ""
local failed=0
for skill_dir in "$REPO_ROOT/skill/"*/; do
for skill_dir in "$REPO_ROOT/skills/"*/; do
local skill_name=$(basename "$skill_dir")
echo -n " $skill_name: "
if python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_dir" > /dev/null 2>&1; then
if python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_dir" > /dev/null 2>&1; then
echo -e "${GREEN}${NC}"
else
echo -e "${RED}${NC}"
python3 "$REPO_ROOT/skill/skill-creator/scripts/quick_validate.py" "$skill_dir" 2>&1 | sed 's/^/ /'
python3 "$REPO_ROOT/skills/skill-creator/scripts/quick_validate.py" "$skill_dir" 2>&1 | sed 's/^/ /'
((failed++)) || true
fi
done

scripts/validate-agents.sh Executable file

@@ -0,0 +1,182 @@
#!/usr/bin/env bash
#
# Validate agents.json structure and referenced prompt files
#
# Usage:
# ./scripts/validate-agents.sh
#
# This script validates the agent configuration by:
# - Parsing agents.json as valid JSON
# - Checking all 6 required agents are present
# - Verifying each agent has required fields
# - Validating agent modes (primary vs subagent)
# - Verifying all referenced prompt files exist and are non-empty
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
AGENTS_FILE="$REPO_ROOT/agents/agents.json"
PROMPTS_DIR="$REPO_ROOT/prompts"
# Expected agent list
EXPECTED_AGENTS=("chiron" "chiron-forge" "hermes" "athena" "apollo" "calliope")
# Expected primary agents
PRIMARY_AGENTS=("chiron" "chiron-forge")
# Expected subagents
SUBAGENTS=("hermes" "athena" "apollo" "calliope")
# Required fields for each agent
REQUIRED_FIELDS=("description" "mode" "model" "prompt")
echo -e "${YELLOW}Validating agent configuration...${NC}"
echo ""
# Track errors
error_count=0
warning_count=0
# Function to print error
error() {
echo -e "${RED}$1${NC}" >&2
((error_count++)) || true
}
# Function to print warning
warning() {
echo -e "${YELLOW}⚠️ $1${NC}"
((warning_count++)) || true
}
# Function to print success
success() {
echo -e "${GREEN}$1${NC}"
}
# Check if agents.json exists
if [[ ! -f "$AGENTS_FILE" ]]; then
error "agents.json not found at $AGENTS_FILE"
exit 1
fi
# Validate JSON syntax
if ! python3 -c "import json; json.load(open('$AGENTS_FILE'))" 2>/dev/null; then
error "agents.json is not valid JSON"
exit 1
fi
success "agents.json is valid JSON"
echo ""
# Parse agents.json
AGENT_COUNT=$(python3 -c "import json; print(len(json.load(open('$AGENTS_FILE'))))")
success "Found $AGENT_COUNT agents in agents.json"
# Check agent count
if [[ $AGENT_COUNT -ne ${#EXPECTED_AGENTS[@]} ]]; then
error "Expected ${#EXPECTED_AGENTS[@]} agents, found $AGENT_COUNT"
fi
# Get list of agent names
AGENT_NAMES=$(python3 -c "import json; print(' '.join(sorted(json.load(open('$AGENTS_FILE')).keys())))")
echo ""
echo "Checking agent list..."
# Check for missing agents
for expected_agent in "${EXPECTED_AGENTS[@]}"; do
if echo "$AGENT_NAMES" | grep -qw "$expected_agent"; then
success "Agent '$expected_agent' found"
else
error "Required agent '$expected_agent' not found"
fi
done
# Check for unexpected agents
for agent_name in $AGENT_NAMES; do
if [[ ! " ${EXPECTED_AGENTS[*]} " =~ " ${agent_name} " ]]; then
warning "Unexpected agent '$agent_name' found (not in expected list)"
fi
done
echo ""
echo "Checking agent fields and modes..."
# Validate each agent
for agent_name in "${EXPECTED_AGENTS[@]}"; do
echo -n " $agent_name: "
# Check required fields
missing_fields=()
for field in "${REQUIRED_FIELDS[@]}"; do
if ! python3 -c "import json; data=json.load(open('$AGENTS_FILE')); print(data.get('$agent_name', {}).get('$field', ''))" 2>/dev/null | grep -q .; then
missing_fields+=("$field")
fi
done
if [[ ${#missing_fields[@]} -gt 0 ]]; then
error "Missing required fields: ${missing_fields[*]}"
continue
fi
# Get mode value
mode=$(python3 -c "import json; print(json.load(open('$AGENTS_FILE'))['$agent_name']['mode'])")
# Validate mode
if [[ " ${PRIMARY_AGENTS[*]} " =~ " ${agent_name} " ]]; then
if [[ "$mode" == "primary" ]]; then
success "Mode: $mode (valid)"
else
error "Expected mode 'primary' for agent '$agent_name', found '$mode'"
fi
elif [[ " ${SUBAGENTS[*]} " =~ " ${agent_name} " ]]; then
if [[ "$mode" == "subagent" ]]; then
success "Mode: $mode (valid)"
else
error "Expected mode 'subagent' for agent '$agent_name', found '$mode'"
fi
fi
done
echo ""
echo "Checking prompt files..."
# Validate prompt file references
for agent_name in "${EXPECTED_AGENTS[@]}"; do
# Extract prompt file path from agent config
prompt_ref=$(python3 -c "import json; print(json.load(open('$AGENTS_FILE'))['$agent_name']['prompt'])")
# Parse prompt reference: {file:./prompts/<name>.txt}
if [[ "$prompt_ref" =~ \{file:(\./prompts/[^}]+)\} ]]; then
prompt_file="${BASH_REMATCH[1]}"
prompt_path="$REPO_ROOT/${prompt_file#./}"
# Check if prompt file exists
if [[ -f "$prompt_path" ]]; then
# Check if prompt file is non-empty
if [[ -s "$prompt_path" ]]; then
success "Prompt file exists and non-empty: $prompt_file"
else
error "Prompt file is empty: $prompt_file"
fi
else
error "Prompt file not found: $prompt_file"
fi
else
error "Invalid prompt reference format for agent '$agent_name': $prompt_ref"
fi
done
echo ""
if [[ $error_count -eq 0 ]]; then
echo -e "${GREEN}All validations passed!${NC}"
exit 0
else
echo -e "${RED}$error_count validation error(s) found${NC}"
exit 1
fi


@@ -1,262 +0,0 @@
---
name: basecamp
description: "Manage work projects in Basecamp via MCP. Use when: (1) creating or viewing Basecamp projects, (2) managing todos or todo lists, (3) working with card tables (kanban boards), (4) searching Basecamp content, (5) syncing project plans to Basecamp. Triggers: basecamp, create todos, show my projects, card table, move card, basecamp search, sync to basecamp, what's in basecamp."
compatibility: opencode
---
# Basecamp
Manage work projects in Basecamp via MCP server. Provides workflows for project overview, todo management, kanban boards, and syncing from plan-writing skill.
## Quick Reference
| Action | Command Pattern |
| --------------- | -------------------------------------- |
| List projects | "Show my Basecamp projects" |
| View project | "What's in [project name]?" |
| Create todos | "Add todos to [project]" |
| View card table | "Show kanban for [project]" |
| Move card | "Move [card] to [column]" |
| Search | "Search Basecamp for [query]" |
| Sync plan | "Create Basecamp todos from this plan" |
## Core Workflows
### 1. Project Overview
List and explore projects:
```
1. get_projects → list all projects
2. Present summary: name, last activity
3. User selects project
4. get_project(id) → show dock items (todosets, card tables, message boards)
```
**Example output:**
```
Your Basecamp Projects:
1. Q2 Training Program (last activity: 2 hours ago)
2. Website Redesign (last activity: yesterday)
3. Product Launch (last activity: 3 days ago)
Which project would you like to explore?
```
### 2. Todo Management
**View todos:**
```
1. get_project(id) → find todoset from dock
2. get_todolists(project_id) → list all todo lists
3. get_todos(project_id, todolist_id) → show todos with status
```
**Create todos:**
```
1. Identify target project and todo list
2. For each todo:
create_todo(
project_id,
todolist_id,
content,
due_on?, # YYYY-MM-DD format
assignee_ids?, # array of person IDs
notify? # boolean
)
3. Confirm creation with links
```
**Complete/update todos:**
```
- complete_todo(project_id, todo_id) → mark done
- uncomplete_todo(project_id, todo_id) → reopen
- update_todo(project_id, todo_id, content?, due_on?, assignee_ids?)
- delete_todo(project_id, todo_id) → remove
```
### 3. Card Table (Kanban) Management
**View board:**
```
1. get_card_table(project_id) → get card table details
2. get_columns(project_id, card_table_id) → list columns
3. For each column: get_cards(project_id, column_id)
4. Present as kanban view
```
**Example output:**
```
Card Table: Development Pipeline
| Backlog (3) | In Progress (2) | Review (1) | Done (5) |
|-------------|-----------------|------------|----------|
| Feature A | Feature B | Bug fix | ... |
| Feature C | Feature D | | |
| Refactor | | | |
```
**Manage columns:**
```
- create_column(project_id, card_table_id, title)
- update_column(project_id, column_id, title) → rename
- move_column(project_id, card_table_id, column_id, position)
- update_column_color(project_id, column_id, color)
- put_column_on_hold(project_id, column_id) → freeze work
- remove_column_hold(project_id, column_id) → unfreeze
```
**Manage cards:**
```
- create_card(project_id, column_id, title, content?, due_on?, notify?)
- update_card(project_id, card_id, title?, content?, due_on?, assignee_ids?)
- move_card(project_id, card_id, column_id) → move to different column
- complete_card(project_id, card_id)
- uncomplete_card(project_id, card_id)
```
**Card steps (subtasks):**
```
- get_card_steps(project_id, card_id) → list subtasks
- create_card_step(project_id, card_id, title, due_on?, assignee_ids?)
- complete_card_step(project_id, step_id)
- update_card_step(project_id, step_id, title?, due_on?, assignee_ids?)
- delete_card_step(project_id, step_id)
```
### 4. Search
```
search_basecamp(query, project_id?)
- Omit project_id → search all projects
- Include project_id → scope to specific project
```
Results include todos, messages, and other content matching the query.
### 5. Sync from Plan-Writing
When user has a project plan from plan-writing skill:
```
1. Parse todo-structure.md or tasks.md for task hierarchy
2. Ask: "Which Basecamp project should I add these to?"
- List existing projects via get_projects
- Note: New projects must be created manually in Basecamp
3. Ask: "Use todo lists or card table?"
4. If todo lists:
- Create todo list per phase/milestone if needed
- Create todos with due dates and assignees
5. If card table:
- Create columns for phases/statuses
- Create cards from tasks
- Add card steps for subtasks
6. Confirm: "Created X todos/cards in [project]. View in Basecamp."
```
### 6. Status Check
```
User: "What's the status of [project]?"
1. get_project(id)
2. For each todo list: get_todos, count complete/incomplete
3. If card table exists: get columns and card counts
4. Calculate summary:
- X todos complete, Y incomplete, Z overdue
- Card distribution across columns
5. Highlight: overdue items, blocked items
```
**Example output:**
```
Project: Q2 Training Program
Todos: 12/20 complete (60%)
- 3 overdue items
- 5 due this week
Card Table: Development
| Backlog | In Progress | Review | Done |
| 3 | 2 | 1 | 8 |
Attention needed:
- "Create training materials" (overdue by 2 days)
- "Review curriculum" (due tomorrow)
```
## Tool Categories
For complete tool reference with parameters, see [references/mcp-tools.md](references/mcp-tools.md).
| Category | Key Tools |
| ---------- | -------------------------------------------------------------- |
| Projects | get_projects, get_project |
| Todos | get_todolists, get_todos, create_todo, complete_todo |
| Cards | get_card_table, get_columns, get_cards, create_card, move_card |
| Card Steps | get_card_steps, create_card_step, complete_card_step |
| Search | search_basecamp |
| Comments | get_comments, create_comment |
| Documents | get_documents, create_document, update_document |
## Limitations
- **No create_project tool**: Projects must be created manually in Basecamp UI
- **Work projects only**: This skill is for professional/team projects
- **Pagination handled**: MCP server handles pagination transparently
## Integration with Other Skills
| From Skill | To Basecamp |
| --------------- | ------------------------------------------------- |
| brainstorming | Save decision → reference in project docs |
| plan-writing | todo-structure.md → Basecamp todos or cards |
| task-management | Anytype tasks ↔ Basecamp todos (manual reference) |
## Common Patterns
### Create todos from a list
```
User provides list:
- Task 1 (due Friday)
- Task 2 (due next week)
- Task 3
1. Identify or confirm project and todo list
2. Parse due dates (Friday → YYYY-MM-DD)
3. Create each todo via create_todo
4. Report: "Created 3 todos in [list name]"
```
### Move cards through workflow
```
User: "Move Feature A to In Progress"
1. search_basecamp("Feature A") or get_cards to find card_id
2. get_columns to find target column_id
3. move_card(project_id, card_id, column_id)
4. Confirm: "Moved 'Feature A' to 'In Progress'"
```
### Add subtasks to a card
```
User: "Add subtasks to the Feature B card"
1. Find card via search or get_cards
2. For each subtask:
create_card_step(project_id, card_id, title)
3. Report: "Added X steps to 'Feature B'"
```


@@ -1,198 +0,0 @@
# Basecamp MCP Tools Reference
Complete reference for the available Basecamp MCP tools.
## Projects
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_projects` | none | List of all projects with id, name, description |
| `get_project` | project_id | Project details including dock (todosets, card tables, etc.) |
## Todo Lists
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_todolists` | project_id | All todo lists in project |
## Todos
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_todos` | project_id, todolist_id | All todos (pagination handled) |
| `create_todo` | project_id, todolist_id, content, due_on?, assignee_ids?, notify? | Created todo |
| `update_todo` | project_id, todo_id, content?, due_on?, assignee_ids? | Updated todo |
| `delete_todo` | project_id, todo_id | Success confirmation |
| `complete_todo` | project_id, todo_id | Completed todo |
| `uncomplete_todo` | project_id, todo_id | Reopened todo |
### Todo Parameters
- `content`: String - The todo text
- `due_on`: String - Date in YYYY-MM-DD format
- `assignee_ids`: Array of integers - Person IDs to assign
- `notify`: Boolean - Whether to notify assignees
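These parameter rules can be enforced before calling `create_todo`. A minimal sketch, assuming the tool accepts a flat argument dict; the helper name and validation choices are illustrative, not part of the tool's API:

```python
import re

DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def build_todo_payload(project_id, todolist_id, content, due_on=None,
                       assignee_ids=None, notify=None):
    """Assemble a create_todo argument dict, enforcing the parameter types above."""
    if not content:
        raise ValueError("content is required")
    if due_on is not None and not DATE_RE.match(due_on):
        raise ValueError("due_on must be in YYYY-MM-DD format")
    payload = {"project_id": project_id, "todolist_id": todolist_id,
               "content": content}
    if due_on:
        payload["due_on"] = due_on
    if assignee_ids:
        payload["assignee_ids"] = [int(i) for i in assignee_ids]
    if notify is not None:
        payload["notify"] = bool(notify)
    return payload
```

Optional keys are omitted entirely when unset, so the tool's own defaults apply.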
## Card Tables
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_card_tables` | project_id | All card tables in project |
| `get_card_table` | project_id | Primary card table details |
## Columns
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_columns` | project_id, card_table_id | All columns in card table |
| `get_column` | project_id, column_id | Column details |
| `create_column` | project_id, card_table_id, title | New column |
| `update_column` | project_id, column_id, title | Updated column |
| `move_column` | project_id, card_table_id, column_id, position | Moved column |
| `update_column_color` | project_id, column_id, color | Updated color |
| `put_column_on_hold` | project_id, column_id | Column frozen |
| `remove_column_hold` | project_id, column_id | Column unfrozen |
| `watch_column` | project_id, column_id | Subscribed to notifications |
| `unwatch_column` | project_id, column_id | Unsubscribed |
### Column Colors
Available colors for `update_column_color`:
- white, grey, pink, red, orange, yellow, green, teal, blue, purple
## Cards
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_cards` | project_id, column_id | All cards in column |
| `get_card` | project_id, card_id | Card details |
| `create_card` | project_id, column_id, title, content?, due_on?, notify? | New card |
| `update_card` | project_id, card_id, title?, content?, due_on?, assignee_ids? | Updated card |
| `move_card` | project_id, card_id, column_id | Card moved to column |
| `complete_card` | project_id, card_id | Card marked complete |
| `uncomplete_card` | project_id, card_id | Card reopened |
### Card Parameters
- `title`: String - Card title
- `content`: String - Card description/body (supports HTML)
- `due_on`: String - Date in YYYY-MM-DD format
- `assignee_ids`: Array of integers - Person IDs
- `notify`: Boolean - Notify assignees on creation
## Card Steps (Subtasks)
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_card_steps` | project_id, card_id | All steps on card |
| `create_card_step` | project_id, card_id, title, due_on?, assignee_ids? | New step |
| `get_card_step` | project_id, step_id | Step details |
| `update_card_step` | project_id, step_id, title?, due_on?, assignee_ids? | Updated step |
| `delete_card_step` | project_id, step_id | Step deleted |
| `complete_card_step` | project_id, step_id | Step completed |
| `uncomplete_card_step` | project_id, step_id | Step reopened |
## Search
| Tool | Parameters | Returns |
|------|------------|---------|
| `search_basecamp` | query, project_id? | Matching todos, messages, etc. |
- Omit `project_id` for global search across all projects
- Include `project_id` to scope search to specific project
## Communication
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_campfire_lines` | project_id, campfire_id | Recent chat messages |
| `get_comments` | project_id, recording_id | Comments on any item |
| `create_comment` | project_id, recording_id, content | New comment |
### Comment Parameters
- `recording_id`: The ID of the item (todo, card, document, etc.)
- `content`: String - Comment text (supports HTML)
## Daily Check-ins
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_daily_check_ins` | project_id, page? | Check-in questions |
| `get_question_answers` | project_id, question_id, page? | Answers to question |
## Documents
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_documents` | project_id, vault_id | Documents in vault |
| `get_document` | project_id, document_id | Document content |
| `create_document` | project_id, vault_id, title, content, status? | New document |
| `update_document` | project_id, document_id, title?, content? | Updated document |
| `trash_document` | project_id, document_id | Document trashed |
### Document Parameters
- `vault_id`: Found in project dock as the docs/files container
- `content`: String - Document body (supports HTML)
- `status`: "active" or "archived"
## Attachments
| Tool | Parameters | Returns |
|------|------------|---------|
| `create_attachment` | file_path, name, content_type? | Uploaded attachment |
## Events
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_events` | project_id, recording_id | Activity events on item |
## Webhooks
| Tool | Parameters | Returns |
|------|------------|---------|
| `get_webhooks` | project_id | Project webhooks |
| `create_webhook` | project_id, payload_url, types? | New webhook |
| `delete_webhook` | project_id, webhook_id | Webhook deleted |
### Webhook Types
Available types for `create_webhook`:
- Comment, Document, GoogleDocument, Message, Question::Answer
- Schedule::Entry, Todo, Todolist, Upload, Vault, Card, CardTable::Column
## Common Patterns
### Find project by name
```
1. get_projects → list all
2. Match name (case-insensitive partial match)
3. Return project_id
```
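The matching step above can be sketched as follows; the `id`/`name` field names are assumptions about the `get_projects` output:

```python
def find_project_id(projects, name):
    """Case-insensitive partial match over a get_projects result list."""
    needle = name.lower()
    matches = [p for p in projects if needle in p["name"].lower()]
    if len(matches) == 1:
        return matches[0]["id"]
    if not matches:
        return None
    # Ambiguous: surface the options rather than guessing
    raise LookupError(f"{len(matches)} projects match {name!r}")
```

Raising on ambiguity keeps the agent from silently acting on the wrong project.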
### Find todoset ID for a project
```
1. get_project(project_id)
2. Look in dock array for item with name "todoset"
3. Extract id from dock item URL
```
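Step 3 (pulling the ID out of the dock item URL) might look like this in Python. The URL shape (`…/todosets/123.json`) is an assumption based on typical Basecamp API URLs:

```python
import re

def dock_item_id(project, name):
    """Extract the trailing numeric id from the named dock item's URL."""
    for item in project.get("dock", []):
        if item.get("name") == name:
            match = re.search(r"/(\d+)(?:\.json)?$", item["url"])
            if match:
                return int(match.group(1))
    return None  # dock item absent or URL shape unexpected
```

The same helper works for the card-table pattern below by passing a different dock item name.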
### Find card table ID
```
1. get_project(project_id)
2. Look in dock for "kanban_board" or use get_card_tables
3. Extract card_table_id
```
### Get all todos across all lists
```
1. get_todolists(project_id)
2. For each todolist: get_todos(project_id, todolist_id)
3. Aggregate results
```
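The aggregation above is a straightforward fan-out. `client` is again a hypothetical wrapper over the MCP tools, and the `id` field name is assumed:

```python
def all_todos(client, project_id):
    """Flatten every todo list's todos into one sequence."""
    todos = []
    for todolist in client.get_todolists(project_id):
        todos.extend(client.get_todos(project_id, todolist["id"]))
    return todos
```

Since the MCP server handles pagination transparently, no paging loop is needed inside each `get_todos` call.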


@@ -1,132 +0,0 @@
# Brainstorm Anytype Workflow
This document describes how to create and use Brainstorm objects in Anytype.
## Quick Create (API)
```bash
# Create a brainstorm object using Anytype MCP
Anytype_API-create-object
space_id: bafyreie5sfq7pjfuq56hxsybos545bi4tok3kx7nab3vnb4tnt4i3575p4.yu20gbnjlbxv
type_key: "brainstorm_v_2"
name: "NixOS Course Launch Strategy"
body: "Full brainstorm content here..."
icon: { format: "emoji", emoji: "💭" }
properties: [
{ key: "topic", text: "NixOS Course Launch Strategy" },
{ key: "context", text: "Want to launch NixOS course for developers" },
{ key: "outcome", text: "Build long-term audience/community" },
{ key: "constraints", text: "2-4 weeks prep time, solo creator" },
{ key: "options", text: "Option A: Early access... Option B: Free preview..." },
{ key: "decision", text: "Early access with community" },
{ key: "rationale", text: "Builds anticipation while validating content" },
{ key: "next_steps", text: "1. Create landing page, 2. Build email list..." },
{ key: "framework", select: "bafyreigokn5xgdosd4cihehl3tqfsd25mwdaapuhopjgn62tkpvpwn4tmy" },
{ key: "status", select: "bafyreiffiinadpa2fwxw3iylj7pph3yzbnhe63dcyiwr4x24ne4jsgi24" }
]
```
## Type Properties
| Property | Type | Purpose |
|----------|------|---------|
| `topic` | text | Short title/summary |
| `context` | text | Situation and trigger |
| `outcome` | text | What success looks like |
| `constraints` | text | Time, resources, boundaries |
| `options` | text | Options explored |
| `decision` | text | Final choice made |
| `rationale` | text | Reasoning behind decision |
| `next_steps` | text/objects | Action items or linked tasks |
| `framework` | select | Thinking framework used |
| `status` | select | Draft → Final → Archived |
| `tags` | multi_select | Categorization |
| `linked_projects` | objects | Related projects |
| `linked_tasks` | objects | Related tasks |
## Framework Tag IDs
| Framework | Tag ID |
|-----------|--------|
| None | `bafyreiatkdbwq53shngaje6wuw752wxnwqlk3uhy6nicamdr56jpvji34i` |
| Pros/Cons | `bafyreiaizrndgxmzbbzo6lurkgi7fc6evemoc5tivswrdu57ngkizy4b3u` |
| SWOT | `bafyreiaym5zkajnsrklivpjkizkuyhy3v5fzo62aaeobdlqzhq47clv6lm` |
| 5 Whys | `bafyreihgfpsjeyuu7p46ejzd5jce5kmgfsuxy7r5kl4fqdhuq7jqoggtgq` |
| How-Now-Wow | `bafyreieublfraypplrr5mmnksnytksv4iyh7frspyn64gixaodwmnhmosu` |
| Starbursting | `bafyreieyz6xjpt3zxad7h643m24oloajcae3ocnma3ttqfqykmggrsksk4` |
| Constraint Mapping | `bafyreigokn5xgdosd4cihehl3tqfsd25mwdaapuhopjgn62tkpvpwn4tmy` |
## Status Tag IDs
| Status | Tag ID |
|--------|--------|
| Draft | `bafyreig5um57baws2dnntaxsi4smxtrzftpe57a7wyhfextvcq56kdkllq` |
| Final | `bafyreiffiinadpa2fwxw3iylj7pph3yzbnhe63dcyiwr4x24ne4jsgi24` |
| Archived | `bafyreihk6dlpwh3nljrxcqqe3v6tl52bxuvmx3rcgyzyom6yjmtdegu4ja` |
## Template Setup (Recommended)
For a better editing experience, create a template in Anytype:
1. Open Anytype desktop app → Chiron space
2. Go to Content Model → Object Types → Brainstorm v2
3. Click Templates (top right) → Click + to create template
4. Configure with:
- **Name**: "Brainstorm Session"
- **Icon**: 💭
- **Default Status**: Draft
- **Pre-filled structure**: Leave body empty for dynamic content
- **Property defaults**: Set framework to "None" as default
5. Save the template
Now when creating brainstorms, select this template for a guided experience.
## Linking to Other Objects
After creating a brainstorm, link it to related objects:
```bash
# Link to a project
Anytype_API-update-object
object_id: <brainstorm_id>
space_id: <chiron_space_id>
properties: [
{ key: "linked_projects", objects: ["<project_id>"] }
]
# Link to tasks
Anytype_API-update-object
object_id: <brainstorm_id>
space_id: <chiron_space_id>
properties: [
{ key: "linked_tasks", objects: ["<task_id_1>", "<task_id_2>"] }
]
```
## Searching Brainstorms
Find brainstorms by topic, status, or tags:
```bash
Anytype_API-search-space
space_id: bafyreie5sfq7pjfuq56hxsybos545bi4tok3kx7nab3vnb4tnt4i3575p4.yu20gbnjlbxv
query: "NixOS"
types: ["brainstorm_v_2"]
```
Or list all brainstorms:
```bash
Anytype_API-list-objects
space_id: bafyreie5sfq7pjfuq56hxsybos545bi4tok3kx7nab3vnb4tnt4i3575p4.yu20gbnjlbxv
type_id: bafyreifjneoy2bdxuwwai2e3mdn7zovudpzbjyflth7k3dj3o7tmhqdlw4
```
## Best Practices
1. **Create brainstorms for any significant decision** - Capture reasoning while fresh
2. **Mark as Final when complete** - Helps with search and review
3. **Link to related objects** - Creates context web
4. **Use frameworks selectively** - Not every brainstorm needs structure
5. **Review periodically** - Brainstorms can inform future decisions


@@ -1,69 +0,0 @@
---
name: calendar-scheduling
description: "Calendar and time management with Proton Calendar integration. Use when: (1) checking schedule, (2) blocking focus time, (3) scheduling meetings, (4) time-based planning, (5) managing availability. Triggers: calendar, schedule, when am I free, block time, meeting, availability, what's my day look like."
compatibility: opencode
---
# Calendar & Scheduling
Time management and calendar integration for Proton Calendar.
## Status: Stub
This skill is a placeholder for future development; the planned functionality is outlined below.
## Planned Features
### Schedule Overview
- Daily/weekly calendar view
- Meeting summaries
- Free time identification
### Time Blocking
- Deep work blocks
- Focus time protection
- Buffer time between meetings
### Meeting Management
- Quick meeting creation
- Availability checking
- Meeting prep reminders
### Time-Based Planning
- Energy-matched scheduling
- Context-based time allocation
- Review time protection
## Integration Points
- **Proton Calendar**: Primary calendar backend
- **task-management**: Align tasks with available time
- **ntfy**: Meeting reminders and alerts
## Quick Commands (Future)
| Command | Description |
|---------|-------------|
| `what's my day` | Today's schedule overview |
| `block [duration] for [activity]` | Create focus block |
| `when am I free [day]` | Check availability |
| `schedule meeting [details]` | Create calendar event |
## Proton Calendar Integration
API integration pending. Requires:
- Proton Bridge or API access
- CalDAV sync configuration
- Authentication setup
## Time Blocking Philosophy
Based on Sascha's preferences:
- **Early mornings**: Deep work (protect fiercely)
- **Mid-day**: Meetings and collaboration
- **Late afternoon**: Admin and email
- **Evening**: Review and planning
## Notes
Proton Calendar API access still needs to be configured. Consider CalDAV integration or an n8n workflow as a bridge.


@@ -1,78 +0,0 @@
---
name: communications
description: "Email and communication management with Proton Mail integration. Use when: (1) drafting emails, (2) managing follow-ups, (3) communication tracking, (4) message templates, (5) inbox management. Triggers: email, draft, reply, follow up, message, inbox, communication, respond to."
compatibility: opencode
---
# Communications
Email and communication management for Proton Mail.
## Status: Stub
This skill is a placeholder for future development; the planned functionality is outlined below.
## Planned Features
### Email Drafting
- Context-aware draft generation
- Tone matching (formal/casual)
- Template-based responses
### Follow-up Tracking
- Waiting-for list management
- Follow-up reminders
- Response tracking
### Inbox Management
- Priority sorting
- Quick triage assistance
- Archive recommendations
### Communication Templates
- Common response patterns
- Meeting request templates
- Status update formats
## Integration Points
- **Proton Mail**: Primary email backend
- **task-management**: Convert emails to tasks
- **ntfy**: Important email alerts
- **n8n**: Automation workflows
## Quick Commands (Future)
| Command | Description |
|---------|-------------|
| `draft reply to [context]` | Generate email draft |
| `follow up on [topic]` | Check follow-up status |
| `email template [type]` | Use saved template |
| `inbox summary` | Overview of pending emails |
## Proton Mail Integration
API integration pending. Options:
- Proton Bridge (local IMAP/SMTP)
- n8n with email triggers
- Manual copy/paste workflow initially
## Communication Style Guide
Based on Sascha's profile:
- **Tone**: Professional but approachable
- **Length**: Concise, get to the point
- **Structure**: Clear ask/action at the top
- **Follow-up**: Set clear expectations
## Email Templates (Future)
- Meeting request
- Status update
- Delegation request
- Follow-up reminder
- Thank you / acknowledgment
## Notes
Start with manual draft assistance. Proton Mail API integration can be added via an n8n workflow when ready.


@@ -1,60 +0,0 @@
---
name: knowledge-management
description: "Knowledge base and note management with Anytype. Use when: (1) saving information for later, (2) organizing notes and references, (3) finding past notes, (4) building knowledge connections, (5) managing documentation. Triggers: save this, note, remember, knowledge base, where did I put, find my notes on, documentation."
compatibility: opencode
---
# Knowledge Management
Note capture and knowledge organization using Anytype as the backend.
## Status: Stub
This skill is a placeholder for future development; the planned functionality is outlined below.
## Planned Features
### Quick Note Capture
- Minimal friction capture to Anytype
- Auto-tagging based on content
- Link to related notes
### Knowledge Retrieval
- Semantic search across notes
- Tag-based filtering
- Connection discovery
### Resource Organization
- PARA Resources category management
- Topic clustering
- Archive maintenance
### Documentation Management
- Technical docs organization
- Version tracking
- Cross-reference linking
## Integration Points
- **Anytype**: Primary storage (Resources type)
- **task-management**: Link notes to projects/areas
- **research**: Save research findings
## Quick Commands (Future)
| Command | Description |
|---------|-------------|
| `note: [content]` | Quick capture |
| `find notes on [topic]` | Search knowledge base |
| `link [note] to [note]` | Create connection |
| `organize [tag/topic]` | Cluster related notes |
## Anytype Types
- `note` - Quick captures
- `resource` - Organized reference material
- `document` - Formal documentation
## Notes
Expand based on actual note-taking patterns. Consider integration with the mem0-memory skill for AI-assisted recall.


@@ -1,165 +0,0 @@
---
name: plan-writing
description: "Transform ideas into comprehensive, actionable project plans with templates. Use when: (1) creating project kickoff documents, (2) structuring new projects, (3) building detailed task breakdowns, (4) documenting project scope and stakeholders, (5) setting up project for execution. Triggers: project plan, kickoff document, plan out, structure project, project setup, create plan for, what do I need to start."
compatibility: opencode
---
# Plan Writing
Transform brainstormed ideas into comprehensive, actionable project plans using modular templates.
## Quick Reference
| Project Type | Templates to Use |
|--------------|------------------|
| Solo, <2 weeks | project-brief, todo-structure |
| Solo, >2 weeks | project-brief, todo-structure, risk-register |
| Team, any size | project-kickoff, stakeholder-map, todo-structure, risk-register |
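The Quick Reference table reduces to a small decision function. A minimal sketch mirroring the three rows above; the boolean/weeks parameterization is an illustrative choice:

```python
def select_templates(team, weeks):
    """Map project scope (team vs. solo, duration in weeks) to template names."""
    if team:
        # Team projects always get the full alignment set
        return ["project-kickoff", "stakeholder-map",
                "todo-structure", "risk-register"]
    templates = ["project-brief", "todo-structure"]
    if weeks > 2:
        # Longer solo projects warrant explicit risk planning
        templates.append("risk-register")
    return templates
```

The same logic drives the confirmation message shown in the Component Selection step.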
## Process
### 1. Intake
Gather initial context:
- What project are we planning?
- Check for existing brainstorming output in `docs/brainstorms/`
- If starting fresh, gather basic context first
### 2. Scope Assessment
Ask these questions (one at a time):
1. **Solo or team project?**
- Solo → lighter documentation
- Team → need alignment docs (kickoff, stakeholders)
2. **Rough duration estimate?**
- <2 weeks → skip risk register
- >2 weeks → include risk planning
3. **Known deadline or flexible?**
- Hard deadline → prioritize milestone planning
- Flexible → focus on phased approach
4. **Which PARA area does this belong to?** (optional)
- Helps categorization and later task-management integration
### 3. Component Selection
Based on scope, select appropriate templates:
```
"Based on [team project, 6 weeks], I'll include:
✓ Project Kickoff (team alignment)
✓ Stakeholder Map (communication planning)
✓ Todo Structure (task breakdown)
✓ Risk Register (duration >2 weeks)
Shall I proceed with this structure?"
```
See [references/component-guide.md](references/component-guide.md) for selection logic.
### 4. Draft Generation
For each selected template:
1. Load template from `assets/templates/`
2. Fill with project-specific content
3. Present each major section for validation
4. Adjust based on feedback
Work through templates in this order:
1. Kickoff/Brief (establishes context)
2. Stakeholders (who's involved)
3. Todos (what needs doing)
4. Risks (what could go wrong)
### 5. Output
Generate final documents:
- Create `docs/plans/<project-name>/` directory
- Write each component as separate file
- Create `index.md` linking all components
```
docs/plans/<project-name>/
├── index.md # Links to all components
├── kickoff.md # or brief.md for solo projects
├── stakeholders.md # if team project
├── tasks.md # task breakdown
├── risks.md # if >2 weeks duration
└── updates/ # for future progress updates
└── YYYY-MM-DD.md
```
### 6. Handoff (Optional)
After plan is complete, offer next steps:
> "Ready to create this project in Basecamp? I can:
> - Create todo lists from the task structure
> - Set up card table columns from phases
> - Add initial cards/todos with due dates"
This triggers the basecamp skill for execution.
## Templates
All templates in `assets/templates/`:
| Template | Purpose | When to Include |
|----------|---------|-----------------|
| project-kickoff.md | Comprehensive 8-section kickoff | Team projects |
| project-brief.md | 1-page summary | Solo or simple projects |
| stakeholder-map.md | Who needs what, when | Team projects |
| risk-register.md | Risks and mitigations | Projects >2 weeks |
| todo-structure.md | Hierarchical task breakdown | Always |
| progress-update.md | Status update template | For ongoing tracking |
## PARA Integration (Optional)
If user wants PARA integration:
1. Prompt for Area assignment (from their defined Areas)
2. Can trigger task-management skill to create Anytype entries
3. Include Area reference in kickoff/brief document
This is optional - not all users use PARA.
## Integration with Other Skills
| From | To plan-writing | Output |
|------|-----------------|--------|
| brainstorming | Decision/context | Informs kickoff scope |
| plan-writing | → basecamp | Todo lists, cards |
| plan-writing | → task-management | Anytype tasks (optional) |
## Example Workflow
```
User: "Create a project plan for launching the Q2 training program"
AI: "I'll help structure this project. First, a few questions:
Is this a solo project or will others be involved?"
User: "Team - 3 people"
AI: "Duration estimate?
a) Less than 2 weeks
b) 2-4 weeks
c) More than a month"
User: "B - about 3 weeks"
AI: "Got it. For a 3-week team project, I'll include:
✓ Project Kickoff (team alignment)
✓ Stakeholder Map (communication)
✓ Todo Structure (task breakdown)
✓ Risk Register (3 weeks warrants this)
Shall I proceed?"
User: "Yes"
AI: [Proceeds through each template, filling with project details]
```


@@ -1,89 +0,0 @@
# Progress Update: [Project Name]
**Period**: [Date] to [Date]
**Author**: [Name]
**Status**: 🟢 On Track / 🟡 At Risk / 🔴 Blocked
---
## Summary
[2-3 sentence executive summary: Where are we, what's the headline?]
**Overall Progress**: [X]% complete
---
## Completed This Period
- [x] [Task/milestone completed] - [Impact or note]
- [x] [Task completed]
- [x] [Task completed]
**Highlights**:
- [Notable achievement or win]
---
## In Progress
| Task | Owner | Progress | Expected Complete |
|------|-------|----------|-------------------|
| [Task 1] | [Name] | [X]% | [Date] |
| [Task 2] | [Name] | [X]% | [Date] |
| [Task 3] | [Name] | [X]% | [Date] |
---
## Blockers & Risks
### Active Blockers
| Blocker | Impact | Owner | Action Needed | ETA |
|---------|--------|-------|---------------|-----|
| [Blocker 1] | [High/Med/Low] | [Name] | [What's needed] | [Date] |
### Emerging Risks
| Risk | Probability | Mitigation |
|------|-------------|------------|
| [Risk 1] | [H/M/L] | [Action] |
---
## Next Period Plan
**Focus**: [Main focus for next period]
| Priority | Task | Owner | Target Date |
|----------|------|-------|-------------|
| 1 | [Highest priority task] | [Name] | [Date] |
| 2 | [Second priority] | [Name] | [Date] |
| 3 | [Third priority] | [Name] | [Date] |
---
## Metrics
| Metric | Target | Current | Trend |
|--------|--------|---------|-------|
| [Metric 1] | [X] | [Y] | ↑/↓/→ |
| [Metric 2] | [X] | [Y] | ↑/↓/→ |
| Tasks Complete | [X] | [Y] | ↑ |
---
## Decisions Needed
- [ ] [Decision 1]: [Options and recommendation] - Need by: [Date]
- [ ] [Decision 2]: [Context] - Need by: [Date]
---
## Notes / Context
[Any additional context, changes in scope, stakeholder feedback, etc.]
---
*Next update: [Date]*


@@ -1,48 +0,0 @@
# Project Brief: [Project Name]
**Owner**: [Name]
**Timeline**: [Start Date] → [Target Date]
**Area**: [PARA Area, if applicable]
## Goal
[One clear sentence: What will be true when this project is complete?]
## Success Criteria
How we'll know it's done:
- [ ] [Criterion 1 - specific and measurable]
- [ ] [Criterion 2]
- [ ] [Criterion 3]
## Scope
**Included**:
- [Deliverable 1]
- [Deliverable 2]
**Not Included**:
- [Exclusion 1]
## Key Milestones
| Milestone | Target Date | Status |
|-----------|-------------|--------|
| [Milestone 1] | [Date] | [ ] |
| [Milestone 2] | [Date] | [ ] |
| [Complete] | [Date] | [ ] |
## Initial Tasks
1. [ ] [First task to start] - Due: [Date]
2. [ ] [Second task]
3. [ ] [Third task]
## Notes
[Any context, constraints, or references worth capturing]
---
*Created: [Date]*


@@ -1,106 +0,0 @@
# Project Kickoff: [Project Name]
## 1. Project Essentials
| Field | Value |
|-------|-------|
| **Project Name** | [Name] |
| **Owner** | [Name] |
| **Start Date** | [YYYY-MM-DD] |
| **Target Completion** | [YYYY-MM-DD] |
| **PARA Area** | [Area, if applicable] |
### Overview
[2-3 sentence description of what this project will accomplish and why it matters.]
## 2. Goals and Success Criteria
**Primary Goal**: [One sentence describing the end state - what does "done" look like?]
**Success Criteria**:
- [ ] [Measurable criterion 1]
- [ ] [Measurable criterion 2]
- [ ] [Measurable criterion 3]
**Out of Scope** (explicitly):
- [Item that might be assumed but is NOT included]
- [Another exclusion]
## 3. Stakeholders
| Role | Person | Involvement Level |
|------|--------|-------------------|
| Project Owner | [Name] | High - decisions |
| Core Team | [Names] | High - execution |
| Informed | [Names] | Low - updates only |
| Approver | [Name, if any] | Medium - sign-off |
## 4. Timeline and Milestones
| Milestone | Target Date | Dependencies | Owner |
|-----------|-------------|--------------|-------|
| [Milestone 1] | [Date] | None | [Who] |
| [Milestone 2] | [Date] | Milestone 1 | [Who] |
| [Milestone 3] | [Date] | Milestone 2 | [Who] |
| **Project Complete** | [Date] | All above | [Owner] |
### Key Dates
- **Kickoff**: [Date]
- **First Review**: [Date]
- **Final Deadline**: [Date]
## 5. Scope
### In Scope
- [Deliverable 1]: [Brief description]
- [Deliverable 2]: [Brief description]
- [Deliverable 3]: [Brief description]
### Out of Scope
- [Explicitly excluded item 1]
- [Explicitly excluded item 2]
### Assumptions
- [Assumption 1 - e.g., "Budget approved"]
- [Assumption 2 - e.g., "Team available full-time"]
## 6. Risks
| Risk | Probability | Impact | Mitigation | Owner |
|------|-------------|--------|------------|-------|
| [Risk 1] | H/M/L | H/M/L | [Plan] | [Who] |
| [Risk 2] | H/M/L | H/M/L | [Plan] | [Who] |
*See detailed risk register if needed: [link to risks.md]*
## 7. Communication Plan
| What | Audience | Frequency | Channel | Owner |
|------|----------|-----------|---------|-------|
| Status Update | All stakeholders | Weekly | [Email/Basecamp] | [Who] |
| Team Sync | Core team | [Daily/2x week] | [Meeting/Slack] | [Who] |
| Milestone Review | Approvers | At milestone | [Meeting] | [Who] |
### Escalation Path
1. First: [Team lead/Owner]
2. Then: [Manager/Sponsor]
3. Finally: [Executive, if applicable]
## 8. Next Steps
Immediate actions to kick off the project:
- [ ] [Action 1] - @[owner] - Due: [date]
- [ ] [Action 2] - @[owner] - Due: [date]
- [ ] [Action 3] - @[owner] - Due: [date]
---
*Document created: [Date]*
*Last updated: [Date]*


@@ -1,104 +0,0 @@
# Risk Register: [Project Name]
## Risk Summary
| ID | Risk | Probability | Impact | Risk Score | Status |
|----|------|-------------|--------|------------|--------|
| R1 | [Brief risk name] | H/M/L | H/M/L | [H/M/L] | Open |
| R2 | [Brief risk name] | H/M/L | H/M/L | [H/M/L] | Open |
| R3 | [Brief risk name] | H/M/L | H/M/L | [H/M/L] | Open |
**Risk Score**: Probability × Impact (H×H=Critical, H×M or M×H=High, M×M=Medium, others=Low)
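The scoring rule above is symmetric in probability and impact, so it can be expressed as a small lookup. A sketch; the function name and H/M/L string inputs are illustrative:

```python
def risk_score(probability, impact):
    """Combine H/M/L probability and impact per the rule above."""
    # Sorting makes the rule order-independent: H x M and M x H score the same
    pair = tuple(sorted((probability.upper(), impact.upper())))
    if pair == ("H", "H"):
        return "Critical"
    if pair == ("H", "M"):
        return "High"
    if pair == ("M", "M"):
        return "Medium"
    return "Low"
```

Note that under "others=Low", even a high-probability, low-impact risk scores Low.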
---
## Detailed Risk Analysis
### R1: [Risk Name]
| Aspect | Detail |
|--------|--------|
| **Description** | [What could go wrong?] |
| **Probability** | High / Medium / Low |
| **Impact** | High / Medium / Low |
| **Category** | Technical / Resource / External / Schedule / Budget |
| **Trigger** | [What would indicate this risk is materializing?] |
**Mitigation Plan**:
- [Action 1 to reduce probability or impact]
- [Action 2]
**Contingency Plan** (if risk occurs):
- [Fallback action 1]
- [Fallback action 2]
**Owner**: [Name]
**Review Date**: [Date]
---
### R2: [Risk Name]
| Aspect | Detail |
|--------|--------|
| **Description** | [What could go wrong?] |
| **Probability** | High / Medium / Low |
| **Impact** | High / Medium / Low |
| **Category** | Technical / Resource / External / Schedule / Budget |
| **Trigger** | [What would indicate this risk is materializing?] |
**Mitigation Plan**:
- [Action 1]
- [Action 2]
**Contingency Plan**:
- [Fallback action]
**Owner**: [Name]
**Review Date**: [Date]
---
### R3: [Risk Name]
| Aspect | Detail |
|--------|--------|
| **Description** | [What could go wrong?] |
| **Probability** | High / Medium / Low |
| **Impact** | High / Medium / Low |
| **Category** | Technical / Resource / External / Schedule / Budget |
| **Trigger** | [What would indicate this risk is materializing?] |
**Mitigation Plan**:
- [Action 1]
- [Action 2]
**Contingency Plan**:
- [Fallback action]
**Owner**: [Name]
**Review Date**: [Date]
---
## Risk Categories
| Category | Examples |
|----------|----------|
| **Technical** | Technology doesn't work, integration issues, performance |
| **Resource** | Key person unavailable, skill gaps, overcommitment |
| **External** | Vendor delays, regulatory changes, dependencies |
| **Schedule** | Delays, unrealistic timeline, competing priorities |
| **Budget** | Cost overruns, funding cuts, unexpected expenses |
## Review Schedule
- **Weekly**: Quick scan of high risks
- **Bi-weekly**: Full risk register review
- **At milestones**: Comprehensive reassessment
---
*Created: [Date]*
*Last reviewed: [Date]*
*Next review: [Date]*


@@ -1,72 +0,0 @@
# Stakeholder Map: [Project Name]
## Stakeholder Matrix
| Stakeholder | Role | Interest Level | Influence | Information Needs |
|-------------|------|----------------|-----------|-------------------|
| [Name/Group] | [Role] | High/Medium/Low | High/Medium/Low | [What they need to know] |
| [Name/Group] | [Role] | High/Medium/Low | High/Medium/Low | [What they need to know] |
| [Name/Group] | [Role] | High/Medium/Low | High/Medium/Low | [What they need to know] |
## Communication Plan by Stakeholder
### [Stakeholder 1: Name/Role]
| Aspect | Detail |
|--------|--------|
| **Needs** | [What information they need] |
| **Frequency** | [How often: daily, weekly, at milestones] |
| **Channel** | [Email, Basecamp, meeting, Slack] |
| **Format** | [Brief update, detailed report, presentation] |
| **Owner** | [Who communicates with them] |
### [Stakeholder 2: Name/Role]
| Aspect | Detail |
|--------|--------|
| **Needs** | [What information they need] |
| **Frequency** | [How often] |
| **Channel** | [Preferred channel] |
| **Format** | [Format preference] |
| **Owner** | [Who communicates] |
### [Stakeholder 3: Name/Role]
| Aspect | Detail |
|--------|--------|
| **Needs** | [What information they need] |
| **Frequency** | [How often] |
| **Channel** | [Preferred channel] |
| **Format** | [Format preference] |
| **Owner** | [Who communicates] |
## RACI Matrix
| Decision/Task | [Person 1] | [Person 2] | [Person 3] | [Person 4] |
|---------------|------------|------------|------------|------------|
| [Decision 1] | R | A | C | I |
| [Decision 2] | I | R | A | C |
| [Task 1] | R | I | I | A |
**Legend**:
- **R** = Responsible (does the work)
- **A** = Accountable (final decision maker)
- **C** = Consulted (input required)
- **I** = Informed (kept updated)
## Escalation Path
1. **First Level**: [Name/Role] - for [types of issues]
2. **Second Level**: [Name/Role] - if unresolved in [timeframe]
3. **Executive**: [Name/Role] - for [critical blockers only]
## Notes
- [Any stakeholder-specific considerations]
- [Political or relationship notes]
- [Historical context if relevant]
---
*Created: [Date]*
*Last updated: [Date]*

View File

@@ -1,94 +0,0 @@
# Task Structure: [Project Name]
## Overview
| Metric | Value |
|--------|-------|
| **Total Tasks** | [X] |
| **Phases** | [Y] |
| **Timeline** | [Start] → [End] |
---
## Phase 1: [Phase Name]
**Target**: [Date]
**Owner**: [Name]
| # | Task | Owner | Estimate | Due | Depends On | Status |
|---|------|-------|----------|-----|------------|--------|
| 1.1 | [Task description] | [Name] | [Xh/Xd] | [Date] | - | [ ] |
| 1.2 | [Task description] | [Name] | [Xh/Xd] | [Date] | 1.1 | [ ] |
| 1.3 | [Task description] | [Name] | [Xh/Xd] | [Date] | - | [ ] |
**Phase Deliverable**: [What's complete when this phase is done]
---
## Phase 2: [Phase Name]
**Target**: [Date]
**Owner**: [Name]
| # | Task | Owner | Estimate | Due | Depends On | Status |
|---|------|-------|----------|-----|------------|--------|
| 2.1 | [Task description] | [Name] | [Xh/Xd] | [Date] | Phase 1 | [ ] |
| 2.2 | [Task description] | [Name] | [Xh/Xd] | [Date] | 2.1 | [ ] |
| 2.3 | [Task description] | [Name] | [Xh/Xd] | [Date] | - | [ ] |
**Phase Deliverable**: [What's complete when this phase is done]
---
## Phase 3: [Phase Name]
**Target**: [Date]
**Owner**: [Name]
| # | Task | Owner | Estimate | Due | Depends On | Status |
|---|------|-------|----------|-----|------------|--------|
| 3.1 | [Task description] | [Name] | [Xh/Xd] | [Date] | Phase 2 | [ ] |
| 3.2 | [Task description] | [Name] | [Xh/Xd] | [Date] | 3.1 | [ ] |
| 3.3 | [Task description] | [Name] | [Xh/Xd] | [Date] | 3.1 | [ ] |
**Phase Deliverable**: [What's complete when this phase is done]
---
## Unphased / Ongoing Tasks
| # | Task | Owner | Frequency | Notes |
|---|------|-------|-----------|-------|
| O.1 | [Recurring task] | [Name] | Weekly | [Notes] |
| O.2 | [Monitoring task] | [Name] | Daily | [Notes] |
---
## Dependencies Summary
```
Phase 1 ──────► Phase 2 ──────► Phase 3
   │               │
   ├── 1.1 ► 1.2   ├── 2.1 ► 2.2
   └── 1.3         └── 2.3 (parallel)
```
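The dependency chains above can be checked mechanically. A minimal sketch using Python's standard `graphlib`, where the phase-level dependencies are expanded into task-level edges (an assumption about intent; the IDs are the template's placeholders):

```python
from graphlib import TopologicalSorter

# Edges: task -> the tasks it depends on (from the "Depends On" column).
# "Phase 2" / "Phase 3" dependencies are expanded to every task in the
# preceding phase.
deps = {
    "1.2": {"1.1"},
    "2.1": {"1.1", "1.2", "1.3"},
    "2.2": {"2.1"},
    "3.1": {"2.1", "2.2", "2.3"},
    "3.2": {"3.1"},
    "3.3": {"3.1"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # one valid execution order; independent tasks may interleave
```

`TopologicalSorter` also raises `CycleError` if someone adds a circular dependency, which makes it a cheap sanity check when the task table grows.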
## Milestone Checklist
- [ ] **Milestone 1**: [Name] - [Date]
- [ ] [Required task 1.1]
- [ ] [Required task 1.2]
- [ ] **Milestone 2**: [Name] - [Date]
- [ ] [Required task 2.1]
- [ ] [Required task 2.2]
- [ ] **Project Complete** - [Date]
- [ ] All phases complete
- [ ] Success criteria met
- [ ] Handoff complete
---
*Created: [Date]*
*Last updated: [Date]*

View File

@@ -1,117 +0,0 @@
# Component Selection Guide
Decision matrix for which templates to include based on project characteristics.
## Decision Matrix
| Question | If Yes | If No |
|----------|--------|-------|
| Team project (>1 person)? | +kickoff, +stakeholders | Use brief instead of kickoff |
| Duration >2 weeks? | +risk-register | Skip risks |
| External stakeholders? | +stakeholders (detailed) | Stakeholders optional |
| Complex dependencies? | +detailed todos with deps | Simple todo list |
| Ongoing tracking needed? | +progress-update template | One-time plan |
## Quick Selection by Project Type
### Solo, Short (<2 weeks)
```
✓ project-brief.md
✓ todo-structure.md
```
### Solo, Medium (2-4 weeks)
```
✓ project-brief.md
✓ todo-structure.md
✓ risk-register.md
```
### Solo, Long (>4 weeks)
```
✓ project-brief.md (or kickoff for complex)
✓ todo-structure.md
✓ risk-register.md
✓ progress-update.md (for self-tracking)
```
### Team, Any Duration
```
✓ project-kickoff.md (always for team alignment)
✓ stakeholder-map.md
✓ todo-structure.md
✓ risk-register.md (if >2 weeks)
✓ progress-update.md (for status updates)
```
## Template Purposes
### project-kickoff.md
Full 8-section document for team alignment:
1. Project essentials (name, owner, dates)
2. Goals and success criteria
3. Stakeholders overview
4. Timeline and milestones
5. Scope (in/out)
6. Risks overview
7. Communication plan
8. Next steps
**Use when**: Multiple people need alignment on what/why/how.
### project-brief.md
1-page summary for simpler projects:
- Goal statement
- Success criteria
- Key milestones
- Initial tasks
**Use when**: Solo project or simple scope that doesn't need formal kickoff.
### stakeholder-map.md
Communication matrix:
- Who needs information
- What they need to know
- How often
- Which channel
**Use when**: Team projects with multiple stakeholders needing different information.
### risk-register.md
Risk tracking table:
- Risk description
- Probability (H/M/L)
- Impact (H/M/L)
- Mitigation plan
- Owner
**Use when**: Projects >2 weeks or high-stakes projects of any duration.
### todo-structure.md
Hierarchical task breakdown:
- Phases or milestones
- Tasks under each phase
- Subtasks if needed
- Metadata: owner, estimate, due date, dependencies
**Use when**: Always. Every project needs task breakdown.
### progress-update.md
Status reporting template:
- Completed since last update
- In progress
- Blockers
- Next steps
- Metrics/progress %
**Use when**: Projects needing regular status updates (weekly, sprint-based, etc.).
## Customization Notes
Templates are starting points. Common customizations:
- Remove sections that don't apply
- Add project-specific sections
- Adjust detail level based on audience
- Combine templates for simpler output
The goal is useful documentation, not template compliance.

View File

@@ -1,54 +0,0 @@
---
name: research
description: "Research and investigation workflows. Use when: (1) researching technologies or tools, (2) investigating best practices, (3) comparing solutions, (4) gathering information for decisions, (5) deep-diving into topics. Triggers: research, investigate, explore, compare, learn about, what are best practices for, how does X work."
compatibility: opencode
---
# Research
Research and investigation workflows for informed decision-making.
## Status: Stub
This skill is a placeholder for future development; the core functionality to be added is outlined below.
## Planned Features
### Investigation Workflow
- Multi-source research (web, docs, code)
- Source credibility assessment
- Summary with drill-down capability
### Technology Evaluation
- Feature comparison matrices
- Pros/cons analysis
- Fit-for-purpose assessment
### Best Practices Discovery
- Industry standards lookup
- Implementation patterns
- Common pitfalls
### Learning Path Generation
- Topic breakdown
- Resource recommendations
- Progress tracking
## Integration Points
- **Anytype**: Save research findings to Resources
- **Web Search**: Primary research source
- **librarian agent**: External documentation lookup
## Quick Commands (Future)
| Command | Description |
|---------|-------------|
| `research [topic]` | Start research session |
| `compare [A] vs [B]` | Feature comparison |
| `best practices [topic]` | Lookup standards |
| `learn [topic]` | Generate learning path |
## Notes
Expand this skill based on actual research patterns that emerge from usage.

View File

@@ -1,246 +0,0 @@
---
name: task-management
description: "PARA-based task and project management with Anytype integration. Use when: (1) creating/managing tasks or projects, (2) daily or weekly reviews, (3) prioritizing work, (4) capturing action items, (5) planning sprints or focus blocks, (6) asking 'what should I work on?'. Triggers: task, todo, project, priority, review, focus, plan, backlog, inbox, capture."
compatibility: opencode
---
# Task Management
PARA-based productivity system integrated with Anytype for Sascha's personal and professional task management.
## Quick Reference
| Action | Command Pattern |
|--------|-----------------|
| Quick capture | "Capture: [item]" or "Add to inbox: [item]" |
| Create task | "Task: [title] for [area/project]" |
| Create project | "New project: [title] in [area]" |
| Daily review | "Daily review" or "What's on for today?" |
| Weekly review | "Weekly review" or "Week planning" |
| Focus check | "What should I focus on?" |
| Context batch | "What [area] tasks can I batch?" |
## Anytype Configuration
**Space**: Chiron (create if not exists)
### Types
| Type | PARA Category | Purpose |
|------|---------------|---------|
| `project` | Projects | Active outcomes with deadlines |
| `area` | Areas | Ongoing responsibilities |
| `resource` | Resources | Reference materials |
| `task` | (within Projects/Areas) | Individual action items |
| `note` | (Inbox/Resources) | Quick captures, meeting notes |
### Key Properties
| Property | Type | Used On | Values |
|----------|------|---------|--------|
| `status` | select | Task, Project | `inbox`, `next`, `waiting`, `scheduled`, `done` |
| `priority` | select | Task, Project | `critical`, `high`, `medium`, `low` |
| `area` | relation | Task, Project | Links to Area objects |
| `due_date` | date | Task, Project | Deadline |
| `energy` | select | Task | `high`, `medium`, `low` |
| `context` | multi_select | Task | `deep-work`, `admin`, `calls`, `errands` |
## Core Workflows
### 1. Quick Capture
Minimal friction inbox capture. Process later during review.
```
User: "Capture: Review Q1 budget proposal"
Action:
1. Create note in Anytype with status=inbox
2. Confirm: "Captured to inbox. 12 items pending processing."
```
### 2. Create Task
Full task with metadata for proper routing.
```
User: "Task: Prepare board presentation for CTO Leadership, high priority, due Friday"
Action:
1. Find or create "CTO Leadership" area in Anytype
2. Create task object:
- name: "Prepare board presentation"
- area: [CTO Leadership object ID]
- priority: high
- due_date: [this Friday]
- status: next
3. Confirm with task details
```
### 3. Create Project
Projects are outcomes with multiple tasks and a completion state.
```
User: "New project: Launch NixOS Flakes Course in m3ta.dev area"
Action:
1. Find "m3ta.dev" area
2. Create project object:
- name: "Launch NixOS Flakes Course"
- area: [m3ta.dev object ID]
- status: active
3. Prompt: "What are the key milestones or first tasks?"
4. Create initial tasks if provided
```
### 4. Daily Review (Evening)
Run each evening to close the day and prep tomorrow.
**Workflow** - See [references/review-templates.md](references/review-templates.md) for full template.
Steps:
1. **Fetch today's completed** - Celebrate wins
2. **Fetch incomplete tasks** - Reschedule or note blockers
3. **Check inbox** - Quick process or defer to weekly
4. **Tomorrow's priorities** - Identify top 3 for morning focus
5. **Send summary via ntfy** (if configured)
```
User: "Daily review"
Output format:
## Daily Review - [Date]
### Completed Today
- [x] Task 1
- [x] Task 2
### Carried Forward
- [ ] Task 3 (rescheduled to tomorrow)
- [ ] Task 4 (blocked: waiting on X)
### Inbox Items: 5 pending
### Tomorrow's Top 3
1. [Highest impact task]
2. [Second priority]
3. [Third priority]
```
### 5. Weekly Review
Comprehensive PARA review. See [references/para-methodology.md](references/para-methodology.md).
**Workflow**:
1. **Get Clear** - Process inbox to zero
2. **Get Current** - Review each Area's active projects
3. **Get Creative** - Identify new projects or opportunities
4. **Plan Week** - Set weekly outcomes and time blocks
```
User: "Weekly review"
Process:
1. List all inbox items -> prompt to process each
2. For each Area, show active projects and their status
3. Flag stalled projects (no activity 7+ days)
4. Identify completed projects -> move to archive
5. Prompt for new commitments
6. Output weekly plan
```
### 6. Priority Focus
Impact-first prioritization using Sascha's preferences.
```
User: "What should I focus on?"
Logic:
1. Fetch tasks where status=next, sorted by:
- priority (critical > high > medium > low)
- due_date (sooner first)
- energy match (if time of day known)
2. Return top 3-5 with rationale
3. Consider context batching opportunities
Output:
## Focus Recommendations
**Top Priority**: [Task]
- Why: [Impact statement]
- Area: [Area name]
- Due: [Date or "no deadline"]
**Also Important**:
1. [Task 2] - [brief why]
2. [Task 3] - [brief why]
**Batching Opportunity**: You have 3 [context] tasks that could be done together.
```
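The sorting step in the logic above can be sketched in plain Python (illustrative only; the `Task` fields mirror the properties table, and the sample tasks are made up):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Lower rank sorts first: critical > high > medium > low.
PRIORITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Task:
    name: str
    priority: str             # critical | high | medium | low
    due_date: Optional[date]  # None means "no deadline"

def focus_order(tasks):
    """Order 'next' tasks by priority, then by sooner due date.

    Tasks without a deadline sort after dated tasks of equal priority.
    """
    return sorted(
        tasks,
        key=lambda t: (
            PRIORITY_RANK.get(t.priority, len(PRIORITY_RANK)),
            t.due_date or date.max,
        ),
    )

tasks = [
    Task("Update wiki", "low", None),
    Task("Board deck", "high", date(2025, 1, 10)),
    Task("Prod incident review", "critical", date(2025, 1, 8)),
]
print([t.name for t in focus_order(tasks)])
# ['Prod incident review', 'Board deck', 'Update wiki']
```

Energy matching (step 1's third criterion) would add one more key component once the current time of day is known.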
### 7. Context Batching
Group similar tasks for focused execution.
```
User: "What admin tasks can I batch?"
Action:
1. Fetch tasks where context contains "admin"
2. Group by area
3. Estimate total time
4. Suggest execution order
Output:
## Admin Task Batch
**Estimated time**: ~45 minutes
1. [ ] Reply to vendor email (CTO Leadership) - 10min
2. [ ] Approve expense reports (CTO Leadership) - 15min
3. [ ] Update team wiki (CTO Leadership) - 20min
Ready to start? I can track completion.
```
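The grouping and time-estimate steps can be sketched as follows (the task tuples are illustrative stand-ins for Anytype objects):

```python
from collections import defaultdict

# Each task: (name, area, context tags, estimated minutes) -- sample data.
tasks = [
    ("Reply to vendor email", "CTO Leadership", {"admin"}, 10),
    ("Approve expense reports", "CTO Leadership", {"admin"}, 15),
    ("Record intro clip", "YouTube @m3tam3re", {"deep-work"}, 45),
    ("Update team wiki", "CTO Leadership", {"admin"}, 20),
]

def batch_by_context(tasks, context):
    """Collect tasks carrying a context tag, grouped by area, with a total estimate."""
    matched = [t for t in tasks if context in t[2]]
    groups = defaultdict(list)
    for t in matched:
        groups[t[1]].append(t)
    total_minutes = sum(t[3] for t in matched)
    return groups, total_minutes

groups, total_minutes = batch_by_context(tasks, "admin")
print(f"Admin batch: ~{total_minutes} minutes")
for area, items in groups.items():
    for name, _, _, mins in items:
        print(f"- [ ] {name} ({area}) - {mins}min")
```

Suggesting an execution order could then be as simple as sorting each group shortest-first to build momentum, or longest-first to front-load the hardest item.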
## Notification Integration (ntfy)
Send notifications for:
- Daily review summary (evening)
- Overdue task alerts
- Weekly review reminder (Sunday evening)
Format for ntfy:
```bash
curl -d "Daily Review: 5 completed, 3 for tomorrow. Top priority: [task]" \
ntfy.sh/sascha-chiron
```
Configure topic in environment or Anytype settings.
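Assembling the daily-summary body before sending can be sketched like this (the field names and topic are assumptions, matching the curl example above):

```python
def daily_summary(completed_count, top3, inbox_count):
    """Build the ntfy message body for the daily review summary."""
    lines = [f"Daily Review: {completed_count} completed, {len(top3)} for tomorrow."]
    if top3:
        lines.append(f"Top priority: {top3[0]}")
    lines.append(f"Inbox: {inbox_count} items pending")
    return "\n".join(lines)

msg = daily_summary(5, ["Board deck", "1:1 prep", "Wiki update"], 3)
print(msg)
# The result is what gets passed to curl: curl -d "$msg" ntfy.sh/<topic>
```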
## Anytype API Patterns
See [references/anytype-workflows.md](references/anytype-workflows.md) for:
- Space and type setup
- CRUD operations for tasks/projects
- Query patterns for reviews
- Batch operations
## PARA Methodology Reference
See [references/para-methodology.md](references/para-methodology.md) for:
- PARA category definitions
- When to use Projects vs Areas
- Archive criteria
- Maintenance rhythms
## Initial Setup
See [references/anytype-setup.md](references/anytype-setup.md) for:
- Step-by-step Anytype space creation
- Type and property configuration
- Initial Area objects to create
- View setup recommendations

View File

@@ -1,176 +0,0 @@
# Anytype Space Setup Guide
Manual setup for the Chiron space in Anytype.
## Step 1: Create Space
1. Open Anytype desktop app
2. Click **+** to create new space
3. Name: **Chiron**
4. Description: *Personal AI Assistant workspace using PARA methodology*
## Step 2: Create Types
Create these object types in the Chiron space:
### Area Type
- **Name**: Area
- **Plural**: Areas
- **Layout**: Basic
- **Icon**: Briefcase (blue)
### Project Type
- **Name**: Project
- **Plural**: Projects
- **Layout**: Basic
- **Icon**: Rocket (purple)
### Task Type
- **Name**: Task
- **Plural**: Tasks
- **Layout**: Action (checkbox)
- **Icon**: Checkbox (blue)
### Resource Type
- **Name**: Resource
- **Plural**: Resources
- **Layout**: Basic
- **Icon**: Book (teal)
## Step 3: Create Properties
Add these properties (relations) to the space:
### Status (Select)
| Tag | Color |
|-----|-------|
| Inbox | Grey |
| Next | Blue |
| Waiting | Yellow |
| Scheduled | Purple |
| Done | Lime |
### Priority (Select)
| Tag | Color |
|-----|-------|
| Critical | Red |
| High | Orange |
| Medium | Yellow |
| Low | Grey |
### Energy (Select)
| Tag | Color |
|-----|-------|
| High | Red |
| Medium | Yellow |
| Low | Blue |
### Context (Multi-select)
| Tag | Color |
|-----|-------|
| Deep Work | Purple |
| Admin | Grey |
| Calls | Blue |
| Errands | Teal |
| Quick Wins | Lime |
### Other Properties
- **Area** (Relation → Area type)
- **Project** (Relation → Project type)
- **Due Date** (Date)
- **Outcome** (Text)
- **Description** (Text)
## Step 4: Link Properties to Types
### Task Type Properties
- Status
- Priority
- Energy
- Context
- Area (relation)
- Project (relation)
- Due Date
### Project Type Properties
- Status
- Priority
- Area (relation)
- Due Date
- Outcome
### Area Type Properties
- Description
## Step 5: Create Initial Areas
Create these Area objects:
1. **CTO Leadership**
- Description: Team management, technical strategy, architecture decisions, hiring
2. **m3ta.dev**
- Description: Content creation, courses, coaching, tutoring programs
3. **YouTube @m3tam3re**
- Description: Technical exploration videos, tutorials, self-hosting guides
4. **Technical Exploration**
- Description: NixOS, self-hosting, AI agents, automation experiments
5. **Personal Development**
- Description: Learning, skills growth, reading
6. **Health & Wellness**
- Description: Exercise, rest, sustainability
7. **Family**
- Description: Quality time, responsibilities
## Step 6: Create Views (Optional)
Create these Set views for quick access:
### Inbox View
- Filter: Status = Inbox
- Sort: Created date (newest)
### Today's Focus
- Filter: Status = Next AND Due Date <= Today
- Sort: Priority (Critical first)
### By Area
- Group by: Area relation
- Filter: Status != Done
### Weekly Review
- Filter: Status != Done
- Group by: Area
- Sort: Due Date
## Step 7: API Setup (For Automation)
To enable API access for Chiron agent:
1. Go to Anytype settings
2. Find API/Integration settings
3. Generate API key
4. Configure in your environment or MCP settings
Without API access, use manual workflows or n8n integration.
## Verification
After setup, you should have:
- [ ] Chiron space created
- [ ] 4 custom types (Area, Project, Task, Resource)
- [ ] 4 select properties (Status, Priority, Energy, Context)
- [ ] 3 relation properties (Area, Project, Due Date)
- [ ] 7 Area objects created
- [ ] At least one view configured
## Notes
- The Note type is built-in, use it for quick captures
- Archive can be a status tag or separate type (your preference)
- Adjust colors and icons to your preference

View File

@@ -1,346 +0,0 @@
# Anytype API Workflows
API patterns for task management operations in the Chiron space.
## Setup
### Space Configuration
**Space Name**: Chiron
**Space ID**: Retrieve via `Anytype_API-list-spaces` after creation
```
# List spaces to find Chiron space ID
Anytype_API-list-spaces
# Store space_id for subsequent calls
SPACE_ID="<chiron-space-id>"
```
### Required Types
Create these types if they don't exist:
#### Area Type
```
Anytype_API-create-type
space_id: SPACE_ID
name: "Area"
plural_name: "Areas"
layout: "basic"
key: "area"
properties:
- name: "Description", key: "description", format: "text"
- name: "Review Frequency", key: "review_frequency", format: "select"
```
#### Project Type
```
Anytype_API-create-type
space_id: SPACE_ID
name: "Project"
plural_name: "Projects"
layout: "basic"
key: "project"
properties:
- name: "Status", key: "status", format: "select"
- name: "Priority", key: "priority", format: "select"
- name: "Area", key: "area", format: "objects"
- name: "Due Date", key: "due_date", format: "date"
- name: "Outcome", key: "outcome", format: "text"
```
#### Task Type
```
Anytype_API-create-type
space_id: SPACE_ID
name: "Task"
plural_name: "Tasks"
layout: "action"
key: "task"
properties:
- name: "Status", key: "status", format: "select"
- name: "Priority", key: "priority", format: "select"
- name: "Area", key: "area", format: "objects"
- name: "Project", key: "project", format: "objects"
- name: "Due Date", key: "due_date", format: "date"
- name: "Energy", key: "energy", format: "select"
- name: "Context", key: "context", format: "multi_select"
```
### Required Properties with Tags
#### Status Property Tags
```
Anytype_API-create-property
space_id: SPACE_ID
name: "Status"
key: "status"
format: "select"
tags:
- name: "Inbox", color: "grey"
- name: "Next", color: "blue"
- name: "Waiting", color: "yellow"
- name: "Scheduled", color: "purple"
- name: "Done", color: "lime"
```
#### Priority Property Tags
```
Anytype_API-create-property
space_id: SPACE_ID
name: "Priority"
key: "priority"
format: "select"
tags:
- name: "Critical", color: "red"
- name: "High", color: "orange"
- name: "Medium", color: "yellow"
- name: "Low", color: "grey"
```
#### Energy Property Tags
```
Anytype_API-create-property
space_id: SPACE_ID
name: "Energy"
key: "energy"
format: "select"
tags:
- name: "High", color: "red"
- name: "Medium", color: "yellow"
- name: "Low", color: "blue"
```
#### Context Property Tags
```
Anytype_API-create-property
space_id: SPACE_ID
name: "Context"
key: "context"
format: "multi_select"
tags:
- name: "Deep Work", color: "purple"
- name: "Admin", color: "grey"
- name: "Calls", color: "blue"
- name: "Errands", color: "teal"
- name: "Quick Wins", color: "lime"
```
## CRUD Operations
### Create Task
```
Anytype_API-create-object
space_id: SPACE_ID
type_key: "task"
name: "Task title here"
body: "Optional task description or notes"
properties:
- key: "status", select: "<status_tag_id>"
- key: "priority", select: "<priority_tag_id>"
- key: "area", objects: ["<area_object_id>"]
- key: "due_date", date: "2025-01-10"
icon:
format: "icon"
name: "checkbox"
color: "blue"
```
### Create Project
```
Anytype_API-create-object
space_id: SPACE_ID
type_key: "project"
name: "Project title"
body: "Project description and goals"
properties:
- key: "status", select: "<active_tag_id>"
- key: "area", objects: ["<area_object_id>"]
- key: "outcome", text: "What done looks like"
icon:
format: "icon"
name: "rocket"
color: "purple"
```
### Create Area
```
Anytype_API-create-object
space_id: SPACE_ID
type_key: "area"
name: "CTO Leadership"
body: "Team management, technical strategy, architecture decisions"
properties:
- key: "description", text: "Standards: Team health, technical excellence, strategic alignment"
- key: "review_frequency", select: "<weekly_tag_id>"
icon:
format: "icon"
name: "briefcase"
color: "blue"
```
### Quick Capture (Inbox)
```
Anytype_API-create-object
space_id: SPACE_ID
type_key: "note"
name: "Quick capture content here"
properties:
- key: "status", select: "<inbox_tag_id>"
icon:
format: "icon"
name: "mail"
color: "grey"
```
### Update Task Status
```
Anytype_API-update-object
space_id: SPACE_ID
object_id: "<task_object_id>"
properties:
- key: "status", select: "<done_tag_id>"
```
## Query Patterns
### Get All Tasks for Today
```
Anytype_API-search-space
space_id: SPACE_ID
types: ["task"]
filters:
operator: "and"
conditions:
- property_key: "status"
select: "<next_tag_id>"
- property_key: "due_date"
date: "2025-01-05"
condition: "le"
```
### Get Inbox Items
```
Anytype_API-search-space
space_id: SPACE_ID
filters:
operator: "and"
conditions:
- property_key: "status"
select: "<inbox_tag_id>"
sort:
property_key: "created_date"
direction: "desc"
```
### Get Tasks by Area
```
Anytype_API-search-space
space_id: SPACE_ID
types: ["task"]
filters:
operator: "and"
conditions:
- property_key: "area"
objects: ["<area_object_id>"]
- property_key: "status"
condition: "nempty"
```
### Get Active Projects
```
Anytype_API-search-space
space_id: SPACE_ID
types: ["project"]
filters:
conditions:
- property_key: "status"
select: "<active_tag_id>"
```
### Get Overdue Tasks
```
Anytype_API-search-space
space_id: SPACE_ID
types: ["task"]
filters:
operator: "and"
conditions:
- property_key: "due_date"
date: "<today>"
condition: "lt"
- property_key: "status"
condition: "nempty"
```
### Get Tasks by Context
```
Anytype_API-search-space
space_id: SPACE_ID
types: ["task"]
filters:
conditions:
- property_key: "context"
multi_select: ["<deep_work_tag_id>"]
- property_key: "status"
select: "<next_tag_id>"
```
## Batch Operations
### Complete Multiple Tasks
```python
# Pseudocode for batch completion
task_ids = ["id1", "id2", "id3"]
done_tag_id = "<done_tag_id>"
for task_id in task_ids:
Anytype_API-update-object(
space_id=SPACE_ID,
object_id=task_id,
properties=[{"key": "status", "select": done_tag_id}]
)
```
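A language-level sketch of the same loop, with the API call abstracted behind a callable so failures are collected instead of aborting the batch (the `update_object` parameter is a placeholder, not a real SDK function):

```python
def complete_tasks(task_ids, done_tag_id, update_object):
    """Mark each task done, collecting failures rather than stopping the batch."""
    failures = []
    for task_id in task_ids:
        try:
            # update_object stands in for the real Anytype update call
            update_object(task_id, [{"key": "status", "select": done_tag_id}])
        except Exception as exc:
            failures.append((task_id, str(exc)))
    return failures

# Usage with a stand-in client that just records calls:
calls = []
failures = complete_tasks(
    ["id1", "id2", "id3"], "<done_tag_id>",
    lambda object_id, properties: calls.append((object_id, properties)),
)
print(len(calls), len(failures))  # 3 0
```

Returning the failure list keeps partial batch success visible, which matters when one stale object ID would otherwise mask the rest of the updates.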
### Archive Completed Projects
```
# 1. Find completed projects
Anytype_API-search-space
space_id: SPACE_ID
types: ["project"]
filters:
conditions:
- property_key: "status"
select: "<completed_tag_id>"
# 2. For each, update to archived status or move to archive
```
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| 401 Unauthorized | Missing/invalid auth | Check API key configuration |
| 404 Not Found | Invalid space/object ID | Verify IDs with list operations |
| 400 Bad Request | Invalid property format | Check property types match expected format |
## Notes
- Always retrieve space_id fresh via `list-spaces` before operations
- Tag IDs must be retrieved via `list-tags` for the specific property
- Object relations require the target object's ID, not name
- Dates use ISO 8601 format: `2025-01-05` or `2025-01-05T18:00:00Z`
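Both date forms from the last note can be produced with Python's standard library:

```python
from datetime import date, datetime, timezone

# Date-only form used by due_date filters
today = date.today().isoformat()  # e.g. "2025-01-05"

# Full timestamp form with an explicit UTC marker
due = datetime(2025, 1, 5, 18, 0, tzinfo=timezone.utc)
stamp = due.strftime("%Y-%m-%dT%H:%M:%SZ")
print(stamp)  # 2025-01-05T18:00:00Z
```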

View File

@@ -1,190 +0,0 @@
# PARA Methodology Reference
PARA is a universal system for organizing digital information, created by Tiago Forte.
## The Four Categories
### Projects
**Definition**: A series of tasks linked to a goal, with a deadline.
**Characteristics**:
- Has a clear outcome/deliverable
- Has a deadline (explicit or implicit)
- Requires multiple tasks to complete
- Can be completed (finite)
**Examples**:
- Launch NixOS Flakes course
- Hire senior backend developer
- Complete Q1 board presentation
- Publish self-hosting playbook video
**Questions to identify**:
- What am I committed to finishing?
- What has a deadline?
- What would I celebrate completing?
### Areas
**Definition**: A sphere of activity with a standard to be maintained over time.
**Characteristics**:
- Ongoing responsibility (infinite)
- Has standards, not deadlines
- Requires regular attention
- Never "complete" - only maintained
**Sascha's Areas**:
1. CTO Leadership
2. m3ta.dev
3. YouTube @m3tam3re
4. Technical Exploration
5. Personal Development
6. Health & Wellness
7. Family
**Questions to identify**:
- What roles do I maintain?
- What standards must I uphold?
- What would suffer if I ignored it?
### Resources
**Definition**: A topic or theme of ongoing interest.
**Characteristics**:
- Reference material for future use
- No immediate action required
- Supports projects and areas
- Can be shared or reused
**Examples**:
- NixOS configuration patterns
- n8n workflow templates
- Self-hosting architecture docs
- AI prompt libraries
- Book notes and highlights
**Questions to identify**:
- What might be useful later?
- What do I want to learn more about?
- What reference material do I need?
### Archives
**Definition**: Inactive items from the other three categories.
**Characteristics**:
- Completed projects
- Areas no longer active
- Resources no longer relevant
- Preserved for reference, not action
**When to archive**:
- Project completed or cancelled
- Role/responsibility ended
- Topic no longer relevant
- Information outdated
## The PARA Workflow
### Capture
Everything starts in the **Inbox**. Don't organize during capture.
### Clarify
Ask: "Is this actionable?"
- **Yes** → Is it a single task or a project?
- **No** → Is it reference material or trash?
### Organize
Place items in the appropriate category:
- Active work → Projects (linked to Area)
- Ongoing standards → Areas
- Reference → Resources
- Done/irrelevant → Archives
### Review
- **Daily**: Process inbox, check today's tasks
- **Weekly**: Review all projects, check areas, process resources
- **Monthly**: Archive completed, assess areas, audit resources
## Project vs Area Confusion
The most common PARA mistake is confusing projects and areas.
| If you treat a Project as an Area | If you treat an Area as a Project |
|-----------------------------------|-----------------------------------|
| Never feels "done" | Feels like constant failure |
| Scope creeps infinitely | Standards slip without noticing |
| No sense of progress | Burnout from "finishing" the infinite |
**Test**: Can I complete this in a finite series of work sessions?
- Yes → Project
- No, it's ongoing → Area
## Maintenance Rhythms
### Daily (Evening - 10 min)
1. Process inbox items
2. Review completed tasks
3. Set tomorrow's priorities
### Weekly (Sunday evening - 30 min)
1. Get clear: Inbox to zero
2. Get current: Review each Area
3. Review all active Projects
4. Plan next week's outcomes
### Monthly (First Sunday - 60 min)
1. Review Area standards
2. Archive completed Projects
3. Evaluate stalled Projects
4. Audit Resources relevance
### Quarterly (90 min)
1. Review life Areas balance
2. Set quarterly outcomes
3. Major archives cleanup
4. System improvements
## PARA in Anytype
### Type Mapping
| PARA | Anytype Type | Notes |
|------|--------------|-------|
| Project | `project` | Has area relation, deadline |
| Area | `area` | Top-level organization |
| Resource | `resource` | Reference material |
| Archive | Use `archived` property | Or separate Archive type |
| Task | `task` | Lives within Project or Area |
| Inbox | `note` with status=inbox | Quick capture |
### Recommended Properties
**On Projects**:
- `area` (relation) - Which area owns this
- `status` (select) - active, on-hold, completed
- `due_date` (date) - Target completion
- `outcome` (text) - What does "done" look like
**On Tasks**:
- `project` or `area` (relation) - Parent container
- `status` (select) - inbox, next, waiting, scheduled, done
- `priority` (select) - critical, high, medium, low
- `due_date` (date) - When it's needed
- `energy` (select) - Required energy level
- `context` (multi_select) - Where/how it can be done
**On Areas**:
- `description` (text) - Standards to maintain
- `review_frequency` (select) - daily, weekly, monthly
## Common Pitfalls
1. **Over-organizing during capture** - Just dump it in inbox
2. **Too many projects** - Active projects should be <15
3. **Orphan tasks** - Every task needs a project or area
4. **Stale resources** - Archive what you haven't touched in 6 months
5. **Skipping reviews** - The system only works if you review it

View File

@@ -1,307 +0,0 @@
# Review Templates
Structured templates for daily and weekly reviews.
## Daily Review Template (Evening)
**Duration**: 10-15 minutes
**Best time**: Evening, after work concludes
### Script
```
## Daily Review - [DATE]
### Wins Today
[List completed tasks - celebrate progress]
- [x]
- [x]
- [x]
### Still Open
[Tasks started but not finished]
- [ ] [Task] - [Status/blocker]
- [ ] [Task] - [Rescheduled to: DATE]
### Inbox Check
- Items in inbox: [COUNT]
- Quick processing:
- [Item] → [Action: task/project/trash/defer]
### Energy Assessment
- How was today's energy? [High/Medium/Low]
- What drained energy?
- What boosted energy?
### Tomorrow's Top 3
[Most impactful tasks for tomorrow - set before sleeping]
1. **[TASK]** - Why: [impact reason]
2. **[TASK]** - Why: [impact reason]
3. **[TASK]** - Why: [impact reason]
### Blockers to Address
- [Blocker] - Need: [what's needed to unblock]
### Notes for Tomorrow
[Anything to remember, context to preserve]
---
Review completed at: [TIME]
```
### Daily Review Checklist
- [ ] Review calendar for tomorrow
- [ ] Check completed tasks
- [ ] Process any urgent inbox items
- [ ] Identify top 3 priorities
- [ ] Note any blockers
- [ ] Clear desk/workspace (physical reset)
## Weekly Review Template
**Duration**: 30-45 minutes
**Best time**: Sunday evening or Friday afternoon
### Script
```
## Weekly Review - Week of [DATE]
### Part 1: Get Clear (Capture)
#### Inbox Processing
- Starting inbox count: [COUNT]
- Process each item:
- [Item] → [Destination: project/area/resource/trash]
- Ending inbox count: [TARGET: 0]
#### Loose Ends
- Notes to process:
- Voice memos:
- Screenshots/photos:
- Browser tabs to close:
- Email to archive:
### Part 2: Get Current (Review)
#### Area Review
**CTO Leadership**
- Active projects: [list]
- Stalled items: [list]
- Standards check: [On track / Needs attention]
- Next week focus:
**m3ta.dev**
- Active projects: [list]
- Content pipeline:
- Next week focus:
**YouTube @m3tam3re**
- Active projects: [list]
- Upload schedule:
- Next week focus:
**Technical Exploration**
- Current experiments:
- Learning goals:
- Next week focus:
**Personal Development**
- Current focus:
- Progress:
- Next week focus:
**Health & Wellness**
- This week: [assessment]
- Next week intention:
**Family**
- Quality time this week:
- Next week plans:
#### Project Status
| Project | Area | Status | Next Action | Due |
|---------|------|--------|-------------|-----|
| [Name] | [Area] | [On track/Stalled/Blocked] | [Next step] | [Date] |
#### Waiting For
[Items waiting on others]
| Item | Waiting On | Since | Follow-up Date |
|------|-----------|-------|----------------|
| | | | |
### Part 3: Get Creative (Reflect)
#### What Worked This Week?
-
#### What Didn't Work?
-
#### New Ideas/Projects
[Don't commit yet - just capture]
-
#### Should I Start?
[New projects to consider]
-
#### Should I Stop?
[Projects or commitments to drop]
-
#### Should I Continue?
[Projects going well]
-
### Part 4: Plan Next Week
#### Weekly Outcomes
[3-5 specific outcomes for the week]
1. [ ]
2. [ ]
3. [ ]
4. [ ]
5. [ ]
#### Time Blocks
[Protect time for deep work]
| Day | Block | Focus |
|-----|-------|-------|
| Mon | | |
| Tue | | |
| Wed | | |
| Thu | | |
| Fri | | |
#### Key Meetings
-
#### Week Theme (Optional)
[One word or phrase to guide the week]
---
Review completed at: [TIME]
Next weekly review: [DATE]
```
### Weekly Review Checklist
- [ ] Close all browser tabs
- [ ] Process email inbox to zero
- [ ] Process Anytype inbox to zero
- [ ] Review each Area
- [ ] Check all active Projects
- [ ] Review Waiting For list
- [ ] Clear completed tasks
- [ ] Archive finished projects
- [ ] Set weekly outcomes
- [ ] Block deep work time
- [ ] Review calendar for the week
## Monthly Review Template
**Duration**: 60 minutes
**Best time**: First Sunday of the month
### Script
```
## Monthly Review - [MONTH YEAR]
### Month Metrics
- Projects completed: [COUNT]
- Projects started: [COUNT]
- Tasks completed: [COUNT]
- Inbox avg items: [COUNT]
### Area Deep Dive
[For each Area, rate 1-10 and note]
| Area | Rating | Notes | Action |
|------|--------|-------|--------|
| CTO Leadership | /10 | | |
| m3ta.dev | /10 | | |
| YouTube | /10 | | |
| Tech Exploration | /10 | | |
| Personal Dev | /10 | | |
| Health | /10 | | |
| Family | /10 | | |
### Project Archive
[Projects completed this month → Archive]
-
### Stalled Projects
[No progress in 30+ days - decide: continue, pause, or kill]
| Project | Days Stalled | Decision |
|---------|--------------|----------|
| | | |
### Resource Audit
[Resources not accessed in 3+ months - archive or keep?]
-
### System Improvements
[What's not working in the system?]
-
### Next Month Focus
[Top 3 priorities for the month]
1.
2.
3.
---
Review completed at: [TIME]
Next monthly review: [DATE]
```
## ntfy Notification Templates
### Daily Review Summary
```
Daily Review Complete
Completed: [X] tasks
Tomorrow's Top 3:
1. [Task 1]
2. [Task 2]
3. [Task 3]
Inbox: [X] items pending
```
### Weekly Review Reminder
```
Weekly Review Reminder
Time for your weekly review!
Start here: "weekly review"
```
### Overdue Alert
```
Overdue Tasks Alert
[X] tasks past due date:
- [Task 1] (due [DATE])
- [Task 2] (due [DATE])
Review now: "show overdue"
```
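The notification texts above can be pushed through ntfy's HTTP API, which takes the message as the request body and metadata (title, tags, priority) as headers. A minimal sketch; the topic name `chiron-reviews` is an assumption, substitute your own:

```shell
# Build the daily summary and send it to an ntfy topic.
# Topic name is a placeholder; counts would come from your task system.
TOPIC="chiron-reviews"
TITLE="Daily Review Complete"
BODY=$(printf 'Completed: 5 tasks\nInbox: 2 items pending')
# Send (requires network; ntfy reads the message from the request body):
# curl -s -H "Title: $TITLE" -H "Priority: default" -d "$BODY" "https://ntfy.sh/$TOPIC"
echo "$BODY"
```

Self-hosted ntfy instances work the same way; only the base URL changes.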


@@ -427,8 +427,7 @@ Prompts for: location, description, tools, then generates the agent file.
1. Add agent to `opencode.json` or `agents.json`
2. Create prompt file in `prompts/` directory
3. Validate with `scripts/validate-agent.sh` (if available in repo)
- Alternative: Use `python3 -c "import json; json.load(open('agents.json'))"` for syntax check
3. Validate with: `python3 -c "import json; json.load(open('agents.json'))"` for syntax check
### Method 3: Markdown File
@@ -442,10 +441,7 @@ Validate agent configuration:
```bash
# Validate agents.json
./scripts/validate-agent.sh agents.json
# Validate markdown agent
./scripts/validate-agent.sh ~/.config/opencode/agents/review.md
python3 -c "import json; json.load(open('agents.json'))"
```
## Testing

skills/basecamp/SKILL.md Normal file

@@ -0,0 +1,315 @@
---
name: basecamp
description: "Use when: (1) Managing Basecamp projects, (2) Working with Basecamp todos and tasks, (3) Reading/updating message boards and campfire, (4) Managing card tables (kanban), (5) Handling email forwards/inbox, (6) Setting up webhooks for automation. Triggers: 'Basecamp', 'project', 'todo', 'card table', 'campfire', 'message board', 'webhook', 'inbox', 'email forwards'."
compatibility: opencode
---
# Basecamp
Basecamp 3 project management integration via MCP server. Provides comprehensive access to projects, todos, messages, card tables (kanban), campfire, inbox, documents, and webhooks.
## Core Workflows
### Finding Projects and Todos
**List all projects:**
```bash
# Get all accessible Basecamp projects
get_projects
```
**Get project details:**
```bash
# Get specific project information including status, tools, and access level
get_project --project_id <id>
```
**Explore todos:**
```bash
# Get all todo lists in a project
get_todolists --project_id <id>
# Get all todos from a specific todo list (handles pagination automatically)
get_todos --recording_id <todo_list_id>
# Search across projects for todos/messages containing keywords
search_basecamp --query <search_term>
```
### Managing Card Tables (Kanban)
**Card tables** are Basecamp's kanban-style workflow management tool.
**Explore card table:**
```bash
# Get card table for a project
get_card_table --project_id <id>
# Get all columns in a card table
get_columns --card_table_id <id>
# Get all cards in a specific column
get_cards --column_id <id>
```
**Manage columns:**
```bash
# Create new column (e.g., "In Progress", "Done")
create_column --card_table_id <id> --title "Column Name"
# Update column title
update_column --column_id <id> --title "New Title"
# Move column to different position
move_column --column_id <id> --position 3
# Update column color
update_column_color --column_id <id> --color "red"
# Put column on hold (freeze work)
put_column_on_hold --column_id <id>
# Remove hold from column (unfreeze work)
remove_column_hold --column_id <id>
```
**Manage cards:**
```bash
# Create new card in a column
create_card --column_id <id> --title "Task Name" --content "Description"
# Update card details
update_card --card_id <id> --title "Updated Title" --content "New content"
# Move card to different column
move_card --card_id <id> --to_column_id <new_column_id>
# Mark card as complete
complete_card --card_id <id>
# Mark card as incomplete
uncomplete_card --card_id <id>
```
**Manage card steps (sub-tasks):**
```bash
# Get all steps for a card
get_card_steps --card_id <id>
# Create new step
create_card_step --card_id <id> --content "Sub-task description"
# Update step
update_card_step --step_id <id> --content "Updated description"
# Delete step
delete_card_step --step_id <id>
# Mark step as complete
complete_card_step --step_id <id>
# Mark step as incomplete
uncomplete_card_step --step_id <id>
```
### Working with Messages and Campfire
**Message board:**
```bash
# Get message board for a project
get_message_board --project_id <id>
# Get all messages from a project
get_messages --project_id <id>
# Get specific message
get_message --message_id <id>
```
**Campfire (team chat):**
```bash
# Get recent campfire lines (messages)
get_campfire_lines --campfire_id <id>
```
**Comments:**
```bash
# Get comments for any Basecamp item (message, todo, card, etc.)
get_comments --recording_id <id>
# Create a comment
create_comment --recording_id <id> --content "Your comment"
```
### Managing Inbox (Email Forwards)
**Inbox** handles email forwarding to Basecamp projects.
**Explore inbox:**
```bash
# Get inbox for a project (email forwards container)
get_inbox --project_id <id>
# Get all forwarded emails from a project's inbox
get_forwards --project_id <id>
# Get specific forwarded email
get_forward --forward_id <id>
# Get all replies to a forwarded email
get_inbox_replies --forward_id <id>
# Get specific reply
get_inbox_reply --reply_id <id>
```
**Manage forwards:**
```bash
# Move forwarded email to trash
trash_forward --forward_id <id>
```
### Documents
**Manage documents:**
```bash
# List documents in a vault
get_documents --vault_id <id>
# Get specific document
get_document --document_id <id>
# Create new document
create_document --vault_id <id> --title "Document Title" --content "Document content"
# Update document
update_document --document_id <id> --title "Updated Title" --content "New content"
# Move document to trash
trash_document --document_id <id>
```
### Webhooks and Automation
**Webhooks** enable automation by triggering external services on Basecamp events.
**Manage webhooks:**
```bash
# List webhooks for a project
get_webhooks --project_id <id>
# Create webhook
create_webhook --project_id <id> --callback_url "https://your-service.com/webhook" --types "TodoCreated,TodoCompleted"
# Delete webhook
delete_webhook --webhook_id <id>
```
### Daily Check-ins
**Project check-ins:**
```bash
# Get daily check-in questions for a project
get_daily_check_ins --project_id <id>
# Get answers to daily check-in questions
get_question_answers --question_id <id>
```
### Attachments and Events
**Upload and track:**
```bash
# Upload file as attachment
create_attachment --recording_id <id> --file_path "/path/to/file"
# Get events for a recording
get_events --recording_id <id>
```
## Integration with Other Skills
### Hermes (Work Communication)
Hermes loads this skill when working with Basecamp projects. Common workflows:
| User Request | Hermes Action | Basecamp Tools Used |
|--------------|---------------|---------------------|
| "Create a task in Marketing project" | Create card/todo | `create_card`, `get_columns`, `create_column` |
| "Check project updates" | Read messages/campfire | `get_messages`, `get_campfire_lines`, `get_comments` |
| "Update my tasks" | Move cards, update status | `move_card`, `complete_card`, `update_card` |
| "Add comment to discussion" | Post comment | `create_comment`, `get_comments` |
| "Review project inbox" | Check email forwards | `get_inbox`, `get_forwards`, `get_inbox_replies` |
### Workflow Patterns
**Project setup:**
1. Use `get_projects` to find existing projects
2. Use `get_project` to verify project details
3. Use `get_todolists` or `get_card_table` to understand project structure
**Task management:**
1. Use `get_todolists` or `get_columns` to find appropriate location
2. Use `create_card` or todo creation to add work
3. Use `move_card`, `complete_card` to update status
4. Use `get_card_steps` and `create_card_step` for sub-task breakdown
**Communication:**
1. Use `get_messages` or `get_campfire_lines` to read discussions
2. Use `create_comment` to contribute to existing items
3. Use `search_basecamp` to find relevant content
**Automation:**
1. Use `get_webhooks` to check existing integrations
2. Use `create_webhook` to set up external notifications
## Tool Organization by Category
**Projects & Lists:**
- `get_projects`, `get_project`, `get_todolists`, `get_todos`, `search_basecamp`
**Card Table (Kanban):**
- `get_card_table`, `get_columns`, `get_column`, `create_column`, `update_column`, `move_column`, `update_column_color`, `put_column_on_hold`, `remove_column_hold`, `watch_column`, `unwatch_column`, `get_cards`, `get_card`, `create_card`, `update_card`, `move_card`, `complete_card`, `uncomplete_card`, `get_card_steps`, `create_card_step`, `get_card_step`, `update_card_step`, `delete_card_step`, `complete_card_step`, `uncomplete_card_step`
**Messages & Communication:**
- `get_message_board`, `get_messages`, `get_message`, `get_campfire_lines`, `get_comments`, `create_comment`
**Inbox (Email Forwards):**
- `get_inbox`, `get_forwards`, `get_forward`, `get_inbox_replies`, `get_inbox_reply`, `trash_forward`
**Documents:**
- `get_documents`, `get_document`, `create_document`, `update_document`, `trash_document`
**Webhooks:**
- `get_webhooks`, `create_webhook`, `delete_webhook`
**Other:**
- `get_daily_check_ins`, `get_question_answers`, `create_attachment`, `get_events`
## Common Queries
**Finding the right project:**
```bash
# Use search to find projects by keyword
search_basecamp --query "marketing"
# Then inspect specific project
get_project --project_id <id>
```
**Understanding project structure:**
```bash
# Check which tools are available in a project
get_project --project_id <id>
# Project response includes tools: message_board, campfire, card_table, todolists, etc.
```
**Bulk operations:**
```bash
# Get all todos across a project (pagination handled automatically)
get_todos --recording_id <todo_list_id>
# Returns all pages of results
# Get all cards across all columns
get_columns --card_table_id <id>
get_cards --column_id <id> # Repeat for each column
```


@@ -65,56 +65,69 @@ Be ready to backtrack and clarify. Brainstorming is non-linear.
After reaching clarity, offer:
> "Would you like me to save this as an Anytype Brainstorm object for reference?"
> "Would you like me to save this brainstorm to Obsidian for reference?"
If yes, use the Anytype MCP to create a Brainstorm object:
If yes, create a brainstorm note in Obsidian:
```
Anytype_API-create-object
space_id: CHIRON_SPACE_ID
type_key: "brainstorm_v_2"
name: "<topic>"
body: "<full brainstorm content in markdown>"
icon: { format: "emoji", emoji: "💭" }
properties: [
{ key: "topic", text: "<short title>" },
{ key: "context", text: "<situation and trigger>" },
{ key: "outcome", text: "<what success looks like>" },
{ key: "constraints", text: "<time, resources, boundaries>" },
{ key: "options", text: "<options considered>" },
{ key: "decision", text: "<final choice>" },
{ key: "rationale", text: "<reasoning behind decision>" },
{ key: "next_steps", text: "<action items>" },
{ key: "framework", select: "<framework_tag_id>" },
{ key: "status", select: "draft" }
]
File: ~/CODEX/03-resources/brainstorms/YYYY-MM-DD-[topic].md
---
date: {{date}}
created: {{timestamp}}
type: brainstorm
framework: {{framework_used}}
status: {{draft|final|archived}}
tags: #brainstorm #{{framework_tag}}
---
# {{topic}}
## Context
{{situation and trigger}}
## Outcome
{{what success looks like}}
## Constraints
{{time, resources, boundaries}}
## Options Explored
{{options considered}}
## Decision
{{final choice}}
## Rationale
{{reasoning behind decision}}
## Next Steps
{{action items}}
---
*Created: {{timestamp}}*
```
**Chiron Space ID**: `bafyreie5sfq7pjfuq56hxsybos545bi4tok3kx7nab3vnb4tnt4i3575p4.yu20gbnjlbxv`
**Framework tags** (use in `tags:` frontmatter):
- `#pros-cons` - Pros/Cons analysis
- `#swot` - Strategic SWOT assessment
- `#5-whys` - Root cause analysis
- `#how-now-wow` - Prioritization matrix
- `#starbursting` - Comprehensive exploration (6 questions)
- `#constraint-mapping` - Boundary analysis
**Framework Tag IDs**:
- `bafyreiatkdbwq53shngaje6wuw752wxnwqlk3uhy6nicamdr56jpvji34i` - None
- `bafyreiaizrndgxmzbbzo6lurkgi7fc6evemoc5tivswrdu57ngkizy4b3u` - Pros/Cons
- `bafyreiaym5zkajnsrklivpjkizkuyhy3v5fzo62aaeobdlqzhq47clv6lm` - SWOT
- `bafyreihgfpsjeyuu7p46ejzd5jce5kmgfsuxy7r5kl4fqdhuq7jqoggtgq` - 5 Whys
- `bafyreieublfraypplrr5mmnksnytksv4iyh7frspyn64gixaodwmnhmosu` - How-Now-Wow
- `bafyreieyz6xjpt3zxad7h643m24oloajcae3ocnma3ttqfqykmggrsksk4` - Starbursting
- `bafyreigokn5xgdosd4cihehl3tqfsd25mwdaapuhopjgn62tkpvpwn4tmy` - Constraint Mapping
**Status Tag IDs**:
- `bafyreig5um57baws2dnntaxsi4smxtrzftpe57a7wyhfextvcq56kdkllq` - Draft
- `bafyreiffiinadpa2fwxw3iylj7pph3yzbnhe63dcyiwr4x24ne4jsgi24` - Final
- `bafyreihk6dlpwh3nljrxcqqe3v6tl52bxuvmx3rcgyzyom6yjmtdegu4ja` - Archived
**Optional**: Link to related objects using `linked_projects` or `linked_tasks` properties with object IDs.
**Status tags** (use in `status:` frontmatter):
- `draft` - Initial capture
- `final` - Decision made
- `archived` - No longer active
---
## Template Setup
For a better editing experience, create a template in Anytype:
For a better editing experience, create a template in Obsidian:
1. Open Anytype desktop app → Chiron space
1. Open Obsidian → ~/CODEX vault
2. Go to Content Model → Object Types → Brainstorm v2
3. Click Templates (top right) → Click + to create template
4. Name it "Brainstorm Session" and configure default fields:
@@ -185,4 +198,4 @@ After brainstorming, common next steps:
| Task identified | task-management | "Add this to my tasks" |
| Work project | basecamp | "Set this up in Basecamp" |
All handoffs can reference the Anytype Brainstorm object via its ID or linked objects.
All handoffs can reference the Obsidian brainstorm note via WikiLinks or file paths.


@@ -0,0 +1,210 @@
# Brainstorm Obsidian Workflow
This document describes how to create and use brainstorm notes in Obsidian.
## Quick Create
Create a brainstorm note in Obsidian markdown format:
```markdown
File: ~/CODEX/03-resources/brainstorms/YYYY-MM-DD-[topic].md
---
date: 2026-01-27
created: 2026-01-27T18:30:00Z
type: brainstorm
framework: pros-cons
status: draft
tags: #brainstorm #pros-cons
---
# NixOS Course Launch Strategy
## Context
Want to launch NixOS course for developers who want to learn Nix
## Outcome
Build long-term audience/community around NixOS expertise
## Constraints
- 2-4 weeks preparation time
- Solo creator (no team yet)
- Limited budget for marketing
## Options Explored
### Option A: Early Access Beta
- **Approach**: Release course to 10-20 people first, gather feedback, then full launch
- **Pros**: Validates content, builds testimonials, catches bugs early
- **Cons**: Slower to revenue, requires managing beta users
- **Best if**: Quality is critical and you have patient audience
### Option B: Free Preview + Upsell
- **Approach**: Release first module free, full course for paid
- **Pros**: Low barrier to entry, demonstrates value, builds email list
- **Cons**: Lower conversion rate, can feel "bait-and-switchy"
- **Best if**: Content quality is obvious from preview
### Option C: Full Launch with Community
- **Approach**: Launch full course immediately with Discord/Community for support
- **Pros**: Immediate revenue, maximum reach, community built-in
- **Cons**: No validation, bugs in production, overwhelmed support
- **Best if**: Content is well-tested and you have support capacity
## Decision
**Early Access Beta** - Build anticipation while validating content
## Rationale
Quality and community trust matter more than speed. A beta launch lets me:
1. Catch errors before they damage reputation
2. Build testimonials that drive full launch
3. Gather feedback to improve the product
4. Create a community of early adopters who become evangelists
## Next Steps
1. Create landing page with beta signup
2. Build email list from signups
3. Create course outline and first modules
4. Select 10-20 beta users from community
5. Set up feedback collection system (notion/obsidian)
6. Launch beta (target: Feb 15)
7. Collect feedback for 2 weeks
8. Finalize content based on feedback
9. Full launch (target: March 1)
```
## Note Structure
| Frontmatter Field | Purpose | Values |
|-----------------|---------|---------|
| `date` | Date created | YYYY-MM-DD |
| `created` | Timestamp | ISO 8601 |
| `type` | Note type | `brainstorm` |
| `framework` | Framework used | `none`, `pros-cons`, `swot`, `5-whys`, `how-now-wow`, `starbursting`, `constraint-mapping` |
| `status` | Progress status | `draft`, `final`, `archived` |
| `tags` | Categorization | Always include `#brainstorm`, add framework tag |
## Framework Tags
| Framework | Tag | When to Use |
|-----------|------|-------------|
| None | `#none` | Conversational exploration without structure |
| Pros/Cons | `#pros-cons` | Binary decision (A or B, yes or no) |
| SWOT | `#swot` | Strategic assessment of situation |
| 5 Whys | `#5-whys` | Finding root cause of problem |
| How-Now-Wow | `#how-now-wow` | Prioritizing many ideas by impact/effort |
| Starbursting | `#starbursting` | Comprehensive exploration (6 questions) |
| Constraint Mapping | `#constraint-mapping` | Understanding boundaries and constraints |
## Status Values
| Status | Description | When to Use |
|--------|-------------|-------------|
| `draft` | Initial capture, work in progress | Start with this, update as you work |
| `final` | Decision made, brainstorm complete | When you've reached clarity |
| `archived` | No longer relevant or superseded | Historical reference only |
## Template Setup
For a better editing experience, create a template in Obsidian:
1. Open Obsidian → ~/CODEX vault
2. Create the folder `_chiron/templates/` if it doesn't exist
3. Create template file: `brainstorm-note.md` with:
- Frontmatter with placeholder values
- Markdown structure matching the sections above
- Empty sections ready to fill in
4. Set up Obsidian Templates plugin (optional) to use this template
**Obsidian Template:**
```markdown
---
date: {{date}}
created: {{timestamp}}
type: brainstorm
framework: {{framework}}
status: draft
tags: #brainstorm #{{framework}}
---
# {{topic}}
## Context
## Outcome
## Constraints
## Options Explored
### Option A: {{option_a_name}}
- **Approach**:
- **Pros**:
- **Cons**:
- **Best if**:
### Option B: {{option_b_name}}
- **Approach**:
- **Pros**:
- **Cons**:
- **Best if**:
## Decision
## Rationale
## Next Steps
1.
2.
3.
```
## Linking to Other Notes
After creating a brainstorm, link it to related notes using WikiLinks:
```markdown
## Related Projects
- [[Launch NixOS Flakes Course]]
- [[Q2 Training Program]]
## Related Tasks
- [[Tasks]]
```
## Searching Brainstorms
Find brainstorms by topic, framework, or status using Obsidian search:
**Obsidian search:**
- Topic: `path:03-resources/brainstorms "NixOS"`
- Framework: `#pros-cons path:03-resources/brainstorms`
- Status: `#draft path:03-resources/brainstorms`
**Dataview query (if using plugin):**
```dataview
TABLE date, topic, framework, status
FROM "03-resources/brainstorms"
WHERE type = "brainstorm"
SORT date DESC
```
## Best Practices
1. **Create brainstorms for any significant decision** - Capture reasoning while fresh
2. **Mark as Final when complete** - Helps with search and review
3. **Link to related notes** - Creates context web via WikiLinks
4. **Use frameworks selectively** - Not every brainstorm needs structure
5. **Review periodically** - Brainstorms can inform future decisions
6. **Keep structure consistent** - Same sections make reviews easier
7. **Use tags for filtering** - Framework and status tags are essential
## Integration with Other Skills
| From brainstorming | To skill | Handoff trigger |
|------------------|------------|-----------------|
| Project decision | plan-writing | "Create a project plan for this" |
| Task identified | task-management | "Add this to my tasks" |
| Work project | basecamp | "Set this up in Basecamp" |
All handoffs can reference the Obsidian brainstorm note via WikiLinks or file paths.


@@ -0,0 +1,262 @@
---
name: doc-translator
description: "Translates external documentation websites to specified language(s) and publishes to Outline wiki. Use when: (1) Translating SaaS/product documentation into German or Czech, (2) Publishing translated docs to Outline wiki, (3) Re-hosting external images to Outline. Triggers: 'translate docs', 'translate documentation', 'translate to German', 'translate to Czech', 'publish to wiki', 'doc translation', 'TEEM translation'."
compatibility: opencode
---
# Doc Translator
Translate external documentation websites to German (DE) and/or Czech (CZ), then publish to the company Outline wiki at `https://wiki.az-gruppe.com`. All images are re-hosted on Outline. UI terms use TEEM format.
## Core Workflow
### 1. Validate Input & Clarify
Before starting, confirm:
1. **URL accessibility** - Check with `curl -sI <URL>` for HTTP 200
2. **Target language(s)** - Always ask explicitly using the `question` tool:
```
question: "Which language(s) should I translate to?"
options: ["German (DE)", "Czech (CZ)", "Both (DE + CZ)"]
```
3. **Scope** - If URL is an index page with multiple sub-pages, ask:
```
question: "This page links to multiple sub-pages. What should I translate?"
options: ["This page only", "This page + all linked sub-pages", "Let me pick specific pages"]
```
4. **Target collection** - Use `Outline_list_collections` to show available collections, then ask which one to publish to
**CRITICAL:** NEVER auto-select collection. Always present collection list to user and wait for explicit selection before proceeding with document creation.
If URL fetch fails, use `question` to ask for an alternative URL or manual content paste.
### 2. Fetch & Parse Content
Use the `webfetch` tool to retrieve page content:
```
webfetch(url="<URL>", format="markdown")
```
From the result:
- Extract main content body (ignore navigation, footers, sidebars, cookie banners)
- Preserve document structure (headings, lists, tables, code blocks)
- Collect all image URLs into a list for Step 3
- Note any embedded videos or interactive elements (these cannot be translated)
For multi-page docs, repeat for each page.
### 3. Download Images
Download all images to a temporary directory:
```bash
mkdir -p /tmp/doc-images
# For each image URL (assuming it is in $IMAGE_URL):
curl -sL "$IMAGE_URL" -o "/tmp/doc-images/$(basename "$IMAGE_URL")"
```
Track a mapping of: `original_url -> local_filename -> outline_attachment_url`
If an image download fails, log it and continue. Use a placeholder in the final document:
```markdown
> **[Image unavailable]** Original: IMAGE_URL
```
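The three-way mapping can be kept in bash associative arrays (bash 4+). The URLs below are illustrative placeholders, not real endpoints:

```shell
# Track original_url -> local_filename -> outline_attachment_url.
declare -A LOCAL_FILE ATTACHMENT_URL
url="https://docs.example.com/img/screenshot.png"   # placeholder source URL
LOCAL_FILE["$url"]="/tmp/doc-images/$(basename "$url")"
# After step 4 succeeds, record the URL returned by the upload script:
ATTACHMENT_URL["$url"]="<attachment_url from upload script>"
echo "${LOCAL_FILE[$url]}"   # -> /tmp/doc-images/screenshot.png
```

Keeping both arrays keyed by the original URL makes the later search-and-replace step (step 4) a simple lookup.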
### 4. Upload Images to Outline
MCP-outline does not support attachment creation. Use the bundled script for image uploads:
```bash
# Upload with optional document association
bash scripts/upload_image_to_outline.sh "/tmp/doc-images/screenshot.png" "$DOCUMENT_ID"
# Upload without document (attach later)
bash scripts/upload_image_to_outline.sh "/tmp/doc-images/screenshot.png"
```
The script handles API key loading from `/run/agenix/outline-key`, content-type detection, the two-step presigned POST flow, and retries. Output is JSON: `{"success": true, "attachment_url": "https://..."}`.
Replace image references in the translated markdown with the returned `attachment_url`:
```markdown
![description](ATTACHMENT_URL)
```
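The replacement itself can be done with `sed` per mapped URL. Both URLs here are illustrative placeholders; using `|` as the delimiter avoids escaping the `/` characters in URLs:

```shell
# Swap one original image URL for its Outline attachment URL
# in the translated markdown file.
orig="https://docs.example.com/img/a.png"          # placeholder
attach="https://wiki.az-gruppe.com/files/a"        # placeholder
printf '![screenshot](%s)\n' "$orig" > /tmp/translated.md
sed -i "s|$orig|$attach|g" /tmp/translated.md
cat /tmp/translated.md  # -> ![screenshot](https://wiki.az-gruppe.com/files/a)
```

Note that `sed -i` without a suffix argument is GNU sed syntax; on BSD/macOS use `sed -i ''`.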
For all other Outline operations (documents, collections, search), use MCP tools (`Outline_*`).
### 5. Translate with TEEM Format
Translate the entire document into each target language. Apply TEEM format to UI elements.
#### Address Form (CRITICAL)
**Always use the informal "you" form** in ALL target languages:
- **German**: Use **"Du"** (informal), NEVER "Sie" (formal)
- **Czech**: Use **"ty"** (informal), NEVER "vy" (formal)
- This applies to all translations — documentation should feel approachable and direct
#### Infobox / Callout Formatting
Source documentation often uses admonitions, callouts, or info boxes (e.g., GitHub-style `> [!NOTE]`, Docusaurus `:::note`, or custom HTML boxes). **Convert ALL such elements** to Outline's callout syntax:
```markdown
:::tip
Tip or best practice content here.

:::

:::info
Informational content here.

:::

:::warning
Warning or caution content here.

:::

:::success
Success message or positive outcome here.

:::
```
**Mapping rules** (source → Outline):
| Source pattern | Outline syntax |
|---|---|
| Note, Info, Information | `:::info` |
| Tip, Hint, Best Practice | `:::tip` |
| Warning, Caution, Danger, Important | `:::warning` |
| Success, Done, Check | `:::success` |
**CRITICAL formatting**: The closing `:::` MUST be on its own line with an empty line before it. Content goes directly after the opening line.
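For Docusaurus-style single-line openers, the mapping table can be applied mechanically with `sed`. This sketch handles only `:::name` openers; GitHub-style `> [!NOTE]` blockquote admonitions span multiple lines and need a separate pass (not shown):

```shell
# Map Docusaurus-style admonition openers to Outline callout names.
convert_callouts() {
  sed -E \
    -e 's/^:::(note|information)[[:space:]]*$/:::info/' \
    -e 's/^:::(hint)[[:space:]]*$/:::tip/' \
    -e 's/^:::(caution|danger|important)[[:space:]]*$/:::warning/'
}
printf ':::note\nContent here.\n\n:::\n' | convert_callouts
```

Lines already using Outline's names (`:::tip`, `:::info`, `:::warning`, `:::success`) pass through unchanged.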
#### TEEM Rules
**Format:** `**English UI Term** (Translation)`
**Apply TEEM to:**
- Button labels
- Menu items and navigation tabs
- Form field labels
- Dialog/modal titles
- Toolbar icons with text
- Status messages from the app
- **Headings containing UI terms** (example: "## [Adding a new To-do]" becomes "## [Ein neues **To-do** (Aufgabe) hinzufügen]")
**Translate normally (no TEEM):**
- Your own explanatory text
- Document headings you create (that don't contain UI terms)
- General descriptions and conceptual explanations
- Code blocks and technical identifiers
#### German Examples
```markdown
Click **Settings** (Einstellungen) to open preferences.
Navigate to **Dashboard** (Übersicht) > **Reports** (Berichte).
Press the **Submit** (Absenden) button.
In the **File** (Datei) menu, select **Export** (Exportieren).
# Heading with UI term: Create a new **To-do** (Aufgabe)
## [Adding a new **To-do** (Aufgabe)]
```
#### Czech Examples
```markdown
Click **Settings** (Nastavení) to open preferences.
Navigate to **Dashboard** (Přehled) > **Reports** (Sestavy).
Press the **Submit** (Odeslat) button.
In the **File** (Soubor) menu, select **Export** (Exportovat).
# Heading with UI term: Create a new **To-do** (Úkol)
## [Adding a new **To-do** (Úkol)]
```
#### Ambiguous UI Terms
If a UI term has multiple valid translations depending on context, use the `question` tool:
```
question: "The term 'Board' appears in the UI. Which translation fits this context?"
options: ["Pinnwand (pinboard/bulletin)", "Tafel (whiteboard)", "Gremium (committee)"]
```
### 6. Publish to Outline
Use mcp-outline tools to publish:
1. **Find or create collection:**
- `Outline_list_collections` to find target collection
- `Outline_create_collection` if needed
2. **Create document:**
- `Outline_create_document` with translated markdown content
- Set `publish: true` for immediate visibility
- Use `parent_document_id` if nesting under an existing doc
3. **For multi-language:** Create one document per language, clearly titled:
- `[Product Name] - Dokumentation (DE)`
- `[Product Name] - Dokumentace (CZ)`
## Error Handling
| Issue | Action |
|-------|--------|
| URL fetch fails | Use `question` to ask for alternative URL or manual paste |
| Image download fails | Continue with placeholder, note in completion report |
| Outline API error (attachments) | Script retries 3x with backoff; on final failure save markdown to `/tmp/doc-translator-backup-TIMESTAMP.md`, report error |
| Outline API error (document) | Save markdown to `/tmp/doc-translator-backup-TIMESTAMP.md`, report error |
| Ambiguous UI term | Use `question` to ask user for correct translation |
| Large document (>5000 words) | Ask user if splitting into multiple docs is preferred |
| Multi-page docs | Ask user about scope before proceeding |
| Rate limiting | Wait and retry with exponential backoff |
If Outline publish fails, always save the translated markdown locally as backup before reporting the error.
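One way to implement the local backup rule; the timestamp format is an assumption, since the table above only specifies a `TIMESTAMP` placeholder:

```shell
# Save the translated markdown before reporting an Outline failure.
src=/tmp/translated-de.md
printf '# Beispiel\n' > "$src"                      # stand-in for real content
backup="/tmp/doc-translator-backup-$(date +%Y%m%d-%H%M%S).md"
cp "$src" "$backup"
echo "$backup"
```

Reporting the backup path in the error message lets the user re-publish manually without redoing the translation.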
## Completion Report
After each translation, output:
```
Translation Complete
Documents Created:
- DE: [Document Title] - ID: [xxx] - URL: https://wiki.az-gruppe.com/doc/[slug]
- CZ: [Document Title] - ID: [xxx] - URL: https://wiki.az-gruppe.com/doc/[slug]
Images Processed: X of Y successfully uploaded
Items Needing Review:
- [Any sections with complex screenshots]
- [Any failed image uploads with original URLs]
- [Any unclear UI terms that were best-guessed]
```
## Language Codes
| Code | Language | Native Name |
|------|----------|-------------|
| DE | German | Deutsch |
| CZ | Czech | Čeština |
## Environment Variables
| Variable | Purpose | Source |
|----------|---------|--------|
| `OUTLINE_API_KEY` | Bearer token for wiki.az-gruppe.com API | Auto-loaded from `/run/agenix/outline-key` by upload script |
## Integration with Other Skills
| Need | Skill | When |
|------|-------|------|
| Wiki document management | outline | Managing existing translated docs |
| Browser-based content extraction | playwright / dev-browser | When webfetch cannot access content (login-required pages) |


@@ -0,0 +1,116 @@
#!/usr/bin/env bash
# Upload an image to Outline via presigned POST (two-step flow)
#
# Usage:
# upload_image_to_outline.sh <image_path> [document_id]
#
# Environment:
# OUTLINE_API_KEY - Bearer token for wiki.az-gruppe.com API
# Auto-loaded from /run/agenix/outline-key if not set
#
# Output (JSON to stdout):
# {"success": true, "attachment_url": "https://..."}
# Error (JSON to stderr):
# {"success": false, "error": "error message"}
set -euo pipefail
MAX_RETRIES=3
RETRY_DELAY=2
if [ $# -lt 1 ] || [ $# -gt 2 ]; then
echo '{"success": false, "error": "Usage: upload_image_to_outline.sh <image_path> [document_id]"}' >&2
exit 1
fi
IMAGE_PATH="$1"
DOCUMENT_ID="${2:-}"
if [ -z "${OUTLINE_API_KEY:-}" ]; then
if [ -f /run/agenix/outline-key ]; then
OUTLINE_API_KEY=$(cat /run/agenix/outline-key)
export OUTLINE_API_KEY
else
echo '{"success": false, "error": "OUTLINE_API_KEY not set and /run/agenix/outline-key not found"}' >&2
exit 1
fi
fi
# Check if file exists
if [ ! -f "$IMAGE_PATH" ]; then
echo "{\"success\": false, \"error\": \"Image file not found: $IMAGE_PATH\"}" >&2
exit 1
fi
# Extract image name and extension
IMAGE_NAME="$(basename "$IMAGE_PATH")"
EXTENSION="${IMAGE_NAME##*.}"
# Detect content type by extension
case "${EXTENSION,,}" in
png) CONTENT_TYPE="image/png" ;;
jpg|jpeg) CONTENT_TYPE="image/jpeg" ;;
gif) CONTENT_TYPE="image/gif" ;;
svg) CONTENT_TYPE="image/svg+xml" ;;
webp) CONTENT_TYPE="image/webp" ;;
*) CONTENT_TYPE="application/octet-stream" ;;
esac
FILESIZE=$(stat -c%s "$IMAGE_PATH" 2>/dev/null || stat -f%z "$IMAGE_PATH" 2>/dev/null)
if [ -z "$FILESIZE" ]; then
jq -nc --arg path "$IMAGE_PATH" '{success: false, error: ("Failed to get file size for: " + $path)}' >&2
exit 1
fi
REQUEST_BODY=$(jq -n \
--arg name "$IMAGE_NAME" \
--arg contentType "$CONTENT_TYPE" \
--argjson size "$FILESIZE" \
--arg documentId "$DOCUMENT_ID" \
'if $documentId == "" then
{name: $name, contentType: $contentType, size: $size}
else
{name: $name, contentType: $contentType, size: $size, documentId: $documentId}
end')
# Step 1: Create attachment record
RESPONSE=$(curl -s -X POST "https://wiki.az-gruppe.com/api/attachments.create" \
-H "Authorization: Bearer $OUTLINE_API_KEY" \
-H "Content-Type: application/json" \
-d "$REQUEST_BODY")
UPLOAD_URL=$(echo "$RESPONSE" | jq -r '.data.uploadUrl // empty')
ATTACHMENT_URL=$(echo "$RESPONSE" | jq -r '.data.attachment.url // empty')
if [ -z "$UPLOAD_URL" ]; then
ERROR_MSG=$(echo "$RESPONSE" | jq -r '.message // "Failed to create attachment"')
jq -nc --arg err "$ERROR_MSG" --argjson resp "$RESPONSE" \
'{success: false, error: $err, response: $resp}' >&2
exit 1
fi
FORM_ARGS=()
while IFS= read -r line; do
key=$(echo "$line" | jq -r '.key')
value=$(echo "$line" | jq -r '.value')
FORM_ARGS+=(-F "$key=$value")
done < <(echo "$RESPONSE" | jq -c '.data.form // {} | to_entries[]')
# Step 2: Upload binary to presigned URL with retry
for attempt in $(seq 1 "$MAX_RETRIES"); do
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$UPLOAD_URL" \
"${FORM_ARGS[@]}" \
-F "file=@$IMAGE_PATH")
if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ] || [ "$HTTP_CODE" = "204" ]; then
echo "{\"success\": true, \"attachment_url\": \"$ATTACHMENT_URL\"}"
exit 0
fi
if [ "$attempt" -lt "$MAX_RETRIES" ]; then
sleep "$((RETRY_DELAY * attempt))"
fi
done
echo "{\"success\": false, \"error\": \"Upload failed after $MAX_RETRIES attempts (last HTTP $HTTP_CODE)\"}" >&2
exit 1
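The JSON-on-stdout contract makes the script easy to compose with other tools. A minimal caller sketch (the attachment URL below is illustrative, not a real upload result):

```shell
# Simulated success output from upload_image_to_outline.sh (illustrative URL).
result='{"success": true, "attachment_url": "https://wiki.example.com/files/abc.png"}'

# A caller checks the success flag, then extracts the attachment URL with jq.
if [ "$(echo "$result" | jq -r '.success')" = "true" ]; then
  url=$(echo "$result" | jq -r '.attachment_url')
  echo "Markdown embed: ![diagram]($url)"
fi
```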

skills/excalidraw/SKILL.md Normal file

@@ -0,0 +1,544 @@
---
name: excalidraw
description: "Create Excalidraw diagram JSON files that make visual arguments. Use when: (1) user wants to visualize workflows, architectures, or concepts, (2) creating system diagrams, (3) generating .excalidraw files. Triggers: excalidraw, diagram, visualize, architecture diagram, system diagram."
compatibility: opencode
---
# Excalidraw Diagram Creator
Generate `.excalidraw` JSON files that **argue visually**, not just display information.
## Customization
**All colors and brand-specific styles live in one file:** `references/color-palette.md`. Read it before generating any diagram and use it as the single source of truth for all color choices — shape fills, strokes, text colors, evidence artifact backgrounds, everything.
To make this skill produce diagrams in your own brand style, edit `color-palette.md`. Everything else in this file is universal design methodology and Excalidraw best practices.
---
## Core Philosophy
**Diagrams should ARGUE, not DISPLAY.**
A diagram isn't formatted text. It's a visual argument that shows relationships, causality, and flow that words alone can't express. The shape should BE the meaning.
**The Isomorphism Test**: If you removed all text, would the structure alone communicate the concept? If not, redesign.
**The Education Test**: Could someone learn something concrete from this diagram, or does it just label boxes? A good diagram teaches—it shows actual formats, real event names, concrete examples.
---
## Depth Assessment (Do This First)
Before designing, determine what level of detail this diagram needs:
### Simple/Conceptual Diagrams
Use abstract shapes when:
- Explaining a mental model or philosophy
- The audience doesn't need technical specifics
- The concept IS the abstraction (e.g., "separation of concerns")
### Comprehensive/Technical Diagrams
Use concrete examples when:
- Diagramming a real system, protocol, or architecture
- The diagram will be used to teach or explain (e.g., YouTube video)
- The audience needs to understand what things actually look like
- You're showing how multiple technologies integrate
**For technical diagrams, you MUST include evidence artifacts** (see below).
---
## Research Mandate (For Technical Diagrams)
**Before drawing anything technical, research the actual specifications.**
If you're diagramming a protocol, API, or framework:
1. Look up the actual JSON/data formats
2. Find the real event names, method names, or API endpoints
3. Understand how the pieces actually connect
4. Use real terminology, not generic placeholders
Bad: "Protocol" → "Frontend"
Good: "AG-UI streams events (RUN_STARTED, STATE_DELTA, A2UI_UPDATE)" → "CopilotKit renders via createA2UIMessageRenderer()"
**Research makes diagrams accurate AND educational.**
---
## Evidence Artifacts
Evidence artifacts are concrete examples that prove your diagram is accurate and help viewers learn. Include them in technical diagrams.
**Types of evidence artifacts** (choose what's relevant to your diagram):
| Artifact Type | When to Use | How to Render |
|---------------|-------------|---------------|
| **Code snippets** | APIs, integrations, implementation details | Dark rectangle + syntax-colored text (see color palette for evidence artifact colors) |
| **Data/JSON examples** | Data formats, schemas, payloads | Dark rectangle + colored text (see color palette) |
| **Event/step sequences** | Protocols, workflows, lifecycles | Timeline pattern (line + dots + labels) |
| **UI mockups** | Showing actual output/results | Nested rectangles mimicking real UI |
| **Real input content** | Showing what goes IN to a system | Rectangle with sample content visible |
| **API/method names** | Real function calls, endpoints | Use actual names from docs, not placeholders |
**Example**: For a diagram about a streaming protocol, you might show:
- The actual event names from the spec (not just "Event 1", "Event 2")
- A code snippet showing how to connect
- What the streamed data actually looks like
**Example**: For a diagram about a data transformation pipeline:
- Show sample input data (actual format, not "Input")
- Show sample output data (actual format, not "Output")
- Show intermediate states if relevant
The key principle: **show what things actually look like**, not just what they're called.
---
## Multi-Zoom Architecture
Comprehensive diagrams operate at multiple zoom levels simultaneously. Think of it like a map that shows both the country borders AND the street names.
### Level 1: Summary Flow
A simplified overview showing the full pipeline or process at a glance. Often placed at the top or bottom of the diagram.
*Example*: `Input → Processing → Output` or `Client → Server → Database`
### Level 2: Section Boundaries
Labeled regions that group related components. These create visual "rooms" that help viewers understand what belongs together.
*Example*: Grouping by responsibility (Backend / Frontend), by phase (Setup / Execution / Cleanup), or by team (User / System / External)
### Level 3: Detail Inside Sections
Evidence artifacts, code snippets, and concrete examples within each section. This is where the educational value lives.
*Example*: Inside a "Backend" section, you might show the actual API response format, not just a box labeled "API Response"
**For comprehensive diagrams, aim to include all three levels.** The summary gives context, the sections organize, and the details teach.
### Bad vs Good
| Bad (Displaying) | Good (Arguing) |
|------------------|----------------|
| 5 equal boxes with labels | Each concept has a shape that mirrors its behavior |
| Card grid layout | Visual structure matches conceptual structure |
| Icons decorating text | Shapes that ARE the meaning |
| Same container for everything | Distinct visual vocabulary per concept |
| Everything in a box | Free-floating text with selective containers |
### Simple vs Comprehensive (Know Which You Need)
| Simple Diagram | Comprehensive Diagram |
|----------------|----------------------|
| Generic labels: "Input" → "Process" → "Output" | Specific: shows what the input/output actually looks like |
| Named boxes: "API", "Database", "Client" | Named boxes + examples of actual requests/responses |
| "Events" or "Messages" label | Timeline with real event/message names from the spec |
| "UI" or "Dashboard" rectangle | Mockup showing actual UI elements and content |
| ~30 seconds to explain | ~2-3 minutes of teaching content |
| Viewer learns the structure | Viewer learns the structure AND the details |
**Simple diagrams** are fine for abstract concepts, quick overviews, or when the audience already knows the details. **Comprehensive diagrams** are needed for technical architectures, tutorials, educational content, or when you want the diagram itself to teach.
---
## Container vs. Free-Floating Text
**Not every piece of text needs a shape around it.** Default to free-floating text. Add containers only when they serve a purpose.
| Use a Container When... | Use Free-Floating Text When... |
|------------------------|-------------------------------|
| It's the focal point of a section | It's a label or description |
| It needs visual grouping with other elements | It's supporting detail or metadata |
| Arrows need to connect to it | It describes something nearby |
| The shape itself carries meaning (decision diamond, etc.), or it represents a distinct "thing" in the system | It's a section title, subtitle, or annotation |
**Typography as hierarchy**: Use font size, weight, and color to create visual hierarchy without boxes. A 28px title doesn't need a rectangle around it.
**The container test**: For each boxed element, ask "Would this work as free-floating text?" If yes, remove the container.
---
## Design Process (Do This BEFORE Generating JSON)
### Step 0: Assess Depth Required
Before anything else, determine if this needs to be:
- **Simple/Conceptual**: Abstract shapes, labels, relationships (mental models, philosophies)
- **Comprehensive/Technical**: Concrete examples, code snippets, real data (systems, architectures, tutorials)
**If comprehensive**: Do research first. Look up actual specs, formats, event names, APIs.
### Step 1: Understand Deeply
Read the content. For each concept, ask:
- What does this concept **DO**? (not what IS it)
- What relationships exist between concepts?
- What's the core transformation or flow?
- **What would someone need to SEE to understand this?** (not just read about)
### Step 2: Map Concepts to Patterns
For each concept, find the visual pattern that mirrors its behavior:
| If the concept... | Use this pattern |
|-------------------|------------------|
| Spawns multiple outputs | **Fan-out** (radial arrows from center) |
| Combines inputs into one | **Convergence** (funnel, arrows merging) |
| Has hierarchy/nesting | **Tree** (lines + free-floating text) |
| Is a sequence of steps | **Timeline** (line + dots + free-floating labels) |
| Loops or improves continuously | **Spiral/Cycle** (arrow returning to start) |
| Is an abstract state or context | **Cloud** (overlapping ellipses) |
| Transforms input to output | **Assembly line** (before → process → after) |
| Compares two things | **Side-by-side** (parallel with contrast) |
| Separates into phases | **Gap/Break** (visual separation between sections) |
### Step 3: Ensure Variety
For multi-concept diagrams: **each major concept must use a different visual pattern**. No uniform cards or grids.
### Step 4: Sketch the Flow
Before JSON, mentally trace how the eye moves through the diagram. There should be a clear visual story.
### Step 5: Generate JSON
Only now create the Excalidraw elements. **See below for how to handle large diagrams.**
### Step 6: Render & Validate (MANDATORY)
After generating the JSON, you MUST run the render-view-fix loop until the diagram looks right. This is not optional — see the **Render & Validate** section below for the full process.
---
## Large / Comprehensive Diagram Strategy
**For comprehensive or technical diagrams, you MUST build the JSON one section at a time.** Do NOT attempt to generate the entire file in a single pass. This is a hard constraint — output token limits mean a comprehensive diagram easily exceeds capacity in one shot. Even if it didn't, generating everything at once leads to worse quality. Section-by-section is better in every way.
### The Section-by-Section Workflow
**Phase 1: Build each section**
1. **Create the base file** with the JSON wrapper (`type`, `version`, `appState`, `files`) and the first section of elements.
2. **Add one section per edit.** Each section gets its own dedicated pass — take your time with it. Think carefully about the layout, spacing, and how this section connects to what's already there.
3. **Use descriptive string IDs** (e.g., `"trigger_rect"`, `"arrow_fan_left"`) so cross-section references are readable.
4. **Namespace seeds by section** (e.g., section 1 uses 100xxx, section 2 uses 200xxx) to avoid collisions.
5. **Update cross-section bindings** as you go. When a new section's element needs to bind to an element from a previous section (e.g., an arrow connecting sections), edit the earlier element's `boundElements` array at the same time.
**Phase 2: Review the whole**
After all sections are in place, read through the complete JSON and check:
- Are cross-section arrows bound correctly on both ends?
- Is the overall spacing balanced, or are some sections cramped while others have too much whitespace?
- Do IDs and bindings all reference elements that actually exist?
Fix any alignment or binding issues before rendering.
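The cross-reference check is mechanical enough to script. A minimal sketch in Python (the function name and report format are illustrative, not part of this skill):

```python
import json

def check_bindings(data):
    """Report binding problems in parsed Excalidraw JSON: arrows that bind
    to missing element IDs, or bindings not mirrored in boundElements."""
    elements = data["elements"]
    by_id = {el["id"]: el for el in elements}
    problems = []
    for el in elements:
        if el.get("type") != "arrow":
            continue
        for side in ("startBinding", "endBinding"):
            binding = el.get(side)
            if not binding:
                continue
            target = by_id.get(binding["elementId"])
            if target is None:
                problems.append(f"{el['id']}: {side} points at missing element {binding['elementId']}")
            elif not any(b.get("id") == el["id"] for b in (target.get("boundElements") or [])):
                problems.append(f"{el['id']}: not listed in boundElements of {target['id']}")
    return problems

# Usage: check_bindings(json.load(open("diagram.excalidraw")))
```

An empty list means every arrow binding resolves and is mirrored on the target element.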
**Phase 3: Render & validate**
Now run the render-view-fix loop from the Render & Validate section. This is where you'll catch visual issues that aren't obvious from JSON — overlaps, clipping, imbalanced composition.
### Section Boundaries
Plan your sections around natural visual groupings from the diagram plan. A typical large diagram might split into:
- **Section 1**: Entry point / trigger
- **Section 2**: First decision or routing
- **Section 3**: Main content (hero section — may be the largest single section)
- **Section 4-N**: Remaining phases, outputs, etc.
Each section should be independently understandable: its elements, internal arrows, and any cross-references to adjacent sections.
### What NOT to Do
- **Don't generate the entire diagram in one response.** You will hit the output token limit and produce truncated, broken JSON. Even if the diagram is small enough to fit, splitting into sections produces better results.
- **Don't write a Python generator script.** The templating and coordinate math seem helpful but introduce a layer of indirection that makes debugging harder. Hand-crafted JSON with descriptive IDs is more maintainable.
---
## Visual Pattern Library
### Fan-Out (One-to-Many)
Central element with arrows radiating to multiple targets. Use for: sources, PRDs, root causes, central hubs.
```
□ → ○
```
### Convergence (Many-to-One)
Multiple inputs merging through arrows to single output. Use for: aggregation, funnels, synthesis.
```
○ ↘
○ → □
○ ↗
```
### Tree (Hierarchy)
Parent-child branching with connecting lines and free-floating text (no boxes needed). Use for: file systems, org charts, taxonomies.
```
label
├── label
│ ├── label
│ └── label
└── label
```
Use `line` elements for the trunk and branches, free-floating text for labels.
### Spiral/Cycle (Continuous Loop)
Elements in sequence with arrow returning to start. Use for: feedback loops, iterative processes, evolution.
```
□ → □
↑ ↓
□ ← □
```
### Cloud (Abstract State)
Overlapping ellipses with varied sizes. Use for: context, memory, conversations, mental states.
### Assembly Line (Transformation)
Input → Process Box → Output with clear before/after. Use for: transformations, processing, conversion.
```
○○○ → [PROCESS] → □□□
chaos order
```
### Side-by-Side (Comparison)
Two parallel structures with visual contrast. Use for: before/after, options, trade-offs.
### Gap/Break (Separation)
Visual whitespace or barrier between sections. Use for: phase changes, context resets, boundaries.
### Lines as Structure
Use lines (type: `line`, not arrows) as primary structural elements instead of boxes:
- **Timelines**: Vertical or horizontal line with small dots (10-20px ellipses) at intervals, free-floating labels beside each dot
- **Tree structures**: Vertical trunk line + horizontal branch lines, with free-floating text labels (no boxes needed)
- **Dividers**: Thin dashed lines to separate sections
- **Flow spines**: A central line that elements relate to, rather than connecting boxes
```
Timeline: Tree:
●─── Label 1 │
│ ├── item
●─── Label 2 │ ├── sub
│ │ └── sub
●─── Label 3 └── item
```
Lines + free-floating text often creates a cleaner result than boxes + contained text.
---
## Shape Meaning
Choose shape based on what it represents—or use no shape at all:
| Concept Type | Shape | Why |
|--------------|-------|-----|
| Labels, descriptions, details | **none** (free-floating text) | Typography creates hierarchy |
| Section titles, annotations | **none** (free-floating text) | Font size/weight is enough |
| Markers on a timeline | small `ellipse` (10-20px) | Visual anchor, not container |
| Start, trigger, input | `ellipse` | Soft, origin-like |
| End, output, result | `ellipse` | Completion, destination |
| Decision, condition | `diamond` | Classic decision symbol |
| Process, action, step | `rectangle` | Contained action |
| Abstract state, context | overlapping `ellipse` | Fuzzy, cloud-like |
| Hierarchy node | lines + text (no boxes) | Structure through lines |
**Rule**: Default to no container. Add shapes only when they carry meaning. Aim for <30% of text elements to be inside containers.
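The <30% guideline is easy to verify mechanically once the JSON exists. A small sketch (the function name is illustrative):

```python
def container_ratio(data):
    """Fraction of text elements in parsed Excalidraw JSON that sit
    inside a container (containerId set)."""
    texts = [el for el in data["elements"]
             if el.get("type") == "text" and not el.get("isDeleted")]
    if not texts:
        return 0.0
    contained = sum(1 for el in texts if el.get("containerId"))
    return contained / len(texts)
```

Run it against the finished file; a result above 0.3 suggests some boxed text should become free-floating.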
---
## Color as Meaning
Colors encode information, not decoration. Every color choice should come from `references/color-palette.md` — the semantic shape colors, text hierarchy colors, and evidence artifact colors are all defined there.
**Key principles:**
- Each semantic purpose (start, end, decision, AI, error, etc.) has a specific fill/stroke pair
- Free-floating text uses color for hierarchy (titles, subtitles, details — each at a different level)
- Evidence artifacts (code snippets, JSON examples) use their own dark background + colored text scheme
- Always pair a darker stroke with a lighter fill for contrast
**Do not invent new colors.** If a concept doesn't fit an existing semantic category, use Primary/Neutral or Secondary.
---
## Modern Aesthetics
For clean, professional diagrams:
### Roughness
- `roughness: 0` — Clean, crisp edges. Use for modern/technical diagrams.
- `roughness: 1` — Hand-drawn, organic feel. Use for brainstorming/informal diagrams.
**Default to 0** for most professional use cases.
### Stroke Width
- `strokeWidth: 1` — Thin, elegant. Good for lines, dividers, subtle connections.
- `strokeWidth: 2` — Standard. Good for shapes and primary arrows.
- `strokeWidth: 3` — Bold. Use sparingly for emphasis (main flow line, key connections).
### Opacity
**Always use `opacity: 100` for all elements.** Use color, size, and stroke width to create hierarchy instead of transparency.
### Small Markers Instead of Shapes
Instead of full shapes, use small dots (10-20px ellipses) as:
- Timeline markers
- Bullet points
- Connection nodes
- Visual anchors for free-floating text
---
## Layout Principles
### Hierarchy Through Scale
- **Hero**: 300×150 - visual anchor, most important
- **Primary**: 180×90
- **Secondary**: 120×60
- **Small**: 60×40
### Whitespace = Importance
The most important element has the most empty space around it (200px+).
### Flow Direction
Guide the eye: typically left→right or top→bottom for sequences, radial for hub-and-spoke.
### Connections Required
Position alone doesn't show relationships. If A relates to B, there must be an arrow.
---
## Text Rules
**CRITICAL**: The JSON `text` property contains ONLY readable words.
```json
{
"id": "myElement1",
"text": "Start",
"originalText": "Start"
}
```
Settings: `fontSize: 16`, `fontFamily: 3`, `textAlign: "center"`, `verticalAlign: "middle"`
---
## JSON Structure
```json
{
"type": "excalidraw",
"version": 2,
"source": "https://excalidraw.com",
"elements": [...],
"appState": {
"viewBackgroundColor": "#ffffff",
"gridSize": 20
},
"files": {}
}
```
## Element Templates
See `references/element-templates.md` for copy-paste JSON templates for each element type (text, line, dot, rectangle, arrow). Pull colors from `references/color-palette.md` based on each element's semantic purpose.
---
## Render & Validate (MANDATORY)
You cannot judge a diagram from JSON alone. After generating or editing the Excalidraw JSON, you MUST render it to PNG, view the image, and fix what you see — in a loop until it's right. This is a core part of the workflow, not a final check.
### How to Render
Run the render script from the skill's `references/` directory:
```bash
python3 <skill-references-dir>/render_excalidraw.py <path-to-file.excalidraw>
```
This outputs a PNG next to the `.excalidraw` file. Then use the **Read tool** on the PNG to actually view it.
### The Loop
After generating the initial JSON, run this cycle:
**1. Render & View** — Run the render script, then Read the PNG.
**2. Audit against your original vision** — Before looking for bugs, compare the rendered result to what you designed in Steps 1-4. Ask:
- Does the visual structure match the conceptual structure you planned?
- Does each section use the pattern you intended (fan-out, convergence, timeline, etc.)?
- Does the eye flow through the diagram in the order you designed?
- Is the visual hierarchy correct — hero elements dominant, supporting elements smaller?
- For technical diagrams: are the evidence artifacts (code snippets, data examples) readable and properly placed?
**3. Check for visual defects:**
- Text clipped by or overflowing its container
- Text or shapes overlapping other elements
- Arrows crossing through elements instead of routing around them
- Arrows landing on the wrong element or pointing into empty space
- Labels floating ambiguously (not clearly anchored to what they describe)
- Uneven spacing between elements that should be evenly spaced
- Sections with too much whitespace next to sections that are too cramped
- Text too small to read at the rendered size
- Overall composition feels lopsided or unbalanced
**4. Fix** — Edit the JSON to address everything you found. Common fixes:
- Widen containers when text is clipped
- Adjust `x`/`y` coordinates to fix spacing and alignment
- Add intermediate waypoints to arrow `points` arrays to route around elements
- Reposition labels closer to the element they describe
- Resize elements to rebalance visual weight across sections
**5. Re-render & re-view** — Run the render script again and Read the new PNG.
**6. Repeat** — Keep cycling until the diagram passes both the vision check (Step 2) and the defect check (Step 3). Typically takes 2-4 iterations. Don't stop after one pass just because there are no critical bugs — if the composition could be better, improve it.
### When to Stop
The loop is done when:
- The rendered diagram matches the conceptual design from your planning steps
- No text is clipped, overlapping, or unreadable
- Arrows route cleanly and connect to the right elements
- Spacing is consistent and the composition is balanced
- You'd be comfortable showing it to someone without caveats
---
## Quality Checklist
### Depth & Evidence (Check First for Technical Diagrams)
1. **Research done**: Did you look up actual specs, formats, event names?
2. **Evidence artifacts**: Are there code snippets, JSON examples, or real data?
3. **Multi-zoom**: Does it have summary flow + section boundaries + detail?
4. **Concrete over abstract**: Real content shown, not just labeled boxes?
5. **Educational value**: Could someone learn something concrete from this?
### Conceptual
6. **Isomorphism**: Does each visual structure mirror its concept's behavior?
7. **Argument**: Does the diagram SHOW something text alone couldn't?
8. **Variety**: Does each major concept use a different visual pattern?
9. **No uniform containers**: Avoided card grids and equal boxes?
### Container Discipline
10. **Minimal containers**: Could any boxed element work as free-floating text instead?
11. **Lines as structure**: Are tree/timeline patterns using lines + text rather than boxes?
12. **Typography hierarchy**: Are font size and color creating visual hierarchy (reducing need for boxes)?
### Structural
13. **Connections**: Every relationship has an arrow or line
14. **Flow**: Clear visual path for the eye to follow
15. **Hierarchy**: Important elements are larger/more isolated
### Technical
16. **Text clean**: `text` contains only readable words
17. **Font**: `fontFamily: 3`
18. **Roughness**: `roughness: 0` for clean/modern (unless hand-drawn style requested)
19. **Opacity**: `opacity: 100` for all elements (no transparency)
20. **Container ratio**: <30% of text elements should be inside containers
### Visual Validation (Render Required)
21. **Rendered to PNG**: Diagram has been rendered and visually inspected
22. **No text overflow**: All text fits within its container
23. **No overlapping elements**: Shapes and text don't overlap unintentionally
24. **Even spacing**: Similar elements have consistent spacing
25. **Arrows land correctly**: Arrows connect to intended elements without crossing others
26. **Readable at export size**: Text is legible in the rendered PNG
27. **Balanced composition**: No large empty voids or overcrowded regions


@@ -0,0 +1,67 @@
# Color Palette & Brand Style
**This is the single source of truth for all colors and brand-specific styles.** To customize diagrams for your own brand, edit this file — everything else in the skill is universal.
---
## Shape Colors (Semantic)
Colors encode meaning, not decoration. Each semantic purpose has a fill/stroke pair.
| Semantic Purpose | Fill | Stroke |
|------------------|------|--------|
| Primary/Neutral | `#3b82f6` | `#1e3a5f` |
| Secondary | `#60a5fa` | `#1e3a5f` |
| Tertiary | `#93c5fd` | `#1e3a5f` |
| Start/Trigger | `#fed7aa` | `#c2410c` |
| End/Success | `#a7f3d0` | `#047857` |
| Warning/Reset | `#fee2e2` | `#dc2626` |
| Decision | `#fef3c7` | `#b45309` |
| AI/LLM | `#ddd6fe` | `#6d28d9` |
| Inactive/Disabled | `#dbeafe` | `#1e40af` (use dashed stroke) |
| Error | `#fecaca` | `#b91c1c` |
**Rule**: Always pair a darker stroke with a lighter fill for contrast.
---
## Text Colors (Hierarchy)
Use color on free-floating text to create visual hierarchy without containers.
| Level | Color | Use For |
|-------|-------|---------|
| Title | `#1e40af` | Section headings, major labels |
| Subtitle | `#3b82f6` | Subheadings, secondary labels |
| Body/Detail | `#64748b` | Descriptions, annotations, metadata |
| On light fills | `#374151` | Text inside light-colored shapes |
| On dark fills | `#ffffff` | Text inside dark-colored shapes |
---
## Evidence Artifact Colors
Used for code snippets, data examples, and other concrete evidence inside technical diagrams.
| Artifact | Background | Text Color |
|----------|-----------|------------|
| Code snippet | `#1e293b` | Syntax-colored (language-appropriate) |
| JSON/data example | `#1e293b` | `#22c55e` (green) |
---
## Default Stroke & Line Colors
| Element | Color |
|---------|-------|
| Arrows | Use the stroke color of the source element's semantic purpose |
| Structural lines (dividers, trees, timelines) | Primary stroke (`#1e3a5f`) or Slate (`#64748b`) |
| Marker dots (fill + stroke) | Primary fill (`#3b82f6`) |
---
## Background
| Property | Value |
|----------|-------|
| Canvas background | `#ffffff` |


@@ -0,0 +1,182 @@
# Element Templates
Copy-paste JSON templates for each Excalidraw element type. The `strokeColor` and `backgroundColor` values are placeholders — always pull actual colors from `color-palette.md` based on the element's semantic purpose.
## Free-Floating Text (no container)
```json
{
"type": "text",
"id": "label1",
"x": 100, "y": 100,
"width": 200, "height": 25,
"text": "Section Title",
"originalText": "Section Title",
"fontSize": 20,
"fontFamily": 3,
"textAlign": "left",
"verticalAlign": "top",
"strokeColor": "<title color from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 11111,
"version": 1,
"versionNonce": 22222,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"containerId": null,
"lineHeight": 1.25
}
```
## Line (structural, not arrow)
```json
{
"type": "line",
"id": "line1",
"x": 100, "y": 100,
"width": 0, "height": 200,
"strokeColor": "<structural line color from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 44444,
"version": 1,
"versionNonce": 55555,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"points": [[0, 0], [0, 200]]
}
```
## Small Marker Dot
```json
{
"type": "ellipse",
"id": "dot1",
"x": 94, "y": 94,
"width": 12, "height": 12,
"strokeColor": "<marker dot color from palette>",
"backgroundColor": "<marker dot color from palette>",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 66666,
"version": 1,
"versionNonce": 77777,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false
}
```
## Rectangle
```json
{
"type": "rectangle",
"id": "elem1",
"x": 100, "y": 100, "width": 180, "height": 90,
"strokeColor": "<stroke from palette based on semantic purpose>",
"backgroundColor": "<fill from palette based on semantic purpose>",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 12345,
"version": 1,
"versionNonce": 67890,
"isDeleted": false,
"groupIds": [],
"boundElements": [{"id": "text1", "type": "text"}],
"link": null,
"locked": false,
"roundness": {"type": 3}
}
```
## Text (centered in shape)
```json
{
"type": "text",
"id": "text1",
"x": 130, "y": 132,
"width": 120, "height": 25,
"text": "Process",
"originalText": "Process",
"fontSize": 16,
"fontFamily": 3,
"textAlign": "center",
"verticalAlign": "middle",
"strokeColor": "<text color — match parent shape's stroke or use 'on light/dark fills' from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 11111,
"version": 1,
"versionNonce": 22222,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"containerId": "elem1",
"lineHeight": 1.25
}
```
## Arrow
```json
{
"type": "arrow",
"id": "arrow1",
"x": 282, "y": 145, "width": 118, "height": 0,
"strokeColor": "<arrow color — typically matches source element's stroke from palette>",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"angle": 0,
"seed": 33333,
"version": 1,
"versionNonce": 44444,
"isDeleted": false,
"groupIds": [],
"boundElements": null,
"link": null,
"locked": false,
"points": [[0, 0], [118, 0]],
"startBinding": {"elementId": "elem1", "focus": 0, "gap": 2},
"endBinding": {"elementId": "elem2", "focus": 0, "gap": 2},
"startArrowhead": null,
"endArrowhead": "arrow"
}
```
For curves: use 3+ points in the `points` array.
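For example, a gentle curve that routes over an element can use a single midpoint waypoint (coordinates illustrative, not taken from the templates above):

```json
"points": [[0, 0], [60, -45], [118, 0]]
```

Remember that y increases downward in Excalidraw, so the negative y in the middle point bows the arrow upward.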


@@ -0,0 +1,71 @@
# Excalidraw JSON Schema
## Element Types
| Type | Use For |
|------|---------|
| `rectangle` | Processes, actions, components |
| `ellipse` | Entry/exit points, external systems |
| `diamond` | Decisions, conditionals |
| `arrow` | Connections between shapes |
| `text` | Labels inside shapes |
| `line` | Non-arrow connections |
| `frame` | Grouping containers |
## Common Properties
All elements share these:
| Property | Type | Description |
|----------|------|-------------|
| `id` | string | Unique identifier |
| `type` | string | Element type |
| `x`, `y` | number | Position in pixels |
| `width`, `height` | number | Size in pixels |
| `strokeColor` | string | Border color (hex) |
| `backgroundColor` | string | Fill color (hex or "transparent") |
| `fillStyle` | string | "solid", "hachure", "cross-hatch" |
| `strokeWidth` | number | 1, 2, or 4 |
| `strokeStyle` | string | "solid", "dashed", "dotted" |
| `roughness` | number | 0 (smooth), 1 (default), 2 (rough) |
| `opacity` | number | 0-100 |
| `seed` | number | Random seed for roughness |
## Text-Specific Properties
| Property | Description |
|----------|-------------|
| `text` | The display text |
| `originalText` | Text before automatic wrapping (usually identical to `text`) |
| `fontSize` | Size in pixels (16-20 recommended) |
| `fontFamily` | 3 for monospace (use this) |
| `textAlign` | "left", "center", "right" |
| `verticalAlign` | "top", "middle", "bottom" |
| `containerId` | ID of parent shape |
## Arrow-Specific Properties
| Property | Description |
|----------|-------------|
| `points` | Array of [x, y] coordinates |
| `startBinding` | Connection to start shape |
| `endBinding` | Connection to end shape |
| `startArrowhead` | null, "arrow", "bar", "dot", "triangle" |
| `endArrowhead` | null, "arrow", "bar", "dot", "triangle" |
## Binding Format
```json
{
"elementId": "shapeId",
"focus": 0,
"gap": 2
}
```
## Rectangle Roundness
Add for rounded corners:
```json
"roundness": { "type": 3 }
```
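A file can be checked against the common-property table before rendering. A hedged sketch — the required-key set is read off the table above, and later Excalidraw versions may differ:

```python
# Keys from the "Common Properties" table above.
REQUIRED_COMMON = {
    "id", "type", "x", "y", "width", "height",
    "strokeColor", "backgroundColor", "fillStyle",
    "strokeWidth", "strokeStyle", "roughness", "opacity", "seed",
}

def missing_common_props(element: dict) -> set:
    """Return which properties from the common-property table are absent."""
    return REQUIRED_COMMON - element.keys()

print(sorted(missing_common_props({"id": "r1", "type": "rectangle"})))
```

Running this over every element before handing the file to a renderer turns silent rendering glitches into explicit missing-key reports.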


@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""Render Excalidraw JSON to PNG using Playwright + headless Chromium.
Usage:
python3 render_excalidraw.py <path-to-file.excalidraw> [--output path.png] [--scale 2] [--width 1920]
Dependencies (playwright, chromium) are provided by the Nix flake / direnv environment.
"""
from __future__ import annotations
import argparse
import json
import sys
from pathlib import Path
def validate_excalidraw(data: dict) -> list[str]:
"""Validate Excalidraw JSON structure. Returns list of errors (empty = valid)."""
errors: list[str] = []
if data.get("type") != "excalidraw":
errors.append(f"Expected type 'excalidraw', got '{data.get('type')}'")
if "elements" not in data:
errors.append("Missing 'elements' array")
elif not isinstance(data["elements"], list):
errors.append("'elements' must be an array")
elif len(data["elements"]) == 0:
errors.append("'elements' array is empty — nothing to render")
return errors
def compute_bounding_box(elements: list[dict]) -> tuple[float, float, float, float]:
"""Compute bounding box (min_x, min_y, max_x, max_y) across all elements."""
min_x = float("inf")
min_y = float("inf")
max_x = float("-inf")
max_y = float("-inf")
for el in elements:
if el.get("isDeleted"):
continue
x = el.get("x", 0)
y = el.get("y", 0)
w = el.get("width", 0)
h = el.get("height", 0)
# For arrows/lines, points array defines the shape relative to x,y
if el.get("type") in ("arrow", "line") and "points" in el:
for px, py in el["points"]:
min_x = min(min_x, x + px)
min_y = min(min_y, y + py)
max_x = max(max_x, x + px)
max_y = max(max_y, y + py)
else:
min_x = min(min_x, x)
min_y = min(min_y, y)
max_x = max(max_x, x + abs(w))
max_y = max(max_y, y + abs(h))
if min_x == float("inf"):
return (0, 0, 800, 600)
return (min_x, min_y, max_x, max_y)
def render(
excalidraw_path: Path,
output_path: Path | None = None,
scale: int = 2,
max_width: int = 1920,
) -> Path:
"""Render an .excalidraw file to PNG. Returns the output PNG path."""
# Import playwright here so validation errors show before import errors
try:
from playwright.sync_api import sync_playwright
except ImportError:
print("ERROR: playwright not installed.", file=sys.stderr)
print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
sys.exit(1)
# Read and validate
raw = excalidraw_path.read_text(encoding="utf-8")
try:
data = json.loads(raw)
except json.JSONDecodeError as e:
print(f"ERROR: Invalid JSON in {excalidraw_path}: {e}", file=sys.stderr)
sys.exit(1)
errors = validate_excalidraw(data)
if errors:
print("ERROR: Invalid Excalidraw file:", file=sys.stderr)
for err in errors:
print(f" - {err}", file=sys.stderr)
sys.exit(1)
# Compute viewport size from element bounding box
elements = [e for e in data["elements"] if not e.get("isDeleted")]
min_x, min_y, max_x, max_y = compute_bounding_box(elements)
padding = 80
diagram_w = max_x - min_x + padding * 2
diagram_h = max_y - min_y + padding * 2
# Cap viewport width, let height be natural
vp_width = min(int(diagram_w), max_width)
vp_height = max(int(diagram_h), 600)
# Output path
if output_path is None:
output_path = excalidraw_path.with_suffix(".png")
# Template path (same directory as this script)
template_path = Path(__file__).parent / "render_template.html"
if not template_path.exists():
print(f"ERROR: Template not found at {template_path}", file=sys.stderr)
sys.exit(1)
template_url = template_path.as_uri()
with sync_playwright() as p:
try:
browser = p.chromium.launch(headless=True)
except Exception as e:
if "Executable doesn't exist" in str(e) or "browserType.launch" in str(e):
print("ERROR: Chromium not installed for Playwright.", file=sys.stderr)
print("Ensure the Nix dev shell is active (direnv allow).", file=sys.stderr)
sys.exit(1)
raise
page = browser.new_page(
viewport={"width": vp_width, "height": vp_height},
device_scale_factor=scale,
)
# Load the template
page.goto(template_url)
# Wait for the ES module to load (imports from esm.sh)
page.wait_for_function("window.__moduleReady === true", timeout=30000)
# Inject the diagram data and render
json_str = json.dumps(data)
result = page.evaluate(f"window.renderDiagram({json_str})")
if not result or not result.get("success"):
error_msg = (
result.get("error", "Unknown render error")
if result
else "renderDiagram returned null"
)
print(f"ERROR: Render failed: {error_msg}", file=sys.stderr)
browser.close()
sys.exit(1)
# Wait for render completion signal
page.wait_for_function("window.__renderComplete === true", timeout=15000)
# Screenshot the SVG element
svg_el = page.query_selector("#root svg")
if svg_el is None:
print("ERROR: No SVG element found after render.", file=sys.stderr)
browser.close()
sys.exit(1)
svg_el.screenshot(path=str(output_path))
browser.close()
return output_path
def main() -> None:
"""Entry point for rendering Excalidraw JSON files to PNG."""
parser = argparse.ArgumentParser(description="Render Excalidraw JSON to PNG")
parser.add_argument("input", type=Path, help="Path to .excalidraw JSON file")
parser.add_argument(
"--output",
"-o",
type=Path,
default=None,
help="Output PNG path (default: same name with .png)",
)
parser.add_argument(
"--scale", "-s", type=int, default=2, help="Device scale factor (default: 2)"
)
parser.add_argument(
"--width",
"-w",
type=int,
default=1920,
help="Max viewport width (default: 1920)",
)
args = parser.parse_args()
if not args.input.exists():
print(f"ERROR: File not found: {args.input}", file=sys.stderr)
sys.exit(1)
png_path = render(args.input, args.output, args.scale, args.width)
print(str(png_path))
if __name__ == "__main__":
main()
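The arrow branch of `compute_bounding_box` is the subtle part: `points` are offsets from the element's `x`/`y`, not absolute coordinates. The rule restated in isolation:

```python
def arrow_bbox(x, y, points):
    """Bounding box of an arrow whose points are offsets from (x, y)."""
    xs = [x + px for px, _ in points]
    ys = [y + py for _, py in points]
    return (min(xs), min(ys), max(xs), max(ys))

# The arrow template earlier: x=282, y=145, points [[0, 0], [118, 0]]
print(arrow_bbox(282, 145, [[0, 0], [118, 0]]))  # (282, 145, 400, 145)
```

Note that point offsets may be negative, so the minimum can land left of or above the element's own `x`/`y`.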


@@ -0,0 +1,57 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body { background: #ffffff; overflow: hidden; }
#root { display: inline-block; }
#root svg { display: block; }
</style>
</head>
<body>
<div id="root"></div>
<script type="module">
import { exportToSvg } from "https://esm.sh/@excalidraw/excalidraw?bundle";
window.renderDiagram = async function(jsonData) {
try {
const data = typeof jsonData === "string" ? JSON.parse(jsonData) : jsonData;
const elements = data.elements || [];
const appState = data.appState || {};
const files = data.files || {};
// Force white background in appState
appState.viewBackgroundColor = appState.viewBackgroundColor || "#ffffff";
appState.exportWithDarkMode = false;
const svg = await exportToSvg({
elements: elements,
appState: {
...appState,
exportBackground: true,
},
files: files,
});
// Clear any previous render
const root = document.getElementById("root");
root.innerHTML = "";
root.appendChild(svg);
window.__renderComplete = true;
window.__renderError = null;
return { success: true, width: svg.getAttribute("width"), height: svg.getAttribute("height") };
} catch (err) {
window.__renderComplete = true;
window.__renderError = err.message;
return { success: false, error: err.message };
}
};
// Signal that the module is loaded and ready
window.__moduleReady = true;
</script>
</body>
</html>


@@ -1,10 +1,16 @@
---
name: mem0-memory
description: "Store and retrieve memories using Mem0 REST API. Use when: (1) storing information for future recall, (2) searching past conversations or facts, (3) managing user/agent memory contexts, (4) building conversational AI with persistent memory. Triggers on keywords like 'remember', 'recall', 'memory', 'store for later', 'what did I say about'."
description: "DEPRECATED: Replaced by opencode-memory plugin. See skills/memory/SKILL.md for current memory system."
compatibility: opencode
---
# Mem0 Memory
> ⚠️ **DEPRECATED**
>
> This skill is deprecated. The memory system has been replaced by the opencode-memory plugin.
>
> **See:** `skills/memory/SKILL.md` for the current memory system.
# Mem0 Memory (Legacy)
Store and retrieve memories via Mem0 REST API at `http://localhost:8000`.
@@ -108,6 +114,36 @@ Combine scopes for fine-grained control:
}
```
## Memory Categories
Memories are classified into 5 categories for organization:
| Category | Definition | Obsidian Path | Example |
|----------|------------|---------------|---------|
| `preference` | Personal preferences | `80-memory/preferences/` | UI settings, workflow styles |
| `fact` | Objective information | `80-memory/facts/` | Tech stack, role, constraints |
| `decision` | Choices with rationale | `80-memory/decisions/` | Tool selections, architecture |
| `entity` | People, orgs, systems | `80-memory/entities/` | Contacts, APIs, concepts |
| `other` | Everything else | `80-memory/other/` | General learnings |
### Metadata Pattern
Include category in metadata when storing:
```json
{
"messages": [...],
"user_id": "user123",
"metadata": {
"category": "preference",
"source": "explicit"
}
}
```
- `category`: One of preference, fact, decision, entity, other
- `source`: "explicit" (user requested) or "auto-capture" (automatic)
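The metadata pattern can be enforced at payload-construction time. A sketch — category names come from the table above, but the helper itself is hypothetical, not part of the Mem0 API:

```python
VALID_CATEGORIES = {"preference", "fact", "decision", "entity", "other"}

def memory_payload(messages, user_id, category, source="explicit"):
    """Build a POST /memories body following the metadata pattern above."""
    if category not in VALID_CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    if source not in ("explicit", "auto-capture"):
        raise ValueError(f"unknown source: {source}")
    return {
        "messages": messages,
        "user_id": user_id,
        "metadata": {"category": category, "source": source},
    }

print(memory_payload([{"role": "user", "content": "hi"}],
                     "user123", "preference"))
```

Validating before the HTTP call keeps malformed categories from silently landing in Mem0's metadata store.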
## Workflow Patterns
### Pattern 1: Remember User Preferences
@@ -137,6 +173,43 @@ curl -X POST http://localhost:8000/memories \
-d '{"messages":[...], "run_id":"SESSION_ID"}'
```
## Dual-Layer Sync
Memories are stored in BOTH Mem0 AND the Obsidian CODEX vault for redundancy and accessibility.
### Sync Pattern
1. **Store in Mem0 first** - Get `mem0_id` from response
2. **Create Obsidian note** - In `80-memory/<category>/` using memory template
3. **Cross-reference**:
- Add `mem0_id` to Obsidian note frontmatter
- Update Mem0 metadata with `obsidian_ref` (file path)
### Example Flow
```bash
# 1. Store in Mem0
RESPONSE=$(curl -s -X POST http://localhost:8000/memories \
-d '{"messages":[{"role":"user","content":"I prefer dark mode"}],"user_id":"m3tam3re","metadata":{"category":"preference","source":"explicit"}}')
# 2. Extract mem0_id
MEM0_ID=$(echo "$RESPONSE" | jq -r '.id')
# 3. Create Obsidian note (via REST API or MCP)
# Path: 80-memory/preferences/prefers-dark-mode.md
# Frontmatter includes: mem0_id: $MEM0_ID
# 4. Update Mem0 with Obsidian reference
curl -X PUT http://localhost:8000/memories/$MEM0_ID \
-d '{"metadata":{"obsidian_ref":"80-memory/preferences/prefers-dark-mode.md"}}'
```
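Steps 2-3 of the sync pattern reduce to pure string-building and can be factored out. A hypothetical helper — folder names follow the category table, and the frontmatter is trimmed to just the linking fields:

```python
FOLDERS = {"preference": "preferences", "fact": "facts",
           "decision": "decisions", "entity": "entities", "other": "other"}

def cross_reference(mem0_id: str, category: str, slug: str):
    """Return (vault path, note frontmatter, Mem0 metadata update) that
    cross-link the two layers per steps 2-3 of the sync pattern."""
    path = f"80-memory/{FOLDERS[category]}/{slug}.md"
    frontmatter = (f"---\ntype: memory\ncategory: {category}\n"
                   f"mem0_id: {mem0_id}\n---")
    update = {"metadata": {"obsidian_ref": path}}
    return path, frontmatter, update

path, fm, update = cross_reference("mem_abc123", "preference",
                                   "prefers-dark-mode")
print(path)
```

Keeping the path and the `obsidian_ref` value derived from one function means the two layers cannot drift apart on spelling.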
### When Obsidian Unavailable
- Store in Mem0 only
- Log sync failure
- Retry on next access
## Response Format
Memory objects include:
@@ -161,6 +234,45 @@ Verify API is running:
curl http://localhost:8000/health
```
### Pre-Operation Check
Before any memory operation, verify Mem0 is running:
```bash
if ! curl -s http://localhost:8000/health > /dev/null 2>&1; then
echo "WARNING: Mem0 unavailable. Memory operations skipped."
# Continue without memory features
fi
```
## Error Handling
### Mem0 Unavailable
When `curl http://localhost:8000/health` fails:
- Skip all memory operations
- Warn user: "Memory system unavailable. Mem0 not running at localhost:8000"
- Continue with degraded functionality
### Obsidian Unavailable
When vault sync fails:
- Store in Mem0 only
- Log: "Obsidian sync failed for memory [id]"
- Do not block user workflow
### API Errors
| Status | Meaning | Action |
|--------|---------|--------|
| 400 | Bad request | Check JSON format, required fields |
| 404 | Memory not found | Memory may have been deleted |
| 500 | Server error | Retry, check Mem0 logs |
### Graceful Degradation
Always continue core functionality even if the memory system fails. Memory is an enhancement, not a requirement.
## API Reference
See [references/api_reference.md](references/api_reference.md) for complete OpenAPI schema.

skills/obsidian/SKILL.md

@@ -0,0 +1,337 @@
---
name: obsidian
description: "Obsidian Local REST API integration for knowledge management. Use when: (1) Creating, reading, updating, or deleting notes in Obsidian vault, (2) Searching vault content by title, content, or tags, (3) Managing daily notes and journaling, (4) Working with WikiLinks and vault metadata. Triggers: 'Obsidian', 'note', 'vault', 'WikiLink', 'daily note', 'journal', 'create note'."
compatibility: opencode
---
# Obsidian
Knowledge management integration via Obsidian Local REST API for vault operations, note CRUD, search, and daily notes.
## Prerequisites
- **Obsidian Local REST API plugin** installed and enabled in Obsidian
- **API server running** on default port `27124` (or configured custom port)
- **Vault path** configured in plugin settings
- **API key** set (optional, if authentication enabled)
API endpoints available at `http://127.0.0.1:27124` by default.
## Core Workflows
### List Vault Files
Get list of all files in vault:
```bash
curl -X GET "http://127.0.0.1:27124/list"
```
Returns array of file objects with `path`, `mtime`, `ctime`, `size`.
### Get File Metadata
Retrieve metadata for a specific file:
```bash
curl -X GET "http://127.0.0.1:27124/get-file-info?path=Note%20Title.md"
```
Returns file metadata including tags, links, frontmatter.
### Create Note
Create a new note in the vault:
```bash
curl -X POST "http://127.0.0.1:27124/create-note" \
-H "Content-Type: application/json" \
-d '{"content": "# Note Title\n\nNote content..."}'
```
Use `path` parameter for specific location:
```json
{
"content": "# Note Title\n\nNote content...",
"path": "subdirectory/Note Title.md"
}
```
### Read Note
Read note content by path:
```bash
curl -X GET "http://127.0.0.1:27124/read-note?path=Note%20Title.md"
```
Returns note content as plain text or structured JSON with frontmatter parsing.
### Update Note
Modify existing note:
```bash
curl -X PUT "http://127.0.0.1:27124/update-note" \
-H "Content-Type: application/json" \
-d '{"path": "Note Title.md", "content": "# Updated Title\n\nNew content..."}'
```
### Delete Note
Remove note from vault:
```bash
curl -X DELETE "http://127.0.0.1:27124/delete-note?path=Note%20Title.md"
```
**Warning**: This operation is irreversible. Confirm with user before executing.
### Search Notes
Find notes by content, title, or tags:
```bash
# Content search
curl -X GET "http://127.0.0.1:27124/search?q=search%20term"
# Search with parameters
curl -X GET "http://127.0.0.1:27124/search?q=search%20term&path=subdirectory&context-length=100"
```
Returns array of matches with file path and context snippets.
### Daily Notes
#### Get Daily Note
Retrieve or create daily note for specific date:
```bash
# Today
curl -X GET "http://127.0.0.1:27124/daily-note"
# Specific date (YYYY-MM-DD)
curl -X GET "http://127.0.0.1:27124/daily-note?date=2026-02-03"
```
Returns daily note content or creates using Obsidian's Daily Notes template.
#### Update Daily Note
Modify today's daily note:
```bash
curl -X PUT "http://127.0.0.1:27124/daily-note" \
-H "Content-Type: application/json" \
-d '{"content": "## Journal\n\nToday I learned..."}'
```
### Get Vault Info
Retrieve vault metadata:
```bash
curl -X GET "http://127.0.0.1:27124/vault-info"
```
Returns vault path, file count, and configuration details.
## Note Structure Patterns
### Frontmatter Conventions
Use consistent frontmatter for note types:
```yaml
---
date: 2026-02-03
created: 2026-02-03T10:30:00Z
type: note
tags: [tag1, tag2]
status: active
---
```
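Generating frontmatter programmatically avoids a common YAML pitfall: a bare `#tag` after `tags:` is parsed as a YAML comment, so tags should be emitted as a list. A minimal sketch:

```python
def frontmatter(date, note_type, tags, status="active"):
    """Render a frontmatter block; tags go in as a YAML list, since a
    bare `#tag` value would be parsed as a YAML comment."""
    lines = [
        "---",
        f"date: {date}",
        f"type: {note_type}",
        "tags: [" + ", ".join(tags) + "]",
        f"status: {status}",
        "---",
    ]
    return "\n".join(lines)

print(frontmatter("2026-02-03", "note", ["tag1", "tag2"]))
```

The `#` prefix stays reserved for inline tags in the note body, where Obsidian does expect it.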
### WikiLinks
Reference other notes using Obsidian WikiLinks:
- `[[Note Title]]` - Link to note by title
- `[[Note Title|Alias]]` - Link with custom display text
- `[[Note Title#Heading]]` - Link to specific heading
- `![[Image.png]]` - Embed images or media
### Tagging
Use tags for categorization:
- `#tag` - Single-word tag
- `#nested/tag` - Hierarchical tags
- Tags in frontmatter for metadata
- Tags in content for inline categorization
## Workflow Examples
### Create Brainstorm Note
```bash
curl -X POST "http://127.0.0.1:27124/create-note" \
-H "Content-Type: application/json" \
-d '{
"path": "03-resources/brainstorms/2026-02-03-Topic.md",
"content": "---\ndate: 2026-02-03\ncreated: 2026-02-03T10:30:00Z\ntype: brainstorm\nframework: pros-cons\nstatus: draft\ntags: [brainstorm, pros-cons]\n---\n\n# Topic\n\n## Context\n\n## Options\n\n## Decision\n"
}'
```
### Append to Daily Journal
```bash
# Get current daily note
NOTE=$(curl -s "http://127.0.0.1:27124/daily-note")
# Append content (jq builds the JSON body, safely escaping quotes and newlines)
jq -n --arg content "${NOTE}

## Journal Entry

Learned about Obsidian API integration." '{content: $content}' \
  | curl -X PUT "http://127.0.0.1:27124/daily-note" \
      -H "Content-Type: application/json" -d @-
```
### Search and Link Notes
```bash
# Search for related notes
curl -s "http://127.0.0.1:27124/search?q=Obsidian"
# Create note with WikiLinks to found notes
curl -X POST "http://127.0.0.1:27124/create-note" \
-H "Content-Type: application/json" \
-d '{
"path": "02-areas/Obsidian API Guide.md",
"content": "# Obsidian API Guide\n\nSee [[API Endpoints]] and [[Workflows]] for details."
}'
```
## Integration with Other Skills
| From Obsidian | To skill | Handoff pattern |
|--------------|----------|----------------|
| Note created | brainstorming | Create brainstorm note with frontmatter |
| Daily note updated | reflection | Append conversation analysis to journal |
| Research note | research | Save research findings with tags |
| Project note | task-management | Link tasks to project notes |
| Plan document | plan-writing | Save generated plan to vault |
| Memory note | memory | Create/read memory notes in 80-memory/ |
## Best Practices
1. **Use paths consistently** - Follow PARA structure or vault conventions
2. **Include frontmatter** - Enables search and metadata queries
3. **Use WikiLinks** - Creates knowledge graph connections
4. **Validate paths** - Check file existence before operations
5. **Handle errors** - API may return 404 for non-existent files
6. **Escape special characters** - URL-encode paths with spaces or symbols
7. **Backup vault** - REST API operations modify files directly
---
## Memory Folder Conventions
The `80-memory/` folder stores dual-layer memories synced with Mem0.
### Structure
```
80-memory/
├── preferences/ # Personal preferences (UI, workflow, communication)
├── facts/ # Objective information (role, tech stack, constraints)
├── decisions/ # Choices with rationale (tool selections, architecture)
├── entities/ # People, organizations, systems, concepts
└── other/ # Everything else
```
### Naming Convention
Memory notes use kebab-case: `prefers-dark-mode.md`, `uses-typescript.md`
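Deriving the kebab-case filename from a memory title can be mechanical. A small sketch:

```python
import re

def kebab_slug(title: str) -> str:
    """Kebab-case filename per the naming convention above."""
    # Collapse every run of non-alphanumeric characters into a single dash.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{slug}.md"

print(kebab_slug("Prefers Dark Mode"))  # prefers-dark-mode.md
```

Normalizing here keeps note filenames stable even when the same memory is re-stored with slightly different title punctuation.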
### Required Frontmatter
```yaml
---
type: memory
category: # preference | fact | decision | entity | other
mem0_id: # Mem0 memory ID (e.g., "mem_abc123")
source: explicit # explicit | auto-capture
importance: # critical | high | medium | low
created: 2026-02-12
updated: 2026-02-12
tags:
- memory
sync_targets: []
---
```
### Key Fields
| Field | Purpose |
|-------|---------|
| `mem0_id` | Links to Mem0 entry for semantic search |
| `category` | Determines subfolder and classification |
| `source` | How memory was captured (explicit request vs auto) |
| `importance` | Priority for recall ranking |
---
## Memory Note Workflows
### Create Memory Note
When creating a memory note in the vault:
```bash
# Using REST API
curl -X POST "http://127.0.0.1:27124/create-note" \
-H "Content-Type: application/json" \
-d '{
"path": "80-memory/preferences/prefers-dark-mode.md",
"content": "---\ntype: memory\ncategory: preference\nmem0_id: mem_abc123\nsource: explicit\nimportance: medium\ncreated: 2026-02-12\nupdated: 2026-02-12\ntags:\n - memory\nsync_targets: []\n---\n\n# Prefers Dark Mode\n\n## Content\n\nUser prefers dark mode in all applications.\n\n## Context\n\nStated during UI preferences discussion on 2026-02-12.\n\n## Related\n\n- [[UI Settings]]\n"
}'
```
### Read Memory Note
Read by path with URL encoding:
```bash
curl -X GET "http://127.0.0.1:27124/read-note?path=80-memory%2Fpreferences%2Fprefers-dark-mode.md"
```
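Building the encoded URL by hand is error-prone; `urllib.parse.quote` with `safe=''` also encodes the `/` separators, matching the `%2F` form shown above:

```python
from urllib.parse import quote

def read_note_url(path: str, base: str = "http://127.0.0.1:27124") -> str:
    """Build the read-note URL with the vault path percent-encoded,
    including '/' separators, as in the example above."""
    return f"{base}/read-note?path={quote(path, safe='')}"

print(read_note_url("80-memory/preferences/prefers-dark-mode.md"))
```

The same helper covers paths containing spaces, which must become `%20` rather than `+` in a query-string path parameter.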
### Search Memories
Search within memory folder:
```bash
curl -X GET "http://127.0.0.1:27124/search?q=dark%20mode&path=80-memory"
```
### Update Memory Note
Update content and frontmatter:
```bash
curl -X PUT "http://127.0.0.1:27124/update-note" \
-H "Content-Type: application/json" \
-d '{
"path": "80-memory/preferences/prefers-dark-mode.md",
"content": "# Updated content..."
}'
```
---
## Error Handling
Common HTTP status codes:
- `200 OK` - Success
- `404 Not Found` - File or resource doesn't exist
- `400 Bad Request` - Invalid parameters or malformed JSON
- `500 Internal Server Error` - Plugin or vault error
Check API response body for error details before retrying operations.
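The status table maps naturally onto a retry decision. A hedged sketch of the dispatch (the action strings are paraphrases of the guidance above, not API behavior):

```python
def handle_status(status: int) -> str:
    """Map the status codes above to a suggested next action."""
    actions = {
        200: "ok",
        400: "fix request (check parameters / JSON), do not retry as-is",
        404: "treat as missing file, do not retry",
        500: "retry after checking plugin and vault logs",
    }
    return actions.get(status, "inspect response body")

print(handle_status(404))
```

Centralizing this keeps every curl call site from re-deciding whether a failure is retryable.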

skills/outline/SKILL.md

@@ -0,0 +1,126 @@
---
name: outline
description: "Outline wiki integration for knowledge management and documentation workflows. Use when Opencode needs to interact with Outline for: (1) Creating and editing documents, (2) Searching and retrieving knowledge base content, (3) Managing document collections and hierarchies, (4) Handling document sharing and permissions, (5) Collaborative features like comments. Triggers: 'Outline', 'wiki', 'knowledge base', 'documentation', 'team docs', 'document in Outline', 'search Outline', 'Outline collection'."
compatibility: opencode
---
# Outline Wiki Integration
Outline is a team knowledge base and wiki platform. This skill provides guidance for Outline API operations and knowledge management workflows.
## Core Capabilities
### Document Operations
- **Create**: Create new documents with markdown content
- **Read**: Retrieve document content, metadata, and revisions
- **Update**: Edit existing documents, update titles and content
- **Delete**: Remove documents (with appropriate permissions)
### Collection Management
- **Organize**: Structure documents in collections and nested collections
- **Hierarchies**: Create parent-child relationships
- **Access Control**: Set permissions at collection level
### Search and Discovery
- **Full-text search**: Find documents by content
- **Metadata filters**: Search by collection, author, date
- **Advanced queries**: Combine multiple filters
### Sharing and Permissions
- **Public links**: Generate shareable document URLs
- **Team access**: Manage member permissions
- **Guest access**: Control external sharing
### Collaboration
- **Comments**: Add threaded discussions to documents
- **Revisions**: Track document history and changes
- **Notifications**: Stay updated on document activity
## Workflows
### Creating a New Document
1. Determine target collection
2. Create document with title and initial content
3. Set appropriate permissions
4. Share with relevant team members if needed
### Searching Knowledge Base
1. Formulate search query
2. Apply relevant filters (collection, date, author)
3. Review search results
4. Retrieve full document content when needed
### Organizing Documents
1. Review existing collection structure
2. Identify appropriate parent collection
3. Create or update documents in hierarchy
4. Update collection metadata if needed
### Document Collaboration
1. Add comments for feedback or discussion
2. Track revision history for changes
3. Notify stakeholders when needed
4. Resolve comments when addressed
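For the document-creation workflow, the request body is simple to assemble. A sketch assuming Outline's `documents.create` endpoint and its `title`/`text`/`collectionId`/`publish` fields — verify the names against your instance's API reference:

```python
def create_document_payload(title: str, text: str, collection_id: str,
                            publish: bool = True) -> dict:
    """Body for a documents.create call (field names are assumptions here)."""
    return {
        "title": title,
        "text": text,              # markdown content
        "collectionId": collection_id,
        "publish": publish,        # False would leave it as a draft
    }

print(create_document_payload("Team Guidelines", "# Guidelines\n...", "col-123"))
```

Keeping `publish=False` for drafts mirrors step 3 above: permissions and sharing can be set before the document goes live.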
## Integration Patterns
### Knowledge Capture
When capturing information from conversations or research:
- Create document in appropriate collection
- Use clear, descriptive titles
- Structure content with headers for readability
- Add tags for discoverability
### Documentation Updates
When updating existing documentation:
- Retrieve current document revision
- Make targeted, minimal changes
- Add comments explaining significant updates
- Share updates with relevant stakeholders
### Knowledge Retrieval
When searching for information:
- Start with broad search terms
- Refine with collection and metadata filters
- Review multiple relevant documents
- Cross-reference linked documents for context
## Common Use Cases
| Use Case | Recommended Approach |
|----------|---------------------|
| Project documentation | Create collection per project, organize by phase |
| Team guidelines | Use dedicated collection, group by topic |
| Meeting notes | Create documents with templates, tag by team |
| Knowledge capture | Search before creating, link to related docs |
| Onboarding resources | Create structured collection with step-by-step guides |
## Best Practices
- **Consistent naming**: Use clear, descriptive titles
- **Logical organization**: Group related documents in collections
- **Regular maintenance**: Review and update outdated content
- **Access control**: Set appropriate permissions for sensitive content
- **Searchability**: Use tags and metadata effectively
- **Collaboration**: Use comments for discussions, not content changes
## Handoff to Other Skills
| Output | Next Skill | Trigger |
|--------|------------|---------|
| Research findings | knowledge-management | "Organize this research in Outline" |
| Documentation draft | communications | "Share this document via email" |
| Task from document | task-management | "Create tasks from this outline" |
| Project plan | plan-writing | "Create project plan in Outline" |

Some files were not shown because too many files have changed in this diff.