Rename directories to plural form: skill/ → skills/, agent/ → agents/, command/ → commands/

- Rename skill/ to skills/ for consistency with naming conventions
- Rename agent/ to agents/ and command/ to commands/
- Update AGENTS.md with all directory references
- Update scripts/test-skill.sh paths
- Update prompts/athena.txt documentation

This aligns with best practices of using plural directory names and updates
all documentation to reflect the new structure.
m3tm3re
2026-01-26 20:42:05 +01:00
parent aeeeb559ed
commit 63cd7fe102
88 changed files with 1726 additions and 322 deletions

AGENTS.md

@@ -1,176 +1,74 @@
# Agent Instructions - Opencode Skills Repository
Configuration repository for Opencode Agent Skills, context files, and agent configurations. Files deploy to `~/.config/opencode/` via Nix flake + home-manager.
## Quick Commands
### Testing Skills
```bash
# List or validate all skills
./scripts/test-skill.sh                 # List all development skills
./scripts/test-skill.sh --validate      # Validate all skills
./scripts/test-skill.sh <skill-name>    # Validate specific skill
# Test in Opencode (interactive)
./scripts/test-skill.sh --run           # Launch session with dev skills
```
### Creating Skills
```bash
python3 skills/skill-creator/scripts/init_skill.py <skill-name> --path skills/
python3 skills/skill-creator/scripts/quick_validate.py skills/<skill-name>
```
### Running Tests
```bash
# Run single test file
python3 -m unittest skills/pdf/scripts/check_bounding_boxes_test.py
# Run all tests in a module
python3 -m unittest discover -s skills/pdf/scripts -p "*_test.py"
```
### Issue Tracking
```bash
bd ready                  # Find available work
bd create "title"         # Create new issue
bd update <id> --status in_progress
bd close <id>             # Complete work
bd sync                   # Sync with git
```
## Code Style Guidelines
### File Naming
- Skills: hyphen-case (e.g., `task-management`, `skill-creator`)
- Python: snake_case (e.g., `init_skill.py`, `quick_validate.py`)
- Markdown: UPPERCASE or sentence-case (e.g., `SKILL.md`, `profile.md`)
- Config: Standard conventions (e.g., `config.yaml`, `metadata.json`)
### Python Style
**Shebang**: Always `#!/usr/bin/env python3`
**Docstrings**:
```python
"""
Brief description
Usage:
    script.py <args>
Examples:
    script.py my-skill --path ~/.config/opencode/skill
"""
```
**Imports** (standard → third-party → local):
```python
import sys
import os
from pathlib import Path
import yaml
from . import utilities
```
@@ -190,196 +88,92 @@ except SpecificException as e:
```
**User feedback**:
```python
print(f"✅ Success: {result}")
print(f"❌ Error: {error}")
```
### Bash Style
**Shebang**: Always `#!/usr/bin/env bash`
**Strict mode**: `set -euo pipefail`
**Functions**: `snake_case`, descriptive names
### Markdown Style
- YAML frontmatter between `---` delimiters
- ATX headers (`#`, `##`, `###`)
- Use `-` for unordered lists
- Specify language in code blocks (```python, ```bash, etc.)
### YAML Style
```yaml
name: skill-name
description: "Text with special chars in quotes"
compatibility: opencode
items:
  - first
  - second
```
## Directory Structure
```
.
├── agents/       # Agent definitions (agents.json)
├── prompts/      # Agent system prompts
├── context/      # User profiles and preferences
├── commands/     # Custom command definitions
├── skills/       # Opencode Agent Skills
│   ├── skill-name/
│   │   ├── SKILL.md
│   │   ├── scripts/      # Executable code (optional)
│   │   ├── references/   # Documentation (optional)
│   │   └── assets/       # Templates/files (optional)
├── scripts/      # Repository utilities
└── AGENTS.md     # This file
```
## Nix Deployment
**Flake input** (non-flake):
```nix
agents = {
  url = "git+https://code.m3ta.dev/m3tam3re/AGENTS";
  flake = false;
};
```
**Deployment mapping**:
- `skills/` → `~/.config/opencode/skill/` (symlink)
- `context/` → `~/.config/opencode/context/` (symlink)
- `commands/` → `~/.config/opencode/command/` (symlink)
- `prompts/` → `~/.config/opencode/prompts/` (symlink)
- `agents/agents.json` → Embedded into opencode config.json (not symlinked)
**Note**: Agent changes require `home-manager switch`; other changes visible after rebuild.
## Quality Gates
Before committing:
1. Validate skills: `./scripts/test-skill.sh --validate`
2. Validate YAML frontmatter: `python3 skills/skill-creator/scripts/quick_validate.py skills/<name>`
3. Check Python scripts have proper shebang and docstrings
4. Ensure no extraneous files (README.md, CHANGELOG.md in skills)
5. Git status clean
## Notes for AI Agents
1. **Config-only repo** - No compilation, no build, minimal test infrastructure
2. **Validation is manual** - Run scripts explicitly before committing
3. **Skills are documentation** - Write for AI consumption, not humans
4. **Directory naming** - Use `skills/` (plural), not `skill/`; `agents/` (plural), not `agent/`
5. **Commands naming** - Use `commands/` (plural), not `command/`
6. **Nix deployment** - Maintain structure expected by home-manager
7. **Always push** - Follow session completion workflow
## Optimization Opportunities
1. **Add Python linting** - Configure ruff or black for consistent formatting
2. **Add pre-commit hooks** - Auto-validate skills and run linters before commit
3. **Test coverage** - Add pytest for skill scripts beyond PDF skill
4. **CI/CD** - Add GitHub Actions to validate skills on PR
5. **Documentation** - Consolidate README.md and AGENTS.md to reduce duplication
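The pre-commit hook idea could reuse the frontmatter rules that the skill-creator validation enforces (required `name` and `description`, hyphen-case name up to 64 chars, description up to 1024 chars without angle brackets, allowed property list). The following is a minimal, hypothetical sketch of such a check; `validate_frontmatter` is a stand-in that uses naive `key: value` parsing rather than a real YAML parser, and a real hook would simply call `quick_validate.py`:

```python
import re

REQUIRED = {"name", "description"}
ALLOWED = {"name", "description", "compatibility", "license",
           "allowed-tools", "metadata"}

def validate_frontmatter(text: str) -> list:
    """Return a list of problems found in a SKILL.md's YAML frontmatter."""
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter between --- delimiters"]
    # Naive key: value parsing; quick_validate.py would use a YAML parser.
    fields = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    errors = []
    for key in sorted(REQUIRED - fields.keys()):
        errors.append(f"missing required field: {key}")
    for key in sorted(fields.keys() - ALLOWED):
        errors.append(f"unknown frontmatter property: {key}")
    name = fields.get("name", "")
    if name and (len(name) > 64 or not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name)):
        errors.append("name must be hyphen-case, max 64 chars")
    desc = fields.get("description", "")
    if desc and (len(desc) > 1024 or "<" in desc or ">" in desc):
        errors.append("description: max 1024 chars, no angle brackets")
    return errors
```

A hook would run this over every `skills/*/SKILL.md` and exit non-zero if any list is non-empty.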

prompts/athena.txt

@@ -10,6 +10,10 @@ You are **Athena**, the Greek goddess of wisdom, knowledge, and strategy. You ar
**Scope**: Any domain except technical/coding tasks (those use other agents)
**Style**: Methodical, objective, source-critical, strategic
## In a Nutshell
You transform complex research questions into clear, well-supported insights through systematic investigation. You gather information from diverse sources, evaluate credibility critically, synthesize findings objectively, and present them with appropriate confidence levels. Your value lies not in the volume of information you collect, but in the quality, credibility, and clarity of your synthesis.
## Your Core Responsibilities:
1. **Multi-Source Investigation**
@@ -36,7 +40,7 @@ You are **Athena**, the Greek goddess of wisdom, knowledge, and strategy. You ar
- Note uncertainty and confidence levels
- Recommend further investigation where needed
## Process:
When you receive a research request:
@@ -76,7 +80,7 @@ When you receive a research request:
- Include source citations (even if informal)
- Highlight limitations and recommended follow-up
## Output Format:
Choose the format that best serves the research question:
@@ -127,6 +131,69 @@ Caveats: [Limitations or uncertainty]
- [Date]: Event/Development - Significance
```
### Example: Executive Summary
**Research Question**: What are the main arguments for and against remote work policies?
```
Key Finding: Remote work offers productivity benefits for knowledge workers but presents collaboration and cultural challenges for organizations. Most companies adopt hybrid models to balance these competing factors.
Supporting Evidence:
- 77% of remote workers report higher productivity (Stanford 2023 study)
- 68% of employers cite communication difficulties (McKinsey 2022 survey)
- 52% of Fortune 500 companies use hybrid policies (Gartner 2023 report)
Caveats: Studies vary by industry, role type, and measurement methodology. Cultural factors significantly impact results.
```
### Example: Structured Report
**Research Question**: What is the current state of quantum computing?
```
## Executive Summary
Quantum computing is transitioning from theoretical research to early practical applications. Current quantum processors can solve specific problems faster than classical computers, but large-scale, error-corrected systems remain 5-10 years away. Investment and research activity are accelerating across tech companies, governments, and academia.
## Background
Quantum computing uses quantum bits (qubits) that can exist in superposition and entanglement, enabling parallel computation. Key applications include cryptography, optimization, and simulation of quantum systems. Major milestones include Google's 2019 "quantum supremacy" demonstration and IBM's 2021 127-qubit processor.
## Key Findings
### Quantum Hardware Progress
- IBM, Google, and others have demonstrated quantum processors with 100+ qubits [High Confidence - verified by company announcements and peer-reviewed papers]
- Error rates remain the primary technical barrier [High Confidence - consensus across 10+ technical reports]
- Multiple qubit technologies compete (superconducting, trapped ion, photonic) [Medium Confidence - active research area with varying claims]
### Commercial Viability
- No quantum computer has demonstrated clear commercial advantage at scale [High Confidence - industry analyst reports and expert interviews]
- Early adoption in finance and pharmaceutical research [Medium Confidence - pilot programs announced but results limited]
- Market projected to reach $65B by 2030 [Low Confidence - speculative forecasts from consulting firms, limited historical data]
### Investment Landscape
- Global quantum computing investment exceeded $30B in 2023 [High Confidence - government spending data and venture capital tracking]
- US and China lead in quantum computing funding [High Confidence - government budget documents and independent analysis]
- Private equity shifting toward applied quantum companies [Medium Confidence - deal flow data, emerging trend]
## Diverging Perspectives
**Optimistic View**: Quantum computers will solve previously intractable problems in drug discovery, climate modeling, and AI within 5 years. Proponents cite rapid qubit scaling and breakthrough algorithms.
**Cautious View**: Significant engineering challenges remain. Skeptics point to decoherence, error correction overhead, and the specialized nature of quantum advantage.
**Consensus**: Practical quantum advantage will emerge in niche applications before broader adoption. Timeline estimates cluster around 2027-2030 for meaningful commercial impact.
## Uncertainties and Gaps
- Which qubit technology will dominate? (active research, no clear winner yet)
- When will error-corrected logical qubits become practical? (estimates range 5-15 years)
- What will be the actual economic value of quantum advantage? (limited real-world testing)
- Will post-quantum cryptography be deployed in time? (timeline unknown, but urgency recognized)
## Recommendations
- For technology organizations: Monitor quantum computing advances through research partnerships
- For cryptography: Accelerate transition to post-quantum cryptographic standards
- For researchers: Focus on quantum error correction and algorithm development
```
## Quality Standards
- Present information fairly, even when it conflicts
@@ -135,7 +202,37 @@ Caveats: [Limitations or uncertainty]
- Distinguish between public information and private matters
- Attribute information to sources when possible
## Confidence Ratings
Always indicate your confidence level for each major finding:
**High Confidence** - Use when:
- Multiple independent, reputable sources agree
- Information is recent and from authoritative sources (peer-reviewed, official reports, established institutions)
- Primary sources or direct evidence available
- Consensus among experts in the field
Example: "Climate warming is unequivocal [High Confidence - supported by IPCC 2023 report and peer-reviewed studies from NASA, NOAA, and 10+ research institutes]"
**Medium Confidence** - Use when:
- Sources are credible but limited in number or recency
- Some disagreement among experts
- Information from reputable secondary sources (well-regarded news, industry reports)
- Evidence supports the claim but is not definitive
Example: "Remote work productivity varies by role and individual [Medium Confidence - supported by Stanford 2022 study and McKinsey survey, but mixed results across different industries]"
**Low Confidence** - Use when:
- Limited or conflicting information
- Sources are unclear, dated, or not authoritative
- Information is primarily anecdotal or from opinion pieces
- Questionable methodology or potential bias in sources
Example: "The new policy will increase employment [Low Confidence - only one preliminary estimate from industry group; independent analysis pending]"
**When uncertain**: Explicitly state gaps in information and recommend what additional research would increase confidence.
## Edge Cases:
State clearly when:
- Information is insufficient or conflicting
@@ -156,11 +253,106 @@ You are a sub-agent invoked by others. Your role is to:
- Return to the invoking agent with your findings
- Not initiate new research tasks unless explicitly asked
### Handoff Templates
When returning research to the invoking agent, use these structured formats:
**Concise Handoff** (for quick research questions):
```
## Research Complete
**Question**: [Original research question]
**Key Finding**: [Primary conclusion with confidence level]
**Supporting Points**:
- Point 1
- Point 2
- Point 3
**Sources**: [2-3 main sources cited]
**Limitations**: [Brief note on gaps or uncertainties]
```
**Comprehensive Handoff** (for complex research):
```
## Research Complete
**Question**: [Original research question]
**Executive Summary**:
[2-3 paragraph overview of main findings]
**Key Findings**:
1. **Finding 1** [Confidence: X] - Description and evidence
2. **Finding 2** [Confidence: X] - Description and evidence
3. **Finding 3** [Confidence: X] - Description and evidence
**Source Quality**: [Assessment of source credibility - e.g., "Strong: 3 peer-reviewed papers, 2 government reports"]
**Areas of Uncertainty**:
- Gap 1: What's unknown and why
- Gap 2: What's unknown and why
**Recommended Follow-up** (if applicable):
- Suggestion 1: What additional research would clarify
- Suggestion 2: What specific documents or experts to consult
**Full Details**: [Reference to detailed report if lengthy research was conducted]
```
**Follow-up Questions Template**:
When appropriate, suggest next research steps to deepen understanding:
```
**Suggested Next Research**:
Based on current findings, the following would strengthen this research:
1. [Specific question] - Why this matters
2. [Specific question] - Why this matters
```
Always adapt handoff format to match the complexity and needs of the research request.
## Tool Usage
### Tool Selection Decision Tree
**Start with Web Search when:**
- Researching recent events, current data, or rapidly evolving topics
- Seeking diverse perspectives and public discourse
- Looking for primary sources or authoritative documents (then retrieve specific docs)
- Exploring a new topic to understand scope and available sources
- Finding specific quotes, statistics, or facts
- When you don't know what documents exist
**Use Document Retrieval when:**
- You already know specific document titles or URLs to retrieve
- Accessing known reports, academic papers, or reference materials
- Need to analyze the full content of a specific document
- Working with curated document collections or databases
- User provides specific document references
**Use Read Tools for:**
- Analyzing retrieved documents in detail
- Extracting specific information, quotes, or data points
- Cross-referencing multiple documents
- Deep content analysis beyond what retrieval summaries provide
**Use Analysis Tools for:**
- Organizing information into structured formats (tables, matrices, timelines)
- Comparing and contrasting sources
- Identifying patterns across multiple pieces of information
- Synthesizing findings into coherent narratives
**Typical workflow:**
1. Start with Web Search to discover sources
2. Use Document Retrieval for specific documents identified
3. Apply Read Tools to analyze document contents
4. Use Analysis Tools to synthesize findings
- **Web Search**: For discovery and broad information gathering
- **Document Retrieval**: For accessing specific known documents
- **Read Tools**: For deep analysis of source content
- **Analysis Tools**: For organizing and synthesizing information
Remember: As Athena, goddess of wisdom, your value is in the **quality, credibility, and clarity** of your research synthesis, not in the quantity of information gathered. Seek truth through methodical inquiry and strategic thinking.

scripts/test-skill.sh

@@ -26,9 +26,9 @@ setup_test_config() {
local tmp_config="$tmp_base/opencode"
mkdir -p "$tmp_config"
ln -sf "$REPO_ROOT/skills" "$tmp_config/skills"
ln -sf "$REPO_ROOT/context" "$tmp_config/context"
ln -sf "$REPO_ROOT/commands" "$tmp_config/commands"
ln -sf "$REPO_ROOT/prompts" "$tmp_config/prompts"
echo "$tmp_base"

skills/excalidraw/SKILL.md (new file)

@@ -0,0 +1,266 @@
---
name: excalidraw
description: Generate architecture diagrams as .excalidraw files from codebase analysis. Use when the user asks to create architecture diagrams, system diagrams, visualize codebase structure, or generate excalidraw files.
---
# Excalidraw Diagram Generator
Generate architecture diagrams as `.excalidraw` files directly from codebase analysis.
---
## Quick Start
**User just asks:**
```
"Generate an architecture diagram for this project"
"Create an excalidraw diagram of the system"
"Visualize this codebase as an excalidraw file"
```
**Claude Code will:**
1. Analyze the codebase (any language/framework)
2. Identify components, services, databases, APIs
3. Map relationships and data flows
4. Generate valid `.excalidraw` JSON with dynamic IDs and labels
**No prerequisites:** Works without existing diagrams, Terraform, or specific file types.
---
## Critical Rules
### 1. NEVER Use Diamond Shapes
Diamond arrow connections are broken in raw Excalidraw JSON. Use styled rectangles instead:
| Semantic Meaning | Rectangle Style |
|------------------|-----------------|
| Orchestrator/Hub | Coral (`#ffa8a8`/`#c92a2a`) + strokeWidth: 3 |
| Decision Point | Orange (`#ffd8a8`/`#e8590c`) + dashed stroke |
### 2. Labels Require TWO Elements
The `label` property does NOT work in raw JSON. Every labeled shape needs:
```json
// 1. Shape with boundElements reference
{
"id": "my-box",
"type": "rectangle",
"boundElements": [{ "type": "text", "id": "my-box-text" }]
}
// 2. Separate text element with containerId
{
"id": "my-box-text",
"type": "text",
"containerId": "my-box",
"text": "My Label"
}
```
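When generating many labeled shapes programmatically, the two-element pairing rule can be wrapped in a small helper. This is an illustrative sketch only (not part of the skill's bundled scripts); real Excalidraw elements also carry properties such as `seed`, `version`, and stroke styling that are omitted here:

```python
def labeled_box(box_id, text, x, y, width=180, height=80):
    """Emit the rectangle + bound text pair that a label requires."""
    shape = {
        "id": box_id, "type": "rectangle",
        "x": x, "y": y, "width": width, "height": height,
        # Shape references its text element...
        "boundElements": [{"type": "text", "id": f"{box_id}-text"}],
    }
    label = {
        "id": f"{box_id}-text", "type": "text",
        # ...and the text element points back at its container.
        "containerId": box_id, "text": text,
    }
    return shape, label
```

Both dicts must be appended to the file's `elements` array; emitting only one half of the pair leaves an unlabeled box or an orphaned text element.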
### 3. Elbow Arrows Need Three Properties
For 90-degree corners (not curved):
```json
{
"type": "arrow",
"roughness": 0, // Clean lines
"roundness": null, // Sharp corners
"elbowed": true // 90-degree mode
}
```
### 4. Arrow Edge Calculations
Arrows must start/end at shape edges, not centers:
| Edge | Formula |
|------|---------|
| Top | `(x + width/2, y)` |
| Bottom | `(x + width/2, y + height)` |
| Left | `(x, y + height/2)` |
| Right | `(x + width, y + height/2)` |
**Detailed arrow routing:** See `references/arrows.md`
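The edge formulas in the table above can be sketched as a helper (illustrative only, not part of the skill's bundled scripts):

```python
def edge_point(x, y, width, height, side):
    """Anchor point on a shape's edge, per the formulas in the table above."""
    return {
        "top":    (x + width / 2, y),
        "bottom": (x + width / 2, y + height),
        "left":   (x, y + height / 2),
        "right":  (x + width, y + height / 2),
    }[side]

# Arrow from the bottom edge of box A (at 100,100) to the top edge of box B (at 100,300):
ax, ay = edge_point(100, 100, 180, 80, "bottom")
bx, by = edge_point(100, 300, 180, 80, "top")
arrow = {"type": "arrow", "x": ax, "y": ay,
         # points are relative to the arrow's own x/y
         "points": [[0, 0], [bx - ax, by - ay]]}
```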
---
## Element Types
| Type | Use For |
|------|---------|
| `rectangle` | Services, databases, containers, orchestrators |
| `ellipse` | Users, external systems, start/end points |
| `text` | Labels inside shapes, titles, annotations |
| `arrow` | Data flow, connections, dependencies |
| `line` | Grouping boundaries, separators |
**Full JSON format:** See `references/json-format.md`
---
## Workflow
### Step 1: Analyze Codebase
Discover components by looking for:
| Codebase Type | What to Look For |
|---------------|------------------|
| Monorepo | `packages/*/package.json`, workspace configs |
| Microservices | `docker-compose.yml`, k8s manifests |
| IaC | Terraform/Pulumi resource definitions |
| Backend API | Route definitions, controllers, DB models |
| Frontend | Component hierarchy, API calls |
**Use tools:**
- `Glob` → `**/package.json`, `**/Dockerfile`, `**/*.tf`
- `Grep` → `app.get`, `@Controller`, `CREATE TABLE`
- `Read` → README, config files, entry points
### Step 2: Plan Layout
**Vertical flow (most common):**
```
Row 1: Users/Entry points (y: 100)
Row 2: Frontend/Gateway (y: 230)
Row 3: Orchestration (y: 380)
Row 4: Services (y: 530)
Row 5: Data layer (y: 680)
Columns: x = 100, 300, 500, 700, 900
Element size: 160-200px x 80-90px
```
**Other patterns:** See `references/examples.md`
### Step 3: Generate Elements
For each component:
1. Create shape with unique `id`
2. Add `boundElements` referencing text
3. Create text with `containerId`
4. Choose color based on type
**Color palettes:** See `references/colors.md`
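The four steps above can be generated mechanically. A minimal sketch (illustrative Python; `labeled_box` is a hypothetical helper, and required bookkeeping fields such as `seed`, `version`, and `angle` are omitted — see `references/json-format.md` for the full property list):

```python
def labeled_box(el_id, x, y, width, height, text, bg, stroke):
    """Build the TWO elements every labeled shape needs: shape + bound text.

    Bookkeeping fields (seed, version, angle, ...) are omitted for brevity;
    a real generator must emit every required property.
    """
    shape = {
        "id": el_id,
        "type": "rectangle",
        "x": x, "y": y, "width": width, "height": height,
        "backgroundColor": bg, "strokeColor": stroke,
        "boundElements": [{"type": "text", "id": f"{el_id}-text"}],
    }
    label = {
        "id": f"{el_id}-text",
        "type": "text",
        "x": x + 5,                      # shape.x + 5
        "y": y + (height - 50) / 2,      # vertically centered (50px text block)
        "width": width - 10,
        "height": 50,
        "text": text, "originalText": text,
        "textAlign": "center", "verticalAlign": "middle",
        "containerId": el_id,
    }
    return [shape, label]

elements = labeled_box("postgres-db", 100, 330, 200, 80,
                       "Database\nPostgreSQL", "#b2f2bb", "#2f9e44")
```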
### Step 4: Add Connections
For each relationship:
1. Calculate source edge point
2. Plan elbow route (avoid overlaps)
3. Create arrow with `points` array
4. Match stroke color to destination type
**Arrow patterns:** See `references/arrows.md`
### Step 5: Add Grouping (Optional)
For logical groupings:
- Large transparent rectangle with `strokeStyle: "dashed"`
- Standalone text label at top-left
### Step 6: Validate and Write
Run validation before writing. Save to `docs/` or user-specified path.
**Validation checklist:** See `references/validation.md`
---
## Quick Arrow Reference
**Straight down:**
```json
{ "points": [[0, 0], [0, 110]], "x": 590, "y": 290 }
```
**L-shape (left then down):**
```json
{ "points": [[0, 0], [-325, 0], [-325, 125]], "x": 525, "y": 420 }
```
**U-turn (callback):**
```json
{ "points": [[0, 0], [50, 0], [50, -125], [20, -125]], "x": 710, "y": 440 }
```
**Arrow width/height** = bounding box of points:
```
points [[0,0], [-440,0], [-440,70]] → width=440, height=70
```
**Multiple arrows from same edge** - stagger positions:
```
5 arrows: 20%, 35%, 50%, 65%, 80% across edge width
```
---
## Default Color Palette
| Component | Background | Stroke |
|-----------|------------|--------|
| Frontend | `#a5d8ff` | `#1971c2` |
| Backend/API | `#d0bfff` | `#7048e8` |
| Database | `#b2f2bb` | `#2f9e44` |
| Storage | `#ffec99` | `#f08c00` |
| AI/ML | `#e599f7` | `#9c36b5` |
| External APIs | `#ffc9c9` | `#e03131` |
| Orchestration | `#ffa8a8` | `#c92a2a` |
| Message Queue | `#fff3bf` | `#fab005` |
| Cache | `#ffe8cc` | `#fd7e14` |
| Users | `#e7f5ff` | `#1971c2` |
**Cloud-specific palettes:** See `references/colors.md`
---
## Quick Validation Checklist
Before writing file:
- [ ] Every shape with label has boundElements + text element
- [ ] Text elements have containerId matching shape
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`
- [ ] Arrow x,y = source shape edge point
- [ ] Arrow final point offset reaches target edge
- [ ] No diamond shapes
- [ ] No duplicate IDs
**Full validation algorithm:** See `references/validation.md`
---
## Common Issues
| Issue | Fix |
|-------|-----|
| Labels don't appear | Use TWO elements (shape + text), not `label` property |
| Arrows curved | Add `elbowed: true`, `roundness: null`, `roughness: 0` |
| Arrows floating | Calculate x,y from shape edge, not center |
| Arrows overlapping | Stagger start positions across edge |
**Detailed bug fixes:** See `references/validation.md`
---
## Reference Files
| File | Contents |
|------|----------|
| `references/json-format.md` | Element types, required properties, text bindings |
| `references/arrows.md` | Routing algorithm, patterns, bindings, staggering |
| `references/colors.md` | Default, AWS, Azure, GCP, K8s palettes |
| `references/examples.md` | Complete JSON examples, layout patterns |
| `references/validation.md` | Checklists, validation algorithm, bug fixes |
---
## Output
- **Location:** `docs/architecture/` or user-specified
- **Filename:** Descriptive, e.g., `system-architecture.excalidraw`
- **Testing:** Open in https://excalidraw.com or VS Code extension

# Arrow Routing Reference
Complete guide for creating elbow arrows with proper connections.
---
## Critical: Elbow Arrow Properties
Three required properties for 90-degree corners:
```json
{
"type": "arrow",
"roughness": 0, // Clean lines
"roundness": null, // Sharp corners (not curved)
"elbowed": true // Enables elbow mode
}
```
**Without these, arrows will be curved, not 90-degree elbows.**
---
## Edge Calculation Formulas
| Shape Type | Edge | Formula |
|------------|------|---------|
| Rectangle | Top | `(x + width/2, y)` |
| Rectangle | Bottom | `(x + width/2, y + height)` |
| Rectangle | Left | `(x, y + height/2)` |
| Rectangle | Right | `(x + width, y + height/2)` |
| Ellipse | Top | `(x + width/2, y)` |
| Ellipse | Bottom | `(x + width/2, y + height)` |
---
## Universal Arrow Routing Algorithm
```
FUNCTION createArrow(source, target, sourceEdge, targetEdge):
// Step 1: Get source edge point
sourcePoint = getEdgePoint(source, sourceEdge)
// Step 2: Get target edge point
targetPoint = getEdgePoint(target, targetEdge)
// Step 3: Calculate offsets
dx = targetPoint.x - sourcePoint.x
dy = targetPoint.y - sourcePoint.y
// Step 4: Determine routing pattern
IF sourceEdge == "bottom" AND targetEdge == "top":
IF abs(dx) < 10: // Nearly aligned
points = [[0, 0], [0, dy]]
ELSE: // Need L-shape
points = [[0, 0], [dx, 0], [dx, dy]]
ELSE IF sourceEdge == "right" AND targetEdge == "left":
IF abs(dy) < 10:
points = [[0, 0], [dx, 0]]
ELSE:
points = [[0, 0], [0, dy], [dx, dy]]
ELSE IF sourceEdge == targetEdge: // U-turn
clearance = 50
IF sourceEdge == "right":
points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
ELSE IF sourceEdge == "bottom":
points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
// Step 5: Calculate bounding box
width = max(abs(p[0]) for p in points)
height = max(abs(p[1]) for p in points)
RETURN {x: sourcePoint.x, y: sourcePoint.y, points, width, height}
FUNCTION getEdgePoint(shape, edge):
SWITCH edge:
"top": RETURN (shape.x + shape.width/2, shape.y)
"bottom": RETURN (shape.x + shape.width/2, shape.y + shape.height)
"left": RETURN (shape.x, shape.y + shape.height/2)
"right": RETURN (shape.x + shape.width, shape.y + shape.height/2)
```
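A runnable translation of the pseudocode above might look like this (a sketch: shapes are plain dicts with `x`/`y`/`width`/`height`, and only the edge combinations the pseudocode covers are handled):

```python
def edge_point(shape, edge):
    x, y, w, h = shape["x"], shape["y"], shape["width"], shape["height"]
    return {"top": (x + w / 2, y), "bottom": (x + w / 2, y + h),
            "left": (x, y + h / 2), "right": (x + w, y + h / 2)}[edge]

def create_arrow(source, target, source_edge, target_edge, clearance=50):
    sx, sy = edge_point(source, source_edge)
    tx, ty = edge_point(target, target_edge)
    dx, dy = tx - sx, ty - sy

    if source_edge == "bottom" and target_edge == "top":
        points = [[0, 0], [0, dy]] if abs(dx) < 10 else [[0, 0], [dx, 0], [dx, dy]]
    elif source_edge == "right" and target_edge == "left":
        points = [[0, 0], [dx, 0]] if abs(dy) < 10 else [[0, 0], [0, dy], [dx, dy]]
    elif source_edge == target_edge == "right":        # U-turn
        points = [[0, 0], [clearance, 0], [clearance, dy], [dx, dy]]
    elif source_edge == target_edge == "bottom":       # U-turn
        points = [[0, 0], [0, clearance], [dx, clearance], [dx, dy]]
    else:
        raise ValueError("edge combination not covered by this sketch")

    return {
        "x": sx, "y": sy, "points": points,
        "width": max(abs(p[0]) for p in points),
        "height": max(abs(p[1]) for p in points),
        "roughness": 0, "roundness": None, "elbowed": True,
    }

# Fan-out case (matches the worked example in this file):
orch = {"x": 570, "y": 400, "width": 140, "height": 80}
svc = {"x": 120, "y": 550, "width": 160, "height": 80}
arrow = create_arrow(orch, svc, "bottom", "top")
# arrow["points"] == [[0, 0], [-440, 0], [-440, 70]]  (x=640, y=480)
```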
---
## Arrow Patterns Reference
| Pattern | Points | Use Case |
|---------|--------|----------|
| Down | `[[0,0], [0,h]]` | Vertical connection |
| Right | `[[0,0], [w,0]]` | Horizontal connection |
| L-left-down | `[[0,0], [-w,0], [-w,h]]` | Go left, then down |
| L-right-down | `[[0,0], [w,0], [w,h]]` | Go right, then down |
| L-down-left | `[[0,0], [0,h], [-w,h]]` | Go down, then left |
| L-down-right | `[[0,0], [0,h], [w,h]]` | Go down, then right |
| S-shape | `[[0,0], [0,h1], [w,h1], [w,h2]]` | Navigate around obstacles |
| U-turn | `[[0,0], [w,0], [w,-h], [0,-h]]` | Callback/return arrows |
---
## Worked Examples
### Vertical Connection (Bottom to Top)
```
Source: x=500, y=200, width=180, height=90
Target: x=500, y=400, width=180, height=90
source_bottom = (500 + 180/2, 200 + 90) = (590, 290)
target_top = (500 + 180/2, 400) = (590, 400)
Arrow x = 590, y = 290
Distance = 400 - 290 = 110
Points = [[0, 0], [0, 110]]
```
### Fan-out (One to Many)
```
Orchestrator: x=570, y=400, width=140, height=80
Target: x=120, y=550, width=160, height=80
orchestrator_bottom = (570 + 140/2, 400 + 80) = (640, 480)
target_top = (120 + 160/2, 550) = (200, 550)
Arrow x = 640, y = 480
Horizontal offset = 200 - 640 = -440
Vertical offset = 550 - 480 = 70
Points = [[0, 0], [-440, 0], [-440, 70]] // Left first, then down
```
### U-turn (Callback)
```
Source: x=570, y=400, width=140, height=80
Target: x=550, y=270, width=180, height=90
Connection: Right of source -> Right of target
source_right = (570 + 140, 400 + 80/2) = (710, 440)
target_right = (550 + 180, 270 + 90/2) = (730, 315)
Arrow x = 710, y = 440
Vertical distance = 315 - 440 = -125
Final x offset = 730 - 710 = 20
Points = [[0, 0], [50, 0], [50, -125], [20, -125]]
// Right 50px (clearance), up 125px, left 30px
```
---
## Staggering Multiple Arrows
When N arrows leave from the same edge, spread them evenly:
```
FUNCTION getStaggeredPositions(shape, edge, numArrows):
    IF numArrows == 1:
        RETURN [getEdgePoint(shape, edge)]  // single arrow: use edge center
    positions = []
    FOR i FROM 0 TO numArrows-1:
        percentage = 0.2 + (0.6 * i / (numArrows - 1))
IF edge == "bottom" OR edge == "top":
x = shape.x + shape.width * percentage
y = (edge == "bottom") ? shape.y + shape.height : shape.y
ELSE:
x = (edge == "right") ? shape.x + shape.width : shape.x
y = shape.y + shape.height * percentage
positions.append({x, y})
RETURN positions
// Examples:
// 2 arrows: 20%, 80%
// 3 arrows: 20%, 50%, 80%
// 5 arrows: 20%, 35%, 50%, 65%, 80%
```
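As runnable code, the staggering logic might look like this (illustrative; a guard for the single-arrow case is added so the formula never divides by zero):

```python
def staggered_positions(shape, edge, num_arrows):
    """Spread arrow start points across 20%-80% of a shape edge."""
    if num_arrows == 1:                  # guard against division by zero
        pcts = [0.5]                     # single arrow: edge center
    else:
        pcts = [0.2 + 0.6 * i / (num_arrows - 1) for i in range(num_arrows)]
    positions = []
    for p in pcts:
        if edge in ("top", "bottom"):
            x = shape["x"] + shape["width"] * p
            y = shape["y"] + (shape["height"] if edge == "bottom" else 0)
        else:
            x = shape["x"] + (shape["width"] if edge == "right" else 0)
            y = shape["y"] + shape["height"] * p
        positions.append((x, y))
    return positions

box = {"x": 100, "y": 200, "width": 200, "height": 80}
xs = [x for x, _ in staggered_positions(box, "bottom", 5)]
# xs ≈ [140, 170, 200, 230, 260]  (20%, 35%, 50%, 65%, 80% of the edge)
```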
---
## Arrow Bindings
For better visual attachment, use `startBinding` and `endBinding`:
```json
{
"id": "arrow-workflow-convert",
"type": "arrow",
"x": 525,
"y": 420,
"width": 325,
"height": 125,
"points": [[0, 0], [-325, 0], [-325, 125]],
"roughness": 0,
"roundness": null,
"elbowed": true,
"startBinding": {
"elementId": "cloud-workflows",
"focus": 0,
"gap": 1,
"fixedPoint": [0.5, 1]
},
"endBinding": {
"elementId": "convert-pdf-service",
"focus": 0,
"gap": 1,
"fixedPoint": [0.5, 0]
},
"startArrowhead": null,
"endArrowhead": "arrow"
}
```
### fixedPoint Values
- Top center: `[0.5, 0]`
- Bottom center: `[0.5, 1]`
- Left center: `[0, 0.5]`
- Right center: `[1, 0.5]`
### Update Shape boundElements
```json
{
"id": "cloud-workflows",
"boundElements": [
{ "type": "text", "id": "cloud-workflows-text" },
{ "type": "arrow", "id": "arrow-workflow-convert" }
]
}
```
---
## Bidirectional Arrows
For two-way data flows:
```json
{
"type": "arrow",
"startArrowhead": "arrow",
"endArrowhead": "arrow"
}
```
Arrowhead options: `null`, `"arrow"`, `"bar"`, `"dot"`, `"triangle"`
---
## Arrow Labels
Position standalone text near arrow midpoint:
```json
{
"id": "arrow-api-db-label",
"type": "text",
"x": 305, // Arrow x + offset
"y": 245, // Arrow midpoint
"text": "SQL",
"fontSize": 12,
"containerId": null,
"backgroundColor": "#ffffff"
}
```
**Positioning formula:**
- Vertical: `label.y = arrow.y + (total_height / 2)`
- Horizontal: `label.x = arrow.x + (total_width / 2)`
- L-shaped: Position at corner or longest segment midpoint
---
## Width/Height Calculation
Arrow `width` and `height` = bounding box of path:
```
points = [[0, 0], [-440, 0], [-440, 70]]
width = abs(-440) = 440
height = abs(70) = 70
points = [[0, 0], [50, 0], [50, -125], [20, -125]]
width = max(abs(50), abs(20)) = 50
height = abs(-125) = 125
```
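The same calculation as a helper (illustrative sketch):

```python
def arrow_bbox(points):
    """Arrow width/height = bounding box of the relative points path."""
    width = max(abs(px) for px, _ in points)
    height = max(abs(py) for _, py in points)
    return width, height

print(arrow_bbox([[0, 0], [-440, 0], [-440, 70]]))            # (440, 70)
print(arrow_bbox([[0, 0], [50, 0], [50, -125], [20, -125]]))  # (50, 125)
```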

# Color Palettes Reference
Color schemes for different platforms and component types.
---
## Default Palette (Platform-Agnostic)
| Component Type | Background | Stroke | Example |
|----------------|------------|--------|---------|
| Frontend/UI | `#a5d8ff` | `#1971c2` | Next.js, React apps |
| Backend/API | `#d0bfff` | `#7048e8` | API servers, processors |
| Database | `#b2f2bb` | `#2f9e44` | PostgreSQL, MySQL, MongoDB |
| Storage | `#ffec99` | `#f08c00` | Object storage, file systems |
| AI/ML Services | `#e599f7` | `#9c36b5` | ML models, AI APIs |
| External APIs | `#ffc9c9` | `#e03131` | Third-party services |
| Orchestration | `#ffa8a8` | `#c92a2a` | Workflows, schedulers |
| Validation | `#ffd8a8` | `#e8590c` | Validators, checkers |
| Network/Security | `#dee2e6` | `#495057` | VPC, IAM, firewalls |
| Classification | `#99e9f2` | `#0c8599` | Routers, classifiers |
| Users/Actors | `#e7f5ff` | `#1971c2` | User ellipses |
| Message Queue | `#fff3bf` | `#fab005` | Kafka, RabbitMQ, SQS |
| Cache | `#ffe8cc` | `#fd7e14` | Redis, Memcached |
| Monitoring | `#d3f9d8` | `#40c057` | Prometheus, Grafana |
---
## AWS Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute (EC2, Lambda, ECS) | `#ff9900` | `#cc7a00` |
| Storage (S3, EBS) | `#3f8624` | `#2d6119` |
| Database (RDS, DynamoDB) | `#3b48cc` | `#2d3899` |
| Networking (VPC, Route53) | `#8c4fff` | `#6b3dcc` |
| Security (IAM, KMS) | `#dd344c` | `#b12a3d` |
| Analytics (Kinesis, Athena) | `#8c4fff` | `#6b3dcc` |
| ML (SageMaker, Bedrock) | `#01a88d` | `#017d69` |
---
## Azure Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute | `#0078d4` | `#005a9e` |
| Storage | `#50e6ff` | `#3cb5cc` |
| Database | `#0078d4` | `#005a9e` |
| Networking | `#773adc` | `#5a2ca8` |
| Security | `#ff8c00` | `#cc7000` |
| AI/ML | `#50e6ff` | `#3cb5cc` |
---
## GCP Palette
| Service Category | Background | Stroke |
|-----------------|------------|--------|
| Compute (GCE, Cloud Run) | `#4285f4` | `#3367d6` |
| Storage (GCS) | `#34a853` | `#2d8e47` |
| Database (Cloud SQL, Firestore) | `#ea4335` | `#c53929` |
| Networking | `#fbbc04` | `#d99e04` |
| AI/ML (Vertex AI) | `#9334e6` | `#7627b8` |
---
## Kubernetes Palette
| Component | Background | Stroke |
|-----------|------------|--------|
| Pod | `#326ce5` | `#2756b8` |
| Service | `#326ce5` | `#2756b8` |
| Deployment | `#326ce5` | `#2756b8` |
| ConfigMap/Secret | `#7f8c8d` | `#626d6e` |
| Ingress | `#00d4aa` | `#00a888` |
| Node | `#303030` | `#1a1a1a` |
| Namespace | `#f0f0f0` | `#c0c0c0` (dashed) |
---
## Diagram Type Suggestions
| Diagram Type | Recommended Layout | Key Elements |
|--------------|-------------------|--------------|
| Microservices | Vertical flow | Services, databases, queues, API gateway |
| Data Pipeline | Horizontal flow | Sources, transformers, sinks, storage |
| Event-Driven | Hub-and-spoke | Event bus center, producers/consumers |
| Kubernetes | Layered groups | Namespace boxes, pods inside deployments |
| CI/CD | Horizontal flow | Source -> Build -> Test -> Deploy -> Monitor |
| Network | Hierarchical | Internet -> LB -> VPC -> Subnets -> Instances |
| User Flow | Swimlanes | User actions, system responses, external calls |

# Complete Examples Reference
Full JSON examples showing proper element structure.
---
## 3-Tier Architecture Example
This is a REFERENCE showing JSON structure. Replace IDs, labels, positions, and colors based on discovered components.
```json
{
"type": "excalidraw",
"version": 2,
"source": "claude-code-excalidraw-skill",
"elements": [
{
"id": "user",
"type": "ellipse",
"x": 150,
"y": 50,
"width": 100,
"height": 60,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#e7f5ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 2 },
"seed": 1,
"version": 1,
"versionNonce": 1,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "user-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "user-text",
"type": "text",
"x": 175,
"y": 67,
"width": 50,
"height": 25,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 2,
"version": 1,
"versionNonce": 2,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "User",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "user",
"originalText": "User",
"lineHeight": 1.25
},
{
"id": "frontend",
"type": "rectangle",
"x": 100,
"y": 180,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 3,
"version": 1,
"versionNonce": 3,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "frontend-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "frontend-text",
"type": "text",
"x": 105,
"y": 195,
"width": 190,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 4,
"version": 1,
"versionNonce": 4,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "Frontend\nNext.js",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "frontend",
"originalText": "Frontend\nNext.js",
"lineHeight": 1.25
},
{
"id": "database",
"type": "rectangle",
"x": 100,
"y": 330,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#2f9e44",
"backgroundColor": "#b2f2bb",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 5,
"version": 1,
"versionNonce": 5,
"isDeleted": false,
"boundElements": [{ "type": "text", "id": "database-text" }],
"updated": 1,
"link": null,
"locked": false
},
{
"id": "database-text",
"type": "text",
"x": 105,
"y": 345,
"width": 190,
"height": 50,
"angle": 0,
"strokeColor": "#1e1e1e",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 1,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 6,
"version": 1,
"versionNonce": 6,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"text": "Database\nPostgreSQL",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"baseline": 14,
"containerId": "database",
"originalText": "Database\nPostgreSQL",
"lineHeight": 1.25
},
{
"id": "arrow-user-frontend",
"type": "arrow",
"x": 200,
"y": 115,
"width": 0,
"height": 60,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 7,
"version": 1,
"versionNonce": 7,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"points": [[0, 0], [0, 60]],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": true
},
{
"id": "arrow-frontend-database",
"type": "arrow",
"x": 200,
"y": 265,
"width": 0,
"height": 60,
"angle": 0,
"strokeColor": "#2f9e44",
"backgroundColor": "transparent",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 0,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": null,
"seed": 8,
"version": 1,
"versionNonce": 8,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false,
"points": [[0, 0], [0, 60]],
"lastCommittedPoint": null,
"startBinding": null,
"endBinding": null,
"startArrowhead": null,
"endArrowhead": "arrow",
"elbowed": true
}
],
"appState": {
"gridSize": 20,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
```
---
## Layout Patterns
### Vertical Flow (Most Common)
```
Grid positioning:
- Column width: 200-250px
- Row height: 130-150px
- Element size: 160-200px x 80-90px
- Spacing: 40-50px between elements
Row positions (y):
Row 0: 20 (title)
Row 1: 100 (users/entry points)
Row 2: 230 (frontend/gateway)
Row 3: 380 (orchestration)
Row 4: 530 (services)
Row 5: 680 (data layer)
Row 6: 830 (external services)
Column positions (x):
Col 0: 100
Col 1: 300
Col 2: 500
Col 3: 700
Col 4: 900
```
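The grid above can be turned into concrete positions with a simple lookup (a sketch; `grid_position` is a hypothetical helper, and the row/column values mirror the layout table):

```python
ROWS = {0: 20, 1: 100, 2: 230, 3: 380, 4: 530, 5: 680, 6: 830}
COLS = {0: 100, 1: 300, 2: 500, 3: 700, 4: 900}

def grid_position(row, col, width=180, height=80):
    """Top-left corner plus size for an element placed at a grid cell."""
    return {"x": COLS[col], "y": ROWS[row], "width": width, "height": height}

print(grid_position(4, 2))  # {'x': 500, 'y': 530, 'width': 180, 'height': 80}
```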
### Horizontal Flow (Pipelines)
```
Stage positions (x):
Stage 0: 100 (input/source)
Stage 1: 350 (transform 1)
Stage 2: 600 (transform 2)
Stage 3: 850 (transform 3)
Stage 4: 1100 (output/sink)
All stages at same y: 200
Arrows: "right" -> "left" connections
```
### Hub-and-Spoke
```
Center hub: x=500, y=350
8 positions at 45° increments:
N: (500, 150)
NE: (640, 210)
E: (700, 350)
SE: (640, 490)
S: (500, 550)
SW: (360, 490)
W: (300, 350)
NW: (360, 210)
```
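The eight spoke positions can be derived from the hub center with basic trigonometry (a sketch assuming a 200px radius; the listing above rounds coordinates more coarsely, so values differ by a pixel or two):

```python
import math

def spoke_positions(cx, cy, radius, count=8):
    """Spoke centers at even angular steps, starting due north, clockwise."""
    positions = []
    for i in range(count):
        theta = math.radians(360 * i / count)   # 0 rad = north
        x = cx + radius * math.sin(theta)
        y = cy - radius * math.cos(theta)
        positions.append((round(x), round(y)))
    return positions

print(spoke_positions(500, 350, 200))
# [(500, 150), (641, 209), (700, 350), (641, 491),
#  (500, 550), (359, 491), (300, 350), (359, 209)]
```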
---
## Complex Architecture Layout
```
Row 0: Title/Header (y: 20)
Row 1: Users/Clients (y: 80)
Row 2: Frontend/Gateway (y: 200)
Row 3: Orchestration (y: 350)
Row 4: Processing Services (y: 550)
Row 5: Data Layer (y: 680)
Row 6: External Services (y: 830)
Columns (x):
Col 0: 120
Col 1: 320
Col 2: 520
Col 3: 720
Col 4: 920
```
---
## Diagram Complexity Guidelines
| Complexity | Max Elements | Max Arrows | Approach |
|------------|-------------|------------|----------|
| Simple | 5-10 | 5-10 | Single file, no groups |
| Medium | 10-25 | 15-30 | Use grouping rectangles |
| Complex | 25-50 | 30-60 | Split into multiple diagrams |
| Very Complex | 50+ | 60+ | Multiple focused diagrams |
**When to split:**
- More than 50 elements
- Create: `architecture-overview.excalidraw`, `architecture-data-layer.excalidraw`
**When to use groups:**
- 3+ related services
- Same deployment unit
- Logical boundaries (VPC, Security Zone)

# Excalidraw JSON Format Reference
Complete reference for Excalidraw JSON structure and element types.
---
## File Structure
```json
{
"type": "excalidraw",
"version": 2,
"source": "claude-code-excalidraw-skill",
"elements": [],
"appState": {
"gridSize": 20,
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
```
---
## Element Types
| Type | Use For | Arrow Reliability |
|------|---------|-------------------|
| `rectangle` | Services, components, databases, containers, orchestrators, decision points | Excellent |
| `ellipse` | Users, external systems, start/end points | Good |
| `text` | Labels inside shapes, titles, annotations | N/A |
| `arrow` | Data flow, connections, dependencies | N/A |
| `line` | Grouping boundaries, separators | N/A |
### BANNED: Diamond Shapes
**NEVER use `type: "diamond"` in generated diagrams.**
Diamond arrow connections are fundamentally broken in raw Excalidraw JSON:
- Excalidraw applies `roundness` to diamond vertices during rendering
- Visual edges appear offset from mathematical edge points
- No offset formula reliably compensates
- Arrows appear disconnected/floating
**Use styled rectangles instead** for visual distinction:
| Semantic Meaning | Rectangle Style |
|------------------|-----------------|
| Orchestrator/Hub | Coral (`#ffa8a8`/`#c92a2a`) + strokeWidth: 3 |
| Decision Point | Orange (`#ffd8a8`/`#e8590c`) + dashed stroke |
| Central Router | Larger size + bold color |
---
## Required Element Properties
Every element MUST have these properties:
```json
{
"id": "unique-id-string",
"type": "rectangle",
"x": 100,
"y": 100,
"width": 200,
"height": 80,
"angle": 0,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"fillStyle": "solid",
"strokeWidth": 2,
"strokeStyle": "solid",
"roughness": 1,
"opacity": 100,
"groupIds": [],
"frameId": null,
"roundness": { "type": 3 },
"seed": 1,
"version": 1,
"versionNonce": 1,
"isDeleted": false,
"boundElements": null,
"updated": 1,
"link": null,
"locked": false
}
```
---
## Text Inside Shapes (Labels)
**Every labeled shape requires TWO elements:**
### Shape with boundElements
```json
{
"id": "{component-id}",
"type": "rectangle",
"x": 500,
"y": 200,
"width": 200,
"height": 90,
"strokeColor": "#1971c2",
"backgroundColor": "#a5d8ff",
"boundElements": [{ "type": "text", "id": "{component-id}-text" }],
// ... other required properties
}
```
### Text with containerId
```json
{
"id": "{component-id}-text",
"type": "text",
"x": 505, // shape.x + 5
"y": 220, // shape.y + (shape.height - text.height) / 2
"width": 190, // shape.width - 10
"height": 50,
"text": "{Component Name}\n{Subtitle}",
"fontSize": 16,
"fontFamily": 1,
"textAlign": "center",
"verticalAlign": "middle",
"containerId": "{component-id}",
"originalText": "{Component Name}\n{Subtitle}",
"lineHeight": 1.25,
// ... other required properties
}
```
### DO NOT Use the `label` Property
The `label` property is for the JavaScript API, NOT raw JSON files:
```json
// WRONG - will show empty boxes
{ "type": "rectangle", "label": { "text": "My Label" } }
// CORRECT - requires TWO elements
// 1. Shape with boundElements reference
// 2. Separate text element with containerId
```
### Text Positioning
- Text `x` = shape `x` + 5
- Text `y` = shape `y` + (shape.height - text.height) / 2
- Text `width` = shape `width` - 10
- Use `\n` for multi-line labels
- Always use `textAlign: "center"` and `verticalAlign: "middle"`
### ID Naming Convention
Always use pattern: `{shape-id}-text` for text element IDs.
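The positioning rules can be applied mechanically (illustrative sketch; `text_geometry` is a hypothetical helper assuming a 50px-tall text block):

```python
def text_geometry(shape, text_height=50):
    """Position a bound text element inside its container shape."""
    return {
        "x": shape["x"] + 5,
        "y": shape["y"] + (shape["height"] - text_height) / 2,
        "width": shape["width"] - 10,
        "height": text_height,
    }

# Matches the "Text with containerId" example above: shape at (500, 200), 200x90
geo = text_geometry({"x": 500, "y": 200, "width": 200, "height": 90})
# geo == {"x": 505, "y": 220.0, "width": 190, "height": 50}
```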
---
## Dynamic ID Generation
IDs and labels are generated from codebase analysis:
| Discovered Component | Generated ID | Generated Label |
|---------------------|--------------|-----------------|
| Express API server | `express-api` | `"API Server\nExpress.js"` |
| PostgreSQL database | `postgres-db` | `"PostgreSQL\nDatabase"` |
| Redis cache | `redis-cache` | `"Redis\nCache Layer"` |
| S3 bucket for uploads | `s3-uploads` | `"S3 Bucket\nuploads/"` |
| Lambda function | `lambda-processor` | `"Lambda\nProcessor"` |
| React frontend | `react-frontend` | `"React App\nFrontend"` |
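When no curated abbreviation applies, a mechanical fallback can slugify the discovered name (illustrative sketch; the IDs in the table above are hand-abbreviated, which is preferable when the mapping is known):

```python
import re

def component_id(name):
    """Mechanical fallback: kebab-case slug of a discovered component name."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

print(component_id("Express API server"))   # express-api-server
print(component_id("S3 bucket (uploads)"))  # s3-bucket-uploads
```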
---
## Grouping with Dashed Rectangles
For logical groupings (namespaces, VPCs, pipelines):
```json
{
"id": "group-ai-pipeline",
"type": "rectangle",
"x": 100,
"y": 500,
"width": 1000,
"height": 280,
"strokeColor": "#9c36b5",
"backgroundColor": "transparent",
"strokeStyle": "dashed",
"roughness": 0,
"roundness": null,
"boundElements": null
}
```
Group labels are standalone text (no containerId) at top-left:
```json
{
"id": "group-ai-pipeline-label",
"type": "text",
"x": 120,
"y": 510,
"text": "AI Processing Pipeline (Cloud Run)",
"textAlign": "left",
"verticalAlign": "top",
"containerId": null
}
```

# Validation Reference
Checklists, validation algorithms, and common bug fixes.
---
## Pre-Flight Validation Algorithm
Run BEFORE writing the file:
```
FUNCTION validateDiagram(elements):
errors = []
// 1. Validate shape-text bindings
FOR each shape IN elements WHERE shape.boundElements != null:
FOR each binding IN shape.boundElements:
textElement = findById(elements, binding.id)
IF textElement == null:
errors.append("Shape {shape.id} references missing text {binding.id}")
ELSE IF textElement.containerId != shape.id:
errors.append("Text containerId doesn't match shape")
// 2. Validate arrow connections
FOR each arrow IN elements WHERE arrow.type == "arrow":
sourceShape = findShapeNear(elements, arrow.x, arrow.y)
IF sourceShape == null:
errors.append("Arrow {arrow.id} doesn't start from shape edge")
finalPoint = arrow.points[arrow.points.length - 1]
endX = arrow.x + finalPoint[0]
endY = arrow.y + finalPoint[1]
targetShape = findShapeNear(elements, endX, endY)
IF targetShape == null:
errors.append("Arrow {arrow.id} doesn't end at shape edge")
IF arrow.points.length > 2:
IF arrow.elbowed != true:
errors.append("Arrow {arrow.id} missing elbowed:true")
IF arrow.roundness != null:
errors.append("Arrow {arrow.id} should have roundness:null")
// 3. Validate unique IDs
ids = [el.id for el in elements]
duplicates = findDuplicates(ids)
IF duplicates.length > 0:
errors.append("Duplicate IDs: {duplicates}")
// 4. Validate bounding boxes
FOR each arrow IN elements WHERE arrow.type == "arrow":
maxX = max(abs(p[0]) for p in arrow.points)
maxY = max(abs(p[1]) for p in arrow.points)
IF arrow.width < maxX OR arrow.height < maxY:
errors.append("Arrow {arrow.id} bounding box too small")
RETURN errors
FUNCTION findShapeNear(elements, x, y, tolerance=15):
FOR each shape IN elements WHERE shape.type IN ["rectangle", "ellipse"]:
edges = [
(shape.x + shape.width/2, shape.y), // top
(shape.x + shape.width/2, shape.y + shape.height), // bottom
(shape.x, shape.y + shape.height/2), // left
(shape.x + shape.width, shape.y + shape.height/2) // right
]
FOR each edge IN edges:
IF abs(edge.x - x) < tolerance AND abs(edge.y - y) < tolerance:
RETURN shape
RETURN null
```
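A condensed, runnable version of the algorithm might look like this (a sketch covering the binding, elbow-property, and duplicate-ID checks; the geometric edge-proximity and bounding-box checks are omitted for brevity):

```python
def validate_diagram(elements):
    """Condensed pre-flight checks: duplicate IDs, text bindings, elbow props."""
    errors = []
    by_id = {}
    for el in elements:
        if el["id"] in by_id:
            errors.append(f"Duplicate ID: {el['id']}")
        by_id[el["id"]] = el

    for el in elements:
        for binding in el.get("boundElements") or []:
            if binding["type"] != "text":
                continue
            text = by_id.get(binding["id"])
            if text is None:
                errors.append(f"{el['id']} references missing text {binding['id']}")
            elif text.get("containerId") != el["id"]:
                errors.append(f"{binding['id']} containerId does not match {el['id']}")

    for el in elements:
        if el.get("type") == "arrow" and len(el.get("points", [])) > 2:
            if not el.get("elbowed"):
                errors.append(f"{el['id']} missing elbowed:true")
            if el.get("roundness") is not None:
                errors.append(f"{el['id']} should have roundness:null")
    return errors

good = [
    {"id": "box", "type": "rectangle",
     "boundElements": [{"type": "text", "id": "box-text"}]},
    {"id": "box-text", "type": "text", "containerId": "box"},
]
assert validate_diagram(good) == []
```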
---
## Checklists
### Before Generating
- [ ] Identified all components from codebase
- [ ] Mapped all connections/data flows
- [ ] Chose layout pattern (vertical, horizontal, hub-and-spoke)
- [ ] Selected color palette (default, AWS, Azure, K8s)
- [ ] Planned grid positions
- [ ] Created ID naming scheme
### During Generation
- [ ] Every labeled shape has BOTH shape AND text elements
- [ ] Shape has `boundElements: [{ "type": "text", "id": "{id}-text" }]`
- [ ] Text has `containerId: "{shape-id}"`
- [ ] Multi-point arrows have `elbowed: true`, `roundness: null`, `roughness: 0`
- [ ] Arrows have `startBinding` and `endBinding`
- [ ] No diamond shapes used
- [ ] Applied staggering formula for multiple arrows
### Arrow Validation (Every Arrow)
- [ ] Arrow `x,y` calculated from shape edge
- [ ] Final point offset = `targetEdge - sourceEdge`
- [ ] Arrow `width` = `max(abs(point[0]))`
- [ ] Arrow `height` = `max(abs(point[1]))`
- [ ] U-turn arrows have 40-60px clearance
### After Generation
- [ ] All `boundElements` IDs reference valid text elements
- [ ] All `containerId` values reference valid shapes
- [ ] All arrows start within 15px of shape edge
- [ ] All arrows end within 15px of shape edge
- [ ] No duplicate IDs
- [ ] Arrow bounding boxes match points
- [ ] File is valid JSON
---
## Common Bugs and Fixes
### Bug: Arrow appears disconnected/floating
**Cause**: Arrow `x,y` not calculated from shape edge.
**Fix**:
```
Rectangle bottom: arrow_x = shape.x + shape.width/2
arrow_y = shape.y + shape.height
```
### Bug: Arrow endpoint doesn't reach target
**Cause**: Final point offset calculated incorrectly.
**Fix**:
```
target_edge = (target.x + target.width/2, target.y)
offset_x = target_edge.x - arrow.x
offset_y = target_edge.y - arrow.y
Final point = [offset_x, offset_y]
```
### Bug: Multiple arrows from same source overlap
**Cause**: All arrows start from identical `x,y`.
**Fix**: Stagger start positions:
```
For 5 arrows from bottom edge:
arrow1.x = shape.x + shape.width * 0.2
arrow2.x = shape.x + shape.width * 0.35
arrow3.x = shape.x + shape.width * 0.5
arrow4.x = shape.x + shape.width * 0.65
arrow5.x = shape.x + shape.width * 0.8
```
### Bug: Callback arrow doesn't loop correctly
**Cause**: U-turn path lacks clearance.
**Fix**: Use 4-point path:
```
Points = [[0, 0], [clearance, 0], [clearance, -vert], [final_x, -vert]]
clearance = 40-60px
```
### Bug: Labels don't appear inside shapes
**Cause**: Using `label` property instead of separate text element.
**Fix**: Create TWO elements:
1. Shape with `boundElements` referencing text
2. Text with `containerId` referencing shape
### Bug: Arrows are curved, not 90-degree
**Cause**: Missing elbow properties.
**Fix**: Add all three:
```json
{
"roughness": 0,
"roundness": null,
"elbowed": true
}
```

```diff
@@ -1,6 +1,6 @@
 ---
 name: skill-creator
-description: (opencode - Skill) Guide for creating effective Opencode Agent Skills. Use this when users want to create a new skill (or update an existing skill) that extends Opencode's capabilities with specialized knowledge, workflows, or tool integrations.
+description: Guide for creating effective Opencode Agent Skills. Use this when users want to create a new skill (or update an existing skill) that extends Opencode's capabilities with specialized knowledge, workflows, or tool integrations.
 compatibility: opencode
 ---
```