chore: repo cleanup

This commit is contained in:
sascha.koenig
2026-03-31 19:13:10 +02:00
parent 586d1484ec
commit a05558b811
8 changed files with 251 additions and 780 deletions

.gitattributes (vendored): 3 lines changed

@@ -1,3 +0,0 @@
# Use bd merge for beads JSONL files
.beads/issues.jsonl merge=beads

AGENTS.md: 309 lines changed

@@ -2,174 +2,239 @@
Configuration repository for Opencode Agent Skills, context files, and agent configurations. Deployed via Nix home-manager to `~/.config/opencode/`.
## Build / Lint / Test Commands
```bash
# Validate a single skill (PRIMARY quality gate)
./scripts/test-skill.sh <skill-name>
python3 skills/skill-creator/scripts/quick_validate.py skills/<skill-name>

# Validate all skills
./scripts/test-skill.sh --validate

# Validate agent configuration (agents.json + prompt files)
./scripts/validate-agents.sh

# Launch interactive opencode with dev skills (test without deploying)
./scripts/test-skill.sh --run

# Test with external skills.sh repos merged in
./scripts/test-skill.sh --run --external /path/to/external/skills

# Scaffold a new skill
python3 skills/skill-creator/scripts/init_skill.py <name> --path skills/

# Enter dev shell (provides Python, jq, poppler, playwright)
nix develop
# or with direnv:
direnv allow
```
**No automated CI.** All validation is manual via the scripts above.
## Directory Structure
```
.
├── skills/              # Agent skills (one subdirectory per skill)
│   └── skill-name/
│       ├── SKILL.md     # Required: YAML frontmatter + workflows
│       ├── scripts/     # Executable code (optional)
│       ├── references/  # Domain docs (optional)
│       └── assets/      # Templates/files (optional)
├── rules/               # AI coding rules consumed by mkOpencodeRules
│   ├── languages/       # python.md, typescript.md, nix.md, shell.md
│   ├── concerns/        # coding-style.md, naming.md, testing.md, git-workflow.md, etc.
│   └── frameworks/      # Framework-specific rules (n8n.md)
├── agents/              # agents.json — embedded into opencode config.json
├── prompts/             # System prompts (chiron.txt, chiron-forge.txt, etc.)
├── context/             # User profile (profile.md)
├── commands/            # Custom command definitions (reflection.md)
├── scripts/             # Repo utilities (test-skill.sh, validate-agents.sh)
├── flake.nix            # Nix flake: devShells, packages, lib.mkOpencodeSkills
└── .envrc               # direnv: activates nix develop automatically
```
## SKILL.md Structure (Required Format)
```yaml
---
name: skill-name
description: "Use when: (1) X, (2) Y. Triggers: keyword-a, keyword-b."
compatibility: opencode
---
## Overview
One-line summary.
## Core Workflows
Step-by-step instructions for the AI agent.
## Integration with Other Skills
When and how to delegate to other skills.
```
**YAML frontmatter is the primary quality gate.** The `quick_validate.py` script checks that `name`, `description`, and `compatibility` fields are present and well-formed.
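The shape of that check can be sketched as follows. This is an illustrative sketch only, not the actual `quick_validate.py` code; the field names come from the format above:

```python
# Sketch of a SKILL.md frontmatter check (illustrative, not the real quick_validate.py)
import re

REQUIRED_FIELDS = ("name", "description", "compatibility")

def validate_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in a SKILL.md's YAML frontmatter."""
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter block"]
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    # Every required field must be present and non-empty
    return [f"missing or empty field: {field}"
            for field in REQUIRED_FIELDS if not fields.get(field)]

good = '---\nname: demo\ndescription: "Use when: X"\ncompatibility: opencode\n---\n# Demo\n'
assert validate_frontmatter(good) == []
assert validate_frontmatter("# no frontmatter\n") == ["missing YAML frontmatter block"]
```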
## Code Style Guidelines
### General (All Languages)
- Prioritize readability over cleverness
- Fail fast and explicitly — never silently swallow errors
- Keep functions under 20 lines; extract duplicated logic
- Use guard clauses to reduce nesting (avoid arrow-shaped code)
- Validate inputs at function boundaries
- Write self-documenting code; comments explain **why**, not **what**
- Never commit commented-out code
### Python
- **Shebang**: `#!/usr/bin/env python3`
- **Docstrings**: Google-style (`Args:`, `Returns:`, `Raises:`)
- **Formatting**: `ruff` with `line-length = 100`, `quote-style = "double"`
- **Types**: Full type annotations; use `pyright` in strict mode
- **Imports**: Explicit only — no `from module import *`; stdlib → third-party → local
- **Error handling**: Catch specific exceptions; always log context, never `except: pass`
- **Defaults**: Use `None` sentinel, not mutable defaults (`def f(x=None): if x is None: x = []`)
- **State**: Avoid `global`; encapsulate state in classes
- **Feedback**: Use emoji in user-facing output (`✅` success, `❌` error, `⚠️` warning)
- **Package management**: `uv` for projects; `pyproject.toml` with `[tool.ruff]` and `[tool.pyright]`
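The defaults and error-handling rules above can be sketched together; function and variable names here are illustrative, not from this repo:

```python
#!/usr/bin/env python3
"""Sketch of the Python conventions: None sentinel, specific exceptions, logged context."""
import logging

logger = logging.getLogger(__name__)


def collect_tags(tag=None, existing=None):
    """Append a tag to a list without sharing state across calls.

    Args:
        tag: Optional tag to append.
        existing: Optional list to extend; a fresh list is created when None.

    Returns:
        The list of tags.
    """
    if existing is None:  # None sentinel instead of a mutable default
        existing = []
    if tag is not None:
        existing.append(tag)
    return existing


def read_config(path):
    """Read a config file, failing fast with context on error."""
    try:
        with open(path, encoding="utf-8") as fh:
            return fh.read()
    except FileNotFoundError:  # specific exception, never bare `except:`
        logger.error("❌ Config not found: %s", path)
        raise


# Each call gets a fresh list, so no state leaks between calls
assert collect_tags("a") == ["a"]
assert collect_tags("b") == ["b"]
```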
### Bash / Shell
- **Shebang**: `#!/usr/bin/env bash` (never `/bin/bash`)
- **Strict mode**: `set -euo pipefail` at the top of every script
- **Variables**: Always quote: `"${var}"`, use arrays for lists
- **Functions**: Parentheses style (`my_func() { local var; ... }`)
- **Substitution**: Use `$()` not backticks
- **Cleanup**: Use `trap cleanup EXIT` for temp files/dirs
- **Indentation**: 2 spaces; lines ≤ 80 chars
- **Lint**: Run `shellcheck` before committing
- **Colors**: Define `RED`, `GREEN`, `YELLOW`, `NC` constants for terminal output
### Nix
- **Formatting**: `alejandra` (2-space indent, no trailing whitespace)
- **Naming**: camelCase for variables, PascalCase for types, hyphen-case for files
- **Packages**: Explicit `pkgs.vim` references — avoid `with pkgs;` namespace pollution
- **Inputs**: Always use flake inputs, never `import <nixpkgs>` or `builtins.fetchTarball`
- **Conditionals**: Use `lib.mkIf`, `lib.mkMerge`, `lib.mkOptionDefault`
- **Attributes**: Use `lib.attrByPath`/`lib.optionalAttrs` instead of `builtins.getAttr`
### Markdown
- YAML frontmatter where required (skills, commands)
- ATX-style headers (`##`, not underlines)
- `-` for unordered lists (not `*`)
- Always specify language in fenced code blocks
## Naming Conventions
| Context | Python | TypeScript | Nix | Shell |
| -------------- | ------------ | ------------ | ------------ | --------------- |
| Variables | `snake_case` | `camelCase` | `camelCase` | `UPPER_SNAKE` |
| Functions | `snake_case` | `camelCase` | `camelCase` | `lower_case` |
| Classes | `PascalCase` | `PascalCase` | — | — |
| Constants | `UPPER_SNAKE`| `UPPER_SNAKE`| `camelCase` | `UPPER_SNAKE` |
| Files | `snake_case` | `camelCase` | `hyphen-case`| `hyphen-case` |
| Skill dirs | `hyphen-case`| — | — | — |
| Markdown files | `UPPERCASE.md` or `sentence-case.md` | | | |
Function names: verb-noun pattern (`get_user_data`, `validate_skill`). Classes: descriptive nouns, no abbreviations.
## Anti-Patterns (CRITICAL — Never Do These)
**Skills:**
- NEVER place scripts or docs outside `scripts/` and `references/` subdirectories
- NEVER add `README.md` or `CHANGELOG.md` inside a skill directory
- NEVER create a skill without valid YAML frontmatter
**Frontend Design:**
- NEVER use generic AI aesthetics; NEVER converge on common design choices
**Excalidraw:**
- NEVER use the `label` property; use `boundElements` + separate text elements
**Debugging:**
- NEVER fix just the symptom; ALWAYS find and address the root cause first
**Excel / Spreadsheets:**
- ALWAYS respect existing template conventions over general guidelines
**Python:**
- NEVER use bare `except:` — always catch specific exception types
- NEVER use mutable default arguments
**Nix:**
- NEVER use `with pkgs;` — always use explicit `pkgs.packageName` references
## Testing Patterns
This repo is **documentation-only** (no compilation, no CI). Testing is skill-focused:
```bash
# Validate single skill's YAML frontmatter and structure
python3 skills/skill-creator/scripts/quick_validate.py skills/<skill-name>
# Validate all skills
./scripts/test-skill.sh --validate
# Live integration test: launch opencode with dev skills
./scripts/test-skill.sh --run
```
**Test structure for Python scripts** (when writing `scripts/*.py`):
- Use `pytest` + `hypothesis` for property-based tests
- Arrange-Act-Assert pattern; one behavior per test
- Test public contracts and observable behavior, not internals
- Mock external I/O (network, filesystem); don't mock internal logic
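The test shape above can be sketched as follows, assuming a hypothetical `slugify` helper standing in for a real skill script:

```python
# test_slugify.py - Arrange-Act-Assert; one behavior per test.
# `slugify` is a hypothetical helper, not a function from this repo.


def slugify(name: str) -> str:
    """Hypothetical helper under test: lowercase, spaces to hyphens."""
    return "-".join(name.lower().split())


def test_slugify_basic():
    # Arrange
    name = "Skill Creator"
    # Act
    result = slugify(name)
    # Assert: test the public contract, not internals
    assert result == "skill-creator"


def test_slugify_never_contains_spaces():
    # Property-style check over sample inputs; with hypothesis this
    # would be `@given(st.text())` generating the inputs instead.
    for name in ["a b", "  x  ", "Mixed Case Name", ""]:
        assert " " not in slugify(name)


# Run directly, or via `pytest test_slugify.py`
test_slugify_basic()
test_slugify_never_contains_spaces()
```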
**Known structural deviations** (do not replicate):
- `systematic-debugging/test-*.md` — pressure tests in wrong location
- `pdf/forms.md`, `pdf/reference.md` — docs outside `references/`
## Git Workflow
**Commit format**: `<type>(<scope>): <subject>` (Conventional Commits)
- Types: `feat`, `fix`, `refactor`, `docs`, `chore`, `test`, `style`
- Subject: imperative mood, ≤ 72 chars, no trailing period
- Example: `feat(skill-creator): add YAML frontmatter auto-repair`
**Branch naming**: `<type>/<short-description>` (lowercase, hyphens, ≤ 50 chars)
**Session completion workflow**: commit + `git push` (always push at end of session)
## Deployment
**Agent changes** (`agents/agents.json`, `prompts/*.txt`) require `home-manager switch`.
**All other changes** (skills, context, commands) are visible immediately via symlinks.
```nix
# Minimal home-manager setup
xdg.configFile."opencode/skills".source =
  inputs.agents.lib.mkOpencodeSkills {
    pkgs = nixpkgs.legacyPackages.${system};
    customSkills = "${inputs.agents}/skills";
  };
```
See `README.md` for full deployment examples including external skill composition.
## Quality Gates (Before Committing)
1. `./scripts/test-skill.sh --validate` — all skills pass
2. `./scripts/validate-agents.sh` — agent config is valid (if agents/ changed)
3. Python scripts have `#!/usr/bin/env python3` shebang + Google-style docstrings
4. No extraneous files (`README.md`, `CHANGELOG.md`) inside skill directories
5. If skill scripts have new dependencies → update `flake.nix` `pythonEnv` or `paths`
6. Git status clean before pushing
## Notes for AI Agents
1. **Config-only repo** — no compilation step; `./scripts/test-skill.sh --validate` is the build
2. **Skills are documentation** — write for AI consumption with progressive disclosure
3. **Consistent 4-level structure** — `skills/name/{SKILL.md,scripts/,references/,assets/}`
4. **Delegation model** — `Chiron (Assistant)` (plan-only), `Chiron Forge (Builder)` (execute), `Hermes (Communication)` (comms), `Athena (Researcher)` (research), `Apollo (Knowledge Management)` (private knowledge), `Calliope (Writer)` (writing). All use model `zai-coding-plan/glm-5`.
5. **Always push** — end every session with commit + `git push`
6. **Rules system** — `rules/` contains language + concern rules injected into projects via `mkOpencodeRules`; edit these when updating cross-repo coding standards

README.md

@@ -9,7 +9,7 @@ This repository serves as a **personal AI operating system** - a collection of s
- **Productivity & Task Management** - PARA methodology, GTD workflows, project tracking
- **Knowledge Management** - Note-taking, research workflows, information organization
- **AI Development** - Tools for creating new skills and agent configurations
- **Memory & Context** - Knowledge retrieval via QMD, conversation analysis
- **Document Processing** - PDF manipulation, spreadsheet handling, diagram generation
- **Custom Workflows** - Domain-specific automation and specialized agents
@@ -23,18 +23,17 @@ This repository serves as a **personal AI operating system** - a collection of s
│   └── profile.md                # Work style, PARA areas, preferences
├── commands/                     # Custom command definitions
│   └── reflection.md
├── skills/                       # Opencode Agent Skills (14 skills)
│   ├── agent-development/        # Agent creation and configuration
│   ├── brainstorming/            # Ideation & strategic thinking
│   ├── doc-translator/           # Documentation translation
│   ├── excalidraw/               # Architecture diagrams
│   ├── mem0-memory/              # DEPRECATED — replaced by opencode-memory plugin
│   ├── obsidian/                 # Obsidian vault management
│   ├── outline/                  # Outline wiki integration
│   ├── pdf/                      # PDF manipulation toolkit
│   ├── prompt-engineering-patterns/ # Prompt patterns
│   ├── qmd/                      # Knowledge retrieval via QMD
│   ├── reflection/               # Conversation analysis
│   ├── skill-creator/            # Meta-skill for creating skills
│   ├── systematic-debugging/     # Debugging methodology
@@ -258,41 +257,41 @@ python3 skills/skill-creator/scripts/quick_validate.py skills/my-skill-name
## 📚 Available Skills

| Skill | Purpose | Status |
| ------------------------------- | -------------------------------------------------------------- | --------------- |
| **agent-development** | Create and configure Opencode agents | ✅ Active |
| **brainstorming** | General-purpose ideation and strategic thinking | ✅ Active |
| **doc-translator** | Documentation translation to German/Czech with Outline publish | ✅ Active |
| **excalidraw** | Architecture diagrams from codebase analysis | ✅ Active |
| **obsidian** | Obsidian vault management via Local REST API | ✅ Active |
| **outline** | Outline wiki integration for team documentation | ✅ Active |
| **pdf** | PDF manipulation, extraction, creation, and forms | ✅ Active |
| **prompt-engineering-patterns** | Advanced prompt engineering techniques | ✅ Active |
| **qmd** | Knowledge retrieval and memory via Query Markup Documents | ✅ Active |
| **reflection** | Conversation analysis and skill improvement | ✅ Active |
| **skill-creator** | Guide for creating new Opencode skills | ✅ Active |
| **systematic-debugging** | Debugging methodology for bugs and test failures | ✅ Active |
| **xlsx** | Spreadsheet creation, editing, and analysis | ✅ Active |
| **mem0-memory** | Legacy memory skill (deprecated — use opencode-memory plugin) | ⚠️ Deprecated |

## 🤖 AI Agents

### Primary Agents

| Agent | Mode | Purpose |
| -------------------------- | ------- | ---------------------------------------------------- |
| **Chiron (Assistant)** | primary | Read-only analysis, planning, and guidance |
| **Chiron Forge (Builder)** | primary | Full execution and task completion with safety |

### Subagents (Specialists)

| Agent | Domain | Purpose |
| --------------------------------- | ----------------- | ------------------------------------------ |
| **Hermes (Communication)** | Communication | Basecamp, Outlook, MS Teams |
| **Athena (Researcher)** | Research | Outline wiki, documentation, knowledge |
| **Apollo (Knowledge Management)** | Private Knowledge | Obsidian vault, personal notes |
| **Calliope (Writer)** | Writing | Documentation, reports, prose |

**Model**: All agents use `zai-coding-plan/glm-5`.
**Configuration**: `agents/agents.json` + `prompts/*.txt`

## 🛠️ Development
@@ -343,9 +342,8 @@ Before committing:
- **skill-creator/** - Meta-skill with bundled resources
- **reflection/** - Conversation analysis with rating system
- **qmd/** - Knowledge retrieval with QMD query documents
- **brainstorming/** - Framework-based ideation
- **excalidraw/** - Diagram generation with JSON templates and Python renderer
## 🔧 Customization

skills/basecamp/SKILL.md (deleted)

@@ -1,315 +0,0 @@
---
name: basecamp
description: "Use when: (1) Managing Basecamp projects, (2) Working with Basecamp todos and tasks, (3) Reading/updating message boards and campfire, (4) Managing card tables (kanban), (5) Handling email forwards/inbox, (6) Setting up webhooks for automation. Triggers: 'Basecamp', 'project', 'todo', 'card table', 'campfire', 'message board', 'webhook', 'inbox', 'email forwards'."
compatibility: opencode
---
# Basecamp
Basecamp 3 project management integration via MCP server. Provides comprehensive access to projects, todos, messages, card tables (kanban), campfire, inbox, documents, and webhooks.
## Core Workflows
### Finding Projects and Todos
**List all projects:**
```bash
# Get all accessible Basecamp projects
get_projects
```
**Get project details:**
```bash
# Get specific project information including status, tools, and access level
get_project --project_id <id>
```
**Explore todos:**
```bash
# Get all todo lists in a project
get_todolists --project_id <id>
# Get all todos from a specific todo list (handles pagination automatically)
get_todos --recording_id <todo_list_id>
# Search across projects for todos/messages containing keywords
search_basecamp --query <search_term>
```
### Managing Card Tables (Kanban)
**Card tables** are Basecamp's kanban-style workflow management tool.
**Explore card table:**
```bash
# Get card table for a project
get_card_table --project_id <id>
# Get all columns in a card table
get_columns --card_table_id <id>
# Get all cards in a specific column
get_cards --column_id <id>
```
**Manage columns:**
```bash
# Create new column (e.g., "In Progress", "Done")
create_column --card_table_id <id> --title "Column Name"
# Update column title
update_column --column_id <id> --title "New Title"
# Move column to different position
move_column --column_id <id> --position 3
# Update column color
update_column_color --column_id <id> --color "red"
# Put column on hold (freeze work)
put_column_on_hold --column_id <id>
# Remove hold from column (unfreeze work)
remove_column_hold --column_id <id>
```
**Manage cards:**
```bash
# Create new card in a column
create_card --column_id <id> --title "Task Name" --content "Description"
# Update card details
update_card --card_id <id> --title "Updated Title" --content "New content"
# Move card to different column
move_card --card_id <id> --to_column_id <new_column_id>
# Mark card as complete
complete_card --card_id <id>
# Mark card as incomplete
uncomplete_card --card_id <id>
```
**Manage card steps (sub-tasks):**
```bash
# Get all steps for a card
get_card_steps --card_id <id>
# Create new step
create_card_step --card_id <id> --content "Sub-task description"
# Update step
update_card_step --step_id <id> --content "Updated description"
# Delete step
delete_card_step --step_id <id>
# Mark step as complete
complete_card_step --step_id <id>
# Mark step as incomplete
uncomplete_card_step --step_id <id>
```
### Working with Messages and Campfire
**Message board:**
```bash
# Get message board for a project
get_message_board --project_id <id>
# Get all messages from a project
get_messages --project_id <id>
# Get specific message
get_message --message_id <id>
```
**Campfire (team chat):**
```bash
# Get recent campfire lines (messages)
get_campfire_lines --campfire_id <id>
```
**Comments:**
```bash
# Get comments for any Basecamp item (message, todo, card, etc.)
get_comments --recording_id <id>
# Create a comment
create_comment --recording_id <id> --content "Your comment"
```
### Managing Inbox (Email Forwards)
**Inbox** handles email forwarding to Basecamp projects.
**Explore inbox:**
```bash
# Get inbox for a project (email forwards container)
get_inbox --project_id <id>
# Get all forwarded emails from a project's inbox
get_forwards --project_id <id>
# Get specific forwarded email
get_forward --forward_id <id>
# Get all replies to a forwarded email
get_inbox_replies --forward_id <id>
# Get specific reply
get_inbox_reply --reply_id <id>
```
**Manage forwards:**
```bash
# Move forwarded email to trash
trash_forward --forward_id <id>
```
### Documents
**Manage documents:**
```bash
# List documents in a vault
get_documents --vault_id <id>
# Get specific document
get_document --document_id <id>
# Create new document
create_document --vault_id <id> --title "Document Title" --content "Document content"
# Update document
update_document --document_id <id> --title "Updated Title" --content "New content"
# Move document to trash
trash_document --document_id <id>
```
### Webhooks and Automation
**Webhooks** enable automation by triggering external services on Basecamp events.
**Manage webhooks:**
```bash
# List webhooks for a project
get_webhooks --project_id <id>
# Create webhook
create_webhook --project_id <id> --callback_url "https://your-service.com/webhook" --types "TodoCreated,TodoCompleted"
# Delete webhook
delete_webhook --webhook_id <id>
```
### Daily Check-ins
**Project check-ins:**
```bash
# Get daily check-in questions for a project
get_daily_check_ins --project_id <id>
# Get answers to daily check-in questions
get_question_answers --question_id <id>
```
### Attachments and Events
**Upload and track:**
```bash
# Upload file as attachment
create_attachment --recording_id <id> --file_path "/path/to/file"
# Get events for a recording
get_events --recording_id <id>
```
## Integration with Other Skills
### Hermes (Work Communication)
Hermes loads this skill when working with Basecamp projects. Common workflows:
| User Request | Hermes Action | Basecamp Tools Used |
|--------------|---------------|---------------------|
| "Create a task in Marketing project" | Create card/todo | `create_card`, `get_columns`, `create_column` |
| "Check project updates" | Read messages/campfire | `get_messages`, `get_campfire_lines`, `get_comments` |
| "Update my tasks" | Move cards, update status | `move_card`, `complete_card`, `update_card` |
| "Add comment to discussion" | Post comment | `create_comment`, `get_comments` |
| "Review project inbox" | Check email forwards | `get_inbox`, `get_forwards`, `get_inbox_replies` |
### Workflow Patterns
**Project setup:**
1. Use `get_projects` to find existing projects
2. Use `get_project` to verify project details
3. Use `get_todolists` or `get_card_table` to understand project structure
**Task management:**
1. Use `get_todolists` or `get_columns` to find appropriate location
2. Use `create_card` or todo creation to add work
3. Use `move_card`, `complete_card` to update status
4. Use `get_card_steps` and `create_card_step` for sub-task breakdown
**Communication:**
1. Use `get_messages` or `get_campfire_lines` to read discussions
2. Use `create_comment` to contribute to existing items
3. Use `search_basecamp` to find relevant content
**Automation:**
1. Use `get_webhooks` to check existing integrations
2. Use `create_webhook` to set up external notifications
## Tool Organization by Category
**Projects & Lists:**
- `get_projects`, `get_project`, `get_todolists`, `get_todos`, `search_basecamp`
**Card Table (Kanban):**
- `get_card_table`, `get_columns`, `get_column`, `create_column`, `update_column`, `move_column`, `update_column_color`, `put_column_on_hold`, `remove_column_hold`, `watch_column`, `unwatch_column`, `get_cards`, `get_card`, `create_card`, `update_card`, `move_card`, `complete_card`, `uncomplete_card`, `get_card_steps`, `create_card_step`, `get_card_step`, `update_card_step`, `delete_card_step`, `complete_card_step`, `uncomplete_card_step`
**Messages & Communication:**
- `get_message_board`, `get_messages`, `get_message`, `get_campfire_lines`, `get_comments`, `create_comment`
**Inbox (Email Forwards):**
- `get_inbox`, `get_forwards`, `get_forward`, `get_inbox_replies`, `get_inbox_reply`, `trash_forward`
**Documents:**
- `get_documents`, `get_document`, `create_document`, `update_document`, `trash_document`
**Webhooks:**
- `get_webhooks`, `create_webhook`, `delete_webhook`
**Other:**
- `get_daily_check_ins`, `get_question_answers`, `create_attachment`, `get_events`
## Common Queries
**Finding the right project:**
```bash
# Use search to find projects by keyword
search_basecamp --query "marketing"
# Then inspect specific project
get_project --project_id <id>
```
**Understanding project structure:**
```bash
# Check which tools are available in a project
get_project --project_id <id>
# Project response includes tools: message_board, campfire, card_table, todolists, etc.
```
**Bulk operations:**
```bash
# Get all todos across a project (pagination handled automatically)
get_todos --recording_id <todo_list_id>
# Returns all pages of results
# Get all cards across all columns
get_columns --card_table_id <id>
get_cards --column_id <id> # Repeat for each column
```


@@ -1,42 +0,0 @@
---
name: frontend-design
description: Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, or applications. Generates creative, polished code that avoids generic AI aesthetics.
license: Complete terms in LICENSE.txt
---
This skill guides creation of distinctive, production-grade frontend interfaces that avoid generic "AI slop" aesthetics. Implement real working code with exceptional attention to aesthetic details and creative choices.
The user provides frontend requirements: a component, page, application, or interface to build. They may include context about the purpose, audience, or technical constraints.
## Design Thinking
Before coding, understand the context and commit to a BOLD aesthetic direction:
- **Purpose**: What problem does this interface solve? Who uses it?
- **Tone**: Pick an extreme: brutally minimal, maximalist chaos, retro-futuristic, organic/natural, luxury/refined, playful/toy-like, editorial/magazine, brutalist/raw, art deco/geometric, soft/pastel, industrial/utilitarian, and more. Use these for inspiration, but design one that is true to the aesthetic direction.
- **Constraints**: Technical requirements (framework, performance, accessibility).
- **Differentiation**: What makes this UNFORGETTABLE? What's the one thing someone will remember?
**CRITICAL**: Choose a clear conceptual direction and execute it with precision. Bold maximalism and refined minimalism both work - the key is intentionality, not intensity.
Then implement working code (HTML/CSS/JS, React, Vue, etc.) that is:
- Production-grade and functional
- Visually striking and memorable
- Cohesive with a clear aesthetic point-of-view
- Meticulously refined in every detail
## Frontend Aesthetics Guidelines
Focus on:
- **Typography**: Choose fonts that are beautiful, unique, and interesting. Avoid generic fonts like Arial and Inter; opt instead for unexpected, characterful choices that elevate the frontend's aesthetics. Pair a distinctive display font with a refined body font.
- **Color & Theme**: Commit to a cohesive aesthetic. Use CSS variables for consistency. Dominant colors with sharp accents outperform timid, evenly-distributed palettes.
- **Motion**: Use animations for effects and micro-interactions. Prioritize CSS-only solutions for HTML. Use Motion library for React when available. Focus on high-impact moments: one well-orchestrated page load with staggered reveals (animation-delay) creates more delight than scattered micro-interactions. Use scroll-triggering and hover states that surprise.
- **Spatial Composition**: Unexpected layouts. Asymmetry. Overlap. Diagonal flow. Grid-breaking elements. Generous negative space OR controlled density.
- **Backgrounds & Visual Details**: Create atmosphere and depth rather than defaulting to solid colors. Add contextual effects and textures that match the overall aesthetic. Apply creative forms like gradient meshes, noise textures, geometric patterns, layered transparencies, dramatic shadows, decorative borders, custom cursors, and grain overlays.
NEVER use generic AI-generated aesthetics like overused font families (Inter, Roboto, Arial, system fonts), clichéd color schemes (particularly purple gradients on white backgrounds), predictable layouts and component patterns, or cookie-cutter design that lacks context-specific character.
Interpret creatively and make unexpected choices that feel genuinely designed for the context. No design should be the same. Vary between light and dark themes, different fonts, different aesthetics. NEVER converge on common choices (Space Grotesk, for example) across generations.
**IMPORTANT**: Match implementation complexity to the aesthetic vision. Maximalist designs need elaborate code with extensive animations and effects. Minimalist or refined designs need restraint, precision, and careful attention to spacing, typography, and subtle details. Elegance comes from executing the vision well.
Remember: the Coding Agent is capable of extraordinary creative work. Don't hold back, show what can truly be created when thinking outside the box and committing fully to a distinctive vision.


@@ -1,123 +0,0 @@
---
name: kestra-flow
description: Generate, modify, or debug Kestra Flow YAML by fetching the live flow schema and applying the same guardrails used by the Kestra AI Copilot. Use when users ask to create, write, update, or fix a Kestra flow.
compatibility: Requires curl and network access to https://api.kestra.io/v1/plugins/schemas/flow. No Kestra instance required.
---
# Kestra Flow Skill
Use this skill to generate production-ready Kestra Flow YAML grounded in the live schema.
## When to use
Use this skill when the request includes:
- Generating a new Kestra flow from scratch
- Modifying, extending, or debugging an existing flow
- Translating a workflow description into valid Kestra YAML
## Required inputs
- A description of the desired flow behavior
- Namespace (and tenant ID if applicable)
- Existing flow YAML if the request is a modification
## Workflow
### Step 1 — Fetch the flow schema
Fetch the full schema with `curl` and read it directly — do not pipe it through any interpreter:
```bash
curl -s https://api.kestra.io/v1/plugins/schemas/flow
```
Read the raw JSON output and validate every type, property name, and structure used in the generated YAML against it. Do not generate anything before the schema is available.
### Step 2 — Collect context
Identify from the user message or conversation:
- `id` — flow identifier (preserve if provided)
- `namespace` — target namespace (preserve if provided)
- Existing flow YAML (for modification requests)
- Whether this is an **addition / deletion / modification** or a **full rewrite**
### Step 3 — Generate the YAML
Apply all generation rules below, then output raw YAML only.
## Generation rules
**Schema compliance**
- Use only task types and properties explicitly defined in the fetched schema. Never invent or guess types or property names.
- Property keys must be unique within each task or block.
**Structural preservation**
- Always preserve root-level `id` and `namespace` if provided.
- For modification requests, touch only the relevant part. Do not restructure or rewrite unrelated sections.
- Avoid duplicating existing intent (e.g., replace a log message rather than adding a second one).
**Triggers**
- Include at least one trigger if execution should start based on an event or schedule.
- Do NOT add a `Schedule` trigger unless a regular occurrence is explicitly requested.
- Trigger outputs are accessed via `{{ trigger.outputName }}`; only use variables defined in the trigger's declared outputs.
**Looping**
- Use `ForEach` for repeated actions over a collection.
- Use `LoopUntil` for condition-based looping.
**Flow outputs**
- Only include flow-level `outputs` if the user explicitly requests returning a value from the execution.
**State tracking between executions**
- For state-change detection, use KV tasks (`io.kestra.plugin.core.kv.Set` / `io.kestra.plugin.core.kv.Get`) to store and compare state across executions.
**JDBC plugin**
- Always set `fetchType: STORE` when using JDBC tasks.
**Date manipulation in Pebble**
- Use `dateAdd` and `date` filters for date arithmetic.
- Apply `| number` before numeric comparisons.
**Credentials and secrets**
- Never embed secrets or hardcoded credentials.
- Use flow `inputs` of type `SECRET` or Pebble expressions (e.g., `{{ secret('MY_SECRET') }}`).
**APIs and connectivity**
- Prefer public/unauthenticated APIs unless the user specifies otherwise.
- Never assume a local port; use remote URLs.
**Quoting**
- Prefer double quotes; use single quotes inside double-quoted strings when needed.
**Error handling**
- If the request cannot be fulfilled using only schema-defined types and properties, output exactly:
```
I cannot generate a valid Kestra Flow YAML for this request based on the available schema.
```
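As a reference point, a minimal flow that respects these rules might look like the sketch below. `io.kestra.plugin.core.kv.Set` comes from the state-tracking rule above and `io.kestra.plugin.core.log.Log` is a core task type, but per the schema-compliance rule, both must still be verified against the freshly fetched schema before being emitted:
```yaml
# Sketch only - verify every type and property against the fetched schema.
id: state_change_demo        # preserve the user's id/namespace if provided
namespace: company.team
inputs:
  - id: api_token
    type: SECRET             # secrets enter via SECRET inputs, never hardcoded
tasks:
  - id: report
    type: io.kestra.plugin.core.log.Log
    message: "Run started at {{ execution.startDate }}"
  - id: remember_state
    type: io.kestra.plugin.core.kv.Set
    key: last_run
    value: "{{ execution.startDate }}"
# No triggers block: a Schedule is only added when explicitly requested.
```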
## Output format
- **Raw YAML only** — no prose, no markdown fences, no explanations outside the YAML.
- Use `#` comments at the top of the output for any caveats, assumptions, or warnings.
- The output must be ready to paste directly into the Kestra UI or deploy via `kestractl`.
## Example prompts
- "Write a Kestra flow that fetches a public API every hour and stores the result in KV store."
- "Add a Slack notification task to this existing flow when any task fails."
- "Generate a flow in namespace `prod.data` that reads from a Postgres table and writes the result to S3."
- "Debug this flow YAML — it has a trigger variable reference that doesn't exist."


@@ -1,113 +0,0 @@
---
name: kestra-ops
description: Operate Kestra environments using kestractl for context setup, flow inspection, flow validation and deployment, execution monitoring, namespace operations, and namespace file management. Use when users request Kestra operational CLI tasks in dev, staging, or production.
compatibility: Requires kestractl, network access to the Kestra API, and valid tenant/token credentials.
---
# Kestra Operations Skill
Use this skill to perform day-to-day Kestra operations with `kestractl`.
## When to use
Use this skill when the request includes:
- Listing, inspecting, validating, or deploying flows
- Triggering executions and checking execution status
- Managing namespaces or namespace files (`nsfiles`)
- Configuring or switching Kestra CLI contexts
## Required inputs
- Target environment or context (`dev`, `staging`, `prod`)
- Host URL, tenant, and authentication method (usually token)
- Namespace, flow ID, execution ID, and/or local file paths
- Output preference (`table` for human-readable, `json` for automation)
## Prerequisites
- `kestractl` is installed and executable
- Access token and tenant are available
- A valid context exists in `~/.kestractl/config.yaml` or values are provided via env vars/flags
## Configuration precedence
Resolve config from highest to lowest precedence:
1. Command flags (`--host`, `--tenant`, `--token`, `--output`)
2. Environment variables (`KESTRACTL_HOST`, `KESTRACTL_TENANT`, `KESTRACTL_TOKEN`, `KESTRACTL_OUTPUT`)
3. Config file (`~/.kestractl/config.yaml`)
4. Built-in defaults
Common setup:
```bash
kestractl config add dev http://localhost:8080 main --token DEV_TOKEN
kestractl config add prod https://prod.kestra.io production --token PROD_TOKEN
kestractl config use dev
kestractl config show
```
## Standard workflow
1. Resolve and confirm the target context.
2. Run read-only discovery first.
3. Validate artifacts before any deployment.
4. Execute the requested operation with explicit flags.
5. Verify outcomes (`--wait` for run operations where needed).
6. Return a concise ops report with results and follow-up actions.
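One plausible end-to-end pass over these steps, using only the command shapes shown under Command patterns (contexts, namespaces, and paths are placeholders):
```bash
# Steps 1-2: confirm the target context, then read-only discovery
kestractl config use dev
kestractl config show
kestractl flows list my.namespace --output json

# Steps 3-4: validate before deploying, then deploy with explicit flags
kestractl flows validate ./flows/
kestractl flows deploy ./flows/ --namespace my.namespace --override --fail-fast

# Step 5: verify the outcome by running the flow and waiting for completion
kestractl executions run my.namespace my-flow --wait
```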
## Command patterns
Flows:
```bash
kestractl flows list my.namespace
kestractl flows get my.namespace my-flow
kestractl flows validate ./flows/
kestractl flows deploy ./flows/ --namespace prod.namespace --override --fail-fast
```
Executions:
```bash
kestractl executions run my.namespace my-flow --wait
kestractl executions get 2TLGqHrXC9k8BczKJe5djX
```
Namespaces:
```bash
kestractl namespaces list
kestractl namespaces list --query my.namespace
```
Namespace files:
```bash
kestractl nsfiles list my.namespace --path workflows/ --recursive
kestractl nsfiles get my.namespace workflows/example.yaml --revision 3
kestractl nsfiles upload my.namespace ./assets resources --override --fail-fast
kestractl nsfiles delete my.namespace workflows --recursive
```
## Guardrails
- Confirm production context before write operations (`deploy`, `upload`, `delete`).
- Prefer `flows validate` before `flows deploy`.
- Use `--output json` for scripting and automation reliability.
- Avoid `--verbose` in shared logs because it can expose credentials.
- For destructive `nsfiles` actions, confirm path scope and only use `--force` intentionally.
## Response format
- Context used (host, tenant, context name)
- Commands executed (grouped by read vs write)
- Results (success/failure and key IDs)
- Risks, rollback notes, and follow-up actions
## Example prompts
- "Use `kestra-ops` to validate and deploy all flows in `./flows` to `prod.namespace` with fail-fast enabled, then report what changed."
- "Use `kestra-ops` to run `my-flow` in `my.namespace`, wait for completion, and summarize execution status."
- "Use `kestra-ops` to upload `./assets` to namespace files under `resources` with override enabled, then list uploaded files recursively."


@@ -15,7 +15,7 @@ Knowledge management integration via Obsidian Local REST API for vault operations
 - **Vault path** configured in plugin settings
 - **API key** set (optional, if authentication enabled)
-API endpoints available at `http://127.0.0.1:27124` by default.
+API endpoints available at `http://127.0.0.1:27123` by default.
 ## Core Workflows
@@ -24,7 +24,7 @@ API endpoints available at `http://127.0.0.1:27124` by default.
 Get list of all files in vault:
 ```bash
-curl -X GET "http://127.0.0.1:27124/list"
+curl -X GET "http://127.0.0.1:27123/list"
 ```
 Returns array of file objects with `path`, `mtime`, `ctime`, `size`.
@@ -34,7 +34,7 @@ Returns array of file objects with `path`, `mtime`, `ctime`, `size`.
 Retrieve metadata for a specific file:
 ```bash
-curl -X GET "http://127.0.0.1:27124/get-file-info?path=Note%20Title.md"
+curl -X GET "http://127.0.0.1:27123/get-file-info?path=Note%20Title.md"
 ```
 Returns file metadata including tags, links, frontmatter.
@@ -44,12 +44,13 @@ Returns file metadata including tags, links, frontmatter.
 Create a new note in the vault:
 ```bash
-curl -X POST "http://127.0.0.1:27124/create-note" \
+curl -X POST "http://127.0.0.1:27123/create-note" \
   -H "Content-Type: application/json" \
   -d '{"content": "# Note Title\n\nNote content..."}'
 ```
 Use `path` parameter for specific location:
 ```json
 {
   "content": "# Note Title\n\nNote content...",
@@ -62,7 +63,7 @@ Use `path` parameter for specific location:
 Read note content by path:
 ```bash
-curl -X GET "http://127.0.0.1:27124/read-note?path=Note%20Title.md"
+curl -X GET "http://127.0.0.1:27123/read-note?path=Note%20Title.md"
 ```
 Returns note content as plain text or structured JSON with frontmatter parsing.
@@ -72,7 +73,7 @@ Returns note content as plain text or structured JSON with frontmatter parsing.
 Modify existing note:
 ```bash
-curl -X PUT "http://127.0.0.1:27124/update-note" \
+curl -X PUT "http://127.0.0.1:27123/update-note" \
   -H "Content-Type: application/json" \
   -d '{"path": "Note Title.md", "content": "# Updated Title\n\nNew content..."}'
 ```
@@ -82,7 +83,7 @@ curl -X PUT "http://127.0.0.1:27124/update-note" \
 Remove note from vault:
 ```bash
-curl -X DELETE "http://127.0.0.1:27124/delete-note?path=Note%20Title.md"
+curl -X DELETE "http://127.0.0.1:27123/delete-note?path=Note%20Title.md"
 ```
 **Warning**: This operation is irreversible. Confirm with user before executing.
@@ -93,10 +94,10 @@ Find notes by content, title, or tags:
 ```bash
 # Content search
-curl -X GET "http://127.0.0.1:27124/search?q=search%20term"
+curl -X GET "http://127.0.0.1:27123/search?q=search%20term"
 # Search with parameters
-curl -X GET "http://127.0.0.1:27124/search?q=search%20term&path=subdirectory&context-length=100"
+curl -X GET "http://127.0.0.1:27123/search?q=search%20term&path=subdirectory&context-length=100"
 ```
 Returns array of matches with file path and context snippets.
@@ -109,10 +110,10 @@ Retrieve or create daily note for specific date:
 ```bash
 # Today
-curl -X GET "http://127.0.0.1:27124/daily-note"
+curl -X GET "http://127.0.0.1:27123/daily-note"
 # Specific date (YYYY-MM-DD)
-curl -X GET "http://127.0.0.1:27124/daily-note?date=2026-02-03"
+curl -X GET "http://127.0.0.1:27123/daily-note?date=2026-02-03"
 ```
 Returns daily note content or creates using Obsidian's Daily Notes template.
@@ -122,7 +123,7 @@ Returns daily note content or creates using Obsidian's Daily Notes template.
 Modify today's daily note:
 ```bash
-curl -X PUT "http://127.0.0.1:27124/daily-note" \
+curl -X PUT "http://127.0.0.1:27123/daily-note" \
   -H "Content-Type: application/json" \
   -d '{"content": "## Journal\n\nToday I learned..."}'
 ```
@@ -132,7 +133,7 @@ curl -X PUT "http://127.0.0.1:27124/daily-note" \
 Retrieve vault metadata:
 ```bash
-curl -X GET "http://127.0.0.1:27124/vault-info"
+curl -X GET "http://127.0.0.1:27123/vault-info"
 ```
 Returns vault path, file count, and configuration details.
@@ -156,6 +157,7 @@ status: active
 ### WikiLinks
 Reference other notes using Obsidian WikiLinks:
 - `[[Note Title]]` - Link to note by title
 - `[[Note Title|Alias]]` - Link with custom display text
 - `[[Note Title#Heading]]` - Link to specific heading
@@ -164,6 +166,7 @@ Reference other notes using Obsidian WikiLinks:
 ### Tagging
 Use tags for categorization:
 - `#tag` - Single-word tag
 - `#nested/tag` - Hierarchical tags
 - Tags in frontmatter for metadata
@@ -174,7 +177,7 @@ Use tags for categorization:
 ### Create Brainstorm Note
 ```bash
-curl -X POST "http://127.0.0.1:27124/create-note" \
+curl -X POST "http://127.0.0.1:27123/create-note" \
   -H "Content-Type: application/json" \
   -d '{
     "path": "03-resources/brainstorms/2026-02-03-Topic.md",
@@ -186,10 +189,10 @@ curl -X POST "http://127.0.0.1:27124/create-note" \
 ```bash
 # Get current daily note
-NOTE=$(curl -s "http://127.0.0.1:27124/daily-note")
+NOTE=$(curl -s "http://127.0.0.1:27123/daily-note")
 # Append content
-curl -X PUT "http://127.0.0.1:27124/daily-note" \
+curl -X PUT "http://127.0.0.1:27123/daily-note" \
   -H "Content-Type: application/json" \
   -d "{\"content\": \"${NOTE}\n\n## Journal Entry\n\nLearned about Obsidian API integration.\"}"
 ```
@@ -198,10 +201,10 @@ curl -X PUT "http://127.0.0.1:27124/daily-note" \
 ```bash
 # Search for related notes
-curl -s "http://127.0.0.1:27124/search?q=Obsidian"
+curl -s "http://127.0.0.1:27123/search?q=Obsidian"
 # Create note with WikiLinks to found notes
-curl -X POST "http://127.0.0.1:27124/create-note" \
+curl -X POST "http://127.0.0.1:27123/create-note" \
   -H "Content-Type: application/json" \
   -d '{
     "path": "02-areas/Obsidian API Guide.md",
@@ -212,7 +215,7 @@ curl -X POST "http://127.0.0.1:27124/create-note" \
 ## Integration with Other Skills
 | From Obsidian | To skill | Handoff pattern |
-|--------------|----------|----------------|
+| ------------------ | --------------- | ------------------------------------------ |
 | Note created | brainstorming | Create brainstorm note with frontmatter |
 | Daily note updated | reflection | Append conversation analysis to journal |
 | Research note | research | Save research findings with tags |
@@ -248,6 +251,7 @@ See the qmd skill for memory workflows, session summaries, and auto-recall patterns
 ## Error Handling
 Common HTTP status codes:
 - `200 OK` - Success
 - `404 Not Found` - File or resource doesn't exist
 - `400 Bad Request` - Invalid parameters or malformed JSON