Merge pull request 'feat: pi-agent wrapper' (#6) from feature/pi-agent-wrapper into master

Reviewed-on: #6
2026-04-14 18:52:02 +02:00
17 changed files with 1076 additions and 394 deletions

Binary file not shown.

View File

@@ -1,3 +1,3 @@
 {
-  "timestamp": "2026-04-13T19:16:03.510Z"
+  "timestamp": "2026-04-14T15:35:06.339Z"
 }

View File

@@ -1,3 +1,3 @@
 {
-  "timestamp": "2026-04-13T19:16:06.847Z"
+  "timestamp": "2026-04-14T15:35:07.218Z"
 }

View File

@@ -1,3 +1,3 @@
 {
-  "timestamp": "2026-04-13T18:05:03.813Z"
+  "timestamp": "2026-04-14T05:13:57.102Z"
 }

View File

@@ -1,3 +1,3 @@
 {
-  "timestamp": "2026-04-13T18:04:03.698Z"
+  "timestamp": "2026-04-14T05:11:47.088Z"
 }

View File

@@ -2,5 +2,5 @@
   "files": {},
   "turnCycles": 0,
   "maxCycles": 3,
-  "lastUpdated": "2026-04-13T19:16:06.848Z"
+  "lastUpdated": "2026-04-14T15:35:07.218Z"
 }

PLAN.md (125 changed lines)
View File

@@ -1,57 +1,98 @@
 # PLAN
 ## Context
-- Implement **Option A**: run `pi` through a **rootless Podman** container while keeping a native terminal UX.
-- Preserve `flake.nix` + `nix develop` workflows by using the **host Nix daemon** from inside the container.
-- Keep logic centralized in `nixpkgs` and host-specific values in `nixos-config`.
+- Target implementation is confirmed as `m3ta.pi-agent` (no container mode).
+- You want a **fresh-from-scratch rewrite** of `modules/nixos/pi-agent.nix` and to ignore prior behavior as design baseline.
+- Required behavior:
+  - dedicated isolated Unix user/group for Pi (`pi-agent` defaults)
+  - host UX stays `pi`
+  - bypass prevention (wrapper should be the canonical executable path)
+  - per-host-user project root policy (different roots per user)
+  - no writable/access scope beyond isolated Pi home/state + explicitly allowed project roots
+  - isolated environment must include user Pi config from HM (`modules/home-manager/coding/agents/pi.nix`) and support Nix-managed settings/env merging.
+- Repo findings:
+  - `modules/nixos/default.nix` + `flake.nix` already import/export the `pi-agent` module.
+  - `modules/home-manager/coding/agents/pi.nix` already renders Pi config files under a configurable relative path (`coding.agents.pi.path`, default `.pi/agent`).
 ## Approach
-- Extend the existing Home Manager module at `modules/home-manager/coding/agents/pi.nix` with a `coding.agents.pi.container.*` option set.
-- Implement **Option A defaults** from your decisions:
-  - wrapper command name is `pi` (native command replacement),
-  - project roots are mounted read-write,
-  - `autoStart = true` by default,
-  - `autoNixDevelop = false` by default,
-  - `image` default set to `docker.io/nixos/nix:latest` as a conservative base and overridden in host config for a Pi-ready image.
-- Generate a deterministic wrapper script (installed via Home Manager) that:
-  - verifies cwd is within allowed project roots,
-  - ensures rootless container exists/runs,
-  - maps cwd and runs `podman exec -it <container> pi "$@"`,
-  - optionally runs via `nix develop -c pi ...` when `autoNixDevelop=true` and `flake.nix` is present.
-- Configure safe Podman mounts:
-  - allowed project roots only,
-  - host Nix daemon socket (Option A),
-  - minimal Nix config/certs needed for CLI operation.
-- Wire host-specific config in `nixos-config/home/features/coding/pi.nix` and remove direct host `pi` binary installation from the coding package list to avoid command-path ambiguity.
+- Fully replace `modules/nixos/pi-agent.nix` with a new design centered on:
+  1. **Dedicated runtime identity** (`user/group/createUser/stateDir`).
+  2. **Policy-driven wrapper flow** (`pi` -> privileged runner -> isolated execution).
+  3. **Per-user project allowlists** (cwd must be under roots assigned to the invoking host user).
+  4. **Config + env convergence**:
+     - sync user HM Pi config directory (e.g. `~/.pi/agent`) into isolated state,
+     - merge Nix-managed Pi settings into isolated `settings.json`,
+     - merge Nix-managed env vars + env files into isolated runtime env source,
+     - make merged results visible to the isolated runtime on every invocation (without container recreation semantics).
+  5. **Hard isolation defaults** with `systemd-run` sandboxing and explicit bind/read-write paths only for state + allowed projects.
+- Keep the wrapper command as `pi`, and avoid exposing the direct package binary on PATH when wrapper mode is enabled.
 ## Files to modify
-- `modules/home-manager/coding/agents/pi.nix` (new container options + wrapper + container lifecycle logic)
-- `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/pi.nix` (host-specific container settings)
+- `modules/nixos/pi-agent.nix` (full rewrite)
+- `modules/nixos/default.nix` (only if import list changes)
+- `flake.nix` (only if output export attrs change)
+- `docs/guides/pi-agent-isolation.md` (update option model + merge behavior)
+- `docs/guides/using-modules.md` (update examples/options)
 ## Reuse
-- Existing Pi HM module and option namespace:
-  - `modules/home-manager/coding/agents/pi.nix`
-- Existing coding feature wiring in nixos-config:
-  - `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/default.nix`
-  - `/home/m3tam3re/p/NIX/nixos-config/home/features/coding/pi.nix`
+- Module/user/service patterns:
+  - `modules/nixos/mem0.nix`
+  - `templates/nixos-module/default.nix`
+- Pi config rendering contract to consume/sync:
+  - `modules/home-manager/coding/agents/pi.nix` (`coding.agents.pi.path`, `settings.json`, `mcp.json`, agent docs)
 ## Steps
-- [ ] Add `coding.agents.pi.container` options (enable/name/image/projectRoots/autoStart/autoNixDevelop/extraRunArgs/extraEnv) with defaults matching your preferences (`autoStart=true`, `autoNixDevelop=false`, default image as above).
-- [ ] Implement wrapper script generation in HM module with cwd allowlist checks and container create/start/exec behavior.
-- [ ] Make wrapper binary name `pi` (native UX) when container mode is enabled.
-- [ ] Add deterministic container run/create args with safe mounts and host Nix daemon socket.
-- [ ] Add optional in-container `nix develop -c pi` path when a flake project is detected.
-- [ ] Wire host-specific values in nixos-config `home/features/coding/pi.nix`.
-- [ ] Remove direct host `pi` package install in nixos-config coding packages so wrapper is the effective `pi` command.
-- [ ] Validate eval/build and document command outputs for flake and non-flake wrapper behavior.
+- [ ] Define the new `m3ta.pi-agent` option schema for fresh module behavior, including:
+  - base runtime options (`package`, `binaryName`, `user`, `group`, `createUser`, `stateDir`),
+  - wrapper controls (`enable`, `commandName`, runner name, hide-direct-binary behavior),
+  - per-user policy map (allowed users and each user's allowed project roots),
+  - host-config sync knobs (source path relative/absolute),
+  - Nix-managed settings/env options for merge.
+- [ ] Implement new wrapper script:
+  - identify invoking user,
+  - validate user exists in policy map,
+  - expand/resolve that user's roots,
+  - deny out-of-policy cwd,
+  - escalate only to the dedicated runner.
+- [ ] Implement new privileged runner script:
+  - enforce root-only execution,
+  - resync host Pi config into isolated config dir,
+  - merge managed settings into isolated settings file,
+  - merge managed env + env files into isolated env file/export source,
+  - prepare deterministic project mount aliases under isolated home,
+  - launch Pi through a hardened transient `systemd-run` unit as the isolated user.
+- [ ] Apply hardening policy in execution profile:
+  - `ProtectSystem=strict`, `ProtectHome=yes`, `NoNewPrivileges=yes`,
+  - explicit `ReadWritePaths` limited to state + mounted allowed projects,
+  - bounded runtime PATH and writable tool/cache locations under `stateDir`.
+- [ ] Add assertions for misconfiguration (e.g., empty per-user roots, wrapper enabled without authorized users).
+- [ ] Add a tightly scoped sudoers rule for the runner command only.
+- [ ] Ensure bypass prevention in packaging/PATH behavior when wrapper mode is enabled.
+- [ ] Update docs with new option examples (per-user roots + settings/env merge + HM sync expectations).
 ## Verification
-- Static checks for both repos (module eval/build where appropriate).
-- Home Manager evaluation/switch check in nixos-config.
-- Manual wrapper checks:
-  - Inside a flake project: `pi` resolves via `nix develop -c pi ...` when enabled.
-  - Outside flake project: `pi` runs directly via container exec.
-- Capture exact commands + outputs for report.
+- Static/eval:
+  - `nix flake check`
+  - host config eval/build with new module options.
+- Policy checks:
+  - authorized user in authorized root: succeeds
+  - authorized user outside authorized root: denied
+  - unauthorized user: denied
+- Isolation checks:
+  - runtime identity is the isolated service user (`pi-agent`)
+  - no unintended write access outside `stateDir` + allowed project binds
+  - direct binary bypass unavailable when wrapper mode is enabled
+- Merge checks:
+  - HM-rendered Pi files are present in isolated config dir
+  - Nix-managed settings are merged into the effective isolated `settings.json`
+  - env values from declarative attrs + env files are present in the isolated runtime environment.
 ## Open questions
-- None currently blocking; proceed with conservative default image and host override guidance.
+- None.
+## Resolved decisions
+- Merge precedence is confirmed as:
+  1) synced host Pi config/env,
+  2) Nix-managed settings/env override synced values,
+  3) wrapper/runtime shell env does not implicitly override managed values.
+- Per-user host config source defaults to `.pi/agent` for all users, with optional per-user override support in the policy map.

View File

@@ -0,0 +1,99 @@
# Pi Agent Isolation (two-repo setup)
This guide documents the split setup where:
- `m3ta-nixpkgs` provides reusable module logic.
- `nixos-config` consumes it on specific hosts.
## 1) In `m3ta-nixpkgs`
Use:
- Home Manager module: `coding.agents.pi`
- renders Pi config in user space (default path: `.pi/agent` => `~/.pi/agent`)
- NixOS module: `m3ta.pi-agent`
- dedicated user/group (default `pi-agent`)
- state directory (default `/var/lib/pi-agent`)
- hardened execution via transient `systemd-run`
- host-side wrapper command (default `pi`)
- per-user allowlists via `hostUsers.<name>.projectRoots`
- host config sync into isolated runtime (default source `.pi/agent`)
- managed settings/env merge into isolated runtime
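The per-user allowlist boils down to a prefix test on the caller's working directory. A minimal sketch of that check (the user, roots, and cwd below are illustrative stand-ins, not values read from a real policy map):

```shell
# Hypothetical allowlist check, mirroring the wrapper's cwd policy.
# The roots stand in for hostUsers.<name>.projectRoots after ~ expansion.
cwd="/home/m3tam3re/p/demo"
allowed=0
for root in "/home/m3tam3re/p" "/home/m3tam3re/work/private"; do
  # Trailing slash prevents /home/m3tam3re/p-other matching /home/m3tam3re/p.
  case "$cwd/" in
    "$root/"*) allowed=1 ;;
  esac
done
if [ "$allowed" -eq 1 ]; then echo "allowed"; else echo "denied"; fi
# → allowed
```

Only on a successful match does the wrapper escalate to the privileged runner; any cwd outside the expanded roots is denied before escalation.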
## 2) In consumer repo (`nixos-config`)
### Home Manager side
Keep Pi config rendering enabled for your normal user:
```nix
coding.agents.pi = {
enable = true;
agentsInput = inputs.agents;
path = ".pi/agent";
};
```
### NixOS host side (example: `m3-kratos`)
Enable isolated wrapper execution:
```nix
m3ta.pi-agent = {
enable = true;
stateDir = "/var/lib/pi-agent";
hostUsers = {
m3tam3re = {
projectRoots = ["~/p" "~/work/private"];
# optional; defaults to wrapper.hostConfigPath
configPath = ".pi/agent";
};
};
settings = {
defaultProvider = "anthropic";
defaultModel = "anthropic/claude-sonnet-4";
quietStartup = true;
};
environment = {
PI_TELEMETRY = "0";
};
environmentFiles = [
"/run/secrets/pi-agent.env"
];
wrapper = {
enable = true;
commandName = "pi";
hideDirectBinary = true;
hostConfigPath = ".pi/agent";
};
};
```
## 3) Authorization model
The wrapper uses a tightly scoped sudo rule:
- authorized users may run only the privileged runner command
- with `NOPASSWD`
- no broad `NOPASSWD: ALL`
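Under those constraints, the generated rule should look roughly like the sketch below. The runner path and name (`pi-agent-runner`) are assumptions for illustration; the module's `wrapper` options decide the actual command.

```nix
# Sketch only: a tightly scoped NOPASSWD rule for the runner binary.
security.sudo.extraRules = [
  {
    users = ["m3tam3re"]; # the keys of hostUsers
    commands = [
      {
        # Hypothetical runner path; scoped to this one command, nothing else.
        command = "/run/current-system/sw/bin/pi-agent-runner *";
        options = ["NOPASSWD"];
      }
    ];
  }
];
```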
## 4) Merge behavior
At invocation time, isolated runtime files are built from:
1. Host user Pi config (synced from source path, e.g. `~/.pi/agent`)
2. Nix-managed settings/env (override host values)
3. Environment files (appended after managed env attrs)
This keeps user-authored Pi config available while allowing reproducible Nix overrides.
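As a rough illustration of step 3's ordering for environment values (file names and contents below are made up for the demo; the real module writes under its `stateDir`):

```shell
# Demo of env assembly order: managed attrs are written first, then
# environmentFiles entries are appended after them, as described above.
tmp="$(mktemp -d)"
printf 'PI_TELEMETRY=0\n' > "$tmp/managed.env"     # from `environment` attrs
printf 'PI_API_KEY=dummy\n' > "$tmp/secrets.env"   # an environmentFiles entry
cat "$tmp/managed.env" "$tmp/secrets.env" > "$tmp/runtime.env"
cat "$tmp/runtime.env"
# → PI_TELEMETRY=0
# → PI_API_KEY=dummy
```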
## 5) Migration notes
- If wrapper mode is canonical, remove direct `pi-coding-agent` from user package lists to reduce command-path ambiguity.
- Rebuild host config and test from an allowlisted project path.
- Validate `pi` process identity runs as `pi-agent`.

View File

@@ -157,6 +157,32 @@ m3ta.mem0 = {
 **Documentation**: [mem0 Module](../modules/nixos/mem0.md)
+#### `m3ta.pi-agent`
+Isolated Pi execution with a dedicated system user (`pi-agent` by default),
+a hardened runtime, and a host-side `pi` wrapper command.
+```nix
+m3ta.pi-agent = {
+  enable = true;
+  stateDir = "/var/lib/pi-agent";
+  hostUsers = {
+    m3tam3re = {
+      projectRoots = ["~/p" "~/work/private"];
+      configPath = ".pi/agent"; # optional
+    };
+  };
+  settings.defaultModel = "anthropic/claude-sonnet-4";
+  environment.PI_TELEMETRY = "0";
+  wrapper.commandName = "pi";
+  wrapper.hideDirectBinary = true;
+};
+```
+**Documentation**: [Pi Agent Isolation Guide](./pi-agent-isolation.md)
 ### Home Manager Modules
 #### `m3ta.ports`
@@ -255,6 +281,7 @@ Pi agent deployment from canonical TOML definitions.
 coding.agents.pi = {
   enable = true;
   agentsInput = inputs.agents;
+  path = ".pi/agent"; # default; can be changed
 };
 ```

View File

@@ -72,6 +72,7 @@
   # Individual modules for selective imports
   ports = ./modules/nixos/ports.nix;
   mem0 = ./modules/nixos/mem0.nix;
+  pi-agent = ./modules/nixos/pi-agent.nix;
 };
 # Home Manager modules - for user-level configuration

View File

@@ -119,23 +119,26 @@ coding.agents.claude-code = {
   enable = true;
   agentsInput = inputs.agents;
   modelOverrides = {};
+  externalSkills = [{ src = inputs.skills-anthropic; }];
 };
 ```
-**Options:** `enable`, `agentsInput`, `modelOverrides`
+**Options:** `enable`, `agentsInput`, `modelOverrides`, `externalSkills`
 ### Pi (`coding.agents.pi`)
-Renders `AGENTS.md` + `SYSTEM.md` to `~/.pi/agent/`:
+Renders `AGENTS.md` + `SYSTEM.md` to `~/.pi/agent/` by default:
 ```nix
 coding.agents.pi = {
   enable = true;
   agentsInput = inputs.agents;
+  path = ".pi/agent"; # default, relative to $HOME
+  externalSkills = [{ src = inputs.skills-anthropic; }];
 };
 ```
-**Options:** `enable`, `agentsInput`
+**Options:** `enable`, `path`, `agentsInput`, `modelOverrides`, `externalSkills`, `primaryAgent`, `mcpServers`, `settings`
 ### Project-level usage

View File

@@ -36,6 +36,44 @@ in {
       '';
     };
+    externalSkills = mkOption {
+      type = types.listOf (types.submodule {
+        options = {
+          src = mkOption {
+            type = types.anything;
+            description = "Flake input pointing to a skills repository root.";
+          };
+          skillsDir = mkOption {
+            type = types.str;
+            default = "skills";
+            description = ''
+              Subdirectory inside src that contains skill folders.
+            '';
+          };
+          selectSkills = mkOption {
+            type = types.nullOr (types.listOf types.str);
+            default = null;
+            description = ''
+              List of skill names to cherry-pick from this source.
+              null means include every skill found in skillsDir.
+            '';
+          };
+        };
+      });
+      default = [];
+      description = ''
+        External skill sources passed to mkOpencodeSkills.
+        Each entry maps directly to an element of the externalSkills
+        list accepted by the AGENTS flake's lib.mkOpencodeSkills.
+      '';
+      example = literalExpression ''
+        [
+          { src = inputs.skills-anthropic; selectSkills = [ "claude-api" ]; }
+          { src = inputs.skills-vercel; }
+        ]
+      '';
+    };
     mcpServers = mkOption {
       type = types.attrsOf types.anything;
       default = if mcpCfg != null then mcpCfg.servers else {};
@@ -82,6 +120,21 @@
       source = "${rendered}/.claude/agents";
     };
+    # Skills (merged from personal AGENTS repo + optional external skills)
+    home.file.".claude/skills" = mkIf (cfg.agentsInput != null) {
+      source = cfg.agentsInput.lib.mkOpencodeSkills {
+        inherit pkgs;
+        customSkills = "${cfg.agentsInput}/skills";
+        externalSkills =
+          map (
+            entry:
+              {inherit (entry) src skillsDir;}
+              // optionalAttrs (entry.selectSkills != null) {inherit (entry) selectSkills;}
+          )
+          cfg.externalSkills;
+      };
+    };
     # Rendered settings.json with permissions + MCP servers
     home.file.".claude/settings.json" = mkIf (settingsJson != null) {
       source = "${settingsJson}";

View File

@@ -7,124 +7,32 @@
 with lib; let
   cfg = config.coding.agents.pi;
   mcpCfg = config.programs.mcp or null;
-  hasPiPackage = pkgs ? pi;
-  defaultPiImageArchive =
-    if hasPiPackage
-    then
-      pkgs.dockerTools.buildLayeredImage {
-        name = "pi-agent";
-        tag = "latest";
-        contents = with pkgs; [
-          bashInteractive
-          bun
-          cacert
-          coreutils
-          findutils
-          git
-          gnugrep
-          gnused
-          nix
-          nodejs
-          pi
-        ];
-        config = {
-          Env = [
-            "PATH=/bin:/usr/bin"
-            "NIX_REMOTE=daemon"
-            "SSL_CERT_FILE=${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt"
-          ];
-          WorkingDir = "/tmp";
-          Cmd = ["${pkgs.coreutils}/bin/sleep" "infinity"];
-        };
-      }
-    else null;
 in {
   options.coding.agents.pi = {
     enable = mkEnableOption "Pi agent management via canonical agent.toml definitions";
-    container = mkOption {
-      description = "Run Pi through a rootless Podman container while keeping a native host UX.";
-      default = {};
-      type = types.submodule {
-        options = {
-          enable = mkEnableOption "Containerized Pi wrapper";
-          name = mkOption {
-            type = types.str;
-            default = "pi-agent";
-            description = "Container name used by the Pi wrapper.";
-          };
-          image = mkOption {
-            type = types.str;
-            default = if hasPiPackage then "pi-agent:latest" else "docker.io/nixos/nix:latest";
-            description = ''
-              Podman image to run for Pi.
-              Defaults to a local declarative Pi-ready image when `pkgs.pi` exists,
-              otherwise falls back to docker.io/nixos/nix:latest.
-            '';
-          };
-          imageArchive = mkOption {
-            type = types.nullOr types.path;
-            default = defaultPiImageArchive;
-            description = ''
-              Optional OCI/Docker archive path to load into Podman when `image`
-              is missing locally. By default, a Pi-ready local image archive is
-              generated when `pkgs.pi` is available.
-            '';
-          };
-          projectRoots = mkOption {
-            type = types.listOf types.str;
-            default = [];
-            description = ''
-              Allowlisted absolute host roots that may be mounted into the container.
-              Wrapper exits with a clear error when cwd is outside these roots.
-            '';
-            example = ["/home/m3tam3re/p"];
-          };
-          autoStart = mkOption {
-            type = types.bool;
-            default = true;
-            description = "Automatically start container when wrapper is invoked and it is not running.";
-          };
-          autoNixDevelop = mkOption {
-            type = types.bool;
-            default = false;
-            description = ''
-              If true and cwd contains flake.nix, run Pi as:
-              nix develop -c pi ...
-              inside the container.
-            '';
-          };
-          extraRunArgs = mkOption {
-            type = types.listOf types.str;
-            default = [];
-            description = "Additional Podman create args appended after safe defaults.";
-          };
-          extraEnv = mkOption {
-            type = types.attrsOf types.str;
-            default = {};
-            description = "Extra environment variables passed to the container.";
-          };
-        };
-      };
-    };
+    path = mkOption {
+      type = types.str;
+      default = ".pi/agent";
+      description = ''
+        Relative path (inside the Home Manager user's home) where Pi agent
+        config should be materialized.
+        Defaults to `.pi/agent`, i.e. `~/.pi/agent`.
+      '';
+      example = ".config/pi/agent";
+    };
     mcpServers = mkOption {
       type = types.attrsOf types.anything;
-      default = if mcpCfg != null then mcpCfg.servers else {};
+      default =
+        if mcpCfg != null
+        then mcpCfg.servers
+        else {};
       defaultText = literalExpression "config.programs.mcp.servers";
       description = ''
         MCP server configurations for Pi (pi-mcp-adapter).
-        Written to ~/.pi/agent/mcp.json.
+        Written to `${cfg.path}/mcp.json`.
         Automatically inherits from config.programs.mcp.servers.
       '';
     };
@@ -149,6 +57,44 @@
       '';
     };
+    externalSkills = mkOption {
+      type = types.listOf (types.submodule {
+        options = {
+          src = mkOption {
+            type = types.anything;
+            description = "Flake input pointing to a skills repository root.";
+          };
+          skillsDir = mkOption {
+            type = types.str;
+            default = "skills";
+            description = ''
+              Subdirectory inside src that contains skill folders.
+            '';
+          };
+          selectSkills = mkOption {
+            type = types.nullOr (types.listOf types.str);
+            default = null;
+            description = ''
+              List of skill names to cherry-pick from this source.
+              null means include every skill found in skillsDir.
+            '';
+          };
+        };
+      });
+      default = [];
+      description = ''
+        External skill sources passed to mkOpencodeSkills.
+        Each entry maps directly to an element of the externalSkills
+        list accepted by the AGENTS flake's lib.mkOpencodeSkills.
+      '';
+      example = literalExpression ''
+        [
+          { src = inputs.skills-anthropic; selectSkills = [ "claude-api" ]; }
+          { src = inputs.skills-vercel; }
+        ]
+      '';
+    };
     primaryAgent = mkOption {
       type = types.nullOr types.str;
       default = null;
@@ -167,7 +113,7 @@
       default = [];
       description = ''
         Pi packages to install (npm:, git:, or local paths).
-        These are written to ~/.pi/agent/settings.json.
+        These are written to `${cfg.path}/settings.json`.
       '';
     };
@@ -255,7 +201,7 @@
       };
       default = {};
       description = ''
-        Pi settings written to ~/.pi/agent/settings.json.
+        Pi settings written to `${cfg.path}/settings.json`.
         Only non-null values are included in the generated JSON.
         See pi docs/settings.md for all options.
       '';
@@ -263,6 +209,8 @@
   };
   config = mkIf cfg.enable (let
+    basePath = lib.removeSuffix "/" cfg.path;
+
     # Build settings.json by filtering out null values recursively
     filterNulls = attrs:
       lib.filterAttrs (_: v: v != null) (
@@ -271,182 +219,15 @@
         then let
           filtered = filterNulls v;
         in
-          if filtered == {} then null else filtered
-        else v) attrs
+          if filtered == {}
+          then null
+          else filtered
+        else v)
+        attrs
       );
     piSettings = filterNulls cfg.settings;
-    projectRoots = map toString cfg.container.projectRoots;
-    projectRootsShell = concatStringsSep " " (map escapeShellArg projectRoots);
-    extraRunArgsShell = concatStringsSep " " (map escapeShellArg cfg.container.extraRunArgs);
-    extraEnvPairs = map (k: "${k}=${cfg.container.extraEnv.${k}}") (builtins.attrNames cfg.container.extraEnv);
-    extraEnvShell = concatStringsSep " " (map escapeShellArg extraEnvPairs);
-    hostPiDir = "${config.home.homeDirectory}/.pi";
-    hostPiDirShell = escapeShellArg hostPiDir;
-    imageArchiveShell =
-      if cfg.container.imageArchive != null
-      then escapeShellArg (toString cfg.container.imageArchive)
-      else "";
-    piWrapper = pkgs.writeShellScriptBin "pi" ''
-      set -euo pipefail
-      PODMAN="${pkgs.podman}/bin/podman"
-      REALPATH="${pkgs.coreutils}/bin/realpath"
-      CONTAINER_NAME=${escapeShellArg cfg.container.name}
-      IMAGE=${escapeShellArg cfg.container.image}
-      IMAGE_ARCHIVE=${imageArchiveShell}
-      AUTO_START=${if cfg.container.autoStart then "1" else "0"}
-      AUTO_NIX_DEVELOP=${if cfg.container.autoNixDevelop then "1" else "0"}
-      HOST_PI_DIR=${hostPiDirShell}
-      PROJECT_ROOTS=(${projectRootsShell})
-      EXTRA_RUN_ARGS=(${extraRunArgsShell})
-      EXTRA_ENV_VARS=(${extraEnvShell})
-      err() {
-        printf "pi-wrapper: %s\n" "$1" >&2
-        exit 1
-      }
-      if [ "''${#PROJECT_ROOTS[@]}" -eq 0 ]; then
-        err "No allowed projectRoots configured. Set coding.agents.pi.container.projectRoots."
-      fi
-      if ! command -v "$PODMAN" >/dev/null 2>&1; then
-        err "podman binary not found at $PODMAN"
-      fi
-      CWD="$($REALPATH -m "$PWD")"
-      cwd_allowed=0
-      NORMALIZED_ROOTS=()
-      for root in "''${PROJECT_ROOTS[@]}"; do
-        norm_root="$($REALPATH -m "$root")"
-        NORMALIZED_ROOTS+=("$norm_root")
-        case "$CWD/" in
-          "$norm_root/"*)
-            cwd_allowed=1
-            ;;
-        esac
-      done
-      if [ "$cwd_allowed" -ne 1 ]; then
-        {
-          printf "pi-wrapper: cwd '%s' is outside allowed projectRoots.\n" "$CWD"
-          printf "Allowed roots:\n"
-          for root in "''${NORMALIZED_ROOTS[@]}"; do
-            printf "  - %s\n" "$root"
-          done
-        } >&2
-        exit 1
-      fi
-      tty_args=()
-      if [ -t 0 ] && [ -t 1 ]; then
-        tty_args=(-it)
-      fi
-      ensure_image_available() {
-        if [ -n "$IMAGE_ARCHIVE" ] && [ -f "$IMAGE_ARCHIVE" ]; then
-          "$PODMAN" load -i "$IMAGE_ARCHIVE" >/dev/null
-        fi
-        if ! "$PODMAN" image exists "$IMAGE"; then
-          err "Container image '$IMAGE' is not available and no valid imageArchive was provided."
-        fi
-      }
-      create_container() {
-        mount_args=()
-        for root in "''${NORMALIZED_ROOTS[@]}"; do
-          mount_args+=("-v" "$root:$root:rw")
-        done
-        if [ ! -S /nix/var/nix/daemon-socket/socket ]; then
-          err "Host Nix daemon socket not found at /nix/var/nix/daemon-socket/socket"
-        fi
-        mount_args+=("-v" "/nix/var/nix/daemon-socket/socket:/nix/var/nix/daemon-socket/socket:rw")
-        mkdir -p "$HOST_PI_DIR"
-        mount_args+=("-v" "$HOST_PI_DIR:/tmp/.pi:rw")
-        if [ -d /nix/store ]; then
-          mount_args+=("-v" "/nix/store:/nix/store:ro")
-        fi
-        if [ -e /etc/nix/nix.conf ]; then
-          mount_args+=("-v" "/etc/nix/nix.conf:/etc/nix/nix.conf:ro")
-        fi
-        if [ -d /etc/ssl/certs ]; then
-          mount_args+=("-v" "/etc/ssl/certs:/etc/ssl/certs:ro")
-        fi
-        if [ -d /etc/pki ]; then
-          mount_args+=("-v" "/etc/pki:/etc/pki:ro")
-        fi
-        env_args=()
-        for kv in "''${EXTRA_ENV_VARS[@]}"; do
-          env_args+=("--env" "$kv")
-        done
-        "$PODMAN" create \
-          --name "$CONTAINER_NAME" \
-          --hostname "$CONTAINER_NAME" \
-          --userns keep-id \
-          --user "$(${pkgs.coreutils}/bin/id -u):$(${pkgs.coreutils}/bin/id -g)" \
-          --security-opt no-new-privileges \
-          --workdir /tmp \
-          --tmpfs /tmp:rw,nodev,nosuid \
-          --env HOME=/tmp \
-          --env NIX_REMOTE=daemon \
-          --env NPM_CONFIG_PREFIX=/tmp/.npm-global \
-          --env npm_config_prefix=/tmp/.npm-global \
-          --env NPM_CONFIG_CACHE=/tmp/.npm \
-          --env npm_config_cache=/tmp/.npm \
-          --env PATH=/tmp/.npm-global/bin:/bin:/usr/bin \
-          "''${mount_args[@]}" \
-          "''${env_args[@]}" \
-          "''${EXTRA_RUN_ARGS[@]}" \
-          "$IMAGE" \
-          sleep infinity >/dev/null
-      }
-      ensure_container_running() {
-        if ! "$PODMAN" container exists "$CONTAINER_NAME"; then
-          ensure_image_available
-          create_container
-        fi
-        running="$($PODMAN inspect -f '{{.State.Running}}' "$CONTAINER_NAME" 2>/dev/null || true)"
-        if [ "$running" != "true" ]; then
-          if [ "$AUTO_START" = "1" ]; then
-            "$PODMAN" start "$CONTAINER_NAME" >/dev/null
-          else
-            err "Container '$CONTAINER_NAME' is not running and autoStart=false. Start it manually with: podman start $CONTAINER_NAME"
-          fi
-        fi
-      }
-      ensure_container_running
-      if [ "$AUTO_NIX_DEVELOP" = "1" ] && [ -f "$CWD/flake.nix" ]; then
-        exec "$PODMAN" exec "''${tty_args[@]}" --workdir "$CWD" "$CONTAINER_NAME" nix develop -c pi "$@"
-      fi
-      if "$PODMAN" exec --workdir "$CWD" "$CONTAINER_NAME" sh -lc 'command -v pi >/dev/null 2>&1'; then
-        exec "$PODMAN" exec "''${tty_args[@]}" --workdir "$CWD" "$CONTAINER_NAME" pi "$@"
-      fi
-      err "Container '$CONTAINER_NAME' does not have 'pi' in PATH (image: $IMAGE). Use a Pi-ready image or run from a flake project with autoNixDevelop=true."
-    '';
     # Rendered agents (only computed when agentsInput is set)
     rendered =
       if cfg.agentsInput != null
@@ -462,87 +243,56 @@
     # Dynamic home.file entries for agent .md files
     agentFiles =
       if cfg.agentsInput != null
-      then
-        let
+      then let
        agentNames = builtins.attrNames cfg.agentsInput.lib.loadAgents;
       in
         builtins.listToAttrs (
           map (name: {
-            name = ".pi/agent/agents/${name}.md";
-            value = {text = builtins.readFile "${rendered}/agents/${name}.md";};
+            name = "${basePath}/agents/${name}.md";
+            value = {source = "${rendered}/agents/${name}.md";};
           })
           agentNames
         )
       else {};
-    skillsSource =
-      if cfg.agentsInput != null
-      then
-        cfg.agentsInput.lib.mkOpencodeSkills {
-          inherit pkgs;
-          customSkills = "${cfg.agentsInput}/skills";
-        }
-      else null;
   in {
-    assertions =
-      [
-        {
-          assertion = cfg.container.enable || hasPiPackage;
-          message = "coding.agents.pi.enable requires pkgs.pi when container mode is disabled.";
-        }
-      ]
-      ++ optional cfg.container.enable {
-        assertion = cfg.container.projectRoots != [];
-        message = "coding.agents.pi.container.projectRoots must contain at least one absolute path when container mode is enabled.";
-      }
-      ++ optional cfg.container.enable {
-        assertion = all (path: hasPrefix "/" (toString path)) cfg.container.projectRoots;
-        message = "coding.agents.pi.container.projectRoots entries must be absolute paths.";
-      };
-    home.packages =
-      (optional cfg.container.enable piWrapper)
-      ++ (optional (!cfg.container.enable && hasPiPackage) pkgs.pi);
     home.file = mkMerge [
-      # ── MCP servers from programs.mcp → ~/.pi/agent/mcp.json ───────
+      # ── MCP servers from programs.mcp → ${cfg.path}/mcp.json ───────
       (mkIf (cfg.mcpServers != {}) {
-        ".pi/agent/mcp.json".text = builtins.toJSON {mcpServers = cfg.mcpServers;};
+        "${basePath}/mcp.json".text = builtins.toJSON {mcpServers = cfg.mcpServers;};
       })
-      # ── ~/.pi/agent/settings.json ──────────────────────────────────
+      # ── ${cfg.path}/settings.json ──────────────────────────────────
       {
-        ".pi/agent/settings.json".text = builtins.toJSON piSettings;
+        "${basePath}/settings.json".text = builtins.toJSON piSettings;
       }
       # ── AGENTS.md — agent descriptions and specialist listing ──────
       (mkIf (cfg.agentsInput != null) {
-        ".pi/agent/AGENTS.md".text = builtins.readFile "${rendered}/AGENTS.md";
+        "${basePath}/AGENTS.md".source = "${rendered}/AGENTS.md";
       })
       # ── SYSTEM.md — primary agent's system prompt ──────────────────
       (mkIf (cfg.agentsInput != null) {
-        ".pi/agent/SYSTEM.md".text = builtins.readFile "${rendered}/SYSTEM.md";
+        "${basePath}/SYSTEM.md".source = "${rendered}/SYSTEM.md";
       })
       # ── Agents — pi-subagents .md files ────────────────────────────
       agentFiles
-      # ── Skills symlinked from AGENTS repo (non-container mode) ─────
-      (mkIf (cfg.agentsInput != null && !cfg.container.enable) {
-        ".pi/agent/skills".source = skillsSource;
+      # ── Skills symlinked from AGENTS repo ──────────────────────────
+      (mkIf (cfg.agentsInput != null) {
+        "${basePath}/skills".source = cfg.agentsInput.lib.mkOpencodeSkills {
+          inherit pkgs;
+          customSkills = "${cfg.agentsInput}/skills";
+          externalSkills =
+            map (
+              entry:
+                {inherit (entry) src skillsDir;}
+                // optionalAttrs (entry.selectSkills != null) {inherit (entry) selectSkills;}
+            )
+            cfg.externalSkills;
+        };
       })
     ];
-    home.activation.piMaterializeSkills = mkIf (cfg.container.enable && cfg.agentsInput != null) (
-      lib.hm.dag.entryAfter ["writeBoundary"] ''
-        skillsSrc=${escapeShellArg "${skillsSource}"}
-        skillsDst=${escapeShellArg "${config.home.homeDirectory}/.pi/agent/skills"}
-        ${pkgs.coreutils}/bin/rm -rf "$skillsDst"
-        ${pkgs.coreutils}/bin/mkdir -p "$skillsDst"
-        ${pkgs.coreutils}/bin/cp -aL "$skillsSrc"/. "$skillsDst"/
-      ''
-    );
   });
 }

View File

@@ -13,6 +13,7 @@
   imports = [
     ./mem0.nix
     ./ports.nix
+    ./pi-agent.nix
     # Example: ./my-service.nix
     # Add more module files here as you create them
   ];

modules/nixos/pi-agent.nix (new file, 707 lines)
View File

@@ -0,0 +1,707 @@
# NixOS Module for isolated Pi execution (fresh design)
#
# Goals:
# - Dedicated isolated runtime identity (pi-agent user/group)
# - Host UX via `pi` wrapper command
# - Per-host-user project allowlists (different roots per user)
# - No container mode
# - Merge user Pi config + Nix-managed settings/env into isolated runtime
{
config,
lib,
pkgs,
...
}:
with lib; let
cfg = config.m3ta.pi-agent;
hostUserNames = attrNames cfg.hostUsers;
managedSettingsFile = pkgs.writeText "pi-agent-managed-settings.json" (builtins.toJSON cfg.settings);
managedEnvFile =
pkgs.writeText "pi-agent-managed.env"
(concatStringsSep "\n" (mapAttrsToList (k: v: "${k}=${v}") cfg.environment));
runtimePath = concatStringsSep ":" (
[
"${cfg.package}/bin"
"${pkgs.nodejs}/bin"
"${pkgs.git}/bin"
"${pkgs.coreutils}/bin"
"${pkgs.findutils}/bin"
"${pkgs.gnugrep}/bin"
"${pkgs.gnused}/bin"
"${pkgs.util-linux}/bin"
"/run/current-system/sw/bin"
]
++ map (p: "${p}/bin") cfg.extraPackages
);
userPolicyCase = concatStringsSep "\n" (
mapAttrsToList (
user: userCfg: ''
${escapeShellArg user})
USER_CONFIG_PATH=${escapeShellArg (if userCfg.configPath != null then userCfg.configPath else cfg.wrapper.hostConfigPath)}
USER_ROOTS=(${concatStringsSep " " (map escapeShellArg userCfg.projectRoots)})
;;
''
)
cfg.hostUsers
);
runner = pkgs.writeShellScriptBin cfg.wrapper.runnerName ''
set -euo pipefail
if [ "$(id -u)" -ne 0 ]; then
echo "${cfg.wrapper.runnerName} must run as root" >&2
exit 1
fi
if [ "$#" -lt 2 ]; then
echo "Usage: ${cfg.wrapper.runnerName} <invoking-user> <cwd> [pi-args...]" >&2
exit 2
fi
invoking_user="$1"
shift
cwd="$1"
shift
resolve_user_policy() {
local user="$1"
USER_CONFIG_PATH=""
USER_ROOTS=()
case "$user" in
${userPolicyCase}
*)
return 1
;;
esac
return 0
}
if ! resolve_user_policy "$invoking_user"; then
echo "User '$invoking_user' is not allowed to use ${cfg.wrapper.commandName}" >&2
exit 1
fi
user_home="$(eval echo "~$invoking_user")"
if [ -z "$user_home" ] || [ "$user_home" = "~$invoking_user" ]; then
echo "Unable to determine home directory for user '$invoking_user'" >&2
exit 1
fi
expand_home_path() {
local input="$1"
if [ "$input" = "~" ]; then
printf '%s\n' "$user_home"
elif ${pkgs.gnugrep}/bin/grep -q '^~/' <<<"$input"; then
printf '%s\n' "$user_home/''${input:2}"
else
printf '%s\n' "$input"
fi
}
cwd_real="$(${pkgs.coreutils}/bin/realpath -m "$cwd")"
resolved_roots=()
skipped_roots=()
is_allowed_cwd=0
for configured_root in "''${USER_ROOTS[@]}"; do
expanded_root="$(expand_home_path "$configured_root")"
resolved_root="$(${pkgs.coreutils}/bin/realpath -m "$expanded_root")"
if [ ! -d "$resolved_root" ]; then
skipped_roots+=("$resolved_root")
continue
fi
resolved_roots+=("$resolved_root")
case "$cwd_real/" in
"$resolved_root"/*)
is_allowed_cwd=1
;;
esac
done
if [ "''${#resolved_roots[@]}" -eq 0 ]; then
echo "Denied: no valid existing project roots are configured for user '$invoking_user'." >&2
if [ "''${#skipped_roots[@]}" -gt 0 ]; then
echo "Configured but missing roots:" >&2
for root in "''${skipped_roots[@]}"; do
echo " - $root" >&2
done
fi
exit 1
fi
if [ "$is_allowed_cwd" -ne 1 ]; then
echo "Denied: '$cwd_real' is outside allowed project roots for user '$invoking_user'." >&2
echo "Allowed roots:" >&2
for root in "''${resolved_roots[@]}"; do
echo " - $root" >&2
done
exit 1
fi
${pkgs.coreutils}/bin/install -d -m 0750 -o ${escapeShellArg cfg.user} -g ${escapeShellArg cfg.group} \
${escapeShellArg cfg.stateDir} \
${escapeShellArg "${cfg.stateDir}/.pi"} \
${escapeShellArg "${cfg.stateDir}/.pi/agent"} \
${escapeShellArg "${cfg.stateDir}/.project-mounts"} \
${escapeShellArg "${cfg.stateDir}/projects"} \
${escapeShellArg "${cfg.stateDir}/.npm"} \
${escapeShellArg "${cfg.stateDir}/.npm-global"} \
${escapeShellArg "${cfg.stateDir}/.npm-global/bin"} \
${escapeShellArg "${cfg.stateDir}/.npm-global/lib"}
config_source="$USER_CONFIG_PATH"
if ${pkgs.gnugrep}/bin/grep -q '^/' <<<"$config_source"; then
source_dir="$config_source"
else
source_dir="$(expand_home_path "$config_source")"
fi
if [ "${if cfg.wrapper.syncConfigFromHost then "1" else "0"}" = "1" ] && [ -d "$source_dir" ]; then
${pkgs.rsync}/bin/rsync -a --delete "$source_dir/" ${escapeShellArg "${cfg.stateDir}/.pi/agent/"}
${pkgs.coreutils}/bin/chown -R ${escapeShellArg "${cfg.user}:${cfg.group}"} ${escapeShellArg "${cfg.stateDir}/.pi/agent"}
fi
# Merge host settings.json (if any) with Nix-managed settings.
# Precedence: host settings first, Nix-managed keys override recursively.
settings_target=${escapeShellArg "${cfg.stateDir}/.pi/agent/settings.json"}
${pkgs.python3}/bin/python3 - "$settings_target" ${escapeShellArg managedSettingsFile} <<'PY_PI_SETTINGS_MERGE'
import json
import os
import sys
def load_obj(path):
if not os.path.exists(path):
return {}
try:
with open(path, "r", encoding="utf-8") as f:
data = json.load(f)
return data if isinstance(data, dict) else {}
except Exception:
return {}
def deep_merge(base, override):
if isinstance(base, dict) and isinstance(override, dict):
out = dict(base)
for key, value in override.items():
out[key] = deep_merge(out.get(key), value)
return out
return override
def main():
target = sys.argv[1]
managed = sys.argv[2]
base_obj = load_obj(target)
managed_obj = load_obj(managed)
merged = deep_merge(base_obj, managed_obj)
os.makedirs(os.path.dirname(target), exist_ok=True)
tmp = f"{target}.tmp"
with open(tmp, "w", encoding="utf-8") as f:
json.dump(merged, f, indent=2, sort_keys=True)
f.write("\n")
os.replace(tmp, target)
if __name__ == "__main__":
main()
PY_PI_SETTINGS_MERGE
${pkgs.coreutils}/bin/chown ${escapeShellArg "${cfg.user}:${cfg.group}"} "$settings_target"
${pkgs.coreutils}/bin/chmod 0640 "$settings_target"
# Merge environment into isolated .env with precedence:
# 1) synced host env (source_dir/.env)
# 2) Nix-managed environment attrset
# 3) Nix-managed environmentFiles (appended in declaration order)
env_target=${escapeShellArg "${cfg.stateDir}/.pi/.env"}
${pkgs.coreutils}/bin/install -o ${escapeShellArg cfg.user} -g ${escapeShellArg cfg.group} -m 0640 /dev/null "$env_target"
if [ -f "$source_dir/.env" ]; then
${pkgs.coreutils}/bin/cat "$source_dir/.env" >> "$env_target"
printf '\n' >> "$env_target"
fi
if [ -f ${escapeShellArg managedEnvFile} ]; then
${pkgs.coreutils}/bin/cat ${escapeShellArg managedEnvFile} >> "$env_target"
printf '\n' >> "$env_target"
fi
${concatStringsSep "\n" (map (f: ''
if [ -f ${escapeShellArg f} ]; then
${pkgs.coreutils}/bin/cat ${escapeShellArg f} >> "$env_target"
printf '\n' >> "$env_target"
fi
'') cfg.environmentFiles)}
${pkgs.coreutils}/bin/chown ${escapeShellArg "${cfg.user}:${cfg.group}"} "$env_target"
${pkgs.coreutils}/bin/chmod 0640 "$env_target"
npm_prefix=${escapeShellArg "${cfg.stateDir}/.npm-global"}
runtime_path=${escapeShellArg runtimePath}
project_mount_dir=${escapeShellArg "${cfg.stateDir}/.project-mounts"}
project_links_dir=${escapeShellArg "${cfg.stateDir}/projects"}
project_bind_pairs=()
matched_root=""
matched_mount=""
project_index=0
for root in "''${resolved_roots[@]}"; do
if [ ! -d "$root" ]; then
continue
fi
root_slug="$(printf '%s' "$root" | ${pkgs.gnused}/bin/sed 's#^/##; s#/#-#g; s#-\{2,\}#-#g; s#-$##; s#^$#root#')"
root_slug="''${project_index}-''${root_slug}"
project_index=$((project_index + 1))
mount_point="''${project_mount_dir}/''${root_slug}"
link_path="''${project_links_dir}/''${root_slug}"
${pkgs.coreutils}/bin/install -d -m 0750 -o ${escapeShellArg cfg.user} -g ${escapeShellArg cfg.group} "$mount_point"
${pkgs.coreutils}/bin/ln -sfn "$mount_point" "$link_path"
project_bind_pairs+=("$root:$mount_point")
case "$cwd_real/" in
"$root"/*)
if [ -z "$matched_root" ] || [ "''${#root}" -gt "''${#matched_root}" ]; then
matched_root="$root"
matched_mount="$mount_point"
fi
;;
esac
done
if [ -z "$matched_root" ]; then
echo "Failed to map cwd '$cwd_real' to an allowed root." >&2
exit 1
fi
if [ "$cwd_real" = "$matched_root" ]; then
mapped_cwd="$matched_mount"
else
rel_path="''${cwd_real#"$matched_root/"}"
mapped_cwd="$matched_mount/$rel_path"
fi
pi_bin=${escapeShellArg "${cfg.package}/bin/${cfg.binaryName}"}
if [ ! -x "$pi_bin" ]; then
for candidate in pi pi-agent; do
alt=${escapeShellArg "${cfg.package}/bin"}/$candidate
if [ -x "$alt" ]; then
pi_bin="$alt"
break
fi
done
fi
if [ ! -x "$pi_bin" ]; then
echo "Pi binary not found or not executable: $pi_bin" >&2
echo "Available executables in ${cfg.package}/bin:" >&2
${pkgs.coreutils}/bin/ls -1 ${escapeShellArg "${cfg.package}/bin"} >&2 || true
exit 127
fi
cmd=(
${pkgs.systemd}/bin/systemd-run
--collect
--wait
--pty
--service-type=exec
-p User=${cfg.user}
-p Group=${cfg.group}
-p WorkingDirectory="$mapped_cwd"
-p NoNewPrivileges=yes
-p PrivateTmp=yes
-p ProtectSystem=strict
-p ProtectHome=false
-p ProtectControlGroups=yes
-p ProtectKernelTunables=yes
-p ProtectKernelModules=yes
-p RestrictSUIDSGID=yes
-p LockPersonality=yes
-p RestrictRealtime=yes
-p RestrictNamespaces=yes
-p MemoryDenyWriteExecute=no
-p UMask=0077
-p ReadWritePaths=${cfg.stateDir}
-p EnvironmentFile=${cfg.stateDir}/.pi/.env
-E HOME=${cfg.stateDir}
-E PI_HOME=${cfg.stateDir}/.pi
-E MESSAGING_CWD="$mapped_cwd"
-E PATH="$runtime_path"
-E NPM_CONFIG_CACHE=${cfg.stateDir}/.npm
-E NPM_CONFIG_PREFIX="$npm_prefix"
-E PI_AGENT_INVOKING_USER="$invoking_user"
)
# Only mark existing top-level paths inaccessible; systemd fails namespace
# setup if InaccessiblePaths points to a non-existent path on this host.
for p in /home /root /mnt /media /srv; do
if [ -e "$p" ]; then
cmd+=( -p "InaccessiblePaths=$p" )
fi
done
for pair in "''${project_bind_pairs[@]}"; do
src="''${pair%%:*}"
dst="''${pair#*:}"
cmd+=( -p "BindPaths=$src:$dst" )
done
${concatStringsSep "\n" (mapAttrsToList (name: value: ''cmd+=( -E ${escapeShellArg "${name}=${value}"} )'') cfg.wrapper.extraEnvironment)}
cmd+=( "$pi_bin" )
${concatStringsSep "\n" (map (arg: ''cmd+=( ${escapeShellArg arg} )'') cfg.wrapper.extraRunArgs)}
cmd+=( "$@" )
exec "''${cmd[@]}"
'';
wrapper = pkgs.writeShellScriptBin cfg.wrapper.commandName ''
set -euo pipefail
user_name="$(id -un)"
user_home="$(eval echo "~$user_name")"
if [ -z "$user_home" ] || [ "$user_home" = "~$user_name" ]; then
user_home="$HOME"
fi
resolve_user_policy() {
local user="$1"
USER_ROOTS=()
case "$user" in
${concatStringsSep "\n" (
mapAttrsToList (
user: userCfg: ''
${escapeShellArg user})
USER_ROOTS=(${concatStringsSep " " (map escapeShellArg userCfg.projectRoots)})
;;
''
)
cfg.hostUsers
)}
*)
return 1
;;
esac
return 0
}
if ! resolve_user_policy "$user_name"; then
echo "User '$user_name' is not allowed to use ${cfg.wrapper.commandName}" >&2
exit 1
fi
expand_home_path() {
local input="$1"
if [ "$input" = "~" ]; then
printf '%s\n' "$user_home"
elif ${pkgs.gnugrep}/bin/grep -q '^~/' <<<"$input"; then
printf '%s\n' "$user_home/''${input:2}"
else
printf '%s\n' "$input"
fi
}
cwd_real="$(${pkgs.coreutils}/bin/realpath -m "$PWD")"
is_allowed_cwd=0
resolved_roots=()
skipped_roots=()
for configured_root in "''${USER_ROOTS[@]}"; do
expanded_root="$(expand_home_path "$configured_root")"
resolved_root="$(${pkgs.coreutils}/bin/realpath -m "$expanded_root")"
if [ ! -d "$resolved_root" ]; then
skipped_roots+=("$resolved_root")
continue
fi
resolved_roots+=("$resolved_root")
case "$cwd_real/" in
"$resolved_root"/*)
is_allowed_cwd=1
;;
esac
done
if [ "''${#resolved_roots[@]}" -eq 0 ]; then
echo "Denied: no valid existing project roots are configured for user '$user_name'." >&2
if [ "''${#skipped_roots[@]}" -gt 0 ]; then
echo "Configured but missing roots:" >&2
for root in "''${skipped_roots[@]}"; do
echo " - $root" >&2
done
fi
exit 1
fi
if [ "$is_allowed_cwd" -ne 1 ]; then
echo "Denied: '$cwd_real' is outside allowed project roots for user '$user_name'." >&2
echo "Allowed roots:" >&2
for root in "''${resolved_roots[@]}"; do
echo " - $root" >&2
done
exit 1
fi
exec /run/wrappers/bin/sudo --non-interactive ${runner}/bin/${cfg.wrapper.runnerName} "$user_name" "$cwd_real" "$@"
'';
in {
options.m3ta.pi-agent = {
enable = mkEnableOption "isolated Pi execution with dedicated system user and policy-enforced wrapper";
package = mkOption {
type = types.package;
default = pkgs.pi-coding-agent;
defaultText = literalExpression "pkgs.pi-coding-agent";
description = "Pi package providing the executable used in isolated runtime.";
};
binaryName = mkOption {
type = types.str;
default = "pi-agent";
description = "Preferred executable name inside `${cfg.package}/bin` (falls back to pi/pi-agent auto-detection).";
example = "pi";
};
user = mkOption {
type = types.str;
default = "pi-agent";
description = "System user that executes Pi in isolated mode.";
};
group = mkOption {
type = types.str;
default = "pi-agent";
description = "System group for the isolated Pi user.";
};
stateDir = mkOption {
type = types.str;
default = "/var/lib/pi-agent";
description = "Writable state/home directory for isolated Pi runtime.";
};
createUser = mkOption {
type = types.bool;
default = true;
description = "Whether to create the dedicated Pi user/group automatically.";
};
hostUsers = mkOption {
type = types.attrsOf (types.submodule {
options = {
projectRoots = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Allowed project roots for this host user.
`~` and `~/...` are expanded relative to that host user's home.
'';
example = ["~/p" "~/work/client-a"];
};
configPath = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Optional host path for this user's Pi config source. If null,
wrapper.hostConfigPath is used. Relative paths resolve from the
host user's home.
'';
example = ".pi/agent";
};
};
});
default = {};
description = ''
Per-host-user policy map. Keys are host usernames.
Each user defines their own allowed project roots and optional config source.
'';
example = literalExpression ''
{
m3tam3re = {
projectRoots = [ "~/p" "~/src/private" ];
configPath = ".pi/agent";
};
teammate = {
projectRoots = [ "~/projects" ];
};
}
'';
};
settings = mkOption {
type = types.attrsOf types.anything;
default = {};
description = ''
Nix-managed Pi settings merged into isolated `${cfg.stateDir}/.pi/agent/settings.json`.
Merge precedence: synced host settings first, Nix-managed values override recursively.
'';
example = literalExpression ''
{
defaultModel = "anthropic/claude-sonnet-4";
defaultProvider = "anthropic";
quietStartup = true;
}
'';
};
environment = mkOption {
type = types.attrsOf types.str;
default = {};
description = ''
Non-secret Nix-managed environment variables appended into isolated
`${cfg.stateDir}/.pi/.env` after synced host values.
'';
};
environmentFiles = mkOption {
type = types.listOf types.str;
default = [];
description = ''
Paths to env files (secrets/tokens) appended to isolated `${cfg.stateDir}/.pi/.env`
after `environment` entries.
'';
};
extraPackages = mkOption {
type = types.listOf types.package;
default = [];
description = "Extra packages added to isolated runtime PATH.";
};
wrapper = {
enable = mkOption {
type = types.bool;
default = true;
description = "Enable host-side wrapper command that enforces policy and runs isolated Pi.";
};
commandName = mkOption {
type = types.str;
default = "pi";
description = "Host wrapper command name.";
};
runnerName = mkOption {
type = types.str;
default = "m3ta-pi-agent-runner";
description = "Privileged runner command invoked via scoped sudo rule.";
};
hideDirectBinary = mkOption {
type = types.bool;
default = true;
description = ''
When true and the wrapper is enabled, do not add the raw Pi package to the
host PATH, reducing bypass risk by making the wrapper the canonical entrypoint.
'';
};
syncConfigFromHost = mkOption {
type = types.bool;
default = true;
description = ''
Sync host Pi config directory into isolated `${cfg.stateDir}/.pi/agent`
on each invocation.
'';
};
hostConfigPath = mkOption {
type = types.str;
default = ".pi/agent";
description = ''
Default source path for host Pi config sync. Relative paths resolve from
the invoking user's home. Per-user hostUsers.<name>.configPath overrides this.
'';
};
extraRunArgs = mkOption {
type = types.listOf types.str;
default = [];
description = "Extra arguments inserted before user-provided Pi args.";
};
extraEnvironment = mkOption {
type = types.attrsOf types.str;
default = {};
description = "Additional environment variables passed to isolated Pi runtime.";
};
};
};
config = mkIf cfg.enable {
assertions =
[
{
assertion = cfg.hostUsers != {};
message = "m3ta.pi-agent.hostUsers must define at least one authorized host user.";
}
{
assertion = (!cfg.wrapper.enable) || (cfg.hostUsers != {});
message = "m3ta.pi-agent.hostUsers must not be empty when wrapper is enabled.";
}
]
++ mapAttrsToList (user: userCfg: {
assertion = userCfg.projectRoots != [];
message = "m3ta.pi-agent.hostUsers.${user}.projectRoots must not be empty.";
}) cfg.hostUsers;
users.groups = mkIf cfg.createUser {
"${cfg.group}" = {};
};
users.users = mkIf cfg.createUser {
"${cfg.user}" = {
isSystemUser = true;
group = cfg.group;
description = "Isolated Pi agent user";
home = cfg.stateDir;
createHome = true;
shell = pkgs.bashInteractive;
};
};
systemd.tmpfiles.rules = [
"d ${cfg.stateDir} 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/.pi 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/.pi/agent 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/.project-mounts 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/projects 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/.npm 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/.npm-global 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/.npm-global/bin 0750 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/.npm-global/lib 0750 ${cfg.user} ${cfg.group} - -"
];
# Wrapper is canonical when enabled; raw package on PATH is optional and
# disabled by default to reduce bypass opportunities.
environment.systemPackages =
optional cfg.wrapper.enable wrapper
++ optional ((!cfg.wrapper.enable) || (!cfg.wrapper.hideDirectBinary)) cfg.package;
security.sudo.extraRules = mkIf (cfg.wrapper.enable && hostUserNames != []) [
{
users = hostUserNames;
commands = [
{
command = "${runner}/bin/${cfg.wrapper.runnerName}";
options = ["NOPASSWD"];
}
];
}
];
};
}
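
For context when reviewing the module above, a host configuration enabling it could look like the following sketch. The username `alice`, the project roots, and the secrets path are illustrative placeholders, not values from this repository:

```nix
{
  m3ta.pi-agent = {
    enable = true;

    # Per-host-user policy: each attribute name is a host username,
    # with its own allowed project roots (expanded against that user's home).
    hostUsers.alice = {
      projectRoots = ["~/p" "~/work"];
      # configPath = ".pi/agent"; # optional; defaults to wrapper.hostConfigPath
    };

    # Nix-managed settings override synced host settings recursively.
    settings = {
      defaultProvider = "anthropic";
      quietStartup = true;
    };

    # Secret env files are appended last into the isolated .pi/.env.
    environmentFiles = ["/run/secrets/pi-agent.env"];
  };
}
```

With `wrapper.hideDirectBinary` left at its default of `true`, `pi` on the host resolves only to the policy-enforcing wrapper, which hands off to the root runner via the scoped NOPASSWD sudo rule.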


@@ -25,20 +25,20 @@
 in
 stdenv.mkDerivation (finalAttrs: {
   pname = "n8n";
-  version = "stable";
+  version = "2.14.2";
   src = fetchFromGitHub {
     owner = "n8n-io";
     repo = "n8n";
-    tag = "${finalAttrs.version}";
-    hash = "sha256-/atba0ymCqhh5Rt61UxwC2xf8SGrRsEKtlsDCIkg37Y=";
+    tag = "n8n@${finalAttrs.version}";
+    hash = "sha256-nWV3DFDkBlfDdoOxwYB0HSrTyKpTt70YxAQYUPartkE=";
   };
   pnpmDeps = fetchPnpmDeps {
     inherit (finalAttrs) pname version src;
     pnpm = pnpm_10;
     fetcherVersion = 3;
-    hash = "sha256-YGplNNvIOIY1BthWmejAzucXujq8AkgPJus774GmWCA=";
+    hash = "sha256-0SnPF3CgIja3M1ubLrwyFcx7vY0eHz9DEgn/gDLXN80=";
   };
   nativeBuildInputs =