# mem0 NixOS Module

Mem0 REST API server module for AI memory management.

## Overview

This module provides a systemd service for the Mem0 REST API server, enabling AI agents to maintain persistent memory across conversations.
## Quick Start

```nix
{config, ...}: {
  imports = [m3ta-nixpkgs.nixosModules.mem0];

  m3ta.mem0 = {
    enable = true;
    port = 8000;

    llm = {
      provider = "openai";
      apiKeyFile = "/run/secrets/openai-api-key";
      model = "gpt-4o-mini";
    };

    vectorStore = {
      provider = "qdrant";
      config = {
        host = "localhost";
        port = 6333;
      };
    };
  };
}
```
## Module Options

### m3ta.mem0.enable

Enable the Mem0 REST API server.

- Type: `boolean`
- Default: `false`

### m3ta.mem0.package

The mem0 package to use.

- Type: `package`
- Default: `pkgs.mem0`

### m3ta.mem0.host

Host address to bind the server to.

- Type: `string`
- Default: `"127.0.0.1"`

### m3ta.mem0.port

Port to run the REST API server on.

- Type: `port`
- Default: `8000`

### m3ta.mem0.workers

Number of worker processes.

- Type: `integer`
- Default: `1`

### m3ta.mem0.logLevel

Logging level for the server.

- Type: `enum ["critical" "error" "warning" "info" "debug" "trace"]`
- Default: `"info"`

### m3ta.mem0.stateDir

Directory in which mem0 stores its data and state.

- Type: `path`
- Default: `"/var/lib/mem0"`

### m3ta.mem0.user

User account under which mem0 runs.

- Type: `string`
- Default: `"mem0"`

### m3ta.mem0.group

Group under which mem0 runs.

- Type: `string`
- Default: `"mem0"`

### m3ta.mem0.environmentFile

Environment file containing additional configuration.

- Type: `nullOr path`
- Default: `null`
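`environmentFile` is useful for settings you do not want written into the world-readable Nix store. A minimal sketch; the path below is illustrative, and which variables mem0 actually reads depends on your provider setup:

```nix
# Illustrative: load extra settings from an out-of-store env file
m3ta.mem0 = {
  enable = true;
  environmentFile = "/run/secrets/mem0.env";
};
```

The file uses the systemd `EnvironmentFile` format: one `KEY=value` pair per line.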
## LLM Configuration

### m3ta.mem0.llm.provider

LLM provider to use.

- Type: `enum ["openai" "anthropic" "azure" "groq" "together" "ollama" "litellm"]`
- Default: `"openai"`

### m3ta.mem0.llm.model

Model name to use.

- Type: `string`
- Default: `"gpt-4o-mini"`

### m3ta.mem0.llm.apiKeyFile

Path to a file containing the API key.

- Type: `nullOr path`
- Default: `null`
- Example: `"/run/secrets/openai-api-key"`

### m3ta.mem0.llm.temperature

Temperature parameter for LLM generation.

- Type: `nullOr float`
- Default: `null`

### m3ta.mem0.llm.maxTokens

Maximum number of tokens for LLM generation.

- Type: `nullOr int`
- Default: `null`

### m3ta.mem0.llm.extraConfig

Additional LLM configuration options.

- Type: `attrs`
- Default: `{}`
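A sketch of how `extraConfig` can be used; since it is a free-form attribute set, the valid keys depend on the chosen provider. The `top_p` key below is an assumed provider parameter for illustration, not an option defined by this module:

```nix
# Illustrative only: extraConfig is passed through to the LLM provider,
# so keys like top_p here are provider-specific assumptions.
m3ta.mem0.llm = {
  provider = "openai";
  model = "gpt-4o-mini";
  extraConfig = {
    top_p = 0.9;
  };
};
```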
## Vector Store Configuration

### m3ta.mem0.vectorStore.provider

Vector database provider.

- Type: `enum ["qdrant" "chroma" "pinecone" "weaviate" "faiss" "pgvector" "redis" "elasticsearch" "milvus"]`
- Default: `"qdrant"`

### m3ta.mem0.vectorStore.config

Configuration for the vector store.

- Type: `attrs`
- Default: `{}`

Example for Qdrant:

```nix
vectorStore.config = {
  host = "localhost";
  port = 6333;
  collection_name = "mem0_memories";
};
```

Example for pgvector:

```nix
vectorStore.config = {
  host = "localhost";
  port = 5432;
  dbname = "postgres";
  user = "postgres";
  password = "postgres";
};
```
## Embedder Configuration

### m3ta.mem0.embedder.provider

Embedding model provider.

- Type: `nullOr (enum ["openai" "huggingface" "ollama" "vertexai"])`
- Default: `null`

### m3ta.mem0.embedder.model

Embedding model name.

- Type: `nullOr string`
- Default: `null`

### m3ta.mem0.embedder.config

Configuration for the embedder.

- Type: `attrs`
- Default: `{}`
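When both options are left at `null`, mem0 falls back to its provider defaults. A sketch of an explicit embedder, assuming a local Ollama instance; the model name is an example, not a default shipped by this module:

```nix
# Illustrative embedder setup; "nomic-embed-text" is an example
# Ollama embedding model, not a module default.
m3ta.mem0.embedder = {
  provider = "ollama";
  model = "nomic-embed-text";
};
```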
## Usage Examples

### Minimal Configuration

```nix
{config, ...}: {
  m3ta.mem0 = {
    enable = true;
  };
}
```
### With OpenAI

```nix
{config, ...}: {
  m3ta.mem0 = {
    enable = true;
    llm = {
      provider = "openai";
      apiKeyFile = "/run/secrets/openai-api-key";
      model = "gpt-4";
    };
  };
}
```
### With Local LLM (Ollama)

```nix
{config, ...}: {
  m3ta.mem0 = {
    enable = true;
    llm = {
      provider = "ollama";
      model = "llama2";
    };
  };
}
```
### With Port Management

```nix
{config, ...}: {
  m3ta.ports = {
    enable = true;
    definitions = {
      mem0 = 8000;
    };
    currentHost = config.networking.hostName;
  };

  m3ta.mem0 = {
    enable = true;
    port = config.m3ta.ports.get "mem0";
  };
}
```
### With Qdrant

```nix
{config, ...}: {
  m3ta.mem0 = {
    enable = true;
    vectorStore = {
      provider = "qdrant";
      config = {
        host = "localhost";
        port = 6333;
      };
    };
  };
}
```

Pair this with a local Qdrant instance:

```nix
services.qdrant = {
  enable = true;
  port = 6333;
};
```
### With Secrets (agenix)

```nix
{config, ...}: {
  age.secrets.openai-api-key = {
    file = ./secrets/openai-api-key.age;
  };

  m3ta.mem0 = {
    enable = true;
    llm = {
      apiKeyFile = config.age.secrets.openai-api-key.path;
    };
  };
}
```
## Service Management

### Start/Stop/Restart

```bash
# Start the service
sudo systemctl start mem0

# Stop the service
sudo systemctl stop mem0

# Restart the service
sudo systemctl restart mem0

# Check status
sudo systemctl status mem0
```

### View Logs

```bash
# Follow logs
sudo journalctl -u mem0 -f

# View the last 100 lines
sudo journalctl -u mem0 -n 100
```
### Service File

The module creates a systemd service at `/etc/systemd/system/mem0.service` with:

- Security hardening enabled
- Automatic restart on failure
- Proper user/group setup
## API Usage

### Add Memory

```bash
curl -X POST http://localhost:8000/v1/memories \
  -H "Content-Type: application/json" \
  -d '{
    "content": "User prefers coffee over tea",
    "metadata": {"user_id": "123"}
  }'
```

### Search Memories

```bash
# Quote the URL so the shell does not interpret the "?"
curl "http://localhost:8000/v1/memories/search?q=coffee"
```

### Update Memory

```bash
curl -X PATCH http://localhost:8000/v1/memories/memory_id \
  -H "Content-Type: application/json" \
  -d '{
    "content": "User prefers coffee over tea, but also likes chai"
  }'
```

### Delete Memory

```bash
curl -X DELETE http://localhost:8000/v1/memories/memory_id
```
## Dependencies

### Required Services

Depending on your configuration, you may need:

- `qdrant` (if using the Qdrant vector store)
- `postgresql` with pgvector (if using pgvector)
- `chroma` (if using Chroma)
- `ollama` (if using local LLMs)

Example: Qdrant

```nix
services.qdrant = {
  enable = true;
  port = 6333;
};
```

Example: PostgreSQL

```nix
services.postgresql = {
  enable = true;
  enableTCPIP = true;
  package = pkgs.postgresql_15;
  # pgvector is an extension package, not a string
  extensions = ps: [ps.pgvector];
  settings = {
    port = 5432;
  };
};
```
## Firewall

The module automatically opens the firewall port when the server binds to a non-localhost address:

```nix
# The port is opened only if host is not "127.0.0.1" or "localhost"
m3ta.mem0 = {
  enable = true;
  host = "0.0.0.0"; # Bind to all interfaces
  port = 8000;
};
# Firewall port 8000 is opened automatically
```
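If you prefer to manage the firewall explicitly, or keep the server on localhost and punch the hole yourself, a sketch using the stock NixOS firewall options:

```nix
# Manual alternative, assuming the standard NixOS firewall module
networking.firewall.allowedTCPPorts = [8000];
```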
## Security

### User/Group

The module creates a dedicated user and group:

- User: `mem0`
- Group: `mem0`
- Home: `/var/lib/mem0`

### Hardening

The systemd service includes security hardening:

- `NoNewPrivileges`
- `PrivateTmp`
- `ProtectSystem=strict`
- `ProtectHome=true`
- `RestrictRealtime=true`
- `RestrictNamespaces=true`
- `LockPersonality=true`

### Secrets

Use `apiKeyFile` for API keys instead of plain text:

```nix
# Good
llm.apiKeyFile = "/run/secrets/openai-api-key";

# Bad (insecure: the key would end up in the world-readable Nix store)
llm.apiKey = "sk-xxx";
```
## Troubleshooting

### Service Won't Start

Check the logs:

```bash
sudo journalctl -u mem0 -n 50
```

Common issues:

- API key missing: ensure `apiKeyFile` exists and is readable by the `mem0` user
- Vector store unavailable: ensure Qdrant (or your chosen store) is running
- Port in use: check that the configured port is free

### API Not Responding

Check the service status and whether the port is listening:

```bash
sudo systemctl status mem0

# Check if the port is open
ss -tuln | grep 8000
```

### Memory Issues

Increase the memory limit with a systemd override:

```bash
sudo systemctl edit mem0
```

```ini
[Service]
MemoryMax=2G
```
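On NixOS the same limit can also be set declaratively instead of via an imperative drop-in, assuming the `mem0` service name used throughout this module:

```nix
# Declarative equivalent of the systemctl edit drop-in;
# MemoryMax is a standard systemd resource-control directive.
systemd.services.mem0.serviceConfig.MemoryMax = "2G";
```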
## Related

- mem0 Package - Package documentation
- Port Management Guide - Using with port management
- Using Modules Guide - Module usage patterns