# mem0
Long-term memory layer for AI agents with REST API support.
## Description
Mem0 provides a sophisticated memory management system for AI applications and agents. It enables AI assistants to maintain persistent memory across conversations and sessions using vector storage.
## Features
- 💾 Long-term Memory: Persistent memory across conversations
- 🔍 Semantic Search: Vector-based similarity search
- 🤖 Multiple LLM Support: OpenAI, Anthropic, Groq, Ollama, etc.
- 📊 Vector Stores: Qdrant, Chroma, Pinecone, pgvector, etc.
- 🌐 REST API: Easy integration via HTTP endpoints
- 🎯 Multi-modal Support: Text and image memories
- 🔧 Configurable: Flexible LLM and embedding models
## Installation

### Via Overlay

```nix
{pkgs, ...}: {
  environment.systemPackages = with pkgs; [
    mem0
  ];
}
```
### Direct Reference

```nix
{pkgs, ...}: {
  environment.systemPackages = with pkgs; [
    inputs.m3ta-nixpkgs.packages.${pkgs.system}.mem0
  ];
}
```
### Run Directly

```bash
nix run git+https://code.m3ta.dev/m3tam3re/nixpkgs#mem0
```
## Usage

### Command Line

```bash
# Start mem0 server
mem0-server

# With custom port
MEM0_PORT=8080 mem0-server

# With LLM provider
MEM0_LLM_PROVIDER=openai OPENAI_API_KEY=sk-xxx mem0-server
```
### Python Library

```python
from mem0 import Memory

# Initialize with OpenAI
memory = Memory(
    llm_provider="openai",
    llm_model="gpt-4o-mini",
    vector_store="qdrant",
    openai_api_key="sk-xxx",
)

# Add a memory
result = memory.add(
    "I prefer coffee over tea",
    metadata={"user_id": "123"},
)

# Search memories
memories = memory.search("What does the user prefer?")
```
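Retrieval goes beyond a single search. A minimal sketch of the rest of the memory lifecycle, assuming the `get_all` and `delete` methods from the upstream mem0 library and that `add()` returns a dict containing an `id` field (both are assumptions about the release packaged here):

```python
from mem0 import Memory

memory = Memory()

# Store a memory, keeping the metadata convention from the example above
result = memory.add("I prefer coffee over tea", metadata={"user_id": "123"})

# List everything stored so far (get_all exists upstream; its exact
# signature may vary between releases)
for item in memory.get_all():
    print(item)

# Delete by id -- the "id" key in the add() result is an assumption
memory.delete(memory_id=result["id"])
```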
### REST API

```bash
# Start server
mem0-server

# Add memory
curl -X POST http://localhost:8000/v1/memories \
  -H "Content-Type: application/json" \
  -d '{"content": "User likes Python"}'

# Search memories
curl "http://localhost:8000/v1/memories/search?q=Python"
```
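The same endpoints work from any HTTP client. A short Python sketch using `requests`, with the paths taken from the curl examples above (the response shape is an assumption, so results are printed raw):

```python
import requests

BASE = "http://localhost:8000/v1"

# Add a memory (same payload as the curl example)
resp = requests.post(f"{BASE}/memories", json={"content": "User likes Python"})
resp.raise_for_status()
print(resp.json())

# Search memories
resp = requests.get(f"{BASE}/memories/search", params={"q": "Python"})
resp.raise_for_status()
print(resp.json())
```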
## Configuration

### Environment Variables

```bash
# LLM Configuration
export MEM0_LLM_PROVIDER=openai
export MEM0_LLM_MODEL=gpt-4o-mini
export MEM0_LLM_TEMPERATURE=0.7
export OPENAI_API_KEY=sk-xxx

# Vector Store
export MEM0_VECTOR_PROVIDER=qdrant
export QDRANT_HOST=localhost
export QDRANT_PORT=6333

# Server
export MEM0_HOST=0.0.0.0
export MEM0_PORT=8000
export MEM0_WORKERS=4
export MEM0_LOG_LEVEL=info
```
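If you prefer to manage the process from Python rather than a shell profile, the same variables can be passed explicitly. A small sketch; the variable names come from the list above:

```python
import os
import subprocess

# Same settings as the shell exports above, passed only to this process
env = {
    **os.environ,
    "MEM0_LLM_PROVIDER": "openai",
    "MEM0_LLM_MODEL": "gpt-4o-mini",
    "MEM0_VECTOR_PROVIDER": "qdrant",
    "QDRANT_HOST": "localhost",
    "QDRANT_PORT": "6333",
    "MEM0_HOST": "0.0.0.0",
    "MEM0_PORT": "8000",
}
subprocess.run(["mem0-server"], env=env, check=True)
```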
### NixOS Module (Recommended)
Use the NixOS module for production:
```nix
{config, ...}: {
  imports = [m3ta-nixpkgs.nixosModules.mem0];

  m3ta.mem0 = {
    enable = true;
    port = 8000;

    llm = {
      provider = "openai";
      apiKeyFile = "/run/secrets/openai-api-key";
      model = "gpt-4o-mini";
    };

    vectorStore = {
      provider = "qdrant";
      config = {
        host = "localhost";
        port = 6333;
      };
    };
  };
}
```
See mem0 Module for full module documentation.
## Supported LLM Providers

| Provider | Model Examples | Notes |
|---|---|---|
| `openai` | `gpt-4`, `gpt-3.5-turbo` | Most tested |
| `anthropic` | `claude-3-opus`, `claude-3-sonnet` | Requires key |
| `groq` | `mixtral-8x7b-32768` | Fast inference |
| `ollama` | `llama2`, `mistral` | Local only |
| `together` | `llama-2-70b` | API access |
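Switching providers is a constructor change, not a code change. A sketch reusing the keyword-argument shape from the Python Library example above (the exact kwargs may differ per release); `ollama` needs no API key since it runs locally:

```python
from mem0 import Memory

# Local inference via Ollama, per the table above -- no API key required
memory = Memory(
    llm_provider="ollama",
    llm_model="mistral",
    vector_store="qdrant",
)
```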
## Supported Vector Stores

| Provider | Requirements | Notes |
|---|---|---|
| `qdrant` | Qdrant server | Recommended |
| `chroma` | Chroma server | Simple setup |
| `pgvector` | PostgreSQL + pgvector | SQL-based |
| `pinecone` | Pinecone API | Cloud only |
| `redis` | Redis Stack | Fast |
| `elasticsearch` | ES cluster | Scalable |
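Vector stores are swapped the same way. A hedged sketch using a hypothetical `vector_store_config` keyword for connection details, since this page only shows the `vector_store` name being selected:

```python
from mem0 import Memory

# pgvector instead of Qdrant; vector_store_config is a hypothetical
# kwarg used here for illustration -- check the packaged release
memory = Memory(
    llm_provider="openai",
    vector_store="pgvector",
    vector_store_config={
        "host": "localhost",
        "port": 5432,
        "dbname": "mem0",
    },
)
```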
## Use Cases

### AI Chatbot with Memory

```python
from mem0 import Memory

memory = Memory()

# During conversation
memory.add("User works at Google as a software engineer")

# Later conversations
memories = memory.search("Where does the user work?")
# Returns: ["User works at Google as a software engineer"]
```
### Personal Assistant

```python
memory = Memory()

# Store preferences
memory.add("User prefers dark mode", metadata={"category": "preferences"})
memory.add("User is vegetarian", metadata={"category": "diet"})

# Retrieve relevant info
memories = memory.search("What should I cook for the user?")
```
### Code Assistant

```python
memory = Memory()

# Store project context
memory.add("This is a NixOS project with custom packages")

# Later analysis
memories = memory.search("What kind of project is this?")
```
## Requirements

### Vector Store

You need a vector store running. Example with Qdrant via the NixOS module (which exposes Qdrant's configuration under `settings` rather than a top-level `port` option):

```nix
services.qdrant = {
  enable = true;
  settings.service.http_port = 6333;
};
```
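To confirm the store is actually reachable before starting mem0, you can ping it with `qdrant-client`, which is already among the package's Python dependencies:

```python
from qdrant_client import QdrantClient

# Connect to the Qdrant instance configured above; get_collections()
# raises if the server is unreachable
client = QdrantClient(host="localhost", port=6333)
print(client.get_collections())
```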
### LLM Provider

You need API keys or local models. Using the module namespace shown above:

```nix
# OpenAI
m3ta.mem0.llm.apiKeyFile = "/run/secrets/openai-api-key";

# Or use agenix
age.secrets.openai-api-key.file = ./secrets/openai.age;
```
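When running the server by hand instead of via the module, the key can be loaded from the same secrets path. A small sketch; the path mirrors the `apiKeyFile` above:

```python
import os
from pathlib import Path

# Load the key from the secrets file referenced above and expose it
# via the environment variable the server expects
os.environ["OPENAI_API_KEY"] = (
    Path("/run/secrets/openai-api-key").read_text().strip()
)
```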
## Build Information
- Version: 1.0.0
- Language: Python
- License: Apache-2.0
- Source: GitHub
### Python Dependencies

- `litellm` - Multi-LLM support
- `qdrant-client` - Qdrant client
- `pydantic` - Data validation
- `openai` - OpenAI client
- `fastapi` - REST API
- `uvicorn` - ASGI server
- `sqlalchemy` - Database ORM
## Platform Support
- Linux (primary)
- macOS (may work)
- Windows (not tested)
## Related
- mem0 Module - NixOS module documentation
- Adding Packages - How to add new packages
- Port Management - Managing service ports
- Quick Start - Getting started guide