# mem0

Long-term memory layer for AI agents with REST API support.

## Description

Mem0 provides a memory management system for AI applications and agents. It enables AI assistants to maintain persistent memory across conversations and sessions using vector storage.

## Features

- 💾 **Long-term Memory**: Persistent memory across conversations
- 🔍 **Semantic Search**: Vector-based similarity search
- 🤖 **Multiple LLM Support**: OpenAI, Anthropic, Groq, Ollama, etc.
- 📊 **Vector Stores**: Qdrant, Chroma, Pinecone, pgvector, etc.
- 🌐 **REST API**: Easy integration via HTTP endpoints
- 🎯 **Multi-modal Support**: Text and image memories
- 🔧 **Configurable**: Flexible LLM and embedding models

## Installation

### Via Overlay

```nix
{pkgs, ...}: {
  environment.systemPackages = with pkgs; [
    mem0
  ];
}
```

### Direct Reference

```nix
{pkgs, inputs, ...}: {
  environment.systemPackages = with pkgs; [
    inputs.m3ta-nixpkgs.packages.${pkgs.system}.mem0
  ];
}
```

### Run Directly

```bash
nix run git+https://code.m3ta.dev/m3tam3re/nixpkgs#mem0
```

## Usage

### Command Line

```bash
# Start mem0 server
mem0-server

# With custom port
MEM0_PORT=8080 mem0-server

# With LLM provider
MEM0_LLM_PROVIDER=openai OPENAI_API_KEY=sk-xxx mem0-server
```
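
When scripting against the server, it helps to wait for the listener to come up before sending requests. A small helper for that, assuming the default port 8000 from the configuration section below:

```python
import socket
import time

def wait_for_server(host: str = "localhost", port: int = 8000,
                    timeout: float = 30.0) -> bool:
    """Poll until the server accepts TCP connections, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```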

### Python Library

```python
from mem0 import Memory

# Initialize with OpenAI
memory = Memory(
    llm_provider="openai",
    llm_model="gpt-4o-mini",
    vector_store="qdrant",
    openai_api_key="sk-xxx",
)

# Add a memory
result = memory.add(
    "I prefer coffee over tea",
    metadata={"user_id": "123"},
)

# Search memories
memories = memory.search("What does the user prefer?")
```

### REST API

```bash
# Start server
mem0-server

# Add memory
curl -X POST http://localhost:8000/v1/memories \
  -H "Content-Type: application/json" \
  -d '{"content": "User likes Python"}'

# Search memories (quoted so the shell does not glob the ?)
curl "http://localhost:8000/v1/memories/search?q=Python"
```
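
The same endpoints can be called from any HTTP client. Here is a minimal Python sketch using only the standard library, assuming the routes shown above and JSON response bodies (the exact response shape is not documented here):

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8000"  # default MEM0_HOST/MEM0_PORT

def add_memory(content: str) -> dict:
    """POST a new memory to /v1/memories."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/memories",
        data=json.dumps({"content": content}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def search_memories(query: str) -> dict:
    """GET /v1/memories/search with a URL-encoded query."""
    url = f"{BASE_URL}/v1/memories/search?q=" + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

print(add_memory("User likes Python"))
print(search_memories("Python"))
```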

## Configuration

### Environment Variables

```bash
# LLM Configuration
export MEM0_LLM_PROVIDER=openai
export MEM0_LLM_MODEL=gpt-4o-mini
export MEM0_LLM_TEMPERATURE=0.7
export OPENAI_API_KEY=sk-xxx

# Vector Store
export MEM0_VECTOR_PROVIDER=qdrant
export QDRANT_HOST=localhost
export QDRANT_PORT=6333

# Server
export MEM0_HOST=0.0.0.0
export MEM0_PORT=8000
export MEM0_WORKERS=4
export MEM0_LOG_LEVEL=info
```
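
Assuming the server reads these variables at startup (as the CLI examples above suggest), they can also be set programmatically, e.g. from a test harness that spawns a short-lived server. A sketch, assuming `mem0-server` is on `PATH`:

```python
import os
import subprocess

# Variable names taken from the list above
env = {
    **os.environ,
    "MEM0_LLM_PROVIDER": "openai",
    "MEM0_LLM_MODEL": "gpt-4o-mini",
    "MEM0_PORT": "8080",
    "MEM0_LOG_LEVEL": "debug",
}

# Launch the server as a child process; call server.terminate() when done
server = subprocess.Popen(["mem0-server"], env=env)
```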

### NixOS Module (Recommended)

Use the NixOS module for production deployments:

```nix
{config, inputs, ...}: {
  imports = [inputs.m3ta-nixpkgs.nixosModules.mem0];

  m3ta.mem0 = {
    enable = true;
    port = 8000;
    llm = {
      provider = "openai";
      apiKeyFile = "/run/secrets/openai-api-key";
      model = "gpt-4o-mini";
    };
    vectorStore = {
      provider = "qdrant";
      config = {
        host = "localhost";
        port = 6333;
      };
    };
  };
}
```

See [mem0 Module](../modules/nixos/mem0.md) for full module documentation.

## Supported LLM Providers

| Provider | Model Examples | Notes |
|----------|----------------|-------|
| `openai` | gpt-4, gpt-3.5-turbo | Most tested |
| `anthropic` | claude-3-opus, claude-3-sonnet | Requires key |
| `groq` | mixtral-8x7b-32768 | Fast inference |
| `ollama` | llama2, mistral | Local only |
| `together` | llama-2-70b | API access |
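
Switching providers is a matter of changing `llm_provider` (and supplying the matching credential or local runtime). For example, here is the initialization from the Usage section pointed at a local Ollama model instead of OpenAI, reusing the constructor arguments shown earlier:

```python
from mem0 import Memory

# Local inference via Ollama; per the table above, no API key is needed
memory = Memory(
    llm_provider="ollama",
    llm_model="mistral",
    vector_store="qdrant",
)

memory.add("User prefers local-only inference")
```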

## Supported Vector Stores

| Provider | Requirements | Notes |
|----------|--------------|-------|
| `qdrant` | Qdrant server | Recommended |
| `chroma` | Chroma server | Simple setup |
| `pgvector` | PostgreSQL + pgvector | SQL-based |
| `pinecone` | Pinecone API | Cloud only |
| `redis` | Redis Stack | Fast |
| `elasticsearch` | ES cluster | Scalable |

## Use Cases

### AI Chatbot with Memory

```python
from mem0 import Memory

memory = Memory()

# During conversation
memory.add("User works at Google as a software engineer")

# Later conversations
memories = memory.search("Where does the user work?")
# Returns: ["User works at Google as a software engineer"]
```
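
In practice, the retrieved memories are folded into the prompt before each model call. A minimal sketch of that loop using the `Memory` API above; `llm_complete()` is a hypothetical placeholder for whatever LLM client you use:

```python
from mem0 import Memory

memory = Memory()

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM call."""
    raise NotImplementedError

def chat_turn(user_message: str) -> str:
    # Retrieve memories relevant to this message
    relevant = memory.search(user_message)
    context = "\n".join(str(m) for m in relevant)

    # Fold them into the prompt and generate a reply
    reply = llm_complete(
        f"Known facts about the user:\n{context}\n\nUser: {user_message}"
    )

    # Store the new message so later turns can recall it
    memory.add(user_message)
    return reply
```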

### Personal Assistant

```python
from mem0 import Memory

memory = Memory()

# Store preferences
memory.add("User prefers dark mode", metadata={"category": "preferences"})
memory.add("User is vegetarian", metadata={"category": "diet"})

# Retrieve relevant info
memories = memory.search("What should I cook for the user?")
```
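
If results come back with their metadata attached (the return shape is not pinned down here, so treat this as an assumption), the `category` tags make it easy to narrow what was retrieved:

```python
# Continuing from the block above; assumes each result is a dict
# carrying a "metadata" key, so adjust to the actual return shape
memories = memory.search("What should I cook for the user?")
diet_facts = [m for m in memories
              if m.get("metadata", {}).get("category") == "diet"]
```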

### Code Assistant

```python
from mem0 import Memory

memory = Memory()

# Store project context
memory.add("This is a NixOS project with custom packages")

# Later analysis
memories = memory.search("What kind of project is this?")
```

## Requirements

### Vector Store

You need a vector store running. Example with Qdrant:

```nix
services.qdrant = {
  enable = true;
  port = 6333;
};
```
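
Before starting mem0, it is worth checking that the store is reachable. Qdrant lists its collections over HTTP, so a quick probe (host and port matching the service definition above) looks like:

```python
import json
import urllib.request

# Qdrant's REST API lists existing collections at /collections
with urllib.request.urlopen("http://localhost:6333/collections") as resp:
    print(json.load(resp))
```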

### LLM Provider

You need API keys or local models:

```nix
# OpenAI key file consumed by the module
m3ta.mem0.llm.apiKeyFile = "/run/secrets/openai-api-key";

# Or use agenix to manage the secret
age.secrets.openai-api-key.file = ./secrets/openai.age;
```

## Build Information

- **Version**: 1.0.0
- **Language**: Python
- **License**: Apache-2.0
- **Source**: [GitHub](https://github.com/mem0ai/mem0)

## Python Dependencies

- `litellm` - Multi-LLM support
- `qdrant-client` - Qdrant client
- `pydantic` - Data validation
- `openai` - OpenAI client
- `fastapi` - REST API
- `uvicorn` - ASGI server
- `sqlalchemy` - Database ORM

## Platform Support

- Linux (primary)
- macOS (may work)
- Windows (not tested)

## Related

- [mem0 Module](../modules/nixos/mem0.md) - NixOS module documentation
- [Adding Packages](../guides/adding-packages.md) - How to add new packages
- [Port Management](../guides/port-management.md) - Managing service ports
- [Quick Start](../QUICKSTART.md) - Getting started guide