Overview

Memori automatically captures information from conversations and recalls it when relevant. This guide shows you how to get started with basic memory operations.

Quick Start

1. Install Memori

pip install memori

2. Set up your environment

export OPENAI_API_KEY="your-api-key"
export MEMORI_API_KEY="your-memori-api-key"

3. Create your first memory-enabled app

from openai import OpenAI
from memori import Memori

# Initialize OpenAI client
client = OpenAI()

# Initialize Memori and register the LLM client
mem = Memori().llm.register(client)
mem.attribution(entity_id="user-123", process_id="my-app")

# First conversation - establish facts
response1 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "My favorite color is blue and I live in Paris"}
    ],
)
print(response1.choices[0].message.content)
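
# Memories are processed in the background. If this script runs straight
# through, wait for the first conversation to finish processing so its facts
# are available for recall (see "Wait for Augmentation" below).
mem.augmentation.wait()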

# Second conversation - Memori recalls context automatically
response2 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's my favorite color?"}],
)
print(response2.choices[0].message.content)  # Output: "Your favorite color is blue"

Using Self-Hosted Storage

For production use cases, you can use your own database instead of Memori Cloud.

from openai import OpenAI
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from memori import Memori

# Setup OpenAI
client = OpenAI()

# Setup SQLite database
engine = create_engine("sqlite:///memori.db")
Session = sessionmaker(bind=engine)

# Setup Memori with SQLite
mem = Memori(conn=Session).llm.register(client)
mem.attribution(entity_id="user-123", process_id="my-app")
mem.config.storage.build()  # Create tables

# Use normally - memories are stored locally
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I prefer morning meetings"}],
)
print(response.choices[0].message.content)

# Wait for augmentation to complete (important for short-lived scripts)
mem.augmentation.wait()
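
The same pattern should work with other SQLAlchemy-backed databases. The sketch below swaps SQLite for PostgreSQL; it assumes Memori accepts any SQLAlchemy session factory via conn, and the connection URL is a placeholder for your own server.

from openai import OpenAI
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from memori import Memori

client = OpenAI()

# Placeholder connection string - point this at your own PostgreSQL instance.
# Requires a driver such as psycopg2 (pip install psycopg2-binary).
engine = create_engine("postgresql+psycopg2://memori:secret@localhost:5432/memori")
Session = sessionmaker(bind=engine)

mem = Memori(conn=Session).llm.register(client)
mem.attribution(entity_id="user-123", process_id="my-app")
mem.config.storage.build()  # Create tables in the target database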

Manual Recall

You can manually retrieve relevant memories without making an LLM call.

from memori import Memori

mem = Memori()
mem.attribution(entity_id="user-123")

# Manually search for relevant facts
facts = mem.recall("What are my preferences?")

for fact in facts:
    print(f"- {fact['content']}")
    print(f"  Relevance: {fact.get('score', 0):.2f}")

How It Works

1. Capture: When you make an LLM call, Memori automatically captures the conversation in the background.

2. Extract: Memori analyzes the conversation and extracts key facts about the user (preferences, context, history).

3. Recall: On subsequent conversations, Memori searches for relevant facts and injects them into the system prompt.

4. Context: Your LLM receives enriched context automatically, enabling it to remember past interactions.
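
Conceptually, by the time the model handles the second request from the Quick Start, the message list it receives looks roughly like the sketch below. The exact wording and placement of the injected context are internal to Memori; this is only an illustration of the idea.

messages = [
    {
        "role": "system",
        "content": "Relevant facts about this user:\n"
                   "- Favorite color is blue\n"
                   "- Lives in Paris",
    },
    {"role": "user", "content": "What's my favorite color?"},
]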

Best Practices

Set Entity IDs

Always call attribution() with an entity_id to ensure memories are properly associated with each user.

Wait for Augmentation

For short-lived scripts, call mem.augmentation.wait() before exiting to ensure memories are fully processed.

Use Meaningful Process IDs

Set a process_id to segment memories by application or workflow (e.g., “support-chat”, “onboarding”).
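
For example, a support workflow might attribute its conversations like this (a minimal sketch using the same API as the Quick Start; "support-chat" is a placeholder name):

from openai import OpenAI
from memori import Memori

client = OpenAI()

mem = Memori().llm.register(client)
mem.attribution(entity_id="user-123", process_id="support-chat")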

Next Steps

Multi-User Management

Learn how to manage memories for multiple users

Streaming

Use Memori with streaming responses
