Memori transforms stateless chatbots into intelligent conversational agents that remember user preferences, past interactions, and context across sessions. No more “What’s your account number?” every time — your chatbot recalls everything automatically.
Traditional chatbots lose context between sessions. Users must repeat themselves, and the experience feels frustrating and impersonal. Memori solves this by:
- Remembering user preferences — favorite products, communication style, accessibility needs
- Recalling past conversations — previous issues, solutions, and outcomes
- Building user profiles — automatically extracting facts, preferences, and context over time
- Providing continuity — a seamless experience across days, weeks, or months
- Entity ID — the user interacting with your bot (e.g., `user_456` or `customer_jane_doe`)
- Process ID — your chatbot's identity (e.g., `support_bot` or `sales_assistant`)
```python
mem.attribution(
    entity_id="user_456",      # Who is this conversation with?
    process_id="support_bot"   # Which bot is handling it?
)
```
Memori uses these to create isolated memory spaces. User A never sees User B’s memories, and your support bot maintains different context than your sales bot.
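To build intuition, you can picture isolation as keying each memory store by the `(entity_id, process_id)` pair. The sketch below is a toy model in plain Python to illustrate the concept — it is not Memori's actual storage implementation:

```python
from collections import defaultdict

# Toy model: each (entity_id, process_id) pair gets its own memory space.
# Illustrative only -- Memori handles this internally.
memory_spaces: dict[tuple[str, str], list[str]] = defaultdict(list)

def remember(entity_id: str, process_id: str, fact: str) -> None:
    """Store a fact in the space for this user/bot pair."""
    memory_spaces[(entity_id, process_id)].append(fact)

def recall(entity_id: str, process_id: str) -> list[str]:
    """Retrieve only the facts belonging to this user/bot pair."""
    return memory_spaces[(entity_id, process_id)]

remember("user_456", "support_bot", "username is jane_smith")
remember("user_789", "support_bot", "prefers email contact")
remember("user_456", "sales_bot", "interested in annual plan")

# User A never sees User B's memories...
print(recall("user_456", "support_bot"))  # ['username is jane_smith']
# ...and the sales bot keeps separate context for the same user.
print(recall("user_456", "sales_bot"))    # ['interested in annual plan']
```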
```python
from memori import Memori
from openai import OpenAI


def create_support_bot(customer_id: str):
    """Initialize a support bot for a specific customer."""
    client = OpenAI()
    mem = Memori().llm.register(client)
    # Link conversations to this customer and the support bot process
    mem.attribution(
        entity_id=customer_id,
        process_id="support_bot"
    )
    return client, mem


def chat(client, user_message: str):
    """Send a message and get a response."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful customer support agent. "
                           "Remember customer preferences and history."
            },
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Customer's first interaction
    client, mem = create_support_bot("customer_456")

    print("Customer: I'm having trouble logging in. My username is jane_smith.")
    response1 = chat(client, "I'm having trouble logging in. My username is jane_smith.")
    print(f"Support: {response1}\n")

    print("Customer: Can you reset my password?")
    response2 = chat(client, "Can you reset my password?")
    print(f"Support: {response2}\n")

    # Wait for memory processing
    mem.augmentation.wait()

    # Later conversation — new session, same customer
    print("--- Customer returns 3 days later ---\n")
    client2, mem2 = create_support_bot("customer_456")

    print("Customer: I'm locked out again!")
    response3 = chat(client2, "I'm locked out again!")
    print(f"Support: {response3}")
    # Memori recalls: username is jane_smith, previous login issues
```
Run the Bot
```bash
python support_bot.py
```
The bot remembers the customer’s username and previous login issues, even in a completely new session!
Create a shopping assistant that learns user preferences and recommends products based on past interactions.
```python
from memori import Memori
from openai import OpenAI

client = OpenAI()
mem = Memori().llm.register(client)

# Attribution for this shopper
mem.attribution(
    entity_id="shopper_789",
    process_id="shopping_assistant"
)

# First interaction: user shares preferences
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a personal shopping assistant. "
                       "Learn user preferences and make personalized recommendations."
        },
        {
            "role": "user",
            "content": "I'm looking for a laptop. I prefer MacBooks and need 16GB RAM minimum."
        }
    ]
)
print(response.choices[0].message.content)

mem.augmentation.wait()

# Later interaction — Memori recalls preferences
response2 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Show me your latest laptop deals."}
    ]
)
print(response2.choices[0].message.content)
# Memori injects: "User prefers MacBooks, needs 16GB+ RAM"
```
Build an Agno-powered chatbot with persistent memory across conversations.
```python
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from memori import Memori

model = OpenAIChat(id="gpt-4o-mini")
mem = Memori().llm.register(openai_chat=model)
mem.attribution(
    entity_id="user_123",
    process_id="conversational_agent"
)

agent = Agent(
    model=model,
    instructions=[
        "You are a friendly conversational assistant.",
        "Remember user preferences and context from previous conversations.",
    ],
    markdown=True,
)

# First conversation
print("User: I love science fiction books, especially by Philip K. Dick")
response1 = agent.run(
    "I love science fiction books, especially by Philip K. Dick"
)
print(f"Agent: {response1.content}\n")

# Later conversation
print("User: Can you recommend a book?")
response2 = agent.run("Can you recommend a book?")
print(f"Agent: {response2.content}")
# Agent recalls: User loves sci-fi, especially Philip K. Dick

mem.augmentation.wait()
```
Group related conversations into sessions for better context organization.
```python
from memori import Memori
from openai import OpenAI

client = OpenAI()
mem = Memori().llm.register(client)
mem.attribution(entity_id="customer_456", process_id="support_bot")

# Session 1: password reset issue
print("Session 1: Password Reset")
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I need to reset my password"}]
)
print(response.choices[0].message.content)

# Start a new session for a different issue
mem.new_session()

print("\nSession 2: Billing Question")
response2 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Why was I charged twice?"}]
)
print(response2.choices[0].message.content)
# Each session maintains separate conversation context
```
This makes debugging easier and helps you understand memory patterns in the dashboard.
Handle Memory Processing in Scripts
Memory augmentation runs asynchronously. In short-lived CLI scripts, call `mem.augmentation.wait()` to ensure processing completes before exit.
```python
# In CLI scripts
response = client.chat.completions.create(...)
print(response.choices[0].message.content)

mem.augmentation.wait()  # Wait for memory processing
```
In long-running web servers, this is not needed — augmentation happens in the background.
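Why the explicit wait matters in a short-lived process can be illustrated with a toy background worker. This is plain Python threading, not Memori's internals: if the main thread exits before the worker finishes, the background work is lost unless you join — joining here plays the role of `mem.augmentation.wait()`.

```python
import threading
import time

processed = []

def augment(message: str) -> None:
    # Simulate slow background memory processing.
    time.sleep(0.1)
    processed.append(message)

# Kick off processing asynchronously, like memory augmentation.
worker = threading.Thread(target=augment, args=("reset password",))
worker.start()

# The background work has not finished yet; a CLI script that
# exits at this point would lose it.
print(f"processed so far: {processed}")

# Joining the worker ensures the work completes before exit.
worker.join()
print(f"processed after join: {processed}")  # ['reset password']
```

A long-running web server never reaches "exit" in this sense, which is why the explicit wait is unnecessary there.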
Use Sessions for Conversation Grouping
Group related interactions into sessions:
```python
# Start a new conversation thread
mem.new_session()

# Or restore a previous session
session_id = mem.config.session_id
# ... later ...
mem.set_session(session_id)
```