## Overview

The ConversationManager provides context management for RepoMaster CLI interactions, allowing users to maintain conversation history across multiple inputs. It handles message storage, history optimization, and persistent conversation state management.
## Class Definition

```python
class ConversationManager:
    """Simple conversation manager for CLI interactions"""
```

Located in: `src/core/conversation_manager.py`
## Constructor

### `__init__()`

Initialize a conversation manager for CLI interactions.

```python
def __init__(self, user_id: str, mode: str, persistent: bool = False)
```

**Parameters:**

- `user_id` (str): User identifier for tracking conversation sessions
- `mode` (str): Backend mode identifier. Supported modes:
  - `"deepsearch"` - Deep Search Agent mode
  - `"general_assistant"` - General Programming Assistant mode
  - `"repository_agent"` - Repository Exploration Agent mode
  - `"unified"` - Unified Multi-Agent Interface mode
- `persistent` (bool): Whether to load/save conversation history across sessions. When `True`, conversation history is automatically saved to disk and restored in future sessions.
**Example:**

```python
from src.core.conversation_manager import ConversationManager, get_user_id_for_cli

# Get user ID
user_id = get_user_id_for_cli()

# Create conversation manager
conversation = ConversationManager(
    user_id=user_id,
    mode="unified",
    persistent=True  # Enable persistent history
)
```
## Attributes

- `mode` (str): Backend mode (`deepsearch`, `general_assistant`, `repository_agent`, `unified`)
- `persistent` (bool): Whether conversation history persists across sessions
- `messages` (list): List of conversation messages. Each message is a dictionary with:
  - `"role"` (str): Either `"user"` or `"assistant"`
  - `"content"` (str): Message content
- Storage directory: Directory for storing conversation history files (default: `data/cli_conversations`)
## Methods

### `add_message()`

Add a message to the conversation history.

```python
def add_message(self, role: str, content: str)
```

**Parameters:**

- `role` (str): Message role: `"user"` or `"assistant"`
- `content` (str): Message content (empty messages are ignored)

If persistent mode is enabled, the conversation is automatically saved after each message is added.
**Example:**

```python
# Add user message
conversation.add_message("user", "How do I parse JSON in Python?")

# Add assistant response
conversation.add_message("assistant", "You can use the json module...")
```
### `get_optimized_prompt()`

Get an optimized prompt with conversation context included.

```python
def get_optimized_prompt(self, current_input: str) -> str
```

**Returns:** Optimized prompt with history context:

- If there is no meaningful history (≤1 messages), returns the current input as-is
- With history, returns a formatted prompt combining the history context and the current question

**Behavior:**

- **No history:** Returns the current input unchanged
- **With history:** Attempts to optimize the dialogue history using `optimize_execution()` from the tool optimizer
- **Fallback:** If optimization fails, returns a simple history summary
- **Error handling:** On any exception, falls back to returning the current input
**Example:**

```python
# First interaction - no history
optimized = conversation.get_optimized_prompt("What is Python?")
# Returns: "What is Python?"

# Later interaction - with history
optimized = conversation.get_optimized_prompt("How about JavaScript?")
# Returns: "[History Message]:\n<optimized history>\n[Current User Question]:\nHow about JavaScript?\n"
```
### `clear_conversation()`

Clear all conversation history.

```python
def clear_conversation(self)
```

Clears the message list and, if persistent mode is enabled, saves the empty state to disk.

**Example:**

```python
conversation.clear_conversation()
# Output: ✅ Conversation history cleared
```
### `show_history()`

Display a summary of the conversation history.

Shows:

- Total number of messages
- The last 6 messages (or all messages if there are fewer than 6)
- Truncated message content (max 100 characters per message)
- A count of earlier messages if there are more than 6 in total

**Example:**

```python
conversation.show_history()
```

Output:

```
📚 Conversation History (8 messages):
--------------------------------------------------
1. User: What is Python?
2. Assistant: Python is a high-level programming language...
3. User: How about JavaScript?
4. Assistant: JavaScript is primarily used for web development...
5. User: Compare them
6. Assistant: Here's a comparison of Python and JavaScript...
... and 2 earlier messages
--------------------------------------------------
```
## Internal Methods

### `_optimize_dialogue()`

Optimize dialogue history using the tool optimizer module (internal use).

```python
def _optimize_dialogue(self) -> Optional[str]
```

**Returns:** Optimized dialogue history string, or `None` if optimization fails.

Attempts to import and use `optimize_execution()` from `src.utils.tool_optimizer_dialog`. Falls back to `_simple_history_summary()` if the module is not available.
### `_simple_history_summary()`

Create a simple history summary when optimization is not available (internal use).

```python
def _simple_history_summary(self) -> str
```

Returns a summary of the last 4 messages, with long messages truncated to 200 characters.
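The fallback can be pictured as a short standalone sketch. This is a hypothetical reimplementation based only on the behavior described above (last 4 messages, 200-character truncation); the function name, output layout, and `"..."` truncation marker are assumptions, not the actual source.

```python
# Hypothetical sketch of the fallback summary: keep only the last 4
# messages and truncate long contents to 200 characters.
def simple_history_summary(messages):
    recent = messages[-4:]  # last 4 messages, per the documented behavior
    lines = []
    for msg in recent:
        content = msg["content"]
        if len(content) > 200:
            content = content[:200] + "..."  # truncation marker is an assumption
        lines.append(f'{msg["role"]}: {content}')
    return "\n".join(lines)
```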
### `_load_conversation()`

Load an existing conversation from disk (internal use).

```python
def _load_conversation(self)
```

Loads conversation history from a pickle file in the data directory. File name format: `{user_id}_{mode}_conversation.pkl`
### `_save_conversation()`

Save the conversation to disk (internal use).

```python
def _save_conversation(self)
```

Saves conversation history to a pickle file using joblib.
## Utility Functions

### `get_user_id_for_cli()`

Get a persistent user ID for CLI usage.

```python
def get_user_id_for_cli() -> str
```

**Returns:** User ID for CLI sessions - either a persistent ID read from a temp file or a session-based fallback ID.

**Behavior:**

- Checks for an existing user ID in the temp directory (`/tmp/repomaster_cli_user_id`)
- If found, returns the existing ID
- If not found, generates a new short UUID (8 characters) and saves it
- Falls back to a process-based ID if file operations fail
**Example:**

```python
from src.core.conversation_manager import get_user_id_for_cli

user_id = get_user_id_for_cli()
# Returns: "a1b2c3d4" (short UUID) or "cli_user_12345" (fallback)
```
## Complete Usage Example

### Example from launcher.py - Unified Mode

```python
from src.core.agent_scheduler import RepoMasterAgent
from src.core.conversation_manager import ConversationManager, get_user_id_for_cli

# Get configuration
llm_config = get_llm_config()
execution_config = {
    "work_dir": "/path/to/work/dir",
    "use_docker": False
}

# Create RepoMaster agent
agent = RepoMasterAgent(
    llm_config=llm_config,
    code_execution_config=execution_config
)

# Create conversation manager
user_id = get_user_id_for_cli()
conversation = ConversationManager(
    user_id=user_id,
    mode="unified",
    persistent=True
)

# Interactive loop
while True:
    task = input("🤖 Please describe your task: ").strip()
    if task.lower() in ['quit', 'exit', 'q']:
        break
    if task.lower() in ['history', 'h']:
        conversation.show_history()
        continue
    if task.lower() in ['clear', 'c']:
        conversation.clear_conversation()
        continue
    if not task:
        continue

    # Get optimized prompt with conversation context
    optimized_task = conversation.get_optimized_prompt(task)
    conversation.add_message("user", task)

    # Execute task
    result = agent.solve_task_with_repo(optimized_task)
    conversation.add_message("assistant", result)
    print(f"\n📋 Task result:\n{result}\n")
```
### Example from launcher.py - Deep Search Mode

```python
from src.services.agents.deep_search_agent import AutogenDeepSearchAgent
from src.core.conversation_manager import ConversationManager, get_user_id_for_cli
import asyncio

# Create deep search agent
agent = AutogenDeepSearchAgent(
    llm_config=llm_config,
    code_execution_config=execution_config
)

# Create conversation manager
user_id = get_user_id_for_cli()
conversation = ConversationManager(user_id, "deepsearch")

# Interactive loop
while True:
    query = input("\n🤔 Please enter search question: ").strip()
    if query.lower() in ['quit', 'exit', 'q']:
        break
    if query.lower() in ['history', 'h']:
        conversation.show_history()
        continue
    if query.lower() in ['clear', 'c']:
        conversation.clear_conversation()
        continue
    if not query:
        continue

    # Get optimized prompt with conversation context
    optimized_query = conversation.get_optimized_prompt(query)
    conversation.add_message("user", query)

    print("🔍 Searching...")
    result = asyncio.run(agent.deep_search(optimized_query))
    conversation.add_message("assistant", result)
    print(f"\n📋 Search results:\n{result}\n")
```
### Example from launcher.py - Repository Agent Mode

```python
import os

from src.core.agent_scheduler import RepoMasterAgent
from src.core.conversation_manager import ConversationManager, get_user_id_for_cli

# Create RepoMaster agent
agent = RepoMasterAgent(
    llm_config=llm_config,
    code_execution_config=execution_config
)

# Create conversation manager
user_id = get_user_id_for_cli()
conversation = ConversationManager(user_id, "repository_agent")

# Interactive loop
while True:
    task_description = input("\n📝 Please describe your task: ").strip()
    if task_description.lower() in ['quit', 'exit', 'q']:
        break
    if task_description.lower() in ['history', 'h']:
        conversation.show_history()
        continue
    if task_description.lower() in ['clear', 'c']:
        conversation.clear_conversation()
        continue
    if not task_description:
        continue

    repository = input("📁 Please enter repository path or URL: ").strip()
    if not repository:
        print("❌ Repository path cannot be empty")
        continue

    # Optional: input data
    use_input_data = input("🗂️ Do you need to provide input data files? (y/N): ").strip().lower()
    input_data = None
    if use_input_data in ['y', 'yes']:
        input_path = input("📂 Please enter data file path: ").strip()
        if input_path and os.path.exists(input_path):
            input_data = f'[{{"path": "{input_path}", "description": "User provided input data"}}]'

    # Get optimized prompt with conversation context
    optimized_task = conversation.get_optimized_prompt(task_description)
    conversation.add_message("user", f"Task: {task_description}\nRepository: {repository}")

    print("🔧 Processing repository task...")

    # Execute task
    result = agent.run_repository_agent(
        task_description=optimized_task,
        repository=repository,
        input_data=input_data
    )
    conversation.add_message("assistant", result)
    print(f"\n📋 Task result:\n{result}\n")
```
## Storage

Conversation history is stored in pickle format using joblib:

- **Location:** `data/cli_conversations/`
- **Filename:** `{user_id}_{mode}_conversation.pkl`
- **Format:** Pickled list of message dictionaries
### Data Structure

```python
messages = [
    {
        "role": "user",
        "content": "What is Python?"
    },
    {
        "role": "assistant",
        "content": "Python is a high-level programming language..."
    },
    # ... more messages
]
```
## Error Handling

The ConversationManager includes graceful error handling:

- **Load failures:** Warns the user and initializes an empty message list
- **Save failures:** Warns the user but continues operation
- **Optimization failures:** Falls back to a simple summary or the current input
- **Import errors:** Uses fallback methods when optional modules are unavailable
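The load-failure behavior can be illustrated with a minimal sketch: catch the error, warn, and start with an empty history instead of crashing. The function name and warning text here are assumptions for illustration.

```python
import pickle

def safe_load(path):
    """Load pickled messages, falling back to an empty list on failure."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except (OSError, pickle.UnpicklingError) as exc:
        # Warn but keep running, mirroring the documented behavior
        print(f"⚠️ Could not load conversation history: {exc}")
        return []
```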
## See Also