Overview
RepoMaster supports extensive runtime configuration through command-line flags, allowing you to customize behavior without modifying environment files.
Command Line Interface
Basic Usage
python launcher.py [--mode MODE] [--backend-mode BACKEND_MODE] [OPTIONS]
Getting Help
# View all available options
python launcher.py --help
# List available modes
python launcher.py --modes
# or
python launcher.py --list-modes
Mode Selection
Primary Mode
--mode {frontend,backend}
-m {frontend,backend}
Default : backend
Select the primary runtime mode:
frontend: Streamlit web interface
backend: Command-line agent interface
Examples :
python launcher.py --mode frontend
python launcher.py -m backend
Backend Mode
--backend-mode {unified,deepsearch,general_assistant,repository_agent}
-b {unified,deepsearch,general_assistant,repository_agent}
Default : unified
Select the backend sub-mode (only applies when --mode backend):
unified: Unified multi-agent interface (recommended)
deepsearch: Deep search agent direct access
general_assistant: Programming assistant direct access
repository_agent: Repository exploration agent direct access
Examples :
python launcher.py --backend-mode unified
python launcher.py -b deepsearch
API Configuration
API Provider Type
--api-type {basic,azure_openai,openai,claude,deepseek,basic_claude4,basic_deepseek_r1}
-a {basic,azure_openai,openai,claude,deepseek,basic_claude4,basic_deepseek_r1}
Default : basic
Select which API provider to use for LLM operations.
The basic type uses OpenAI-compatible configuration and is suitable for OpenAI and compatible endpoints.
Provider Options :
basic: OpenAI-compatible (uses OPENAI_API_KEY, OPENAI_MODEL)
openai: OpenAI (uses OPENAI_API_KEY, OPENAI_MODEL)
claude: Anthropic (uses ANTHROPIC_API_KEY, ANTHROPIC_MODEL)
deepseek: DeepSeek (uses DEEPSEEK_API_KEY, DEEPSEEK_MODEL)
azure_openai: Azure OpenAI (uses the AZURE_OPENAI_* variables)
basic_claude4: Claude 4 (uses CLAUDE_API_KEY; model: claude-sonnet-4-20250514)
basic_deepseek_r1: DeepSeek R1 (uses DEEPSEEK_API_KEY; model: deepseek-r1-0528)
Examples :
# Use OpenAI
python launcher.py --api-type openai
# Use Claude
python launcher.py -a claude
# Use DeepSeek R1 model specifically
python launcher.py --api-type basic_deepseek_r1
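The provider table above can be summarized as a small lookup from --api-type to the environment variables that provider reads. This helper is illustrative only (it is not part of RepoMaster's API); the variable names come from the table.

```python
# Which environment variables each --api-type expects to find.
# azure_openai uses a family of AZURE_OPENAI_* variables, represented
# here by a single wildcard entry.
PROVIDER_ENV_VARS = {
    "basic": ["OPENAI_API_KEY", "OPENAI_MODEL"],
    "openai": ["OPENAI_API_KEY", "OPENAI_MODEL"],
    "claude": ["ANTHROPIC_API_KEY", "ANTHROPIC_MODEL"],
    "deepseek": ["DEEPSEEK_API_KEY", "DEEPSEEK_MODEL"],
    "azure_openai": ["AZURE_OPENAI_*"],
    "basic_claude4": ["CLAUDE_API_KEY"],        # model pinned: claude-sonnet-4-20250514
    "basic_deepseek_r1": ["DEEPSEEK_API_KEY"],  # model pinned: deepseek-r1-0528
}

def required_env_vars(api_type: str) -> list[str]:
    """Return the env vars a given --api-type reads."""
    return PROVIDER_ENV_VARS[api_type]
```

For example, `required_env_vars("claude")` tells you which variables to set in configs/.env before starting with -a claude.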
Model Temperature
--temperature FLOAT
-t FLOAT
Default : 0.1
Range : 0.0 - 2.0
Control the randomness of model outputs:
0.0 - 0.3: More deterministic and focused
0.4 - 0.7: Balanced creativity and consistency
0.8 - 1.0: More creative and varied
1.0+: Highly creative
Note : GPT-5 only supports temperature=1.0; RepoMaster adjusts this setting automatically when GPT-5 is in use.
Examples :
# Very deterministic for code generation
python launcher.py --temperature 0.1
# More creative for brainstorming
python launcher.py -t 0.8
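The automatic adjustment described above can be sketched as a small helper. This is an illustration of the rule, not RepoMaster's implementation; keying the check on a "gpt-5" model-name prefix is an assumption.

```python
def effective_temperature(model: str, requested: float) -> float:
    """Return the temperature actually sent to the API."""
    if model.lower().startswith("gpt-5"):
        return 1.0  # GPT-5 only accepts temperature=1.0
    # Clamp to the documented 0.0 - 2.0 range.
    return max(0.0, min(requested, 2.0))
```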
Request Timeout
--timeout SECONDS
Default : 120 (2 minutes)
Maximum time (in seconds) to wait for API responses.
Examples :
# Longer timeout for complex tasks
python launcher.py --timeout 300
# Shorter timeout for quick operations
python launcher.py --timeout 60
Max Tokens
--max-tokens INT
Default : 4000
Maximum number of tokens in model responses.
Different models have different context limits. Ensure your max-tokens value is within the model’s supported range.
Examples :
# Longer responses
python launcher.py --max-tokens 8000
# Concise responses
python launcher.py --max-tokens 2000
Max Conversation Turns
--max-turns INT
Default : 30
Maximum number of conversation turns in a single session.
Examples :
# Extended conversation
python launcher.py --max-turns 50
# Quick interactions
python launcher.py --max-turns 10
Working Directory
Custom Work Directory
--work-dir PATH
-w PATH
Default : coding (with random subdirectory)
Specify where RepoMaster stores working files and execution results.
Behavior (from mode_config.py:35):
Not specified or "coding" : Uses {pwd}/coding/{random_string()}
Relative path : Converted to {pwd}/{path}
Absolute path : Used as-is
Examples :
# Use custom directory
python launcher.py --work-dir /tmp/repomaster
# Use relative path (converted to absolute)
python launcher.py -w my_workspace
# Default with random subdirectory
python launcher.py # Uses: ./coding/abc123xyz/
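The resolution rule above can be sketched as follows. This mirrors the documented behavior of mode_config.py:35, but the helper names and the suffix length are illustrative, not the actual implementation.

```python
import os
import random
import string

def random_suffix(n: int = 9) -> str:
    """Illustrative random subdirectory name (e.g. 'abc123xyz')."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

def resolve_work_dir(work_dir: str, cwd: str) -> str:
    """Apply the three documented cases for --work-dir."""
    if not work_dir or work_dir == "coding":
        # Not specified or "coding": {pwd}/coding/{random_string()}
        return os.path.join(cwd, "coding", random_suffix())
    if not os.path.isabs(work_dir):
        # Relative path: converted to {pwd}/{path}
        return os.path.join(cwd, work_dir)
    # Absolute path: used as-is
    return work_dir
```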
Frontend-Specific Options
Streamlit Port
--streamlit-port PORT
-p PORT
Default : 8501
Port number for the Streamlit web interface.
Examples :
python launcher.py --mode frontend --streamlit-port 8080
python launcher.py -m frontend -p 9000
Streamlit Host
--streamlit-host HOST
Default : localhost
Host address for the Streamlit server.
Examples :
# Listen on all interfaces
python launcher.py --mode frontend --streamlit-host 0.0.0.0
# Specific IP
python launcher.py --mode frontend --streamlit-host 192.168.1.100
Max Upload Size
--max-upload-size MB
Default : 200 (MB)
Range : 1 - 2000 (MB)
Maximum file upload size for the web interface.
Examples :
# Allow larger files (500MB)
python launcher.py --mode frontend --max-upload-size 500
# Restrict uploads (50MB)
python launcher.py --mode frontend --max-upload-size 50
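A launcher might translate the three frontend flags above into a Streamlit invocation like the one sketched below. The --server.* options are standard Streamlit CLI flags; the app path (frontend/app.py) is a placeholder, not necessarily RepoMaster's entry point.

```python
def streamlit_command(port: int = 8501, host: str = "localhost",
                      max_upload_mb: int = 200) -> list[str]:
    """Build an illustrative `streamlit run` command from the frontend flags."""
    return [
        "streamlit", "run", "frontend/app.py",   # placeholder app path
        "--server.port", str(port),              # maps --streamlit-port
        "--server.address", host,                # maps --streamlit-host
        "--server.maxUploadSize", str(max_upload_mb),  # maps --max-upload-size
    ]
```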
Logging and Debugging
Log Level
--log-level {DEBUG,INFO,WARNING,ERROR}
-l {DEBUG,INFO,WARNING,ERROR}
Default : INFO
Control logging verbosity:
DEBUG: Detailed debugging information
INFO: General informational messages
WARNING: Warning messages only
ERROR: Error messages only
Examples :
# Debug mode for troubleshooting
python launcher.py --log-level DEBUG
# Quiet mode - errors only
python launcher.py -l ERROR
Logging Configuration (from launcher.py:38):
logging.basicConfig(
    level=getattr(logging, log_level.upper()),
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
When log level is not DEBUG, third-party library warnings are suppressed for cleaner output.
Code Execution
Docker Execution
--use-docker
Default : False
Execute code in Docker containers for isolation and security.
Requires Docker to be installed and running. See Docker documentation for setup.
Examples :
# Enable Docker execution
python launcher.py --use-docker
# Combined with other options
python launcher.py --backend-mode unified --use-docker
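One way the isolation described above could work is to run generated code inside a throwaway container with the work directory mounted. This is a sketch only: the image, mount point, and command shape are assumptions, not RepoMaster's actual Docker integration.

```python
def docker_exec_command(work_dir: str, script: str,
                        image: str = "python:3.11-slim") -> list[str]:
    """Build an illustrative `docker run` command for executing a script."""
    return [
        "docker", "run", "--rm",        # remove the container when done
        "-v", f"{work_dir}:/workspace",  # mount the work directory
        "-w", "/workspace",              # run from the mounted directory
        image, "python", script,
    ]
```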
Configuration Management
Skip Configuration Check
--skip-config-check
Default : False
Skip API configuration validation on startup.
Not recommended for production use. This bypasses critical validation that ensures at least one API provider is configured.
Use Cases :
Testing with mock API responses
Development environments without API access
Automated testing pipelines
Examples :
python launcher.py --skip-config-check
Complete Examples
Production Backend Setup
python launcher.py \
--mode backend \
--backend-mode unified \
--api-type openai \
--temperature 0.1 \
--timeout 300 \
--max-tokens 8000 \
--max-turns 50 \
--work-dir /opt/repomaster/workspace \
--log-level INFO
Development Frontend Setup
python launcher.py \
--mode frontend \
--streamlit-port 8080 \
--streamlit-host 0.0.0.0 \
--max-upload-size 500 \
--log-level DEBUG
Deep Search Agent (Direct Access)
python launcher.py \
--mode backend \
--backend-mode deepsearch \
--api-type claude \
--temperature 0.2 \
--timeout 180
Repository Agent with Docker
python launcher.py \
--mode backend \
--backend-mode repository_agent \
--api-type deepseek \
--use-docker \
--work-dir /tmp/repo-tasks \
--log-level INFO
Configuration Dataclasses
RepoMaster uses dataclasses for type-safe configuration (from mode_config.py):
RunConfig (Base)
from dataclasses import dataclass, field

@dataclass
class RunConfig:
    mode: str
    work_dir: str = ""
    log_level: str = "INFO"
    use_docker: bool = False
    timeout: int = 120
FrontendConfig
@dataclass
class FrontendConfig(RunConfig):
    mode: str = "frontend"
    streamlit_port: int = 8501
    streamlit_host: str = "localhost"
    file_watcher_type: str = "none"
    enable_auth: bool = True
    enable_file_browser: bool = True
    max_upload_size: int = 200  # MB
BackendConfig
@dataclass
class BackendConfig(RunConfig):
    mode: str = "backend"
    backend_mode: str = "deepsearch"
    api_type: str = "basic"
    temperature: float = 0.1
    max_tokens: int = 4000
    max_turns: int = 30
Mode-Specific Configs
Each backend mode has additional configuration options:
@dataclass
class DeepSearchConfig(BackendConfig):
    backend_mode: str = "deepsearch"
    enable_web_search: bool = True
    max_search_results: int = 10
    search_timeout: int = 30
    enable_code_tool: bool = True
    max_tool_messages: int = 2
@dataclass
class GeneralAssistantConfig(BackendConfig):
    backend_mode: str = "general_assistant"
    enable_venv: bool = True
    cleanup_venv: bool = False
    max_execution_time: int = 600
    supported_languages: list = field(default_factory=lambda: [
        'python', 'javascript', 'typescript',
        'java', 'cpp', 'go'
    ])
@dataclass
class RepositoryAgentConfig(BackendConfig):
    backend_mode: str = "repository_agent"
    enable_repository_search: bool = True
    max_repo_size_mb: int = 100
    clone_timeout: int = 300
    enable_parallel_execution: bool = True
    retry_times: int = 3
@dataclass
class UnifiedConfig(BackendConfig):
    backend_mode: str = "unified"
    enable_web_search: bool = True
    enable_repository_search: bool = True
    enable_venv: bool = True
    cleanup_venv: bool = False
    max_search_results: int = 10
    search_timeout: int = 30
    max_execution_time: int = 600
    max_repo_size_mb: int = 100
    clone_timeout: int = 300
    retry_times: int = 3
    supported_languages: list = field(default_factory=lambda: [
        'python', 'javascript', 'typescript',
        'java', 'cpp', 'go'
    ])
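Because these configs are plain dataclasses, variants can be derived without mutation and serialized for inspection. The self-contained demo below uses a simplified stand-in class (DemoBackendConfig) rather than RepoMaster's real one; note that list-typed fields require field(default_factory=...) in a dataclass, since a bare list default raises ValueError.

```python
from dataclasses import dataclass, field, replace, asdict

@dataclass
class DemoBackendConfig:
    mode: str = "backend"
    backend_mode: str = "unified"
    temperature: float = 0.1
    # A bare `list = [...]` default is rejected by @dataclass.
    supported_languages: list = field(default_factory=lambda: ["python", "go"])

base = DemoBackendConfig()
tweaked = replace(base, temperature=0.5)  # derive a variant; base is untouched
config_dict = asdict(base)                # e.g. for logging or persistence
```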
Environment vs CLI Priority
Priority Order
CLI Arguments (highest priority)
Environment Variables (.env file)
Default Values (lowest priority)
Example
# In .env file:
DEFAULT_API_PROVIDER=openai
OPENAI_MODEL=gpt-4o

# CLI overrides:
python launcher.py --api-type claude --temperature 0.5

# Result: Uses the Claude provider with temperature 0.5
# (CLI flags override the .env settings)
Programmatic Configuration
For Python API usage:
from configs.mode_config import ModeConfigManager, UnifiedConfig

# Create custom configuration
config = UnifiedConfig(
    api_type='openai',
    temperature=0.2,
    work_dir='/custom/path',
    max_tokens=8000,
    timeout=300
)

# Use configuration
manager = ModeConfigManager()
manager.config = config

# Get LLM config
llm_config = manager.get_llm_config(api_type='openai')
Troubleshooting
Invalid Arguments
Error : error: unrecognized arguments: --invalid-flag
Solution : Use --help to see valid arguments:
python launcher.py --help
Conflicting Options
Issue : Frontend-specific flags used with backend mode
Example :
# This combination doesn't make sense
python launcher.py --mode backend --streamlit-port 8080
Solution : Frontend options only apply to frontend mode:
python launcher.py --mode frontend --streamlit-port 8080
Configuration Override Issues
Issue : Changes not taking effect
Solutions :
# 1. Verify CLI arguments are correct
python launcher.py --log-level DEBUG # See what's loaded
# 2. Check environment file isn't overriding
cat configs/.env
# 3. Use --skip-config-check to bypass env validation
python launcher.py --skip-config-check
Next Steps
API Providers Configure LLM providers and models
Environment Setup Complete environment variable guide