All agent runs are launched via `agent.py`. The entry point is the `main()` coroutine driven by Python's `argparse`.
## Required arguments
The penetration testing goal. This is the top-level instruction given to the Planner. Be specific — vague goals produce vague plans.
## Task and logging arguments
A human-readable name for this task. Used as the first path segment of the log directory and for display in the Web UI.
Override the log directory path. If not provided, logs are written to a default directory whose path includes a `timestamp` formatted as `YYYYMMDD_HHMMSS`.

Controls console verbosity. Allowed values:

| Value | Description |
|---|---|
| `simple` | Minimal output — task graph and key findings only |
| `default` | Standard output including plan summaries and reflection results |
| `debug` | Full output including raw LLM prompts/responses and causal graph state |

Can also be set via the `OUTPUT_MODE` environment variable.

## LLM configuration arguments
These arguments override the corresponding `conf/config.py` settings and environment variables at runtime.
Override the LLM API base URL. Useful for switching to a different provider endpoint without modifying `.env`.

Override the LLM API key.
Model identifier for the Planner role. The Planner generates and updates the DAG task plan. Defaults to the `LLM_PLANNER_MODEL` env var or `gpt-4o`.

Model identifier for the Executor role. The Executor runs the ReAct tool-calling loop for each subtask. Defaults to the `LLM_EXECUTOR_MODEL` env var or `gpt-4o`.

Model identifier for the Reflector role. The Reflector audits subtask execution and updates the causal graph. Defaults to the `LLM_REFLECTOR_MODEL` env var or `gpt-4o`.

Default model for any role without a specific override. Defaults to the `LLM_DEFAULT_MODEL` env var or `gpt-4o`.

Model identifier for the Expert Analysis role (used by the `expert_analysis` tool). This role uses a higher temperature (0.7) for more creative analysis. Defaults to the `LLM_EXPERT_MODEL` env var or `gpt-4o`.

## Execution mode arguments
Selects the execution architecture. Allowed values: `default`, `linear`, `react`. See Execution Modes for a full comparison.

Flag (no value). When present, disables the Reflector's causal graph updates and the Planner's causal reasoning. Used for ablation studies to benchmark the value of dual-graph reasoning. Sets `conf.config.NO_CAUSAL_GRAPH = True` at runtime.

## Web UI arguments
Flag (no value). When present, prints the Web UI URL to the console on startup. The Web UI itself runs as a separate process (`python web/server.py`) and is not started by this flag.

Port number to include in the printed Web UI URL. This is display-only — it does not start or configure the Web service.

Specify an operation ID for this task. When provided by the Web UI, this links the agent run to an existing session record in the SQLite database. If omitted, a new ID is generated as `task_{timestamp}_{uuid_prefix}`.

## Example commands
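A hedged sketch of possible invocations. The flags are those documented on this page; the positional goal string and the targets are illustrative assumptions:

```shell
# Assumed: the goal is passed as a positional argument to agent.py.
python agent.py "Enumerate open services on 10.10.0.5 and identify the web stack" \
  --mode react --output-mode debug

# Ablation run with the causal graph disabled, pinning the Planner model:
python agent.py "Test the login form at http://target.local for SQL injection" \
  --no-causal-graph --llm-planner-model gpt-4o
```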
## Environment variables
All LLM and execution settings can be provided via environment variables in a `.env` file. CLI arguments take precedence over environment variables.
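This precedence rule can be implemented by seeding `argparse` defaults from `os.environ` — a minimal sketch under that assumption, not the actual `agent.py` code:

```python
import argparse
import os

# Hypothetical sketch of CLI-over-env precedence: the flag value wins when
# given; otherwise the environment variable is used; otherwise "gpt-4o".
parser = argparse.ArgumentParser()
parser.add_argument(
    "--llm-planner-model",
    default=os.environ.get("LLM_PLANNER_MODEL", "gpt-4o"),
)

# The explicit CLI flag overrides whatever the environment provides.
args = parser.parse_args(["--llm-planner-model", "my-custom-model"])
print(args.llm_planner_model)  # → my-custom-model
```

With no flag on the command line, `parse_args([])` falls back to the `LLM_PLANNER_MODEL` env var, then to `gpt-4o`.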
| Variable | CLI equivalent | Description |
|---|---|---|
| `LLM_API_BASE_URL` | `--llm-api-base-url` | LLM API endpoint |
| `LLM_API_KEY` | `--llm-api-key` | API key |
| `LLM_PLANNER_MODEL` | `--llm-planner-model` | Planner model |
| `LLM_EXECUTOR_MODEL` | `--llm-executor-model` | Executor model |
| `LLM_REFLECTOR_MODEL` | `--llm-reflector-model` | Reflector model |
| `LLM_DEFAULT_MODEL` | `--llm-default-model` | Default model |
| `LLM_EXPERT_MODEL` | `--llm-expert-model` | Expert analysis model |
| `EXECUTION_MODE` | `--mode` | Execution mode |
| `NO_CAUSAL_GRAPH` | `--no-causal-graph` | Disable causal graph |
| `OUTPUT_MODE` | `--output-mode` | Console verbosity |
| `WEB_PORT` | `--web-port` | Web UI port |
| `GLOBAL_MAX_CYCLES` | — | Max P-E-R loop cycles (default: 50) |
| `GLOBAL_MAX_TOKEN_USAGE` | — | Token circuit breaker (default: 5,000,000) |
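As an illustration, a possible `.env` using the variables above — every value here is a placeholder, not a recommended setting:

```shell
# Example .env — all values are placeholders.
LLM_API_BASE_URL=https://api.example.com/v1
LLM_API_KEY=sk-placeholder
LLM_PLANNER_MODEL=gpt-4o
LLM_EXECUTOR_MODEL=gpt-4o
LLM_DEFAULT_MODEL=gpt-4o
EXECUTION_MODE=default
OUTPUT_MODE=default
GLOBAL_MAX_TOKEN_USAGE=5000000
```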