

All agent runs are launched via agent.py. The entry point is the main() coroutine; command-line arguments are parsed with Python’s argparse.
python agent.py --goal "<task goal>" [options]
The python_exec and shell_exec tools allow the agent to execute arbitrary code and shell commands, and no sandbox isolation is provided. Always run the agent inside an isolated, controlled environment.
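A minimal sketch of how such an argparse-driven entry point is typically wired. The flag names and defaults are taken from this page; the internal structure of agent.py (and the body of main()) is an assumption for illustration:

```python
import argparse
import asyncio

def build_parser() -> argparse.ArgumentParser:
    # Flags mirror the CLI reference on this page; defaults as documented.
    parser = argparse.ArgumentParser(prog="agent.py")
    parser.add_argument("--goal", required=True,
                        help="Top-level penetration testing goal for the Planner")
    parser.add_argument("--task-name", default="default_task")
    parser.add_argument("--mode", default="default",
                        choices=["default", "linear", "react"])
    parser.add_argument("--no-causal-graph", action="store_true")
    return parser

async def main(args: argparse.Namespace) -> None:
    # Placeholder for the Planner/Executor/Reflector loop.
    print(f"goal={args.goal!r} mode={args.mode}")

if __name__ == "__main__":
    asyncio.run(main(build_parser().parse_args()))
```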

Required arguments

--goal
string
required
The penetration testing goal. This is the top-level instruction given to the Planner. Be specific — vague goals produce vague plans.
--goal "Find and exploit SQL injection vulnerabilities on http://target.com"

Task and logging arguments

--task-name
string
default:"default_task"
A human-readable name for this task. Used as the first path segment of the log directory and for display in the Web UI.
--log-dir
string
Override the log directory path. If not provided, logs are written to:
logs/{task-name}/{timestamp}/
where timestamp is formatted as YYYYMMDD_HHMMSS.
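The default log path can be reproduced in a few lines — a sketch of the documented layout, not the actual helper used in the codebase:

```python
from datetime import datetime
from pathlib import Path

def default_log_dir(task_name: str = "default_task") -> Path:
    # logs/{task-name}/{timestamp}/ with timestamp as YYYYMMDD_HHMMSS
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path("logs") / task_name / timestamp

print(default_log_dir("web_test"))  # e.g. logs/web_test/20240101_120000
```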
--output-mode
string
default:"default"
Controls console verbosity. Allowed values:
Value     Description
simple    Minimal output — task graph and key findings only
default   Standard output including plan summaries and reflection results
debug     Full output including raw LLM prompts/responses and causal graph state
Can also be set via the OUTPUT_MODE environment variable.

LLM configuration arguments

These arguments override the corresponding conf/config.py settings and environment variables at runtime.
--llm-api-base-url
string
Override the LLM API base URL. Useful for switching to a different provider endpoint without modifying .env.
--llm-api-base-url https://api.deepseek.com/v1
--llm-api-key
string
Override the LLM API key.
--llm-planner-model
string
Model identifier for the Planner role. The Planner generates and updates the DAG task plan. Defaults to LLM_PLANNER_MODEL env var or gpt-4o.
--llm-executor-model
string
Model identifier for the Executor role. The Executor runs the ReAct tool-calling loop for each subtask. Defaults to LLM_EXECUTOR_MODEL env var or gpt-4o.
--llm-reflector-model
string
Model identifier for the Reflector role. The Reflector audits subtask execution and updates the causal graph. Defaults to LLM_REFLECTOR_MODEL env var or gpt-4o.
--llm-default-model
string
Default model for any role without a specific override. Defaults to LLM_DEFAULT_MODEL env var or gpt-4o.
--llm-expert-model
string
Model identifier for the Expert Analysis role (used by the expert_analysis tool). This role uses a higher temperature (0.7) for more creative analysis. Defaults to LLM_EXPERT_MODEL env var or gpt-4o.

Execution mode arguments

--mode
string
default:"default"
Selects the execution architecture. Allowed values: default, linear, react. See Execution Modes for a full comparison.
--no-causal-graph
boolean
Flag (no value). When present, disables the Reflector’s causal graph updates and the Planner’s causal reasoning. Used for ablation studies to benchmark the value of dual-graph reasoning. Sets conf.config.NO_CAUSAL_GRAPH = True at runtime.

Web UI arguments

--web
boolean
Flag (no value). When present, prints the Web UI URL to the console on startup. The Web UI itself runs as a separate process (python web/server.py) and is not started by this flag.
--web-port
number
default:"8088"
Port number to include in the printed Web UI URL. This is display-only — it does not start or configure the Web service.
--op-id
string
Specify an operation ID for this task. When provided by the Web UI, this links the agent run to an existing session record in the SQLite database. If omitted, a new ID is generated as task_{timestamp}_{uuid_prefix}.
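When --op-id is omitted, an ID of the form task_{timestamp}_{uuid_prefix} is generated. One plausible implementation of that format — the timestamp layout and the 8-character prefix length are assumptions:

```python
import uuid
from datetime import datetime

def generate_op_id() -> str:
    # task_{timestamp}_{uuid_prefix}; the 8-char prefix is an assumption
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"task_{timestamp}_{uuid.uuid4().hex[:8]}"

print(generate_op_id())  # e.g. task_20240101_120000_1a2b3c4d
```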

Example commands

python agent.py \
  --goal "Perform comprehensive web security testing on http://testphp.vulnweb.com" \
  --task-name "web_test"

Environment variables

All LLM and execution settings can be provided via environment variables in a .env file. CLI arguments take precedence over environment variables.
Variable                 CLI equivalent         Description
LLM_API_BASE_URL         --llm-api-base-url     LLM API endpoint
LLM_API_KEY              --llm-api-key          API key
LLM_PLANNER_MODEL        --llm-planner-model    Planner model
LLM_EXECUTOR_MODEL       --llm-executor-model   Executor model
LLM_REFLECTOR_MODEL      --llm-reflector-model  Reflector model
LLM_DEFAULT_MODEL        --llm-default-model    Default model
LLM_EXPERT_MODEL         --llm-expert-model     Expert analysis model
EXECUTION_MODE           --mode                 Execution mode
NO_CAUSAL_GRAPH          --no-causal-graph      Disable causal graph
OUTPUT_MODE              --output-mode          Console verbosity
WEB_PORT                 --web-port             Web UI port
GLOBAL_MAX_CYCLES        (none)                 Max P-E-R loop cycles (default: 50)
GLOBAL_MAX_TOKEN_USAGE   (none)                 Token circuit breaker (default: 5,000,000)
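The documented precedence (CLI argument over environment variable over built-in default) can be sketched with a generic resolver. The function name and usage below are illustrative, not the project's actual config code:

```python
import os

def resolve_setting(cli_value, env_var, default):
    # Precedence documented above: CLI flag > environment variable > default.
    if cli_value is not None:
        return cli_value
    return os.environ.get(env_var, default)

# Hypothetical usage for two settings from the table:
os.environ["OUTPUT_MODE"] = "debug"
print(resolve_setting("simple", "OUTPUT_MODE", "default"))  # CLI wins: simple
max_cycles = int(resolve_setting(None, "GLOBAL_MAX_CYCLES", "50"))
print(max_cycles)  # falls back to the default of 50 if the env var is unset
```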
