

LuaN1aoAgent is configured entirely through environment variables, loaded from a .env file at startup via python-dotenv. Copy .env.example to .env and edit the values before running the agent.

Complete .env example

.env
# ── Required ────────────────────────────────────────────────────
LLM_API_KEY=your_api_key_here

# ── LLM provider & models ───────────────────────────────────────
LLM_PROVIDER=openai
LLM_API_BASE_URL=https://api.openai.com/v1
LLM_DEFAULT_MODEL=gpt-4o
LLM_PLANNER_MODEL=gpt-4o
LLM_EXECUTOR_MODEL=gpt-4o
LLM_REFLECTOR_MODEL=gpt-4o
LLM_EXPERT_MODEL=gpt-4o

# ── Anthropic (only when LLM_PROVIDER=anthropic) ────────────────
ANTHROPIC_API_KEY=your_anthropic_api_key_here
ANTHROPIC_API_BASE_URL=https://api.anthropic.com/v1/messages
ANTHROPIC_VERSION=2023-06-01
ANTHROPIC_DEFAULT_MODEL=claude-sonnet-4-5

# ── Scenario & output ───────────────────────────────────────────
SCENARIO_MODE=general
OUTPUT_MODE=default
PROMPT_LANGUAGE=en

# ── Executor behavior ───────────────────────────────────────────
EXECUTOR_MAX_STEPS=8
EXECUTOR_TOOL_TIMEOUT=120
EXECUTOR_MAX_OUTPUT_LENGTH=50000
GLOBAL_MAX_CYCLES=50
GLOBAL_MAX_TOKEN_USAGE=5000000

# ── Web UI ──────────────────────────────────────────────────────
WEB_HOST=127.0.0.1
WEB_PORT=8088

# ── Knowledge service ───────────────────────────────────────────
KNOWLEDGE_SERVICE_HOST=127.0.0.1
KNOWLEDGE_SERVICE_PORT=8081

# ── Human-in-the-loop ───────────────────────────────────────────
HUMAN_IN_THE_LOOP=false
Variables marked required have no default and will cause the agent to fail at startup if unset. All others are optional and fall back to the documented defaults.
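The required/optional split above can be sketched as follows; `require` and `optional` are hypothetical helper names, not the agent's actual API. In the real agent, python-dotenv's `load_dotenv()` would populate `os.environ` from `.env` first; here a value is set directly so the example is self-contained.

```python
import os

# Stand-in for a value that load_dotenv() would read from .env.
os.environ["LLM_API_KEY"] = "sk-demo"

def require(name: str) -> str:
    """Required variable: no default; fail at startup if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is required but not set")
    return value

def optional(name: str, default: str) -> str:
    """Optional variable: fall back to the documented default."""
    return os.environ.get(name, default)

api_key = require("LLM_API_KEY")
provider = optional("LLM_PROVIDER", "openai")
max_steps = int(optional("EXECUTOR_MAX_STEPS", "8"))
```

Failing fast on a missing required key surfaces misconfiguration immediately instead of partway through a run.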

Core scenario

SCENARIO_MODE
string
default:"general"
Sets the overall operating mode.
| Value | Description |
| --- | --- |
| general | Full-featured mode for real-world and internal-network pentests. All tools enabled. |
| ctf | Optimised for Capture-the-Flag competitions. Disables large-scale scanning tools and activates CTF-specific prompt tuning. |

Output

OUTPUT_MODE
string
default:"default"
Controls how much information is printed to the console during a run. See Output Modes for a full comparison.
| Value | Description |
| --- | --- |
| simple | Minimal output: core results only. |
| default | Standard output for normal use. |
| debug | Verbose output, equivalent to --verbose. |
PROMPT_LANGUAGE
string
default:"zh"
Language used for internal agent prompts.
| Value | Description |
| --- | --- |
| zh | Chinese (default) |
| en | English |

LLM API

LLM_API_KEY
string
required
API key for the primary LLM service. Required — the agent will not start without this value.
LLM_API_BASE_URL
string
default:"https://api.openai.com/v1"
Base URL for the OpenAI-compatible API endpoint. Override this to use a third-party provider such as DeepSeek or a local proxy.
LLM_FALLBACK_API_KEY
string
Secondary API key used automatically when the primary key encounters a 429 rate-limit error. If unset, the agent performs exponential back-off on the primary key instead.
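One way this fallback could work is sketched below; `request_with_fallback` and `call` are hypothetical names, where `call` stands in for the real HTTP request and returns a `(status, body)` pair:

```python
import time

# Hypothetical sketch of the documented 429 handling: retry with the
# fallback key when set, otherwise back off exponentially on the primary.
def request_with_fallback(call, primary_key, fallback_key=None, max_retries=3):
    delay = 1.0
    for _ in range(max_retries):
        status, body = call(primary_key)
        if status != 429:
            return body
        if fallback_key is not None:   # switch keys on rate limit
            status, body = call(fallback_key)
            if status != 429:
                return body
        time.sleep(delay)              # exponential back-off: 1s, 2s, 4s
        delay *= 2
    raise RuntimeError("rate-limited on every attempt")
```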

LLM models

Each agent role can be assigned an independent model. This lets you run an inexpensive, fast model for the executor while reserving your strongest model for the planner.
LLM_DEFAULT_MODEL
string
default:"gpt-4o"
Fallback model used by any role that does not have an explicit model configured.
LLM_PLANNER_MODEL
string
default:"gpt-4o"
Model for the Planner role, which builds the attack task graph. Assign your strongest model here.
LLM_EXECUTOR_MODEL
string
default:"gpt-4o"
Model for the Executor role, which runs tools step-by-step within a task.
LLM_REFLECTOR_MODEL
string
default:"gpt-4o"
Model for the Reflector role, which performs causal analysis and updates the knowledge graph.
LLM_EXPERT_MODEL
string
default:"gpt-4o"
Model for the Expert Analysis role, invoked by the expert_analysis MCP tool when the executor escalates a hard problem.
LLM_SUMMARIZER_MODEL
string
default:"LLM_DEFAULT_MODEL"
Model used to compress long conversation history. Falls back to LLM_DEFAULT_MODEL when not set.
LLM_REFLECTOR_VALIDATOR_MODEL
string
default:"LLM_REFLECTOR_MODEL"
Model for the binary yes/no reflector validation step. Defaults to LLM_REFLECTOR_MODEL.
LLM_PLANNER_CRISIS_EXPERT_MODEL
string
default:"LLM_PLANNER_MODEL"
Model used when the planner triggers a crisis re-planning event. Defaults to LLM_PLANNER_MODEL.
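The fallback chain above can be sketched as a small resolver; `resolve_model` and `FALLBACKS` are hypothetical names, and the defaults mirror the documented values:

```python
# Roles whose fallback is another role's variable rather than the default.
FALLBACKS = {
    "LLM_SUMMARIZER_MODEL": "LLM_DEFAULT_MODEL",
    "LLM_REFLECTOR_VALIDATOR_MODEL": "LLM_REFLECTOR_MODEL",
    "LLM_PLANNER_CRISIS_EXPERT_MODEL": "LLM_PLANNER_MODEL",
}

def resolve_model(var: str, env: dict) -> str:
    """Resolve a role's model: its own variable, then its fallback chain."""
    if env.get(var):
        return env[var]
    parent = FALLBACKS.get(var, "LLM_DEFAULT_MODEL")
    if parent == "LLM_DEFAULT_MODEL":
        return env.get("LLM_DEFAULT_MODEL", "gpt-4o")
    return resolve_model(parent, env)
```

For example, with only `LLM_REFLECTOR_MODEL` set, the reflector validator inherits that model rather than the global default.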

LLM advanced

LLM_EXTRA_BODY_ENABLED
boolean
default:"false"
When true, the agent injects an extra_body field into OpenAI-compatible API requests. This is required to enable thinking-mode features on providers that support it (e.g., extra_body: {thinking: "hidden"}). Has no effect when LLM_PROVIDER=anthropic.
LLM_DEFAULT_THINKING
string
default:"off"
Default thinking-mode setting applied to all roles unless a per-role override is present. Only takes effect when LLM_EXTRA_BODY_ENABLED=true.
| Value | Description |
| --- | --- |
| off | Thinking mode disabled (no extra_body injected). |
| hidden | Thinking enabled; chain-of-thought not returned in the response. |
| visible | Thinking enabled; chain-of-thought returned in reasoning_content or similar. |
Per-role thinking overrides follow the pattern LLM_<ROLE>_THINKING. All fall back to LLM_DEFAULT_THINKING if unset:
| Variable | Role |
| --- | --- |
| LLM_PLANNER_THINKING | Planner |
| LLM_EXECUTOR_THINKING | Executor |
| LLM_REFLECTOR_THINKING | Reflector |
| LLM_EXPERT_THINKING | Expert Analysis |
| LLM_SUMMARIZER_THINKING | Summarizer |
| LLM_REFLECTOR_VALIDATOR_THINKING | Reflector Validator |
| LLM_PLANNER_CRISIS_EXPERT_THINKING | Planner Crisis Expert |
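Assuming an OpenAI-compatible JSON request body, the injection might look like this sketch; `build_request` is a hypothetical helper, and the exact extra_body schema is provider-specific:

```python
# Sketch of the extra_body injection described above. Only injected when
# LLM_EXTRA_BODY_ENABLED is true and the resolved thinking mode is not "off".
def build_request(model, messages, extra_body_enabled, thinking="off"):
    body = {"model": model, "messages": messages}
    if extra_body_enabled and thinking != "off":
        body["extra_body"] = {"thinking": thinking}  # e.g. {"thinking": "hidden"}
    return body

req = build_request("gpt-4o", [{"role": "user", "content": "hi"}], True, "hidden")
```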

LLM providers

LLM_PROVIDER
string
default:"openai"
Selects the LLM backend.
| Value | Description |
| --- | --- |
| openai | OpenAI or any compatible API (DeepSeek, local proxies, etc.). |
| anthropic | Anthropic Claude native API. |
See LLM Providers for full configuration examples.
ANTHROPIC_API_KEY
string
default:"LLM_API_KEY"
API key for the Anthropic API. Defaults to the value of LLM_API_KEY when not explicitly set.
ANTHROPIC_API_BASE_URL
string
default:"https://api.anthropic.com/v1/messages"
Endpoint for the Anthropic Messages API.
ANTHROPIC_FALLBACK_API_KEY
string
default:"LLM_FALLBACK_API_KEY"
Fallback key for Anthropic rate-limit handling. Defaults to LLM_FALLBACK_API_KEY.
ANTHROPIC_VERSION
string
default:"2023-06-01"
Value sent in the anthropic-version request header.
Anthropic per-role model overrides follow the pattern ANTHROPIC_<ROLE>_MODEL:
| Variable | Default |
| --- | --- |
| ANTHROPIC_DEFAULT_MODEL | claude-3-5-sonnet-20240620 |
| ANTHROPIC_PLANNER_MODEL | claude-3-5-sonnet-20240620 |
| ANTHROPIC_EXECUTOR_MODEL | claude-3-5-sonnet-20240620 |
| ANTHROPIC_REFLECTOR_MODEL | claude-3-5-sonnet-20240620 |
| ANTHROPIC_EXPERT_MODEL | claude-3-5-sonnet-20240620 |
| ANTHROPIC_SUMMARIZER_MODEL | ANTHROPIC_DEFAULT_MODEL |
| ANTHROPIC_REFLECTOR_VALIDATOR_MODEL | ANTHROPIC_REFLECTOR_MODEL |
| ANTHROPIC_PLANNER_CRISIS_EXPERT_MODEL | ANTHROPIC_PLANNER_MODEL |
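The native Messages API authenticates with an x-api-key header and takes the version in anthropic-version; a hypothetical helper that also applies the documented LLM_API_KEY fallback:

```python
# Sketch of the request headers implied above. x-api-key and
# anthropic-version are Anthropic's documented headers; the helper name
# and the env-dict shape are assumptions.
def anthropic_headers(env: dict) -> dict:
    api_key = env.get("ANTHROPIC_API_KEY") or env.get("LLM_API_KEY", "")
    return {
        "x-api-key": api_key,
        "anthropic-version": env.get("ANTHROPIC_VERSION", "2023-06-01"),
        "content-type": "application/json",
    }
```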

Executor behavior

EXECUTOR_MAX_STEPS
integer
default:"8"
Maximum number of tool-call steps the executor may take within a single task cycle before forcing termination.
EXECUTOR_MESSAGE_COMPRESS_THRESHOLD
integer
default:"12"
Number of messages in the executor’s context window that triggers history compression.
EXECUTOR_TOKEN_COMPRESS_THRESHOLD
integer
default:"80000"
Token count in the executor context that triggers compression, regardless of message count.
EXECUTOR_NO_ARTIFACTS_PATIENCE
integer
default:"4"
If the executor completes this many consecutive steps without producing a new artifact (finding, credential, flag, etc.), the cycle is terminated. Must be less than EXECUTOR_MAX_STEPS.
EXECUTOR_FAILURE_THRESHOLD
integer
default:"3"
Number of consecutive tool failures that triggers a strategy switch.
EXECUTOR_RECENT_MESSAGES_KEEP
integer
default:"6"
Number of most-recent messages preserved verbatim when compressing history.
EXECUTOR_MIN_COMPRESS_MESSAGES
integer
default:"5"
Minimum message count required before compression is considered.
EXECUTOR_COMPRESS_INTERVAL
integer
default:"5"
How many execution rounds must pass between successive compressions.
EXECUTOR_COMPRESS_INTERVAL_MSG_THRESHOLD
integer
default:"8"
Message count threshold evaluated at each compression interval.
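One plausible combination of the compression knobs above is sketched below; this is a reading of the documented thresholds, not the agent's exact logic. Defaults mirror the documented values:

```python
def should_compress(n_messages: int, n_tokens: int, rounds_since_last: int) -> bool:
    """Hypothetical compression trigger combining the documented thresholds."""
    if n_messages < 5:          # EXECUTOR_MIN_COMPRESS_MESSAGES
        return False
    if n_tokens >= 80_000:      # EXECUTOR_TOKEN_COMPRESS_THRESHOLD
        return True
    if n_messages >= 12:        # EXECUTOR_MESSAGE_COMPRESS_THRESHOLD
        return True
    # Every EXECUTOR_COMPRESS_INTERVAL rounds, the lower
    # EXECUTOR_COMPRESS_INTERVAL_MSG_THRESHOLD applies instead.
    return rounds_since_last >= 5 and n_messages >= 8
```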
EXECUTOR_TOOL_TIMEOUT
integer
default:"120"
Default tool execution timeout in seconds. Used for any tool that does not have its own TOOL_TIMEOUT_* override.
EXECUTOR_MAX_OUTPUT_LENGTH
integer
default:"50000"
Maximum characters retained from a single tool output. Longer outputs are truncated before being fed back to the LLM.
GLOBAL_MAX_CYCLES
integer
default:"50"
Hard cap on the total number of Planner-Executor-Reflector (P-E-R) cycles. Acts as a safety circuit-breaker against infinite loops.
GLOBAL_MAX_TOKEN_USAGE
integer
default:"5000000"
Hard cap on cumulative token consumption across the entire run. The agent halts when this limit is reached.
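Together the two caps act as a run-level budget; a minimal sketch, with `RunBudget` as a hypothetical name and the documented defaults:

```python
class RunBudget:
    """Circuit-breaker over P-E-R cycles and cumulative token usage."""

    def __init__(self, max_cycles=50, max_tokens=5_000_000):
        self.max_cycles, self.max_tokens = max_cycles, max_tokens
        self.cycles, self.tokens = 0, 0

    def record_cycle(self, tokens_used: int) -> bool:
        """Record one cycle; return True while the run may continue."""
        self.cycles += 1
        self.tokens += tokens_used
        return self.cycles < self.max_cycles and self.tokens < self.max_tokens
```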

Per-tool timeout overrides

Each tool can have its timeout adjusted independently. Unrecognised tools fall back to EXECUTOR_TOOL_TIMEOUT.
| Variable | Tool | Default (seconds) |
| --- | --- | --- |
| TOOL_TIMEOUT_SQLMAP | sqlmap_tool | 600 |
| TOOL_TIMEOUT_NUCLEI | nuclei_scan | 300 |
| TOOL_TIMEOUT_DIRSEARCH | dirsearch_scan | 300 |
| TOOL_TIMEOUT_CONCURRENCY | concurrency_test | 180 |
| TOOL_TIMEOUT_HTTP | http_request | 60 |
| TOOL_TIMEOUT_SHELL | shell_exec | 120 |
| TOOL_TIMEOUT_PYTHON | python_exec | 300 |
| TOOL_TIMEOUT_WEB_SEARCH | web_search | 30 |
| TOOL_TIMEOUT_SEARCH_EXPLOIT | search_exploit | 30 |
| TOOL_TIMEOUT_THINK | think | 30 |
| TOOL_TIMEOUT_HYPOTHESES | formulate_hypotheses | 30 |
| TOOL_TIMEOUT_REFLECT | reflect_on_failure | 30 |
| TOOL_TIMEOUT_EXPERT | expert_analysis | 60 |
| TOOL_TIMEOUT_RETRIEVE | retrieve_knowledge | 15 |
| TOOL_TIMEOUT_DISTILL | distill_knowledge | 20 |
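The lookup rule can be sketched as follows; `timeout_for` and the mapping dict are hypothetical names, with the mapping abbreviated to a few tools:

```python
# Maps tool names to their override variables; abbreviated sketch, one
# entry would exist per tool in the table above.
TOOL_TIMEOUT_VARS = {
    "sqlmap_tool": "TOOL_TIMEOUT_SQLMAP",
    "nuclei_scan": "TOOL_TIMEOUT_NUCLEI",
    "http_request": "TOOL_TIMEOUT_HTTP",
}

def timeout_for(tool: str, env: dict) -> int:
    """Tool-specific override wins; otherwise EXECUTOR_TOOL_TIMEOUT applies."""
    var = TOOL_TIMEOUT_VARS.get(tool)
    if var and var in env:
        return int(env[var])
    return int(env.get("EXECUTOR_TOOL_TIMEOUT", "120"))
```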

Context management

PLANNER_HISTORY_WINDOW
integer
default:"15"
Number of past P-E-R cycle summaries the Planner can see when building a new plan.
REFLECTOR_HISTORY_WINDOW
integer
default:"15"
Number of recent reflection log entries available to the Reflector when updating the causal graph.

Ablation

These variables are primarily for research and ablation studies. Changing them disables architectural components of the agent.
EXECUTION_MODE
string
default:"default"
Controls which agent architecture is active.
| Value | Description |
| --- | --- |
| default | Full P-E-R (Planner-Executor-Reflector) mode with dynamic task graph. |
| linear | Linear mode: task graph disabled, no dynamic branching. |
| react | Pure ReAct mode: Executor only; Planner and Reflector disabled. |
NO_CAUSAL_GRAPH
boolean
default:"false"
When true, disables the Reflector’s causal graph updates and the Planner’s causal reasoning. The agent reverts to a simple memory model.

Web service

WEB_HOST
string
default:"127.0.0.1"
Host address the Web UI server binds to. Set to 0.0.0.0 to expose the UI on all network interfaces.
WEB_PORT
integer
default:"8088"
Port for the Web UI server.

Knowledge service

KNOWLEDGE_SERVICE_HOST
string
default:"127.0.0.1"
Host where the RAG knowledge service is running.
KNOWLEDGE_SERVICE_PORT
integer
default:"8081"
Port for the RAG knowledge service.
KNOWLEDGE_SERVICE_URL
string
Full URL for the knowledge service. Constructed automatically from host and port when not set explicitly. Override this when the service runs on a different machine.
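The derivation can be sketched as below; `knowledge_service_url` is a hypothetical helper, and the http scheme is an assumption not stated in the docs:

```python
def knowledge_service_url(env: dict) -> str:
    """Explicit KNOWLEDGE_SERVICE_URL wins; otherwise build from host/port."""
    if env.get("KNOWLEDGE_SERVICE_URL"):
        return env["KNOWLEDGE_SERVICE_URL"]
    host = env.get("KNOWLEDGE_SERVICE_HOST", "127.0.0.1")
    port = env.get("KNOWLEDGE_SERVICE_PORT", "8081")
    return f"http://{host}:{port}"  # scheme assumed
```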

Human-in-the-loop (HITL)

HUMAN_IN_THE_LOOP
boolean
default:"false"
When true, the agent pauses after generating each plan and waits for human approval via the Web UI or CLI before execution begins. Useful for supervised red-team engagements.

RAG knowledge service

RAG_SNIPPET_LEN
integer
default:"800"
Maximum character length for each knowledge snippet returned by the retrieval service.
RAG_TOP_K
integer
default:"5"
Default number of top results returned by the RAG service when top_k is not specified in the tool call.

LLM request behavior

LLM_TIMEOUT
integer
default:"60"
Timeout in seconds for individual LLM API requests.
LLM_MAX_RETRIES
integer
default:"3"
Maximum number of retry attempts for failed LLM API requests before the agent raises an error.

Built-in temperature defaults

Temperature values for each role are defined in conf/config.py in the LLM_TEMPERATURES dict. They are not currently overridable via environment variables — to change them, edit the dict directly.
| Role | Default temperature | Notes |
| --- | --- | --- |
| Planner | 0.5 | Higher value enables diverse strategy generation |
| Executor | 0.3 | Stable, reliable tool-calling behavior |
| Reflector | 0.2 | Precise analysis and judgment |
| Expert Analysis | 0.7 | More creative problem-solving |
| Summarizer | 0.2 | Stable, concise summarization |
| Reflector Validator | 0.1 | Binary yes/no judgment needs high determinism |
| Planner Crisis Expert | 0.4 | Balanced stability and exploration for crisis replanning |
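The dict in conf/config.py might look like the following sketch; the values come from the table above, but the exact key names are assumptions:

```python
# Hypothetical shape of the LLM_TEMPERATURES dict; edit values here to
# change a role's temperature (no env-var override exists).
LLM_TEMPERATURES = {
    "planner": 0.5,
    "executor": 0.3,
    "reflector": 0.2,
    "expert": 0.7,
    "summarizer": 0.2,
    "reflector_validator": 0.1,
    "planner_crisis_expert": 0.4,
}
```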

Logging

LOG_LEVEL
string
default:"INFO"
Python logging level for internal service logs (not the console output controlled by OUTPUT_MODE).
| Value | Description |
| --- | --- |
| DEBUG | Verbose internal logging |
| INFO | Standard operational messages |
| WARNING | Warnings only |
| ERROR | Errors only |
