LuaN1aoAgent provides three console output modes and a prompt language switch. These settings affect what you see in the terminal during a run; they do not change the agent's behaviour or the content of log files.
## Setting the output mode
You can select the mode in two ways:

- Environment variable, set persistently in your `.env` file
- CLI flag, passed for a single run (see the sketch below)
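A minimal sketch of both options, assuming the variable is named `OUTPUT_MODE` and the flag is spelled `--output-mode`; neither name, nor the entry-point command below, is confirmed by this page, so check `.env.example` and the CLI `--help` output for the exact spellings.

```bash .env
# Hypothetical key name; accepted values are simple, default, debug
OUTPUT_MODE=default
```

```bash
# Hypothetical flag and command names; overrides the .env value for one run
luan1ao-agent --output-mode debug
```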
## Mode comparison
| Output | simple | default | debug |
|---|---|---|---|
| Final result / flag | Yes | Yes | Yes |
| Task plan summary | No | Yes | Yes |
| Per-step tool calls | No | Yes | Yes |
| Tool outputs (truncated) | No | Yes | Yes |
| Reflector analysis | No | No | Yes |
| Causal graph updates | No | No | Yes |
| LLM request / response payloads | No | No | Yes |
| Token usage per call | No | No | Yes |
| Full tool stdout (untruncated) | No | No | Yes |
### simple
Minimal output — only the final result or captured flag is printed. Suitable for automated pipelines, CI environments, or batch runs where you only care about the outcome.
### default
Standard output for interactive use. Shows the current task plan, each executor step, tool names and arguments, truncated tool outputs, and a brief summary at the end of each P-E-R cycle. This is the recommended mode for most users.
### debug
Verbose output equivalent to --verbose. Prints everything default shows plus full LLM request/response payloads, token usage per API call, reflector causal graph diffs, and untruncated tool output. Use this when diagnosing unexpected agent behaviour.
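Since debug is equivalent to --verbose, a one-off diagnostic run can enable it from the command line without editing `.env` (the command name below is illustrative, not confirmed by this page):

```bash
# Hypothetical command name; --verbose gives debug-level console output for this run
luan1ao-agent --verbose
```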
## Prompt language
PROMPT_LANGUAGE controls the language used in all internal agent prompts — system messages, planning instructions, and reflector directives sent to the LLM.
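A minimal `.env` example using the documented variable and values:

```bash .env
# zh is the default; en switches all internal prompts to English
PROMPT_LANGUAGE=en
```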
| Value | Description |
|---|---|
| zh | Chinese prompts (default). Used in the original research and benchmarks. |
| en | English prompts. |
PROMPT_LANGUAGE affects the language of the prompts sent to the LLM, not the language of console output or log files. Tool output and LLM responses will still appear in whatever language the model produces.

## Log files
Console output mode does not affect file logging. Logs are always written at full verbosity to the logs/ directory:
| File | Contents |
|---|---|
logs/mcp_service.log | MCP server tool execution events |
logs/agent_*.log | Per-run agent trace (created at run start) |
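To watch a run's trace live, you can tail the per-run log; the glob below simply matches the naming pattern from the table above:

```bash
# Follow the agent trace(s) for the current run as they are written
tail -f logs/agent_*.log
```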
File logging is configured with the LOG_LEVEL environment variable:
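A minimal `.env` example; the variable name is documented here, but the accepted values are an assumption (standard logging level names), so check `.env.example` for the exact set:

```bash .env
# Hypothetical value; DEBUG, INFO, or WARNING are typical level names
LOG_LEVEL=INFO
```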