
LuaN1aoAgent is intended only for authorized security testing and educational purposes. You must have explicit written permission from the system owner before testing any target. Unauthorized access is illegal.
This guide walks you through cloning the repository, configuring your LLM provider, initializing the knowledge base, and running your first agent task against a target.
1. Clone and install

Clone the repository and install Python dependencies into a virtual environment.
git clone https://github.com/SanMuzZzZz/LuaN1aoAgent.git
cd LuaN1aoAgent
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Running inside Docker is strongly recommended because the agent uses shell_exec and python_exec — high-privilege tools that can affect your host system. See the installation guide for a container-based setup.
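A rough sketch of what that can look like (the image tag and container paths below are illustrative, not project-defined; follow the installation guide for the supported setup):
# Build an image from the repository root (assumes the Dockerfile
# from the installation guide)
docker build -t luan1ao-agent .

# Run the agent in the container: shell_exec/python_exec stay isolated
# from the host, credentials come from .env, logs persist to the host
docker run --rm -it \
    --env-file .env \
    -v "$(pwd)/logs:/app/logs" \
    luan1ao-agent \
    python agent.py --goal "..." --task-name "docker_demo"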
2. Configure your environment

Copy the example configuration file and fill in your LLM credentials.
cp .env.example .env
Open .env in your editor and set at minimum the following required values:
# Required: your LLM API key
LLM_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx

# Required: your LLM API base URL
LLM_API_BASE_URL=https://api.openai.com/v1

# LLM provider: "openai" or "anthropic"
LLM_PROVIDER=openai

# Model assignments per role (use powerful models for best results)
LLM_DEFAULT_MODEL=gpt-4o
LLM_PLANNER_MODEL=gpt-4o
LLM_EXECUTOR_MODEL=gpt-4o
LLM_REFLECTOR_MODEL=gpt-4o
LLM_EXPERT_MODEL=gpt-4o
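If you use Anthropic instead, the same block changes along these lines (the base URL and model name below are illustrative; check the environment variables reference for the exact values):
# Illustrative Anthropic setup -- verify values before use
LLM_PROVIDER=anthropic
LLM_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxxxxxx
LLM_API_BASE_URL=https://api.anthropic.com
LLM_DEFAULT_MODEL=claude-sonnet-4-20250514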
You can also control console verbosity with OUTPUT_MODE:
# "simple" | "default" | "debug"
OUTPUT_MODE=default
See Environment variables for all available settings.
3. Initialize the knowledge base

LuaN1ao uses a RAG (Retrieval-Augmented Generation) system backed by FAISS to retrieve relevant attack payloads and techniques during testing. You must build the vector index before running the agent for the first time.
# Clone PayloadsAllTheThings into the knowledge base directory
mkdir -p knowledge_base
git clone https://github.com/swisskyrepo/PayloadsAllTheThings \
    knowledge_base/PayloadsAllTheThings

# Build the FAISS vector index (takes a few minutes)
cd rag
python -m rag_kdprepare
rag_kdprepare downloads the embedding models and chunks every markdown file in knowledge_base/. It only needs to run once; rerun it whenever you add new knowledge documents. The RAG service itself starts automatically when you run the agent.
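For example, to add your own material and make it retrievable (the directory and file names below are illustrative):
# Drop extra markdown documents into the knowledge base
mkdir -p knowledge_base/custom_notes
cp ~/notes/sqli_cheatsheet.md knowledge_base/custom_notes/

# Rebuild the FAISS index so the new documents are embedded
cd rag
python -m rag_kdprepare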
4. Start the web server

The web dashboard is a standalone process that must be running before or alongside the agent. It persists all task data in luan1ao.db and streams live updates via SSE (Server-Sent Events).
python -m web.server
Open your browser and navigate to http://localhost:8088 (default port). You should see the LuaN1aoAgent dashboard.
Keep this terminal running. The web server is persistent — you can view historical tasks and monitor live runs from the same interface.
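If you would rather not dedicate a terminal to it, standard shell tooling (nothing project-specific) can background the server and confirm it is reachable:
# Start the dashboard in the background, capturing output to a file
nohup python -m web.server > web_server.log 2>&1 &

# Confirm it responds on the default port (expect 200)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088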
5. Run your first agent task

Open a new terminal window (keep the web server running in the first), activate your virtual environment, and launch the agent with a goal and task name.
python agent.py \
    --goal "Perform comprehensive web security testing on http://testphp.vulnweb.com" \
    --task-name "demo_test"
The --goal flag describes the penetration testing objective in natural language. The --task-name flag sets the identifier used for logging and the Web UI display.
To print the Web UI task URL after launch, add --web:
python agent.py \
    --goal "Perform comprehensive web security testing on http://testphp.vulnweb.com" \
    --task-name "demo_test" \
    --web
You can also override the LLM model configuration per-run without editing .env:
python agent.py \
    --goal "Scan localhost for open ports and identify running services" \
    --task-name "local_scan" \
    --llm-planner-model gpt-4o \
    --llm-executor-model gpt-4o-mini \
    --output-mode debug
Only run the agent against systems you own or have explicit written authorization to test. http://testphp.vulnweb.com is a deliberately vulnerable test site provided by Acunetix for this purpose.
6. View results

While the agent runs, the Web UI at http://localhost:8088 shows:
  • The live task graph evolving in real time
  • Node-by-node execution logs with state transitions
  • Confirmed vulnerabilities and key findings as they are discovered
When the task completes, results are also written to disk under logs/:
logs/demo_test/20250204_120000/
├── run_log.json          # Complete P-E-R execution log
├── metrics.json          # Performance metrics and token usage
└── console_output.log    # Formatted console output
All task history is persisted in luan1ao.db, so you can review past runs from the Web UI even after restarting the server.
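Because the artifacts under logs/ are plain JSON and text, you can inspect them straight from the shell. For example, to review the most recent demo_test run (your timestamped directory name will differ):
# Locate the newest run directory for the task
run_dir="$(ls -d logs/demo_test/*/ | sort | tail -n 1)"

# Pretty-print performance metrics and token usage
python -m json.tool "${run_dir}metrics.json"

# Review the formatted console output
less "${run_dir}console_output.log"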

CLI reference

The most commonly used agent.py arguments:
Flag                    Required  Description
--goal                  Yes       The penetration testing objective in natural language
--task-name             No        Task identifier for logging and the Web UI (default: default_task)
--output-mode           No        Console verbosity: simple, default, or debug
--web                   No        Print the Web UI task URL after launch
--web-port              No        Web service port for display purposes (default: 8088)
--llm-api-key           No        Override LLM_API_KEY from .env
--llm-api-base-url      No        Override LLM_API_BASE_URL from .env
--llm-planner-model     No        Override the model used by the Planner
--llm-executor-model    No        Override the model used by the Executor
--llm-reflector-model   No        Override the model used by the Reflector
--mode                  No        Execution mode: default (P-E-R), linear, or react
--log-dir               No        Custom log directory path
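As a usage illustration (task name and log directory are placeholders), several of these flags can be combined in one run:
python agent.py \
    --goal "Perform comprehensive web security testing on http://testphp.vulnweb.com" \
    --task-name "combined_demo" \
    --mode react \
    --output-mode simple \
    --log-dir ./custom_logs \
    --web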

Next steps

Installation

Virtual environments, Docker setup, and troubleshooting common issues.

Environment variables

Full reference for all .env configuration options.

Web UI

Learn how to use the dashboard for task monitoring and human-in-the-loop control.

P-E-R architecture

Understand how the three agents collaborate to reason about your target.
