## Documentation Index

Fetch the complete documentation index at: https://mintlify.com/SanMuzZzZz/LuaN1aoAgent/llms.txt
Use this file to discover all available pages before exploring further.
## Prerequisites
Before installing LuaN1aoAgent, ensure your system meets the following requirements:

| Requirement | Minimum | Notes |
|---|---|---|
| Python | 3.10+ | Required for asyncio features and type hints |
| Operating system | Linux, macOS, or Windows (WSL2) | Linux is recommended for production use |
| Memory | 4 GB RAM | 8 GB+ recommended for RAG + LLM inference |
| Disk space | ~2 GB | For dependencies, embedding models, and knowledge base |
| Network | Internet access | For LLM API calls and PayloadsAllTheThings clone |
| LLM API | OpenAI-compatible | GPT-4o, DeepSeek, Claude, or any compatible endpoint |
## Step 1: Clone the repository
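A sketch of the clone step. The repository URL is an assumption inferred from the `SanMuzZzZz/LuaN1aoAgent` path in the documentation host; substitute the actual URL if it differs:

```shell
# Clone the repository (URL assumed from the docs path; adjust if needed)
git clone https://github.com/SanMuzZzZz/LuaN1aoAgent.git
cd LuaN1aoAgent
```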
## Step 2: Set up a virtual environment
Using a virtual environment keeps LuaN1ao's dependencies isolated from your system Python.

- Linux / macOS
- Windows (WSL2)
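On Linux, macOS, and inside WSL2 the commands are the same. The `.venv` directory name is a common convention, not mandated by the project:

```shell
# Create and activate an isolated virtual environment
python3 -m venv .venv
source .venv/bin/activate

# The venv's interpreter should now be first on PATH
which python
```

Deactivate later with `deactivate` when you are done working with the agent.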
## Step 3: Install dependencies
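With the virtual environment active, install the Python dependencies. A `requirements.txt` at the repository root is the conventional assumption here:

```shell
# Install all Python dependencies into the active virtual environment
# (requirements.txt at the repo root is an assumption)
pip install -r requirements.txt
```

The table below lists the key packages this pulls in.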
| Package | Purpose |
|---|---|
| `openai`, `anthropic` | LLM API clients |
| `fastapi`, `uvicorn` | Web server and knowledge service |
| `faiss-cpu` | FAISS vector index for RAG |
| `sentence-transformers` | Embedding model for knowledge retrieval |
| `sqlalchemy`, `aiosqlite` | SQLite database layer |
| `networkx` | Task graph (DAG) management |
| `rich` | Terminal output formatting |
| `fastmcp`, `mcp` | Model Context Protocol tool integration |
| `tenacity` | Automatic retry logic for LLM requests |
| `httpx` | Async HTTP client |
## Step 4: Verify the installation
Confirm Python and key packages are available. If everything installed correctly, the agent's help output should list CLI options such as `--goal`, `--task-name`, `--output-mode`, and others.
## Step 5: Configure environment variables
Create your `.env` file from the example template, then open it and set your LLM provider credentials at minimum. To use Claude, set `LLM_PROVIDER=anthropic` and configure the `ANTHROPIC_*` variables instead.
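A sketch of the flow, assuming the template is named `.env.example`; the variable names shown are the ones referenced in the troubleshooting section of this page:

```shell
# Copy the example template (template filename is an assumption)
cp .env.example .env

# Minimum settings for an OpenAI-compatible provider:
#   LLM_PROVIDER=openai
#   LLM_API_KEY=sk-...
#   LLM_API_BASE_URL=https://api.openai.com/v1
#
# For Claude, use instead:
#   LLM_PROVIDER=anthropic
#   ANTHROPIC_API_KEY=sk-ant-...
```

Keep `.env` out of version control, since it holds live credentials.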
## Step 6: Initialize the knowledge base
The RAG system requires a vector index built from security knowledge documents before the agent can retrieve attack payloads and techniques. This is a one-time setup step. Run the `rag_kdprepare` script, which:

- Downloads the sentence-transformer embedding model (first run only)
- Chunks all Markdown documents in `knowledge_base/` into retrievable segments
- Builds a FAISS vector index and saves it to disk
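The invocation, as given in the troubleshooting section of this page, is run from the `rag/` directory:

```shell
# Build the FAISS index (run from the repository root)
cd rag && python -m rag_kdprepare
```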
You can add your own knowledge documents (Markdown files) to `knowledge_base/` and re-run `rag_kdprepare` to include them in retrieval. Custom payloads, internal runbooks, or target-specific notes all work.

## Optional: Docker setup
Running LuaN1aoAgent inside Docker is the recommended approach for production or repeated use. It isolates the agent's high-privilege tool execution from your host system. A minimal `Dockerfile` to get started:
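A minimal sketch; the `requirements.txt` location and the final command are assumptions, so replace the `CMD` with the agent's actual entry point:

```dockerfile
# Sketch only: file layout and entry point are assumptions, adjust as needed.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first to leverage Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Web UI (8088) and knowledge service (8081) ports from this guide
EXPOSE 8088 8081

# Placeholder: replace with the agent's actual entry point
CMD ["bash"]
```

Build and run with `docker build -t luan1ao .` followed by `docker run -it --env-file .env luan1ao`.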
## Troubleshooting
### Python version is below 3.10

LuaN1aoAgent requires Python 3.10 or higher due to its use of asyncio task groups and newer type hint syntax. Check your version, and if it is too old, install a newer interpreter: on Ubuntu/Debian via `apt`, or on macOS with Homebrew.
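A sketch of the check and the installs; the deadsnakes PPA (needed on older Ubuntu releases) and the Homebrew formula name are assumptions, not project-specified:

```shell
# Check the current version
python3 --version

# Ubuntu/Debian: install Python 3.10 (older releases may first need
# the deadsnakes PPA: sudo add-apt-repository ppa:deadsnakes/ppa)
sudo apt update && sudo apt install python3.10 python3.10-venv

# macOS with Homebrew
brew install python@3.10
```

Remember to recreate your virtual environment with the new interpreter afterwards.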
### pip install fails with a faiss-cpu error
faiss-cpu requires a compatible CPU (x86_64 or ARM64). On some Linux distributions you may need to install additional build tools:
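On Debian-based systems, for example (package names are typical for building faiss wheels, not project-specified):

```shell
# Toolchain and BLAS library commonly needed when a prebuilt wheel
# is unavailable and pip falls back to building faiss-cpu from source
sudo apt-get update
sudo apt-get install -y build-essential libopenblas-dev
```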
### rag_kdprepare fails or hangs
The embedding model download can time out on slow connections. Check:

- You have internet access from the machine or container.
- The `knowledge_base/PayloadsAllTheThings` directory is non-empty.
- You are running from within the `rag/` directory: `cd rag && python -m rag_kdprepare`.

If the model download is blocked or slow, set `HF_ENDPOINT` to a mirror:
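For example, using the community `hf-mirror.com` endpoint (a third-party mirror chosen for illustration; any compatible mirror works):

```shell
# Point Hugging Face downloads at a mirror, then rebuild the index
export HF_ENDPOINT=https://hf-mirror.com
cd rag && python -m rag_kdprepare
```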
### Web server fails to start on port 8088
If port 8088 is already in use, change it in `.env`, then start the server and access the UI at the new port, for example http://localhost:9000.
### Knowledge service port 8081 is already in use
The RAG knowledge service runs on port 8081 by default. If another process is using it, change the port in `.env`. The agent auto-starts the knowledge service and will use the configured port.
### LLM API returns 401 or 403 errors
Verify your API key and base URL in `.env`. Common causes:

- `LLM_API_KEY` is not set or contains whitespace
- `LLM_API_BASE_URL` does not match your provider's endpoint
- `LLM_PROVIDER` is set to `anthropic` but `ANTHROPIC_API_KEY` is missing
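One way to isolate the problem is to call the provider directly with the same credentials. This sketch assumes an OpenAI-compatible endpoint, where `/models` is the standard list route:

```shell
# Expect HTTP 200; a 401/403 here means the key or base URL is wrong,
# not the agent's configuration handling
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  "$LLM_API_BASE_URL/models"
```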
## Directory structure
After a successful installation and knowledge base setup, your directory contains the repository files plus your `.env` configuration and the generated FAISS index.

## Next steps
- **Quickstart**: Run your first penetration testing task end-to-end.
- **Environment variables**: Complete reference for all configuration options.
- **Docker deployment**: Full Docker Compose setup with pre-installed pentest tools.
- **Knowledge base setup**: Add custom knowledge documents and rebuild the RAG index.