Documentation Index

Fetch the complete documentation index at: https://mintlify.com/SanMuzZzZz/LuaN1aoAgent/llms.txt

Use this file to discover all available pages before exploring further.

Status overview

Completed

Human-in-the-Loop Mode is fully implemented and available in the current release.

Planned

Experience Self-Evolution, Tool Ecosystem Expansion, and Multimodal Capabilities are on the near-term roadmap.

Completed features

Human-in-the-Loop (HITL) mode

LuaN1aoAgent supports supervised operation, allowing security experts to review and intervene in the agent’s decision-making process in real time.
Before executing any operation flagged as high-risk, the agent pauses and waits for explicit human approval, preventing irreversible actions from being taken without oversight. Enable it via .env:
HUMAN_IN_THE_LOOP=true
Experts can inspect, modify, and inject new sub-tasks into the live task graph while the agent is running — without stopping the session. Modifications can be made through either the Web UI or the CLI.
  • Web UI: An approval modal appears automatically after plan generation. Use “Modify” to edit the plan JSON directly, or “Add Task” to inject new sub-tasks.
  • CLI: The agent pauses at HITL >. Type y to approve, n to reject, or m to open the system editor and modify the plan.
Security experts can inject domain knowledge, alternative hypotheses, or targeted instructions directly into the running plan. The Planner incorporates this guidance on its next planning cycle.
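The approval gate described above can be sketched in a few lines. This is an illustrative Python sketch only, not LuaN1aoAgent's actual implementation: the risk heuristic, the `hitl_gate` function name, and the keyword list are all assumptions.

```python
# Hypothetical human-in-the-loop approval gate (illustrative sketch;
# `is_high_risk` and `hitl_gate` are invented names, not the project's API).

HIGH_RISK_KEYWORDS = {"rm -rf", "DROP TABLE", "mkfs", "shutdown"}

def is_high_risk(command: str) -> bool:
    """Naive risk check: flag commands containing destructive keywords."""
    return any(kw in command for kw in HIGH_RISK_KEYWORDS)

def hitl_gate(command: str, ask=input) -> bool:
    """Pause before a risky command and wait for explicit approval.

    `ask` is injectable so the prompt can be scripted or tested.
    """
    if not is_high_risk(command):
        return True  # low-risk commands run without interruption
    answer = ask(f"HITL > approve '{command}'? [y/n] ").strip().lower()
    return answer == "y"

print(hitl_gate("ls -la"))                            # low risk → True
print(hitl_gate("rm -rf /tmp/x", ask=lambda _: "n"))  # rejected → False
```

In the real agent the risk decision comes from the Planner's own flagging, not a keyword list; the point here is only the control flow: flagged operations block until a human answers.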

Planned features

Experience self-evolution

Persistent cross-task learning so the agent improves from every engagement it runs.
1. Cross-task long-term memory

The agent will maintain a persistent memory store across separate tasks. Findings, failed approaches, and confirmed vulnerabilities from past engagements inform future ones.
2. Automatic extraction of successful attack patterns

When an exploit succeeds, the attack chain is automatically extracted, vectorized, and stored in the knowledge library. This creates a self-growing playbook.
3. Intelligent recommendations from historical experience

At the start of each new task, the RAG system retrieves relevant past patterns based on the target’s profile, guiding the Planner toward high-probability attack paths from the outset.
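The retrieval step above can be sketched with a toy similarity search. This is a stand-in only: bag-of-words cosine similarity replaces the real embedding model, and the playbook entries and function names are invented for illustration.

```python
# Illustrative sketch of retrieving past attack patterns by similarity.
# Bag-of-words cosine similarity stands in for a real vector store; the
# playbook contents and `recommend` API are assumptions, not the project's.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A self-growing "playbook": successful chains stored as text descriptions.
playbook = [
    "sql injection login bypass union select",
    "jwt none algorithm token forgery",
    "file upload php webshell bypass extension filter",
]

def recommend(target_profile: str, top_k: int = 1) -> list[str]:
    """Return the stored patterns most similar to the target's profile."""
    scored = sorted(
        playbook,
        key=lambda p: cosine(vectorize(p), vectorize(target_profile)),
        reverse=True,
    )
    return scored[:top_k]

print(recommend("login form with sql backend"))
# → ['sql injection login bypass union select']
```

A production version would use dense embeddings and a vector database, but the retrieval contract is the same: profile in, ranked past patterns out.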

Tool ecosystem expansion

Broader tool integration to cover more of the standard penetration testing toolkit.
Native integration with the Metasploit Framework via its RPC API. The Executor will be able to invoke Metasploit modules directly as part of the tool chain, enabling exploitation of the full Metasploit module library.
Integration with industry-standard scanning tools:
  • Nuclei — Template-based vulnerability scanning
  • Xray — Passive vulnerability scanner
  • AWVS — Web application vulnerability scanner
These can already be added manually via mcp.json. First-class support will include pre-built configurations and result parsing.
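As a rough illustration of the manual route, an mcp.json entry might look like the fragment below. The server name, command, and arguments are placeholders; the exact schema depends on the MCP server wrapper you use for the tool.

```json
{
  "mcpServers": {
    "nuclei": {
      "command": "nuclei-mcp",
      "args": ["--templates", "/opt/nuclei-templates"]
    }
  }
}
```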
A Docker-based sandbox environment for executing tools in isolation. This eliminates the host-system risk currently posed by shell_exec and python_exec, making the agent safe to run outside of a dedicated VM.
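One common way to build such a sandbox is to wrap each tool invocation in a throwaway container. The sketch below only constructs the `docker run` argument vector; the image name and resource limits are illustrative choices, not the project's actual sandbox design.

```python
# Sketch: wrap a tool command in a disposable Docker container so it cannot
# touch the host filesystem. Image and limits are illustrative assumptions.
import shlex

def sandboxed_command(tool_cmd: str, image: str = "kalilinux/kali-rolling") -> list[str]:
    """Build a `docker run` argv that isolates the tool from the host."""
    return [
        "docker", "run",
        "--rm",                 # discard the container afterwards
        "--network", "bridge",  # network access, but no host namespaces
        "--read-only",          # immutable root filesystem
        "--memory", "512m",     # cap resource usage
        image,
    ] + shlex.split(tool_cmd)

argv = sandboxed_command("nmap -sV 10.0.0.5")
print(argv[:3])  # → ['docker', 'run', '--rm']
```

Replacing direct `shell_exec`/`python_exec` calls with a wrapper like this is what makes the agent safe to run outside a dedicated VM.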

Multimodal capabilities

Extending the agent’s perceptual range beyond text-based HTTP traffic.

Image recognition

CAPTCHA solving and screenshot analysis. The agent will be able to interpret visual elements on web pages, enabling it to bypass common bot-detection mechanisms and analyze rendered page state.

Traffic analysis

PCAP file parsing. The agent will be able to ingest raw network captures, reconstruct protocol-level interactions, and identify anomalies that are invisible at the HTTP application layer.
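The first step of PCAP ingestion, reading the file's 24-byte global header, needs only the standard library. Real traffic analysis would use a library such as scapy; this minimal sketch simplifies field handling for illustration.

```python
# Minimal sketch of PCAP ingestion: parse the 24-byte global header with
# the stdlib. Real reconstruction of protocol interactions needs far more;
# this only shows the file-format entry point.
import struct

PCAP_MAGIC_LE = 0xA1B2C3D4  # microsecond-resolution pcap, little-endian

def parse_pcap_header(data: bytes) -> dict:
    magic, vmaj, vmin, _tz, _sigfigs, snaplen, linktype = struct.unpack(
        "<IHHiIII", data[:24]
    )
    if magic != PCAP_MAGIC_LE:
        raise ValueError("not a little-endian pcap file")
    return {"version": (vmaj, vmin), "snaplen": snaplen, "linktype": linktype}

# Fabricated header for demonstration: pcap v2.4, snaplen 65535, Ethernet (1).
sample = struct.pack("<IHHiIII", PCAP_MAGIC_LE, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(sample))
# → {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```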

Long-term vision

These items represent research-grade goals beyond the near-term roadmap.
Multi-agent distributed collaboration: a network of specialized agents operating in parallel — one scanning, one exploiting, one analyzing results — with shared state and coordinated task assignment. This would allow LuaN1aoAgent to scale horizontally across large, complex targets.
Autonomous optimization of attack strategies through environmental interaction. Rather than relying solely on LLM priors, agents would refine their decision policies through trial and feedback, achieving strategy convergence in complex scenarios over time.
Automatic generation of compliant penetration testing reports. After a task completes, the agent assembles a structured report from the causal graph — listing findings, evidence chains, severity assessments, and remediation guidance — in formats suitable for regulatory and client delivery.
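The assembly step can be sketched as a pure function from findings to a report document. The `findings` schema below is invented for illustration; the real causal-graph format is not documented here.

```python
# Sketch: render a structured report from findings extracted from a
# causal graph. The dict schema and `render_report` name are assumptions.
findings = [
    {"title": "SQL injection in /login", "severity": "high",
     "evidence": ["payload ' OR 1=1--", "DB error in response"],
     "remediation": "Use parameterized queries."},
    {"title": "Missing HSTS header", "severity": "low",
     "evidence": ["Strict-Transport-Security absent"],
     "remediation": "Enable HSTS on all HTTPS responses."},
]

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def render_report(findings: list[dict]) -> str:
    """Emit a markdown report, most severe findings first."""
    lines = ["# Penetration Test Report", ""]
    for f in sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]]):
        lines.append(f"## [{f['severity'].upper()}] {f['title']}")
        lines += [f"- Evidence: {e}" for e in f["evidence"]]
        lines.append(f"- Remediation: {f['remediation']}")
        lines.append("")
    return "\n".join(lines)

print(render_report(findings).splitlines()[2])
# → ## [HIGH] SQL injection in /login
```

Formats suitable for regulatory delivery would add templating and export (PDF, DOCX) on top of the same structured input.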
Roadmap items are subject to change. To suggest a feature or follow development, visit GitHub Issues or GitHub Discussions.