

NextAudit AI is composed of eight services organized into three functional layers. The FleetDM layer handles endpoint enrollment, inventory, and posture data. The AI layer provides LLM inference and AI agent flow execution backed by a vector-enabled PostgreSQL instance. The workflow layer uses n8n to orchestrate audit pipelines that connect fleet data to AI analysis and compliance outputs. All services are defined in Docker Compose files under src/ai-sentinel/ and communicate over Docker’s internal network.

Service overview

| Service | Image (dev) | Layer | Purpose |
| --- | --- | --- | --- |
| fleet | fleetdm/fleet | Fleet | FleetDM API, UI, and osquery management |
| fleet-init | alpine:latest | Fleet | One-time volume permission initialization |
| mysql | mysql:8 | Fleet | FleetDM operational database |
| redis | redis:6 | Fleet | FleetDM session cache and queue |
| ollama | Built from ./ollama | AI | Self-hosted LLM inference |
| postgres | Built from ./postgres | AI | AI data store with pgvector extension |
| flowise | flowiseai/flowise:latest | AI | AI agent flow builder, backed by postgres |
| n8n | docker.n8n.io/n8nio/n8n | Workflow | Audit workflow automation |
In test and production environments, ollama and postgres use versioned images from jjsotom2k4/ollama-ai:${VERSION} and jjsotom2k4/postgres-ai:${VERSION} instead of local builds. This ensures reproducible deployments from tagged releases.
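In compose terms, switching from local builds to versioned images can be expressed as an override file. This is a sketch only; the actual file names and layout under src/ai-sentinel/ may differ.

```yaml
# Hypothetical test/prod override (e.g. compose.prod.yml, name assumed).
# VERSION is supplied by the environment or a .env file at deploy time.
services:
  ollama:
    image: jjsotom2k4/ollama-ai:${VERSION}     # replaces `build: ./ollama`
  postgres:
    image: jjsotom2k4/postgres-ai:${VERSION}   # replaces `build: ./postgres`
```

Applied with `docker compose -f compose.yml -f compose.prod.yml up`, the override replaces only the image fields, so every other setting stays identical across environments.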

Layer 1: Fleet management

The FleetDM layer provides real-time endpoint visibility through osquery. Four services make up this layer: the fleet server, the one-shot fleet-init helper, and the mysql and redis backing stores.

fleet

The core FleetDM process. On startup it runs fleet prepare db to migrate the MySQL schema before serving the API and UI. It depends on mysql and redis being healthy, and on fleet-init completing successfully.

Dependencies: mysql (healthy), redis (healthy), fleet-init (completed)
Port: ${FLEET_SERVER_PORT}:${FLEET_SERVER_PORT}
Volumes:

| Mount | Purpose |
| --- | --- |
| data:/fleet | Fleet application data |
| logs:/logs | osquery status and result logs |
| vulndb:${FLEET_VULNERABILITIES_DATABASES_PATH} | Vulnerability database |
| ./certs/fleet.crt:/fleet/fleet.crt:ro | TLS certificate (read-only) |
| ./certs/fleet.key:/fleet/fleet.key:ro | TLS private key (read-only) |
TLS is controlled by FLEET_SERVER_TLS. Set it to false for local development. For production, generate a certificate and set FLEET_SERVER_CERT and FLEET_SERVER_KEY accordingly.
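The TLS settings above can be sketched as compose environment entries. The variable names and certificate paths come from this document; the exact surrounding fields in the real compose file may differ.

```yaml
# Sketch of the fleet service's TLS-related configuration (assumed layout).
services:
  fleet:
    environment:
      FLEET_SERVER_TLS: "true"             # set to "false" for local development
      FLEET_SERVER_CERT: /fleet/fleet.crt  # mounted read-only from ./certs/fleet.crt
      FLEET_SERVER_KEY: /fleet/fleet.key   # mounted read-only from ./certs/fleet.key
```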

fleet-init

A one-shot alpine container that runs chown -R 100:101 on the logs, data, and vulndb volumes before the fleet service starts. It exits on completion and does not restart.

Volumes: logs:/logs, data:/data, vulndb:/vulndb
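A one-shot init container like this is typically defined as follows. The exact command string is an assumption; the UID:GID (100:101) and volume mounts are from this document.

```yaml
# Sketch of the fleet-init one-shot container (command form assumed).
services:
  fleet-init:
    image: alpine:latest
    command: sh -c "chown -R 100:101 /logs /data /vulndb"  # FleetDM runtime user/group
    restart: "no"   # one-shot: exits after fixing permissions, never restarts
    volumes:
      - logs:/logs
      - data:/data
      - vulndb:/vulndb
```

Because fleet declares a completed-successfully dependency on this container, Compose guarantees the permission fix lands before the fleet process first touches the volumes.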

mysql

MySQL 8 provides FleetDM’s relational storage. It includes a health check using mysqladmin ping that fleet waits on before starting.

Port: 3306:3306
Volume: mysql:/var/lib/mysql
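A mysqladmin-based health check might be declared like this. The interval and retry counts here are illustrative assumptions; only the ping command, port, and volume come from this document.

```yaml
# Sketch of the mysql service with its health check (timings assumed).
services:
  mysql:
    image: mysql:8
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s   # assumed
      retries: 10     # assumed
    ports:
      - "3306:3306"
    volumes:
      - mysql:/var/lib/mysql
```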

redis

Redis 6 with append-only persistence (--appendonly yes). Used by FleetDM for session management and internal queuing.

Port: 6379:6379
Volume: redis:/data
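The append-only flag is passed as the container command, which in compose form looks roughly like this:

```yaml
# Sketch of the redis service; the --appendonly flag enables AOF persistence
# so queued work survives a container restart.
services:
  redis:
    image: redis:6
    command: ["redis-server", "--appendonly", "yes"]
    ports:
      - "6379:6379"
    volumes:
      - redis:/data   # AOF files are written under /data
```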

Layer 2: AI stack

The AI layer consists of three services: a local LLM runtime, a vector-enabled database, and an AI agent flow builder.

ollama

Runs LLM inference locally. The OLLAMA_MODELS environment variable specifies which models to pull at startup. Flowise and n8n workflows send inference requests to Ollama over the internal Docker network.

Port: ${OLLAMA_PORT}:11434
Volume: ollama_data:/root/.ollama
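Putting the pieces above together, the ollama service definition might look like the sketch below. Passing OLLAMA_MODELS through from the host environment is an assumption about how the custom image consumes it.

```yaml
# Sketch of the ollama service (dev variant; test/prod swaps in
# jjsotom2k4/ollama-ai:${VERSION} instead of the local build).
services:
  ollama:
    build: ./ollama
    environment:
      OLLAMA_MODELS: ${OLLAMA_MODELS}   # models the custom image pulls at startup
    ports:
      - "${OLLAMA_PORT}:11434"          # 11434 is Ollama's default API port
    volumes:
      - ollama_data:/root/.ollama       # downloaded model weights
```

Other containers reach it at http://ollama:11434 over the internal network, so OLLAMA_PORT only matters for access from the host.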

postgres

A custom PostgreSQL build that includes the pgvector extension; the embedding dimension is configured via the EMBEDDING_SIZE environment variable. It serves as the backing database for Flowise and stores AI embeddings.

Port: ${POSTGRES_PORT}:5432
Volume: postgres_data:/var/lib/postgresql/data
Health check: pg_isready -U $POSTGRES_USER -d $POSTGRES_DB (5s interval, 10 retries)
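The health check described above can be sketched as follows. Note the doubled dollar signs: Compose interpolates single `$` itself, so `$$` is needed for variables that should be expanded inside the container at check time.

```yaml
# Sketch of the postgres service (dev variant; test/prod uses
# jjsotom2k4/postgres-ai:${VERSION}).
services:
  postgres:
    build: ./postgres
    environment:
      EMBEDDING_SIZE: ${EMBEDDING_SIZE}   # pgvector embedding dimension (assumed passthrough)
    healthcheck:
      # $$ escapes Compose interpolation so the shell in the container expands these.
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 5s
      retries: 10
    ports:
      - "${POSTGRES_PORT}:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
```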

flowise

Flowise connects to Postgres to persist AI agent flows, credentials, and chat history. It starts only after the postgres health check passes.

Dependency: postgres (healthy)
Port: ${FLOWISE_PORT}:${FLOWISE_PORT}
Volume: flowise_data:/root/.flowise
Database connection: DATABASE_TYPE=postgres with DATABASE_HOST=postgres (resolved over Docker’s internal network).

The Postgres schema used by Flowise is configured via the DATABASE_SCHEMA environment variable. Keep this schema separate from any application schemas to avoid conflicts.
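The database wiring above might look like this in compose form. DATABASE_TYPE, DATABASE_HOST, and DATABASE_SCHEMA are named in this document; any other connection variables (user, password, port) are assumed here for completeness.

```yaml
# Sketch of the flowise service; additional DATABASE_* variables assumed.
services:
  flowise:
    image: flowiseai/flowise:latest
    depends_on:
      postgres:
        condition: service_healthy   # wait for pg_isready to pass
    environment:
      DATABASE_TYPE: postgres
      DATABASE_HOST: postgres        # service name, resolved on the internal network
      DATABASE_SCHEMA: ${DATABASE_SCHEMA}  # keep separate from application schemas
    ports:
      - "${FLOWISE_PORT}:${FLOWISE_PORT}"
    volumes:
      - flowise_data:/root/.flowise
```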

Layer 3: Workflow automation

n8n

n8n is the audit orchestration engine. It connects to FleetDM’s API, Flowise, Ollama, and external systems to build automated audit pipelines. Workflows are persisted in the n8n_data volume.

Port: ${N8N_PORT}:5678
Volume: n8n_data:/home/node/.n8n
Key settings:

| Variable | Value |
| --- | --- |
| GENERIC_TIMEZONE / TZ | ${N8N_TIMEZONE} |
| N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS | true |
| N8N_RUNNERS_ENABLED | true |
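The settings in the table translate directly into compose environment entries:

```yaml
# Sketch of the n8n service built from the settings listed above.
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      GENERIC_TIMEZONE: ${N8N_TIMEZONE}
      TZ: ${N8N_TIMEZONE}
      N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: "true"  # tighten config file perms
      N8N_RUNNERS_ENABLED: "true"                    # task runners for executions
    ports:
      - "${N8N_PORT}:5678"            # 5678 is n8n's default port
    volumes:
      - n8n_data:/home/node/.n8n      # workflow definitions and credentials
```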

Service dependency graph

Two startup orderings matter: fleet starts only after mysql and redis report healthy and fleet-init has completed successfully, and flowise starts only after postgres reports healthy.
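The dependency graph can be expressed with Compose depends_on conditions. This sketch collects the dependencies stated in the per-service sections above:

```yaml
# Startup ordering only; all other service fields omitted for brevity.
services:
  fleet:
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_healthy
      fleet-init:
        condition: service_completed_successfully
  flowise:
    depends_on:
      postgres:
        condition: service_healthy
```

service_healthy gates on the dependency's health check, while service_completed_successfully waits for a one-shot container to exit with status 0.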

Named volumes

All persistent data is stored in Docker named volumes. The following volumes are defined across all compose files:
| Volume | Service | Data stored |
| --- | --- | --- |
| ollama_data | ollama | Downloaded LLM model weights |
| postgres_data | postgres | AI database (Flowise flows, embeddings) |
| flowise_data | flowise | Flowise configuration and secrets |
| n8n_data | n8n | Workflow definitions and credentials |
| mysql | mysql | FleetDM relational data |
| redis | redis | Append-only Redis persistence |
| data | fleet, fleet-init | Fleet application data |
| logs | fleet, fleet-init | osquery log files |
| vulndb | fleet, fleet-init | Vulnerability database files |
Deleting named volumes removes all persisted data, including enrolled fleet hosts, AI flows, and n8n workflows. Always back up volumes before running docker compose down -v.
