
This guide walks you through cloning the repository, configuring your environment variables, and launching the complete NextAudit AI stack. By the end you will have all services running locally and the FleetDM, Flowise, and n8n dashboards accessible in your browser.
You need Docker and Docker Compose installed before proceeding. The stack includes eight containers, so ensure your machine has at least 8 GB of available RAM.
1. Clone the repository

Clone the NextAudit AI repository and navigate into the project root.
git clone https://github.com/Kevin2523/nextAuditAi.git
cd nextAuditAi
2. Configure environment variables

Copy the environment variable template for your target environment. Three templates are provided — one for each deployment tier. These files live in src/ai-sentinel/ alongside the compose files.
cp src/ai-sentinel/dev.env.example src/ai-sentinel/.env
The .env.example template files are distributed on the develop branch. Switch to develop to access them, or create src/ai-sentinel/.env manually using the Environment variables reference as a guide.
Open the copied .env file and fill in the required values. Key variables to set include:
| Variable | Description |
| --- | --- |
| POSTGRES_USER / POSTGRES_PASSWORD / POSTGRES_DB | PostgreSQL credentials for the AI layer |
| MYSQL_USER / MYSQL_PASSWORD / MYSQL_DATABASE | MySQL credentials for FleetDM |
| FLEET_SERVER_PRIVATE_KEY | Generate with openssl rand -base64 32 |
| OLLAMA_MODELS | Comma-separated list of models to pull on startup |
| FLOWISE_PORT / N8N_PORT / OLLAMA_PORT | Host ports for each service dashboard |
Run openssl rand -base64 32 to generate a secure value for FLEET_SERVER_PRIVATE_KEY. Never reuse this value across environments.
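Putting the variables together, a starter env file can be sketched as below. The variable names come from the table above; every value (usernames, passwords, model names, ports) is an illustrative placeholder, not a project default, so adjust each one before use. The sketch writes to .env.sketch so it never clobbers an existing src/ai-sentinel/.env.

```shell
# Sketch a starter env file. All values below are illustrative placeholders.
cat > .env.sketch <<EOF
POSTGRES_USER=nextaudit
POSTGRES_PASSWORD=change-me
POSTGRES_DB=nextaudit
MYSQL_USER=fleet
MYSQL_PASSWORD=change-me
MYSQL_DATABASE=fleet
# Generated fresh here; never reuse this value across environments.
FLEET_SERVER_PRIVATE_KEY=$(openssl rand -base64 32)
# Models Ollama pulls on startup (comma-separated; example names only).
OLLAMA_MODELS=llama3,nomic-embed-text
# Host ports for each dashboard (common upstream defaults; adjust as needed).
FLEET_SERVER_PORT=8080
FLOWISE_PORT=3000
N8N_PORT=5678
OLLAMA_PORT=11434
EOF
```

Review the result, then move it into place, e.g. with mv .env.sketch src/ai-sentinel/.env.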
3. Launch the development stack

Start all services in detached mode using the development compose file.
docker compose -f src/ai-sentinel/docker-compose.dev.yml up -d
For other environments, substitute the matching compose file, where the placeholder <env> stands for the target tier's compose suffix:
docker compose -f src/ai-sentinel/docker-compose.<env>.yml up -d
In development, the ollama and postgres services are built from local Dockerfiles under src/ai-sentinel/ollama/ and src/ai-sentinel/postgres/. In test and production, versioned images from jjsotom2k4/ollama-ai and jjsotom2k4/postgres-ai are used instead.
4. Verify all services are running

Check that all eight containers started successfully.
docker compose -f src/ai-sentinel/docker-compose.dev.yml ps
You should see the following services with a running or healthy status:
| Service | Role |
| --- | --- |
| fleet | FleetDM API and UI |
| fleet-init | One-time volume permission setup (exits after completion) |
| mysql | FleetDM database |
| redis | FleetDM session cache |
| ollama | Local LLM inference |
| postgres | AI layer database with pgvector |
| flowise | AI agent flow builder |
| n8n | Workflow automation engine |
The fleet service waits for mysql, redis, and fleet-init to be healthy before starting. If fleet is slow to come up, this is expected behavior — the health checks run at 10-second intervals with up to 12 retries.
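The ps check above can be scripted. The helper below is a minimal sketch: it reads the table printed by docker compose ... ps on stdin and exits non-zero if any service other than the one-shot fleet-init job lacks a running, healthy, or Up status. The status wording and name prefixes vary across Compose versions, so treat the patterns as assumptions.

```shell
# Exit non-zero if any long-running service is not up.
# fleet-init is a one-time job that exits after completion, so it is skipped.
check_services() {
  awk 'NR > 1 && $1 !~ /fleet-init/ && $0 !~ /running|healthy|Up/ {
         bad = 1; print "not ready: " $1
       }
       END { exit bad }'
}

# Usage:
# docker compose -f src/ai-sentinel/docker-compose.dev.yml ps | check_services
```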
5. Access the dashboards

Once all services are healthy, open the following dashboards in your browser. The exact ports depend on the values you set in your .env file.
| Service | Port variable | URL pattern |
| --- | --- | --- |
| FleetDM | FLEET_SERVER_PORT | http://localhost:$FLEET_SERVER_PORT |
| Flowise | FLOWISE_PORT | http://localhost:$FLOWISE_PORT |
| n8n | N8N_PORT | http://localhost:$N8N_PORT |
| Ollama API | OLLAMA_PORT | http://localhost:$OLLAMA_PORT |
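As a convenience, the URL patterns above can be expanded from your env file. The helper below is a sketch that scans an env file for the four port variables and prints the matching localhost URL for each; it assumes plain KEY=value lines with no quoting.

```shell
# Print a dashboard URL for each port variable found in the given env file.
print_dashboards() {
  env_file="$1"
  while IFS='=' read -r key val; do
    case "$key" in
      FLEET_SERVER_PORT|FLOWISE_PORT|N8N_PORT|OLLAMA_PORT)
        echo "$key -> http://localhost:$val" ;;
    esac
  done < "$env_file"
}

# Usage:
# print_dashboards src/ai-sentinel/.env
```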
Log in to FleetDM to start enrolling endpoints, then open n8n to configure your first audit workflow and Flowise to build AI agent flows against your fleet data.

Next steps

Architecture

Understand the service dependencies and data flows in the stack.

Environment variables

Full reference for all configuration variables across environments.

Fleet management

Enroll endpoints and configure osquery policies in FleetDM.

AI analysis

Build AI agent flows in Flowise to analyze your fleet data.
