Flowise is the visual AI composition layer in NextAudit AI. It lets you build LLM-powered chains and autonomous agents through a drag-and-drop interface, connecting to the self-hosted Ollama inference service and storing vector embeddings in the pgvector-enabled PostgreSQL instance. This keeps all AI computation and data within the stack, with no dependency on external AI providers.

## Documentation Index
Fetch the complete documentation index at: https://mintlify.com/Kevin2523/nextAuditAi/llms.txt
Use this file to discover all available pages before exploring further.
## Service configuration
Flowise runs from the `flowiseai/flowise:latest` image. Both the host and container ports are set to `FLOWISE_PORT`, so the container listens on the same port number it exposes.

The service waits for the `postgres` service to pass its health check before starting, ensuring the database schema is available when Flowise initializes.
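A minimal sketch of how this could look in `docker-compose.yml` (the healthcheck dependency and the container data path shown here are assumptions based on the description above):

```yaml
services:
  flowise:
    image: flowiseai/flowise:latest
    ports:
      - "${FLOWISE_PORT}:${FLOWISE_PORT}"   # host and container use the same port
    depends_on:
      postgres:
        condition: service_healthy          # wait for the database health check
    volumes:
      - flowise_data:/root/.flowise         # container path is an assumption
```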
## Environment variables
| Variable | Value | Description |
|---|---|---|
| PORT | ${FLOWISE_PORT} | Port Flowise listens on inside the container |
| DATABASE_TYPE | postgres | Selects the PostgreSQL driver |
| DATABASE_HOST | postgres | Docker service name of the PostgreSQL container |
| DATABASE_PORT | 5432 | Standard PostgreSQL port |
| DATABASE_NAME | ${POSTGRES_DB} | Database name created during PostgreSQL initialization |
| DATABASE_SCHEMA | ${DATABASE_SCHEMA} | PostgreSQL schema Flowise uses for its tables |
| DATABASE_USER | ${POSTGRES_USER} | Database user with read/write access |
| DATABASE_PASSWORD | ${POSTGRES_PASSWORD} | Credential for the database user |
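In compose form, these variables could be wired up as follows (a sketch; the `${…}` values are expected to come from the stack's `.env` file):

```yaml
    environment:
      PORT: ${FLOWISE_PORT}
      DATABASE_TYPE: postgres
      DATABASE_HOST: postgres        # Docker service name, resolved by internal DNS
      DATABASE_PORT: "5432"
      DATABASE_NAME: ${POSTGRES_DB}
      DATABASE_SCHEMA: ${DATABASE_SCHEMA}
      DATABASE_USER: ${POSTGRES_USER}
      DATABASE_PASSWORD: ${POSTGRES_PASSWORD}
```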
## Volume

The flowise_data volume stores uploaded files, flow definitions saved locally, and any configuration that Flowise writes to disk. The database-backed metadata (flows, credentials, chat history) lives in PostgreSQL.
## pgvector integration

The PostgreSQL instance in this stack is built with the pgvector extension enabled. Flowise uses this to store and query vector embeddings generated during document ingestion and retrieval-augmented generation (RAG) flows. The EMBEDDING_SIZE environment variable on the PostgreSQL service controls the vector dimension, which must match the embedding model you configure inside Flowise.
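A sketch of what pgvector similarity search looks like at the SQL level (the table name and dimension are illustrative; the dimension must equal EMBEDDING_SIZE):

```sql
-- Enable the extension (done during PostgreSQL initialization in this stack)
CREATE EXTENSION IF NOT EXISTS vector;

-- Illustrative table: dimension must match EMBEDDING_SIZE and the embedding model
CREATE TABLE IF NOT EXISTS document_chunks (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(768)
);

-- Nearest-neighbour search: <-> is Euclidean distance, <=> is cosine distance
SELECT content
FROM document_chunks
ORDER BY embedding <-> $1   -- $1 is the query embedding produced by the model
LIMIT 5;
```

Flowise issues equivalent queries through its Postgres vector store node, so you rarely write this SQL by hand, but it is useful for debugging ingested data directly.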
When building RAG flows, use the Postgres vector store node in Flowise and point it at DATABASE_HOST=postgres. The pgvector extension handles similarity search directly in SQL, keeping embedding retrieval within the stack.

## AI flow types for audit
### Q&A chains
Build question-answering flows over ingested audit documents, policy PDFs, or FleetDM inventory exports. Users or n8n workflows can query the knowledge base in natural language.
### Document analysis
Run LLM chains over osquery result sets or vulnerability reports to extract structured findings, classify severity, and generate human-readable summaries.
### Alert triage
Create agent flows that receive a raw alert from n8n, look up context from the vector store, and return a triage decision with a recommended action.
### Policy compliance
Chain FleetDM policy results through an LLM to explain non-compliance in plain language and suggest remediation steps tailored to the specific host configuration.
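As an example of how n8n (or any HTTP client) could invoke one of these flows, Flowise exposes a prediction endpoint per chatflow. A hedged sketch using the stack's internal service name (the chatflow ID is a placeholder you copy from the Flowise UI):

```shell
# POST a question to a deployed Flowise chatflow from inside the Docker network
curl -s "http://flowise:${FLOWISE_PORT}/api/v1/prediction/<chatflow-id>" \
  -H "Content-Type: application/json" \
  -d '{"question": "Which hosts failed the disk-encryption policy this week?"}'
```

In n8n, the same call maps onto an HTTP Request node, which lets a workflow feed alerts or osquery results into a flow and act on the JSON response.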
## Connecting to Ollama

Inside the Flowise UI, add an Ollama chat model node and set the base URL to http://ollama:11434. This uses Docker's internal DNS to reach the Ollama service directly without going through the host network. See Ollama: self-hosted LLM inference for the full Ollama service configuration.
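To confirm that Flowise can actually reach Ollama over the internal network, you could query Ollama's standard /api/tags endpoint from inside the Flowise container (assumes wget is available in the image; if not, run the check from any container on the same network):

```shell
# Lists the models Ollama has pulled; a JSON response confirms connectivity
docker compose exec flowise sh -c 'wget -qO- http://ollama:11434/api/tags'
```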