Before you deploy SoftArchitect AI, make sure your machine meets the software and hardware requirements below.

Required software

1. Install Docker 20.10+

Docker is required to run all three services (API, ChromaDB, Ollama) in isolated containers.

```bash
docker --version
# Docker version 20.10.x, build ...
```

Download from the official Docker site.
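If you want to script this check, a minimal sketch is shown below. The `version_ok` helper is hypothetical (not part of the project) and relies on GNU `sort -V` for version-aware ordering:

```bash
# Hypothetical helper: succeeds when the installed version (first argument)
# is at least the required version (second argument). Uses GNU `sort -V`.
version_ok() {
  installed="$1"; required="$2"
  [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]
}

# Extract the version number from `docker --version` output, e.g.
# "Docker version 24.0.7, build afdd53b" -> "24.0.7"
installed="$(docker --version 2>/dev/null | sed 's/^Docker version \([0-9][0-9.]*\).*/\1/')"
if version_ok "${installed:-0}" "20.10"; then
  echo "Docker ${installed} meets the 20.10+ requirement"
else
  echo "Docker ${installed:-not found} does not meet the 20.10+ requirement"
fi
```

The same helper works for the Compose version check in the next step.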
2. Install Docker Compose 2.0+

SoftArchitect AI uses the `docker compose` (v2) CLI plugin, not the legacy `docker-compose` binary.

```bash
docker compose version
# Docker Compose version v2.x.x
```

Docker Desktop includes Compose v2 by default. On Linux, install it via the Docker Compose plugin guide.
3. Install Git

Git is required to clone the repository.

```bash
git --version
# git version 2.x.x
```

Optional software

Ollama (local LLM mode)

Install Ollama on your host machine if you want to run LLM inference natively outside Docker. For the fully containerised setup, Ollama runs inside the sa_ollama container automatically — no host installation needed.
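To see at a glance which mode applies on your machine, a quick check (a sketch, not part of the project tooling):

```bash
# Detect a native Ollama installation; otherwise fall back to container mode.
if command -v ollama >/dev/null 2>&1; then
  echo "Native Ollama found: host-mode inference is available"
else
  echo "No native Ollama: the sa_ollama container will handle inference"
fi
```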

API keys (cloud mode)

If you prefer cloud inference, obtain an API key from Google AI Studio (for Gemini) or GroqCloud (for Groq). Cloud mode requires only 4 GB of RAM on the host.
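Cloud-mode keys typically go into your `.env` file. The variable names below are assumptions for illustration only — confirm the actual names against the example environment file shipped in the repository:

```bash
# Hypothetical .env entries for cloud inference — check the repository's
# example environment file for the real variable names before using these.
GEMINI_API_KEY=your-gemini-key
GROQ_API_KEY=your-groq-key
```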

Hardware requirements

| Mode | Minimum RAM | Recommended RAM | Notes |
| --- | --- | --- | --- |
| Local Ollama | 8 GB | 16 GB | Ollama container default memory limit: 2 GB (adjustable via `OLLAMA_MEMORY_LIMIT`) |
| Cloud API (Gemini / Groq) | 4 GB | 8 GB | No local model weights downloaded |
The `validate-docker-setup.sh` script in `infrastructure/` warns you when available RAM appears to be below 8 GB and suggests reducing `OLLAMA_MEMORY_LIMIT` in your `.env` file.
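As a rough sketch of the kind of RAM check the validator performs (Linux only; reads total memory in kB from `/proc/meminfo`):

```bash
# Read total RAM in kB from /proc/meminfo and convert to whole GB.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
total_gb=$((total_kb / 1024 / 1024))
if [ "$total_gb" -lt 8 ]; then
  echo "Only ${total_gb} GB RAM: consider lowering OLLAMA_MEMORY_LIMIT in .env"
else
  echo "${total_gb} GB RAM: enough for local Ollama mode"
fi
```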

Disk space

| Resource | Approximate size |
| --- | --- |
| Docker images (API + ChromaDB + Ollama) | ~3 GB |
| LLM model weights (e.g., llama3.2) | 2–8 GB, depending on the model |
| ChromaDB vector index | Grows with usage |
A minimum of 20 GB of free disk space is recommended.
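A quick way to check free space on the current filesystem (POSIX `df -P`, 1 kB blocks):

```bash
# Free space on the current directory's filesystem, in whole GB.
free_kb=$(df -Pk . | awk 'NR==2 {print $4}')
free_gb=$((free_kb / 1024 / 1024))
echo "Free disk space: ${free_gb} GB (20 GB recommended)"
```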

Operating system support

Linux

Fully supported. The native Docker Engine gives the best performance for local LLM inference.

macOS

Supported via Docker Desktop. Apple Silicon (M1/M2/M3) can use Metal GPU acceleration when Ollama runs natively on the host; containers on macOS cannot access the GPU.

Windows

Supported via Docker Desktop with WSL 2 backend. NVIDIA GPU pass-through requires the NVIDIA Container Toolkit for WSL.

Verifying your installation

Run the following commands to confirm both tools are available before proceeding:

```bash
# Check Docker version (must be 20.10+)
docker --version

# Check Compose version (must be v2.0+)
docker compose version

# Verify the Docker daemon is running
docker ps
```
You can also run the bundled validator from the repository root after cloning:

```bash
bash infrastructure/validate-docker-setup.sh
```
This script performs nine checks, including Docker installation, daemon status, available ports (8000, 8001, 11434), disk space, and optional NVIDIA GPU detection.
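The port check can be approximated like this (a sketch, not the validator's actual code; uses `ss` from iproute2, so Linux only):

```bash
# Report whether each required port is already bound by a listening socket.
for port in 8000 8001 11434; do
  if ss -ltn 2>/dev/null | grep -q ":${port}[[:space:]]"; then
    echo "Port ${port} is already in use"
  else
    echo "Port ${port} looks free"
  fi
done
```

A port reported in use usually means another local service must be stopped or the compose port mapping changed before deployment.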
