**Problem:** Optional stacks started before main stack

**Solution:**
```bash
# Start main stack first to create networks
docker compose -f docker-compose.yml up -d

# Then start optional stacks
docker compose -f docker-compose.yml -f docker-compose-langfuse.yml up -d
```
**Prevention:** Always start the main `docker-compose.yml` stack first, then add optional stacks.
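If you are not sure whether the main stack has already created the shared networks, a quick check before bringing up an optional stack can confirm it. This is a sketch: the `pentagi` network-name filter is an assumption, so check the `networks:` section of `docker-compose.yml` for the real name.

```bash
# List networks created by the main stack (network name assumed;
# check docker-compose.yml for the actual name)
docker network ls --filter name=pentagi

# Confirm the main stack's services are up before adding optional stacks
docker compose -f docker-compose.yml ps
```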
### Cannot connect to PentAGI at localhost:8443
**Possible causes:**

- Service not started
- Port binding issue
- Firewall blocking access
**Diagnosis:**
```bash
# Check if pentagi is running
docker compose ps pentagi

# Check port binding
docker compose port pentagi 8443

# Check logs for errors
docker compose logs pentagi | grep -i error

# Test local connectivity
curl -k https://localhost:8443/health
```
**Solutions:**
```bash
# Restart service
docker compose restart pentagi

# Change listen IP if needed (edit .env)
PENTAGI_LISTEN_IP=0.0.0.0
PENTAGI_LISTEN_PORT=8443

# Check firewall
sudo ufw status
sudo ufw allow 8443/tcp
```
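To confirm the binding actually took effect on the host, standard tooling works independently of PentAGI; a quick sketch:

```bash
# Verify something is listening on 8443 on the host
ss -tlnp | grep 8443

# Confirm Docker published the port for a running container
docker ps --filter publish=8443
```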
### SSL certificate errors
**Problem:** Self-signed certificate warnings or connection refused

For development (self-signed certificates):
```bash
# Access with -k flag to skip verification
curl -k https://localhost:8443/health

# Or add certificate to browser trust store
docker compose cp pentagi:/opt/pentagi/ssl/cert.pem ./pentagi-cert.pem
# Import pentagi-cert.pem to browser
```
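Before importing the exported certificate, it can be worth sanity-checking it with standard `openssl` commands; a sketch:

```bash
# Inspect subject, issuer, and validity window of the exported certificate
openssl x509 -in pentagi-cert.pem -noout -subject -issuer -dates

# Print the SHA-256 fingerprint to compare against the browser warning
openssl x509 -in pentagi-cert.pem -noout -fingerprint -sha256
```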
For production (Let’s Encrypt):
```bash
# Verify certificate paths in .env
SERVER_SSL_CRT=/etc/letsencrypt/live/pentagi.example.com/fullchain.pem
SERVER_SSL_KEY=/etc/letsencrypt/live/pentagi.example.com/privkey.pem

# Check certificate is mounted
docker compose exec pentagi ls -la /etc/ssl/pentagi/

# Test certificate validity
openssl s_client -connect localhost:8443 -servername pentagi.example.com
```
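Expired certificates are a common cause of sudden SSL failures in production. If the certificates are managed by certbot (an assumption; your renewal setup may differ), these standard commands show expiry and exercise renewal:

```bash
# Show certificates certbot manages and their expiry dates
sudo certbot certificates

# Dry-run a renewal to confirm the ACME challenge still works
sudo certbot renew --dry-run
```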
### Worker node TLS connection failures
**Problem:** PentAGI cannot connect to worker node Docker API

**Diagnosis:**
```bash
# On main node - test connection
docker --tlsverify \
  --tlscacert=/opt/pentagi/docker-host-ssl/ca.pem \
  --tlscert=/opt/pentagi/docker-host-ssl/cert.pem \
  --tlskey=/opt/pentagi/docker-host-ssl/key.pem \
  -H tcp://${PRIVATE_IP}:2376 version

# Check certificate validity
openssl x509 -in /opt/pentagi/docker-host-ssl/cert.pem -noout -dates

# Verify SAN includes worker IP
openssl x509 -in /opt/pentagi/docker-host-ssl/cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```
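If the `docker version` test fails with a handshake error, probing the port directly with `openssl s_client` can separate network problems from certificate problems; a sketch using the same certificate paths as above:

```bash
# Handshake against the worker's Docker TLS endpoint.
# "Verify return code: 0 (ok)" means the CA and server certificate line up.
openssl s_client -connect ${PRIVATE_IP}:2376 \
  -CAfile /opt/pentagi/docker-host-ssl/ca.pem \
  -cert /opt/pentagi/docker-host-ssl/cert.pem \
  -key /opt/pentagi/docker-host-ssl/key.pem </dev/null
```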
**Solutions:**
```bash
# Regenerate certificates if SAN is missing
# See worker node guide for certificate generation

# Check firewall on worker node
sudo ufw status | grep 2376

# Verify Docker daemon is listening
sudo netstat -tlnp | grep 2376

# Check Docker daemon logs
sudo journalctl -u docker -n 100
```
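On the worker node itself it is worth confirming the daemon is actually configured for TLS. The keys below are standard `dockerd` options; the certificate paths are assumptions that depend on how the worker was provisioned:

```bash
# Inspect the daemon configuration on the worker node; expect TLS
# options and a tcp host entry roughly like the following
cat /etc/docker/daemon.json
# {
#   "tlsverify": true,
#   "tlscacert": "/etc/docker/ssl/ca.pem",
#   "tlscert": "/etc/docker/ssl/server-cert.pem",
#   "tlskey": "/etc/docker/ssl/server-key.pem",
#   "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
# }
```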
### Graphiti errors or timeouts

**Solutions:**

```bash
# Verify OpenAI API key is valid
OPEN_AI_KEY=sk-...

# Check model name is correct
GRAPHITI_MODEL_NAME=gpt-4o-mini

# Increase timeout
GRAPHITI_TIMEOUT=60

# Restart Graphiti
docker compose -f docker-compose-graphiti.yml restart graphiti
```
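To confirm whether the timeout value is actually the problem, the service logs are the quickest signal; a sketch using the same `graphiti` service name as the restart command above:

```bash
# Look for timeout or provider errors in the Graphiti service logs
docker compose -f docker-compose-graphiti.yml logs graphiti | grep -iE "timeout|error"
```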
### No LLM provider configured

**Error:**

```
Error: No LLM provider configured. Please set at least one provider.
```

**Solution:**
```bash
# Edit .env and add at least one provider
OPEN_AI_KEY=sk-...
# OR
ANTHROPIC_API_KEY=sk-ant-...
# OR
GEMINI_API_KEY=...

# Restart PentAGI
docker compose restart pentagi
```
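A common follow-up failure is the key being set in `.env` but never reaching the container. A generic check that the variable is visible inside it:

```bash
# Confirm a provider key is present in the container environment
# (prints only the variable names, not the secret values)
docker compose exec pentagi env | grep -oE "OPEN_AI_KEY|ANTHROPIC_API_KEY|GEMINI_API_KEY"
```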
### OpenAI API rate limit exceeded
**Problem:** Hitting rate limits or quota errors

**Solutions:**
```bash
# Use different models with higher limits
# Edit agent configuration to use cheaper models

# Add delays between requests (future feature)
# Use multiple API keys with load balancing (future feature)

# Switch to Anthropic or other provider temporarily
ANTHROPIC_API_KEY=sk-ant-...
```
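Before switching providers, it can help to confirm requests are really being rejected with HTTP 429. Both checks below use standard tooling; the log pattern is an assumption about the log format:

```bash
# Check the API key itself is accepted (expects HTTP 200);
# assumes OPEN_AI_KEY is exported in your shell
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $OPEN_AI_KEY" \
  https://api.openai.com/v1/models

# Look for 429 responses in the application logs
docker compose logs pentagi | grep -iE "429|rate.?limit"
```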
### AWS Bedrock rate limits
**Problem:** Very low default rate limits (2 RPM for Claude Sonnet 4)

**Solutions:**
```bash
# Request quota increase via AWS Service Quotas console
# Path: Service Quotas > AWS Bedrock > Model rate limits

# Switch to provisioned throughput
BEDROCK_USE_PROVISIONED_THROUGHPUT=true

# Use alternative models with higher quotas
# Edit Bedrock provider config to use Nova or Llama models

# Switch to different LLM provider
ANTHROPIC_API_KEY=sk-ant-...  # Direct API has higher limits
```
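The applied Bedrock quotas can also be inspected from the CLI rather than the console; a sketch using the standard Service Quotas API (quota names vary by model and region, so adjust the filter string):

```bash
# List applied Bedrock quotas and filter for per-minute request limits
aws service-quotas list-service-quotas --service-code bedrock \
  --query "Quotas[?contains(QuotaName, 'requests per minute')].[QuotaName,Value]" \
  --output table
```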
### Ollama connection refused
**Problem:** Cannot connect to Ollama server

**Diagnosis:**
```bash
# Test Ollama API
curl http://localhost:11434/api/tags

# Check if Ollama is running
ps aux | grep ollama

# Check Ollama logs
journalctl -u ollama -n 100
```
**Solutions:**
```bash
# Start Ollama if not running
systemctl start ollama

# Verify server URL in .env
OLLAMA_SERVER_URL=http://localhost:11434

# For Docker Ollama
OLLAMA_SERVER_URL=http://host.docker.internal:11434

# Pull required model
ollama pull llama3.1:8b-instruct-q8_0
```
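To verify the server and the model end to end, Ollama's documented generate endpoint can be called directly with the model pulled above:

```bash
# Confirm the model is installed
ollama list

# One-shot generation via the documented /api/generate endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b-instruct-q8_0",
  "prompt": "Say hello",
  "stream": false
}'
```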
### Installer TUI display issues

**Problem:** Terminal breaks or installer doesn’t display correctly

**Solutions:**
```bash
# Ensure terminal supports TUI
echo $TERM   # Should be xterm-256color or similar

# Run in proper terminal emulator
# Not in: screen, tmux, or IDE integrated terminals

# Check minimum terminal size
tput cols    # Should be >= 80
tput lines   # Should be >= 24

# Check logs
tail -f log.json | jq '.'
```
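If `$TERM` is wrong or unset, exporting a sane value before relaunching often fixes rendering. A generic sketch; the UTF-8 locale line is an assumption about what the TUI needs:

```bash
# Force a capable terminal type and a UTF-8 locale, then relaunch the installer
export TERM=xterm-256color
export LANG=en_US.UTF-8
```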
### Cannot save configuration
**Problem:** Environment variables not persisting

**Diagnosis:**
```bash
# Check .env file permissions
ls -la .env

# Verify .env file is being written
cat .env | grep OPEN_AI_KEY
```
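If the check shows `.env` owned by another user (for example root, after a sudo run) or not writable, restoring ownership and permissions is a plausible fix; a sketch, assuming the file should belong to the user running `docker compose`:

```bash
# Take ownership of .env and restrict it to the current user
sudo chown "$(id -un):$(id -gn)" .env
chmod 600 .env
```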