Deploy Plane on Kubernetes for production environments that require high availability, scalability, and advanced orchestration capabilities.

Overview

The Kubernetes deployment of Plane provides:
  • High Availability: Multiple replicas of critical services
  • Auto-scaling: Horizontal pod autoscaling based on load
  • Rolling Updates: Zero-downtime deployments
  • Resource Management: CPU and memory limits/requests
  • Health Checks: Liveness and readiness probes
  • Persistent Storage: StatefulSets for databases

Prerequisites

1. Kubernetes Cluster

You need a running Kubernetes cluster (v1.21+). Options include:
  • Cloud Providers: AWS EKS, Google GKE, Azure AKS, DigitalOcean DOKS
  • Self-Managed: kubeadm, kops, Rancher
  • Local Development: minikube, kind, k3s
Verify cluster access:
kubectl cluster-info
kubectl get nodes

2. Helm 3

Install Helm, the Kubernetes package manager:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Verify installation:
helm version

3. kubectl

Ensure kubectl is installed and configured:
kubectl version --client

4. Persistent Storage

Your cluster needs a StorageClass for persistent volumes:
kubectl get storageclass
Most cloud providers offer default storage classes (gp2, standard, etc.).
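If no StorageClass is marked as the default, PVCs created by the chart can sit in Pending forever. One way to fix that is to annotate an existing class as the default; the class name `gp2` below is an example, so substitute one from your own `kubectl get storageclass` output:

```shell
# Mark an existing StorageClass (here "gp2" -- substitute your own) as the default
kubectl patch storageclass gp2 \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```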

Installation via Helm

Plane provides an official Helm chart for Kubernetes deployments.

Add Helm Repository

helm repo add makeplane https://helm.plane.so
helm repo update
Verify the chart is available:
helm search repo makeplane/plane-ce

Install Plane

1. Create Namespace

kubectl create namespace plane

2. Create Configuration File

Create a values.yaml file with your custom configuration:
values.yaml
# Basic configuration
ingress:
  enabled: true
  host: plane.example.com
  tls:
    enabled: true
    secretName: plane-tls

# Application settings
env:
  web_url: "https://plane.example.com"
  cors_allowed_origins: "https://plane.example.com"

# Database (use external for production)
postgresql:
  enabled: true  # Set to false if using external DB
  auth:
    username: plane
    password: "change-me-in-production"
    database: plane
  primary:
    persistence:
      size: 10Gi

# Redis cache
redis:
  enabled: true  # Set to false if using external Redis
  auth:
    enabled: false
  master:
    persistence:
      size: 2Gi

# RabbitMQ
rabbitmq:
  enabled: true
  auth:
    username: plane
    password: "change-me-in-production"
  persistence:
    size: 2Gi

# MinIO (object storage)
minio:
  enabled: true  # Set to false if using S3
  auth:
    rootUser: access-key
    rootPassword: "change-me-in-production"
  persistence:
    size: 20Gi

# Service replicas (adjust based on load)
api:
  replicaCount: 2
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 2Gi

worker:
  replicaCount: 2
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 1Gi

beatWorker:
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

web:
  replicaCount: 2
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

space:
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

admin:
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

live:
  replicaCount: 1
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

3. Install Chart

helm install plane makeplane/plane-ce \
  --namespace plane \
  --values values.yaml \
  --timeout 10m
Monitor the installation:
kubectl get pods -n plane -w
Wait for all pods to reach Running status.
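Instead of watching the pod list manually, you can block until everything reports Ready. One caveat: pods belonging to the one-time migration Job run to completion and exit, so they never become Ready and you may need to exclude them with a label selector:

```shell
# Wait up to 10 minutes for all pods in the namespace to become Ready
kubectl wait --for=condition=Ready pods --all -n plane --timeout=600s
```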

4. Verify Installation

Check all resources:
kubectl get all -n plane
View logs if needed:
kubectl logs -n plane -l app=plane-api
kubectl logs -n plane -l app=plane-worker

Access Plane

Depending on your ingress configuration:
# Access via configured domain
open https://plane.example.com
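If you have not set up an ingress yet, port-forwarding is a quick way to reach the UI from your workstation. The service name below is an assumption; confirm it against `kubectl get svc -n plane`:

```shell
# Forward the proxy service to localhost (service name may differ in your release)
kubectl port-forward -n plane svc/plane-proxy 8080:80
# then browse to http://localhost:8080
```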

Architecture on Kubernetes

Namespace: plane
├── Deployments
│   ├── plane-web (2 replicas)
│   ├── plane-space (1 replica)
│   ├── plane-admin (1 replica)
│   ├── plane-api (2 replicas)
│   ├── plane-worker (2 replicas)
│   ├── plane-beat-worker (1 replica)
│   └── plane-live (1 replica)
├── StatefulSets
│   ├── plane-postgresql (1 replica)
│   ├── plane-redis (1 replica)
│   ├── plane-rabbitmq (1 replica)
│   └── plane-minio (1 replica)
├── Services
│   ├── plane-proxy (LoadBalancer/ClusterIP)
│   ├── plane-api (ClusterIP)
│   ├── plane-postgresql (ClusterIP)
│   ├── plane-redis (ClusterIP)
│   ├── plane-rabbitmq (ClusterIP)
│   └── plane-minio (ClusterIP)
├── Jobs
│   └── plane-migrator (one-time)
├── PersistentVolumeClaims
│   ├── postgresql-data
│   ├── redis-data
│   ├── rabbitmq-data
│   └── minio-data
└── ConfigMaps & Secrets
    ├── plane-config
    └── plane-secrets

Configuration

Using External PostgreSQL

For production, use a managed database service:
values.yaml
postgresql:
  enabled: false  # Disable bundled PostgreSQL

env:
  database_url: "postgresql://user:password@your-db-host:5432/plane"
  postgres_host: "your-db-host"
  postgres_port: "5432"
  postgres_user: "plane"
  postgres_password: "your-password"
  postgres_db: "plane"
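The role and database must already exist on the external server before Plane connects. A minimal sketch, run against the external host with an admin account (hostnames and credentials here are placeholders matching the values.yaml above):

```shell
# Create the Plane role and database on the external PostgreSQL server
psql -h your-db-host -U postgres <<'SQL'
CREATE ROLE plane WITH LOGIN PASSWORD 'your-password';
CREATE DATABASE plane OWNER plane;
SQL
```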

Using External Redis

values.yaml
redis:
  enabled: false  # Disable bundled Redis

env:
  redis_host: "your-redis-host"
  redis_port: "6379"
  redis_url: "redis://your-redis-host:6379/"

Using AWS S3 for Storage

values.yaml
minio:
  enabled: false  # Disable MinIO

env:
  use_minio: "0"
  aws_region: "us-east-1"
  aws_access_key_id: "your-access-key"
  aws_secret_access_key: "your-secret-key"
  aws_s3_endpoint_url: "https://s3.amazonaws.com"
  aws_s3_bucket_name: "plane-uploads"
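Putting long-lived AWS credentials in plain text inside values.yaml is risky, since that file often ends up in version control. One option is to store them in a Kubernetes Secret instead; whether and how the chart can reference an existing secret is chart-specific, so check `helm show values makeplane/plane-ce` for the supported keys:

```shell
# Keep S3 credentials out of values.yaml by storing them in a Secret
kubectl create secret generic plane-s3-credentials -n plane \
  --from-literal=aws_access_key_id=your-access-key \
  --from-literal=aws_secret_access_key=your-secret-key
```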

Ingress Configuration

NGINX Ingress

values.yaml
ingress:
  enabled: true
  className: nginx
  host: plane.example.com
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
  tls:
    enabled: true
    secretName: plane-tls

Traefik Ingress

values.yaml
ingress:
  enabled: true
  className: traefik
  host: plane.example.com
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
  tls:
    enabled: true
    secretName: plane-tls

SSL/TLS with cert-manager

1. Install cert-manager

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

2. Create ClusterIssuer

letsencrypt-prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
Apply:
kubectl apply -f letsencrypt-prod.yaml

3. Update Ingress

Update your values.yaml:
ingress:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
Upgrade release:
helm upgrade plane makeplane/plane-ce -n plane -f values.yaml

Scaling

Manual Scaling

Scale deployments directly:
# Scale API servers
kubectl scale deployment -n plane plane-api --replicas=4

# Scale workers
kubectl scale deployment -n plane plane-worker --replicas=3
Or update values.yaml and upgrade:
values.yaml
api:
  replicaCount: 4
worker:
  replicaCount: 3
helm upgrade plane makeplane/plane-ce -n plane -f values.yaml

Horizontal Pod Autoscaling (HPA)

Create HPA for API pods:
api-hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: plane-api-hpa
  namespace: plane
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: plane-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
Apply:
kubectl apply -f api-hpa.yaml
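The HPA reads CPU and memory figures from the resource metrics API, which most clusters get from metrics-server; if `kubectl top pods` errors out, metrics-server is probably missing. A sketch of installing it and then watching the autoscaler:

```shell
# Install metrics-server (required for the HPA's CPU/memory metrics)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Watch the autoscaler's current vs. target utilization
kubectl get hpa -n plane -w
```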

Upgrading

Always back up your data before upgrading.

1. Backup Data

Backup PVCs:
# Create snapshots or backup using your storage provider
kubectl get pvc -n plane
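If your cluster runs a CSI driver with snapshot support, a VolumeSnapshot per PVC is one approach. The snapshot class and PVC names below are illustrative; match them to your `kubectl get volumesnapshotclass` and `kubectl get pvc -n plane` output:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgresql-data-snapshot
  namespace: plane
spec:
  volumeSnapshotClassName: csi-snapclass   # substitute your snapshot class
  source:
    persistentVolumeClaimName: postgresql-data
```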

2. Update Helm Repository

helm repo update makeplane

3. Check New Version

helm search repo makeplane/plane-ce --versions

4. Upgrade Release

helm upgrade plane makeplane/plane-ce \
  --namespace plane \
  --values values.yaml \
  --timeout 10m
Monitor rollout:
kubectl rollout status deployment/plane-api -n plane
kubectl rollout status deployment/plane-worker -n plane
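If the new release misbehaves, Helm keeps a revision history you can roll back to:

```shell
# List release revisions, then roll back to the previous one
helm history plane -n plane
helm rollback plane -n plane
```

Run `kubectl rollout status` on the deployments again afterwards to confirm the rollback completed.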

Monitoring

Resource Usage

# Pod resource usage
kubectl top pods -n plane

# Node resource usage
kubectl top nodes

Logs

# Stream logs from all API pods
kubectl logs -n plane -l app=plane-api -f

# View logs from specific pod
kubectl logs -n plane plane-api-7d8f9c5b6-xyz12

# Previous container logs (if crashed)
kubectl logs -n plane plane-api-7d8f9c5b6-xyz12 --previous

Events

# Recent events in namespace
kubectl get events -n plane --sort-by='.lastTimestamp'

# Events for specific pod
kubectl describe pod -n plane plane-api-7d8f9c5b6-xyz12

Troubleshooting

Pods Not Starting

# Check pod status
kubectl get pods -n plane

# Describe pod for events
kubectl describe pod -n plane <pod-name>

# Check logs
kubectl logs -n plane <pod-name>
Common issues:
  • ImagePullBackOff: Check image name and registry access
  • CrashLoopBackOff: Check logs for application errors
  • Pending: Check resource availability and PVC binding

Database Connection Issues

# Test database connectivity
kubectl run -it --rm debug --image=postgres:15 --restart=Never -n plane -- \
  psql -h plane-postgresql -U plane -d plane

# Check PostgreSQL logs
kubectl logs -n plane -l app=postgresql

PVC Not Binding

# Check PVC status
kubectl get pvc -n plane

# Describe PVC for events
kubectl describe pvc -n plane <pvc-name>

# Check available storage classes
kubectl get storageclass

Ingress Not Working

# Check ingress status
kubectl get ingress -n plane

# Describe ingress
kubectl describe ingress -n plane plane-ingress

# Check ingress controller logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx

Backup and Disaster Recovery

Using Velero

Install Velero for cluster backups:
# Install Velero
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm install velero vmware-tanzu/velero --namespace velero --create-namespace

# Backup Plane namespace
velero backup create plane-backup --include-namespaces plane

# Schedule daily backups
velero schedule create plane-daily --schedule="0 2 * * *" --include-namespaces plane
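Restoring works from any completed backup by name:

```shell
# Restore the plane namespace from a previous backup
velero restore create --from-backup plane-backup

# Check restore progress and results
velero restore get
```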

Manual PVC Backup

# Create a backup job for PostgreSQL
# (the quoted 'EOF' delimiter stops your local shell from expanding
# $(date ...) at apply time, so the filename is generated when the job runs)
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: postgres-backup
  namespace: plane
spec:
  template:
    spec:
      containers:
      - name: backup
        image: postgres:15
        command: ["/bin/sh", "-c"]
        args:
          - pg_dump -h plane-postgresql -U plane -d plane > /backup/plane-$(date +%Y%m%d).sql
        env:
        - name: PGPASSWORD
          value: "change-me-in-production"  # match your PostgreSQL password, or use a secretKeyRef
        volumeMounts:
        - name: backup-volume
          mountPath: /backup
      restartPolicy: Never
      volumes:
      - name: backup-volume
        persistentVolumeClaim:
          claimName: postgres-backup-pvc  # must exist before the job runs
EOF
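To restore, stream a dump back through psql. The pod and file names below are placeholders, and the snippet assumes the container exposes its password as POSTGRES_PASSWORD, as the Bitnami PostgreSQL image used by many charts does:

```shell
# Copy the dump out of the backup volume (via any pod that mounts it),
# then stream it into the running PostgreSQL pod
kubectl exec -i -n plane <postgresql-pod> -- \
  sh -c 'PGPASSWORD="$POSTGRES_PASSWORD" psql -U plane -d plane' < <backup-file>.sql
```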

Next Steps

  • Configuration: Configure environment variables and integrations
  • Instance Admin: Set up instance administration
