Use Private Connect to give a multicluster Kubernetes API server and distributed nodes secure, bidirectional connectivity—no VPN, no same-network requirement. Works with kplane (CLI for virtual control planes).

The problem

Virtual or multicluster Kubernetes setups (e.g. one shared API server, many logical clusters, nodes in different regions) need:
  • Control plane ↔ nodes: API server reachable by every node; nodes reachable by the control plane (e.g. kubelet, metrics).
  • Bidirectional: Traffic both ways over private links.
  • Any location: Nodes (and control plane) can be in different clouds, regions, or behind firewalls.
Usually that means a VPN or placing everything on one private network. VPNs add operational overhead, IP planning, and sometimes latency. What you want instead is private tunnels by name.

The solution: Private Connect

Private Connect gives you service-level tunnels by name. Run an agent on the API server host and on each node (or VM); expose what the other side needs; reach by name. No VPN, no firewall rules, no port forwarding.
┌─────────────────────────┐     ┌───────┐     ┌─────────────────────────┐
│  Node A (region 1)      │────▶│  Hub  │◀────│  Multicluster API       │
│  connect reach k8s-api  │     └───────┘     │  connect expose 6443    │
└─────────────────────────┘                   └─────────────────────────┘
           ▲                                               │
           │                                               ▼
           │            ┌───────┐     ┌─────────────────────────┐
           └────────────│  Hub  │◀────│  Node B (region 2)      │
                        └───────┘     │  connect reach k8s-api  │
                                      └─────────────────────────┘
1. API server exposes its secure port

The API server exposes its secure port (e.g. 6443) as a named service (e.g. k8s-api).
2. Nodes reach the API server

Nodes run connect reach k8s-api and get a local port that tunnels to the API server.
3. Control plane reaches nodes (optional)

If the API server needs to reach node endpoints (e.g. kubelet), run an agent on each node, expose those endpoints by name, and reach them from the API server host.
Everything is outbound to the hub; there are no inbound ports to open. Works with kplane-dev/apiserver or any multicluster/virtual Kubernetes API server.

kplane is a CLI for creating virtual Kubernetes control planes (VCPs). Each VCP is served by a shared API server with path-based isolation (/clusters/<name>/control-plane). The flow is kind-like: bring up the management plane, create a cluster, get credentials.

kplane + Private Connect workflow

1. Create a virtual cluster

Local management plane + VCP:
# Install kplane
curl -fsSL https://raw.githubusercontent.com/kplane-dev/kplane/main/scripts/install.sh | sh

kplane up
kplane create cluster demo
# get-credentials runs automatically after create; run it again if you switch context
kubectl get ns   # verify
kubectl cluster-info   # shows apiserver URL, e.g. https://127.0.0.1:8443/clusters/demo/control-plane
2. Expose the API server by name

On the machine where kplane is running, get the apiserver port from kubectl cluster-info:
Kubernetes control plane is running at https://127.0.0.1:8443/clusters/demo/control-plane
Expose that port (e.g. 8443):
connect up
connect expose localhost:8443 --name demo
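In scripts, the port can be pulled out of that URL rather than copied by hand. A minimal sketch using the example URL above (the sed pattern is an assumption about the URL shape shown by kubectl cluster-info):

```shell
# Extract the host port from an apiserver URL of the form
# https://127.0.0.1:<port>/clusters/<name>/control-plane
url="https://127.0.0.1:8443/clusters/demo/control-plane"
port=$(printf '%s\n' "$url" | sed -E 's#^https://[^:/]+:([0-9]+)/.*$#\1#')
echo "$port"   # 8443
```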
3. Reach from elsewhere

Another machine, region, or future worker node:
connect reach demo
# On this machine: create or edit kubeconfig so the cluster server is
# https://127.0.0.1:<reached-port>/clusters/demo/control-plane (same path as cluster-info).
# That local port is the tunnel to the control plane.
# Tip: copy the kubeconfig from the control-plane machine and change only the server port
# to <reached-port>; path and certs stay the same.
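For reference, the resulting cluster entry in the node-side kubeconfig would look roughly like this (a sketch; <reached-port> is the local port printed by connect reach, and everything else is copied unchanged from the control-plane machine's kubeconfig):

```yaml
# Node-side kubeconfig (fragment): only the server port changes.
clusters:
  - name: demo
    cluster:
      server: https://127.0.0.1:<reached-port>/clusters/demo/control-plane
      # certificate-authority-data stays exactly as copied
```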
When kplane adds worker node management (join/leave), nodes will need to reach the control plane. Private Connect (or WireGuard, Tailscale, Datum) can be the transport: each node runs connect reach <cluster-name> and talks to the API over the tunnel. No VPN, no open ports.

Quick start: API server + one node

1. Run the API server (control plane host)

Use your multicluster API server (e.g. kplane-dev/apiserver):
# Example: etcd + apiserver (see apiserver README for full flags)
./apiserver --etcd-servers=http://127.0.0.1:2379 --secure-port=6443 ...
Install Private Connect and expose the API server:
curl -fsSL https://privateconnect.co/install.sh | bash
connect up
connect expose localhost:6443 --name k8s-api

2. Run a “node” (another machine / region)

On the machine that will act as a node (or run kubelet):
curl -fsSL https://privateconnect.co/install.sh | bash
connect up
connect reach k8s-api --port 6443
Use localhost:6443 (or the port you chose) as the API server address. The node talks to the control plane over Private Connect; no VPN required.

3. Bidirectional: control plane reaching nodes

If the API server (or other control-plane components) must reach node endpoints (e.g. the kubelet API port, metrics), expose those endpoints on each node:
connect expose localhost:10250 --name node-<name>
On the API server host:
connect reach node-<name>
Then point your control plane at the local port Private Connect provides.

Persistent setup (daemon)

For long-lived clusters, run the agent as a daemon on each host:
1. API server host

connect daemon install
connect expose localhost:6443 --name k8s-api
2. Each node

connect daemon install
connect reach k8s-api --port 6443
# Optionally expose node endpoints:
connect expose localhost:10250 --name node-us-west-1
Tunnels stay up and reconnect automatically.

Non-interactive / automation

For CI, VMs, or scripts (e.g. cloud-init, exe.dev):
# On API server host
connect up --api-key pc_xxx --label k8s-control-plane
connect expose localhost:6443 --name k8s-api

# On nodes
connect up --api-key pc_xxx --label k8s-node-1
connect reach k8s-api --port 6443
Use the same workspace API key so all agents can see the same services.
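As an illustration, the node-side steps drop straight into cloud-init user data. A sketch assuming the install URL and flags shown above (the API key and label are placeholders):

```yaml
#cloud-config
runcmd:
  # Install the agent, authenticate non-interactively, open the tunnel
  - curl -fsSL https://privateconnect.co/install.sh | bash
  - connect up --api-key pc_xxx --label k8s-node-1
  - connect reach k8s-api --port 6443
```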

Summary

  • Nodes reach the API server: connect reach k8s-api on each node
  • API server reaches nodes: expose node endpoints, then connect reach from the control plane
  • No VPN: outbound-only agents; no firewall rules
  • Any location: works across regions, clouds, NAT
  • By name: use k8s-api and node-* instead of IPs
