The problem
Virtual or multicluster Kubernetes setups (e.g. one shared API server, many logical clusters, nodes in different regions) need:

- Control plane ↔ nodes: API server reachable by every node; nodes reachable by the control plane (e.g. kubelet, metrics).
- Bidirectional: Traffic both ways over private links.
- Any location: Nodes (and control plane) can be in different clouds, regions, or behind firewalls.
The solution: Private Connect
Private Connect gives you service-level tunnels by name. Run an agent on the API server host and on each node (or VM); expose what the other side needs; reach by name. No VPN, no firewall rules, no port forwarding.

API server exposes its secure port
The API server exposes its secure port (e.g. 6443) as a named service (e.g. k8s-api).

Nodes reach the API server
Nodes run connect reach k8s-api and get a local port that tunnels to the API server.

With kplane (recommended)

kplane is a CLI for creating virtual Kubernetes control planes (VCPs). Each VCP is served by a shared API server with path-based isolation (/clusters/<name>/control-plane). The flow is kind-like: bring up the management plane, create a cluster, get credentials.
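Putting the two together, the flow might look like the sketch below. The kplane subcommands and the connect expose syntax are illustrative assumptions (the doc only confirms connect reach <name>); treat this as a shape of the workflow, not documented CLI syntax:

```shell
# On the management-plane host: bring up a VCP (kind-like flow; subcommands illustrative)
kplane create cluster dev
kplane get kubeconfig dev > dev.kubeconfig

# Expose the shared API server port by name (assumed `expose` subcommand,
# symmetric to the `reach` shown elsewhere in this doc)
connect expose k8s-api --port 8443

# On each node: open a local tunnel to the API server by name
connect reach k8s-api
```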
kplane + Private Connect workflow
Expose the API server by name
On the machine where kplane is running, get the apiserver port from kubectl cluster-info (e.g. 8443), then expose that port.

When kplane adds worker node management (join/leave), nodes will need to reach the control plane. Private Connect (or WireGuard, Tailscale, Datum) can be the transport: each node runs connect reach <cluster-name> and talks to the API over the tunnel. No VPN, no open ports.

Quick start: API server + one node
1. Run the API server (control plane host)
Use your multicluster API server (e.g. kplane-dev/apiserver).

2. Run a “node” (another machine / region)

On the machine that will act as a node (or run kubelet), use localhost:6443 (or the port you chose) as the API server address. The node talks to the control plane over Private Connect; no VPN required.
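Concretely, the node can keep a kubeconfig that points straight at the local end of the tunnel. A minimal sketch, assuming the tunnel landed on port 6443 (the file name and cluster/user names are illustrative):

```shell
# After `connect reach k8s-api` has printed a local port (assume 6443 here),
# write a kubeconfig whose server address is the local tunnel endpoint.
cat > node-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: tunneled
  cluster:
    server: https://localhost:6443   # local end of the Private Connect tunnel
contexts:
- name: tunneled
  context:
    cluster: tunneled
    user: node
current-context: tunneled
users:
- name: node
  user: {}
EOF
```

kubelet (or kubectl --kubeconfig node-kubeconfig.yaml) then talks to localhost; the agent carries the traffic to the real API server.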
3. Bidirectional: control plane reaching nodes
If the API server (or other control-plane components) must reach node endpoints (e.g. kubelet read-only port, metrics), expose those endpoints on each node and connect reach them by name from the control plane.

Persistent setup (daemon)
For long-lived clusters, run the agent as a daemon on each host. Tunnels stay up and reconnect automatically.
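One common way to daemonize an agent like this is a systemd unit. The connect agent invocation and unit values below are assumptions, not documented syntax; adjust the ExecStart line to whatever your agent binary actually accepts:

```shell
# /etc/systemd/system/private-connect.service (illustrative)
[Unit]
Description=Private Connect agent
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/connect agent    # hypothetical long-running agent mode
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then systemctl enable --now private-connect keeps the agent (and its tunnels) up across reboots.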
Non-interactive / automation
For CI, VMs, or scripts (e.g. cloud-init, exe.dev), use the same workspace API key so all agents can see the same services.
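For a cloud-init-provisioned VM, that might look like the fragment below. The CONNECT_API_KEY variable name is a guess at how the key is passed; check your agent's docs for the real mechanism:

```shell
#cloud-config
runcmd:
  # Same workspace key on every host so all agents see the same services
  # (env var name is an assumption, not documented syntax)
  - CONNECT_API_KEY=<workspace-api-key> connect reach k8s-api
```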
Summary
| Goal | With Private Connect |
|---|---|
| Nodes reach API server | connect reach k8s-api on each node |
| API server reach nodes | Expose node endpoints, connect reach from control plane |
| No VPN | Outbound-only agents; no firewall rules |
| Any location | Works across regions, clouds, NAT |
| By name | Use k8s-api, node-* instead of IPs |
See also
- kplane-dev/kplane — CLI for creating virtual Kubernetes control planes (kind-like)
- kplane-dev/apiserver — multicluster Kubernetes API server with path-based cluster routing
- Tailscale and Private Connect — K8s service access from outside the cluster