Overview
Private Connect uses a hub-and-spoke architecture where agents connect to a central coordination layer (the Hub). Services are exposed from one machine and accessed from another through encrypted tunnels.
Agents
Lightweight CLI running on each machine, managing connections
Hub
Central coordination layer for routing and metadata
Services
Exposed endpoints accessible by name across your workspace
Workspace
Isolated environment where all your services and agents live
Core Components
Agents
An agent runs on each machine where you want to expose or access services. Agents are responsible for:
- Establishing secure WebSocket connections to the Hub
- Exposing local services to the workspace
- Creating tunnels to remote services
- Maintaining stable port mappings
- Reporting health and connectivity status
The Hub
The Hub is the central coordination service that:
- Authenticates agents and maintains workspace isolation
- Stores service metadata (names, targets, ownership)
- Routes traffic between agents as an opaque relay
- Tracks connection metadata for audit logging
- Enforces access controls and permissions
The Hub sees metadata (service names, connection times) but does not inspect payload data. All traffic passes through as base64-encoded packets.
Services
Services are exposed endpoints that can be accessed by name:
- Workspace-scoped: Only accessible within your workspace
- Name-based: Access by name, not IP or port
- Stable: Same service always maps to the same local port
- Private by default: Not accessible outside your workspace unless explicitly shared
Data Flow
Here’s how traffic flows when Agent B reaches a service exposed by Agent A:
Step-by-Step Flow
Service Discovery
Agent B queries the Hub: “Where is prod-db?”
The Hub responds: “Agent A is exposing prod-db at localhost:5432”
Tunnel Establishment
Agent B creates a local listener on localhost:5432.
Agent B opens a WebSocket tunnel to the Hub, tagged for Agent A.
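The forwarding step can be sketched as follows. The `RelayFrame` shape (target agent, service name, base64 payload) is an illustrative assumption, not Private Connect's actual wire format:

```typescript
import { Buffer } from "node:buffer";

// Hypothetical relay frame: the field names below are assumptions
// for illustration, not Private Connect's real protocol.
interface RelayFrame {
  target: string;   // agent exposing the service, e.g. "agent-a"
  service: string;  // service name, e.g. "prod-db"
  payload: string;  // base64-encoded TCP bytes
}

// Wrap raw TCP bytes from the local listener into a frame for the Hub.
function frame(target: string, service: string, chunk: Buffer): RelayFrame {
  return { target, service, payload: chunk.toString("base64") };
}

// Unwrap on the receiving agent before forwarding to the real service.
function unframe(f: RelayFrame): Buffer {
  return Buffer.from(f.payload, "base64");
}
```

The key point is that encoding and decoding happen only at the two agents; the Hub handles frames without ever decoding `payload`.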
What the Hub Sees
| Data | Visibility | Notes |
|---|---|---|
| Agent identity | ✓ Visible | Agent ID, label, workspace |
| Service names | ✓ Visible | e.g., “prod-db”, “redis” |
| Target host:port | ✓ Visible | e.g., “localhost:5432” |
| Connection metadata | ✓ Visible | When connections are made, duration |
| IP addresses | ✓ Visible | For audit logging (masked in logs) |
| Payload data | Opaque relay | Base64-encoded, not inspected |
Payload Handling
When data flows through the Hub:
- Agent B sends data as base64-encoded packets
- Hub forwards packets to Agent A without inspection
- Agent A decodes and forwards to the target service
- Responses flow back the same way
The Hub does not:
- Decrypt or inspect payload contents
- Store payload data
- Log payload contents
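The Hub's role as an opaque relay can be sketched as a lookup-and-forward step; the types here are illustrative assumptions, not the actual Hub implementation:

```typescript
// A delivery callback per connected agent (e.g. a WebSocket send).
type Forward = (frame: { target: string; payload: string }) => void;

// The Hub routes by target agent ID and forwards the frame verbatim:
// `payload` stays base64-encoded and is never decoded or stored.
function relay(
  frame: { target: string; payload: string },
  agents: Map<string, Forward>
): boolean {
  const deliver = agents.get(frame.target);
  if (!deliver) return false; // target agent offline or unknown
  deliver(frame);             // forwarded uninspected
  return true;
}
```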
Workspace Isolation
Every resource belongs to exactly one workspace.
Isolation Guarantees
Database-level
PostgreSQL Row Level Security (RLS) enforces workspace isolation at the database layer. All workspace-scoped tables have RLS policies that only allow access to rows matching the current workspace context.
Application-level
All queries are additionally scoped by workspaceId in the application code (defense-in-depth).
API-level
Guards validate workspace ownership before any operation.
Realtime-level
WebSocket rooms are isolated by workspace (workspace:{id}).
Agent-level
Agents can only access services within their workspace.
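The application-level guarantee amounts to filtering every query by the caller's workspace, on top of RLS. A minimal sketch, assuming a simplified `Service` shape rather than the real schema:

```typescript
// Illustrative service record; the real schema is not shown in the docs.
interface Service {
  id: string;
  name: string;
  workspaceId: string;
}

// Every read is scoped by workspaceId in application code, so even if
// an RLS policy were misconfigured, cross-workspace rows never surface.
function listServices(all: Service[], workspaceId: string): Service[] {
  return all.filter((s) => s.workspaceId === workspaceId);
}
```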
Who Can Access My Services?
Only authenticated members of your workspace. By default, exposed services are completely private. Workspace members can:
- Discover that your services exist
- List services in your workspace
- Connect to any of your services
Cross-Workspace Access
Services can be shared across workspace boundaries via:
Service Shares
Token-based access with permissions:
- Time-limited access
- Instant revocation
- Audit logging
- Per-service permissions
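A share check combining these properties might look like the sketch below; the field names are assumptions for illustration, not Private Connect's actual share schema:

```typescript
// Hypothetical share record: time-limited, revocable, per-service scoped.
interface ServiceShare {
  token: string;
  serviceId: string;
  expiresAt: number;     // Unix timestamp (ms)
  revoked: boolean;
  permissions: string[]; // e.g. ["connect"]
}

// A connection is allowed only while the share is unrevoked, unexpired,
// and carries the needed permission. Revoking flips one flag, which is
// what makes revocation instant.
function canConnect(share: ServiceShare, now: number): boolean {
  return (
    !share.revoked &&
    now < share.expiresAt &&
    share.permissions.includes("connect")
  );
}
```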
Public Links
Time-limited URLs with configurable restrictions:
- No authentication required
- Rate limiting
- Method and path restrictions
- Automatic expiration
Encryption
In Transit
| Connection | Encryption |
|---|---|
| Agent ↔ Hub | TLS 1.2+ required (enforced for non-localhost) |
| Hub ↔ Database | TLS (when using managed PostgreSQL) |
| Web UI ↔ API | HTTPS |
HTTPS enforcement can be bypassed for local development only.
At Rest
- Database: Encryption depends on your PostgreSQL provider
- Hosted version: Uses Railway’s managed PostgreSQL with encryption at rest
- Self-hosted: Configure your database provider’s encryption settings
End-to-End Encryption (Future)
Currently, payload data passes through the Hub as an opaque relay. For environments requiring a zero-knowledge relay, we’re considering optional agent-to-agent encryption where:
- Agents negotiate keys directly
- Hub relays encrypted packets it cannot read
- Perfect forward secrecy via ephemeral keys
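The kind of key negotiation described above can be sketched with Node's built-in X25519 support. This is not Private Connect's implementation, only an illustration of the primitive: each agent generates an ephemeral key pair, exchanges public halves (which the Hub could relay without learning anything useful), and derives the same shared secret locally:

```typescript
import { generateKeyPairSync, diffieHellman } from "node:crypto";

// Each agent creates an ephemeral X25519 key pair for this session.
const agentA = generateKeyPairSync("x25519");
const agentB = generateKeyPairSync("x25519");

// Only public keys cross the wire; each side derives the shared secret
// from its own private key plus the peer's public key.
const secretA = diffieHellman({
  privateKey: agentA.privateKey,
  publicKey: agentB.publicKey,
});
const secretB = diffieHellman({
  privateKey: agentB.privateKey,
  publicKey: agentA.publicKey,
});
// Both sides now hold the same secret; discarding the key pairs after
// the session is what provides forward secrecy.
```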
Deployment Options
- Hosted (Default)
- Self-Hosted
The production Hub at api.privateconnect.co runs on:

| Component | Provider | Region |
|---|---|---|
| API Server | Railway | US (Oregon) |
| Database | Railway PostgreSQL | US (Oregon) |
| Web Frontend | Railway | US (Oregon) |

Data Residency:
- All data resides in US (Oregon) region
- No data replication to other regions
- For EU data residency requirements, self-host in your preferred region
Agent Features
Stable Port Mapping
The same service always gets the same local port across restarts; mappings are persisted in ~/.connect/ports.json.
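A minimal sketch of file-backed stable port mapping. Only the path ~/.connect/ports.json comes from the docs; the on-disk JSON shape (service name to port) is an assumption:

```typescript
import { readFileSync, writeFileSync, existsSync } from "node:fs";

// Assumed on-disk shape: { "prod-db": 5432, "redis": 6379 }
type PortMap = Record<string, number>;

function loadPorts(path: string): PortMap {
  return existsSync(path) ? JSON.parse(readFileSync(path, "utf8")) : {};
}

function savePorts(path: string, ports: PortMap): void {
  writeFileSync(path, JSON.stringify(ports, null, 2));
}

// Reuse a previously assigned port so restarts keep the same mapping;
// only assign `fallback` the first time a service is seen.
function portFor(ports: PortMap, service: string, fallback: number): number {
  if (!(service in ports)) ports[service] = fallback;
  return ports[service];
}
```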
Always-On Mode
Run as a daemon for persistent connections.
Health Checks
Agents report health and connectivity status.
Auto-Discovery
Agents can auto-name services based on port.
Multi-Region Support (Future)
Currently, agents connect to a single Hub. Multi-region Hub federation is on the roadmap for:
- High availability deployments
- Reduced latency by connecting to nearest Hub
- Automatic failover between regions
- Cross-region service access
Performance Characteristics
Latency
- Hub overhead: ~1-2ms per request for relay
- Total latency: Hub overhead + network latency from the client agent to the Hub + network latency from the Hub to the exposing agent
- Debug mode: Additional ~1-2ms for packet capture
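The latency model above can be made concrete with a small worked example; the figures are the document's stated estimates (upper ends of the ~1-2ms ranges), not measurements:

```typescript
// Ceiling estimate of end-to-end latency through the Hub.
function totalLatencyMs(toHubMs: number, fromHubMs: number, debug = false): number {
  const hubOverhead = 2;          // upper end of ~1-2ms relay cost
  const captureCost = debug ? 2 : 0; // upper end of ~1-2ms debug-mode cost
  return hubOverhead + toHubMs + fromHubMs + captureCost;
}
```

For example, with 20ms from the client agent to the Hub and 30ms from the Hub to the exposing agent, the ceiling is 52ms, or 54ms with debug mode on.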
Throughput
- Limited by WebSocket connection bandwidth
- Typical throughput: 100+ MB/s for sustained transfers
- No artificial rate limiting at the Hub level
Connection Limits
- No hard limit on number of services per workspace
- No hard limit on concurrent tunnels
- Rate limiting recommended at load balancer level for self-hosted deployments
High Availability
What Happens if the Hub Goes Down?
- Existing TCP connections through the Hub will fail
- Agents will attempt to reconnect with exponential backoff
- No data is lost—the Hub doesn’t store payload data
- Service metadata is persisted in the database
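The reconnect schedule can be sketched as capped exponential backoff. The docs only say "exponential backoff"; the base delay, growth factor, and cap below are assumptions:

```typescript
// Delay before reconnect attempt N: doubles each attempt, capped so
// agents keep retrying at a steady interval during long outages.
function backoffMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
// attempts 0..5 -> 1000, 2000, 4000, 8000, 16000, 30000 (capped)
```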
Recommendations for Production
Open Source
The entire stack is open source under the FSL-1.1-MIT license:
- Agent: Go binary, built with the Cobra CLI framework
- API Server: NestJS (TypeScript), PostgreSQL, WebSockets
- Web UI: Next.js, React, Tailwind CSS
- SDK: TypeScript library for programmatic access
GitHub Repository
View the source code, contribute, or self-host