The coordinator is a standalone process that acts as a central registry for the Basis pub/sub network. Every TransportManager (one per process) connects to it and reports its publishers. The coordinator aggregates this information and broadcasts a NetworkInfo message back to all connected processes so that subscribers can discover where to connect.

What the coordinator does

On each update cycle (every 50 ms), the coordinator:
  1. Accepts new client connections — any process with a TransportManager connects on startup.
  2. Receives transport manager info — each client sends its current publisher list, including topics, schema IDs, and transport endpoints.
  3. Aggregates network info — builds a NetworkInfo protobuf message mapping each topic to all known publishers across all connected processes.
  4. Broadcasts to all clients — sends the aggregated NetworkInfo back to every connected process so subscribers can find publishers and negotiate connections.
  5. Manages schemas — collects MessageSchema registrations from clients and serves them on request (used by tools like basis schema print and basis topic print).
The coordinator is implemented as a single-threaded TCP server. It listens on a well-known port and exchanges framed protobuf messages bidirectionally over each connection.
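The aggregation step (3) above can be sketched with plain containers. The structs below are hypothetical simplified stand-ins for the protobuf messages; field names are illustrative, not the actual Basis schema:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Simplified stand-ins for the protobuf types: each connected
// TransportManager reports its publishers, and the coordinator merges
// every report into a single topic -> publishers map (the NetworkInfo).
struct PublisherInfo {
  std::string topic;
  std::string schema_id;
  std::string endpoint;  // e.g. "tcp://127.0.0.1:40001"
};

using TransportManagerInfo = std::vector<PublisherInfo>;
using NetworkInfo = std::map<std::string, std::vector<PublisherInfo>>;

// One aggregation pass: rebuild NetworkInfo from every client's latest report,
// so a topic published in several processes lists all of its publishers.
NetworkInfo AggregateNetworkInfo(const std::vector<TransportManagerInfo>& clients) {
  NetworkInfo info;
  for (const auto& client : clients) {
    for (const auto& pub : client) {
      info[pub.topic].push_back(pub);
    }
  }
  return info;
}
```

The real coordinator rebuilds and rebroadcasts this map on each 50 ms cycle, so every connected process sees the union of all publishers.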

Default port

The coordinator listens on TCP port 1492 (BASIS_PUBLISH_INFO_PORT), defined in:
// cpp/core/coordinator/include/basis/core/coordinator_default_port.h
constexpr uint16_t BASIS_PUBLISH_INFO_PORT = 1492;
All clients connect to 127.0.0.1:1492 by default. The port can be overridden when creating a Coordinator or CoordinatorConnector by passing a different port value.
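A minimal sketch of the override pattern, assuming the port is a defaulted argument so most call sites never mention it (the `MakeCoordinatorAddress` helper is hypothetical, not a Basis API; the constant's value is copied from the header above):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Value from coordinator_default_port.h, quoted above.
constexpr uint16_t BASIS_PUBLISH_INFO_PORT = 1492;

// Hypothetical helper: taking the port as a defaulted argument means
// callers only spell it out when overriding the default.
std::string MakeCoordinatorAddress(uint16_t port = BASIS_PUBLISH_INFO_PORT) {
  return "127.0.0.1:" + std::to_string(port);
}
```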

Starting the coordinator

After building and installing Basis, start the coordinator from a terminal:
coordinator
The binary is installed to /opt/basis/bin/coordinator and is on PATH after sourcing the environment. It runs until interrupted (Ctrl+C). You can also start it through the basis launch command by including it in a launch file process, or run it manually in the background before launching other processes.
Only one coordinator should be running on a host at a time. A second coordinator will fail to bind the port and exit immediately.

How units connect

Units connect to the coordinator through WaitForCoordinatorConnection(), which is called automatically as part of the unit startup sequence in UnitThread:
unit->WaitForCoordinatorConnection();
unit->CreateTransportManager(recorder);
unit->Initialize();
WaitForCoordinatorConnection() uses WaitForCoordinator(), which retries every second until a connection is established:
inline std::unique_ptr<CoordinatorConnector> WaitForCoordinator() {
  std::unique_ptr<CoordinatorConnector> coordinator_connector;
  while (!coordinator_connector) {
    coordinator_connector = CoordinatorConnector::Create();
    if (!coordinator_connector) {
      BASIS_LOG_WARN("No connection to the coordinator, waiting 1 second and trying again");
      std::this_thread::sleep_for(std::chrono::seconds(1));
    }
  }
  return coordinator_connector;
}
This means units will block at startup and wait indefinitely for the coordinator to become available. Start the coordinator before launching any units.
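The retry loop generalizes to any "wait until a factory succeeds" pattern. A sketch with stand-in types (`Connection`, `WaitFor`, and `AttemptsNeeded` are illustrative, not Basis APIs; the one-second sleep and warning log are elided so the example runs instantly):

```cpp
#include <cassert>
#include <functional>
#include <memory>

struct Connection {};  // stand-in for CoordinatorConnector

// Keep calling the factory until it yields a connection.
// The real code logs a warning and sleeps one second between attempts.
std::unique_ptr<Connection>
WaitFor(const std::function<std::unique_ptr<Connection>()>& create) {
  std::unique_ptr<Connection> conn;
  while (!conn) {
    conn = create();
  }
  return conn;
}

// Helper: how many attempts WaitFor makes when the factory
// fails `failures` times before succeeding.
int AttemptsNeeded(int failures) {
  int attempts = 0;
  WaitFor([&]() -> std::unique_ptr<Connection> {
    return ++attempts <= failures ? nullptr : std::make_unique<Connection>();
  });
  return attempts;
}
```

As in the real `WaitForCoordinator()`, the loop never gives up: if the factory keeps returning null, the caller blocks forever.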

CoordinatorConnector

CoordinatorConnector is the client-side class that units and tools use to communicate with the coordinator. It is a thin wrapper around a TCP connection with framed protobuf messages.
// Create a connector (returns nullptr if coordinator is not reachable)
auto connector = CoordinatorConnector::Create();

// Send this process's publisher list to the coordinator
connector->SendTransportManagerInfo(transport_manager_info);

// Register message schemas with the coordinator
connector->SendSchemas(schemas);

// Request schemas by ID (e.g., for introspection tools)
connector->RequestSchemas(schema_ids);

// Pump incoming messages (network info updates, schema responses)
connector->Update();

// Get the latest network topology
proto::NetworkInfo* info = connector->GetLastNetworkInfo();
Update() is non-blocking. It reads any available messages from the coordinator — primarily NetworkInfo updates and schema responses — and stores the latest values locally.
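A common way to frame protobuf messages over a TCP stream is a length prefix. The sketch below assumes a 4-byte host-endian length header followed by the serialized payload; the actual Basis frame layout (header fields, endianness) may differ, so treat this as an illustration of "framed protobuf messages", not the real wire format:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <optional>
#include <string>

// Prefix the payload with its length (assumes sender and receiver
// share the same endianness; a real protocol would pin one).
std::string Frame(const std::string& payload) {
  uint32_t len = static_cast<uint32_t>(payload.size());
  std::string out(reinterpret_cast<const char*>(&len), sizeof(len));
  return out + payload;
}

// Return the payload if `buffer` holds at least one complete frame,
// otherwise std::nullopt to signal "wait for more bytes".
std::optional<std::string> Unframe(const std::string& buffer) {
  if (buffer.size() < sizeof(uint32_t)) return std::nullopt;
  uint32_t len;
  std::memcpy(&len, buffer.data(), sizeof(len));
  if (buffer.size() < sizeof(len) + len) return std::nullopt;  // partial frame
  return buffer.substr(sizeof(len), len);
}
```

This is why `Update()` can be non-blocking: the connector reads whatever bytes are available, extracts any complete frames, and leaves partial frames in its buffer for the next call.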

Multi-process setup

When a launch file defines multiple processes, each process gets its own TransportManager and its own connection to the coordinator. The coordinator merges publisher information from all processes into a single NetworkInfo and distributes it to everyone. This allows a subscriber in process A to discover a publisher in process B, negotiate a TCP transport connection, and receive messages directly — the coordinator is only involved in the discovery phase, not message delivery.
Process A                      Coordinator                      Process B
    │                               │                               │
    │── SendTransportManagerInfo ──▶│                               │
    │                               │◀── SendTransportManagerInfo ──│
    │                               │                               │
    │◀───────── NetworkInfo ────────│──────── NetworkInfo ─────────▶│
    │                               │                               │
    │◀═════════════ TCP data (direct, no coordinator) ═════════════▶│

What happens without the coordinator

If the coordinator is not running:
  • CoordinatorConnector::Create() returns nullptr.
  • WaitForCoordinator() (and WaitForCoordinatorConnection()) loops indefinitely, logging a warning every second.
  • The basis CLI commands that require a coordinator connection (basis topic ls, basis topic print, basis schema print) will print an error and exit immediately.
Units launched before the coordinator will block and not proceed to Initialize() or begin processing until the coordinator is reachable. Ensure the coordinator is started first.

Network transport negotiation

The coordinator facilitates transport negotiation but does not carry message data. After receiving NetworkInfo, a TransportManager inspects the publisher endpoints for each subscribed topic and establishes direct connections using the appropriate transport plugin (e.g., net_tcp for cross-process TCP, inproc for in-process zero-copy). Transport endpoint addresses are included in the publisher info that each TransportManager reports to the coordinator. The schema_id field in each publisher entry identifies the message type and serialization format, enabling schema lookup via basis schema print or runtime deserialization.
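The transport decision can be sketched as a simple check on publisher locality (types and names below are illustrative, not the Basis plugin API):

```cpp
#include <cassert>
#include <string>

// Hypothetical simplified endpoint record from NetworkInfo.
struct PublisherEndpoint {
  int process_id;        // which process hosts the publisher
  std::string tcp_addr;  // e.g. "127.0.0.1:40001"
};

// Prefer in-process zero-copy when the publisher lives in the same
// process as the subscriber; otherwise connect directly over TCP.
std::string ChooseTransport(const PublisherEndpoint& pub, int my_process_id) {
  return pub.process_id == my_process_id ? "inproc" : "net_tcp";
}
```

In the real system this choice happens inside the TransportManager after a NetworkInfo update, and the coordinator plays no further part once the direct connection is established.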
