The physical relationship between federates, cores, and brokers in a HELICS co-simulation is called its architecture or topology. Choosing the right architecture matters because it affects both the configuration complexity and the runtime performance of the co-simulation. Simple co-simulations on a single machine need little architectural planning, while large federations running across many compute nodes benefit significantly from deliberate design. This guide describes the most common patterns and explains when each is appropriate.
Architecture building blocks
Every HELICS co-simulation has three types of components:
- Federates — The individual simulator instances. Each federate has a unique name and participates in time synchronization with all other federates.
- Cores — The communication layer embedded within each simulator. A core connects to a broker and manages the federate’s interfaces (publications, subscriptions, endpoints). In most cases, one core contains exactly one federate.
- Brokers — The message routing components that coordinate time synchronization and pass data between federates. Every federation has at least one broker. Brokers can be organized into hierarchies.
Single-broker federation (most common)
The most common HELICS architecture places all federates on the same machine, each connected through its own core to a single broker. This is the pattern used in all of the HELICS fundamental examples and is the right starting point for any new co-simulation. When launching the broker, the -f 3 flag tells it to wait for exactly three federates to join before initialization proceeds.
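A minimal PyHELICS sketch of this pattern (names such as mainbroker and Federate1 are hypothetical; in practice the broker and each federate usually run as separate processes):

```python
import helics as h

# Launched once, by any process: a broker that waits for exactly three federates
# (same effect as running `helics_broker -f 3` from the command line).
broker = h.helicsCreateBroker("zmq", "mainbroker", "-f 3")

# Run by each of the three federate processes: connect through a private core.
fedinfo = h.helicsCreateFederateInfo()
h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "zmq")
h.helicsFederateInfoSetCoreInitString(fedinfo, "--federates=1")
fed = h.helicsCreateValueFederate("Federate1", fedinfo)

h.helicsFederateEnterExecutingMode(fed)  # blocks until all three federates join
# ... time-stepping loop ...
h.helicsFederateFinalize(fed)
```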
Multiple federates on a single core
For simulators that are multi-threaded by nature—such as a single application managing many controllers—multiple federates can share a single core. This replaces inter-process communication between those federates with cheaper inter-thread communication.
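A minimal sketch of two federates sharing one core (federate and core names are hypothetical; in a real application each federate would typically run in its own thread):

```python
import helics as h

# One core, declared to host two federates.
core = h.helicsCreateCore("zmq", "shared_core", "--federates=2")

# Both federates attach to the existing core by name instead of creating their own.
fedinfo = h.helicsCreateFederateInfo()
h.helicsFederateInfoSetCoreName(fedinfo, "shared_core")

fed_a = h.helicsCreateValueFederate("ControllerA", fedinfo)
fed_b = h.helicsCreateValueFederate("ControllerB", fedinfo)
```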
Multi-machine federation (distributed)
When a federation is too computationally demanding for a single machine, federates can be distributed across multiple compute nodes. All federates still connect to a common broker, which can run on one of the compute nodes or on a dedicated node. Each federate's core_init_string (in its JSON configuration) supplies the broker's network address so that remote cores can find it.
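A sketch of the remote-node side, assuming the broker listens at 192.168.0.10 on HELICS's default ZMQ port 23404 (the address and the --broker_address flag spelling are illustrative; verify them against your HELICS version and deployment). The same options can also be placed in the JSON file as core_init_string:

```python
import helics as h

# On a remote compute node: point this federate's core at the shared broker.
fedinfo = h.helicsCreateFederateInfo()
h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "zmq")
h.helicsFederateInfoSetCoreInitString(
    fedinfo, "--federates=1 --broker_address=tcp://192.168.0.10:23404"
)
fed = h.helicsCreateValueFederate("RemoteFederate", fedinfo)
```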
Multi-broker hierarchy
When federates on the same compute node communicate frequently with each other but infrequently with federates on other nodes, placing a local broker on each node keeps most message traffic local. Only inter-node messages travel up the broker hierarchy to the root broker.
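A sketch of a two-level hierarchy, with the root on 10.0.0.1 and one local broker per node. The addresses, counts, and flag spellings (--subbrokers, --broker_address) are assumptions for illustration; verify them against helics_broker --help for your version:

```python
import helics as h

# Root broker: waits for two sub-brokers rather than for federates directly.
root = h.helicsCreateBroker("zmq", "root_broker", "--subbrokers=2")

# On each compute node: a local broker serving two local federates and
# forwarding only inter-node traffic up to the root.
node_broker = h.helicsCreateBroker(
    "zmq", "node1_broker", "-f 2 --broker_address=tcp://10.0.0.1:23404"
)
```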
Core type selection
The coreType field in the federate's JSON configuration (or --coretype on the command line) determines the messaging technology used between the federate's core and its broker. All federates and brokers in a given segment of the federation must use compatible core types.
General performance ranking from best to worst for typical use cases: MPI > IPC > UDP > TCP > ZMQ.
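For example, a broker and federate that both use TCP (a sketch; "tcp" could be any of the types described below, and names are hypothetical):

```python
import helics as h

# Broker and federate cores in the same segment must use compatible core types.
broker = h.helicsCreateBroker("tcp", "tcp_broker", "-f 1")

fedinfo = h.helicsCreateFederateInfo()
h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "tcp")  # JSON: "coreType": "tcp"
fed = h.helicsCreateValueFederate("TcpFederate", fedinfo)
```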
ZMQ (default)
Best for local development and multi-machine federations over standard networks. Reliable delivery, automatic reconnection, supports any number of machines. Default when coreType is omitted.
IPC
Fastest option for single-machine federations. Uses Boost interprocess communication (memory-mapped files). Cannot span multiple machines and does not support broker hierarchies.
TCP
Alternative to ZMQ for platforms where ZMQ is unavailable. Uses the asio library. Better raw throughput than ZMQ in some configurations since it avoids ZMQ overhead.
MPI
Designed for HPC cluster environments where MPI is installed and managed by the job scheduler. Provides the best performance in those environments. Still under active development.
Multi-protocol (multi-core) federation
In some scenarios, different parts of a federation must use different core types—for example, a set of federates running inside an HPC cluster using MPI and a set outside the cluster using ZMQ. HELICS supports this through a multi-broker configuration that accepts connections from multiple core types simultaneously: the broker is launched with both the --zmq and --mpi flags so that it accepts connections from both core types. See the HELICS multibroker example for a complete implementation.
Performance considerations
| Architecture | Federation size | Network | Relative complexity |
|---|---|---|---|
| Single broker, single machine | Small to medium | None required | Low |
| Single broker, multiple machines | Small to medium | LAN/WAN | Low |
| Multi-broker hierarchy | Large | LAN/WAN | Medium |
| MPI cluster with ZMQ bridge | Very large | HPC cluster | High |
- Broker placement — Put the broker physically close (low latency) to the federates it serves. In multi-machine setups, the broker’s network location matters.
- Message volume — Federates that exchange many messages at every time step benefit most from being on the same local broker.
- Time step granularity — Many small time steps generate more broker overhead than fewer large steps. Match period to the actual temporal resolution needed.
- Core type — IPC outperforms ZMQ on a single machine. For distributed co-simulations, ZMQ is the practical default unless you are on an HPC system. The sketch below shows setting both the period and the core type in code.
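As an illustration of the last two points, a PyHELICS sketch (the federate name and the 60-second period are arbitrary example values):

```python
import helics as h

# Match the federate's period to the model's real temporal resolution, and pick
# the core type for the deployment (IPC on one machine, ZMQ across machines).
fedinfo = h.helicsCreateFederateInfo()
h.helicsFederateInfoSetCoreTypeFromString(fedinfo, "ipc")  # single-machine run
h.helicsFederateInfoSetTimeProperty(
    fedinfo, h.HELICS_PROPERTY_TIME_PERIOD, 60.0  # 60 s steps
)
fed = h.helicsCreateValueFederate("SlowController", fedinfo)
```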