

When you add or update a template on one cluster node, other nodes do not automatically receive it — each node reads templates from its own local ./templates/ directory. Template sync solves this by zipping the template directory on the source node, dispatching it to the target node via Hazelcast IExecutorService, and extracting it on the other side. You need to run a sync any time you update a template on the Master and want Wrapper nodes to use the new files when creating instances.
Template sync is a push operation: you run it from the node that has the files and push to the node that needs them. Both the nodeId attribute and the raw Hazelcast member UUID are accepted as the target identifier.

How template sync works

The TemplateSyncService handles the full lifecycle of a sync operation:
  1. Resolve — The pattern is matched against the local ./templates/ directory tree to produce a list of (group, name) pairs.
  2. Zip — Each matched template directory is recursively walked and written into an in-memory ZIP archive (ByteArrayOutputStream).
  3. Dispatch — The ZIP bytes are embedded in a TemplateSyncTask (serialized as a Gson JSON string) and submitted to the target Hazelcast member via IExecutorService.submitToMember().
  4. Extract — The target node receives the task, calls receiveSync(), and extracts the ZIP into ./templates/<group>/<name>/. Existing files at the destination are overwritten. The extractor guards against zip-slip path traversal attacks by verifying every entry resolves within the target directory.
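The zip step of this pipeline can be sketched with plain JDK APIs. The sketch below is illustrative, not the actual TemplateSyncService source: the class and method names are hypothetical, and the Hazelcast dispatch appears only as a comment because it requires a live cluster.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class TemplateZipSketch {

    /**
     * Recursively walks a template directory and writes every regular file
     * into an in-memory ZIP archive, mirroring the "Zip" step above.
     */
    static byte[] zipTemplate(Path templateDir) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(buffer);
             Stream<Path> walk = Files.walk(templateDir)) {
            List<Path> files = walk.filter(Files::isRegularFile).sorted().toList();
            for (Path file : files) {
                // Entry names are relative to the template root, '/'-separated.
                String entryName = templateDir.relativize(file).toString().replace('\\', '/');
                zip.putNextEntry(new ZipEntry(entryName));
                Files.copy(file, zip);
                zip.closeEntry();
            }
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("template-");
        Files.createDirectories(dir.resolve("plugins"));
        Files.writeString(dir.resolve("server.properties"), "motd=hello");
        Files.writeString(dir.resolve("plugins/config.yml"), "enabled: true");

        byte[] zipBytes = zipTemplate(dir);
        System.out.println("Zipped " + zipBytes.length + " bytes");

        // In the real service the bytes would then be wrapped in a task and
        // dispatched to the target member, roughly:
        //   executor.submitToMember(new TemplateSyncTask(group, name, zipBytes), member);
    }
}
```

Because the archive is built in memory, very large templates (bundled worlds, plugin jars) increase heap pressure on the source node during a sync.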

Pattern syntax

The <pattern> argument controls which templates are included in a single sync operation.
| Pattern | Resolves to |
| --- | --- |
| `<group>/<name>` | The single template at `./templates/<group>/<name>/` |
| `<group>/*` | All subdirectory templates inside `./templates/<group>/` |
| `*` | Every template in every group under `./templates/` |
Patterns are resolved against the source node’s local filesystem at the time the command runs. If a pattern matches no directories, the command prints No templates found matching pattern: <pattern> and exits without dispatching any tasks.
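Under these rules, resolution amounts to a few directory listings. The sketch below approximates the behaviour with JDK filesystem calls; `resolvePattern` is a hypothetical name, not the real service method.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class PatternResolveSketch {

    /**
     * Resolves a sync pattern against the local templates root into
     * "group/name" identifiers, following the three pattern forms above.
     */
    static List<String> resolvePattern(Path templatesRoot, String pattern) throws IOException {
        List<String> matches = new ArrayList<>();
        if ("*".equals(pattern)) {
            // Every template in every group.
            try (Stream<Path> groups = Files.list(templatesRoot)) {
                for (Path group : groups.filter(Files::isDirectory).sorted().toList()) {
                    matches.addAll(resolvePattern(templatesRoot, group.getFileName() + "/*"));
                }
            }
        } else if (pattern.endsWith("/*")) {
            // All templates inside one group.
            Path group = templatesRoot.resolve(pattern.substring(0, pattern.length() - 2));
            if (Files.isDirectory(group)) {
                try (Stream<Path> names = Files.list(group)) {
                    for (Path name : names.filter(Files::isDirectory).sorted().toList()) {
                        matches.add(group.getFileName() + "/" + name.getFileName());
                    }
                }
            }
        } else if (Files.isDirectory(templatesRoot.resolve(pattern))) {
            // Exact <group>/<name> match.
            matches.add(pattern);
        }
        return matches;
    }
}
```

Note that only directories count as templates: a stray file directly under a group directory would not match `<group>/*`.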

Syncing templates

Step 1: Place templates on the source node

Templates must be present at ./templates/<group>/<name>/ on the node you are running the command from. The directory structure is:
templates/
  global/
    server/
      server.properties
      plugins/
  lobby/
    default/
      server.properties
      world/
If you are running inside Docker, copy files into the container’s /data/templates/ volume path before syncing.
Step 2: Identify the target node

Use cluster nodes to find the nodeId values (or Hazelcast UUIDs) of all connected members:
cluster nodes
=== Cluster Nodes ===
  node-1 - a1b2c3d4-e5f6-7890-abcd-ef1234567890 (local)
  node-2 - 9f8e7d6c-b5a4-3210-fedc-ba0987654321
Use the nodeId value (e.g. node-2) as the <targetNode> argument. The raw UUID also works.
Step 3: Run the sync command

Execute template sync with your chosen pattern and the target node ID:
template sync global/* node-2
The console prints one line per synced template, followed by a summary:
Synced global/server to node-2
Synced global/base to node-2
Template sync complete: 2 succeeded, 0 failed
Step 4: Verify on the target node

On the target node, use template list to confirm the templates arrived:
template list
=== Templates ===
  global/server
  global/base
  lobby/default
New instances created on that node will now use the updated template files.

What happens on the receiving node

When the Hazelcast executor task arrives, the target node calls TemplateSyncService.receiveSync():
  • The ZIP bytes are extracted into ./templates/<group>/<name>/ using ZipInputStream.
  • Missing parent directories are created automatically.
  • Existing files at the destination are replaced (via StandardCopyOption.REPLACE_EXISTING).
  • Any ZIP entry that resolves outside the target directory is skipped and a warning is logged (zip-slip protection).
A success log entry appears in the target node’s output:
Received and extracted template global/server
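The extraction path, including the zip-slip guard, can be sketched using only java.util.zip. This is a hedged approximation of what receiveSync() does; the class and method names here are illustrative, not the real source.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class TemplateExtractSketch {

    /**
     * Extracts ZIP bytes into targetDir, skipping any entry that would
     * resolve outside it (zip-slip protection). Returns files written.
     */
    static int extract(byte[] zipBytes, Path targetDir) throws IOException {
        int written = 0;
        Path root = targetDir.normalize();
        try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            for (ZipEntry entry; (entry = zip.getNextEntry()) != null; ) {
                Path destination = root.resolve(entry.getName()).normalize();
                if (!destination.startsWith(root)) {
                    // Zip-slip guard: warn and skip entries escaping the target.
                    System.err.println("Skipping suspicious ZIP entry: " + entry.getName());
                    continue;
                }
                if (entry.isDirectory()) {
                    Files.createDirectories(destination);
                } else {
                    // Create missing parent directories automatically.
                    Files.createDirectories(destination.getParent());
                    Files.copy(zip, destination, StandardCopyOption.REPLACE_EXISTING);
                    written++;
                }
            }
        }
        return written;
    }
}
```

The normalize-then-startsWith check is the standard defence against archive entries such as `../../etc/passwd` that would otherwise escape the template directory.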

Failure cases

| Situation | Behaviour |
| --- | --- |
| Pattern matches no local directories | Command prints "No templates found matching pattern" and exits; nothing is dispatched |
| Target node not found in cluster | `syncTemplate` returns `false`; a warning is logged; the failure is counted in the summary |
| ZIP extraction fails on the target | `receiveSync` returns `false`; an error is logged on the target node |
| Source directory disappears mid-zip | An exception is thrown; the failure is counted in the summary |
Node IDs must match exactly. node-2 and Node-2 are treated as different identifiers. Use cluster nodes to copy the exact string before running a sync.

S3 storage as an alternative

For clusters with many nodes, pushing templates to each node individually adds operational overhead. The storage-s3 extension provides a centralized template backend: templates are uploaded to an S3 bucket once and downloaded by any node on demand, without running template sync at all. See S3 Storage Extension for setup and configuration details.
Use template sync for quick one-off updates during development or when only a subset of nodes needs a new template. Use S3 storage when all nodes in a larger cluster must share a consistent template set.
