Write the two functions that drive every Emergent simulation: initial_data_function seeds each agent at startup, and timestep_function updates the graph on every tick.
Every AgentModel requires two behavior functions before you can run a simulation. initial_data_function decides what data each agent starts with, and timestep_function decides how agents update on each tick. Both receive the AgentModel instance as their only argument, giving them full access to the graph and all parameters. See Agent model for where these functions fit in the simulation lifecycle.
initial_data_function is called once per node during initialize_graph(). The returned dictionary is merged into that node's attribute store: every key you return becomes a node attribute accessible later via graph.nodes[node]["key"].
If your function does not return a dictionary, initialize_graph() will fail when it tries to call node.update(initial_data). Always return a dict, even if it contains only one key.
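The merge can be sketched with a plain dict standing in for a node's attribute store (the `initial_data` function here is illustrative, not part of Emergent's API):

```python
import random

def initial_data(model):
    # every returned key becomes a node attribute
    return {"opinion": random.uniform(0, 1), "updates": 0}

node_store = {}                        # stands in for graph.nodes[n]'s attribute dict
node_store.update(initial_data(None))  # the merge initialize_graph() performs per node
assert set(node_store) == {"opinion", "updates"}
```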
Both functions receive the AgentModel instance, so they can read any parameter with subscript notation:
```python
import random

def initial_data(model):
    low = model["opinion_min"]    # custom parameter set before initialize_graph()
    high = model["opinion_max"]
    return {"opinion": random.uniform(low, high)}

def timestep(model):
    rate = model["influence_rate"]  # read at runtime each tick
    graph = model.get_graph()
    # use rate to update nodes
    ...
```
Store anything your functions need as a model parameter rather than a global variable. This keeps simulations self-contained and easy to vary across runs.
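Assuming parameters are read with the same subscript notation shown above, varying a run is then just changing a parameter before running. A minimal sketch (the `FakeModel` dict here is a hypothetical stand-in for AgentModel's parameter store, not Emergent's API):

```python
class FakeModel(dict):
    """Minimal stand-in for AgentModel's parameter store."""

def influence(model, current, target):
    # behavior depends only on model parameters, never on globals
    return current + model["influence_rate"] * (target - current)

fast = FakeModel(influence_rate=0.5)
slow = FakeModel(influence_rate=0.1)
assert influence(fast, 0.0, 1.0) == 0.5
assert influence(slow, 0.0, 1.0) == 0.1
```

The same function produces different dynamics per run purely through the parameter store, which is what makes parameter sweeps straightforward.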
Call model.get_graph() to get the underlying nx.Graph object. Node data is a dict-like store on each node:
```python
graph = model.get_graph()

# Read a single node's attribute
value = graph.nodes[0]["opinion"]

# Iterate all nodes with their data
for node, data in graph.nodes(data=True):
    print(node, data["opinion"])

# Update a single attribute
graph.nodes[0]["opinion"] = 0.42

# Merge multiple attributes at once
graph.nodes[0].update({"opinion": 0.42, "confidence": 0.9})
```
Each agent picks a random neighbor and averages their values. This is the canonical opinion dynamics pattern.
```python
import random

def timestep(model):
    graph = model.get_graph()
    updates = {}
    for node in graph.nodes():
        neighbors = list(graph.neighbors(node))
        if not neighbors:
            continue
        partner = random.choice(neighbors)
        avg = (graph.nodes[node]["opinion"] + graph.nodes[partner]["opinion"]) / 2
        updates[node] = avg
    # Write all updates after computing them
    for node, opinion in updates.items():
        graph.nodes[node]["opinion"] = opinion

model.set_timestep_function(timestep)
```
Every agent updates based on the current global state — for example, moving toward the population mean.
```python
import numpy as np

def timestep(model):
    graph = model.get_graph()
    opinions = [data["opinion"] for _, data in graph.nodes(data=True)]
    global_mean = np.mean(opinions)
    rate = model["influence_rate"]
    for node in graph.nodes():
        current = graph.nodes[node]["opinion"]
        graph.nodes[node]["opinion"] = current + rate * (global_mean - current)
```
Because every node reads the opinions list computed before any writes, there is no ordering bias in this pattern.
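That claim can be checked with a stand-in on plain dicts (no Emergent or networkx required): because the mean is snapshotted before any writes, every visiting order produces identical results.

```python
# Stand-in state: node -> opinion
opinions = {0: 0.1, 1: 0.9, 2: 0.4}
rate = 0.5

def tick(order):
    state = dict(opinions)
    mean = sum(state.values()) / len(state)   # snapshot before any writes
    for node in order:
        state[node] = state[node] + rate * (mean - state[node])
    return state

# Any visiting order gives the same result
assert tick([0, 1, 2]) == tick([2, 0, 1])
```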
Agents only update when a condition is met — for example, when a neighbor’s value differs by more than a threshold.
```python
def timestep(model):
    graph = model.get_graph()
    threshold = model["interaction_threshold"]
    updates = {}
    for node in graph.nodes():
        opinion = graph.nodes[node]["opinion"]
        neighbors = list(graph.neighbors(node))
        compatible = [
            graph.nodes[n]["opinion"]
            for n in neighbors
            if abs(graph.nodes[n]["opinion"] - opinion) <= threshold
        ]
        if compatible:
            updates[node] = sum(compatible + [opinion]) / (len(compatible) + 1)
    for node, opinion in updates.items():
        graph.nodes[node]["opinion"] = opinion
```
The AgentModel instance does not expose the graph directly. You must call model.get_graph() to retrieve it:
```python
# Wrong — AgentModel has no .nodes attribute
def timestep(model):
    for node in model.nodes():  # AttributeError
        ...

# Correct
def timestep(model):
    graph = model.get_graph()
    for node in graph.nodes():
        ...
```
Modifying nodes while iterating
Writing values back to the graph inside the same loop that reads them causes agents processed later in the tick to see already-updated values from earlier agents. This introduces an ordering bias that is usually unintended.
```python
# Wrong — later nodes see updated opinions from earlier nodes
def timestep(model):
    graph = model.get_graph()
    for node in graph.nodes():
        neighbors = list(graph.neighbors(node))
        if not neighbors:
            continue
        avg = sum(graph.nodes[n]["opinion"] for n in neighbors) / len(neighbors)
        graph.nodes[node]["opinion"] = avg  # written immediately

# Correct — compute all new values first, then write
def timestep(model):
    graph = model.get_graph()
    updates = {}
    for node in graph.nodes():
        neighbors = list(graph.neighbors(node))
        if not neighbors:
            continue
        avg = sum(graph.nodes[n]["opinion"] for n in neighbors) / len(neighbors)
        updates[node] = avg
    for node, opinion in updates.items():
        graph.nodes[node]["opinion"] = opinion
```
Collect new values into a separate dict (as shown above) before writing any of them back. This ensures every agent’s update is based on the state at the start of the tick.
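A tiny stand-in on a 3-node path graph (plain dicts, neighbor-averaging rule) makes the difference concrete: the in-place version depends on visiting order, while the two-phase version does not.

```python
edges = {0: [1], 1: [0, 2], 2: [1]}   # path graph 0 - 1 - 2
start = {0: 0.0, 1: 1.0, 2: 0.0}

def in_place(order):
    op = dict(start)
    for n in order:                    # reads see earlier writes from this tick
        op[n] = sum(op[m] for m in edges[n]) / len(edges[n])
    return op

def two_phase(order):
    op = dict(start)
    updates = {n: sum(op[m] for m in edges[n]) / len(edges[n]) for n in order}
    op.update(updates)                 # all reads happen before any write
    return op

assert in_place([0, 1, 2]) != in_place([2, 1, 0])    # ordering bias
assert two_phase([0, 1, 2]) == two_phase([2, 1, 0])  # order-independent
```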
initial_data_function not returning a dict
Emergent calls node.update(initial_data) on the return value. If your function returns None or any non-dict type, initialize_graph() will raise a TypeError.
```python
import random

# Wrong — returns a number, not a dict
def initial_data(model):
    return random.random()

# Correct
def initial_data(model):
    return {"opinion": random.random()}
```