
Sinter is a companion library built on top of Stim for high-throughput Monte Carlo sampling of quantum error correction circuits. It handles the repetitive bookkeeping of a QEC experiment — distributing work across CPU cores, adaptively sizing batches, recording running statistics in a resumable CSV file, and plotting results — so you can focus on the circuits and decoders you care about.

Installation

pip install sinter
Installing sinter also makes the sinter command available on your PATH inside the active virtual environment. You will also need a decoder, for example:
pip install pymatching

How Sinter works

Sinter takes one or more Stim circuits annotated with noise, detectors, and logical observables. For each circuit it:
  1. Generates the detector error model using stim.Circuit.detector_error_model(decompose_errors=True).
  2. Configures the specified decoder from the DEM.
  3. Uses stim to sample detection events in batches.
  4. Passes each batch to the decoder and counts logical errors.
  5. Records cumulative statistics (shots, errors, discards, seconds) per circuit in CSV format.
  6. Repeats until the per-circuit stopping criteria (max_shots or max_errors) are satisfied.
All of this happens in parallel across as many worker processes as you specify.
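The control flow in steps 3 through 6 can be sketched in isolation. The snippet below is a toy stand-in, not sinter's implementation: the coin-flip "decoder outcome" replaces real detection-event sampling and decoding, and `batch_size` is an illustrative choice. What it shows is the adaptive stop-on-`max_shots`-or-`max_errors` loop:

```python
import random

def sample_until_done(shot_error_rate, max_shots, max_errors, batch_size=1024):
    """Toy stand-in for sinter's sampling loop: draw batches of shots and
    stop once either max_shots or max_errors is reached."""
    shots = errors = 0
    while shots < max_shots and errors < max_errors:
        batch = min(batch_size, max_shots - shots)
        # In sinter, stim samples detection events here and the decoder
        # predicts the logical observables; we fake that with a coin flip.
        errors += sum(random.random() < shot_error_rate for _ in range(batch))
        shots += batch
    return shots, errors

random.seed(0)
shots, errors = sample_until_done(0.01, max_shots=1_000_000, max_errors=1000)
```

Because the loop checks the stopping criteria only between batches, the final counts can overshoot `max_errors` slightly; the same is true of the real statistics sinter records.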

Python API workflow

1. Define tasks

A sinter.Task bundles a circuit with optional metadata and decoder hints:
import stim
import sinter

def generate_tasks():
    for p in [0.001, 0.005, 0.01]:
        for d in [3, 5]:
            yield sinter.Task(
                circuit=stim.Circuit.generated(
                    "surface_code:rotated_memory_x",
                    rounds=d,
                    distance=d,
                    after_clifford_depolarization=p,
                ),
                json_metadata={
                    'p': p,
                    'd': d,
                },
            )

2. Collect statistics

Pass the tasks to sinter.collect(). It returns a list of sinter.TaskStats objects when all stopping criteria are met:
samples = sinter.collect(
    num_workers=4,
    max_shots=1_000_000,
    max_errors=1000,
    tasks=generate_tasks(),
    decoders=['pymatching'],
)
Key parameters:
Parameter     Description
num_workers   Number of parallel worker processes
max_shots     Stop sampling a circuit after this many shots
max_errors    Stop sampling a circuit after this many logical errors
tasks         Iterable of sinter.Task objects
decoders      List of decoder names (e.g. ['pymatching'])

3. Print and plot results

import matplotlib.pyplot as plt

# Print as CSV
print(sinter.CSV_HEADER)
for stat in samples:
    print(stat.to_csv_line())

# Plot error rate curves
fig, ax = plt.subplots(1, 1)
sinter.plot_error_rate(
    ax=ax,
    stats=samples,
    group_func=lambda stat: f"d={stat.json_metadata['d']}",
    x_func=lambda stat: stat.json_metadata['p'],
)
ax.loglog()
ax.set_xlabel('Physical Error Rate')
ax.set_ylabel('Logical Error Probability (per shot)')
ax.set_title('Surface code threshold')
ax.legend()
fig.savefig('threshold.png')
plt.show()
Calls to sinter.collect() must be made from code guarded by if __name__ == '__main__':. Sinter uses Python multiprocessing, and without this guard the worker subprocesses will re-execute module-level code on startup.

Complete example

The following reproduces the example from the Sinter README:
import stim
import sinter
import matplotlib.pyplot as plt


def generate_example_tasks():
    for p in [0.001, 0.005, 0.01]:
        for d in [3, 5]:
            yield sinter.Task(
                circuit=stim.Circuit.generated(
                    rounds=d,
                    distance=d,
                    after_clifford_depolarization=p,
                    code_task='surface_code:rotated_memory_x',
                ),
                json_metadata={'p': p, 'd': d},
            )


def main():
    samples = sinter.collect(
        num_workers=4,
        max_shots=1_000_000,
        max_errors=1000,
        tasks=generate_example_tasks(),
        decoders=['pymatching'],
    )

    print(sinter.CSV_HEADER)
    for sample in samples:
        print(sample.to_csv_line())

    fig, ax = plt.subplots(1, 1)
    sinter.plot_error_rate(
        ax=ax,
        stats=samples,
        group_func=lambda stat: f"Rotated Surface Code d={stat.json_metadata['d']}",
        x_func=lambda stat: stat.json_metadata['p'],
    )
    ax.loglog()
    ax.set_ylim(1e-5, 1)
    ax.grid()
    ax.set_title('Logical Error Rate vs Physical Error Rate')
    ax.set_ylabel('Logical Error Probability (per shot)')
    ax.set_xlabel('Physical Error Rate')
    ax.legend()
    fig.savefig('plot.png')
    plt.show()


if __name__ == '__main__':
    main()
Example CSV output:
     shots,    errors,  discards, seconds,decoder,strong_id,json_metadata
   1000000,       837,         0,    36.6,pymatching,9f7e20c...,"{""d"":3,""p"":0.001}"
     53498,      1099,         0,    6.52,pymatching,3f40432...,"{""d"":3,""p"":0.005}"
     16269,      1023,         0,    3.23,pymatching,17b2e0c...,"{""d"":3,""p"":0.01}"
   1000000,       151,         0,    77.3,pymatching,e179a18...,"{""d"":5,""p"":0.001}"
     61569,      1001,         0,    24.5,pymatching,2fefcc3...,"{""d"":5,""p"":0.005}"
     11363,      1068,         0,    12.5,pymatching,a4dec28...,"{""d"":5,""p"":0.01}"
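Given rows like these, the per-shot logical error rate is errors / (shots - discards). Sinter provides read_stats_from_csv_files for loading this format, but the layout is plain CSV, so it can also be read with the standard library alone. A minimal sketch (the abbreviated strong_id values are placeholders copied from the sample output above):

```python
import csv
import io

CSV_TEXT = """\
shots,errors,discards,seconds,decoder,strong_id,json_metadata
1000000,837,0,36.6,pymatching,9f7e20c...,"{""d"":3,""p"":0.001}"
16269,1023,0,3.23,pymatching,17b2e0c...,"{""d"":3,""p"":0.01}"
"""

def shot_error_rates(csv_text):
    """Map strong_id -> logical errors per non-discarded shot."""
    rates = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        shots = int(row["shots"])
        errors = int(row["errors"])
        discards = int(row["discards"])
        rates[row["strong_id"]] = errors / (shots - discards)
    return rates

rates = shot_error_rates(CSV_TEXT)
```

Note that the doubled quotes inside the json_metadata column are standard CSV escaping, which csv.DictReader undoes automatically.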

CLI usage

Generate circuit files and collect statistics across all of them in parallel:
# Generate circuits
mkdir -p circuits
python -c "
import stim
for p in [0.001, 0.005, 0.01]:
    for d in [3, 5]:
        with open(f'circuits/d={d},p={p},b=X,type=rotated_surface_memory.stim', 'w') as f:
            c = stim.Circuit.generated(
                rounds=d, distance=d,
                after_clifford_depolarization=p,
                code_task='surface_code:rotated_memory_x')
            print(c, file=f)
"

# Collect statistics (resumable)
sinter collect \
    --processes auto \
    --circuits circuits/*.stim \
    --metadata_func auto \
    --decoders pymatching \
    --max_shots 1_000_000 \
    --max_errors 1000 \
    --save_resume_filepath stats.csv
--metadata_func auto parses filenames like d=3,p=0.001,b=X,type=rotated_surface_memory.stim into a JSON dictionary automatically.
--save_resume_filepath enables merge mode: if the process is interrupted, restart it and it will pick up where it left off without losing data.
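To make the filename convention concrete, here is a rough stdlib sketch of the kind of key=value parsing --metadata_func auto performs. The exact rules are sinter's own; this function is only an illustration of the convention:

```python
import os

def parse_filename_metadata(path):
    """Split a stem like 'd=3,p=0.001,b=X,type=...' into a dict,
    converting numeric-looking values to int or float."""
    stem = os.path.splitext(os.path.basename(path))[0]
    metadata = {}
    for part in stem.split(","):
        key, _, value = part.partition("=")
        # Try int first, then float, else keep the raw string.
        for cast in (int, float):
            try:
                value = cast(value)
                break
            except ValueError:
                pass
        metadata[key] = value
    return metadata

meta = parse_filename_metadata(
    "circuits/d=3,p=0.001,b=X,type=rotated_surface_memory.stim")
```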

CSV output format

Each row in the CSV file describes statistics accumulated for one (circuit, decoder) pair over some number of shots. Multiple rows with the same strong_id can be summed to get total statistics.
Column         Type   Description
shots          int    Total number of times the circuit was sampled
errors         int    Number of shots where the decoder predicted the wrong logical observable
discards       int    Shots discarded due to postselection (not counted as errors)
seconds        float  Total CPU core-seconds spent sampling and decoding
decoder        str    Name of the decoder used
strong_id      str    Cryptographic hash of the circuit, DEM, decoder, and metadata; prevents accidentally merging rows from different circuits
json_metadata  json   Free-form metadata associated with the task (e.g. {"d": 3, "p": 0.001})
Use sinter.read_stats_from_csv_files("stats.csv") to load saved statistics back into Python as a list of sinter.TaskStats objects for further analysis or plotting.
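Because rows sharing a strong_id are additive, merging saved statistics is just summing the count columns. A stdlib sketch of that merge, using hypothetical row values for illustration:

```python
from collections import defaultdict

def merge_rows(rows):
    """Sum shots/errors/discards/seconds across rows sharing a strong_id."""
    totals = defaultdict(
        lambda: {"shots": 0, "errors": 0, "discards": 0, "seconds": 0.0})
    for row in rows:
        t = totals[row["strong_id"]]
        t["shots"] += row["shots"]
        t["errors"] += row["errors"]
        t["discards"] += row["discards"]
        t["seconds"] += row["seconds"]
    return dict(totals)

# Two partial runs of the same (circuit, decoder) pair, e.g. before and
# after an interrupted collection was resumed.
merged = merge_rows([
    {"strong_id": "9f7e20c...", "shots": 500_000, "errors": 400,
     "discards": 0, "seconds": 18.0},
    {"strong_id": "9f7e20c...", "shots": 500_000, "errors": 437,
     "discards": 0, "seconds": 18.6},
])
```

This is why the strong_id hash matters: it guarantees that only rows describing the exact same circuit, DEM, decoder, and metadata ever get summed together.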