Documentation Index

Fetch the complete documentation index at: https://mintlify.com/GridOPTICS/GridPACK/llms.txt

Use this file to discover all available pages before exploring further.

GridPACK is a C++ toolkit designed to let power system engineers build high-performance computing applications without getting bogged down in parallel decomposition, inter-processor data transfers, or matrix index bookkeeping. The framework encapsulates these concerns behind high-level abstractions so that developers can concentrate on the physics and mathematics of their problems. Four cross-cutting capabilities — network topology, grid components, algebraic mappings, and linear/nonlinear solvers — were identified from real power grid applications (powerflow, contingency analysis, state estimation, and dynamic simulation) and codified into reusable modules.

The four-layer model

GridPACK organizes its functionality into four tightly interlocked layers. The layers build on each other: the Network layer stores topology, Components decorate that topology with physics, Mappers translate component state into algebra, and the Math layer solves the resulting equations.
┌─────────────────────────────────────────────────────┐
│   Application code (user-supplied Bus/Branch logic) │
├──────────────┬──────────────┬───────────────────────┤
│   Network    │  Components  │       Factory         │
│ (topology +  │ (bus/branch  │  (initialization      │
│  partition)  │   physics)   │   & data loading)     │
├──────────────┴──────────────┴───────────────────────┤
│         Mappers  (network → matrix/vector)          │
├─────────────────────────────────────────────────────┤
│   Math module  (PETSc-backed solvers & matrices)    │
├─────────────────────────────────────────────────────┤
│   Parallel infrastructure  (MPI + Global Arrays)    │
└─────────────────────────────────────────────────────┘
1. Network layer

BaseNetwork<Bus, Branch> holds the distributed graph of buses (nodes) and branches (edges). It manages partitioning across MPI ranks, tracks ghost buses and branches on neighboring processors, and provides update operations to synchronize ghost data with current values from their home processors.
2. Components layer

Bus and branch classes derived from BaseBusComponent and BaseBranchComponent carry the power-system physics. Each component implements MatVecInterface methods (matrixDiagValues, matrixForwardValues, etc.) that return matrix and vector contributions, keeping solver mechanics out of the physics code.
3. Mappers layer

FullMatrixMap<MyNetwork> and BusVectorMap<MyNetwork> loop over all components, call their MatVecInterface methods, and assemble a distributed PETSc matrix or vector. Application code never manipulates matrix row/column indices directly.
4. Math/Solvers layer

The gridpack::math namespace wraps PETSc to provide Matrix, Vector, LinearSolver, NonlinearSolver, and DAESolver objects. Switching the underlying library requires only relinking — no application code changes.

Separating physics from HPC infrastructure

The central design principle of GridPACK is that power engineers should write physics, not parallel code. Consider a Y-bus assembly: without a framework a developer must track global matrix indices, manage halo exchanges, and call MPI routines directly. With GridPACK, the developer implements two functions in the bus class:
bool matrixDiagSize(int *isize, int *jsize) const override {
    *isize = 1; *jsize = 1;
    return true;
}

bool matrixDiagValues(ComplexType *values) override {
    // sum shunt admittances from attached branches
    std::vector<boost::shared_ptr<gridpack::component::BaseComponent>> branches;
    getNeighborBranches(branches);
    ComplexType y(0.0, 0.0);
    for (auto &b : branches) {
        MyBranch *br = dynamic_cast<MyBranch*>(b.get());
        if (br) y += br->getYContribution();
    }
    values[0] = y;
    return true;
}
FullMatrixMap then handles index calculation and parallel assembly automatically.

Namespaces and coding conventions

All GridPACK code lives under the gridpack namespace. Individual modules have their own nested namespaces:
Namespace             Contents
gridpack::network     BaseNetwork, topology management
gridpack::component   BaseComponent, BaseBusComponent, BaseBranchComponent, DataCollection
gridpack::factory     BaseFactory and derived application factories
gridpack::mapper      FullMatrixMap, BusVectorMap, GenMatrixMap
gridpack::math        Matrix, Vector, linear/nonlinear/DAE solvers
gridpack::parallel    Communicator, TaskManager, GlobalStore, GlobalVector
Application files include gridpack/include/gridpack.hpp to bring in all module definitions.

boost::shared_ptr usage

GridPACK uses boost::shared_ptr throughout to manage object lifetimes safely across module boundaries. Network accessor functions such as getBus(int idx) and getBranch(int idx) return shared pointers. When a raw pointer is needed, call .get() on the shared pointer:
boost::shared_ptr<MyBus> bus = network->getBus(i);
MyBus *raw = bus.get(); // use dynamic_cast for downcasting
Prefer dynamic_cast over C-style casts when downcasting from a base class pointer to an application-specific type, as this provides a runtime type check:
MyBranch *br = dynamic_cast<MyBranch*>(baseBranch.get());
if (br != nullptr) {
    // safe to call MyBranch-specific methods
}

The factory pattern for initialization

A factory is a lightweight manager class that bridges the network and its components. After the network is populated by a parser, components exist in an uninitialized state alongside DataCollection objects holding raw key-value pairs from the input file. The factory orchestrates initialization in a standard sequence:
boost::shared_ptr<MyNetwork> network(new MyNetwork(comm));
MyFactory factory(network); // application factory derived from BaseFactory

// 1. Parse input file → fills DataCollection objects
MyParser parser(network);
parser.parse("grid.raw");

// 2. Partition network across MPI ranks
network->partition();

// 3. Push topology into individual components
factory.setComponents();

// 4. Allocate exchange buffers for ghost updates
factory.setExchange();

// 5. Transfer DataCollection values into component fields
factory.load();
BaseFactory::setComponents() propagates neighbor pointers from the network into each bus and branch so that calls like getNeighborBranches() work correctly. BaseFactory::load() iterates over every bus and branch and calls each component’s load(data) method, passing the associated DataCollection pointer.

Network model: buses, branches, ghost cells, and network partitioning

Bus and branch components: BaseComponent, MatVecInterface, and the load pattern

Parallel computing: Communicator, TaskManager, and distributed data structures