
The gridpack::math namespace provides the complete numerical backbone for GridPACK applications. All types wrap the PETSc library through a Pimpl interface, so application code remains independent of the underlying solver backend. Matrices and vectors are distributed across MPI ranks, and every solver must be constructed and destroyed collectively on all processes sharing the same communicator.

Distributed data structures

GridPACK represents power-grid state as distributed algebraic objects. Each MPI rank owns a contiguous block of rows (for matrices) or elements (for vectors). The framework uses Global Arrays (GA) internally to coordinate index assignments across ranks.

Matrix

gridpack::math::Matrix stores complex values by default; RealMatrix stores real (double) values. Both support Sparse (the default) and Dense storage types.
#include <gridpack/math/matrix.hpp>

// Construct a sparse complex matrix: each rank owns local_rows rows,
// total columns = cols.
gridpack::math::Matrix A(comm, local_rows, cols,
                         gridpack::math::Sparse);

// Set individual elements (zero-based global indices)
A.setElement(i, j, ComplexType(1.0, -0.5));

// Bulk set
A.setElements(n, row_indices, col_indices, values);

// Add rather than overwrite
A.addElement(i, j, value);

// Finalize internal data structures before use
A.ready();

// Query layout
int total  = A.rows();       // global row count
int local  = A.localRows();
int lo, hi;
A.localRowRange(lo, hi);     // half-open range [lo, hi) of rows owned by this rank
Call ready() after all setElement / addElement calls and before passing the matrix to a solver or mapper. Skipping this step results in undefined behavior.

Vector

gridpack::math::Vector (complex) and RealVector (real) follow the same collective construction pattern.
#include <gridpack/math/vector.hpp>

gridpack::math::Vector x(comm, local_size);

x.setElement(idx, ComplexType(1.0, 0.0));
x.addElements(n, indices, values);
x.ready();

// Retrieve elements
ComplexType val;
x.getElement(idx, val);

LinearSolver

LinearSolverT<T, I> (aliased as LinearSolver for complex, RealLinearSolver for real) solves the system Ax = b in parallel. A solver instance is tied to a single coefficient matrix for its lifetime; call setMatrix() if the matrix changes between solves.
#include <gridpack/math/linear_solver.hpp>

// Build the coefficient matrix A (must remain alive as long as the solver)
gridpack::math::Matrix A = buildAdmittanceMatrix(network);
A.ready();

// Construct solver (collective: all ranks in A's communicator must call this)
gridpack::math::LinearSolver solver(A);

// Configure from the application XML block (optional but recommended)
gridpack::utility::Configuration::CursorPtr cursor =
    config->getCursor("Configuration.PowerFlow.LinearSolver");
solver.configure(cursor);

// Set initial guess in x, place RHS in b, then solve
gridpack::math::Vector b(comm, local_size);
gridpack::math::Vector x(comm, local_size);
// ... fill b ...
x.zero();   // initial guess
solver.solve(b, x);  // result returned in x

Configuring via XML

Solver parameters are read from the <LinearSolver> block nested inside the application section of the configuration file:
<LinearSolver>
  <PETScOptions>
    -ksp_type bcgs
    -pc_type ilu
    -ksp_max_it 500
    -ksp_rtol 1.0e-7
  </PETScOptions>
</LinearSolver>
PETSc options can also be placed in a .petscrc file in the working directory. Options in the XML file take precedence, but .petscrc is useful for temporary tuning without recompiling.

NonlinearSolver

NonlinearSolverT<T, I> (aliased NonlinearSolver / RealNonlinearSolver) implements a parallel Newton-Raphson iteration. The caller supplies two functors: one that builds the Jacobian matrix J(x) and one that evaluates the residual vector F(x). The solver rebuilds both on every iteration until convergence.
#include <gridpack/math/nonlinear_solver.hpp>

// JacobianBuilder: (Matrix& J, const Vector& x) -> void
auto buildJacobian = [&](gridpack::math::Matrix& J,
                          const gridpack::math::Vector& x) {
    J.zero();
    // populate J from network component state encoded in x
    factory.setMode(JACOBIAN);
    mapper.mapToMatrix(J);
};

// FunctionBuilder: (Vector& F, const Vector& x) -> void
auto buildResidual = [&](gridpack::math::Vector& F,
                          const gridpack::math::Vector& x) {
    F.zero();
    factory.setMode(RESIDUAL);
    busMapper.mapToVector(F);
};

gridpack::math::NonlinearSolver nls(comm, local_size,
                                    buildJacobian, buildResidual);

gridpack::utility::Configuration::CursorPtr cursor =
    config->getCursor("Configuration.PowerFlow");
nls.configure(cursor);

// Initial guess in x; solution returned in x
gridpack::math::Vector x(comm, local_size);
// ... fill x with the initial guess ...
nls.solve(x);

Newton-Raphson parameters

The following XML parameters control Newton-Raphson convergence:
Parameter       Default   Description
tolerance       1.0e-6    Residual norm below which the solver declares convergence
maxIterations   50        Maximum Newton-Raphson iterations
dampingFactor   1.0       Step-size damping (0 < d ≤ 1); decrease for hard nonlinearities
<PowerFlow>
  <tolerance>1.0e-6</tolerance>
  <maxIterations>50</maxIterations>
  <dampingFactor>0.8</dampingFactor>
</PowerFlow>

DAESolver

DAESolverT<T, I> (aliased DAESolver / RealDAESolver) integrates differential-algebraic systems of the form F(t, x, ẋ) = 0 over time. It wraps PETSc’s TS (time-stepping) framework and supports pre/post-step callbacks and event detection for modeling fault inception and clearing.

Constructor overloads

DAESolverT(comm, local_size,
           jacobianBuilder, functionBuilder,
           eventManagerPtr);   // eventManagerPtr may be omitted when no events are tracked

Dynamic simulation usage

#include <gridpack/math/dae_solver.hpp>

gridpack::math::DAESolver dae(comm, local_size,
                               jBuilder, fBuilder);

gridpack::utility::Configuration::CursorPtr cursor =
    config->getCursor("Configuration.Dynamic_simulation");
dae.configure(cursor);

// Set initial condition vector
gridpack::math::Vector x0(comm, local_size);
factory.setInitialConditions(x0);

double t0 = 0.0, dt0 = 0.005;
dae.initialize(t0, dt0, x0);

// Step to t=1.0 or 200 steps, whichever comes first
double tmax = 1.0;
int    nmax = 200;
dae.solve(tmax, nmax);  // tmax/nmax updated with actual final values

// Query solver state
double dt_current = dae.gettimestep();
int    n_steps    = dae.getstepnumber();

Performance knobs

// Reuse the preconditioner for up to 5 Newton iterations between rebuilds
dae.reusepreconditioner(5);

// Reuse the Jacobian for up to 3 time steps between rebuilds
dae.reusejacobian(3);

// After a discontinuity (fault clearing), restart the time-stepper
dae.restartstep();
Call restartstep() after every discontinuity (e.g., fault application or clearing). Without a restart, the implicit integrator may apply an inappropriately large initial step over the discontinuity boundary.

PETSc backend and .petscrc tuning

All solvers inherit from utility::WrappedConfigurable, so PETSc command-line options can be passed through the XML <PETScOptions> string or via a .petscrc file. Common options for power-flow workloads:
# .petscrc — place in the working directory
-ksp_type gmres
-ksp_gmres_restart 30
-pc_type asm
-sub_pc_type lu
-ksp_rtol 1.0e-8
-ksp_max_it 1000
For time-domain simulation, TS options are most relevant:
-ts_type beuler         # backward Euler
-ts_adapt_type basic    # adaptive time-step
-ts_rtol 1.0e-4
-ts_atol 1.0e-6

