The gridpack::math namespace provides the complete numerical backbone for GridPACK applications. All types wrap the PETSc library through a Pimpl interface, so application code remains independent of the underlying solver backend. Matrices and vectors are distributed across MPI ranks, and every solver must be constructed and destroyed collectively on all processes sharing the same communicator.
GridPACK represents power-grid state as distributed algebraic objects. Each MPI rank owns a contiguous block of rows (for matrices) or elements (for vectors). The framework uses Global Arrays (GA) internally to coordinate index assignments across ranks.
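As a minimal sketch of that ownership model (assuming a gridpack::parallel::Communicator named comm and a per-rank element count local_size; the placeholder value and the localIndexRange call mirror the matrix calls shown below), a distributed vector is assembled element by element and then finalized:

#include <gridpack/math/vector.hpp>

// Each rank contributes local_size contiguous elements to the global vector.
gridpack::math::Vector v(comm, local_size);

int lo, hi;
v.localIndexRange(lo, hi);                 // global indices owned by this rank

v.setElement(lo, ComplexType(1.0, 0.0));   // zero-based global index, placeholder value
v.ready();                                 // finalize before use, as with matrices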
gridpack::math::Matrix stores complex values; RealMatrix stores double-precision real values. Both support Sparse (the default) and Dense storage types.
#include <gridpack/math/matrix.hpp>

// Construct a sparse complex matrix: each rank owns local_rows rows,
// total columns = cols.
gridpack::math::Matrix A(comm, local_rows, cols, gridpack::math::Sparse);

// Set individual elements (zero-based global indices)
A.setElement(i, j, ComplexType(1.0, -0.5));

// Bulk set
A.setElements(n, row_indices, col_indices, values);

// Add rather than overwrite
A.addElement(i, j, value);

// Finalize internal data structures before use
A.ready();

// Query layout
int total = A.rows();        // global row count
int local = A.localRows();   // rows owned by this rank
int lo, hi;
A.localRowRange(lo, hi);     // range of global rows owned by this rank
Call ready() after all setElement / addElement calls and before passing the matrix to a solver or mapper. Skipping this step results in undefined behavior.
LinearSolverT<T, I> (aliased as LinearSolver for complex, RealLinearSolver for real) solves the system Ax = b in parallel. A solver instance keeps a reference to a single coefficient matrix, which must outlive the solver; if the matrix changes between solves, call setMatrix() so the solver picks up the new values.
#include <gridpack/math/linear_solver.hpp>

// Build the coefficient matrix A (must remain alive as long as the solver)
gridpack::math::Matrix A = buildAdmittanceMatrix(network);
A.ready();

// Construct solver (collective: all ranks in A's communicator must call this)
gridpack::math::LinearSolver solver(A);

// Configure from the application XML block (optional but recommended)
gridpack::utility::Configuration::CursorPtr cursor =
    config->getCursor("Configuration.PowerFlow.LinearSolver");
solver.configure(cursor);

// Set initial guess in x, place RHS in b, then solve
gridpack::math::Vector b(comm, local_size);
gridpack::math::Vector x(comm, local_size);
// ... fill b ...
x.zero();            // initial guess
solver.solve(b, x);  // result returned in x
PETSc options can also be placed in a .petscrc file in the working directory. Options in the XML file take precedence, but .petscrc is convenient for quick tuning without editing the application's XML input.
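For reference, an illustrative XML fragment matching the Configuration.PowerFlow.LinearSolver cursor path used above; the exact nesting depends on the application's input deck, so treat this as a sketch:

<Configuration>
  <PowerFlow>
    <LinearSolver>
      <PETScOptions>
        -ksp_type gmres
        -pc_type asm
        -ksp_rtol 1.0e-8
      </PETScOptions>
    </LinearSolver>
  </PowerFlow>
</Configuration>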
NonlinearSolverT<T, I> (aliased NonlinearSolver / RealNonlinearSolver) implements a parallel Newton-Raphson iteration. The caller supplies two functors: one that builds the Jacobian matrix J(x) and one that evaluates the residual vector F(x). The solver rebuilds the Jacobian and residual on every iteration until convergence.
#include <gridpack/math/nonlinear_solver.hpp>

// JacobianBuilder: (Matrix& J, Vector& x) -> void
auto buildJacobian = [&](gridpack::math::Matrix& J, const gridpack::math::Vector& x) {
  J.zero();
  // populate J from network component state encoded in x
  factory.setMode(JACOBIAN);
  mapper.mapToMatrix(J);
};

// FunctionBuilder: (Vector& F, Vector& x) -> void
auto buildResidual = [&](gridpack::math::Vector& F, const gridpack::math::Vector& x) {
  F.zero();
  factory.setMode(RESIDUAL);
  busMapper.mapToVector(F);
};

gridpack::math::NonlinearSolver nls(comm, local_size, buildJacobian, buildResidual);

gridpack::utility::Configuration::CursorPtr cursor =
    config->getCursor("Configuration.PowerFlow");
nls.configure(cursor);

// Initial guess in x; solution returned in x
nls.solve(x);
DAESolverT<T, I> (aliased DAESolver / RealDAESolver) integrates differential-algebraic systems of the form F(t, x, ẋ) = 0 over time, solving a Newton update J Δx = -F at each implicit step. It wraps PETSc's TS (time-stepping) framework and supports pre/post-step callbacks and event detection for modeling fault inception and clearing.
// Reuse the preconditioner for up to 5 Newton iterations between rebuilds
dae.reusepreconditioner(5);

// Reuse the Jacobian for up to 3 time steps between rebuilds
dae.reusejacobian(3);

// After a discontinuity (fault clearing), restart the time-stepper
dae.restartstep();
Call restartstep() after every discontinuity (e.g., fault application or clearing). Without a restart, the implicit integrator may apply an inappropriately large initial step over the discontinuity boundary.
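A sketch of the surrounding event-handling logic; applyFault() and clearFault() are hypothetical application hooks, and only restartstep() comes from the DAE solver interface shown above:

// Introduce the discontinuity, then force the integrator to restart
// with a fresh (small) initial step.
applyFault(network, faulted_bus);   // hypothetical application hook
dae.restartstep();

// ... advance the integrator through the fault-on period ...

// Clear the fault and restart again before continuing.
clearFault(network, faulted_bus);   // hypothetical application hook
dae.restartstep();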
All solvers inherit from utility::WrappedConfigurable, so PETSc command-line options can be passed through the XML <PETScOptions> string or via a .petscrc file. Common options for power-flow workloads:
# .petscrc (place in the working directory)
-ksp_type gmres
-ksp_gmres_restart 30
-pc_type asm
-sub_pc_type lu
-ksp_rtol 1.0e-8
-ksp_max_it 1000
For time-domain simulation, TS options are most relevant:
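An illustrative starting point; the specific values are placeholders rather than tuned recommendations:

# .petscrc fragment for time-domain simulation
-ts_type beuler            # implicit backward-Euler integrator
-ts_dt 0.01                # initial time step (seconds)
-ts_max_snes_failures -1   # keep stepping even if a nonlinear solve fails
-snes_rtol 1.0e-6
-ksp_type gmres
-pc_type lu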