GridPACK is a large parallel framework with several required dependencies (MPI, PETSc, Global Arrays, Boost), and most problems encountered during setup fall into a small set of known categories. This page collects the most frequently asked questions and common failure modes, with specific commands and settings to resolve each one.
Full build instructions are maintained in docs/markdown/BASIC_INSTALL.md in the GridPACK repository. The general pattern is:
# Clone and initialise submodules (required for CMake helpers)
git clone https://github.com/GridOPTICS/GridPACK.git
cd GridPACK
git submodule update --init

# Create an out-of-source build directory
mkdir build && cd build

cmake .. \
  -DCMAKE_INSTALL_PREFIX=/usr/local/gridpack \
  -DPETSC_DIR=/path/to/petsc \
  -DPETSC_ARCH=arch-linux-c-opt \
  -DBoost_ROOT=/path/to/boost \
  -DGA_DIR=/path/to/ga \
  -DCMAKE_BUILD_TYPE=Release

make -j$(nproc)
make install
Platform-specific build scripts for clusters such as Constance and Deception can be found in src/scripts/ in the repository.
The git submodule update --init step is required when cloning directly from GitHub. Release tarballs from the Releases page already include the submodule content.
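To check whether a clone already has the submodule content, list the registered submodules; a leading - in the output marks a submodule that has not been initialised:
git submodule status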
The GridPACK CMake configuration locates PETSc via the PETSC_DIR and PETSC_ARCH variables: PETSC_DIR is the path to the PETSc installation and PETSC_ARCH is the name of the build configuration within it. Together they must identify a completed PETSc build.
cmake .. \
  -DPETSC_DIR=/opt/petsc/3.20.0 \
  -DPETSC_ARCH=arch-linux-c-opt
Common reasons the detection fails:
  • PETSC_DIR points to the source tree rather than the installation prefix — they are the same only when --prefix was not used during PETSc configuration.
  • PETSc was built with a different MPI installation than the one on PATH. Verify with $PETSC_DIR/$PETSC_ARCH/bin/mpicc --version.
  • The petscconf.h header is missing from $PETSC_DIR/$PETSC_ARCH/include/. This indicates an incomplete build; rerun make in the PETSc source directory.
To build the GridPACK Python interface, PETSc (and all other dependencies) must be built as shared libraries. Pass --with-shared-libraries=1 to PETSc’s configure script.
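For reference, a PETSc configuration along the following lines produces a shared-library build. The install prefix and the --download package selections are illustrative, not GridPACK requirements:
./configure --prefix=/opt/petsc/3.20.0 \
  --with-shared-libraries=1 \
  --download-metis \
  --download-parmetis \
  --download-superlu_dist
# configure prints the exact make and make install commands to run next
When PETSc is installed with --prefix, point PETSC_DIR at that prefix; PETSC_ARCH is then typically left empty, since the installed tree has no per-configuration subdirectory.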
GridPACK requires Boost ≥ 1.49. If CMake finds the wrong version or cannot find Boost at all, set Boost_ROOT (CMake ≥ 3.12) or BOOST_ROOT explicitly:
cmake .. -DBoost_ROOT=/opt/boost/1.82.0
If multiple Boost installations exist on the system, also set -DBoost_NO_SYSTEM_PATHS=ON to prevent CMake from picking up an older system Boost. Confirm the version CMake found by examining CMakeCache.txt after configuration:
grep Boost_VERSION build/CMakeCache.txt
Errors at program startup or shutdown typically mean that a GridPACK object was created before the gridpack::Environment object, or destroyed after it. The environment must bracket the lifetime of every other GridPACK object:
int main(int argc, char **argv)
{
  // Environment MUST be the first GridPACK object created
  gridpack::Environment env(argc, argv);
  {
    // All GridPACK objects go inside this nested scope
    MyApp app;
    app.execute();
  }
  // Destructors for app and all objects inside run here,
  // while MPI/GA are still active
} // env destructor calls MPI_Finalize and GA_Terminate here
The nested scope forces all GridPACK destructors to run before env goes out of scope and finalises the communication libraries.
Memory problems in distributed runs on large networks usually have one of the following causes:
  • Insufficient per-process memory: ensure mpiexec spreads processes across enough nodes. Use mpiexec -npernode or the equivalent flag for your scheduler, as in the example after this list.
  • PETSc matrix pre-allocation: if PETSc reports excessive allocation warnings, the mapper’s non-zero estimate may be too low. This is normally handled automatically but can be tuned via solver configuration in input.xml.
  • Global Arrays default memory limit: set MA_MB (in megabytes) in the environment before launching to increase the Global Arrays memory pool:
    export MA_MB=4096
    mpiexec -np 16 ./myapp.x input.xml
    
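For example, with OpenMPI a 32-process job can be limited to eight processes per node, giving each process a larger share of node memory (the -npernode flag is OpenMPI's; other MPI launchers and schedulers spell it differently):
mpiexec -npernode 8 -np 32 ./myapp.x input.xml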
GridPACK is designed primarily for Linux. It also builds on macOS, which provides a Unix-like environment through the terminal. Windows is not supported. Tested configurations include:
  • Ubuntu and Debian-based Linux distributions
  • Red Hat Enterprise Linux (RHEL) and CentOS
  • macOS with GCC or Clang via Homebrew or MacPorts
  • HPC clusters running Linux with module-managed environments (e.g. Constance at PNNL)
Examples ship with the GridPACK source in two locations:
  • src/applications/examples/hello_world/: minimal bus/branch classes and serialWrite output
  • src/applications/examples/resistor_grid/: linear solve on a 2-D resistor mesh
  • src/applications/examples/powerflow/: full Newton-Raphson power flow (IEEE 14-bus and 118-bus)
  • src/applications/examples/contingency_analysis/: multi-task contingency analysis using sub-communicators and the task manager
  • src/applications/modules/: reusable power flow, dynamic simulation, and state estimation modules
After installation, the power flow example is also copied to $GRIDPACK_DIR/share/gridpack/example/powerflow/ with a ready-to-use CMakeLists.txt that links against the installed GridPACK libraries.
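As a sketch, building and running that installed copy looks like the following; the GRIDPACK_DIR CMake variable and the pf.x executable name are assumptions here, so check the bundled CMakeLists.txt for the exact names:
cd $GRIDPACK_DIR/share/gridpack/example/powerflow
mkdir build && cd build
cmake -DGRIDPACK_DIR=$GRIDPACK_DIR ..
make
mpiexec -np 2 ./pf.x input.xml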
GridPACK ships parsers for PSS/E versions 23, 33, 34, 35, and 36. Select the one matching your network data:
// PSS/E version 23 (legacy)
gridpack::parser::PTI23_parser<MyNetwork> parser(network);

// PSS/E version 33
gridpack::parser::PTI33_parser<MyNetwork> parser(network);

// PSS/E version 34
gridpack::parser::PTI34_parser<MyNetwork> parser(network);

// PSS/E version 35
gridpack::parser::PTI35_parser<MyNetwork> parser(network);

// PSS/E version 36
gridpack::parser::PTI36_parser<MyNetwork> parser(network);
All parsers populate the same DataCollection keys, so switching between them requires only changing the parser type in your application driver. The power flow module can also auto-detect the PSS/E version from the RAW file header.
To build and use the Python interface:
1. Build GridPACK with shared libraries. All dependencies (PETSc, Boost, Global Arrays) must also be shared. Pass -DBUILD_SHARED_LIBS=ON to the GridPACK CMake configuration.
2. Initialise the pybind11 submodule:
cd GridPACK
git submodule update --init
3. Install the Python module:
export GRIDPACK_DIR=/usr/local/gridpack
cd GridPACK/python
pip install --no-deps --upgrade --prefix=$GRIDPACK_DIR .
4. Set PYTHONPATH, adjusting the Python version string to match your installation (a sketch for deriving it automatically follows these steps):
export PYTHONPATH=$GRIDPACK_DIR/lib/python3.12/site-packages
5. Verify the installation:
python -c 'import gridpack; print("OK")'
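If you would rather not hard-code the version string in step 4, one way to derive the site-packages path from the python3 on your PATH (a sketch):
export PYTHONPATH=$GRIDPACK_DIR/lib/python$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')/site-packages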
See the Python interface page for a full walkthrough.
Run the tests with verbose output to see which test is failing:
cd build
ctest --output-on-failure -V
Common causes:
  • PETSc solver configuration: many tests use a gridpack.petscrc file to select the linear solver. If PETSc was built without a particular solver package (e.g. SuperLU, MUMPS), tests that require it will fail. Edit gridpack.petscrc to use a solver that is available, e.g. -pc_type lu -pc_factor_mat_solver_type petsc.
  • Wrong number of MPI processes: some tests are fixed to a specific process count. Check the CTestTestfile.cmake for the failing test to see the required count.
  • Missing input files: tests copy input data with add_custom_command. Run make <target>.input for the failing test to force the copy.
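Once the failing test is known, it can be re-run on its own with ctest's regular-expression filter; the test name below is illustrative:
ctest -R powerflow --output-on-failure -VV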
To build GridPACK as shared libraries, pass -DBUILD_SHARED_LIBS=ON at configure time:
cmake .. \
  -DBUILD_SHARED_LIBS=ON \
  -DCMAKE_INSTALL_PREFIX=/usr/local/gridpack \
  ...
This is required for the Python interface. All dependency libraries (PETSc, Boost, Global Arrays) must also be shared objects when building GridPACK as shared libraries, otherwise the linker will produce errors about position-independent code.
The primary GridPACK reference is:
B. Palmer, W. Perkins, Y. Chen, S. Jin, D. Callahan, K. Glass, R. Diao, M. Rice, S. Elbert, M. Vallem, and Z. Huang, “GridPACK™: A framework for developing power grid simulations on high-performance computing platforms,” The International Journal of High Performance Computing Applications, vol. 30, no. 2, pp. 223–240, 2016. https://doi.org/10.1177/1094342015607609
The BibTeX entry is included in the repository README.md. Additional publications covering specific modules (dynamic simulation, state estimation, contingency analysis) can be found on the GridPACK project page at Pacific Northwest National Laboratory.
On RHEL 7 (and likely CentOS 7) with the distribution-provided OpenMPI packages, importing the gridpack Python module fails because the MPI shared library is not loaded into the global symbol table before pybind11 extension modules are initialised. Set this environment variable before running any GridPACK Python script on such systems:
export RHEL_OPENMPI_HACK=yes
This triggers a workaround in gridpack.cpp that calls dlopen with RTLD_GLOBAL on the MPI shared library before pybind11 registers the module. It is not needed with MPICH or with OpenMPI on Ubuntu, Debian, Fedora, or other non-RHEL distributions.
