GridPACK is a large parallel framework with several required dependencies (MPI, PETSc, Global Arrays, Boost), and most problems encountered during setup fall into a small set of known categories. This page collects the most frequently asked questions and common failure modes, with specific commands and settings to resolve each one.
## How do I build GridPACK?
Complete build instructions are in `docs/markdown/BASIC_INSTALL.md` in the GridPACK repository, and example configuration scripts for several platforms are kept in `src/scripts/`. The general pattern is: clone, initialise the submodules, configure with CMake, then build and install. The `git submodule update --init` step is required when cloning directly from GitHub; release tarballs from the Releases page already include the submodule content.
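A minimal sketch of that sequence, assuming an out-of-source build under `src/` and that MPI, Boost, PETSc, and Global Arrays are already installed. The dependency variables shown (`BOOST_ROOT`, `PETSC_DIR`, `PETSC_ARCH`, `GA_DIR`) are the commonly used ones; confirm the exact names against `BASIC_INSTALL.md` or the scripts in `src/scripts/`:

```bash
git clone https://github.com/GridOPTICS/GridPACK.git
cd GridPACK
git submodule update --init          # skip for release tarballs
mkdir -p src/build && cd src/build
cmake -DCMAKE_INSTALL_PREFIX=$HOME/gridpack-install \
      -DBOOST_ROOT=/path/to/boost \
      -DPETSC_DIR=/path/to/petsc -DPETSC_ARCH=arch-linux-c-opt \
      -DGA_DIR=/path/to/ga \
      ..
make -j 4
make install
```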
## CMake reports 'PETSc not found' or 'Could not find PETSc'
Check the `PETSC_DIR` and `PETSC_ARCH` variables; both must point to a completed PETSc installation. Common causes:

- `PETSC_DIR` points to the source tree rather than the installation prefix; the two are the same only when `--prefix` was not used during PETSc configuration.
- PETSc was built with a different MPI installation than the one on `PATH`. Verify with `$PETSC_DIR/$PETSC_ARCH/bin/mpicc --version`.
- The `petscconf.h` header is missing from `$PETSC_DIR/$PETSC_ARCH/include/`. This indicates an incomplete build; rerun `make` in the PETSc source directory.
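A quick sanity check before re-running CMake (a sketch; substitute your own paths, and note that `PETSC_ARCH` is empty when PETSc was installed with `--prefix`):

```bash
export PETSC_DIR=/path/to/petsc
export PETSC_ARCH=arch-linux-c-opt
ls $PETSC_DIR/$PETSC_ARCH/include/petscconf.h     # must exist for a complete build
$PETSC_DIR/$PETSC_ARCH/bin/mpicc --version        # compare with the mpicc on PATH
which mpicc && mpicc --version
```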
## CMake reports a Boost version mismatch or Boost headers not found
Set `Boost_ROOT` (CMake ≥ 3.12) or `BOOST_ROOT` explicitly on the CMake command line, and pass `-DBoost_NO_SYSTEM_PATHS=ON` to prevent CMake from picking up an older system Boost. Confirm the version CMake actually found by examining `CMakeCache.txt` after configuration.
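For example (a sketch; substitute your Boost installation prefix and the rest of your configure options):

```bash
cmake -DBOOST_ROOT=/opt/boost-1.78.0 \
      -DBoost_NO_SYSTEM_PATHS=ON \
      ..
grep -i boost CMakeCache.txt        # inspect Boost_INCLUDE_DIR and the detected version
```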
## Build fails with MPI-related errors
## Runtime error: MPI not initialised / GlobalArrays not initialised
This happens when GridPACK objects are created before the `gridpack::Environment` object is constructed, or after it has been destroyed. Create the `Environment` at the top of `main()` and make sure every other GridPACK object is destroyed before `env` goes out of scope and finalises the communication libraries.
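A minimal sketch of the intended structure; the header paths and the inner scope are illustrative, and the example applications in the repository show the exact pattern:

```cpp
#include <gridpack/environment/environment.hpp>
#include <gridpack/parallel/communicator.hpp>

int main(int argc, char **argv) {
  gridpack::Environment env(argc, argv);   // initialises MPI and Global Arrays
  {
    gridpack::parallel::Communicator world;
    // create networks, factories, and solvers inside this scope
  }                                        // all GridPACK objects destroyed here
  return 0;                                // env is destroyed last and finalises GA/MPI
}
```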
## Out-of-memory errors on large networks
- Insufficient per-process memory: ensure `mpiexec` spreads processes across enough nodes. Use `mpiexec -npernode` or the equivalent flag for your scheduler.
- PETSc matrix pre-allocation: if PETSc reports excessive allocation warnings, the mapper's non-zero estimate may be too low. This is normally handled automatically but can be tuned via the solver configuration in `input.xml`.
- Global Arrays default memory limit: set `MA_MB` (in megabytes) in the environment before launching to increase the Global Arrays memory pool, as in the sketch after this list.
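A hedged launch example; `-npernode` is OpenMPI syntax, and the pool size and executable name are illustrative:

```bash
export MA_MB=8000                          # Global Arrays memory pool, in megabytes
mpiexec -npernode 4 -n 64 ./powerflow.x input.xml
```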
## What platforms are supported?
- Ubuntu and Debian-based Linux distributions
- Red Hat Enterprise Linux (RHEL) and CentOS
- macOS with GCC or Clang via Homebrew or MacPorts
- HPC clusters running Linux with module-managed environments (e.g. Constance at PNNL)
## Where are the example applications?
| Directory | Contents |
|---|---|
| `src/applications/examples/hello_world/` | Minimal bus/branch classes and `serialWrite` output |
| `src/applications/examples/resistor_grid/` | Linear solve on a 2-D resistor mesh |
| `src/applications/examples/powerflow/` | Full Newton-Raphson power flow (IEEE 14-bus and 118-bus) |
| `src/applications/examples/contingency_analysis/` | Multi-task contingency using sub-communicators and task manager |
| `src/applications/modules/` | Reusable power flow, dynamic simulation, and state estimation modules |
An installed copy of the power flow example is also placed in `$GRIDPACK_DIR/share/gridpack/example/powerflow/`, with a ready-to-use `CMakeLists.txt` that links against the installed GridPACK libraries.
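A sketch of building that installed copy out of source; any CMake variables the example's `CMakeLists.txt` expects are documented in the file itself:

```bash
cd $GRIDPACK_DIR/share/gridpack/example/powerflow
mkdir build && cd build
cmake ..        # add -D options here if the CMakeLists.txt asks for them
make
```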
## How do I switch between PSS/E format versions?
The PSS/E RAW parsers all populate the same `DataCollection` keys, so switching between format versions requires only changing the parser type in your application driver. The power flow module can also auto-detect the PSS/E version from the RAW file header.
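A fragment of a driver illustrating the swap, assuming the `PTI23_parser`/`PTI33_parser` class templates under `gridpack/parser/`; check `src/parser/` for the exact headers and versions available in your build:

```cpp
#include <gridpack/parser/PTI33_parser.hpp>   // use PTI23_parser.hpp for version 23 files

// ... 'network' created earlier as a boost::shared_ptr to your network type ...
gridpack::parser::PTI33_parser<MyNetwork> parser(network);
parser.parse("case.raw");
```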
## How do I enable the Python interface?
Build GridPACK with shared libraries by adding `-DBUILD_SHARED_LIBS=ON` to the GridPACK CMake configuration.
## Tests fail during 'make test'
- PETSc solver configuration: many tests use a `gridpack.petscrc` file to select the linear solver. If PETSc was built without a particular solver package (e.g. SuperLU, MUMPS), tests that require it will fail. Edit `gridpack.petscrc` to use a solver that is available, e.g. `-pc_type lu -pc_factor_mat_solver_type petsc` (a sample file is sketched after this list).
- Wrong number of MPI processes: some tests are fixed to a specific process count. Check the `CTestTestfile.cmake` for the failing test to see the required count.
- Missing input files: tests copy input data with `add_custom_command`. Run `make <target>.input` for the failing test to force the copy.
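For reference, a minimal `gridpack.petscrc` built from the options quoted above (a sketch; any solver package present in your PETSc build will do):

```
-pc_type lu
-pc_factor_mat_solver_type petsc
```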
## How do I build GridPACK as shared libraries?
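As noted under the Python interface question, pass `-DBUILD_SHARED_LIBS=ON` when configuring. A hedged configure sketch:

```bash
cmake -DBUILD_SHARED_LIBS=ON \
      -DCMAKE_INSTALL_PREFIX=$HOME/gridpack-install \
      ..        # plus the usual Boost/PETSc/Global Arrays options
make -j 4 && make install
```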
## How do I cite GridPACK in a publication?
B. Palmer, W. Perkins, Y. Chen, S. Jin, D. Callahan, K. Glass, R. Diao, M. Rice, S. Elbert, M. Vallem, and Z. Huang, “GridPACK™: A framework for developing power grid simulations on high-performance computing platforms,” The International Journal of High Performance Computing Applications, vol. 30, no. 2, pp. 223–240, 2016. https://doi.org/10.1177/1094342015607609

The BibTeX entry is included in the repository `README.md`. Additional publications covering specific modules (dynamic simulation, state estimation, contingency analysis) can be found on the GridPACK project page at Pacific Northwest National Laboratory.
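If you want BibTeX before locating the entry in `README.md`, the following is constructed directly from the citation above (the entry key is arbitrary):

```bibtex
@article{palmer2016gridpack,
  author  = {Palmer, B. and Perkins, W. and Chen, Y. and Jin, S. and Callahan, D. and
             Glass, K. and Diao, R. and Rice, M. and Elbert, S. and Vallem, M. and
             Huang, Z.},
  title   = {{GridPACK}: A framework for developing power grid simulations on
             high-performance computing platforms},
  journal = {The International Journal of High Performance Computing Applications},
  volume  = {30},
  number  = {2},
  pages   = {223--240},
  year    = {2016},
  doi     = {10.1177/1094342015607609}
}
```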
## RHEL_OPENMPI_HACK — what is it and when do I need it?
On RHEL and CentOS systems using OpenMPI, importing the `gridpack` Python module fails because the MPI shared library is not loaded into the global symbol table before pybind11 extension modules are initialised. Set this environment variable before running any GridPACK Python script on such systems (see the sketch below); it enables a block in `gridpack.cpp` that calls `dlopen` with `RTLD_GLOBAL` on the MPI shared library before pybind11 registers the module. It is not needed with MPICH or with OpenMPI on Ubuntu, Debian, Fedora, or other non-RHEL distributions.
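A hedged example; the exact value the check in `gridpack.cpp` expects is not quoted here, so set a non-empty value and confirm against the source:

```bash
export RHEL_OPENMPI_HACK=yes
python my_powerflow_script.py        # illustrative script name
```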