Every Basilisk module must have a unit test before it can be merged. Tests live in a _UnitTest/ subdirectory inside the module folder and are run with pytest. The cModuleTemplate and cppModuleTemplate test files are the canonical reference implementations.

Test file structure

myModule/
├── myModule.h
├── myModule.cpp          (or .c)
├── myModule.i
├── myModule.rst
└── _UnitTest/
    └── test_myModule.py
The test file name must start with test_. The test function inside must also start with test_.
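Under pytest, the `show_plots` argument seen in the test functions below is supplied by a fixture defined in a project-level `conftest.py` that ships with Basilisk. A minimal sketch of how such a fixture can be wired up (illustrative only; the actual Basilisk conftest may differ in detail):

```python
# conftest.py — hypothetical sketch of a show_plots fixture;
# Basilisk provides its own project-level conftest.
import pytest


def pytest_addoption(parser):
    # Add a --show_plots command-line flag to pytest
    parser.addoption("--show_plots", action="store_true", default=False,
                     help="display matplotlib plots during the test run")


@pytest.fixture
def show_plots(request):
    # Make the flag's value available as a test-function argument
    return request.config.getoption("--show_plots")
```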

Writing a unit test

The pattern used throughout Basilisk separates the pytest entry point from the actual simulation function. This lets the test run both under pytest and as a standalone Python script.
# _UnitTest/test_cModuleTemplate.py

import matplotlib.pyplot as plt
import numpy as np
from Basilisk.architecture import bskLogging
from Basilisk.architecture import messaging
from Basilisk.moduleTemplates import cModuleTemplate
from Basilisk.utilities import SimulationBaseClass
from Basilisk.utilities import macros
from Basilisk.utilities import unitTestSupport


def test_module(show_plots):
    r"""
    **Validation Test Description**

    Compose a general description of what is being tested in this unit test script.
    Add enough information so the reader understands the purpose and limitations of the test.

    **Description of Variables Being Tested**

    Here discuss what parameters are being checked. For example, in this file we are
    checking the values of the variables

    - ``dummy``
    - ``dataVector[3]``
    """
    [testResults, testMessage] = fswModuleTestFunction(show_plots)
    assert testResults < 1, testMessage


def fswModuleTestFunction(show_plots):
    testFailCount = 0
    testMessages = []
    unitTaskName = "unitTask"
    unitProcessName = "TestProcess"
    bskLogging.setDefaultLogLevel(bskLogging.BSK_WARNING)

    # --- Simulation setup ---
    unitTestSim = SimulationBaseClass.SimBaseClass()
    testProcessRate = macros.sec2nano(0.5)     # [ns] task update period
    testProc = unitTestSim.CreateNewProcess(unitProcessName)
    testProc.addTask(unitTestSim.CreateNewTask(unitTaskName, testProcessRate))

    # --- Module setup ---
    module = cModuleTemplate.cModuleTemplate()
    module.ModelTag = "cModuleTemplate"
    unitTestSim.AddModelToTask(unitTaskName, module)

    module.dummy = 1
    module.dumVector = [1., 2., 3.]

    # --- Input message ---
    inputMessageData = messaging.CModuleTemplateMsgPayload()
    inputMessageData.dataVector = [1.0, -0.5, 0.7]
    inputMsg = messaging.CModuleTemplateMsg().write(inputMessageData)
    module.dataInMsg.subscribeTo(inputMsg)

    # --- Recording ---
    dataLog = module.dataOutMsg.recorder()
    unitTestSim.AddModelToTask(unitTaskName, dataLog)

    variableName = "dummy"
    moduleLog = module.logger(variableName)
    unitTestSim.AddModelToTask(unitTaskName, moduleLog)

    # --- Run ---
    unitTestSim.InitializeSimulation()
    unitTestSim.ConfigureStopTime(macros.sec2nano(1.0))  # [ns]
    unitTestSim.ExecuteSimulation()

    # Reset the module and run more
    module.Reset(1)                                       # [ns] reset time
    unitTestSim.ConfigureStopTime(macros.sec2nano(2.0))  # [ns]
    unitTestSim.ExecuteSimulation()

    # --- Verify results ---
    variableState = unitTestSupport.addTimeColumn(
        moduleLog.times(), getattr(moduleLog, variableName)
    )

    trueVector = [
        [2.0, -0.5, 0.7],
        [3.0, -0.5, 0.7],
        [4.0, -0.5, 0.7],
        [2.0, -0.5, 0.7],
        [3.0, -0.5, 0.7]
    ]
    accuracy = 1e-12
    dummyTrue = [1.0, 2.0, 3.0, 1.0, 2.0]
    variableStateNoTime = np.transpose(variableState)[1]

    for i in range(len(trueVector)):
        if not unitTestSupport.isArrayEqual(dataLog.dataVector[i], trueVector[i], 3, accuracy):
            testFailCount += 1
            testMessages.append(
                "FAILED: " + module.ModelTag + " Module failed dataVector"
                + " unit test at t=" + str(dataLog.times()[i] * macros.NANO2SEC) + "sec\n"
            )
        if not unitTestSupport.isDoubleEqual(variableStateNoTime[i], dummyTrue[i], accuracy):
            testFailCount += 1
            testMessages.append(
                "FAILED: " + module.ModelTag + " Module failed "
                + variableName + " unit test at t="
                + str(variableState[i, 0] * macros.NANO2SEC) + "sec\n"
            )

    if testFailCount == 0:
        print("PASSED: " + module.ModelTag)
        print("This test uses an accuracy value of " + str(accuracy))
    else:
        print("FAILED " + module.ModelTag)
        print(testMessages)

    # Optional plots
    plt.close("all")
    plt.figure(1)
    plt.plot(variableState[:, 0] * macros.NANO2SEC, variableState[:, 1])
    plt.xlabel('Time [s]')
    plt.ylabel('Variable Description [unit]')
    plt.suptitle('Title of Sample Plot')

    if show_plots:
        plt.show()

    return [testFailCount, ''.join(testMessages)]


if __name__ == "__main__":
    fswModuleTestFunction(True)

Parameterized tests

For modules where you want to test multiple input scenarios, use pytest.mark.parametrize. The cppModuleTemplate test demonstrates this pattern:
import pytest
from Basilisk.utilities import SimulationBaseClass, macros, unitTestSupport
from Basilisk.moduleTemplates import cppModuleTemplate
from Basilisk.architecture import messaging, bskLogging


@pytest.mark.parametrize("accuracy", [1e-12])
@pytest.mark.parametrize("param1, param2", [
    (1, 1),
    (1, 3),
    (2, 2),
])
def test_module(show_plots, param1, param2, accuracy):
    r"""
    **Validation Test Description**

    Test the CppModuleTemplate over multiple input vector combinations.

    **Test Parameters**

    Args:
        param1 (int): First component of the input dataVector
        param2 (int): Second component of the input dataVector
        accuracy (float): absolute accuracy value used in the validation tests

    **Description of Variables Being Tested**

    - ``dummy``
    - ``dataVector[3]``
    """
    [testResults, testMessage] = cppModuleTestFunction(show_plots, param1, param2, accuracy)
    assert testResults < 1, testMessage


def cppModuleTestFunction(show_plots, param1, param2, accuracy):
    testFailCount = 0
    testMessages = []
    unitTaskName = "unitTask"
    unitProcessName = "TestProcess"
    bskLogging.setDefaultLogLevel(bskLogging.BSK_WARNING)

    unitTestSim = SimulationBaseClass.SimBaseClass()
    testProcessRate = macros.sec2nano(0.5)      # [ns]
    testProc = unitTestSim.CreateNewProcess(unitProcessName)
    testProc.addTask(unitTestSim.CreateNewTask(unitTaskName, testProcessRate))

    module = cppModuleTemplate.CppModuleTemplate()
    module.ModelTag = "cppModuleTemplate"
    unitTestSim.AddModelToTask(unitTaskName, module)

    module.setDummy(1)
    # Confirm that invalid setter input raises an error
    with pytest.raises(bskLogging.BasiliskError):
        module.setDumVector([1., -2., 3.])

    inputMessageData = messaging.CModuleTemplateMsgPayload()
    inputMessageData.dataVector = [param1, param2, 0.7]
    inputMsg = messaging.CModuleTemplateMsg().write(inputMessageData)
    module.dataInMsg.subscribeTo(inputMsg)

    dataLog = module.dataOutMsg.recorder()
    unitTestSim.AddModelToTask(unitTaskName, dataLog)

    variableName = "dummy"
    moduleLog = module.logger(variableName)
    unitTestSim.AddModelToTask(unitTaskName, moduleLog)

    unitTestSim.InitializeSimulation()
    unitTestSim.ConfigureStopTime(macros.sec2nano(1.0))   # [ns]
    unitTestSim.ExecuteSimulation()

    module.Reset(1)                                        # [ns]
    unitTestSim.ConfigureStopTime(macros.sec2nano(2.0))   # [ns]
    unitTestSim.ExecuteSimulation()

    # Build truth table based on parameters
    trueVector = []
    if param1 == 1 and param2 == 1:
        trueVector = [[2., 1., 0.7], [3., 1., 0.7], [4., 1., 0.7],
                      [2., 1., 0.7], [3., 1., 0.7]]
    elif param1 == 1 and param2 == 3:
        trueVector = [[2., 3., 0.7], [3., 3., 0.7], [4., 3., 0.7],
                      [2., 3., 0.7], [3., 3., 0.7]]
    elif param1 == 2 and param2 == 2:
        trueVector = [[3., 2., 0.7], [4., 2., 0.7], [5., 2., 0.7],
                      [3., 2., 0.7], [4., 2., 0.7]]

    dummyTrue = [1.0, 2.0, 3.0, 1.0, 2.0]
    variableState = unitTestSupport.addTimeColumn(
        moduleLog.times(), getattr(moduleLog, variableName)
    )
    import numpy as np
    variableState = np.transpose(variableState)[1]

    testFailCount, testMessages = unitTestSupport.compareArray(
        trueVector, dataLog.dataVector, accuracy, "Output Vector",
        testFailCount, testMessages
    )
    testFailCount, testMessages = unitTestSupport.compareDoubleArray(
        dummyTrue, variableState, accuracy, "dummy parameter",
        testFailCount, testMessages
    )

    if testFailCount == 0:
        print("PASSED: " + module.ModelTag)

    return [testFailCount, ''.join(testMessages)]


if __name__ == "__main__":
    test_module(False, 1, 1, 1e-12)

Setting up the simulation

Step 1: Create the simulation container

Every test starts with an empty SimBaseClass and at least one process and task:
unitTestSim = SimulationBaseClass.SimBaseClass()
testProcessRate = macros.sec2nano(0.5)     # [ns] 0.5 second task period
testProc = unitTestSim.CreateNewProcess("TestProcess")
testProc.addTask(unitTestSim.CreateNewTask("unitTask", testProcessRate))
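Note that Basilisk stores all simulation times as integer nanosecond counts; `macros.sec2nano` and `macros.NANO2SEC` only convert units. A plain-Python sketch of the conversions (illustrative; the real `macros` module is the authority):

```python
NANO2SEC = 1.0e-9           # nanoseconds -> seconds


def sec2nano(seconds):
    """Convert seconds to the integer nanosecond count Basilisk uses."""
    return int(seconds * 1.0e9)


rate = sec2nano(0.5)
print(rate)                 # 500000000
print(rate * NANO2SEC)      # 0.5
```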

Step 2: Instantiate and configure the module

Create the module, set its ModelTag, add it to the task, and set its parameters:
module = cModuleTemplate.cModuleTemplate()
module.ModelTag = "cModuleTemplate"
unitTestSim.AddModelToTask("unitTask", module)
module.dummy = 1
module.dumVector = [1., 2., 3.]

Step 3: Create and connect input messages

Build a standalone message with the desired payload and subscribe the module’s input functor to it:
inputData = messaging.CModuleTemplateMsgPayload()
inputData.dataVector = [1.0, -0.5, 0.7]
inputMsg = messaging.CModuleTemplateMsg().write(inputData)
module.dataInMsg.subscribeTo(inputMsg)

Step 4: Set up recorders

Attach a recorder to any output message you want to inspect, and optionally a variable logger for internal module state:
# Record the output message
dataLog = module.dataOutMsg.recorder()
unitTestSim.AddModelToTask("unitTask", dataLog)

# Log an internal module variable (use sparingly — this is slow)
moduleLog = module.logger("dummy")
unitTestSim.AddModelToTask("unitTask", moduleLog)

Step 5: Initialize and execute

unitTestSim.InitializeSimulation()
unitTestSim.ConfigureStopTime(macros.sec2nano(1.0))  # [ns]
unitTestSim.ExecuteSimulation()

Step 6: Optionally reset and continue

You can test the Reset path and continue simulating:
module.Reset(1)   # [ns] current simulation time
unitTestSim.ConfigureStopTime(macros.sec2nano(2.0))  # [ns]
unitTestSim.ExecuteSimulation()

Message recording and verification

After running the simulation, access the recorded data as NumPy arrays:
# Output message fields are NumPy arrays indexed by [time_step, ...]
print(dataLog.dataVector)          # shape (N, 3)
print(dataLog.times())             # simulation times in nanoseconds
For internal variable logs:
variableState = unitTestSupport.addTimeColumn(
    moduleLog.times(), getattr(moduleLog, "dummy")
)  # shape (N, 2) — column 0 is time, column 1 is value
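Conceptually, `addTimeColumn` just prepends the recorded time stamps as column 0 of the logged data. A plain-NumPy sketch of that behavior (not the actual `unitTestSupport` implementation):

```python
import numpy as np


def add_time_column(times, data):
    """Prepend the time array as column 0 of the logged data
    (sketch of what unitTestSupport.addTimeColumn produces)."""
    times = np.asarray(times, dtype=float)
    data = np.asarray(data, dtype=float)
    if data.ndim == 1:                  # scalar log such as "dummy"
        data = data.reshape(-1, 1)
    return np.column_stack([times, data])


# five log entries of a scalar variable sampled every 0.5 s (times in ns)
times = np.arange(5) * int(0.5e9)
values = [1.0, 2.0, 3.0, 1.0, 2.0]
state = add_time_column(times, values)
print(state.shape)   # (5, 2): column 0 is time [ns], column 1 is the value
```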
The unitTestSupport module provides comparison helpers:
# Compare a vector at time step i against a truth vector
accuracy = 1e-12
if not unitTestSupport.isArrayEqual(dataLog.dataVector[i], trueVector[i], 3, accuracy):
    testFailCount += 1
    testMessages.append("FAILED: dataVector check at step " + str(i))

# Compare a scalar
if not unitTestSupport.isDoubleEqual(measured, expected, accuracy):
    testFailCount += 1

# Compare an entire array at once
testFailCount, testMessages = unitTestSupport.compareArray(
    trueVector, dataLog.dataVector, accuracy, "Output Vector",
    testFailCount, testMessages
)

testFailCount, testMessages = unitTestSupport.compareDoubleArray(
    dummyTrue, variableState, accuracy, "dummy parameter",
    testFailCount, testMessages
)

Running tests with pytest

From the Basilisk project root:
# Run the test for a single module
pytest src/moduleTemplates/cModuleTemplate/_UnitTest/test_cModuleTemplate.py -v

# Run all tests
python run_all_test.py

# Generate an HTML report
pytest src/moduleTemplates/cModuleTemplate/_UnitTest/ --report
The test file can also be run directly as a Python script (bypassing pytest), which is useful during development:
python src/moduleTemplates/cModuleTemplate/_UnitTest/test_cModuleTemplate.py
This works because of the if __name__ == "__main__": block at the bottom of every test file.

Module checkout list

Before submitting a module for review, verify each item in this checklist.

Branch and build
  • Branch is rebased on the latest develop branch.
  • Clean build passes (python3 conanfile.py then build).
  • python run_all_test.py shows all tests passing.
Code style
  • Variables follow the Basilisk coding style guidelines.
  • Four spaces are used for indentation, not tabs.
  • All numeric literals with physical meaning have inline unit comments (//!< [m], //!< [s], etc.).
Module programming
  • All input and output messages are declared in the .i SWIG interface file.
  • Code contains appropriate general comments.
  • Code contains Doxygen-compatible function descriptions and variable definitions.
  • SelfInit declares all output messages (C modules only).
  • Reset resets all time-varying state variables and checks that required messages are connected.
  • Module uses bskLogger (or BSK_PRINT for general support libraries that are not modules).
  • C++ modules expose user-configurable variables through setter/getter methods with input validation.
Documentation
  • Module folder contains a myModule.rst documentation file.
  • The RST file includes: Executive Summary, Message Connection Descriptions, Module Assumptions and Limitations, and User Guide sections.
  • Documentation builds successfully with Sphinx.
Unit test
  • _UnitTest/ folder exists and contains a file named test_myModule.py.
  • Test function name starts with test_.
  • Test docstring describes what is being validated (Validation Test Description, Description of Variables Being Tested).
  • Test exercises the module in isolation and creates all required input messages.
  • Test checks output for all relevant input and configuration conditions.
  • Test compares against analytically computed or independently verified truth values.
  • Test can be run both with pytest and directly with python test_myModule.py.
Release notes
  • A release note snippet .rst file has been added to docs/source/Support/bskReleaseNotesSnippets/.
  • docs/source/Support/bskReleaseNotes.rst has not been edited directly.
