ODAI ships with a comprehensive test suite: 700+ tests across 67 test files, achieving 90%+ code coverage across all tested modules. Tests are run via a custom run_tests.py runner built on top of pytest, with support for parallel execution, coverage reporting, and file-level filtering.
Install test dependencies
```bash
pip install -r test_requirements.txt
```
Core testing packages:
| Package | Purpose |
|---|---|
| pytest | Test framework |
| pytest-asyncio | Async test support (`asyncio_mode = auto`) |
| pytest-mock | Enhanced mocking utilities |
| pytest-cov | Coverage reporting |
| pytest-xdist | Parallel test execution (`-n` flag) |
| pytest-timeout | Per-test timeout enforcement (10s default) |
| httpx | HTTP client for FastAPI endpoint testing |
Running tests
Using the test runner (recommended)
run_tests.py is the primary way to run tests. It auto-detects virtual environments, sets PYTHONPATH, and suppresses noisy warnings.
```bash
# Run all tests
python run_tests.py

# Run with coverage report
python run_tests.py --coverage

# Run a specific test file (omit the test_ prefix and .py extension)
python run_tests.py --file auth_service

# Run with verbose output
python run_tests.py --verbose

# Run only fast tests (skip slow integration tests)
python run_tests.py --fast

# Run with parallel workers
python run_tests.py --workers 8

# Run with coverage and open the HTML report in a browser
python run_tests.py --coverage --open

# Install dependencies, then run a specific file with coverage
python run_tests.py --install-deps --file chat_service --coverage --verbose
```
Test runner flags
| Flag | Short | Description |
|---|---|---|
| `--file <name>` | `-f` | Run only `tests/test_<name>.py` |
| `--coverage` | `-c` | Generate an HTML and terminal coverage report in `htmlcov/` |
| `--verbose` | `-v` | Pass `-v` to pytest for per-test output |
| `--workers <n>` | `-w` | Run tests in parallel with `n` workers via pytest-xdist |
| `--fast` | | Skip tests marked `@pytest.mark.slow` |
| `--open` | | Open `htmlcov/index.html` in the browser after a coverage run |
| `--install-deps` | | Run `pip install -r test_requirements.txt` before executing |
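Conceptually, the runner translates these flags into a pytest command line. The sketch below shows one plausible mapping; `build_pytest_args` is an illustrative function, and the real `run_tests.py` internals may differ:

```python
# Hypothetical sketch of how a runner like run_tests.py could map its
# flags onto pytest arguments. Not the actual implementation.
def build_pytest_args(file=None, coverage=False, verbose=False,
                      workers=None, fast=False):
    args = ["pytest"]
    # --file auth_service -> tests/test_auth_service.py
    args.append(f"tests/test_{file}.py" if file else "tests/")
    if coverage:
        args += ["--cov=.", "--cov-report=html", "--cov-report=term"]
    if verbose:
        args.append("-v")
    if workers:
        args += ["-n", str(workers)]  # pytest-xdist worker count
    if fast:
        args += ["-m", "not slow"]    # skip @pytest.mark.slow tests
    return args
```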
Direct pytest commands
You can also use pytest directly for more granular control:
```bash
# Run all tests
pytest tests/

# Run a specific test file
pytest tests/test_auth_service.py

# Run a specific test class
pytest tests/test_auth_service.py::TestValidateUserToken

# Run a specific test method
pytest tests/test_auth_service.py::TestValidateUserToken::test_validate_user_token_success

# Stop on first failure with verbose output
pytest tests/ -xvs

# Run in parallel (auto-detect CPU count)
pytest tests/ -n auto

# Generate HTML coverage report
pytest tests/ --cov=. --cov-report=html --cov-report=term

# Run coverage for specific modules
pytest tests/ --cov=services --cov=websocket --cov=firebase.models --cov-report=html

# Skip slow tests
pytest tests/ -m "not slow"
```
Test categories
Tests are organized into six categories, all living in the tests/ directory:
Unit tests
Test individual methods in complete isolation with all external dependencies mocked (OpenAI, Firebase, Google Cloud, third-party APIs). These make up the majority of the suite and run fast.
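To illustrate the style, here is a minimal unit test with the external client fully mocked. `ChatService` and its `summarize` method are invented names for this sketch, not ODAI's actual API:

```python
from unittest.mock import MagicMock

# Illustrative service under test; names are hypothetical.
class ChatService:
    def __init__(self, openai_client):
        self.openai_client = openai_client

    def summarize(self, text: str) -> str:
        response = self.openai_client.complete(prompt=f"Summarize: {text}")
        return response.strip()

def test_summarize_uses_mocked_client():
    # The OpenAI client is a MagicMock, so the test runs offline and fast.
    client = MagicMock()
    client.complete.return_value = "  a short summary  "
    service = ChatService(client)
    assert service.summarize("long text") == "a short summary"
    client.complete.assert_called_once()
```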
Integration tests
Validate interactions between components — for example, end-to-end authentication flows, chat session creation through the service and data layers, or complete WebSocket lifecycle management.
E2E tests
Full user-journey scenarios exercising the API, service, integration, and data layers together.
WebSocket tests
Real-time connection and streaming tests covering the ConnectionManager and WebSocketHandler classes. The ConnectionManager achieves 100% coverage.
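A sketch of this style of test, with the websocket replaced by an `AsyncMock`. The `ConnectionManager` shown here is a simplified stand-in; the real class's API may differ:

```python
import asyncio
from unittest.mock import AsyncMock

# Simplified stand-in for a ConnectionManager-style class.
class ConnectionManager:
    def __init__(self):
        self.active = {}

    async def connect(self, user_id, websocket):
        await websocket.accept()
        self.active[user_id] = websocket

    async def send(self, user_id, message):
        await self.active[user_id].send_text(message)

# With asyncio_mode = auto, pytest runs this async function directly.
async def test_connect_and_send():
    ws = AsyncMock()
    mgr = ConnectionManager()
    await mgr.connect("u1", ws)
    await mgr.send("u1", "hi")
    ws.accept.assert_awaited_once()
    ws.send_text.assert_awaited_once_with("hi")
```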
Authentication tests
OAuth flow and token validation tests for AuthService, covering Firebase ID token validation, WebSocket authentication, HTTP request authentication, and Google OAuth 2.0.
Connector tests
Each third-party integration (Plaid, Gmail, FlightAware, Yelp, etc.) is tested independently with mocked HTTP responses.
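For example, a connector test might stub the HTTP layer entirely. The `YelpConnector` below is a hypothetical sketch (endpoint and field names are illustrative), showing the pattern of asserting against a canned response:

```python
from unittest.mock import MagicMock

# Hypothetical connector; the real ODAI connectors' methods may differ.
class YelpConnector:
    def __init__(self, http_client):
        self.http = http_client

    def top_restaurant(self, city: str) -> str:
        resp = self.http.get(
            "https://api.yelp.com/v3/businesses/search",
            params={"location": city, "limit": 1},
        )
        return resp.json()["businesses"][0]["name"]

def test_top_restaurant_with_mocked_http():
    # No network call: the HTTP client returns a canned JSON payload.
    http = MagicMock()
    http.get.return_value.json.return_value = {
        "businesses": [{"name": "Zuni Cafe"}]
    }
    assert YelpConnector(http).top_restaurant("SF") == "Zuni Cafe"
    http.get.assert_called_once()
```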
The @pytest.mark.slow marker
Tests that interact with real external services or have long execution times are marked with @pytest.mark.slow. These are skipped when you want a quick feedback loop:
```bash
# Skip slow tests
python run_tests.py --fast

# Or directly with pytest
pytest tests/ -m "not slow"
```
Slow tests are still included in full CI runs and before deployments.
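Marking a test slow is a one-line decorator. The test below is a hypothetical example, not one of ODAI's actual tests:

```python
import pytest

# Hypothetical long-running test: anything that talks to a real external
# service (rather than a mock) gets the slow marker.
@pytest.mark.slow
def test_full_plaid_sync_roundtrip():
    ...  # exercise a real sandbox environment
```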
Parallel execution
The deploy scripts run tests with --workers 8 before every deployment. You can tune this based on your machine:
```bash
# Use 8 parallel workers (used in CI/deploy scripts)
python run_tests.py --workers 8

# Let pytest auto-detect based on available CPUs
pytest tests/ -n auto

# Disable parallelism (useful for debugging)
python run_tests.py --workers 0
```
Parallel execution via pytest-xdist requires tests to be properly isolated. All tests in this suite mock external dependencies, so parallel runs are safe by default.
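Isolation here means no shared files, ports, or global state between tests. One generic pattern (not ODAI-specific) is writing to a per-test temporary directory, so eight xdist workers never collide on disk:

```python
import json
import tempfile
from pathlib import Path

def test_session_export_is_isolated():
    # Each invocation gets its own temp dir, so parallel workers running
    # this test simultaneously never touch the same file.
    with tempfile.TemporaryDirectory() as tmp:
        out = Path(tmp) / "session.json"
        out.write_text(json.dumps({"id": "s1", "messages": []}))
        assert json.loads(out.read_text())["id"] == "s1"
```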
Coverage reports
Running with --coverage produces both a terminal summary and an HTML report:
```bash
python run_tests.py --coverage
```
After the run, open htmlcov/index.html to browse coverage by file. Use --open to launch it automatically:
```bash
python run_tests.py --coverage --open
```
Current coverage targets
| Module | Coverage |
|---|---|
| `services/auth_service.py` | 84% |
| `services/chat_service.py` | 97% |
| `websocket/connection_manager.py` | 100% |
| `websocket/handlers.py` | 96% |
| `firebase/models/*` | High across all models |
| `connectors/utils/*` | High across all utilities |
| Overall | 90%+ |
pytest configuration
The pytest.ini in the project root configures the test environment:
```ini
[tool:pytest]
minversion = 6.0
addopts = -ra -v --tb=short --strict-markers --disable-warnings --timeout=10 --timeout-method=thread
testpaths = tests
timeout = 10
python_files = test_*.py
python_classes = Test*
python_functions = test_*
markers =
    asyncio: marks tests as async (for pytest-asyncio)
    unit: marks tests as unit tests
    integration: marks tests as integration tests
    slow: marks tests as slow running
asyncio_mode = auto
```
Key settings:

- `--timeout=10` — each test is killed after 10 seconds to prevent hangs
- `--strict-markers` — unregistered markers cause an error, keeping the marker list clean
- `asyncio_mode = auto` — all `async def` test functions are automatically treated as async tests without needing `@pytest.mark.asyncio`
- `--tb=short` — concise traceback format for faster scanning of failures
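With auto mode, an async test is just an async function. In this sketch, `fetch_greeting` is an illustrative stand-in for any awaited service call:

```python
import asyncio

# Stand-in for an awaited service call (e.g. an OpenAI or Firebase client).
async def fetch_greeting(name: str) -> str:
    await asyncio.sleep(0)
    return f"Hello, {name}"

# No @pytest.mark.asyncio needed: asyncio_mode = auto picks this up
# because it is an async function whose name matches test_*.
async def test_fetch_greeting():
    assert await fetch_greeting("ODAI") == "Hello, ODAI"
```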