
Testing Overview

Obsidian Chess Studio maintains test suites across both the frontend (React/TypeScript) and the backend (Rust) to ensure reliability and prevent regressions.

Test Statistics

Frontend Tests

  • 173 test files
  • 307 passing tests
  • Vitest with v8 coverage
  • ~22% coverage (growing)

Backend Tests

  • 234 passing tests
  • Cargo test framework
  • Comprehensive coverage
  • 3.44s execution time

Running Tests

Frontend Tests

# Run all tests once
pnpm test
This runs the full frontend suite once with Vitest. Running pnpm vitest instead starts watch mode during development.

Backend Tests

cd src-tauri

# Run all tests
cargo test
Runs all 234 Rust tests (completes in ~3.4 seconds). Pass a name filter, such as cargo test player_stats, to run only matching tests.

Frontend Test Coverage

Current frontend coverage metrics:
Metric        Coverage   Target
Statements    21.06%     60%+
Branches      15.88%     50%+
Functions     17.97%     60%+
Lines         22.03%     60%+

Well-Tested Areas

Excellent coverage for profile-related components:
  • PersonalCard.tsx - Player profile cards
  • ProfileAccountSelector.tsx - Account selection
  • ProfilePanel.tsx - Profile management
  • ProfileSelector.tsx - Profile switching
Example test:
describe('ProfileAccountSelector', () => {
  it('displays connected accounts', () => {
    render(<ProfileAccountSelector />);
    expect(screen.getByText('Lichess')).toBeInTheDocument();
  });
});
Comprehensive coverage for chess engine utilities:
  • Engine configuration
  • UCI command parsing
  • Engine path resolution
  • Process management
Location: src/utils/engines/__tests__/
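As a sketch of what UCI parsing tests exercise (parseInfoLine here is a hypothetical illustration, not the project's actual API), a parser can pull the depth and centipawn score out of an engine "info" line:

```typescript
// Hypothetical sketch of the kind of UCI parsing these utilities cover:
// extract depth and centipawn score from an engine "info" line.
function parseInfoLine(line: string): { depth?: number; cp?: number } {
  const tokens = line.split(/\s+/);
  const out: { depth?: number; cp?: number } = {};
  for (let i = 0; i < tokens.length; i++) {
    if (tokens[i] === "depth") out.depth = Number(tokens[i + 1]);
    if (tokens[i] === "cp") out.cp = Number(tokens[i + 1]);
  }
  return out;
}

console.log(parseInfoLine("info depth 20 seldepth 28 score cp 35"));
// { depth: 20, cp: 35 }
```

Because the parser is a pure function of its input string, tests can feed it canned engine output without spawning a real engine process.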
High coverage for helper utilities:
  • i18n configuration
  • Formatters (dates, chess notation, scores)
  • Environment detection
  • Chess position utilities
Location: src/utils/__tests__/
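Formatters are a good example of why these utilities are cheap to test: they are pure functions. A minimal sketch (formatScore is hypothetical, standing in for the project's score formatters) that converts an engine score in centipawns to a signed pawn-unit string:

```typescript
// Hypothetical pure formatter: converts an engine score in centipawns
// to a signed string in pawn units, e.g. 150 -> "+1.50".
function formatScore(centipawns: number): string {
  const pawns = (centipawns / 100).toFixed(2);
  return centipawns >= 0 ? `+${pawns}` : pawns;
}

// Pure functions need no mocks or rendering to test:
console.log(formatScore(150)); // "+1.50"
console.log(formatScore(-32)); // "-0.32"
```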

Areas Needing Coverage

The following areas have lower coverage and need more tests:
  • File utilities and opening management
  • State management stores (tree, database views)
  • Some utility modules (tabs, storage, logger)
  • Complex UI components

Backend Test Coverage

Well-Tested Modules

Comprehensive coverage for:
  • Player statistics queries
  • Game filtering and search
  • Position caching
  • Bulk insert operations
  • Online sync state
Example:
#[test]
fn test_player_stats_calculation() {
    let stats = calculate_player_stats(player_id, &conn).unwrap();
    assert_eq!(stats.total_games, 150);
    assert!(stats.win_rate > 0.5);
}
Location: src-tauri/src/db/*/tests

Writing Frontend Tests

Test Structure

Frontend tests use Vitest with React Testing Library.
1. Create Test File

Add tests alongside the component:
features/boards/
├── components/
│   └── BoardGame.tsx
└── __tests__/
    └── BoardGame.test.tsx
2. Write Test

import { render, screen, fireEvent } from '@testing-library/react';
import { describe, it, expect, vi } from 'vitest';
import { BoardGame } from '../components/BoardGame';

describe('BoardGame', () => {
  it('renders chess board', () => {
    render(<BoardGame />);
    const board = screen.getByRole('main');
    expect(board).toBeInTheDocument();
  });
  
  it('handles move click', async () => {
    const onMove = vi.fn();
    render(<BoardGame onMove={onMove} />);
    
    const square = screen.getByTestId('square-e2');
    fireEvent.click(square);
    
    expect(onMove).toHaveBeenCalled();
  });
});
3. Run Test

pnpm vitest BoardGame

Testing Patterns

import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';

describe('MyComponent', () => {
  it('renders correctly', () => {
    render(<MyComponent />);
    expect(screen.getByText('Expected Text')).toBeInTheDocument();
  });
  
  it('handles user interaction', async () => {
    const user = userEvent.setup();
    render(<MyComponent />);
    
    await user.click(screen.getByRole('button'));
    expect(screen.getByText('Clicked')).toBeInTheDocument();
  });
});

Writing Backend Tests

Test Structure

Rust tests are typically in the same file as the code:
// src-tauri/src/db/search.rs

pub fn search_position(fen: &str) -> Result<Vec<Game>> {
    // Implementation
}

#[cfg(test)]
mod tests {
    use super::*;
    
    #[test]
    fn test_search_position() {
        let fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1";
        let result = search_position(fen);
        assert!(result.is_ok());
    }
}

Testing Patterns

#[test]
fn test_move_encoding() {
    let san_move = "e4";
    let encoded = encode_move(san_move).unwrap();
    let decoded = decode_move(encoded).unwrap();
    assert_eq!(decoded, san_move);
}

#[test]
fn test_fen_parsing() {
    let fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1";
    let position = parse_fen(fen).unwrap();
    assert_eq!(position.turn(), Color::White);
}
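The round-trip pattern in test_move_encoding works even against a toy codec. The sketch below is an illustration of the pattern only, not Obsidian Chess Studio's real move encoding: it maps a square name like "e4" to a 0-63 board index and back.

```rust
// Toy codec for illustration: a square name maps to a 0-63 index
// (file + rank * 8). Not the project's actual move encoding.
fn encode_square(sq: &str) -> Option<u8> {
    let mut chars = sq.chars();
    let file = chars.next()?;
    let rank = chars.next()?;
    if !('a'..='h').contains(&file) || !('1'..='8').contains(&rank) {
        return None;
    }
    Some((file as u8 - b'a') + (rank as u8 - b'1') * 8)
}

fn decode_square(idx: u8) -> String {
    let file = (b'a' + idx % 8) as char;
    let rank = (b'1' + idx / 8) as char;
    format!("{file}{rank}")
}

#[test]
fn test_square_round_trip() {
    // Every square survives encode -> decode unchanged.
    for idx in 0..64u8 {
        let name = decode_square(idx);
        assert_eq!(encode_square(&name), Some(idx));
    }
    assert_eq!(encode_square("e4"), Some(28)); // file 4 + rank 3 * 8
    assert_eq!(encode_square("i9"), None);     // off the board
}
```

Exhaustively round-tripping all 64 values is cheap here; for larger encodings the same idea becomes a property-based test over sampled inputs.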

Test Helpers

// Common test utilities
#[cfg(test)]
mod test_helpers {
    use super::*;
    
    pub fn establish_test_connection() -> Connection {
        let conn = Connection::open_in_memory().unwrap();
        run_migrations(&conn);
        conn
    }
    
    pub fn create_test_game() -> Game {
        Game {
            id: 1,
            white: "TestPlayer1".to_string(),
            black: "TestPlayer2".to_string(),
            moves: vec!["e4".into(), "e5".into(), "Nf3".into(), "Nc6".into()],
            result: "1-0".to_string(),
        }
    }
}

Continuous Integration

GitHub Actions

Tests run automatically on:
  • Every commit to any branch
  • Pull requests before merge
  • Pre-release builds
Workflow (simplified): .github/workflows/test.yml
name: Test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Frontend tests
        run: pnpm test
      - name: Backend tests
        run: cd src-tauri && cargo test

Coverage Goals

Our goal is to increase coverage to:
  • Frontend: 60%+ statement coverage
  • Backend: Maintain 90%+ coverage
All new features should include tests.

Viewing Coverage

pnpm vitest run --coverage
Prints a per-file coverage summary in the terminal.

Best Practices

Write Tests First

Consider TDD (Test-Driven Development):
  1. Write failing test
  2. Implement feature
  3. Test passes

Test Edge Cases

Don’t just test happy paths:
  • Empty inputs
  • Invalid data
  • Boundary conditions
  • Error cases

Keep Tests Simple

  • One assertion per test (when possible)
  • Clear test names
  • Minimal setup
  • No complex logic in tests

Mock External Dependencies

  • Mock Tauri commands
  • Mock external APIs
  • Use in-memory databases
  • Avoid network calls
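Vitest's vi.mock can stub module imports, but the underlying idea is framework-free: pass the command dispatcher in as a parameter so tests can substitute a fake. A sketch under stated assumptions (winCount and the get_games command are hypothetical, and the dispatcher is simplified to a synchronous signature):

```typescript
// Hypothetical command dispatcher, shaped loosely like a Tauri invoke.
type Invoke = (cmd: string) => { result: string }[];

// Code under test takes the dispatcher as a parameter, so tests can
// swap in a fake instead of reaching a real backend.
function winCount(invoke: Invoke): number {
  return invoke("get_games").filter((g) => g.result === "1-0").length;
}

// The fake returns canned data: no database, no network.
const fakeInvoke: Invoke = (cmd) =>
  cmd === "get_games"
    ? [{ result: "1-0" }, { result: "0-1" }, { result: "1-0" }]
    : [];

console.log(winCount(fakeInvoke)); // 2
```

The same injection style keeps component tests deterministic: the fake's canned data controls exactly what the code under test sees.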

Next Steps

  • Contributing: submit your code with tests
  • Architecture: understand what to test
