The sync_indicators module generates visual diagnostic images that help verify and understand video synchronization results. It detects motion regions, annotates frames with bounding boxes, and creates comparison composites.

generate_sync_indicators

Main entry point for generating all visual sync indicators. Creates annotated images showing detected motion regions and synchronization points.
from src.sync_indicators import generate_sync_indicators

generated_files = generate_sync_indicators(
    video_dir="./videos",
    selected_files=["cam1.mp4", "cam2.mp4", "cam3.mp4"],
    offsets={"cam1.mp4": 0.0, "cam2.mp4": 1.2, "cam3.mp4": 0.5},
    results_dir="./results",
    num_peaks=5
)

print(f"Generated {len(generated_files)} indicator images")

Parameters

  • video_dir (str, required): Directory containing the original video files
  • selected_files (List[str], required): List of video filenames being synchronized
  • offsets (Dict[str, float], required): Dictionary mapping each filename to its computed synchronization offset in seconds
  • results_dir (str, required): Output directory for generated indicator images (typically the project-level /results directory)
  • num_peaks (int, default 5): Number of peak-motion frames to capture and annotate per video

Returns

  • generated_files (List[str]): List of file paths for all generated indicator images

Generated Images

The function creates four types of visualization:

1. Per-Video Motion Peaks

For each video, num_peaks frames with highest motion activity:
  • Filename: {video_name}_motion_peak_{n}.png, where n runs from 1 to num_peaks
  • Content: Annotated frame with bounding boxes around detected motion regions
  • Annotations:
    • Header bar with video name, frame number, and timestamp
    • Cyan bounding boxes around motion regions
    • Intensity percentage labels
    • Region count badge

2. Motion Timeline Strips

Timeline showing all peak motion moments as thumbnails:
  • Filename: {video_name}_motion_timeline.png
  • Content: Horizontal strip with thumbnail of each peak frame
  • Annotations: Timestamp and region count for each peak

3. Sync Point Comparisons

Side-by-side comparison of all videos at corresponding sync points:
  • Filename: sync_comparison_{1-5}.png
  • Content: All videos shown at the same sync-adjusted time
  • Purpose: Verify that motion events align across cameras

4. Offset Summary Chart

Bar chart visualizing computed offsets:
  • Filename: offset_summary.png
  • Content: Horizontal bar chart with color-coded offsets
    • Cyan: Reference video (zero offset)
    • Green: Delayed videos (positive offset)
    • Red: Trimmed videos (negative offset)
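
The color rule above can be sketched as a small helper. This is illustrative only: the cyan and green tuples match the styling constants in the next section, while the red value and the helper name are assumptions, not part of the module's API.

```python
# Illustrative sketch of the offset-to-color rule for the summary chart.
CYAN = (0, 217, 255)   # reference video (zero offset)
GREEN = (0, 204, 102)  # delayed video (positive offset)
RED = (255, 64, 64)    # trimmed video (negative offset) -- assumed value

def offset_bar_color(offset_sec, eps=1e-6):
    """Pick a chart color from the sign of a computed offset."""
    if abs(offset_sec) < eps:
        return CYAN
    return GREEN if offset_sec > 0 else RED
```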

Visual Styling

The indicators use consistent styling matching the project’s brand:
BOX_COLOR_PRIMARY = (0, 217, 255)    # Cyan (#00D9FF) - main boxes
BOX_COLOR_SECONDARY = (0, 204, 102)  # Green (#00CC66) - secondary boxes
TEXT_BG_COLOR = (0, 0, 0)            # Black - label backgrounds
TEXT_COLOR = (255, 255, 255)         # White text
HEADER_COLOR = (0, 119, 255)         # Blue (#0077FF) - header bars
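
The hex codes in the comments correspond directly to the tuples. A small conversion helper (illustrative, not part of the module) makes the correspondence easy to check:

```python
def hex_to_rgb(hex_color: str) -> tuple:
    """Convert a '#RRGGBB' string to the matching color tuple,
    in the same channel order as the constants above."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# e.g. hex_to_rgb("#00D9FF") gives the BOX_COLOR_PRIMARY tuple
```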

Motion Detection Algorithm

The module uses frame differencing to detect motion:
  1. Frame Differencing: Computes absolute difference between consecutive frames
  2. Gaussian Blur: Reduces noise in difference image
  3. Thresholding: Binary threshold to isolate motion pixels
  4. Morphological Operations: Dilates to connect nearby regions
  5. Contour Detection: Finds bounding boxes around motion blobs
  6. Filtering: Removes regions smaller than min_area (default 500 pixels)
  7. Intensity Calculation: Computes mean pixel difference within each region
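
A minimal NumPy-only sketch of steps 1, 3, and 7 (blur, morphology, and contour detection are omitted; the actual module presumably relies on OpenCV for those). Function names and the default threshold are illustrative:

```python
import numpy as np

def frame_difference(curr, prev):
    """Step 1: absolute per-pixel difference between consecutive
    grayscale frames (uint8 arrays)."""
    return np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)

def motion_pixels(diff, thresh=25):
    """Step 3: binary threshold isolating moving pixels."""
    return diff > thresh

def region_intensity(diff, box):
    """Step 7: mean pixel difference inside an (x, y, w, h) box."""
    x, y, w, h = box
    return float(diff[y:y + h, x:x + w].mean())
```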

Usage in Flask UI

The sync indicators are automatically generated after successful synchronization:
src/ui.py
# In the Flask route after sync completes
from src.sync_indicators import generate_sync_indicators

indicator_files = generate_sync_indicators(
    video_dir=config.VIDEO_DIR,
    selected_files=app_state["selected_files"],
    offsets=app_state["offsets"],
    results_dir=config.RESULTS_DIR,
    num_peaks=5
)

# Files are saved to project root /results/ directory
# UI can display or provide download links

Programmatic Usage

# Generate indicators after synchronization
from src.visual_sync import sync_videos_by_motion
from src.sync_indicators import generate_sync_indicators

# 1. Run synchronization
offsets = sync_videos_by_motion(
    video_dir="./videos",
    selected_files=["cam1.mp4", "cam2.mp4"],
    max_offset_sec=20.0
)

# 2. Generate visual indicators
generated = generate_sync_indicators(
    video_dir="./videos",
    selected_files=["cam1.mp4", "cam2.mp4"],
    offsets=offsets,
    results_dir="./results",
    num_peaks=5
)

# 3. Review generated images
for img_path in generated:
    print(f"Generated: {img_path}")

Helper Functions

The module includes several internal helper functions that power the main functionality:

_find_motion_regions

def _find_motion_regions(frame_curr, frame_prev, min_area=500)
Detects motion regions between two consecutive frames using frame differencing. Returns: List of (x, y, w, h, intensity) tuples sorted by intensity

_draw_bounding_boxes

def _draw_bounding_boxes(frame, regions, video_name, frame_idx, timestamp)
Draws styled bounding boxes and annotations on a frame. Returns: Annotated frame as NumPy array

_find_peak_motion_frames

def _find_peak_motion_frames(video_path, num_peaks=5, step=2, min_area=500)
Scans a video and returns frames with highest motion activity, ensuring minimum spacing between peaks. Returns: List of dicts with keys: frame_idx, timestamp, frame, prev_frame, energy, regions
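
The "minimum spacing between peaks" behavior can be sketched as a greedy selection over per-frame motion energies. This is a sketch only; the module's actual spacing rule and defaults may differ:

```python
def select_spaced_peaks(energies, num_peaks=5, min_gap=30):
    """Greedily pick the highest-energy frame indices while keeping
    at least min_gap frames between any two chosen peaks."""
    by_energy = sorted(range(len(energies)), key=lambda i: energies[i], reverse=True)
    chosen = []
    for i in by_energy:
        if all(abs(i - j) >= min_gap for j in chosen):
            chosen.append(i)
        if len(chosen) == num_peaks:
            break
    return sorted(chosen)
```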

_create_comparison_composite

def _create_comparison_composite(frames_by_video, title="")
Creates a side-by-side comparison image from multiple annotated frames. Returns: Composite image as NumPy array

_create_motion_timeline

def _create_motion_timeline(video_path, peak_frames, output_path, video_name)
Creates a timeline strip showing peak motion moments as thumbnails. Returns: None (saves directly to output_path)

_create_offset_summary

def _create_offset_summary(selected_files, offsets, results_dir)
Creates a bar chart visualizing computed offsets for all videos. Returns: None (saves to results_dir/offset_summary.png)

Performance

Indicator generation is CPU-intensive but runs only once, after sync completes. Typical timing for 3 videos (5 minutes each, 1080p):
  • Peak frame detection: ~15-30 seconds
  • Frame annotation: ~2-5 seconds
  • Composite generation: ~1-2 seconds
  • Total: ~20-40 seconds
The bottleneck is video frame reading. The step parameter (default 2) controls frame sampling: higher values process faster but may miss brief motion events.
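
The cost of sampling is simple arithmetic, assuming a constant-fps video (the helper below is illustrative, not part of the module):

```python
def frames_processed(duration_sec: float, fps: float, step: int) -> int:
    """Approximate number of frames examined when sampling every
    `step`-th frame of a constant-fps video."""
    total_frames = int(duration_sec * fps)
    return total_frames // step

# A 5-minute 30 fps video with the default step of 2 examines
# frames_processed(300, 30, 2) of its 9000 frames.
```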

Output Directory

Indicators are saved to the project-level results directory (default: PROJECT_ROOT/results/), which persists across sessions, unlike the temporary sync processing files.
results/
├── cam1_motion_peak_1.png
├── cam1_motion_peak_2.png
├── ...
├── cam1_motion_timeline.png
├── cam2_motion_peak_1.png
├── ...
├── sync_comparison_1.png
├── sync_comparison_2.png
└── offset_summary.png
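
Given this layout, the paths returned by generate_sync_indicators can be bucketed by the filename conventions above. The helper below is illustrative, not part of the module:

```python
from pathlib import Path

def classify_indicator(path: str) -> str:
    """Bucket a generated image by the naming conventions above."""
    name = Path(path).name
    if name == "offset_summary.png":
        return "summary"
    if name.startswith("sync_comparison_"):
        return "comparison"
    if name.endswith("_motion_timeline.png"):
        return "timeline"
    if "_motion_peak_" in name:
        return "peak"
    return "other"
```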

See Also

visual_sync

Motion-based synchronization that feeds into these indicators

Interpreting Results

Guide to understanding sync indicator images
