Overview
AIOX implements a three-layer quality gate system that validates code at multiple checkpoints before it reaches production. Each layer has distinct responsibilities, automation levels, and enforcement mechanisms.
Design Philosophy: Shift quality left. Catch issues as early as possible, when they're cheapest to fix.
Layer 1: Pre-Commit Validation
Purpose
Prevent broken code from ever being committed to the repository.
Enforcement
Local Git Hooks (.aiox-core/hooks/pre-commit)
Automated: Yes (100%)
Agent Responsible: @dev (Dex)
Validation Checks
Linting (ESLint)
Command: `npm run lint`
Purpose: Enforce coding standards and catch common errors.
Checks:
- Code style consistency
- Unused variables
- Potential bugs (e.g., missing await)
- Import order
- Absolute vs relative imports (Constitution: Absolute Imports)
Config: `.eslintrc.js`
Severity: BLOCK (must pass)
Type Checking (TypeScript)
Command: `npm run typecheck`
Purpose: Validate type safety across the codebase.
Checks:
- Type errors
- Missing type definitions
- Incorrect function signatures
- Null/undefined handling
Config: `tsconfig.json`
Severity: BLOCK (must pass)
Unit & Integration Tests
Command: `npm test`
Purpose: Verify functionality and prevent regressions.
Checks:
- All tests pass
- Coverage >= previous level (no regression)
- No flaky tests (consistent results)
Build Verification
Command: `npm run build`
Purpose: Ensure code compiles and builds successfully.
Checks:
- TypeScript compilation
- Asset bundling
- Tree-shaking optimization
- No build warnings
Metrics Collection
Layer 1 runs are tracked in `.aiox/data/quality-metrics.json`.
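A hypothetical entry might look like the following; every field name here is an assumption, and the authoritative shape is defined by the schema in `.aiox-core/quality/schemas/quality-metrics.schema.json`.

```json
{
  "layer": 1,
  "runs": [
    {
      "timestamp": "2025-01-15T10:30:00Z",
      "checks": {
        "lint": "pass",
        "typecheck": "pass",
        "tests": "pass",
        "build": "pass"
      },
      "durationMs": 42000,
      "blocked": false
    }
  ]
}
```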
Bypass (Emergency Only)
Layer 2: PR Automation
Purpose
Automated review and validation in the CI/CD pipeline before human review.
Enforcement
GitHub Actions (.github/workflows/)
CodeRabbit AI Review
Automated: Yes (100%)
Agent Responsible: @devops (Felix) + Quinn (@qa)
Validation Checks
- CI/CD Pipeline
- CodeRabbit Review
- Quinn Review
GitHub Actions Workflow
Multi-Environment Testing:
- Node 18, 20, 22
- Ubuntu, macOS, Windows
- Different dependency versions
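A matrix like the one above can be expressed in a workflow file. The sketch below uses standard GitHub Actions syntax; the filename, job name, and step list are assumptions, not the repository's actual workflow.

```yaml
# Hypothetical .github/workflows/quality.yml
name: quality-gates
on: [pull_request]

jobs:
  quality:
    strategy:
      matrix:
        node-version: [18, 20, 22]
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm test
      - run: npm run build
```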
Auto-Catch Rate
Layer 2 tracks how many issues are caught automatically vs. requiring human intervention.
Layer 3: Human Review
Purpose
Final architectural validation and business logic review by human experts.
Enforcement
GitHub PR Review (required approvals)
Automated: No
Responsible: Human reviewers (architects, senior developers)
Review Criteria
Architectural Alignment
Reviewers: @architect (Aria) + human architects
Checks:
- Design aligns with system architecture
- No architectural anti-patterns introduced
- Dependency directions correct
- Module boundaries respected
- Technical debt documented if introduced
Questions to consider:
- Does this change fit our architectural vision?
- Are there simpler alternatives?
- What are the long-term maintenance implications?
Business Logic Validation
Reviewers: Product team + domain experts
Checks:
- Implementation matches business requirements
- Edge cases align with business rules
- User experience considerations addressed
- Regulatory/compliance requirements met
Questions to consider:
- Does this solve the actual user problem?
- Are there business scenarios not covered?
- What happens if this fails in production?
Security Review
Reviewers: Security team (for sensitive changes)
Checks:
- No hardcoded secrets or credentials
- Input validation on all user data
- Authentication/authorization correct
- Data encryption where required
- Audit logging for sensitive operations
Changes that require security review:
- Authentication/authorization changes
- Database schema changes
- External API integrations
- Payment processing
Knowledge Transfer
Reviewers: Team members who will maintain the code
Checks:
- Code is understandable to team
- Documentation explains “why” not just “what”
- Complex logic has explanatory comments
- Runbook updated if operational changes
Questions to consider:
- Can someone else debug this at 2am?
- Is the reasoning behind decisions documented?
- Are there gotchas that need explanation?
Approval Workflow
Required Approvals: Configurable (default: 1 for standard changes, 2 for architectural)
Review SLA:
- Standard changes: 24 hours
- Urgent hotfixes: 4 hours
- Architectural changes: 48 hours
Metrics Collection
Quality Metrics Dashboard
Overall Quality Trends
Viewing Metrics
Quality Gate Schema
All metrics conform to the schema defined in `.aiox-core/quality/schemas/quality-metrics.schema.json`.
Schema Overview
Metrics Collector API
Programmatic access to quality metrics is available; see `.aiox-core/quality/metrics-collector.js` for full API documentation.
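As an illustration only, a collector of this general shape supports recording gate results and querying pass rates. The class and method names below are assumptions, not the real module's exported API.

```javascript
// Illustrative in-memory sketch; the real API is whatever
// .aiox-core/quality/metrics-collector.js actually exports.
class MetricsCollector {
  constructor() {
    this.runs = [];
  }

  // Record one gate result for a given layer (1, 2, or 3).
  record(layer, gate, passed) {
    this.runs.push({ layer, gate, passed, at: new Date().toISOString() });
  }

  // Fraction of recorded runs for a layer that passed (null if none recorded).
  passRate(layer) {
    const runs = this.runs.filter((r) => r.layer === layer);
    if (runs.length === 0) return null;
    return runs.filter((r) => r.passed).length / runs.length;
  }
}

const collector = new MetricsCollector();
collector.record(1, "lint", true);
collector.record(1, "tests", false);
console.log(collector.passRate(1)); // 0.5
```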
Constitution Enforcement
Quality gates enforce Constitution principles:
| Principle | Layer | Enforcement |
|---|---|---|
| CLI First | L1, L2 | WARN if UI created before CLI functional |
| Agent Authority | All | BLOCK if @dev tries to push (only @devops may push) |
| Story-Driven | L1 | BLOCK if no valid story associated |
| No Invention | L2 (Quinn) | BLOCK if spec contains invented features |
| Quality First | All | BLOCK on any gate failure |
| Absolute Imports | L1 (ESLint) | ERROR on relative imports |
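For example, the Absolute Imports principle maps naturally onto ESLint's built-in `no-restricted-imports` rule. This is a minimal sketch; the project's actual `.eslintrc.js` may configure the rule differently.

```json
{
  "rules": {
    "no-restricted-imports": [
      "error",
      { "patterns": ["./*", "../*"] }
    ]
  }
}
```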
Best Practices
Run Locally Before Push
Always run quality checks locally before pushing (`npm run lint`, `npm run typecheck`, `npm test`, `npm run build`). This catches issues before they trigger CI failures.
Incremental Fixes
Don’t accumulate quality debt:
- Fix linting issues as you code (use IDE integration)
- Write tests alongside implementation (TDD)
- Address CodeRabbit findings immediately
- Don’t defer minor issues to “later”
Meaningful Test Coverage
Quality over quantity: a good test asserts behavior and edge cases, while false coverage merely executes lines without verifying anything.
Review Your Own PRs
Before requesting review:
- Review your own diff on GitHub
- Check for:
- Debug statements left in
- Commented-out code
- TODO comments
- Accidental file inclusions
- Add PR description with context
- Link to story: “Closes #123”
Troubleshooting
Metrics Retention
History is retained for 30 days by default (configurable).
Next Steps
Development Cycle
See how quality gates integrate into the development workflow
Constitution
Review the principles enforced by quality gates
Agent: QA
Learn about Quinn’s review methodology
Metrics Collector
API documentation for programmatic metrics access