Overview
Dependify uses Modal’s serverless container infrastructure to analyze codebases in parallel, detecting outdated syntax patterns across multiple programming languages. The analysis pipeline leverages Groq-hosted LLMs to intelligently identify code that needs modernization.
Architecture
Modal Container Setup
The analysis system runs on Modal with an optimized container configuration:
containers.py
Modal containers are ephemeral and spin up on-demand, providing cost-effective parallel processing without managing infrastructure.
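A container setup like the one in containers.py might be sketched as follows. The app name, image contents, and function options below are assumptions for illustration, not the actual configuration:

```python
# Sketch of a Modal app definition; names and options are illustrative.
import modal

app = modal.App("dependify-analyzer")

# Build the container image once; Modal caches it for subsequent runs.
image = (
    modal.Image.debian_slim(python_version="3.11")
    .pip_install("groq", "supabase")
)

@app.function(image=image, timeout=600)
def analyze_repo(repo_url: str) -> list[dict]:
    """Clone the repository inside the container and scan its files."""
    ...
```

Because the function decorator carries the image and resource settings, each repository analysis gets a fresh container with no infrastructure to manage.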
Analysis Function
The core analysis function clones the repository and scans all of its files:
containers.py:19-51
File Scanning Process
Recursive File Discovery
The analyzer walks the entire repository tree:
checker.py:59-69
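The walk in checker.py:59-69 can be sketched with `os.walk`. The pruning of hidden directories shown here is an assumption based on the filtering rules described below:

```python
import os

def discover_files(repo_root: str) -> list[str]:
    """Recursively collect candidate file paths under repo_root.

    A minimal sketch of the repository walk; the exact pruning rules
    in checker.py are assumptions here.
    """
    paths = []
    for dirpath, dirnames, filenames in os.walk(repo_root):
        # Prune hidden directories (including .git) in place so os.walk
        # never descends into them.
        dirnames[:] = [d for d in dirnames if not d.startswith(".")]
        for name in filenames:
            paths.append(os.path.join(dirpath, name))
    return paths
```

Mutating `dirnames` in place is the idiomatic way to stop `os.walk` from entering excluded directories.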
File Filtering
Certain files are automatically excluded from analysis:
checker.py:144-149
Filtered file types:
- Configuration files (.json, .env, .gitignore)
- Styles (.css)
- Documentation (.md)
- Assets (.svg, .ico)
- Git internals (.git/)
- Hidden files (starting with .)
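A filter implementing the exclusions above might look like this; the exact extension list in checker.py:144-149 may differ:

```python
import os

# Extensions the analyzer skips (assumed from the list above).
SKIPPED_EXTENSIONS = {".json", ".css", ".md", ".svg", ".ico"}

def should_skip(path: str) -> bool:
    """Return True for files the analyzer ignores."""
    name = os.path.basename(path)
    if name.startswith("."):                # hidden files: .env, .gitignore, ...
        return True
    parts = path.replace(os.sep, "/").split("/")
    if ".git" in parts:                     # anything under .git/
        return True
    _, ext = os.path.splitext(name)
    return ext in SKIPPED_EXTENSIONS
```

Note that `.env` and `.gitignore` are caught by the hidden-file check rather than the extension set.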
AI-Powered Detection
LLM Analysis
Each file is analyzed by Groq’s llama-3.1-8b-instant model for fast pattern detection:
checker.py:94-103
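The per-file request might be assembled roughly as below. The prompt wording and parameters are assumptions; only the model name comes from the text above:

```python
MODEL = "llama-3.1-8b-instant"

def build_analysis_request(path: str, source: str) -> dict:
    """Build a chat-completion payload for one file (illustrative prompt)."""
    return {
        "model": MODEL,
        "temperature": 0,  # deterministic output suits pattern detection
        "messages": [
            {
                "role": "system",
                "content": "You detect outdated syntax patterns in source code.",
            },
            {
                "role": "user",
                "content": f"File: {path}\n\n{source}",
            },
        ],
    }
```

The resulting dict is what would be passed to the Groq chat-completions client for each file.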
Structured Output
The LLM returns structured data using Pydantic models:
checker.py:53-57
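The assumed shape of that structured output is sketched below with stdlib dataclasses standing in for the actual Pydantic models (the field names are guesses, not the schema in checker.py:53-57):

```python
from dataclasses import dataclass, field

@dataclass
class OutdatedPattern:
    line: int             # location of the finding in the file
    pattern: str          # e.g. "var declaration"
    suggestion: str       # e.g. "use const/let"

@dataclass
class FileAnalysis:
    path: str
    needs_modernization: bool
    patterns: list[OutdatedPattern] = field(default_factory=list)
```

Constraining the LLM to a schema like this makes the response machine-readable instead of free-form prose.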
Detected Patterns
The analysis identifies various outdated code patterns across:
- JavaScript/TypeScript
- Python
- React
Common Patterns:
- var → const/let
- Promise chains → async/await
- Class components → Function components (React)
- require() → ES6 import
- Callback functions → Promises
- Template strings over concatenation
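For intuition only, a few of these patterns could be spotted with simple regexes. The real detection is done by the LLM; this heuristic merely illustrates the kinds of constructs it is asked to flag:

```python
import re

# Illustrative heuristics, not the actual detection logic.
LEGACY_PATTERNS = {
    "var declaration": re.compile(r"\bvar\s+\w+"),
    "CommonJS require": re.compile(r"\brequire\(\s*['\"]"),
    "promise chain": re.compile(r"\.then\("),
}

def find_legacy_patterns(source: str) -> list[str]:
    """Return the names of legacy patterns present in source."""
    return [name for name, rx in LEGACY_PATTERNS.items() if rx.search(source)]
```

An LLM handles cases regexes cannot, such as distinguishing a class component from an ordinary class.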
Real-Time Status Updates
Analysis progress is broadcast via Supabase real-time updates:
checker.py:107-125
Status updates appear in real-time on the dashboard, showing which file is currently being analyzed.
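The row written to Supabase for each file might look like the payload below; the column names and table schema are assumptions, not what checker.py:107-125 actually writes:

```python
from datetime import datetime, timezone

def status_update(job_id: str, current_file: str, done: int, total: int) -> dict:
    """Build an assumed per-file status row for the real-time channel."""
    return {
        "job_id": job_id,
        "current_file": current_file,
        "progress": round(done / total, 2) if total else 0.0,
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }
```

Upserting a row like this after each file is what lets the dashboard show which file is currently being analyzed.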
Performance Characteristics
Speed
- Small repos (< 50 files): 30-60 seconds
- Medium repos (50-200 files): 1-3 minutes
- Large repos (200+ files): 3-8 minutes
Scalability
Modal’s serverless architecture enables:
- Automatic scaling: Containers spin up based on demand
- No cold starts: Warm containers remain active during analysis
- Cost efficiency: Pay only for compute time used
Error Handling
checker.py:127-134
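The error handling in checker.py:127-134 presumably isolates failures per file so that one unreadable file does not abort the whole run. A sketch of that pattern, with illustrative helper names:

```python
def analyze_all(files, analyze_one):
    """Run analyze_one over every file, collecting failures instead of raising."""
    results, errors = [], []
    for path in files:
        try:
            results.append(analyze_one(path))
        except Exception as exc:  # collect the failure and keep going
            errors.append((path, str(exc)))
    return results, errors
```

The caller can then report partial results alongside the list of files that failed.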
API Integration
The analysis is triggered from the main FastAPI server:
server.py:171-178
Next Steps
AI Refactoring
Learn how detected files are refactored using AI
Real-Time Tracking
See how progress is tracked live