Overview
Porffor’s ahead-of-time (AOT) compilation model means optimization happens at compile time, not runtime. Understanding how to write code that compiles efficiently is key to achieving maximum performance. This guide covers compiler optimization flags, coding patterns that optimize well, and techniques to avoid.
Compiler Optimization Levels
Porffor provides multiple optimization levels that control compilation behavior.
-O0: No Optimization
- Fastest compilation
- Largest code size
- Easiest to debug
- Predictable output
Use when:
- Debugging compilation issues
- Testing compiler behavior
- Developing and iterating quickly
-O1: Basic Optimization (Default)
- Instruction simplification
- Dead code elimination
- Wasm import tree-shaking
- Basic constant folding
Use when:
- General development
- Balanced compilation speed and performance
- First production builds
-O2: Advanced Optimization
- All -O1 optimizations
- Partial evaluation (Cyclone optimizer)
- Advanced constant folding
- More aggressive inlining
- Type-based optimizations
Use when:
- Production builds
- Performance is critical
- You’ve tested for correctness
Advanced Optimizer Flags
Cyclone: Partial Evaluator
Cyclone performs compile-time evaluation and simplification:
- Evaluates constant expressions at compile time
- Simplifies known branches
- Eliminates dead code paths
- Optimizes type switches
- Reduces local variable usage
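As a concrete sketch of what partial evaluation does (an illustrative source-level transformation, not Porffor's actual Wasm output):

```typescript
// Before: the compiler sees constant subexpressions and a known-false branch.
function areaBefore(r: number): number {
  const pi = 3.14159;
  const debug = false;   // known constant
  if (debug) {           // known-false branch: dead code path
    return -1;
  }
  return pi * r * r;     // pi can be folded into the multiply
}

// After: what partial evaluation conceptually reduces it to.
function areaAfter(r: number): number {
  return 3.14159 * r * r;  // branch eliminated, constant inlined
}
```

Both functions compute the same result; the second is what the compiler effectively executes.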
Profile-Guided Optimization (PGO)
PGO is available for native compilation only.
1. Porffor instruments your code with profiling hooks
2. You run the binary with a representative workload
3. Profiling data identifies hot paths and type patterns
4. The compiler optimizes based on actual usage
Best for:
- Long-running applications
- Predictable workloads
- CPU-intensive code
- Applications with clear hot paths
Fast Length Optimization
Non-compliant optimization for faster .length access:
- Caches array/string length
- Skips dynamic length updates in some cases
- Assumes length doesn’t change unexpectedly
- Faster array iteration (5-10%)
Risks:
- May break code that modifies length dynamically
- Not spec-compliant
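A hedged sketch of the kind of code that could misbehave under this flag: the loop mutates the array's length while iterating against it (the function is illustrative, not from Porffor's test suite):

```typescript
// Removes adjacent duplicates in place. arr.length shrinks mid-loop,
// so an optimizer that caches the length and skips dynamic updates
// could read past the new end of the array.
function dedupeInPlace(arr: number[]): number[] {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === arr[i + 1]) {
      arr.splice(i, 1);  // length changes here
      i--;
    }
  }
  return arr;
}
```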
Writing Optimizable Code
Prefer Numbers Over Mixed Types
Porffor optimizes numeric code especially well:
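A sketch of the difference (function names are illustrative, not Porffor API):

```typescript
// Monomorphic numeric code: every operand stays a number,
// so the compiler can emit direct numeric Wasm operations.
function sumNumbers(values: number[]): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// Mixed types force runtime type checks on every iteration.
function sumMixed(values: (number | string)[]): number {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    const v = values[i];
    total += typeof v === 'number' ? v : parseFloat(v);
  }
  return total;
}
```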
Use ByteString When Possible
ByteStrings (ASCII/Latin-1) are twice as memory-efficient as regular strings:
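Porffor chooses the representation automatically, so the practical rule is to keep hot-path strings within Latin-1. An illustrative sketch (the `isLatin1Only` helper is hypothetical, just to show which strings qualify):

```typescript
// Latin-1-only string: can be stored as a ByteString (1 byte per character).
const compact = 'hello, world';

// A character above U+00FF forces the regular 2-byte-per-character encoding.
const wide = 'hello, world ✓';

// Hypothetical check for ByteString eligibility.
function isLatin1Only(s: string): boolean {
  for (let i = 0; i < s.length; i++) {
    if (s.charCodeAt(i) > 0xff) return false;
  }
  return true;
}
```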
Cache Array Length
Manually cache the length of frequently-accessed arrays:
- Cached length: ~2-5% faster
- With --fast-length: difference is negligible
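The pattern looks like this (a sketch; both functions are illustrative):

```typescript
// Uncached: arr.length is re-read on every iteration.
function sumUncached(arr: number[]): number {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

// Cached: the length is read once before the loop.
function sumCached(arr: number[]): number {
  let total = 0;
  const len = arr.length;  // cached once
  for (let i = 0; i < len; i++) total += arr[i];
  return total;
}
```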
Avoid Object-Heavy Code
Objects require more complex code generation:
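Where possible, pass plain numbers instead of objects. A hedged sketch:

```typescript
// Object-heavy: each access goes through property lookup machinery.
function magnitudeObj(p: { x: number; y: number }): number {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

// Leaner: plain numeric parameters compile to direct local accesses.
function magnitude(x: number, y: number): number {
  return Math.sqrt(x * x + y * y);
}
```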
Hoist Invariants Out of Loops
Move loop-invariant computations outside the loop. The Cyclone optimizer (-O2) performs some of this automatically, but explicit hoisting is still beneficial.
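A sketch of the transformation (function names illustrative):

```typescript
// Before: Math.sqrt(limit) is loop-invariant but recomputed every iteration.
function countBelowUnhoisted(values: number[], limit: number): number {
  let count = 0;
  for (let i = 0; i < values.length; i++) {
    if (values[i] < Math.sqrt(limit)) count++;
  }
  return count;
}

// After: the invariant is computed once, outside the loop.
function countBelowHoisted(values: number[], limit: number): number {
  const threshold = Math.sqrt(limit);  // hoisted
  let count = 0;
  for (let i = 0; i < values.length; i++) {
    if (values[i] < threshold) count++;
  }
  return count;
}
```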
Use Specific Loop Patterns
Porffor optimizes standard loop patterns best:
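Assuming a plain counted for loop is the "standard pattern" meant here, a sketch of the contrast:

```typescript
const data = [1, 2, 3, 4, 5];

// A plain counted for loop is the most predictable pattern for the compiler.
let total = 0;
for (let i = 0; i < data.length; i++) {
  total += data[i];
}

// Higher-order iteration routes through function calls, which is
// harder to compile down to a tight Wasm loop.
const totalViaReduce = data.reduce((acc, v) => acc + v, 0);
```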
Minimize Dynamic Property Access
Direct access is faster than computed properties:
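A sketch of the two access styles (the types and function names are illustrative):

```typescript
interface Point { x: number; y: number }

// Direct access: property names are known at compile time.
function sumDirect(p: Point): number {
  return p.x + p.y;
}

// Computed access: the property name is only resolved at runtime.
function getDynamic(p: Point, key: keyof Point): number {
  return p[key];
}
```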
TypeScript Optimization
When using TypeScript, add type annotations for better optimization. Typed code lets the compiler:
- Skip runtime type checks
- Generate specialized code paths
- Optimize numeric operations
- Improve array access patterns
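A minimal sketch, assuming ordinary TypeScript annotations (the function is illustrative):

```typescript
// With annotations, every operand is known to be numeric, so the
// compiler can emit specialized numeric code with no runtime checks.
function scale(values: number[], factor: number): number[] {
  const out: number[] = [];
  for (let i = 0; i < values.length; i++) {
    out.push(values[i] * factor);
  }
  return out;
}
```

Without the annotations, the same function would have to handle any operand type.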
Native Compilation Optimizations
When compiling to native binaries, combine multiple optimization levels:
Maximum Performance
- Porffor optimizations (-O2): Wasm-level optimization
- 2c compilation: Efficient Wasm-to-C translation
- C compiler optimizations (--cO=Ofast): Maximum native optimization
Balanced Build
- Reasonable compilation time
- Strong performance improvements
- Better stability than Ofast
Development Build
- Fast compilation
- Debuggable output
- Quick iteration
Benchmarking Your Code
Always measure performance before and after optimizing.
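A minimal benchmark.js sketch (the `fib` workload and iteration count are illustrative; substitute your own hot path):

```typescript
// A CPU-bound workload to time.
function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const ITERATIONS = 10;
const start = Date.now();
let result = 0;
for (let i = 0; i < ITERATIONS; i++) {
  result = fib(20);
}
const elapsed = Date.now() - start;
console.log(`fib(20) = ${result}, ${ITERATIONS} runs in ${elapsed}ms`);
```

Run the same script under each configuration you want to compare, and repeat runs to average out noise.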
Avoiding Performance Pitfalls
Don’t Optimize Prematurely
- Write correct code first
- Profile to find bottlenecks
- Optimize hot paths only
- Measure impact
Avoid These Patterns
Avoid eval and Function constructor
eval and the Function constructor are not supported in AOT compilation. Use static functions instead.
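A sketch of the substitution:

```typescript
// Not AOT-compilable: the source is only known at runtime.
// const add = new Function('a', 'b', 'return a + b');
// eval('add(1, 2)');

// AOT-friendly: a static function the compiler can see and optimize.
function add(a: number, b: number): number {
  return a + b;
}
```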
Avoid complex scope chains
Porffor has limited scope support. Use parameters and return values instead.
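A sketch of the two styles (names illustrative):

```typescript
// Closure-based: the captured variable needs a scope chain,
// which Porffor may not fully support.
function makeCounterClosure() {
  let count = 0;
  return function () {
    count++;
    return count;
  };
}

// AOT-friendly: state is passed explicitly via parameters and return values.
function increment(count: number): number {
  return count + 1;
}
```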
Avoid heavy async patterns
Promise and await have known bugs. Prefer synchronous code for production use.
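A sketch of converting an async helper into its synchronous equivalent (names illustrative):

```typescript
// Async version: relies on Promise machinery, which has known bugs in Porffor.
async function parseListAsync(raw: string): Promise<number[]> {
  return raw.split(',').map(Number);
}

// Sync version: identical logic without Promises or await.
function parseList(raw: string): number[] {
  return raw.split(',').map(Number);
}
```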
Optimization Checklist
Before shipping, verify:
- Production builds use -O2
- Hot paths use numeric types, not mixed types
- Strings on hot paths are ASCII/Latin-1 (ByteString-eligible)
- Array lengths are cached in hot loops
- Loop invariants are hoisted
- No eval, deep scope chains, or heavy async patterns
- Performance was measured before and after changes
Performance Expectations
Compared to Interpreted JavaScript
Porffor (with optimization) is typically:
- 5-50x faster than basic interpreters
- 2-10x faster than Node.js without JIT
Compared to JIT JavaScript
Porffor native binaries:
- Similar to, or slower than, V8/SpiderMonkey with JIT (after warmup)
- Much faster for short-running scripts (no warmup time)
- More consistent performance (no deoptimization)
Compared to Hand-Written Wasm
Porffor-generated Wasm:
- 80-95% of hand-optimized Wasm performance
- Much faster to develop than hand-written Wasm
- Good for most use cases
Next Steps
- Debugging: Debug optimized code when things go wrong
- TypeScript Support: Use types for better optimization
- Native Compilation: Build optimized native binaries
- Profiling: Profile your code for bottlenecks