What it does
This example provides a high-quality translation CLI with:

- Automatic language detection - Intelligently detects input language and translates accordingly
- Superior performance - chrF++ 34.61 / BLEU 13.21 on the Flores-200 benchmark, outperforming Google’s Gemma-3 4B and Alibaba’s Qwen3 4B
- Efficient inference - Runs on modest hardware with merged adapters for speed
- Easy-to-use CLI - Simple command-line interface powered by Fire
This project was built and released by Kiwoong Yeom with the support of Maxime Labonne. View the original announcement on LinkedIn.
Performance benchmarks
Tested on the Flores-200 benchmark (1,012 samples):

| Model | Parameters | chrF++ | BLEU |
|---|---|---|---|
| LFM2-KoEn-v8-rl | 1.2B | 34.61 | 13.21 |
| Gemma-3-4B | 4B | 32.83 | 11.36 |
| Qwen3-4B | 4B | 25.62 | 7.46 |
Quick start
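The quick-start commands were not captured in this section; a plausible minimal setup, assuming a `translate.py` entry point and `fire`/`transformers`/`torch` as dependencies (the script name and package list are assumptions, not the project's documented steps):

```shell
# Hypothetical setup; the script name and package list are assumptions.
pip install fire transformers torch

# Translate a sentence; the direction is auto-detected from the input
python translate.py translate "안녕하세요"
```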
Understanding the architecture
The system uses a two-stage training approach:

- Supervised fine-tuning (SFT): 280K high-quality Korean-English parallel sentence pairs establish the translation foundation
- Reinforcement learning (RL): GRPO optimization with 10K additional samples refines translation quality
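The RL stage's group-relative scoring can be sketched as follows. This is a minimal illustration of GRPO's advantage computation, not the project's actual training code; the reward values and function name are my own.

```python
# Sketch of GRPO's group-relative advantage: sample several candidate
# translations per prompt, score each (e.g. against a reference), then
# normalize each reward against its own sampling group.
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize one prompt's candidate rewards to zero mean, unit variance."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Candidates scoring above their group's mean get a positive advantage,
# so the policy is pushed toward them and away from below-average ones.
adv = group_relative_advantages([0.30, 0.45, 0.60])
```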
Model components
- Base model: `gyung/lfm2-1.2b-koen-mt-v6.4-merged` - SFT fine-tuned LFM2 1.2B
- Adapter: `gyung/lfm2-1.2b-koen-mt-v8-rl-10k-adapter` - LoRA adapter trained with GRPO
- Automatic detection: Regular expression pattern matching for Korean text (Hangul syllables, Jamo)
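The detection step above can be sketched with the two Unicode ranges mentioned (a minimal sketch; the project's exact pattern and function name are not shown here, so both are assumptions):

```python
import re

# Hangul syllables (U+AC00-U+D7A3) and Hangul Jamo (U+1100-U+11FF);
# the exact ranges the project matches are an assumption.
KOREAN_RE = re.compile(r"[\u1100-\u11FF\uAC00-\uD7A3]")

def detect_direction(text: str) -> str:
    """Return the translation direction based on whether Korean is present."""
    return "ko->en" if KOREAN_RE.search(text) else "en->ko"
```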
CLI usage
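A Fire-based interface for this kind of tool can be sketched as below. The class name, method signature, and injectable stub are my assumptions, not the project's actual code; the real implementation would load the merged model in the constructor.

```python
# Hypothetical sketch of a Fire-powered translation CLI.
import re
import sys

# Hangul syllables (U+AC00-U+D7A3) and Jamo (U+1100-U+11FF); ranges assumed.
KOREAN_RE = re.compile(r"[\u1100-\u11FF\uAC00-\uD7A3]")

class TranslatorCLI:
    """Fire exposes each public method as a CLI subcommand."""

    def __init__(self, translate_fn=None):
        # The real project would load the merged LFM2 model here; this
        # sketch takes an injectable function so it runs without weights.
        self._translate = translate_fn or (lambda text, direction: text)

    def translate(self, text: str) -> str:
        """Detect the input language, then translate in that direction."""
        direction = "ko->en" if KOREAN_RE.search(text) else "en->ko"
        return self._translate(text, direction)

# Dispatch to Fire only when a subcommand is given on the command line,
# e.g. `python cli.py translate "Hello"`.
if __name__ == "__main__" and len(sys.argv) > 1:
    import fire  # third-party: pip install fire
    fire.Fire(TranslatorCLI)
```

Fire maps the `translate` method to a `translate` subcommand automatically, which is what makes the command-line surface this small.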
Example usage
Further improvements
Next steps for enhanced performance and efficiency:

- Speed optimization with quantization techniques (GGUF, AWQ, GPTQ)
- llama.cpp integration for faster CPU inference
- Full parameter RL training with expanded compute resources
- Length normalization removal based on recent Qwen team findings
- Extended dataset training with 200K SFT + 25K RL samples
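The length-normalization point above can be illustrated with a toy objective (my sketch of the general issue, not the cited findings themselves; the function name is hypothetical):

```python
# Sketch of why per-sequence length normalization matters in RL training:
# dividing each response's summed token score by its own length makes a
# short and a long response with the same per-token score contribute
# identically, diluting each token's weight in longer responses.
def sequence_objective(token_scores: list[float], length_normalized: bool) -> float:
    total = sum(token_scores)
    return total / len(token_scores) if length_normalized else total
```

Removing the per-sequence division makes every token carry the same weight regardless of how long its response is.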
Performance optimization
The current implementation uses adapter merging for faster inference. Future improvements include:

- Quantized model variants for resource-constrained environments
- Streaming inference for real-time translation
- Batch processing for large document translation