reval is a pure Go library for computing standard evaluation metrics used in information retrieval and natural language generation. It has zero external dependencies — only the Go standard library.
The module path is github.com/itsubaki/reval. All exported functions live in the reval package.

When to use reval
reval is useful when you need to measure the quality of systems that rank or generate text:
- Search engines — measure how well results match user intent with Precision@K, MAP, and NDCG
- Recommendation systems — evaluate ranked item lists against known relevant items
- Text summarizers — compare generated summaries to reference text using ROUGE
- RAG pipelines — assess retrieval quality and generation quality in a single library
Metric families
reval provides five metric families:

| Family | Functions | Use case |
|---|---|---|
| Precision / MAP | Precision, AveragePrecision, MeanAveragePrecision | Ranking quality — how many top results are relevant |
| Recall | Recall | Coverage — how many relevant items were retrieved |
| NDCG | NDCG, DCG | Graded ranking quality with position discounting |
| ROUGE | ROUGE1, ROUGEL, ROUGELsum | Text overlap between generated and reference text |
| BERTScore | BERTScore | Semantic similarity using dense vector embeddings |
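To make the ranking metrics concrete, here is a minimal self-contained sketch of Precision@K and DCG in plain Go. The function names and signatures below are illustrative only, not reval's actual API:

```go
package main

import (
	"fmt"
	"math"
)

// precisionAtK returns the fraction of the top k results that are
// relevant. rels holds binary relevance judgments in rank order
// (1 = relevant, 0 = not). Assumes k >= 1.
func precisionAtK(rels []int, k int) float64 {
	if k > len(rels) {
		k = len(rels)
	}
	hits := 0
	for _, r := range rels[:k] {
		if r > 0 {
			hits++
		}
	}
	return float64(hits) / float64(k)
}

// dcg computes Discounted Cumulative Gain over graded relevance
// scores, discounting position i (1-based) by log2(i + 1).
func dcg(gains []float64) float64 {
	sum := 0.0
	for i, g := range gains {
		sum += g / math.Log2(float64(i)+2)
	}
	return sum
}

func main() {
	rels := []int{1, 0, 1, 0, 1}
	fmt.Printf("Precision@3 = %.4f\n", precisionAtK(rels, 3)) // 2 of top 3 are relevant
	fmt.Printf("DCG         = %.4f\n", dcg([]float64{3, 2, 0, 1}))
}
```

NDCG is then the DCG of the actual ranking divided by the DCG of the ideal (relevance-sorted) ranking, which normalizes the score to [0, 1].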
Module path
Install and import the library using the module path github.com/itsubaki/reval.
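Assuming standard Go module tooling, installation follows the usual `go get` pattern:

```shell
go get github.com/itsubaki/reval
```

The package is then imported as `import "github.com/itsubaki/reval"` and referenced as `reval` in code.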
Next steps
Quickstart
Install reval and compute your first metric in minutes.
Precision & MAP
Precision@K, Average Precision, and Mean Average Precision.
ROUGE
ROUGE-1, ROUGE-L, and ROUGE-Lsum for text overlap scoring.
BERTScore
Semantic similarity scoring with dense vector embeddings.
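As a reference point for the ROUGE family, ROUGE-1 reduces to unigram overlap between a candidate and a reference text. The sketch below computes a ROUGE-1 F1 score with clipped counting; it is a standalone illustration, and the `rouge1` name is hypothetical rather than reval's API:

```go
package main

import (
	"fmt"
	"strings"
)

// rouge1 returns the ROUGE-1 F1 score: the harmonic mean of unigram
// precision and recall between a candidate and a reference text.
func rouge1(candidate, reference string) float64 {
	cand := strings.Fields(strings.ToLower(candidate))
	ref := strings.Fields(strings.ToLower(reference))
	if len(cand) == 0 || len(ref) == 0 {
		return 0
	}
	// Count reference unigrams, then consume matches so each
	// reference occurrence is credited at most once (clipping).
	counts := map[string]int{}
	for _, w := range ref {
		counts[w]++
	}
	overlap := 0
	for _, w := range cand {
		if counts[w] > 0 {
			counts[w]--
			overlap++
		}
	}
	p := float64(overlap) / float64(len(cand))
	r := float64(overlap) / float64(len(ref))
	if p+r == 0 {
		return 0
	}
	return 2 * p * r / (p + r)
}

func main() {
	fmt.Printf("ROUGE-1 = %.4f\n", rouge1("the cat sat", "the cat sat on the mat"))
}
```

ROUGE-L follows the same precision/recall/F1 shape but scores the longest common subsequence instead of unigram overlap.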