reval is a pure Go library for computing standard evaluation metrics used in information retrieval and natural language generation. It has zero external dependencies — only the Go standard library.
The module path is github.com/itsubaki/reval. All exported functions live in the reval package.

When to use reval

reval is useful when you need to measure the quality of systems that rank or generate text:
  • Search engines — measure how well results match user intent with Precision@K, MAP, and NDCG
  • Recommendation systems — evaluate ranked item lists against known relevant items
  • Text summarizers — compare generated summaries to reference text using ROUGE
  • RAG pipelines — assess retrieval quality and generation quality in a single library
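To make the text-overlap use case concrete, here is a minimal from-scratch sketch of ROUGE-1 (unigram overlap scored as an F1 of precision and recall). The rouge1 helper below is illustrative only — it shows what the metric computes, not reval's exported API, whose signatures are not listed on this page:

```go
package main

import (
	"fmt"
	"strings"
)

// rouge1 computes a ROUGE-1 F1 score: unigram overlap between a
// candidate text and a reference, balancing precision and recall.
func rouge1(candidate, reference string) float64 {
	cand := strings.Fields(strings.ToLower(candidate))
	ref := strings.Fields(strings.ToLower(reference))

	// Count reference unigrams, then consume them with candidate matches
	// so repeated words are only credited as often as they appear.
	counts := map[string]int{}
	for _, w := range ref {
		counts[w]++
	}
	overlap := 0
	for _, w := range cand {
		if counts[w] > 0 {
			counts[w]--
			overlap++
		}
	}
	if overlap == 0 {
		return 0
	}
	precision := float64(overlap) / float64(len(cand))
	recall := float64(overlap) / float64(len(ref))
	return 2 * precision * recall / (precision + recall)
}

func main() {
	// 5 of 6 unigrams match in each direction, so F1 = 5/6.
	fmt.Printf("%.4f\n", rouge1("the cat sat on the mat", "the cat lay on the mat")) // prints 0.8333
}
```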

Metric families

reval provides five metric families:
  • Precision / MAP (Precision, AveragePrecision, MeanAveragePrecision) — ranking quality: how many top results are relevant
  • Recall (Recall) — coverage: how many relevant items were retrieved
  • NDCG (NDCG, DCG) — graded ranking quality with position discounting
  • ROUGE (ROUGE1, ROUGEL, ROUGELsum) — text overlap between generated and reference text
  • BERTScore (BERTScore) — semantic similarity using dense vector embeddings
All ranking metrics accept graded (integer) relevance scores, not just binary labels. This lets you express degrees of relevance rather than a simple relevant/not-relevant judgment.
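The graded-relevance idea is easiest to see with NDCG. The sketch below is not reval's API (the library's signatures are not shown on this page); it computes NDCG directly from a slice of integer relevance grades, assuming the common 2^rel - 1 gain with a log2 position discount:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// dcg computes Discounted Cumulative Gain over graded relevance scores
// in ranked order, using gain 2^rel - 1 and a log2 position discount.
func dcg(rels []int) float64 {
	var sum float64
	for i, r := range rels {
		gain := math.Pow(2, float64(r)) - 1
		sum += gain / math.Log2(float64(i)+2) // positions are 1-based
	}
	return sum
}

// ndcg normalizes DCG by the ideal ordering (grades sorted descending),
// so a perfect ranking scores exactly 1.0.
func ndcg(rels []int) float64 {
	ideal := append([]int(nil), rels...)
	sort.Sort(sort.Reverse(sort.IntSlice(ideal)))
	if d := dcg(ideal); d > 0 {
		return dcg(rels) / d
	}
	return 0
}

func main() {
	// Graded relevance: 3 = highly relevant, 0 = not relevant.
	fmt.Printf("%.4f\n", ndcg([]int{3, 2, 3, 0, 1, 2})) // prints 0.9488
	fmt.Printf("%.4f\n", ndcg([]int{3, 3, 2, 2, 1, 0})) // ideal order, prints 1.0000
}
```

Because the grades are integers rather than booleans, swapping a "highly relevant" item into a lower position lowers the score more than demoting a "somewhat relevant" one.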

Module path

Install and import the library using the module path github.com/itsubaki/reval:
go get github.com/itsubaki/reval
import "github.com/itsubaki/reval"
The library requires Go 1.24.5 or later.

Next steps

Quickstart

Install reval and compute your first metric in minutes.

Precision & MAP

Precision@K, Average Precision, and Mean Average Precision.

ROUGE

ROUGE-1, ROUGE-L, and ROUGE-Lsum for text overlap scoring.

BERTScore

Semantic similarity scoring with dense vector embeddings.
