Detect hallucinations in AI responses with confidence
PAS2 (Paraphrase-based Approach for Scrutinizing Systems) is a hallucination detection system that uses semantic paraphrasing and multi-model verification to identify factual inconsistencies in LLM responses.
How it works
PAS2 sends semantically equivalent variations of your query to an LLM, then uses a judge model to analyze the responses for factual inconsistencies. When an AI hallucinates, it often gives different answers to the same question asked in different ways.
Quick start
Get up and running with PAS2 in under 5 minutes
How it works
Understand the paraphrase-based detection system
API reference
Explore the complete API documentation
Configuration
Customize detection parameters and models
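The paraphrase-and-judge flow described above can be sketched in a few lines of Python. This is a minimal illustration, not the PAS2 implementation: the JSON reply, the canned answers, and the string-similarity "judge" are stand-ins for the real Mistral Large and o3-mini calls.

```python
import json
from difflib import SequenceMatcher

def parse_paraphrases(json_reply: str, original: str) -> list[str]:
    """Parse a JSON-mode paraphrase reply; always keep the original query."""
    return [original] + json.loads(json_reply)["paraphrases"]

def consistency_score(answers: list[str]) -> float:
    """Mean pairwise string similarity; the real judge reasons semantically."""
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Simulated JSON-mode output from the paraphrase model:
reply = '{"paraphrases": ["Which author is Hamlet by?", "Hamlet was written by whom?"]}'
queries = parse_paraphrases(reply, "Who wrote Hamlet?")

# Simulated model responses (one inconsistent answer signals a hallucination):
answers = {
    "Who wrote Hamlet?": "William Shakespeare",
    "Which author is Hamlet by?": "William Shakespeare",
    "Hamlet was written by whom?": "Christopher Marlowe",
}
score = consistency_score([answers[q] for q in queries])
hallucination_suspected = score < 0.8  # threshold is illustrative
```

Consistent answers score near 1.0; a divergent answer pulls the score down and flags the response for closer review.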
Key features
Multi-model architecture
PAS2 uses Mistral Large to generate responses and OpenAI’s o3-mini as an independent judge to detect hallucinations. This separation ensures unbiased analysis.
Paraphrase generation
Automatically generates semantically equivalent variations of queries using Mistral’s JSON mode. Each paraphrase preserves the original meaning while varying structure and wording.
Real-time progress tracking
Visual feedback during analysis with detailed progress updates for paraphrase generation, response collection, and judgment phases.
Detailed analysis output
Persistent feedback storage
Built-in SQLite database stores detection results and user feedback, with support for Hugging Face Spaces persistent storage.
Use cases
QA systems
Validate factual accuracy in customer support bots
Content generation
Verify consistency in AI-generated articles
Research tools
Ensure reliability in AI research assistants
Educational apps
Detect misinformation in tutoring systems
Data extraction
Validate AI-extracted facts from documents
Fact-checking
Cross-verify AI claims automatically
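The persistent feedback storage listed under Key features can be sketched with Python's built-in `sqlite3` module. The table name and columns below are assumptions for illustration, not the actual PAS2 schema:

```python
import sqlite3

def init_db(path: str = "pas2_feedback.db") -> sqlite3.Connection:
    """Create the results table if needed. On Hugging Face Spaces, point
    `path` at the persistent /data volume so results survive restarts."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS detections (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               query TEXT NOT NULL,
               consistency_score REAL,
               hallucination_suspected INTEGER,
               user_feedback TEXT
           )"""
    )
    return conn

conn = init_db(":memory:")  # in-memory database for this demo
conn.execute(
    "INSERT INTO detections (query, consistency_score, hallucination_suspected, user_feedback)"
    " VALUES (?, ?, ?, ?)",
    ("Who wrote Hamlet?", 0.62, 1, "judge verdict looks correct"),
)
conn.commit()
rows = conn.execute("SELECT query, hallucination_suspected FROM detections").fetchall()
```

Storing the score and verdict alongside user feedback lets you audit the judge's accuracy over time.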
Get started
PAS2 requires API keys for both Mistral AI and OpenAI. The system uses parallel API calls for efficient processing.
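The parallel API calls mentioned above can be sketched with the standard-library `concurrent.futures` module. The environment-variable names and the `fetch_response` stub below are assumptions, standing in for however PAS2 actually reads keys and calls the Mistral API:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Assumed env var names; see the configuration page for the actual ones.
MISTRAL_API_KEY = os.environ.get("MISTRAL_API_KEY", "")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")

def fetch_response(query: str) -> str:
    """Stub for a single model call; the real version hits the Mistral API."""
    return f"answer to: {query}"

queries = [
    "Who wrote Hamlet?",
    "Which author is Hamlet by?",
    "Hamlet was written by whom?",
]

# Issue all paraphrased queries concurrently; pool.map preserves input order.
with ThreadPoolExecutor(max_workers=len(queries)) as pool:
    responses = list(pool.map(fetch_response, queries))
```

Running the paraphrased queries in parallel keeps total latency close to that of a single call rather than growing with the number of paraphrases.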