Evaly provides detailed analytics to help you understand test performance, identify learning gaps, and improve future assessments.

Overview Analytics

Get a high-level view of test results:

Participation

  • Total participants
  • Completed vs in-progress
  • Completion rate percentage

Score Statistics

  • Average score
  • Median score
  • Highest and lowest scores
  • Score distribution

Time Analysis

  • Average time spent
  • Time per section
  • Completion patterns

Question Insights

  • Hardest questions
  • Easiest questions
  • Success rates

Participant Results

Detailed results for each participant:

Results Table

result: {
  participantId: string,
  participantName: string,
  participantImage?: string,
  totalScore: 85,           // Points earned
  maxPossibleScore: 100,    // Total points available
  percentage: 85,           // Calculated percentage
  isCompleted: true,        // All sections finished
  completedSectionsCount: 4,
  completedAt: timestamp    // When they finished
}

Sorting and Filtering

  • Default Sort: Highest score first
  • Filter by Status: Completed only, in-progress, all
  • Search: Find participants by name or email
  • Export: Download results as CSV or Excel
Participants are only included in analytics if they have started at least one section.

Score Calculation

Evaly calculates scores based on section scoring mode:

Percentage Mode (Default)

// Each question worth 1 point
maxScore = 1

// Score calculation
if (answer.isCorrect) {
  score = 1
} else {
  score = 0
}

// Percentage
percentage = (totalScore / totalQuestions) * 100

Point-Based Mode

// Custom point values per question
maxScore = question.pointValue || 1

// Partial credit possible
score = grade.finalScore // 0 to maxScore

// Percentage
percentage = (totalScore / maxPossibleScore) * 100
Scoring mode is set per section. When a test mixes modes, each section contributes its own points and maximum to the combined total, so mixed-mode totals and percentages remain accurate.
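As a sketch of how mixed-mode totals combine, the two modes above can be folded into one pass over the sections. The `Section` and `Answer` shapes and the `totalScore` helper are illustrative assumptions, not Evaly's actual API:

```typescript
// Illustrative sketch only: field names (mode, pointValue, finalScore)
// are assumptions for this example, not Evaly's real data model.
type Answer = { isCorrect: boolean; finalScore?: number; pointValue?: number };
type Section = { mode: "percentage" | "points"; answers: Answer[] };

function totalScore(sections: Section[]): { score: number; max: number; percentage: number } {
  let score = 0;
  let max = 0;
  for (const section of sections) {
    for (const a of section.answers) {
      if (section.mode === "percentage") {
        // Percentage mode: each question is worth exactly 1 point
        max += 1;
        score += a.isCorrect ? 1 : 0;
      } else {
        // Point-based mode: custom point values, partial credit allowed
        const questionMax = a.pointValue ?? 1;
        max += questionMax;
        score += a.finalScore ?? 0;
      }
    }
  }
  return { score, max, percentage: max > 0 ? (score / max) * 100 : 0 };
}
```

Each section adds its own points and maximum to the running totals, which is why a test can mix both modes without distorting the final percentage.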

Score Distribution

Visualize how participants performed across score ranges:
scoreDistribution: {
  "0-20": 2,    // 2 participants scored 0-20%
  "20-40": 5,
  "40-60": 12,
  "60-80": 18,
  "80-100": 15
}
A bar chart of the distribution helps you:
  • Spot score clustering at a glance
  • Detect bimodal distributions
  • Judge whether the test is too easy or too hard
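The buckets above can be produced with one pass over the percentage scores. This sketch assumes each lower bound is inclusive and 100% falls in the top bucket; the `scoreDistribution` helper is hypothetical, not Evaly's implementation:

```typescript
// Illustrative bucketing of percentage scores into the five ranges shown above.
function scoreDistribution(percentages: number[]): Record<string, number> {
  const buckets: Record<string, number> = {
    "0-20": 0, "20-40": 0, "40-60": 0, "60-80": 0, "80-100": 0,
  };
  for (const p of percentages) {
    if (p < 20) buckets["0-20"]++;
    else if (p < 40) buckets["20-40"]++;
    else if (p < 60) buckets["40-60"]++;
    else if (p < 80) buckets["60-80"]++;
    else buckets["80-100"]++; // 100% lands in the top bucket
  }
  return buckets;
}
```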

Question Difficulty Analysis

Identify which questions were hardest and easiest:

Success Rate Calculation

// For each question
successRate = (correctAttempts / totalAttempts) * 100

questionStats: {
  questionId: string,
  question: "What is the capital of France?",
  successRate: 93.0,      // 93% got it right (53 / 57)
  totalAttempts: 57,
  correctAttempts: 53
}

Hardest Questions

Questions with lowest success rates:
hardestQuestions: [
  {
    question: "Explain quantum entanglement",
    successRate: 22.8,
    totalAttempts: 57,
    correctAttempts: 13
  },
  // Top 5 hardest questions
]
Very low success rates (below 20%) may indicate poorly worded questions, incorrect answer keys, or content not covered in instruction.

Easiest Questions

Questions with highest success rates:
easiestQuestions: [
  {
    question: "What is 2 + 2?",
    successRate: 98.2,
    totalAttempts: 57,
    correctAttempts: 56
  },
  // Top 5 easiest questions
]
Very high success rates (above 95%) may indicate questions are too easy or testing basic knowledge.
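The success-rate formula and the hardest/easiest rankings above can be sketched together. The `QuestionStats` shape is inferred from the examples; the helpers themselves are illustrative, not Evaly's API:

```typescript
// Illustrative helpers; the QuestionStats shape mirrors the examples above.
type QuestionStats = {
  question: string;
  successRate: number;
  totalAttempts: number;
  correctAttempts: number;
};

// successRate = (correctAttempts / totalAttempts) * 100
function successRate(correct: number, total: number): number {
  return total > 0 ? (correct / total) * 100 : 0;
}

// Lowest success rates first = hardest; highest first = easiest.
function rankQuestions(stats: QuestionStats[], n = 5) {
  const sorted = [...stats].sort((a, b) => a.successRate - b.successRate);
  return { hardest: sorted.slice(0, n), easiest: sorted.slice(-n).reverse() };
}
```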

Section Analysis

Breakdown of performance by test section:
sectionAnalysis: [
  {
    sectionId: string,
    sectionTitle: "Multiple Choice",
    averageScore: 78.5,       // Percentage
    questionsCount: 20,
    completedBy: 57           // Participants who finished
  },
  // All sections
]
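A section's `averageScore` can be sketched as the mean of participants' percentage scores for that section, rounded to one decimal as in the example above. This helper is hypothetical, not Evaly's implementation:

```typescript
// Illustrative: mean of per-participant percentage scores for one section,
// rounded to one decimal place (e.g. 78.5).
function sectionAverage(percentages: number[]): number {
  if (percentages.length === 0) return 0;
  const sum = percentages.reduce((s, p) => s + p, 0);
  return Math.round((sum / percentages.length) * 10) / 10;
}
```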

Section Insights

  • Difficulty Comparison: Compare average scores across sections
  • Time Analysis: See which sections took longest
  • Completion Rates: Identify dropout points
  • Question Distribution: Balance question counts

Answer Distribution

For multiple-choice questions, see how participants answered:
// For each question
optionsAnswer: {
  "option_1": 23,  // 23 participants chose option 1
  "option_2": 34,  // 34 chose option 2 (correct)
  "option_3": 8,
  "option_4": 2
}

Distractor Analysis

Understand why wrong answers were chosen:
  • Popular Distractors: Wrong answers chosen frequently indicate common misconceptions
  • Unused Options: Options rarely chosen may not be plausible enough
  • Pattern Analysis: Detect if participants are guessing randomly
Good distribution across options indicates quality distractors:
Option A (correct): 45%
Option B: 25%
Option C: 20%
Option D: 10%
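The distractor check can be sketched as a share calculation over the `optionsAnswer` counts, flagging options chosen too rarely to be plausible. The helpers and the 5% threshold are assumptions for illustration, not Evaly settings:

```typescript
// Illustrative distractor analysis over optionsAnswer-style counts.
// The 5% "weak distractor" threshold is an assumption, not an Evaly default.
function optionShares(counts: Record<string, number>): Record<string, number> {
  const total = Object.values(counts).reduce((s, n) => s + n, 0);
  const shares: Record<string, number> = {};
  for (const [option, n] of Object.entries(counts)) {
    shares[option] = total > 0 ? Math.round((n / total) * 100) : 0;
  }
  return shares;
}

function weakDistractors(counts: Record<string, number>, minShare = 5): string[] {
  const shares = optionShares(counts);
  return Object.keys(shares).filter((o) => shares[o] < minShare);
}
```

Run against the example counts above, `option_4` (2 of 67 responses, about 3%) would be flagged as a distractor worth revising.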

Progress Tracking

Monitor ongoing test progress:
progress: {
  workingInProgress: 15,     // Currently taking test
  submissions: 42,           // Completed all sections
  averageTime: 2847,         // Seconds
  completionRate: 74         // Percentage who finished
}

Metrics Explained

  • Working in Progress: Participants who started but haven’t completed all sections
  • Submissions: Participants who completed all sections
  • Average Time: Mean completion time for finished participants (in seconds)
  • Completion Rate: (Submissions / Total Participants) × 100
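The completion-rate formula above, sketched directly (a hypothetical helper, not Evaly's code):

```typescript
// Completion Rate = (Submissions / Total Participants) × 100, rounded.
function completionRate(submissions: number, totalParticipants: number): number {
  return totalParticipants > 0
    ? Math.round((submissions / totalParticipants) * 100)
    : 0;
}
```

With the example progress object above (42 submissions, 15 still working, 57 total), this yields the reported 74%.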

Exporting Data

Download analytics data for external analysis:

CSV Export

Download participant results:
  • Name, email, score, percentage
  • Section-by-section breakdown
  • Time spent per section
  • Question-level responses

Excel Export

Formatted spreadsheet with:
  • Multiple sheets (overview, results, questions)
  • Charts and visualizations
  • Pivot table ready format

Best Practices

Analyzing Results

  • Look at the median score, not just the average (outliers can skew the average)
  • Compare section averages to identify difficult content areas
  • Review the hardest questions for clarity and accuracy
  • Check whether the score distribution is normal or skewed
  • Investigate very high or very low overall averages

Improving Questions

  • Revise questions with very low success rates
  • Remove or replace questions with near-100% success rates
  • Update distractors that are never chosen
  • Adjust point values based on question difficulty
  • Balance section difficulty for fair assessment

Question Discrimination

  • Questions with 30-70% success rates discriminate well
  • Below 30% may be too hard or unclear
  • Above 70% may be too easy for assessment purposes
  • Check answer distribution for guessing patterns
  • Use insights to improve future question writing

Tracking Over Time

  • Export data after each test for records
  • Compare results across test versions
  • Track improvement over time for repeat assessments
  • Use section analysis to guide curriculum decisions
  • Share aggregate (not individual) data with stakeholders
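The median-versus-average advice is easy to see with a small sample: one outlier drags the mean well below the median. These helpers are illustrative:

```typescript
// Mean: sum divided by count.
function mean(xs: number[]): number {
  return xs.reduce((s, x) => s + x, 0) / xs.length;
}

// Median: middle value of the sorted list (average of the two middles for even counts).
function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}
```

For scores of 80, 82, 85, 88, and one outlier of 5, the mean is 68 while the median stays at 82, a much better summary of typical performance.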

Comprehensive Analytics Query

The analytics engine provides all metrics in a single query:
getComprehensiveAnalytics({
  testId
}) → {
  testTitle: string,
  overview: {
    totalParticipants: number,
    completedParticipants: number,
    completionRate: number,
    averageScore: number,
    medianScore: number,
    highestScore: number,
    lowestScore: number,
    averageTimeSpent: number
  },
  scoreDistribution: {...},
  hardestQuestions: [...],
  easiestQuestions: [...],
  sectionAnalysis: [...]
}
Analytics update in real-time as participants complete sections and answers are graded.

Next Steps

Manual Grading

Grade open-ended questions to complete results

Live Monitoring

Monitor tests in real-time during administration

Export Results

Download data for external analysis

Improve Tests

Use analytics to refine your assessments
