Evaly’s manual grading system allows you to review participant submissions for open-ended questions, assign scores, provide feedback, and manage the grading workflow efficiently.

When Manual Grading is Needed

Certain question types cannot be automatically graded and require manual review:

Text Field

Short answer and essay responses need human evaluation for content, reasoning, and accuracy.

File Upload

Uploaded documents, images, or other files must be reviewed manually.

Audio Response

Recorded audio answers require listening and evaluation.

Video Response

Video submissions need viewing and assessment.
Questions marked as “Needs Review” appear in the grading queue until you assign a final score.
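The queue rule above can be sketched as a small predicate. This is an illustrative sketch only: the question-type identifiers below are assumptions, not Evaly's exact internal names.

```typescript
// Sketch: deciding which answers enter the grading queue.
// Type names are illustrative, not Evaly's exact identifiers.
type QuestionType =
  | "multipleChoice"
  | "textField"
  | "fileUpload"
  | "audioResponse"
  | "videoResponse";

const MANUAL_TYPES: ReadonlySet<QuestionType> = new Set([
  "textField",
  "fileUpload",
  "audioResponse",
  "videoResponse",
]);

// An answer stays in the queue until an organizer assigns a final score.
function needsManualReview(type: QuestionType, hasFinalScore: boolean): boolean {
  return MANUAL_TYPES.has(type) && !hasFinalScore;
}
```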

Accessing Participant Submissions

View all submissions for a test:
1. Navigate to Results: Go to the test results page and select a participant to grade.
2. View Submission: See all sections, questions, answers, and current grades for that participant.
3. Grade Questions: Review each answer and assign scores with optional feedback.
4. Track Progress: Monitor grading progress with the completion percentage.

Submission View Structure

The submission view organizes data by sections:
submission: {
  participant: {
    _id: string,
    name: string,
    email: string,
    image?: string
  },
  sections: [
    {
      section: Section,
      attempt: TestAttempt,
      questions: [
        {
          question: Question,
          answer: TestAttemptAnswer,
          grade?: QuestionGrade
        }
      ]
    }
  ]
}
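A common use of this structure is finding how many answers still lack a grade. The sketch below walks the shape shown above; the nested types are simplified placeholders, not the full `Section`, `TestAttempt`, or `Question` models.

```typescript
// Sketch: counting ungraded answers in a submission.
// Field names mirror the schema above; nested types are simplified.
interface GradedQuestion {
  question: { _id: string };
  answer: { _id: string };
  grade?: { finalScore: number }; // absent until graded
}

interface SubmissionSection {
  section: { title: string };
  questions: GradedQuestion[];
}

interface Submission {
  participant: { _id: string; name: string; email: string };
  sections: SubmissionSection[];
}

function countPending(submission: Submission): number {
  return submission.sections
    .flatMap((s) => s.questions)
    .filter((q) => q.grade === undefined).length;
}
```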

Section Organization

  • Section Header: Section title and description
  • Section Attempt: Start time, finish time, duration
  • Questions List: All questions in order with answers and grades

Grading Interface

For each question requiring manual grading:

Question Display

  • Question Text: Full question with any media (images, audio)
  • Point Value: Maximum points available
  • Participant Answer: Text response, uploaded file, or media recording

Grading Controls

Assign a score between 0 and the maximum:
// Validation
if (score < 0 || score > maxScore) {
  throw new ConvexError({
    message: `Score must be between 0 and ${maxScore}`
  });
}
Maximum score is determined by the section’s scoring mode (1 point in percentage mode, or custom points in point-based mode).
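The same validation can be run outside Convex. This standalone sketch substitutes a plain `Error` for `ConvexError`, which is the only change from the check above.

```typescript
// Standalone version of the score validation, using a plain Error
// in place of ConvexError so it runs outside Convex.
function validateScore(score: number, maxScore: number): number {
  if (Number.isNaN(score) || score < 0 || score > maxScore) {
    throw new Error(`Score must be between 0 and ${maxScore}`);
  }
  return score;
}
```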

Grade Data Model

Each graded answer creates or updates a grade record:
questionGrade: {
  questionId: string,
  participantId: string,
  testAttemptId: string,
  testAttemptAnswerId: string,
  testId: string,
  
  // Scoring
  autoScore?: number,        // Auto-graded score (if applicable)
  manualScore: number,       // Your assigned score
  finalScore: number,        // Manual score overrides auto score
  maxScore: number,          // Maximum possible
  
  // Grading metadata
  feedback?: string,
  gradedBy: string,          // Organizer who graded
  gradedAt: number,          // Timestamp
  needsReview: boolean,      // Still needs review
  isOverridden: boolean      // True when a manual score is present
}

Score Priority

When both auto and manual scores exist:
// Manual score always takes precedence
finalScore = manualScore ?? autoScore

// Example: Override incorrect auto-grading
autoScore = 0        // AI marked wrong
manualScore = 1      // You marked correct
finalScore = 1       // Manual score wins
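The precedence rule is exactly the nullish-coalescing expression above, wrapped as a helper for clarity:

```typescript
// Manual score, when present, always overrides the auto score.
// An answer with neither score remains ungraded (undefined).
function resolveFinalScore(
  autoScore: number | undefined,
  manualScore: number | undefined
): number | undefined {
  return manualScore ?? autoScore;
}
```

Note that `??` (not `||`) is the right operator here: a manual score of `0` is a deliberate grade and must still override the auto score.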

Grading Workflow

Creating New Grades

When grading a previously ungraded answer:
await gradeAnswer({
  testAttemptAnswerId,  // string (required)
  manualScore,          // number (required)
  feedback,             // string (optional)
  needsReview           // boolean (optional)
})
// Creates new questionGrade record
// Returns: { gradeId, updated: false }

Updating Existing Grades

When regrading an already graded answer:
await gradeAnswer({
  testAttemptAnswerId,  // string (required)
  manualScore,          // number: new score
  feedback,             // string: updated feedback (optional)
  needsReview           // boolean: updated review status (optional)
})
// Updates existing questionGrade record
// Preserves autoScore if it existed
// Returns: { gradeId, updated: true }
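The create-or-update behaviour can be modelled as a simple upsert. The sketch below keeps grades in an in-memory `Map` rather than Convex, and the `needsReview` default of `false` is an assumption; only the branching logic mirrors the documented behaviour.

```typescript
// In-memory sketch of gradeAnswer's create-or-update (upsert) logic.
// The real mutation persists to Convex; this models only the branching.
interface QuestionGrade {
  gradeId: string;
  testAttemptAnswerId: string;
  autoScore?: number;   // preserved across regrades
  manualScore: number;
  finalScore: number;   // manual score wins
  feedback?: string;
  needsReview: boolean;
}

const grades = new Map<string, QuestionGrade>();
let nextId = 1;

function gradeAnswer(args: {
  testAttemptAnswerId: string;
  manualScore: number;
  feedback?: string;
  needsReview?: boolean;
}): { gradeId: string; updated: boolean } {
  const existing = grades.get(args.testAttemptAnswerId);
  if (existing) {
    // Regrade: keep autoScore, overwrite the manual fields.
    existing.manualScore = args.manualScore;
    existing.finalScore = args.manualScore;
    existing.feedback = args.feedback;
    existing.needsReview = args.needsReview ?? false;
    return { gradeId: existing.gradeId, updated: true };
  }
  const grade: QuestionGrade = {
    gradeId: `grade_${nextId++}`,
    testAttemptAnswerId: args.testAttemptAnswerId,
    manualScore: args.manualScore,
    finalScore: args.manualScore,
    feedback: args.feedback,
    needsReview: args.needsReview ?? false,
  };
  grades.set(args.testAttemptAnswerId, grade);
  return { gradeId: grade.gradeId, updated: false };
}
```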

Grading Progress Tracking

Monitor overall grading progress for a test:
getGradingProgress({ testId }) → {
  totalAnswers: 285,      // All answers in test
  gradedAnswers: 142,     // Answers with grades
  pendingAnswers: 143,    // Still need grading
  progress: 50            // Percentage complete
}
Progress percentage is calculated as (gradedAnswers / totalAnswers) × 100, rounded to the nearest whole percent.
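The figures above reduce to a pure function. Rounding to the nearest whole percent is inferred from the example (142 / 285 ≈ 49.8, shown as 50):

```typescript
// The grading-progress calculation as a pure function.
function computeProgress(totalAnswers: number, gradedAnswers: number) {
  return {
    totalAnswers,
    gradedAnswers,
    pendingAnswers: totalAnswers - gradedAnswers,
    // Guard the empty-test case; round to the nearest whole percent.
    progress: totalAnswers === 0 ? 0 : Math.round((gradedAnswers / totalAnswers) * 100),
  };
}
```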

Progress Indicators

  • Fully Graded: All answers have grades, no review flags
  • Partially Graded: Some answers graded, others pending
  • Needs Review: Grades exist but needsReview flags set
  • Not Started: No grades assigned yet

Bulk Grading Strategies

Review all participant responses to the same question:
  • Ensures consistent grading standards
  • Easier to compare answer quality
  • Develop rubric as you grade
  • Adjust scores if standards shift
Review all questions for one participant:
  • See full context of participant’s work
  • Provide comprehensive feedback
  • Assess overall understanding
  • Better for small participant counts
Focus on one section at a time:
  • Balance between question and participant approaches
  • Complete sections one by one
  • Good for tests with distinct section topics

Scoring Modes and Grading

Percentage Mode

Each question worth 1 point:
maxScore = 1
// Award 0, 0.5, or 1 point
// Partial credit possible

manualScore: 0.5  // Half credit
manualScore: 1    // Full credit

Point-Based Mode

Custom point values per question:
// 5-point question
maxScore = 5
manualScore: 0    // No credit
manualScore: 2.5  // Half credit
manualScore: 5    // Full credit

// 10-point question
maxScore = 10
manualScore: 7    // 70% credit
Partial credit is always possible in manual grading, regardless of scoring mode.
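Both modes reduce to the same ratio, score / maxScore, so percent credit is computed identically whether maxScore is 1 or a custom point value. A minimal sketch:

```typescript
// Percent credit is mode-independent: score / maxScore works for
// percentage mode (maxScore = 1) and point-based mode alike.
function percentCredit(manualScore: number, maxScore: number): number {
  if (maxScore <= 0) throw new Error("maxScore must be positive");
  return (manualScore / maxScore) * 100;
}
```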

Feedback Best Practices

  • Be specific about what was correct/incorrect
  • Explain the reasoning behind the score
  • Reference source material for learning
  • Encourage improvement without being discouraging
  • Use positive language even for low scores
  • Develop scoring criteria before grading
  • Apply same standards to all participants
  • Document rubric for future reference
  • Adjust all grades if rubric changes
  • Share rubric with participants if appropriate
  • Create feedback templates for common issues
  • Use short codes or references (“See rubric item 3”)
  • Balance detail with grading speed
  • Prioritize feedback for close-call scores
  • Save detailed feedback for struggling participants
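The template-and-short-code suggestion above could be as simple as a lookup table. The codes and wording below are purely illustrative:

```typescript
// Sketch of short-code feedback templates; codes and text are illustrative.
const FEEDBACK_TEMPLATES: Record<string, string> = {
  R3: "See rubric item 3: the answer omits the required justification.",
  CITE: "Reference the source material to support this claim.",
};

// Expand a known short code to its template; pass free text through unchanged.
function expandFeedback(input: string): string {
  return FEEDBACK_TEMPLATES[input] ?? input;
}
```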

Releasing Results

Control when participants see their grades:
// Hide results until grading complete
resultsReleased: false

// Show results including grades and feedback
resultsReleased: true
Only release results after completing all grading to ensure fairness and avoid partial visibility.

Partial Grading

If you release results before grading is complete:
  • Auto-graded questions show scores immediately
  • Manually graded questions show “Pending” until graded
  • Participant’s total score updates as you grade
  • Feedback becomes visible when you submit it
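The running total during partial grading follows directly from the list above: pending answers contribute nothing until a grade exists, so the total rises as you grade. A minimal sketch:

```typescript
// Sketch: a participant's running total during partial grading.
// An answer with no finalScore is pending and contributes 0.
interface AnswerState { finalScore?: number }

function runningTotal(answers: AnswerState[]): number {
  return answers.reduce((sum, a) => sum + (a.finalScore ?? 0), 0);
}
```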

Regrading Scenarios

Correcting Errors

If you made a grading mistake:
  1. Find the participant’s submission
  2. Update the score and feedback
  3. Existing grade is updated (not duplicated)
  4. Participant’s total score recalculates automatically

Adjusting Rubric

If you change grading standards mid-way:
  1. Update rubric documentation
  2. Regrade already-graded submissions with new standards
  3. Ensure all submissions use same rubric
  4. Consider announcing rubric change to participants

Answer Key Corrections

If the answer key was wrong:
  1. Update the question’s correct answer
  2. System detects existing grades
  3. Confirm regrade of all affected submissions
  4. Auto-grading runs again with correct answer
  5. Manual grades can be adjusted if needed
Automatic regrading only affects auto-graded questions. Manual grades must be updated individually.
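An answer-key correction interacts with the score-priority rule described earlier: the auto score is recomputed, but a manual override continues to win. A sketch of that interaction, using simplified grade records:

```typescript
// Sketch: regrading after an answer-key correction.
// The auto score is replaced, but any manual override still wins
// via the manualScore ?? autoScore precedence rule.
interface Grade {
  autoScore?: number;
  manualScore?: number;
  finalScore?: number;
}

function regrade(grade: Grade, newAutoScore: number): Grade {
  return {
    ...grade,
    autoScore: newAutoScore,
    finalScore: grade.manualScore ?? newAutoScore,
  };
}
```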

Next Steps

View Analytics

Analyze overall test performance and results

Export Results

Download graded results for records

Release Results

Share grades and feedback with participants

Question Types

Learn about different question types
