Automatic vs Manual Grading
Understanding which questions are automatically graded helps you plan your grading workload.
Auto-Graded Question Types
These questions are graded instantly when participants submit:
Multiple Choice
Compares selected options to correct answers. Supports single or multiple selection.
Yes or No
Simple binary choice with predefined correct answer.
Image Choice
Multiple choice with images, same scoring logic.
Audio Choice
Multiple choice with audio, same scoring logic.
Fill in the Blank
Exact text matching (case-insensitive) for each blank.
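The matching rule above can be sketched as a small function. This is illustrative only; `blank_is_correct` is a hypothetical helper, not part of Evaly, and the whitespace trimming is an assumption:

```python
def blank_is_correct(response, answer):
    """Case-insensitive exact match for one blank, ignoring surrounding
    whitespace. Hypothetical helper illustrating the rule above."""
    return response.strip().lower() == answer.strip().lower()
```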
Ranking (Exact)
Full credit only if entire order matches exactly.
Ranking (Partial Credit)
- Automatically graded with partial credit
- Each item in correct position earns proportional points
- Example: 3/5 correct positions = 60% of points
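The proportional-credit rule works out like this (a minimal sketch; `ranking_partial_score` is a hypothetical helper, not Evaly's code):

```python
def ranking_partial_score(submitted, correct, max_points):
    """Partial credit for ranking: each item in its correct position
    earns an equal share of the question's points. Hypothetical helper."""
    matches = sum(1 for s, c in zip(submitted, correct) if s == c)
    return max_points * matches / len(correct)

# 3 of 5 positions correct -> 60% of the question's points
```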
Manually Graded Question Types
These questions require educator review:
Text Field
Short or long text responses. Requires reading and judgment.
File Upload
Documents, images, code, presentations. Requires content review.
Audio Response
Voice recordings. Requires listening and evaluation.
Video Response
Video submissions. Requires watching and assessment.
Matching Pairs
Complex matching logic. Requires review (for now).
Slider Scale
Subjective ratings. Typically used for surveys.
Likert Scale
Opinion scales. Typically used for feedback.
Tests with manually graded questions cannot show full results to participants until you complete grading. Participants see a “Grading in Progress” message.
Grading Workflow
Follow this workflow to efficiently grade participant submissions.
Step 1: Navigate to Results
Step 2: Select a Participant
Results View Shows:
- Participant name and email
- Completion status
- Current score (for auto-graded questions)
- Pending manual reviews
- Submission time
Sort Options:
- By submission time (newest first)
- By name (alphabetical)
- By score (if partially graded)
- By grading status (needs review first)
Step 3: Review Submission
The submission view displays:
- Section by Section
- Question View
Navigate through test sections:
- Section Title displayed at top
- All questions in that section
- Auto-graded results shown with checkmarks/x’s
- Manual review questions highlighted
- Section score calculated as you grade
Step 4: Grade Manual Questions
For questions requiring manual review:
Review Answer
Read/view/listen to the participant’s submission:
- Text: Read response in rich text viewer
- Files: Download and review files
- Audio: Play audio with playback controls
- Video: Watch video with playback controls
Assign Score
Enter the score based on your rubric:
- Percentage mode: Enter 0 or 1 (0% or 100%)
- Point-based mode: Enter 0 to max points for question
- Partial credit: Decimal values allowed (e.g., 7.5/10)
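The entry rules above amount to a simple range check (a sketch under assumptions; `validate_score` is a hypothetical helper, not part of Evaly):

```python
def validate_score(score, max_points):
    """Accept a manually entered score if it lies between 0 and the
    question's maximum; decimals are allowed. Hypothetical helper."""
    if not 0 <= score <= max_points:
        raise ValueError(f"score must be between 0 and {max_points}")
    return score
```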
Add Feedback (Optional)
Provide comments to help the participant:
- Explain score
- Highlight strengths
- Suggest improvements
- Clarify misconceptions
Flag for Review (Optional)
Mark question for follow-up:
- Unclear answer
- Possible plagiarism
- Technical issue
- Needs second opinion
Step 5: Complete All Questions
- Grade all manual questions for the participant
- Review auto-graded results (can override if needed)
- Ensure all questions have final scores
- Progress indicator updates in real-time
Step 6: Move to Next Participant
- Click Next Participant or return to results list
- Repeat grading workflow
- Track overall grading progress
Scoring Modes
How scores are calculated depends on the section’s scoring mode.
Percentage Mode
Each question is worth 1 point, regardless of complexity. Calculation:
- Assign 0 or 1 for each question
- Partial credit: 0.5 = half credit
- Final score is percentage
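The percentage-mode arithmetic can be sketched as follows (`percentage_mode_score` is a hypothetical helper for illustration):

```python
def percentage_mode_score(question_scores):
    """Percentage mode: every question is worth 1 point, so the final
    score is the average of per-question credits (0, 0.5, or 1),
    expressed as a percentage. Illustrative sketch only."""
    return 100 * sum(question_scores) / len(question_scores)

# e.g. credits [1, 0.5, 0, 1] -> 62.5
```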
Point-Based Mode
Questions have custom point values based on importance/difficulty. Calculation:
- Each question shows its point value
- Enter score from 0 to max points
- Decimals allowed (e.g., 3.5/5)
- System calculates percentage
Setting Point Values:
- Edit question in Questions page
- Find “Point Value” field
- Enter custom value (default: 1)
- Save question
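The point-based calculation can be sketched as (`point_based_score` is a hypothetical helper for illustration):

```python
def point_based_score(earned, possible):
    """Point-based mode: sum the points earned, divide by the total
    possible, and express as a percentage. Illustrative sketch only."""
    return 100 * sum(earned) / sum(possible)

# e.g. earned [3.5, 5, 2] out of [5, 5, 5] -> 70.0
```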
Example Point Distribution
Overriding Auto-Grades
You can manually change auto-graded results if needed.
When to Override
- Answer is technically correct but marked wrong
- Multiple interpretations are valid
- Spelling/formatting issue in fill-in-blank
- Want to award partial credit
- Technical glitch affected grading
How to Override
Add Explanation
Explain why you’re overriding:
- Helps you remember later
- Useful for grade appeals
- Maintains transparency
Overriding marks the grade as manually scored, so it won’t be affected by auto-regrading if the question is edited.
Regrading After Question Changes
If you modify a question’s correct answer after participants have submitted, you can regrade affected submissions.
Triggering Regrade
When you change correct answers on a question:
- System detects existing grades
- Shows affected submissions count
- Prompts: “Regrade X affected answers?”
- Choose to regrade or skip
Regrade Process
Choose Action
- Regrade All: Applies new scoring to all submissions
- Skip: Keeps old scores, only affects new submissions
- Cancel: Don’t save question changes
What regrading does:
- Applies to all submissions for that question
- Uses the new correct answer
- Recalculates scores automatically
What regrading skips:
- Manual overrides (preserved)
- Questions you manually graded already
- Other questions in the test
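The regrade behavior, with manual overrides preserved, can be sketched like this (the field names `manually_scored`, `answer`, and `score` are assumptions, not Evaly's data model):

```python
def regrade(submissions, new_answer, grade_fn):
    """Re-score submissions against a changed correct answer.
    Manual overrides are skipped, so educator-entered scores survive.
    Field names are assumptions, not Evaly's actual data model."""
    for sub in submissions:
        if sub.get("manually_scored"):
            continue  # manual overrides are preserved
        sub["score"] = grade_fn(sub["answer"], new_answer)
    return submissions
```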
Providing Effective Feedback
Feedback helps participants learn from their mistakes and improves future performance.
Feedback Best Practices
Be Specific
Good: “Your explanation of mitosis was accurate, but you forgot to mention cytokinesis.”
Poor: “Good job” or “Needs work”
Be Constructive
Good: “Consider organizing your essay with clear topic sentences for each paragraph.”
Poor: “Your writing is messy.”
Highlight Strengths
Acknowledge what they did well:
- “Excellent analysis of the data”
- “Your introduction was compelling”
- “Great use of examples”
Guide Improvement
Give actionable advice:
- “Review pages 45-48 in the textbook”
- “Practice similar problems in chapter 3”
- “See me during office hours to discuss”
Feedback Templates
Create reusable feedback for common situations:
- Missing Key Concept
- Incomplete Answer
- Excellent Work
- Off Topic
Rubric-Based Grading
Use consistent criteria for manual grading.
Grading Progress Tracking
Monitor your grading workload. Progress Dashboard:
- ✅ Fully Graded: 52 participants
- 🟡 Partially Graded: 18 participants
- 🔴 Not Started: 15 participants
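Tallying participants for a dashboard like the one above could look like this (the status labels and `grading_status` field are assumptions for illustration):

```python
from collections import Counter

def grading_progress(participants):
    """Count participants per grading status for a progress dashboard.
    Status labels are assumptions, not Evaly's actual values."""
    return Counter(p["grading_status"] for p in participants)
```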
Bulk Operations
Apply Same Feedback
For common errors across participants:
- Grade first occurrence with detailed feedback
- Copy feedback text
- Paste and modify for similar cases
- Maintains consistency
Export for External Grading
If grading offline:
- Export participant responses
- Grade in spreadsheet or grading software
- Import scores back to Evaly
- (Feature availability depends on plan)
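An offline-grading export could be serialized as CSV, roughly like this (the column names are assumptions; Evaly's actual export format may differ):

```python
import csv
import io

def export_responses(rows):
    """Write participant responses to CSV text for offline grading.
    Column names here are illustrative, not Evaly's real export schema."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["participant", "question", "answer", "score"]
    )
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```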
Releasing Results
Once grading is complete:
Release Results
Toggle “Results Released” in test settings:
- Participants can now see their scores
- Detailed feedback is visible
- Correct answers shown
You can release results before completing all grading, but participants with ungraded questions will see “Grading in Progress” for those items.
Best Practices
- Grade Consistently: Use the same rubric for all participants
- Grade by Question: Grade all participants’ answers to Question 1, then Question 2, etc. for consistency
- Take Breaks: Grading fatigue affects consistency and fairness
- Blind Grading: Consider hiding names to reduce bias
- Second Opinions: For borderline cases, get colleague input
- Document Criteria: Write down your grading standards
- Be Timely: Return grades promptly while content is fresh
Next Steps
Analytics
Analyze test performance and question difficulty
Results Management
Export and manage participant results
Question Types
Learn about auto-grading for each question type
Test Settings
Configure scoring modes