
Moderation is the process by which administrators review, approve, or reject user-submitted fraud reports to ensure quality and accuracy.

Moderation Workflow

When users submit fraud reports, they enter a moderation queue with status_id: 1 (pending). Administrators review these reports and make decisions about their visibility.
1. Report Submission
   User creates a fraud report through /users/report or /reports. Initial status: Pending (status_id: 1).
2. Admin Review
   Administrator reviews the report details:
   • Title and description quality
   • Evidence (screenshots)
   • URL validity
   • Category appropriateness
3. Decision
   Admin approves or rejects:
   • Approve: set status_id: 2 (aprobada)
   • Reject: set status_id: 3 (rechazada)
4. Publication
   Approved reports become publicly visible and searchable.

Report Statuses

  • Pending (Status ID: 1): Awaiting admin review
  • Approved (Status ID: 2): Publicly visible
  • Rejected (Status ID: 3): Hidden from public
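The three statuses can be captured in a small TypeScript enum. This is an illustrative sketch based on the IDs documented on this page, not code from the backend:

```typescript
// Hypothetical sketch of the status IDs documented above.
enum ReportStatus {
  Pending = 1, // awaiting admin review
  Approved = 2, // publicly visible
  Rejected = 3, // hidden from public endpoints
}

// Only approved reports should appear on public endpoints.
function isPubliclyVisible(statusId: number): boolean {
  return statusId === ReportStatus.Approved;
}
```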

Approving Reports

To approve a report, update its status_id to 2:
PUT /reports/48

{
  "status_id": 2
}

Rejecting Reports

To reject a report, update its status_id to 3:
PUT /reports/48

{
  "status_id": 3
}
Rejected reports remain in the database but are hidden from public endpoints. Consider adding a reason field for transparency with users.
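Since approval and rejection are both plain status updates, a moderation client could share a single helper for building the request body. This is a hypothetical sketch, assuming the status IDs shown above:

```typescript
// Hypothetical helper that builds the PUT body for a moderation decision.
function buildModerationUpdate(decision: "approve" | "reject"): { status_id: number } {
  // Approve maps to status_id 2, reject to status_id 3 (per this page).
  return { status_id: decision === "approve" ? 2 : 3 };
}

// Usage: send JSON.stringify(buildModerationUpdate("approve")) as the body
// of PUT /reports/:id.
```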

Moderating Report Content

Administrators can also modify report content during review:
PUT /reports/48

{
  "title": "Updated title for clarity",
  "description": "More detailed description",
  "status_id": 2
}
The update logic ensures at least one field is provided:
// reports.controller.ts:114
async updateReport(@Body() body: UpdateReportDTO, @Param('id') id: string) {
  if (
    body.category === undefined &&
    body.description === undefined &&
    body.status_id === undefined &&
    body.title === undefined &&
    body.image === undefined &&
    body.report_url === undefined
  ) {
    throw new BadRequestException(
      'At least one field must be provided for update',
    );
  }

  return await this.reportsService.updateReport(id, body);
}
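A client can mirror the same guard before sending the request and avoid a guaranteed 400 round trip. The DTO shape below is inferred from the controller's undefined checks and is an assumption:

```typescript
// Assumed shape, inferred from the controller's undefined checks above.
type UpdateReportDTO = {
  title?: string;
  description?: string;
  category?: string;
  status_id?: number;
  image?: string;
  report_url?: string;
};

// Returns true if at least one updatable field is present,
// mirroring the server-side validation.
function hasUpdatableField(body: UpdateReportDTO): boolean {
  return [
    body.category,
    body.description,
    body.status_id,
    body.title,
    body.image,
    body.report_url,
  ].some((value) => value !== undefined);
}
```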

Filtering Pending Reports

Administrators can filter reports by status to view their moderation queue:
GET /reports?status_id=1
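A small URL builder keeps this filter consistent across a moderation UI. Illustrative only; the endpoint and query parameter name come from this page:

```typescript
// Build the reports listing URL, optionally filtered by status.
function buildReportsUrl(statusId?: number): string {
  return statusId === undefined ? "/reports" : `/reports?status_id=${statusId}`;
}
```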

Dashboard Metrics

The admin dashboard provides moderation statistics:
GET /dashboard
Response includes:
{
  "stats": {
    "total_reports": 30,
    "pending_reports": 18,
    "approved_reports": 8,
    "rejected_reports": 4,
    "protected_people": 11,
    "total_users": 12
  },
  "recentAlerts": [
    {
      "id": 45,
      "title": "televisor Smart a precio muy bajo"
    },
    {
      "id": 44,
      "title": "ropa de marca original a súper descuento"
    }
  ]
}
The dashboard helps administrators track moderation workload and identify reports that need review.
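The stats object can drive simple workload indicators. A sketch, assuming the response shape shown above:

```typescript
// Assumed shape of the dashboard's stats object (from the sample response).
interface DashboardStats {
  total_reports: number;
  pending_reports: number;
  approved_reports: number;
  rejected_reports: number;
}

// Fraction of reports still waiting for review (0 when there are none).
function pendingShare(stats: DashboardStats): number {
  return stats.total_reports === 0
    ? 0
    : stats.pending_reports / stats.total_reports;
}
```

With the sample numbers above (18 pending of 30 total), the pending share is 0.6, a signal that the moderation queue needs attention.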

Deleting Reports

In cases where a report violates policies or contains inappropriate content, administrators can permanently delete it:
DELETE /reports/48
The deletion process removes both the database record and associated files:
// reports.service.ts:94
async deleteReport(id: string) {
  const report = await this.reportsRepository.findByReportId(id);

  if (!report) {
    throw new NotFoundException(`Report with ID ${id} not found`);
  }

  await this.imagesService.deleteFile(report.image);

  return this.reportsRepository.deleteReport(id);
}
Deletion is permanent and cannot be undone. Use rejection (status_id: 3) instead of deletion when possible to maintain audit trails.

User Management

Administrators can also manage users through admin endpoints:

View All Users

GET /admin/user/list
Returns a list of all registered users with their roles, email addresses, and registration dates.

View User Details

GET /admin/user/13

Update User Information

Administrators can modify user accounts:
PUT /admin/user/13

{
  "email": "updated@example.com",
  "name": "Updated Name",
  "username": "new_username"
}
This is useful for:
  • Correcting user information
  • Resolving email conflicts
  • Managing user roles (future enhancement)
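When only some fields change, dropping undefined keys keeps the PUT body minimal. A hypothetical client-side helper, with field names taken from the example request above:

```typescript
// Fields an admin may update, per the example request above (assumed).
type AdminUserUpdate = {
  email?: string;
  name?: string;
  username?: string;
};

// Remove undefined entries so only the changed fields are sent.
function compactUpdate(body: AdminUserUpdate): Record<string, string> {
  return Object.fromEntries(
    Object.entries(body).filter(([, value]) => value !== undefined),
  ) as Record<string, string>;
}
```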

User Statistics

Get total user count:
GET /admin/user/count
Response:
{
  "count": 12
}

Best Practices

Timely moderation ensures that legitimate fraud alerts reach users quickly while preventing misinformation from spreading.
When rejecting reports, consider implementing a feedback mechanism to help users understand why their report was rejected and how to improve future submissions.
Avoid deleting reports unless absolutely necessary. Rejected reports can serve as valuable data for understanding fraud patterns and improving detection.
Always verify that report URLs are valid and accessible. Update or correct URLs during the review process if needed.

Moderation API Reference

Here’s a quick reference for common moderation tasks:
| Task | Endpoint | Method | Body |
| --- | --- | --- | --- |
| View pending reports | /reports?status_id=1 | GET | - |
| Approve report | /reports/:id | PUT | {"status_id": 2} |
| Reject report | /reports/:id | PUT | {"status_id": 3} |
| Update content | /reports/:id | PUT | {"title": "...", "description": "..."} |
| Delete report | /reports/:id | DELETE | - |
| Count pending | /reports/count?status_id=1 | GET | - |
| View dashboard | /dashboard | GET | - |
| List users | /admin/user/list | GET | - |
| Update user | /admin/user/:id | PUT | {"name": "...", "email": "..."} |

Future Enhancements

Potential improvements to the moderation system:
  • Rejection Reasons: Add a field to explain why reports are rejected
  • Review History: Track who reviewed each report and when
  • Auto-Moderation: Implement basic checks for spam or low-quality submissions
  • Bulk Actions: Allow approving/rejecting multiple reports at once
  • Appeal Process: Let users request reconsideration of rejected reports
  • Moderator Roles: Create different levels of moderator permissions
