
Overview

Two collections manage background operations: media_jobs for BullMQ job records, and hitl_sessions for human-in-the-loop login requests.

media_jobs

Tracks BullMQ background jobs for media processing operations. Each job record corresponds to a queue entry in Redis.

Purpose

  • Track status of background media operations
  • Store job metadata, progress, and results
  • Enable job retry and error handling
  • Link jobs to creator profiles and media items
  • Provide audit trail for all media operations

Job Types

| Type | Description | Worker Queue |
| --- | --- | --- |
| scrape_profile | Scrape platform for new content and metrics | scrape-jobs |
| publish_post | Publish scheduled post to platform(s) | media-jobs |
| apply_watermark | Add watermark to image/video using ImageMagick | media-jobs |
| create_teaser | Generate video preview using FFmpeg | media-jobs |
| crop_media | Crop/resize media file | media-jobs |
| compress_media | Compress media for faster delivery | media-jobs |

Key Fields

| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| job_type | String | One of the job types listed above |
| status | String | queued, processing, completed, failed |
| creator_profile_id | Foreign Key | Optional link to creator_profiles |
| media_id | Foreign Key | Optional link to scraped_media |
| scheduled_post_id | Foreign Key | Optional link to scheduled_posts |
| bull_job_id | String | BullMQ Redis job ID for status tracking |
| params | JSON | Job-specific parameters |
| progress | Integer | 0-100 completion percentage |
| result | JSON | Job output data on completion |
| error_message | Text | Error details if status is failed |
| retry_count | Integer | Number of retry attempts |
| started_at | DateTime | When job processing began |
| completed_at | DateTime | When job finished (success or failure) |
| created_at | DateTime | Job creation timestamp |

Example Queries

Create Scrape Job

await use_mcp_tool({
  server_name: "directus",
  tool_name: "create-item",
  arguments: {
    collection: "media_jobs",
    data: {
      job_type: "scrape_profile",
      creator_profile_id: "profile-uuid",
      status: "queued",
      params: {
        platform: "onlyfans",
        scrape_type: "full",
        max_items: 100
      }
    }
  }
});

Monitor Job Status

await use_mcp_tool({
  server_name: "directus",
  tool_name: "read-item",
  arguments: {
    collection: "media_jobs",
    id: "job-uuid",
    fields: ["id", "job_type", "status", "progress", "error_message"]
  }
});

List Recent Jobs

await use_mcp_tool({
  server_name: "directus",
  tool_name: "read-items",
  arguments: {
    collection: "media_jobs",
    fields: ["id", "job_type", "status", "progress", "created_at"],
    sort: ["-created_at"],
    limit: 20
  }
});

Find Failed Jobs

await use_mcp_tool({
  server_name: "directus",
  tool_name: "read-items",
  arguments: {
    collection: "media_jobs",
    fields: ["id", "job_type", "error_message", "retry_count", "created_at"],
    filter: {
      status: { _eq: "failed" },
      created_at: { _gte: "$NOW(-24 hours)" }
    },
    sort: ["-created_at"]
  }
});

hitl_sessions

Tracks human-in-the-loop (HITL) login requests, displayed as yellow dashboard alerts when automated scraping requires manual intervention.

Purpose

  • Request manual login when platform cookies expire
  • Display browser extension flow prompts in dashboard
  • Track session resolution status
  • Coordinate between automated scraping and human intervention

Key Fields

| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| creator_profile_id | Foreign Key | Links to creator_profiles |
| platform | String | Platform requiring login |
| session_type | String | login_required, cookies_expired, 2fa_required |
| status | String | pending, in_progress, resolved, cancelled |
| alert_message | Text | Message shown in dashboard alert |
| resolution_method | String | How the session was resolved: browser_extension, manual_cookies, or oauth |
| resolved_at | DateTime | When the session was resolved |
| expires_at | DateTime | Auto-cancel if not resolved by this time |
| created_at | DateTime | Session creation timestamp |

Dashboard Integration

HITL sessions trigger yellow banner alerts in the dashboard:
// Dashboard polling logic: look for unresolved HITL sessions for this creator
const activeSessions = await directus.items('hitl_sessions').readByQuery({
  filter: {
    creator_profile_id: { _eq: currentUser.id },
    status: { _in: ['pending', 'in_progress'] }
  }
});

if (activeSessions.data.length > 0) {
  showYellowBanner(activeSessions.data[0].alert_message);
}

Example Queries

Create HITL Session

await use_mcp_tool({
  server_name: "directus",
  tool_name: "create-item",
  arguments: {
    collection: "hitl_sessions",
    data: {
      creator_profile_id: "profile-uuid",
      platform: "onlyfans",
      session_type: "cookies_expired",
      status: "pending",
      alert_message: "OnlyFans login required. Click to connect using browser extension.",
      expires_at: "$NOW(+24 hours)"
    }
  }
});

List Active Sessions

await use_mcp_tool({
  server_name: "directus",
  tool_name: "read-items",
  arguments: {
    collection: "hitl_sessions",
    fields: ["id", "platform", "session_type", "alert_message", "status"],
    filter: {
      status: { _in: ["pending", "in_progress"] },
      expires_at: { _gte: "$NOW" }
    }
  }
});

Resolve Session

await use_mcp_tool({
  server_name: "directus",
  tool_name: "update-item",
  arguments: {
    collection: "hitl_sessions",
    id: "session-uuid",
    data: {
      status: "resolved",
      resolution_method: "browser_extension",
      resolved_at: new Date().toISOString()
    }
  }
});

Related Collections

  • creator_profiles - Platform accounts that jobs operate on
  • scraped_media - Content processed by media jobs
  • scheduled_posts - Posts published by publish_post jobs
  • platform_sessions - Browser cookies that resolve HITL sessions

Workflow Integration

Scraping with HITL Flow

  1. User clicks “Let’s Go” scrape button on dashboard
  2. System checks platform_sessions for valid cookies
  3. If cookies exist: Create media_jobs entry with type scrape_profile
  4. If no cookies: Create hitl_sessions entry instead (steps 2-4 are sketched after this list)
  5. Dashboard shows yellow banner: “Login required. Click to connect.”
  6. User completes login via browser extension
  7. Extension captures cookies → stored in platform_sessions
  8. HITL session marked resolved
  9. System creates media_jobs entry, scraping proceeds
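
A minimal sketch of the branch in steps 2-4, reusing the Directus SDK client from the snippets above. It assumes platform_sessions rows carry creator_profile_id, platform, and expires_at fields; adjust to the real schema:

// Step 2: check platform_sessions for unexpired cookies
// (assumes platform_sessions has creator_profile_id, platform, expires_at)
async function hasValidSession(profileId, platform) {
  const { data } = await directus.items('platform_sessions').readByQuery({
    filter: {
      creator_profile_id: { _eq: profileId },
      platform: { _eq: platform },
      expires_at: { _gte: '$NOW' }
    },
    limit: 1
  });
  return data.length > 0;
}

// Steps 3-4: enqueue a scrape job if cookies exist, otherwise open a HITL session
async function startScrape(profileId, platform) {
  if (await hasValidSession(profileId, platform)) {
    return directus.items('media_jobs').createOne({
      job_type: 'scrape_profile',
      creator_profile_id: profileId,
      status: 'queued',
      params: { platform, scrape_type: 'full' }
    });
  }

  return directus.items('hitl_sessions').createOne({
    creator_profile_id: profileId,
    platform,
    session_type: 'login_required',
    status: 'pending',
    alert_message: `${platform} login required. Click to connect.`,
    expires_at: new Date(Date.now() + 24 * 60 * 60 * 1000).toISOString()
  });
}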

Media Processing Flow

  1. User clicks crop/watermark/teaser button in Media Library
  2. Frontend creates media_jobs entry via POST /api/queue/enqueue
  3. BullMQ worker picks up job from Redis queue
  4. Worker updates job status and progress fields
  5. On completion: job result contains output file URLs
  6. Frontend polls job status and displays the result (see the sketch after this list)
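
A minimal sketch of the polling in step 6, again using the Directus SDK client from the earlier snippets; the 2-second interval is an arbitrary choice:

// Poll a media_jobs record until the worker reports a terminal status
async function pollJob(jobId, onProgress) {
  for (;;) {
    const job = await directus.items('media_jobs').readOne(jobId, {
      fields: ['status', 'progress', 'result', 'error_message']
    });

    onProgress(job.progress);

    if (job.status === 'completed') return job.result; // contains output file URLs
    if (job.status === 'failed') throw new Error(job.error_message);

    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}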

Best Practices

  1. Poll job status - Use bull_job_id to track BullMQ queue position
  2. Implement retry logic - Jobs with retry_count < 3 can be retried (see the sketch after this list)
  3. Clean up completed jobs - Archive jobs older than 30 days
  4. Monitor HITL expiry - Auto-cancel sessions after 24 hours
  5. Handle concurrent HITLs - Only show one HITL alert per platform at a time
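
A minimal sketch of the retry practice above; the helper name and the idea of running it as a periodic maintenance pass are assumptions:

// Re-enqueue failed jobs that have not exhausted their three attempts
async function retryFailedJobs() {
  const { data: failed } = await directus.items('media_jobs').readByQuery({
    filter: {
      status: { _eq: 'failed' },
      retry_count: { _lt: 3 }
    }
  });

  for (const job of failed) {
    await directus.items('media_jobs').updateOne(job.id, {
      status: 'queued',
      retry_count: job.retry_count + 1,
      error_message: null
    });
  }
}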

BullMQ Integration

The media worker (media-worker/index.js) processes jobs from Redis queues:
// Queue configuration
const { Queue, Worker } = require('bullmq');

// Redis connection options (host/port are placeholders)
const redis = { host: 'localhost', port: 6379 };

const scrapeQueue = new Queue('scrape-jobs', { connection: redis });
const mediaQueue = new Queue('media-jobs', { connection: redis });

// Worker processor: mark the media_jobs record as processing, then run the scrape
const scrapeWorker = new Worker('scrape-jobs', async (job) => {
  // Update media_jobs status so the dashboard can show progress
  await directus.items('media_jobs').updateOne(job.data.recordId, {
    status: 'processing',
    bull_job_id: job.id,
    started_at: new Date().toISOString()
  });

  // Process job...
}, { connection: redis });
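
On the enqueue side (POST /api/queue/enqueue in the media processing flow above), the record and the BullMQ job are linked via bull_job_id. A minimal sketch of that handoff, using the mediaQueue defined above; the helper name enqueueMediaJob is hypothetical:

// Create the tracking record first, then enqueue the BullMQ job with a
// back-reference (recordId) so the worker knows which record to update
async function enqueueMediaJob(jobType, params) {
  const record = await directus.items('media_jobs').createOne({
    job_type: jobType,
    status: 'queued',
    params
  });

  const bullJob = await mediaQueue.add(jobType, { recordId: record.id, ...params });

  // Store the BullMQ job ID so clients can track queue position
  await directus.items('media_jobs').updateOne(record.id, { bull_job_id: bullJob.id });

  return record.id;
}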
