
Overview

Directus CMS serves as the primary data layer for Genie Helper, providing:
  • Headless CMS: RESTful API for all application data
  • Authentication: User management, JWT tokens, role-based access control
  • Collections: 11 core collections for creators, media, jobs, and platform data
  • Flows: Low-code automation workflows (scraping, media processing, notifications)
  • Admin Panel: Visual interface for data management and debugging
Service Details:
  • Port: 8055 (proxied to /api/directus/ in production)
  • Process: pm2 agentx-cms
  • Admin URL: https://geniehelper.com/admin (iframe in dashboard)

Architecture Integration

Data Flow

React Dashboard → /api/directus/* → Directus REST API (port 8055)
                                            ↓
                        PostgreSQL Collections (11 tables)
                                            ↓
            Directus Flows → Media Worker (BullMQ) → Stagehand

            AnythingLLM Agent ← Directus MCP (17 tools)

MCP Integration

The Directus MCP Server exposes 17 tools to the AnythingLLM agent.
Location: /home/daytona/workspace/source/scripts/directus-mcp-server.mjs:1
Tools:
| Tool | Description |
| --- | --- |
| list-collections | Get all collection names |
| get-collection-schema | Get fields + relationships for a collection |
| read-items | Query items with filters, sorting, pagination |
| read-item | Get single item by ID |
| create-item | Insert new record |
| update-item | PATCH existing record |
| delete-item | Delete record by ID |
| search-items | Full-text search across collection |
| trigger-flow | Manually trigger a Directus Flow |
| get-me | Get current authenticated user |
| list-users | Get all users |
| get-user | Get user by ID |
| update-user | Update user record |
| create-user | Create new user |
| list-files | Get uploaded files |
| get-file | Get file metadata by ID |
| list-flows | Get all automation flows |
Agent Use Case:
User: "How many posts did I schedule this week?"

Agent:
1. Uses read-items tool on scheduled_posts collection
2. Filters by date_created >= 7 days ago
3. Returns count + summary
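The query in step 2 can be expressed as a Directus filter object. The sketch below shows one way to assemble it; `buildRecentFilter` is a hypothetical helper, not part of the MCP server:

```javascript
// Sketch of the read-items request the agent might assemble.
// `buildRecentFilter` is hypothetical; the filtered field (date_created)
// comes from the use case above.
function buildRecentFilter(days) {
  const since = new Date(Date.now() - days * 24 * 60 * 60 * 1000);
  return {
    collection: 'scheduled_posts',
    query: {
      filter: { date_created: { _gte: since.toISOString() } },
      aggregate: { count: '*' }, // ask Directus for a count, not full rows
    },
  };
}

console.log(JSON.stringify(buildRecentFilter(7).query.filter));
```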

Key Collections

Genie Helper uses 11 core Directus collections:

1. creator_profiles

Purpose: Platform account credentials and scrape configuration
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| user_id | M2O (directus_users) | Owner |
| platform | String | Platform name (onlyfans, fansly, etc.) |
| username | String | Platform username |
| credentials | JSON | Encrypted login credentials (AES-256-GCM) |
| scrape_enabled | Boolean | Enable automated scraping |
| scrape_frequency | String | Cron expression (e.g., "0 */6 * * *") |
| last_scraped_at | Timestamp | Last successful scrape |
| scrape_status | String | idle, running, success, error |
| profile_data | JSON | Cached profile stats (followers, earnings) |
Encryption: The credentials field uses server-side AES-256-GCM encryption via credentialsCrypto.js.

2. scraped_media

Purpose: Content library with engagement metrics
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| creator_profile_id | M2O | Source platform account |
| platform_post_id | String | External post ID |
| media_type | String | image, video, gallery |
| file_id | M2O (directus_files) | Uploaded media file |
| caption | Text | Post caption |
| tags | JSON | Array of tags |
| published_at | Timestamp | Original publish date |
| likes | Integer | Engagement count |
| comments | Integer | Comment count |
| earnings | Decimal | Revenue from post (if available) |
| taxonomy_concepts | JSON | 6-concept classification |
Taxonomy Integration: See taxonomy_dimensions collection for concept definitions.

3. scheduled_posts

Purpose: Cross-platform post queue
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| creator_profile_id | M2O | Target platform |
| media_id | M2O (scraped_media) | Media to publish |
| scheduled_for | Timestamp | Publish time |
| status | String | pending, publishing, published, failed |
| caption | Text | AI-generated or user-edited caption |
| platform_specific_config | JSON | Platform options (hashtags, visibility, etc.) |
| published_at | Timestamp | Actual publish time |
| error_message | Text | Failure details |
Worker Integration: The post_scheduler worker polls this collection every 60s for pending posts where scheduled_for <= NOW().
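The due-post condition translates directly into a Directus filter; the sketch below shows the shape (`duePostsFilter` is a hypothetical helper, not the actual worker code):

```javascript
// Sketch of the filter a poll cycle could send to /items/scheduled_posts.
// `duePostsFilter` is hypothetical; the real post_scheduler may differ.
function duePostsFilter(now = new Date()) {
  return {
    status: { _eq: 'pending' },
    scheduled_for: { _lte: now.toISOString() },
  };
}

// Serialized for the REST API query string:
const qs = `filter=${encodeURIComponent(JSON.stringify(duePostsFilter()))}`;
```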

4. media_jobs

Purpose: BullMQ job tracking for media operations
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| job_type | String | scrape_profile, watermark, teaser, publish_post |
| queue_name | String | BullMQ queue name |
| bull_job_id | String | BullMQ job ID |
| status | String | queued, active, completed, failed |
| payload | JSON | Job parameters |
| result | JSON | Job output data |
| error | Text | Failure message |
| created_at | Timestamp | Job creation time |
| started_at | Timestamp | Job start time |
| completed_at | Timestamp | Job completion time |
Dashboard Polling: The dashboard polls this collection to display real-time job progress.

5. hitl_sessions

Purpose: Human-in-the-loop login requests
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| creator_profile_id | M2O | Platform requiring login |
| platform | String | Platform name |
| status | String | pending, completed, expired, failed |
| requested_at | Timestamp | HITL trigger time |
| completed_at | Timestamp | User login completion |
| notes | Text | Error details |
Trigger: Created when scraping fails due to missing cookies. See Browser Extension for HITL flow.

6. platform_sessions

Purpose: Encrypted browser cookies for automated login
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| creator_profile_id | M2O | Creator account |
| platform | String | Platform name |
| cookies | JSON | Encrypted cookie array (AES-256-GCM) |
| user_agent | String | Browser user agent |
| captured_at | Timestamp | Cookie capture time |
| expires_at | Timestamp | Estimated expiration |
| last_used_at | Timestamp | Last Stagehand injection |
Security: Cookies encrypted with same credentialsCrypto.js module as creator_profiles.credentials.

7. taxonomy_dimensions

Purpose: 6 super-concept content classification system
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| concept_name | String | Dimension name (e.g., "Intimacy Level") |
| description | Text | Concept definition |
| display_order | Integer | UI sort order |
Current Dimensions (from README):
  1. Intimacy Level
  2. Production Quality
  3. Content Type
  4. Audience Appeal
  5. Platform Fit
  6. Engagement Potential

8. taxonomy_mapping

Purpose: 3,208 classified tags across 6 concepts
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| tag | String | Raw tag (e.g., "lingerie") |
| dimension_id | M2O (taxonomy_dimensions) | Parent concept |
| weight | Decimal | Classification confidence (0-1) |
| aliases | JSON | Alternative tag spellings |
AI Classification: The taxonomy-tag Action Runner flow uses this mapping to auto-classify new content.
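Resolving a raw tag (or one of its aliases) against this mapping can be sketched as follows; `classifyTag` and the sample row are illustrative, not the actual flow code:

```javascript
// Illustrative row in the shape of the taxonomy_mapping collection.
const mapping = [
  { tag: 'lingerie', dimension_id: 3, weight: 0.92, aliases: ['lingere', 'lingeries'] },
];

// Hypothetical helper: resolve a raw tag (or alias) to its mapping row.
function classifyTag(raw, rows) {
  const needle = raw.trim().toLowerCase();
  return rows.find((r) => r.tag === needle || (r.aliases || []).includes(needle)) || null;
}
```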

9. fan_profiles

Purpose: Fan engagement data
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| creator_profile_id | M2O | Creator account |
| platform_fan_id | String | External fan ID |
| username | String | Fan username |
| engagement_score | Integer | Calculated engagement metric |
| total_spent | Decimal | Total revenue from fan |
| last_interaction | Timestamp | Most recent message/like |
| notes | Text | Creator notes |

10. action_flows

Purpose: Action Runner flow definitions
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| slug | String | Flow identifier (e.g., "scout-analyze") |
| name | String | Display name |
| description | Text | Flow purpose |
| steps | JSON | Array of step configurations |
| active | Boolean | Enable/disable flow |
Action Runner: When the agent outputs [ACTION:slug:{"params"}], the action-runner plugin executes the matching flow.
Seeded Flows (from README):
  • scout-analyze: Scrape URL + AI analysis
  • taxonomy-tag: Auto-classify content
  • post-create: Draft platform-specific post
  • message-generate: Fan engagement message
  • memory-recall: Search stored data + summarize
  • media-process: Queue media job (watermark, teaser, compress)
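Extracting the [ACTION:slug:{...}] marker from agent output might look like the following sketch (`parseAction` is hypothetical; the plugin's real parser may handle cases this one does not, such as nested JSON braces):

```javascript
// Hypothetical parser for [ACTION:slug:{...json...}] markers in agent output.
// The non-greedy brace match does not handle nested objects in params.
function parseAction(text) {
  const match = text.match(/\[ACTION:([a-z0-9-]+):(\{.*?\})\]/);
  if (!match) return null; // a "miss" in agent_audits terms
  try {
    return { slug: match[1], params: JSON.parse(match[2]) };
  } catch {
    return null; // malformed JSON params
  }
}
```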

11. agent_audits

Purpose: Action execution logs
| Field | Type | Description |
| --- | --- | --- |
| id | UUID | Primary key |
| user_id | M2O (directus_users) | User who triggered action |
| action_slug | String | Flow slug |
| params | JSON | Input parameters |
| status | String | success, error, miss |
| result | JSON | Flow output |
| error_message | Text | Failure details |
| executed_at | Timestamp | Execution time |
Debugging: Use this collection to debug why actions failed or missed.

Directus Flows

Directus Flows are low-code automation workflows triggered by:
  • Webhooks: External HTTP POST triggers
  • Schedule: Cron-based triggers
  • Events: Collection CRUD events (insert, update, delete)
  • Manual: Triggered via trigger-flow MCP tool or API
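A manual trigger over HTTP follows Directus' /flows/trigger/:id convention; the sketch below assembles such a request (the flow ID, body, and token are placeholders, and `buildFlowTrigger` is a hypothetical helper):

```javascript
// Sketch: build the request that triggers a Directus flow by ID,
// routed through the /api/directus/ proxy used in production.
function buildFlowTrigger(flowId, body, token) {
  return {
    url: `/api/directus/flows/trigger/${flowId}`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(body),
    },
  };
}
```

The returned object can be passed straight to `fetch(req.url, req.options)`.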

Example Flow: Platform Scraping

Flow Name: platform_scrape_flow
Trigger: Manual (via trigger-flow MCP tool or dashboard button)
Steps:
  1. stagehand_cookie_login: Inject cookies from platform_sessions + navigate to creator profile
  2. stagehand_extract: Extract profile stats (followers, earnings, recent posts)
  3. create-item (Directus): Insert/update scraped_media records
  4. stagehand_close: End browser session
Seeding Script: /home/daytona/workspace/source/scripts/hitl/seed_platform_scrape_flow.mjs:1

Creating Custom Flows

Directus admin panel provides a visual flow builder:
  1. Navigate to https://geniehelper.com/admin → Flows
  2. Click "Create Flow"
  3. Select trigger type
  4. Add operations:
    • Webhook / HTTP Request: Call external APIs
    • Run Script: Execute Node.js code
    • Condition: Branching logic
    • Create/Update/Delete Data: Directus CRUD
    • Send Notification: Email, push, webhook
    • Trigger Another Flow: Flow chaining

Authentication & RBAC

User Roles

Genie Helper uses Directus roles:
| Role | Permissions |
| --- | --- |
| Administrator | Full access to all collections, flows, and settings |
| Creator | CRUD on own data (filtered by user_id field) |
| Viewer | Read-only access to public data |

RBAC Sync

User creation in the dashboard automatically syncs to Directus.
Sync Endpoint: /api/rbacSync (webhook triggered by Directus user creation)
Flow:
  1. User registers in dashboard /register
  2. Dashboard calls /api/register (proxy to Directus /users with admin token)
  3. Directus creates user + triggers RBAC sync webhook
  4. Sync webhook creates user_personas collection record
  5. User can now log in and access dashboard
Related Files:
  • /home/daytona/workspace/source/server/endpoints/api/register.js:1
  • /home/daytona/workspace/source/server/endpoints/api/rbacSync.js:1
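Step 4 amounts to mapping the Directus user payload onto a user_personas record; a sketch (`toPersona` and its field names are assumptions, not the actual rbacSync.js):

```javascript
// Hypothetical mapping from a Directus user webhook payload to a
// user_personas record; the real field names live in rbacSync.js.
function toPersona(directusUser) {
  return {
    user_id: directusUser.id,
    email: directusUser.email,
    role: directusUser.role || 'Creator', // defaulting to Creator is an assumption
    created_at: new Date().toISOString(),
  };
}
```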

JWT Tokens

Directus uses JWT for authentication:
  • Access Token: Short-lived (15 min), used for API requests
  • Refresh Token: Long-lived (7 days), used to renew access token
Dashboard Storage:
  • Access token: localStorage.getItem('directus_token')
  • Refresh token: Auto-refreshed via Directus SDK
Admin Token: Server-side operations use DIRECTUS_ADMIN_TOKEN env var for elevated permissions.
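To refresh proactively, a client can read the exp claim straight out of the access token — a JWT payload is base64url-encoded JSON, so no signature verification is needed just to read it. A sketch (`tokenExpiresAt` is a hypothetical helper):

```javascript
// Decode a JWT payload without verifying the signature — enough to
// read the `exp` claim and schedule a refresh before expiry.
function tokenExpiresAt(jwt) {
  const payload = jwt.split('.')[1];
  const json = Buffer.from(payload, 'base64url').toString('utf8');
  return new Date(JSON.parse(json).exp * 1000); // exp is seconds since epoch
}
```

Scheduling the refresh a minute or two before `tokenExpiresAt(token)` avoids requests failing mid-session.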

API Proxy

Production traffic routes through an nginx reverse proxy.
Nginx Config (Plesk):
location /api/directus/ {
    proxy_pass http://127.0.0.1:8055/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
Dashboard API Client:
// dashboard/src/utils/api.js
import axios from 'axios';

const directusApi = axios.create({
  baseURL: '/api/directus',
  headers: {
    Authorization: `Bearer ${localStorage.getItem('directus_token')}`
  }
});

export const getScheduledPosts = () => 
  directusApi.get('/items/scheduled_posts?filter[status][_eq]=pending');

Environment Variables

Directus requires these env vars:
| Variable | Description |
| --- | --- |
| DB_CLIENT | Database type (pg for PostgreSQL) |
| DB_HOST | Database host |
| DB_PORT | Database port (5432) |
| DB_DATABASE | Database name |
| DB_USER | Database user |
| DB_PASSWORD | Database password |
| KEY | Directus encryption key (32+ chars) |
| SECRET | JWT signing secret (32+ chars) |
| ADMIN_EMAIL | Admin user email |
| ADMIN_PASSWORD | Admin user password |
| PUBLIC_URL | Public-facing URL (https://geniehelper.com) |
| CONTENT_SECURITY_POLICY_DIRECTIVES__FRAME_ANCESTORS | Allowed iframe parents |
| DIRECTUS_ADMIN_TOKEN | Static admin token for server-side API calls |
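Put together, a minimal .env fragment might look like this (all values are illustrative placeholders; never commit real secrets):

```
DB_CLIENT=pg
DB_HOST=127.0.0.1
DB_PORT=5432
DB_DATABASE=directus
DB_USER=directus
DB_PASSWORD=<db-password>
KEY=<random-32+-char-string>
SECRET=<random-32+-char-string>
ADMIN_EMAIL=<admin-email>
ADMIN_PASSWORD=<admin-password>
PUBLIC_URL=https://geniehelper.com
CONTENT_SECURITY_POLICY_DIRECTIVES__FRAME_ANCESTORS=https://geniehelper.com
DIRECTUS_ADMIN_TOKEN=<static-admin-token>
```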

Debugging

Check Directus Logs

pm2 logs agentx-cms --lines 50

Restart Directus

pm2 restart agentx-cms

Query Collections via CLI

curl -H "Authorization: Bearer $DIRECTUS_ADMIN_TOKEN" \
  https://geniehelper.com/api/directus/items/media_jobs?limit=5

Access Admin Panel

URL: https://geniehelper.com/admin
Credentials: ADMIN_EMAIL / ADMIN_PASSWORD (see Environment Variables)
Features:
  • Browse collections
  • Edit records
  • View flows
  • Monitor activity logs
  • Manage users and roles

Performance Tips

Enable Caching

Directus supports Redis caching for improved performance.
Env Vars:
CACHE_ENABLED=true
CACHE_STORE=redis
REDIS=redis://localhost:6379

Optimize Queries

Use fields parameter to reduce payload size:
// ❌ Bad: Fetches all fields + relationships
api.get('/items/scraped_media');

// ✅ Good: Only fetch needed fields
api.get('/items/scraped_media?fields=id,caption,likes,published_at');

Indexed Fields

Ensure frequently queried fields have database indexes:
  • creator_profiles.user_id
  • scraped_media.creator_profile_id
  • scheduled_posts.scheduled_for
  • media_jobs.status
  • platform_sessions.platform
