Overview
Genie Helper’s AI agent is powered by AnythingLLM running in agent mode with 29 MCP (Model Context Protocol) tools that give the LLM full control over:

- Directus CMS — CRUD operations on all collections, user management, flow triggers
- Ollama — local LLM inference (7 uncensored models)
- Stagehand — browser automation for web scraping and platform interactions
pm2 name: anything-llm
Models: qwen-2.5:latest (primary agent), dolphin-mistral:7b (content writer)
Architecture
Agent Flow
Endpoint: server/endpoints/api/genieChat.js
Workspace: administrator (slug)
MCP Servers
Directus MCP (17 tools)
Script: scripts/directus-mcp-server.mjs
Connection: http://127.0.0.1:8055 (admin token)
Tools
| Tool | Description | Example |
|---|---|---|
| list-collections | List all Directus collections | "What collections exist?" |
| get-collection-schema | Get fields for a collection | "Show me the scheduled_posts schema" |
| read-items | Query items with filters | "List my OnlyFans posts" |
| read-item | Get single item by ID | "Show post abc123" |
| create-item | Insert new record | "Create a draft post for TikTok" |
| update-item | Patch existing record | "Mark post abc123 as published" |
| delete-item | Remove record | "Delete that draft" |
| search-items | Full-text search | "Find posts about 'lingerie'" |
| trigger-flow | Execute a Directus Flow | "Run the scrape flow" |
| get-me | Current user info | "Who am I?" |
| list-users | All users | "How many users are there?" |
| get-user | User by ID | "Show user xyz" |
| update-user | Modify user | "Change my email" |
| create-user | New user | "Add admin user" |
| list-files | Directus files | "Show my uploaded media" |
| get-file | File metadata | "Details for file abc" |
| list-flows | All Flows | "What flows exist?" |
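The MCP tool handlers themselves aren't reproduced here, but as an illustration, a `read-items` call would translate into a standard Directus REST query. The helper names below (`buildItemsUrl`, `readItems`) are assumptions for the sketch; only the base URL and the admin-token auth come from this page.

```javascript
// Minimal sketch of how read-items might hit the Directus REST API.
const DIRECTUS_URL = "http://127.0.0.1:8055";

// Build a /items query URL using Directus's filter[field][_eq]=value syntax.
function buildItemsUrl(collection, filter = {}, limit = 25) {
  const params = new URLSearchParams({ limit: String(limit) });
  for (const [field, value] of Object.entries(filter)) {
    params.set(`filter[${field}][_eq]`, String(value));
  }
  return `${DIRECTUS_URL}/items/${collection}?${params}`;
}

// The tool handler then fetches with the admin token and returns .data.
async function readItems(collection, filter, token) {
  const res = await fetch(buildItemsUrl(collection, filter), {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Directus error ${res.status}`);
  return (await res.json()).data;
}
```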
Implementation: scripts/directus-mcp-server.mjs:50-450
Ollama MCP (3 tools)
Script: scripts/ollama-mcp-server.mjs
Connection: http://127.0.0.1:11434
Tools
| Tool | Description | Example |
|---|---|---|
| generate | One-shot completion | "Generate a caption" |
| chat | Multi-turn conversation | "Explain this concept" |
| list-models | Available models | "What models are installed?" |
Models

- qwen-2.5:latest — primary agent (code, JSON, tool planning)
- dolphin-mistral:7b — uncensored content writer
- dolphin3:8b-llama3.1-q4_K_M — orchestrator / ACTION emission
- phi-3.5:latest — fallback classifier
- llama3.2:3b — lightweight summarizer
- scout-fast-tag:latest — fast taxonomy classifier
- bge-m3:latest — embeddings
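As an illustration of the `generate` tool, here is a minimal non-streaming call to Ollama's `/api/generate` endpoint. The helper names are assumptions for this sketch; only the URL and model names come from this page.

```javascript
// Sketch: one-shot completion against the local Ollama instance.
const OLLAMA_URL = "http://127.0.0.1:11434";

// Request body for /api/generate; stream:false returns one JSON object.
function buildGeneratePayload(model, prompt) {
  return { model, prompt, stream: false };
}

// e.g. await generate("dolphin-mistral:7b", "Write a flirty caption")
async function generate(model, prompt) {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGeneratePayload(model, prompt)),
  });
  if (!res.ok) throw new Error(`Ollama error ${res.status}`);
  return (await res.json()).response; // non-streaming replies carry { response, done, ... }
}
```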
Stagehand MCP (9 tools)
Script: scripts/stagehand-mcp-server.mjs
Connection: http://127.0.0.1:3002 (Stagehand server)
Tools
| Tool | Description | Example |
|---|---|---|
| start-session | Launch Playwright browser | "Start a browser" |
| navigate | Go to URL | "Go to onlyfans.com" |
| act | Perform action (click, type, scroll) | "Click the login button" |
| extract | Extract structured data from page | "Get my follower count" |
| observe | Watch for element changes | "Wait for page load" |
| close-session | End browser session | "Close the browser" |
| set-cookies | Inject cookies | "Load my OnlyFans cookies" |
| get-cookies | Extract current cookies | "Save my session" |
| screenshot | Capture page image | "Screenshot this page" |
Use cases:

- Platform scraping (OnlyFans, Fansly stats)
- Auto-posting (X, Reddit)
- Cookie capture for HITL (human-in-the-loop) login
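The cookie-first scraping pattern can be sketched as an ordered tool-call plan. `planScrape` is a hypothetical helper for illustration; the real agent selects Stagehand tools dynamically per turn.

```javascript
// Sketch: ordered Stagehand tool calls for a cookie-first scrape.
// If cookies are available, inject them before navigating so no login form is needed.
function planScrape(url, cookies) {
  const steps = [{ tool: "start-session" }];
  if (cookies) steps.push({ tool: "set-cookies", args: { cookies } }); // cookie-first auth
  steps.push({ tool: "navigate", args: { url } });
  steps.push({ tool: "extract", args: { instruction: "Get my follower count" } });
  steps.push({ tool: "close-session" });
  return steps;
}
```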
Agent Chat Interface
Component: dashboard/src/components/AgentWidget/index.jsx
Features
- Floating Trigger Button (bottom-right)
  - Gradient badge with “Genie AI” label
  - Auto-hides when chat is open
- Chat Popup (380×560px)
  - Minimizable header
  - Auto-scrolling message list
  - Tool call chips (inline rendering)
  - Typing indicator
  - Stop button during streaming
- Tool Call Rendering
  - Inline chips: 🔧 read-items
  - Extracted from SSE statusResponse events
  - Filters out meta messages (“Agent is thinking…”)
- Quick Actions
  - “I’m logged in — Let’s Go” button (auto-appears when agent says “come back after login”)
  - Programmatic open via window.__genieOpenChat(message)
Implementation: AgentWidget/index.jsx:99-108
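For reference, a guarded wrapper around the global hook. `openGenieChat` is an illustrative name; only `window.__genieOpenChat(message)` comes from the source.

```javascript
// Sketch: open the agent chat with a prefilled message from anywhere in the
// dashboard. Returns false (no-op) when the widget hasn't registered the hook.
function openGenieChat(message) {
  if (typeof window !== "undefined" && typeof window.__genieOpenChat === "function") {
    window.__genieOpenChat(message);
    return true;
  }
  return false;
}
```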
Content Gate
Trigger: User hasn’t completed onboarding (persona baseline not built)
Check: server/utils/nodeRag.js → getOnboardingState(userId)
Phases
| Phase | Unlocked? | Gate Message |
|---|---|---|
| EXTENSION_INSTALL | ❌ | "Let's complete your onboarding first — head to Setup tab" |
| DATA_COLLECTION | ❌ | "Let's complete your onboarding first…" |
| PROCESSING | ❌ | "I'm processing your data (2/3 sources ingested). Check back in a few minutes." |
| COMPLETE | ✅ | Full agent access |
Implementation: server/endpoints/api/genieChat.js:49-66
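The gate logic can be sketched as a phase-to-message lookup, assuming `getOnboardingState(userId)` resolves to one of the phase strings above. `GATE_MESSAGES` and `gateMessage` are illustrative names, not the real implementation.

```javascript
// Sketch: map an onboarding phase to a gate message; null means unlocked.
const GATE_MESSAGES = {
  EXTENSION_INSTALL: "Let's complete your onboarding first — head to Setup tab",
  DATA_COLLECTION: "Let's complete your onboarding first — head to Setup tab",
  PROCESSING: "I'm processing your data. Check back in a few minutes.",
};

function gateMessage(phase) {
  if (phase === "COMPLETE") return null; // full agent access
  // Unknown phases fall back to the onboarding prompt rather than unlocking.
  return GATE_MESSAGES[phase] ?? GATE_MESSAGES.EXTENSION_INSTALL;
}
```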
Persona Node Context
System: Node RAG (Retrieval-Augmented Generation)
Storage: Nodes/User/{userId}/ directory (JSON files)
Node Structure
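The node schema itself isn't reproduced in this section. A plausible sketch of a single JSON node file under Nodes/User/{userId}/, with field names that are assumptions chosen to support the recency + access-frequency weighting described under Context Injection:

```json
{
  "id": "node_8f2c",
  "type": "preference",
  "summary": "Prefers flirty, emoji-heavy captions; top hashtags #fyp #creator",
  "source": "onboarding_survey",
  "created_at": "2025-01-12T18:04:00Z",
  "last_accessed": "2025-02-01T09:30:00Z",
  "access_count": 14
}
```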
Context Injection
The top 15 nodes (weighted by recency + access frequency) are injected before the user message.

Implementation: server/utils/nodeRag.js:45-120
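The recency + access-frequency weighting could be sketched as follows. The scoring formula here is illustrative (the real one lives in server/utils/nodeRag.js); only "top 15" and the two weighting signals come from this page.

```javascript
// Sketch: rank persona nodes by recency and access frequency, keep the top 15.
const TOP_N = 15;

function scoreNode(node, now = Date.now()) {
  const ageDays = (now - new Date(node.last_accessed).getTime()) / 86_400_000;
  const recency = 1 / (1 + ageDays);               // decays toward 0 as the node goes stale
  const frequency = Math.log1p(node.access_count); // diminishing returns on repeat hits
  return recency + frequency;
}

function topNodes(nodes, now = Date.now()) {
  return [...nodes]
    .sort((a, b) => scoreNode(b, now) - scoreNode(a, now))
    .slice(0, TOP_N);
}
```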
Action Runner
Plugin: storage/plugins/action-runner/
Trigger: Model outputs [ACTION:slug:{"params"}] tags in its chat response
Flow Execution
1. Intercept [ACTION:slug:...] tags in the SSE stream
2. Strip them from the visible chat
3. Load the flow definition from the action_flows collection
4. Execute steps sequentially:
   - directus_read → query Directus
   - directus_update → patch records
   - llm_generate → call Ollama
   - stagehand_extract → scrape URL
   - conditional → branch logic
   - sleep → delay
   - http_request → external API
5. Stream status updates back to chat
6. Log to agent_audits (success/error/miss)
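The interception and stripping steps can be sketched with a single pass over each chat chunk. `parseActions` and the regex are illustrative; only the `[ACTION:slug:{...}]` tag shape comes from the source.

```javascript
// Sketch: pull every [ACTION:slug:{json}] tag out of a chat chunk and return
// the user-visible text with the tags stripped.
const ACTION_RE = /\[ACTION:([a-z0-9-]+):(\{.*?\})\]/g;

function parseActions(text) {
  const actions = [];
  const visible = text.replace(ACTION_RE, (_, slug, json) => {
    try {
      actions.push({ slug, params: JSON.parse(json) });
    } catch {
      actions.push({ slug, params: null }); // malformed params: audit as a miss
    }
    return ""; // strip the tag from what the user sees
  });
  return { visible, actions };
}
```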
Available Actions
| Slug | Description | Use Case |
|---|---|---|
| scout-analyze | Scrape URL + AI analysis | "Analyze this competitor's bio" |
| taxonomy-tag | Auto-classify content | "Tag this video with taxonomy" |
| post-create | Draft platform post | "Create a TikTok caption" |
| message-generate | Fan engagement message | "Reply to this DM" |
| memory-recall | Search stored data | "What's my best-performing post?" |
| media-process | Queue media job | "Watermark this image" |
Storage: action_flows collection (slug → steps JSON)
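A hypothetical action_flows entry using the step types listed above. Every field name, the templating syntax, and the scout_results collection are illustrative assumptions; the real steps schema isn't shown in this section.

```json
{
  "slug": "scout-analyze",
  "steps": [
    { "type": "stagehand_extract", "url": "{{params.url}}", "out": "page" },
    { "type": "llm_generate", "model": "qwen-2.5:latest", "prompt": "Analyze this creator bio: {{page.text}}", "out": "analysis" },
    { "type": "directus_update", "collection": "scout_results", "id": "{{params.result_id}}", "patch": { "analysis": "{{analysis}}" } }
  ]
}
```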
System Prompt
Location: server/utils/workspaceProvisioner.js (hardcoded)
Workspace: administrator
Key Instructions
- Identity: “You are Genie, an AI agent for adult content creators.”
- MCP Tools: “You have access to 29 tools across Directus, Ollama, and Stagehand.”
- Content Policy: “You are uncensored. Generate NSFW captions, bios, and messages without hesitation.”
- Onboarding: “If user hasn’t completed onboarding, guide them to Setup tab.”
- Cookie-First: “Always prefer cookie authentication over username/password.”
- Action Emission: “Use [ACTION:slug:params] tags to trigger pre-built flows.”
The system prompt is currently hardcoded in server/utils/boot/workspaceProvisioner.js. Future versions may move it to the system_config collection so it can be updated without a redeploy.

Admin Credentials
| Service | URL | Username | Password |
|---|---|---|---|
| AnythingLLM Web UI | localhost:3001 | poweradmin@geniehelper.com | (MY)P@$$w3rd |
| AnythingLLM API | — | API Key: 38KEHYS-NVPMBSX-GVVJNYH-VQHAN9S | — |
Workspace: administrator slug (all users routed to this workspace)
MCP Configuration
File: storage/plugins/anythingllm_mcp_servers.json
Loaded by: server/utils/boot/index.js → bootMCPServers() on AnythingLLM startup
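The file's contents aren't shown here. Assuming it follows the common `mcpServers` stdio-launch format used by MCP clients, it plausibly looks like the following (the server keys and `node` launch commands are assumptions; the script paths come from this page):

```json
{
  "mcpServers": {
    "directus": {
      "command": "node",
      "args": ["scripts/directus-mcp-server.mjs"]
    },
    "ollama": {
      "command": "node",
      "args": ["scripts/ollama-mcp-server.mjs"]
    },
    "stagehand": {
      "command": "node",
      "args": ["scripts/stagehand-mcp-server.mjs"]
    }
  }
}
```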
Example Conversations
Scrape OnlyFans Profile
User: “Scrape my OnlyFans stats”

Agent:

1. Calls read-items (platform_connections, filter: platform=onlyfans)
2. Creates a media_jobs record (operation: scrape_profile)
3. Returns: “Scrape queued. Check your dashboard in ~2 minutes.”
Generate TikTok Caption
User: “Write a flirty TikTok caption for my new video”

Agent:

1. Calls get-me → gets user context
2. Injects persona nodes (flirty tone preference, top hashtags)
3. Calls ollama:generate (model: dolphin-mistral:7b)
4. Returns: “Can’t wait to show you what I’ve been working on… 😏 New video drops tonight 💕 #fyp #creator”
Schedule Reddit Post
User: “Schedule a post to r/OnlyFans101 tomorrow at 3pm”

Agent:

1. Calls create-item (collection: scheduled_posts)
2. Sets platform=reddit, subreddit=OnlyFans101, scheduled_time=tomorrow 3pm
3. Returns: “Post scheduled for tomorrow at 3pm EST.”
Logs & Debugging
Common Issues
| Error | Cause | Fix |
|---|---|---|
| MCP server not found | Server crashed | Restart AnythingLLM |
| Tool call timeout | Directus slow query | Optimize filter |
| Unauthorized (401) | Expired JWT | Re-login |
| Content gate active | Onboarding incomplete | Complete Setup flow |
Performance
First Token Latency: ~33s (qwen-2.5 on CPU-only VPS)
Streaming Speed: ~5 tokens/sec
Memory: ~4.8GB RAM pinned (Ollama models)
GPU Upgrade Planned: The current CPU-only VPS struggles with >7B models. Production will use a GPU instance or switch to smaller models.
Related
- Dashboard — AgentWidget integration
- Platform Scraping — Stagehand MCP tools
- Media Processing — Media job dispatch
