Overview
The Captions API generates platform-optimized social media captions using the dolphin-mistral:7b model via Ollama. The model was chosen specifically for its uncensored, creator-style writing.

Base Path: /api/captions
Key Features:
- Platform-specific style guides (OnlyFans, Fansly, X, Instagram, TikTok, etc.)
- Tone customization (playful, professional, teasing, etc.)
- Length control (short, medium, long)
- Creator username personalization
POST /api/captions/generate
Generate a social media caption using AI.

Authentication: Directus JWT (Bearer token)

Request Parameters
Target platform for the caption. Determines the style guide and formatting. Supported values:
- onlyfans - Intimate, teasing, personal (1-3 sentences, emojis OK)
- fansly - Flirty, direct, confident (encourages subscription)
- x - Punchy, under 280 chars, hook + value
- reddit - Genuine, community-aware, conversational
- tiktok - Energetic, with hashtags, max 150 chars
- instagram - Visually descriptive, lifestyle tone, 3-5 hashtags
- youtube - Keyword-rich, first 2 lines as hook, timestamps
- snapchat - Very short, playful, max 30 chars
Caption tone/voice. Examples: playful, professional, teasing, motivational, casual, confident.

Main topic or theme for the caption. Examples:
- new photo set
- behind the scenes
- workout motivation
- product announcement
- Q&A session
Desired caption length. Options:
- short - 1-2 sentences, punchy
- medium - 2-4 sentences, balanced detail
- long - 4-8 sentences, detailed and engaging
Creator’s username on the platform (optional). Used to personalize the caption.
Response
- Request success status
- Generated caption text (trimmed, ready to post)
- AI model used for generation (e.g., dolphin-mistral:7b)
- Platform identifier (echoed from request)
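A successful response body might look like this sketch (the JSON field names and values are assumptions inferred from the fields listed above):

```json
{
  "success": true,
  "caption": "Something special just dropped for you... 😘 check your DMs 💕",
  "model": "dolphin-mistral:7b",
  "platform": "onlyfans"
}
```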
Example Request
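A request from a Node client could be sketched as follows. The JSON field names (platform, tone, topic, length, username) are assumptions inferred from the request parameters above, not confirmed against the route handler:

```javascript
// Sketch of a caption request. Field names (platform, tone, topic,
// length, username) are assumptions inferred from the parameter list.
function buildCaptionRequest({ platform, tone, topic, length = 'medium', username }) {
  const supported = [
    'onlyfans', 'fansly', 'x', 'reddit',
    'tiktok', 'instagram', 'youtube', 'snapchat',
  ];
  if (!supported.includes(platform)) {
    throw new Error(`Unsupported platform: ${platform}`);
  }
  const body = { platform, tone, topic, length };
  if (username) body.username = username; // optional personalization
  return body;
}

// Example call — requires a valid Directus JWT.
async function generateCaption(baseUrl, jwt, params) {
  const res = await fetch(`${baseUrl}/api/captions/generate`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${jwt}`, // Directus JWT
    },
    body: JSON.stringify(buildCaptionRequest(params)),
  });
  if (!res.ok) throw new Error(`Caption request failed: ${res.status}`);
  return res.json();
}
```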
OnlyFans Caption:

Platform Style Guides

The API uses platform-specific system prompts (from captions.js:18):
OnlyFans
Style: Intimate, teasing, personal
Format: 1-3 short sentences
Guidelines:
- Emojis encouraged
- No external links (platform restriction)
- Direct, conversational tone
- Encourage engagement/DMs
Fansly
Style: Flirty, direct, confident
Format: 1-3 sentences
Guidelines:
- Encourage subscription/tips
- Bold, unapologetic tone
- Call-to-action focused
X (Twitter)
Style: Punchy, under 280 chars
Format: Hook + value
Guidelines:
- No fluff, direct value
- Thread-starter friendly
- High engagement potential
Instagram
Style: Visually descriptive, lifestyle tone
Format: 3-5 hashtags at end
Guidelines:
- First line is hook (visible in feed)
- Storytelling encouraged
- Hashtags for discoverability
TikTok
Style: Energetic, hashtag-rich
Format: Max 150 chars for caption
Guidelines:
- Trending hashtags prioritized
- CTA for watch/like/follow
- Match video energy
YouTube
Style: Keyword-rich, SEO-optimized
Format: First 2 lines as hook, timestamps if relevant
Guidelines:
- First 2 lines visible before “Show More”
- Include chapters/timestamps
- CTA to subscribe
Reddit
Style: Genuine, community-aware, conversational
Format: Natural, un-salesy
Guidelines:
- Match subreddit culture
- No overt self-promotion
- Value-first approach
Snapchat
Style: Very short, playful
Format: Max 30 chars
Guidelines:
- Complements visual story
- Emoji-heavy acceptable
- Casual, spontaneous
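The style guides above can be modeled as a simple platform-to-prompt map. This is only a sketch of the approach: the real prompt wording lives in captions.js:18 and will differ from the illustrative strings below.

```javascript
// Hypothetical sketch of the per-platform system prompts described above.
// Prompt text is illustrative; the real strings are in captions.js:18.
const PLATFORM_PROMPTS = {
  onlyfans: 'Write an intimate, teasing, personal caption in 1-3 short sentences. Emojis OK. No external links. Encourage DMs.',
  fansly: 'Write a flirty, direct, confident caption in 1-3 sentences. Encourage subscriptions and tips.',
  x: 'Write a punchy caption under 280 characters: hook + value, no fluff.',
  reddit: 'Write a genuine, community-aware, conversational post. No overt self-promotion; value first.',
  tiktok: 'Write an energetic caption under 150 characters with trending hashtags and a watch/like/follow CTA.',
  instagram: 'Write a visually descriptive, lifestyle caption. First line is the hook; end with 3-5 hashtags.',
  youtube: 'Write a keyword-rich description. First 2 lines are the hook; include chapters/timestamps and a subscribe CTA.',
  snapchat: 'Write a very short, playful caption under 30 characters. Emojis welcome.',
};

// Look up the system prompt for a platform, failing fast on unknown values.
function systemPromptFor(platform) {
  const prompt = PLATFORM_PROMPTS[platform];
  if (!prompt) throw new Error(`No style guide for platform: ${platform}`);
  return prompt;
}
```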
Model Configuration
The caption generation uses Ollama’s /api/generate endpoint with optimized settings (from captions.js:88).
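The shape of that call can be sketched as below. The exact option values at captions.js:88 are not reproduced here, so the temperature and num_predict settings are illustrative, not the service’s real configuration:

```javascript
// Hedged sketch of the Ollama request body; temperature and num_predict
// values are illustrative, not the settings from captions.js:88.
function buildOllamaPayload(systemPrompt, userPrompt) {
  return {
    model: 'dolphin-mistral:7b',
    system: systemPrompt,  // platform style guide
    prompt: userPrompt,    // tone + topic + length instructions
    stream: false,         // wait for the complete caption
    options: {
      temperature: 0.8,    // illustrative: higher = more creative
      num_predict: 200,    // illustrative cap on generated tokens
    },
  };
}

async function callOllama(baseUrl, systemPrompt, userPrompt) {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildOllamaPayload(systemPrompt, userPrompt)),
  });
  const data = await res.json();
  return data.response.trim(); // non-streaming Ollama replies return the text in `response`
}
```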
Why dolphin-mistral:7b?
- Uncensored: No content filters (critical for adult creator use cases)
- Fast inference: the 7B-parameter model returns a caption in roughly 1.5-3 s on CPU (see Performance below)
- Style matching: Fine-tuned for conversational, creative writing
- Consistency: Reliable output structure (no meta-commentary)
Environment Configuration
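A minimal sketch of resolving Ollama settings from the environment. The variable names OLLAMA_URL and OLLAMA_MODEL are assumptions, not confirmed names from the source; http://localhost:11434 is Ollama’s default local address:

```javascript
// Hypothetical environment lookup — OLLAMA_URL and OLLAMA_MODEL are
// assumed names, not confirmed from the source.
function loadConfig(env = process.env) {
  return {
    ollamaUrl: env.OLLAMA_URL || 'http://localhost:11434', // Ollama's default port
    model: env.OLLAMA_MODEL || 'dolphin-mistral:7b',
  };
}
```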
Error Handling
Unauthorized (no token):

Best Practices
Caption Quality Tips
- Be specific with topics: Instead of “new content”, use “new yoga tutorial series”
- Match tone to platform: playful works for OnlyFans, professional for LinkedIn
- Iterate on length: Start with medium, adjust based on platform engagement
- Test multiple generations: Run 3-5 variations and pick the best
Integration Patterns
Client-side caching: cache generated captions keyed by request parameters so repeated requests skip inference.

Performance
Typical response times:
- CPU inference (dolphin-mistral:7b): 1.5-3s
- GPU inference (if available): 0.3-0.8s
- Network overhead: Less than 100ms
Optimization tips:
- Pre-warm Ollama: Run a dummy generation on server start to load the model into memory
- Use GPU: Set OLLAMA_GPU_LAYERS=-1 for full GPU offload
- Adjust num_predict: Reduce to 150 for shorter captions (faster generation)
- Connection pooling: Reuse HTTP connections to Ollama
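Pre-warming can be sketched as a tiny throwaway generation at server start. The helper name and option values below are illustrative; only the model name and Ollama’s /api/generate endpoint come from this page:

```javascript
// Hedged sketch: load the model into memory at startup with a one-token
// dummy generation. prewarmPayload's values are illustrative.
function prewarmPayload() {
  return {
    model: 'dolphin-mistral:7b',
    prompt: 'hi',                 // trivial prompt just to trigger a model load
    stream: false,
    options: { num_predict: 1 },  // emit a single token, then stop
  };
}

async function prewarmOllama(baseUrl = 'http://localhost:11434') {
  await fetch(`${baseUrl}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(prewarmPayload()),
  });
}
```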
Example Use Cases
Content Calendar Automation
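A hedged sketch of the weekly loop, assuming a generateCaption(params) client helper that POSTs to /api/captions/generate with a Directus JWT; the topic plan and field names are illustrative:

```javascript
// Illustrative weekly topic plan — one caption per day.
const WEEK_PLAN = [
  { day: 'Mon', topic: 'new photo set', tone: 'teasing' },
  { day: 'Tue', topic: 'behind the scenes', tone: 'casual' },
  { day: 'Wed', topic: 'workout motivation', tone: 'motivational' },
  { day: 'Thu', topic: 'Q&A session', tone: 'playful' },
  { day: 'Fri', topic: 'new photo set', tone: 'confident' },
  { day: 'Sat', topic: 'behind the scenes', tone: 'playful' },
  { day: 'Sun', topic: 'product announcement', tone: 'professional' },
];

// Build one request body per day; field names are assumed from the
// request parameters documented above.
function buildWeekRequests(platform, username) {
  return WEEK_PLAN.map(({ day, topic, tone }) => ({
    day,
    request: { platform, tone, topic, length: 'medium', username },
  }));
}

// Run the plan sequentially through an assumed generateCaption(params) helper.
async function generateWeek(generateCaption, platform, username) {
  const results = [];
  for (const { day, request } of buildWeekRequests(platform, username)) {
    results.push({ day, caption: await generateCaption(request) });
  }
  return results;
}
```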
Generate a week’s worth of captions for a creator:

A/B Testing Captions
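Split testing can rerun the same brief several times and keep every variant for comparison, since each generation samples fresh output. A sketch, again assuming a generateCaption(params) helper that POSTs to /api/captions/generate (variant count illustrative):

```javascript
// Generate N caption variants for the same brief, sequentially, to keep
// load on the single Ollama model light. generateCaption is an assumed
// client helper wrapping POST /api/captions/generate.
async function generateVariants(generateCaption, params, count = 3) {
  const variants = [];
  for (let i = 0; i < count; i++) {
    variants.push(await generateCaption(params)); // each call samples fresh output
  }
  return variants;
}
```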
Generate variations for split testing:

Next Steps
- Queue API: Enqueue caption generation as a background job
- Genie Chat: Use the AI agent for interactive caption brainstorming
