Prerequisites

Before you begin, make sure you have:
  • Bun (bun@1.3.2 or later) — Pindeck uses Bun as its package manager and runtime
  • A Convex account and a production deployment at convex.dev
  • An OpenRouter API key for VLM-powered image analysis
  • A fal.ai API key for cinematic image variation generation
Pindeck is production-first — dev, build, serve, and deploy:convex all verify your Convex URL targets the production deployment before running. Commands fail immediately if .env.local points to a non-production deployment.
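The production guard can be approximated locally; a minimal sketch, assuming the only rule enforced is that the URL is an https://*.convex.cloud address (see the repo's scripts for the real validation logic):

```shell
# Hypothetical pre-flight check mirroring Pindeck's production guard:
# accept only https://*.convex.cloud URLs and reject everything else,
# including .convex.site addresses.
check_convex_url() {
  case "$1" in
    https://*.convex.cloud) return 0 ;;
    *) echo "error: VITE_CONVEX_URL must be an https://*.convex.cloud URL" >&2
       return 1 ;;
  esac
}

check_convex_url "https://your-deployment-name.convex.cloud" && echo "ok: production URL"
```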

Set up the project

1. Install dependencies

Clone the repository and install all packages:
bun install
2. Copy the env file

Create your local environment file from the provided example:
cp .env.example .env.local
3. Set your Convex deployment URL

Open .env.local and set VITE_CONVEX_URL to your production Convex deployment URL. Use the .convex.cloud URL — not the .convex.site URL.
VITE_CONVEX_URL=https://your-deployment-name.convex.cloud
This value is validated before every build, serve, and deploy:convex run. If it is missing or points to a non-production deployment, the command exits immediately.
4. Deploy Convex functions

Push your backend functions to your production Convex deployment:
bun run deploy:convex
This verifies your production target first, then runs convex deploy.
5. Build and serve the app

Build the production bundle, then start the preview server:
bun run build
bun run serve
The app is available at http://localhost:4173. bun run serve automatically kills any existing process on port 4173 before starting.
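That port cleanup can also be done by hand; a sketch using lsof (assuming lsof is available — this is not necessarily Pindeck's exact implementation):

```shell
# Kill whatever is listening on a TCP port — the same cleanup
# `bun run serve` performs for 4173 before starting the preview server.
free_port() {
  local pids
  pids=$(lsof -ti ":$1" 2>/dev/null || true)  # -t prints bare PIDs only
  [ -n "$pids" ] && kill $pids
  return 0
}

free_port 4173
```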

Sign in

Pindeck supports two authentication methods:
  • Password — register with an email address and password
  • Anonymous — sign in instantly without credentials
Both options are presented on the sign-in screen. Authentication is backed by Convex Auth with RSA/JWT token signing.

Upload your first image

  1. After signing in, open the Upload tab.
  2. Drag and drop image files onto the upload area, or click to open the file picker.
  3. Fill in any metadata (title, category, group), or leave fields blank and let AI fill them in.
  4. Click Upload to submit. Images start in the draft state.

Watch AI analysis run

After upload, Pindeck automatically triggers analysis via OpenRouter:
  • A Vision Language Model (default: qwen/qwen3-vl-8b-instruct) generates a title, description, tags, a 5-color palette, visual style, category, and mood board suggestions.
  • Image status moves from draft → processing (AI analysis) → review queue → active once you finalize the upload.
  • Once active, the image appears in the Gallery with all generated metadata visible in the detail modal.
From the detail modal you can trigger fal.ai variation generation — select a modification mode (Shot Variation, B-Roll, Style Variation, and more) to produce cinematic variants of your uploaded image.
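For reference, the kind of VLM request the analysis step makes can be sketched with plain curl against OpenRouter's chat completions endpoint. The prompt, response schema, and request wiring Pindeck actually uses are internal to the app — this is only an illustration, and the image URL is a placeholder:

```shell
# Hypothetical: ask the default VLM about an image via OpenRouter.
# Requires OPENROUTER_API_KEY in the environment; the curl call is
# skipped when the key is not set.
cat > /tmp/vlm-request.json <<'EOF'
{
  "model": "qwen/qwen3-vl-8b-instruct",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "text", "text": "Suggest a title, tags, and a 5-color palette for this image."},
      {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
    ]
  }]
}
EOF

if [ -n "${OPENROUTER_API_KEY:-}" ]; then
  curl -s https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d @/tmp/vlm-request.json
fi
```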