
Prerequisites

Before you begin, make sure you have:
  • Node.js 18.0 or higher installed
  • An OpenAI API key (available from the OpenAI platform dashboard)
  • Basic familiarity with React and npm

Installation

1. Clone or download the project

Get the source code on your local machine:
git clone https://github.com/remotion-dev/remotion.git
cd remotion/packages/template-prompt-to-motion-graphics
2. Install dependencies

Install all required packages:
npm install
This installs the full Remotion stack including:
  • remotion - Core Remotion library
  • @remotion/player - Embedded video player
  • @remotion/transitions - Smooth scene transitions
  • @remotion/shapes - Geometric shape primitives
  • next - Next.js framework for the web interface
  • ai - Vercel AI SDK for streaming responses
  • @ai-sdk/openai - OpenAI provider
3. Configure environment variables

Create a .env file in the root directory:
cp .env.example .env
Edit .env and add your OpenAI API key:
OPENAI_API_KEY=sk-proj-...
The system uses GPT-5.2 by default. You can select different models (including reasoning models with low/medium/high effort) from the UI dropdown.
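If the key doesn't seem to be picked up, a quick check from Node can confirm it is visible to the process. This is a minimal sketch; the `hasApiKey` helper is hypothetical and not part of the template:

```typescript
declare const process: { env: Record<string, string | undefined> };

// Hypothetical helper: verify the OpenAI key is present and well-formed.
// OpenAI keys currently start with "sk-"; adjust the check if yours differs.
const hasApiKey = (env: Record<string, string | undefined>): boolean => {
  const key = env.OPENAI_API_KEY;
  return typeof key === "string" && key.startsWith("sk-");
};

if (!hasApiKey(process.env)) {
  console.error("OPENAI_API_KEY is missing or malformed — check your .env file");
}
```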
4. Start the development server

Launch the application:
npm run dev
Open http://localhost:3000 in your browser.

Create your first animation

1. Enter a prompt

On the landing page, you’ll see a large text input that says “What do you want to create?” Try this example prompt:
Create an animated counter that counts from 0 to 100 with a 
bouncy spring animation. Use a large bold font, center it on 
the screen, and use the color #3b82f6.
You can also click one of the example prompt pills to auto-fill the input.
2. Watch the code generate

After submitting, you’ll be redirected to /generate where you’ll see:
  • Left sidebar: Chat history showing your prompt and AI responses
  • Code tab: Monaco editor with syntax highlighting showing the generated code
  • Preview tab: Live Remotion Player rendering your animation
The code streams in token-by-token. Syntax highlighting is disabled during streaming for performance.
3. Review the generated code

Once generation completes, switch to the Code tab to see the generated component:
import { useCurrentFrame, useVideoConfig, AbsoluteFill, spring } from "remotion";

export const Counter = () => {
  /**
   * Animated counter that counts from 0 to 100 with a bouncy spring animation.
   * Large bold font, centered on screen, blue color.
   */
  const frame = useCurrentFrame();
  const { fps, durationInFrames } = useVideoConfig();
  
  // Editable constants
  const TEXT_COLOR = "#3b82f6";
  const BG_COLOR = "#ffffff";
  const FONT_SIZE = 120;
  const TARGET_VALUE = 100;
  const ANIMATION_DURATION = 90; // frames
  
  // Spring animation for counter
  const progress = spring({
    frame,
    fps,
    config: {
      damping: 12,
      stiffness: 100,
      mass: 0.5,
    },
    durationInFrames: ANIMATION_DURATION,
  });
  
  const currentValue = Math.round(progress * TARGET_VALUE);
  
  return (
    <AbsoluteFill
      style={{
        backgroundColor: BG_COLOR,
        justifyContent: "center",
        alignItems: "center",
        fontFamily: "Inter, sans-serif",
      }}
    >
      <div
        style={{
          fontSize: FONT_SIZE,
          fontWeight: "bold",
          color: TEXT_COLOR,
        }}
      >
        {currentValue}
      </div>
    </AbsoluteFill>
  );
};
Notice the constants-first design: All colors, sizes, and timing values are declared as editable constants at the top.
4. Preview the animation

Switch to the Preview tab to see your animation render in real time. Player controls:
  • Play/pause button
  • Timeline scrubber to jump to any frame
  • Frame counter showing current position
  • Settings button to adjust duration and FPS
The default duration is 150 frames at 30 FPS (5 seconds). You can change these values in the settings modal.
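The frames-to-seconds math is simple: duration in frames = seconds × fps. A throwaway helper (hypothetical, not part of the template) makes the conversion explicit:

```typescript
// Convert a duration in seconds to a frame count at a given frame rate.
const secondsToFrames = (seconds: number, fps: number): number =>
  Math.round(seconds * fps);

secondsToFrames(5, 30); // 150 — the default duration
secondsToFrames(1, 30); // 30 — "fade in over 1 second" at 30 fps
```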
5. Make edits with follow-up prompts

The chat sidebar at the left lets you refine your animation with natural language:
Change the color to green and make it count to 1000 instead
The AI will apply targeted edits to the existing code rather than regenerating from scratch. You’ll see which lines changed in the assistant’s response.

You can also manually edit the code in the Monaco editor. The system tracks manual edits and preserves them during AI refinements.

Understanding the generated code

Every generated component follows consistent patterns:

Component structure

import { 
  useCurrentFrame, 
  useVideoConfig, 
  AbsoluteFill, 
  interpolate, 
  spring 
} from "remotion";

Animation patterns

Spring physics for organic motion:
const bounceIn = spring({
  frame,
  fps,
  config: {
    damping: 12,     // Higher = less bouncy
    stiffness: 100,  // Higher = faster
    mass: 0.5,       // Higher = slower
  },
});
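To build intuition for those three parameters, here is a rough mass-spring-damper integration in plain TypeScript. This is an illustrative sketch of the physics, not Remotion's actual solver:

```typescript
// Semi-implicit Euler integration of a damped spring moving from 0 toward 1.
// Higher damping kills the oscillation; higher stiffness speeds it up;
// higher mass slows everything down.
const simulateSpring = (
  frames: number,
  fps: number,
  { damping = 12, stiffness = 100, mass = 0.5 } = {}
): number => {
  let x = 0; // position (animation progress)
  let v = 0; // velocity
  const dt = 1 / fps;
  for (let i = 0; i < frames; i++) {
    const accel = (stiffness * (1 - x) - damping * v) / mass;
    v += accel * dt;
    x += v * dt;
  }
  return x;
};
```

Sampling the first frames shows the bounce; by frame 90 at 30 fps the value has settled near 1.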
Interpolate for linear transitions:
const fadeIn = interpolate(
  frame,
  [0, 30],      // Input range (frames)
  [0, 1],       // Output range (opacity)
  {
    extrapolateLeft: "clamp",
    extrapolateRight: "clamp",
  }
);
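The clamp options matter: without them, frames outside the input range extrapolate past the output range. Here is a plain-TypeScript sketch of the clamped semantics (not Remotion's implementation):

```typescript
// Map value from [inMin, inMax] to [outMin, outMax], clamping at both ends —
// the behavior interpolate gives when extrapolateLeft/Right are "clamp".
const interpolateClamped = (
  value: number,
  [inMin, inMax]: [number, number],
  [outMin, outMax]: [number, number]
): number => {
  const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
  return outMin + t * (outMax - outMin);
};

interpolateClamped(15, [0, 30], [0, 1]); // 0.5 — halfway through the fade
interpolateClamped(45, [0, 30], [0, 1]); // 1 — clamped after frame 30
```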
Staggered delays for multiple elements:
items.map((item, index) => {
  const delay = index * 5; // 5 frames between each
  const opacity = interpolate(
    frame - delay,
    [0, 20],
    [0, 1],
    { extrapolateLeft: "clamp", extrapolateRight: "clamp" }
  );
  return <div key={index} style={{ opacity }}>{item}</div>;
})
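The stagger pattern can be checked numerically: at frame 10, with a 5-frame delay per item and a 20-frame fade, item 0 is half-faded, item 1 a quarter-faded, and item 2 has not started. A sketch using the same clamp math (`staggeredOpacity` is a hypothetical helper, not part of the template):

```typescript
// Hypothetical helper: opacity of item `index` at `frame`, fading over
// `fadeFrames` frames, with `delayPerItem` frames between item starts.
const staggeredOpacity = (
  frame: number,
  index: number,
  delayPerItem = 5,
  fadeFrames = 20
): number => {
  const local = frame - index * delayPerItem;
  return Math.min(Math.max(local / fadeFrames, 0), 1);
};

staggeredOpacity(10, 0); // 0.5
staggeredOpacity(10, 1); // 0.25
staggeredOpacity(10, 2); // 0
```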

Next steps

Installation guide

Learn about all dependencies, configuration options, and deployment

Prompting best practices

Master prompt engineering to get better results

Skills system

Understand how domain expertise is injected into generation

API reference

Explore the generation API and request/response schemas

Troubleshooting

The system includes automatic error correction with up to 3 retry attempts. If compilation fails:
  1. Check the error message in the Preview tab
  2. The AI will automatically attempt to fix syntax errors
  3. You can manually edit the code to fix issues
  4. Send a follow-up prompt describing the problem
If the animation doesn’t match what you described, be more specific in your prompt:
  • Include exact colors (hex codes)
  • Specify timing (“fade in over 1 second” = 30 frames at 30fps)
  • Describe layout precisely (“centered”, “top left corner”, “20px padding”)
  • Mention animation feel (“bouncy”, “smooth”, “sharp”)
If generation is slow, keep in mind that speed depends on:
  • Model selection (GPT-5.2 with reasoning effort is slower but higher quality)
  • Prompt complexity (more skills detected = more context)
  • OpenAI API response time
Try using a faster model from the dropdown or simplifying your prompt.
If targeted edits fail to apply:
  • The AI will automatically retry with better context matching
  • You can manually apply changes in the code editor
  • Try rephrasing your edit request more specifically
  • The system will fall back to full code replacement if edits keep failing
