ForgeAI uses an AI agent powered by GPT-5.2 to turn a plain-text prompt into a fully functional Next.js application. You describe what you want to build, and the agent writes files, installs dependencies, and runs commands inside a sandboxed environment — all without leaving the browser.

How the agent works

When you submit a prompt, an Inngest function starts the code agent. The agent runs in a loop — reading context, calling tools, and writing code — until the app is complete or it reaches the iteration limit. The agent operates inside a pre-configured Next.js 16 sandbox with:
  • Tailwind CSS — all styling uses utility classes; no .css files are written
  • Shadcn UI — a full set of accessible components already installed under components/ui/
  • A running dev server on port 3000 with hot reload
The agent runs for a maximum of 20 iterations per request. Complex prompts may produce partial results if the task exceeds this limit. Break large features into smaller follow-up prompts for best results.
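The loop can be sketched in TypeScript. This is an illustrative simplification, not ForgeAI's actual implementation: `runAgent`, `Step`, and `callModel` are hypothetical names, and real tool execution is elided.

```typescript
// Hypothetical sketch of the generation loop. The 20-iteration cap comes
// from the docs; everything else (names, shapes) is illustrative.
type ToolCall = { tool: string; args: Record<string, unknown> };
type Step = { done: boolean; toolCalls: ToolCall[] };

const MAX_ITERATIONS = 20;

function runAgent(
  callModel: (history: Step[]) => Step
): { steps: number; complete: boolean } {
  const history: Step[] = [];
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    // Read context and decide the next action.
    const step = callModel(history);
    history.push(step);
    if (step.done) return { steps: i + 1, complete: true };
    // Otherwise the tool calls would run here (write files, run commands),
    // and the loop continues with the updated context.
  }
  // Iteration limit reached: the app may be only partially built.
  return { steps: MAX_ITERATIONS, complete: false };
}
```

The `complete: false` branch is why overly broad prompts can return partial results: the loop simply runs out of iterations before the agent declares the task done.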

Agent tools

The agent has four tools available during each generation run:
  • terminal: runs shell commands, primarily npm install <package> --yes to add dependencies
  • createOrUpdateFiles: writes or overwrites files in the sandbox filesystem
  • readFiles: reads existing files to understand the current project state
  • unsplashImage: searches Unsplash, downloads a photo, and saves it to /public/assets/unsplash/
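To make the file tools concrete, here is a minimal sketch of what a createOrUpdateFiles handler does, written against an in-memory map instead of the real sandbox filesystem. The function shape is an assumption for illustration only.

```typescript
// Illustrative handler: the real tool writes into the sandbox, not a Map.
type FileMap = Map<string, string>;

function createOrUpdateFiles(
  fs: FileMap,
  files: { path: string; content: string }[]
): string[] {
  const written: string[] = [];
  for (const { path, content } of files) {
    // Overwrites silently if the file already exists, matching the
    // "writes or overwrites" behavior described above.
    fs.set(path, content);
    written.push(path);
  }
  return written;
}
```

readFiles is the inverse operation: the agent calls it before editing so its next write is based on the current file contents rather than a stale guess.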

What the agent generates

Every project is rooted at app/page.tsx, which the agent always creates as the main entry point. From there it may create additional components, utilities, and pages under app/. The agent follows these conventions:
  • Imports Shadcn UI components from @/components/ui/<component>
  • Uses only Tailwind utility classes for styling — never writes .css, .scss, or .sass files
  • Adds "use client" only to files that use React hooks or browser APIs
  • Uses kebab-case filenames and PascalCase component names
  • Fetches real photos via unsplashImage instead of leaving placeholder images
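The filename convention above is mechanical: a kebab-case filename maps directly to a PascalCase component name. A small hypothetical helper (not part of ForgeAI) makes the mapping explicit:

```typescript
// Illustrative only: shows how a kebab-case filename such as
// "pricing-table.tsx" corresponds to the PascalCase component "PricingTable".
function componentNameFromFile(filename: string): string {
  return filename
    .replace(/\.tsx?$/, "") // drop the .ts / .tsx extension
    .split("-")             // split on kebab-case hyphens
    .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
    .join("");
}
```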

Writing effective prompts

The more context you give the agent, the better the output. Vague prompts lead to generic results; specific prompts produce polished apps.
Tell the agent what sections the page should have, how they’re arranged, and how much content each should include.
Instead of: “Make a landing page”
Try: “Build a SaaS landing page with a full-width hero, a three-column features section, a pricing table with three tiers, and a FAQ accordion.”
If you want a data table, a modal dialog, a sidebar, or animated transitions — say so explicitly. The agent won’t add features it hasn’t been asked for.
Provide real copy, item counts, and field names rather than asking the agent to “add some content.” The agent uses static/local data by default and won’t call external APIs unless you ask.
Mention mobile breakpoints, keyboard navigation, or ARIA requirements if they matter for your project.
After the initial generation, keep prompts focused on one change at a time. The agent loads the previous file state so it can iterate without losing existing work.

Example prompts

Build a modern SaaS landing page with pricing, FAQ, and newsletter signup.
Create a Kanban board with three columns (To Do, In Progress, Done), draggable cards,
and a dialog to add new cards with a title and description field.
Build a recipe app with a search bar, a grid of recipe cards showing an image, title,
and cook time, and a detail page with ingredients and steps.
If you need a real photo — for a hero image, team avatar, or product card — describe it in the prompt. The agent will use the unsplashImage tool to download an appropriate photo into /public/assets/unsplash/ and add a proper attribution link.
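As a rough sketch of the result, a downloaded photo could be recorded like this. The /public/assets/unsplash/ path prefix comes from the docs; the field names, the .jpg extension, and the function itself are assumptions for illustration.

```typescript
// Hypothetical record of a downloaded Unsplash photo; not ForgeAI's API.
interface UnsplashAsset {
  localPath: string;    // saved under /public/assets/unsplash/
  photographer: string; // used for the attribution link
  sourceUrl: string;    // link back to the photo on Unsplash
}

function assetFor(
  id: string,
  photographer: string,
  sourceUrl: string
): UnsplashAsset {
  return {
    // .jpg is assumed here; the actual extension depends on the download.
    localPath: `/public/assets/unsplash/${id}.jpg`,
    photographer,
    sourceUrl,
  };
}
```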

Project naming

After generation completes, a separate naming agent reads your original prompt and produces a short, product-like title (2–5 words, Title Case). This name is saved to the project and shown in the dashboard.
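The naming constraints (2–5 words, Title Case) could be checked with a validator like the sketch below. This is an illustrative check, not the naming agent's actual code.

```typescript
// Hypothetical validator for the naming agent's output constraints:
// 2–5 words, each starting with an uppercase letter (Title Case).
function isValidProjectName(name: string): boolean {
  const words = name.trim().split(/\s+/);
  if (words.length < 2 || words.length > 5) return false;
  return words.every((w) => /^[A-Z]/.test(w));
}
```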
