Vibra Code turns plain-English descriptions into fully functional mobile apps. You write a prompt, the AI builds the app inside a cloud sandbox, and you see a live preview directly on your phone — all in real time, streamed via Convex.

Starting a new session

Every app you build lives in a session. A session tracks the conversation, the E2B sandbox, and the tunnel URL used to preview your app.
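As a mental model, a session record might look like the following sketch. The field names here are assumptions for illustration, not Vibra Code's actual Convex schema; only the concepts (status, sandbox, tunnel URL, stop flag) come from this page.

```typescript
// Illustrative sketch of a session record; field names are assumptions,
// not the real Convex schema.
type SessionStatus =
  | "IN_PROGRESS"
  | "CLONING_REPO"
  | "INSTALLING_DEPENDENCIES"
  | "STARTING_DEV_SERVER"
  | "CREATING_TUNNEL"
  | "RUNNING"
  | "CUSTOM";

interface Session {
  status: SessionStatus;
  sandboxId: string | null; // E2B sandbox backing this session
  tunnelUrl: string | null; // public URL used by the preview pane
  agentStopped: boolean;    // set to true when you tap Stop
}

function newSession(): Session {
  return {
    status: "IN_PROGRESS",
    sandboxId: null,
    tunnelUrl: null,
    agentStopped: false,
  };
}
```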
1. Open the Create tab

Tap the Create tab at the bottom of the Vibra Code app. This opens VibraCreateAppScreen, where you choose a template and write your initial prompt.
2. Choose a template

Select the template that matches what you want to build. For native mobile apps, choose Expo React Native. For web apps, choose Next.js or another web template. See App templates for the full list.
3. Describe your app

Type what you want to build in plain English. Be as specific or as high-level as you like. You can also use voice input or attach image mockups at this step.
4. Tap Send

Your message is sent to the backend, a session is created in Convex, and Inngest kicks off the sandbox build pipeline.
5. Watch the build

The chat view shows real-time status updates as the sandbox progresses through each stage. Once the session reaches RUNNING, your app is live.
6. Preview your app

When the tunnel URL is ready, the native preview pane loads your app. You can interact with it right on your phone.
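The "Tap Send" step (step 4) submits your template choice and prompt to the backend. A hypothetical request payload might look like this; the real backend API is not documented here, so all names and shapes below are illustrative.

```typescript
// Hypothetical payload for starting a session; names and shapes are
// assumptions, not Vibra Code's real API.
interface CreateSessionRequest {
  template: "expo-react-native" | "nextjs";
  prompt: string;
  attachments: string[]; // optional image mockups (step 3)
}

function buildCreateSessionRequest(
  template: CreateSessionRequest["template"],
  prompt: string,
  attachments: string[] = [],
): CreateSessionRequest {
  // Step 3 requires a description before you tap Send.
  if (prompt.trim().length === 0) {
    throw new Error("Describe your app before sending.");
  }
  return { template, prompt, attachments };
}
```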

The chat interface

Once a session is open, the interface has three main areas.

Top bar

The top bar shows the current app name. In preview mode the name is centered; in chat mode it shifts left. The top bar also contains:
  • Refresh button — reloads the app preview
  • Chevron down — closes the chat and returns to full-screen preview
  • Three-dots menu — opens modals for files, logs, environment variables, GitHub publish, and more

Chat view

The main scrollable area that shows all messages exchanged with the AI agent. Messages stream in real time from the E2B sandbox via Convex. The chat is rendered by a high-performance native stack — Texture (AsyncDisplayKit) + IGListKit — for smooth 60 fps scrolling.

Bottom bar

The input area at the bottom contains:
  • Text input — type follow-up instructions or questions
  • Send button — submit your message
  • Mic button — tap to record voice input (transcribed by AssemblyAI)
  • Image button — attach mockup screenshots or reference images
  • Model selector — switch between AI providers (Claude, Cursor, Gemini)

Message types

The chat displays several distinct message types, each rendered differently so you can tell at a glance what the AI is doing.
| Type | Visual | What it shows |
| --- | --- | --- |
| message | Plain text with markdown | User messages and AI assistant replies |
| read | Blue accent group | A file the agent read from the sandbox |
| edit | Orange accent group | A file edit the agent applied |
| bash | Green accent group | A terminal command and its output |
| tasks | Liquid Glass card | The agent's current to-do list |
| status | Shimmer indicator | "Working…" while the agent is active |
Tap on a read, edit, or bash group to expand or collapse the details.
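The table above maps naturally onto a discriminated union. This sketch shows how a renderer might dispatch on the message type; the mapping comes from the table, while the function and style names are illustrative.

```typescript
// Message types as listed in the table above.
type ChatMessageType = "message" | "read" | "edit" | "bash" | "tasks" | "status";

// Pick a visual treatment per message type. The type-to-visual mapping
// follows the docs; the string identifiers are assumptions.
function visualFor(type: ChatMessageType): string {
  switch (type) {
    case "message": return "plain-markdown";
    case "read":    return "blue-accent-group";
    case "edit":    return "orange-accent-group";
    case "bash":    return "green-accent-group";
    case "tasks":   return "liquid-glass-card";
    case "status":  return "shimmer-indicator";
  }
}
```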

Session status flow

After you send your first message, the session moves through a fixed sequence of statuses. These are displayed in the chat and stored on the sessions table in Convex.
| Status | What is happening |
| --- | --- |
| IN_PROGRESS | Session created; sandbox is being provisioned |
| CLONING_REPO | Template repository is being cloned into the sandbox |
| INSTALLING_DEPENDENCIES | npm install / yarn is running |
| STARTING_DEV_SERVER | The dev server (e.g., expo start) is starting |
| CREATING_TUNNEL | A public tunnel URL is being created for preview |
| RUNNING | The app is live and the preview is available |
| CUSTOM | The AI agent is actively generating or editing code |
The CUSTOM status appears repeatedly throughout a session as the agent works through tasks. It does not mean the session has finished.
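Because the pipeline statuses are a fixed sequence and CUSTOM is out-of-band, a client can derive rough build progress from the status alone. This is a sketch under that assumption; the backend's actual representation may differ.

```typescript
// Build pipeline statuses in the order they occur, from the table above.
const BUILD_SEQUENCE = [
  "IN_PROGRESS",
  "CLONING_REPO",
  "INSTALLING_DEPENDENCIES",
  "STARTING_DEV_SERVER",
  "CREATING_TUNNEL",
  "RUNNING",
] as const;

type BuildStatus = (typeof BUILD_SEQUENCE)[number] | "CUSTOM";

// Fraction of the pipeline completed. CUSTOM means "agent working",
// not a pipeline stage, so it has no position.
function buildProgress(status: BuildStatus): number | null {
  if (status === "CUSTOM") return null;
  const i = BUILD_SEQUENCE.indexOf(status);
  return (i + 1) / BUILD_SEQUENCE.length;
}
```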

Iterating and refining

Once your session is RUNNING, you can send as many follow-up messages as you like. The agent reads your new instruction, modifies the code in the sandbox, and the preview updates automatically. Tips for effective iteration:
  • Be specific — “Make the button blue and rounded” beats “improve the design”
  • One change at a time — smaller, focused requests are easier for the agent to execute correctly
  • Reference what you see — describe what is on screen and what you want changed
  • Attach a screenshot — if you have a reference image or a mockup, attach it so the agent can match the design

Previewing on your phone

While the session is RUNNING, the native preview pane displays your app via its tunnel URL. For Expo React Native projects, this is a full Expo preview running inside the modified Expo Go environment. For web projects, the preview uses a WKWebView. You can interact with the preview normally — tap buttons, fill forms, scroll — just as if the app were installed on your device.
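The preview surface depends on the project type: Expo React Native projects get the Expo preview, while web projects render in a WKWebView. A minimal sketch of that decision, with template identifiers assumed for illustration:

```typescript
// Template identifiers here are assumptions for illustration; the
// surface selection rule itself is as described above.
type Template = "expo-react-native" | "nextjs" | "other-web";

function previewSurface(template: Template): "expo-go" | "wkwebview" {
  // Native Expo projects preview in the Expo Go environment;
  // everything else is a web app shown in a WKWebView.
  return template === "expo-react-native" ? "expo-go" : "wkwebview";
}
```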

Stopping the agent

To stop the AI agent mid-generation, tap the Stop button that appears in the bottom bar while the agent is running. This sets agentStopped: true on the session in Convex, which signals the backend to halt the current Inngest job.
Stopping the agent mid-run may leave the code in an incomplete state. You can always send a new message to ask the agent to finish or fix any incomplete work.
