This guide walks you through creating a Tambo app from scratch. You’ll have a working chat interface with AI-powered component rendering in under 5 minutes.
Prerequisites
- Node.js 22 or higher
- npm 11 or higher
- A code editor
Create a new project
Create your app
Use the Tambo app generator to create a new project:

```bash
npm create tambo-app my-tambo-app
```
This creates a new directory with a Next.js app pre-configured for Tambo.
Initialize Tambo
Run the initialization command. You'll be prompted to choose between:

- Tambo Cloud (hosted, free to start)
- Self-hosted (run on your infrastructure)
The CLI will guide you through getting an API key.
The template includes a working chat interface. Try asking “Show me a graph” or “Create a form” to see generative UI in action.
Add to existing project
If you have an existing Next.js app:
Install Tambo
```bash
npm install @tambo-ai/react zod
```
Initialize Tambo
This sets up your API key and configuration.
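The initialization step stores your API key in your project's configuration. If you need to set it by hand, a `.env.local` file along these lines should work (this is a sketch: the variable name matches the `NEXT_PUBLIC_TAMBO_API_KEY` referenced in the provider below, and the value is a placeholder, not a real key):

```bash
# .env.local — loaded automatically by Next.js
# Placeholder value; substitute the API key from the CLI setup.
NEXT_PUBLIC_TAMBO_API_KEY=your-api-key-here
```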
Wrap your app with TamboProvider
Update your root layout:

```tsx
"use client";

import { TamboProvider } from "@tambo-ai/react";

export default function RootLayout({ children }) {
  return (
    <html lang="en">
      <body>
        <TamboProvider
          apiKey={process.env.NEXT_PUBLIC_TAMBO_API_KEY}
          userKey="demo-user" // Replace with actual user ID
        >
          {children}
        </TamboProvider>
      </body>
    </html>
  );
}
```
You must provide either `userKey` or `userToken` to identify the thread owner. See User Authentication for details.
Build your chat interface
Create a simple chat component:

```tsx
"use client";

import { useTambo, useTamboThreadInput } from "@tambo-ai/react";

export default function ChatPage() {
  const { messages, isStreaming } = useTambo();
  const { value, setValue, submit, isPending } = useTamboThreadInput();

  return (
    <main className="flex flex-col h-screen p-4">
      <div className="flex-1 overflow-y-auto space-y-4">
        {messages.map((msg) => (
          <div key={msg.id} className="p-4 rounded-lg bg-gray-100">
            {msg.content.map((content, i) => {
              if (content.type === "text") {
                return <p key={i}>{content.text}</p>;
              }
              if (content.type === "component") {
                return <div key={i}>Component: {content.componentName}</div>;
              }
              return null;
            })}
          </div>
        ))}
        {isStreaming && <div>AI is thinking...</div>}
      </div>
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          await submit();
        }}
        className="flex gap-2 mt-4"
      >
        <input
          type="text"
          value={value}
          onChange={(e) => setValue(e.target.value)}
          placeholder="Type a message..."
          className="flex-1 p-2 border rounded"
        />
        <button
          type="submit"
          disabled={isPending}
          className="px-4 py-2 bg-blue-500 text-white rounded disabled:opacity-50"
        >
          Send
        </button>
      </form>
    </main>
  );
}
```
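The render loop above branches on each content part's `type`. That branching can be sketched as a plain function, stripped of JSX (the `ContentPart` shape below is an illustrative assumption mirroring the `text` and `component` cases the component handles, not the library's actual type):

```typescript
// Hypothetical content-part shape mirroring the chat component's branches.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "component"; componentName: string };

// Render each part to a string, matching the JSX logic: text parts
// yield their text, component parts a placeholder label, anything
// else is skipped (returns null).
function renderPart(part: ContentPart): string | null {
  if (part.type === "text") return part.text;
  if (part.type === "component") return `Component: ${part.componentName}`;
  return null;
}

const parts: ContentPart[] = [
  { type: "text", text: "Here is the weather:" },
  { type: "component", componentName: "WeatherCard" },
];

console.log(parts.map(renderPart).filter((s) => s !== null));
// → [ 'Here is the weather:', 'Component: WeatherCard' ]
```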
Register your first component
Let’s register a simple component so the AI can render it:
"use client" ;
import { TamboProvider } from "@tambo-ai/react" ;
import { z } from "zod" ;
// Define a simple Weather component
function WeatherCard ({ location , temperature , condition }) {
return (
< div className = "p-4 border rounded-lg" >
< h3 className = "font-bold" > { location } </ h3 >
< p className = "text-2xl" > { temperature } °C </ p >
< p className = "text-gray-600" > { condition } </ p >
</ div >
);
}
// Register the component
const components = [
{
name: "WeatherCard" ,
description: "Displays weather information for a location" ,
component: WeatherCard ,
propsSchema: z . object ({
location: z . string (),
temperature: z . number (),
condition: z . string (),
}),
},
];
export default function RootLayout ({ children }) {
return (
< html lang = "en" >
< body >
< TamboProvider
apiKey = { process . env . NEXT_PUBLIC_TAMBO_API_KEY }
userKey = "demo-user"
components = { components }
>
{ children }
</ TamboProvider >
</ body >
</ html >
);
}
Now try asking: “Show me the weather in San Francisco”
Add pre-built components
Tambo includes a library of pre-built components:
```bash
npx tambo add form
npx tambo add graph
npx tambo add message-thread-full
```
See the component library for the full list.
What you built
You now have:
- A working chat interface
- A registered component the AI can use
- Streaming responses from the LLM
- Thread persistence (messages are saved)
Next steps
- Core concepts: Learn about generative and interactable components
- Register tools: Add functions the AI can call
- MCP integration: Connect to external services
- Component library: Browse pre-built components