
Groq API Key

The chatbot requires a Groq API key to call the inference API. Without it the route will return an authentication error.
1. Create a Groq account

Go to console.groq.com and sign up or log in.
2. Generate an API key

Navigate to API Keys in the left sidebar and click Create API Key. Copy the key — it is only shown once.
3. Add the key to your environment

Create or edit .env.local in the project root:
# .env.local
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
The key is read in app/api/chat/route.ts:
const groq = createGroq({
  apiKey: process.env.GROQ_API_KEY,
});
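If the key is missing, the failure only surfaces when the first chat request hits the route. A small guard can make the misconfiguration fail fast at module load instead; this is a sketch with a hypothetical `requireEnv` helper, not something the project ships:

```typescript
// Hypothetical helper: read a required environment variable or fail loudly.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Sketch of how it could be used in app/api/chat/route.ts:
// const groq = createGroq({ apiKey: requireEnv("GROQ_API_KEY") });
```

The benefit is an immediate, descriptive error at startup rather than an opaque authentication failure mid-request.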
4. Restart the dev server

Next.js only reads .env.local at startup. Run pnpm dev (or npm run dev) after adding the key.
Never commit .env.local to version control. The project’s .gitignore should already exclude it. If you are deploying to Vercel, add GROQ_API_KEY as an environment variable in the project settings instead.

Changing the AI Model

The model is specified in app/api/chat/route.ts:
const result = streamText({
  model: groq("llama-3.1-8b-instant"), // <-- change this string
  system: portfolioContext,
  messages: await convertToModelMessages(messages),
});
Replace "llama-3.1-8b-instant" with any model ID available on Groq. Common options:
Model ID                | Strengths                          | Trade-offs
llama-3.1-8b-instant    | Very fast, low latency             | Smaller context window, less nuanced
llama-3.3-70b-versatile | Higher quality, stronger reasoning | Slower, higher token cost
mixtral-8x7b-32768      | Large 32k context window           | Moderate speed
gemma2-9b-it            | Efficient, instruction-tuned       | Less capable than 70B models
Check the Groq model documentation for the current list of available models and their context limits.
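If you switch models often, one option (an assumption for illustration, not something the project does) is to read the model ID from an environment variable such as `GROQ_MODEL`, falling back to the fast default:

```typescript
// Hypothetical: resolve the Groq model ID from an environment variable,
// falling back to the project's fast default when unset or blank.
const DEFAULT_MODEL = "llama-3.1-8b-instant";

function resolveModel(envValue: string | undefined): string {
  const trimmed = envValue?.trim();
  return trimmed ? trimmed : DEFAULT_MODEL;
}

// Sketch of use in app/api/chat/route.ts:
// model: groq(resolveModel(process.env.GROQ_MODEL)),
```

This lets you swap models per deployment (e.g. a stronger model in production) without editing the route file.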

Adjusting Model Parameters

The streamText call in app/api/chat/route.ts is where you tune model behaviour. The Vercel AI SDK accepts all standard parameters alongside the required fields:
// app/api/chat/route.ts
const result = streamText({
  model: groq("llama-3.1-8b-instant"),
  system: portfolioContext,
  messages: await convertToModelMessages(messages),
  // Optional parameters — add as needed:
  // temperature: 0.7,
  // maxOutputTokens: 512, // renamed from maxTokens in AI SDK v5+
});
maxDuration controls the maximum wall-clock time (in seconds) the serverless function is allowed to run before Vercel terminates it:
// app/api/chat/route.ts
export const maxDuration = 30; // seconds
Increase this value if you switch to a slower model or expect longer responses. The maximum value depends on your Vercel plan.

Context cache duration controls how often the system prompt is rebuilt from the data files:
const CONTEXT_CACHE_DURATION = 5 * 60 * 1000; // 5 minutes in ms
During development you can set this to 0 to force a rebuild on every request, which is useful when iterating on portfolio-context.ts.
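The cache logic amounts to comparing a timestamp against the duration. A minimal standalone sketch of that pattern (the actual implementation in portfolio-context.ts may differ; the `now` parameter is injected here purely for testability):

```typescript
// Minimal time-based cache: rebuild the value once the cached copy is
// at least `durationMs` old. With durationMs = 0, every call rebuilds.
function createCachedBuilder<T>(build: () => T, durationMs: number) {
  let cached: T | undefined;
  let builtAt = -Infinity;
  return (now: number = Date.now()): T => {
    if (cached === undefined || now - builtAt >= durationMs) {
      cached = build();
      builtAt = now;
    }
    return cached;
  };
}

// Example: rebuild a system prompt at most every 5 minutes.
// const getContext = createCachedBuilder(buildPortfolioContext, 5 * 60 * 1000);
```

Note that `now - builtAt >= durationMs` is why a duration of 0 forces a rebuild on every request.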

Customising the UI

initialOpen prop

AIDrawer accepts one prop:
initialOpen (boolean, default: false)
When true, the chat sidebar opens immediately when the component mounts. AIDrawerLazy passes initialOpen={true} after the user’s first click so the drawer opens as soon as the bundle loads.

Colors

The chat UI uses CSS custom properties defined in your Tailwind theme. The primary interactive color (bg-primary, text-primary, border-primary) is referenced throughout:
  • Floating button background: bg-primary
  • Chat header background: bg-primary
  • User message bubble: bg-primary
  • Send button: bg-primary
  • Input focus ring: focus:border-primary
  • Streaming cursor: bg-primary/40
  • The pulsing dot on the floating button uses bg-accent
Change the --primary and --accent CSS variables in your global stylesheet to retheme the entire chat UI at once.

Floating button placement

The floating button is rendered by the FloatingButton sub-component inside AIDrawer, and by the placeholder button inside AIDrawerLazy (shown before the bundle loads). Both use the same Tailwind classes:
className="fixed bottom-8 right-8 z-40 ..."
Adjust bottom-8 and right-8 to reposition the button.

Where the component is mounted

AIDrawerLazy is the component you should place in your layout. It is a client component ("use client") that wraps the full AIDrawer behind a dynamic import with ssr: false:
const AIDrawer = dynamic(() => import("@/components/AIDrawer"), {
  ssr: false,
});

API Endpoint Reference

POST /api/chat

The single endpoint consumed by the chatbot UI.

Request
messages (UIMessage[], required)
The full conversation history, including the message just sent. Each entry is a UIMessage from the Vercel AI SDK (ai package v6). User messages use role: "user" and assistant messages use role: "assistant". Each message carries a parts array rather than a plain content string.
{
  "messages": [
    {
      "id": "welcome",
      "role": "assistant",
      "parts": [{ "type": "text", "text": "Hello! How can I help?" }]
    },
    {
      "id": "msg-1",
      "role": "user",
      "parts": [{ "type": "text", "text": "What projects has Roger built?" }]
    }
  ]
}
Response
body (text/plain, streamed)
A plain-text stream of the assistant’s response tokens. The Content-Type header is text/plain; charset=utf-8. The client reads this stream via TextStreamChatTransport.
Notes
  • Authentication: None. The endpoint relies on same-origin browser requests; there is no API key check on incoming requests.
  • Timeout: 30 seconds (maxDuration = 30). Vercel terminates the function after this limit.
  • Context: The system prompt is injected server-side and is not part of the request body.
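Outside of TextStreamChatTransport, consuming the text/plain response is a standard ReadableStream loop. A sketch, assuming a plain fetch to the endpoint:

```typescript
// Collect a text/plain response stream into a single string.
// Works with any ReadableStream<Uint8Array>, e.g. `response.body` from fetch.
async function collectTextStream(
  stream: ReadableStream<Uint8Array>
): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let result = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters intact across chunk boundaries.
    result += decoder.decode(value, { stream: true });
  }
  return result + decoder.decode(); // flush any buffered trailing bytes
}

// Usage sketch:
// const res = await fetch("/api/chat", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ messages }),
// });
// const text = await collectTextStream(res.body!);
```

In the real UI you would render chunks as they arrive rather than waiting for the full string, but the read loop is the same.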
