## Architecture

The feature is split across four layers:

| Layer | File | Responsibility |
|---|---|---|
| Lazy loader | `components/AIDrawerLazy.tsx` | Defers bundle load until first click; prefetches on hover/focus |
| Chat UI | `components/AIDrawer.tsx` | Floating button, slide-in sidebar, message list, input field |
| API route | `app/api/chat/route.ts` | Validates request, retrieves cached context, streams from Groq |
| Context builder | `data/portfolio-context.ts` | Compiles portfolio data files into the system prompt string |
## Request Flow
### User opens the drawer

`AIDrawerLazy` renders a placeholder floating button. On the first click it sets `loaded = true`, which mounts the real `AIDrawer` component (with `initialOpen` set to `true`). While the user hovers over or focuses the button, the component chunk is prefetched.

### User sends a message
`AIDrawer` calls `sendMessage({ text: input.trim() })` from the `useChat` hook (`@ai-sdk/react`). The hook is configured with a `TextStreamChatTransport` that targets `POST /api/chat`.

### POST /api/chat receives the request
The route handler deserialises `{ messages: UIMessage[] }` from the request body, retrieves the cached portfolio context string (regenerated every 5 minutes), and calls `convertToModelMessages` to transform the UI message format into the model message format expected by `streamText`.

### Groq streams the response
`streamText` sends the conversation to `llama-3.1-8b-instant` on Groq with the portfolio context as the system prompt. The result is returned as a plain-text stream via `result.toTextStreamResponse()`.

## Components
### AIDrawerLazy
The public entry point. It avoids loading the full chat bundle until the user interacts with it.
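The hover/focus prefetch boils down to memoising a dynamic `import()` promise so the chunk is requested at most once. A framework-free sketch of that idea (the `load` callback stands in for `() => import("./AIDrawer")`; the real component also manages the `loaded` state):

```typescript
// Prefetch-on-hover sketch: the first call starts the dynamic import and every
// later call (hover, focus, click) reuses the same promise, so the chat bundle
// downloads at most once.
let chunk: Promise<unknown> | null = null;

export function prefetchDrawer(load: () => Promise<unknown>): Promise<unknown> {
  chunk ??= load(); // e.g. load = () => import("./AIDrawer")
  return chunk;
}
```

Because the promise is cached at module level, wiring `prefetchDrawer` to both `onMouseEnter` and `onFocus` is safe: repeated events are no-ops once the download has started.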
### AIDrawer
The full chat interface. Sub-components are co-located in the same file:
| Sub-component | Description |
|---|---|
| `FloatingButton` | Fixed-position button, bottom-right corner, uses `--primary` color |
| `ChatHeader` | Drawer header with bot icon and close button |
| `MessageItem` | Renders a single user or assistant message; calls `formatMessageContent` |
| `ChatInput` | Textarea + send/stop button; Enter sends, Shift+Enter inserts newline |
| `ErrorMessage` | Red left-border alert rendered when `useChat` returns an error |
| `Overlay` | Semi-transparent backdrop; click or Escape closes the drawer |
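The Enter/Shift+Enter rule in `ChatInput` reduces to a small predicate. A sketch (the real handler lives inline in `AIDrawer.tsx` and is an assumption here):

```typescript
// Decide whether a keydown in the textarea should submit the message.
// Enter alone sends; Shift+Enter falls through and inserts a newline.
export function shouldSend(e: { key: string; shiftKey: boolean }): boolean {
  return e.key === "Enter" && !e.shiftKey;
}
```

In the component, this predicate would gate `e.preventDefault()` followed by the send call, leaving all other keys to the textarea's default behaviour.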
The `useChat` hook manages all message state, and its `status` value drives the loading state.
## API Route
`convertToModelMessages` (from the `ai` package) transforms the `UIMessage[]` format, which uses a `parts[]` array, into the `ModelMessage[]` format that `streamText` expects, which uses a `content` field.
## Model
The chatbot uses Llama 3.1 8B Instant (`llama-3.1-8b-instant`) served by Groq. This model is optimised for low-latency inference, which makes it well suited to streaming chat.
## Context Caching
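The pattern described in this section, a module-level variable with a 5-minute TTL, can be sketched as follows (the builder is stubbed here and all names are illustrative):

```typescript
const TTL_MS = 5 * 60 * 1000; // invalidate after 5 minutes

let cachedContext: string | null = null;
let cachedAt = 0;

// Stand-in for the real builder in data/portfolio-context.ts, which
// compiles the portfolio data files into the system prompt string.
function buildPortfolioContext(): string {
  return "portfolio system prompt";
}

export function getPortfolioContext(now = Date.now()): string {
  if (cachedContext === null || now - cachedAt > TTL_MS) {
    cachedContext = buildPortfolioContext();
    cachedAt = now;
  }
  return cachedContext;
}
```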
Building the system prompt string is deterministic but involves iterating over the data arrays, so the route caches the result in a module-level variable and invalidates it after 5 minutes.

## Message Rendering
`formatMessageContent` (from `lib/chat-utils.tsx`) post-processes the assistant's text before rendering:

- Detects `https?://` URLs and wraps them in `<a>` tags with an `ExternalLink` icon
- Detects `**bold**` markers and renders them as `<strong>` elements
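The detection logic can be sketched as a tokenizer that splits the text into typed segments; the real helper in `lib/chat-utils.tsx` maps such segments to JSX (`<a>`, `<strong>`), which is omitted here, and its actual implementation may differ:

```typescript
type Segment =
  | { kind: "text"; value: string }
  | { kind: "link"; value: string }
  | { kind: "bold"; value: string };

// Split assistant text into plain text, https?:// URLs and **bold** runs.
export function tokenize(text: string): Segment[] {
  const pattern = /(https?:\/\/\S+)|\*\*([^*]+)\*\*/g;
  const segments: Segment[] = [];
  let last = 0;
  for (const m of text.matchAll(pattern)) {
    if (m.index! > last) {
      segments.push({ kind: "text", value: text.slice(last, m.index!) });
    }
    if (m[1]) segments.push({ kind: "link", value: m[1] });
    else segments.push({ kind: "bold", value: m[2]! });
    last = m.index! + m[0].length;
  }
  if (last < text.length) segments.push({ kind: "text", value: text.slice(last) });
  return segments;
}
```

A renderer then walks the segments in order, emitting a link element for `link`, a strong element for `bold`, and raw text otherwise.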
## Error Handling
- `useChat` exposes an `error` object. When it is non-null, `AIDrawer` renders the `ErrorMessage` component above the message list.
- The user can abort an in-flight response at any time by clicking the stop button (■), which calls `stop()` from the `useChat` hook.
- The route has a hard `maxDuration` of 30 seconds; Vercel will terminate the function after this time.
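The 30-second limit is expressed with Vercel's standard route segment config export in `app/api/chat/route.ts`:

```typescript
// Vercel terminates the serverless function after this many seconds.
export const maxDuration = 30;
```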
## Conversation History
Every call to `POST /api/chat` includes the full message history in the request body. The model receives the entire conversation on each request; there is no server-side session state.
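Because the client resends everything, the payload grows with the conversation. A sketch of the body shape on a second user turn, using a simplified subset of the `UIMessage` format described earlier (field values are illustrative):

```typescript
// Simplified request body: the complete history so far, parts[] per message.
type ChatRequestBody = {
  messages: Array<{
    id: string;
    role: "user" | "assistant";
    parts: Array<{ type: "text"; text: string }>;
  }>;
};

const secondTurn: ChatRequestBody = {
  messages: [
    { id: "1", role: "user", parts: [{ type: "text", text: "Hi!" }] },
    { id: "2", role: "assistant", parts: [{ type: "text", text: "Hello! Ask me about the portfolio." }] },
    { id: "3", role: "user", parts: [{ type: "text", text: "What stack does it use?" }] },
  ],
};
```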