The AI chatbot is a portfolio assistant built on Groq and the Vercel AI SDK. It runs entirely within the Next.js app, streams responses as plain text over HTTP, and answers questions about Roger's skills, projects, and experience using a system prompt generated from the portfolio data files and cached server-side.

Architecture

The feature is split across four layers:

| Layer | File | Responsibility |
| --- | --- | --- |
| Lazy loader | `components/AIDrawerLazy.tsx` | Defers bundle load until first click; prefetches on hover/focus |
| Chat UI | `components/AIDrawer.tsx` | Floating button, slide-in sidebar, message list, input field |
| API route | `app/api/chat/route.ts` | Validates request, retrieves cached context, streams from Groq |
| Context builder | `data/portfolio-context.ts` | Compiles portfolio data files into the system prompt string |

Request Flow

1. **User opens the drawer.** `AIDrawerLazy` renders a placeholder floating button. On the first click it sets `loaded = true`, which mounts the real `AIDrawer` component (with `initialOpen` set to `true`). While the user hovers over or focuses the button, the component chunk is prefetched.

2. **User sends a message.** `AIDrawer` calls `sendMessage({ text: input.trim() })` from the `useChat` hook (`@ai-sdk/react`). The hook is configured with a `TextStreamChatTransport` that targets `POST /api/chat`.

3. **`POST /api/chat` receives the request.** The route handler deserialises `{ messages: UIMessage[] }` from the request body, retrieves the cached portfolio context string (regenerated every 5 minutes), and calls `convertToModelMessages` to transform the UI message format into the model message format expected by `streamText`.

4. **Groq streams the response.** `streamText` sends the conversation to `llama-3.1-8b-instant` on Groq with the portfolio context as the system prompt. The result is returned as a plain-text stream via `result.toTextStreamResponse()`.

5. **UI renders the stream.** `TextStreamChatTransport` pipes the incoming text chunks into the `messages` state managed by `useChat`. The `MessageItem` component renders a blinking cursor on the last assistant message while `status` is `"streaming"`.
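Conceptually, the client side of the last step is just reading the plain-text response body chunk by chunk and appending each chunk to the assistant message. The sketch below is a simplification of what `TextStreamChatTransport` does internally; `readTextStream` and its callback are illustrative names, not SDK API:

```typescript
// Read a streaming plain-text HTTP response and surface each chunk
// to a callback as soon as it arrives.
async function readTextStream(
  res: Response,
  onChunk: (text: string) => void,
): Promise<void> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```

In the real transport the callback updates the `messages` state, which re-renders the message list on every chunk.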

Components

AIDrawerLazy

The public entry point. It avoids loading the full chat bundle until the user interacts with it.
```tsx
// components/AIDrawerLazy.tsx
"use client";

import { lazy, Suspense, useState } from "react";
import { MessageCircle } from "lucide-react";

const AIDrawer = lazy(() => import("@/components/AIDrawer"));

export default function AIDrawerLazy() {
  const [loaded, setLoaded] = useState(false);

  const handlePrefetch = () => {
    // Warm up the chunk before the real click
    import("@/components/AIDrawer");
  };

  if (!loaded) {
    return (
      <button
        onClick={() => setLoaded(true)}
        onMouseEnter={handlePrefetch}
        onFocus={handlePrefetch}
        // ...
      >
        <MessageCircle className="w-6 h-6" />
      </button>
    );
  }

  return (
    <Suspense fallback={null}>
      <AIDrawer initialOpen />
    </Suspense>
  );
}
```

AIDrawer

The full chat interface. Sub-components are co-located in the same file:

| Sub-component | Description |
| --- | --- |
| `FloatingButton` | Fixed-position button, bottom-right corner, uses the `--primary` color |
| `ChatHeader` | Drawer header with bot icon and close button |
| `MessageItem` | Renders a single user or assistant message; calls `formatMessageContent` |
| `ChatInput` | Textarea + send/stop button; Enter sends, Shift+Enter inserts a newline |
| `ErrorMessage` | Red left-border alert rendered when `useChat` returns an error |
| `Overlay` | Semi-transparent backdrop; click or Escape closes the drawer |
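The Enter/Shift+Enter rule in `ChatInput` boils down to a one-line predicate. `shouldSend` is a hypothetical helper written for illustration, not a name from the codebase:

```typescript
// Minimal shape of the keyboard event fields the rule depends on.
interface KeyLike {
  key: string;
  shiftKey: boolean;
}

// Enter sends; Shift+Enter falls through to the textarea's
// default behaviour, inserting a newline.
function shouldSend(e: KeyLike): boolean {
  return e.key === "Enter" && !e.shiftKey;
}
```

In the component this would guard the key handler, e.g. `if (shouldSend(e)) { e.preventDefault(); /* send */ }`.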
The `useChat` hook manages all message state:

```tsx
const { messages, sendMessage, stop, status, error } = useChat({
  transport: new TextStreamChatTransport({ api: "/api/chat" }),
  messages: INITIAL_MESSAGES,
});
```
The `status` value drives loading state:

```tsx
const isLoading = status === "submitted" || status === "streaming";
```

API Route

```ts
// app/api/chat/route.ts
import { convertToModelMessages, streamText } from "ai";
import { groq } from "@ai-sdk/groq";
import { generatePortfolioContext } from "@/data/portfolio-context";

export const maxDuration = 30;

export async function POST(request: Request) {
  const { messages } = await request.json();

  // Module-level cache, defined below (see "Context Caching")
  const portfolioContext = getPortfolioContext();

  const result = streamText({
    model: groq("llama-3.1-8b-instant"),
    system: portfolioContext,
    messages: convertToModelMessages(messages),
  });

  return result.toTextStreamResponse();
}
```
`convertToModelMessages` (from the `ai` package) transforms the `UIMessage[]` format — which uses a `parts[]` array — into the `ModelMessage[]` format that `streamText` expects, with a `content` field.
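A stripped-down sketch of that transformation, assuming text-only parts (the real `convertToModelMessages` handles many more part types and message fields; the type names here are simplified):

```typescript
// Simplified shapes: UI messages carry a parts[] array,
// model messages carry a flat content string.
type UIPart = { type: "text"; text: string };
type UIMsg = { role: "user" | "assistant"; parts: UIPart[] };
type ModelMsg = { role: "user" | "assistant"; content: string };

function toModelMessages(messages: UIMsg[]): ModelMsg[] {
  return messages.map((m) => ({
    role: m.role,
    // Concatenate all text parts into a single content string
    content: m.parts
      .filter((p) => p.type === "text")
      .map((p) => p.text)
      .join(""),
  }));
}
```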

Model

The chatbot uses Llama 3.1 8B Instant (llama-3.1-8b-instant) served by Groq. This model is optimised for low-latency inference, which makes it well suited for streaming chat.

Context Caching

Building the system prompt string is deterministic but involves iterating over the data arrays. The route caches the result in a module-level variable and invalidates it after 5 minutes:
```ts
let cachedContext: string | null = null;
let lastContextUpdate = 0;
const CONTEXT_CACHE_DURATION = 5 * 60 * 1000; // 5 minutes in ms

function getPortfolioContext() {
  const now = Date.now();
  if (!cachedContext || now - lastContextUpdate > CONTEXT_CACHE_DURATION) {
    cachedContext = generatePortfolioContext();
    lastContextUpdate = now;
  }
  return cachedContext;
}
```

Message Rendering

`formatMessageContent` (from `lib/chat-utils.tsx`) post-processes the assistant's text before rendering:

- Detects `https?://` URLs and wraps them in `<a>` tags with an `ExternalLink` icon
- Detects `**bold**` markers and renders them as `<strong>` elements
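A hypothetical tokenizer approximating those two rules (the real `formatMessageContent` returns JSX; this sketch only splits the text into typed tokens that a renderer could map to elements):

```typescript
type Token =
  | { kind: "text"; value: string }
  | { kind: "link"; href: string }
  | { kind: "bold"; value: string };

// One pass over the string: capture group 1 matches URLs,
// group 2 matches the inner text of **bold** spans.
function tokenize(input: string): Token[] {
  const pattern = /(https?:\/\/\S+)|\*\*([^*]+)\*\*/g;
  const tokens: Token[] = [];
  let last = 0;
  for (const m of Array.from(input.matchAll(pattern))) {
    const at = m.index!;
    if (at > last) tokens.push({ kind: "text", value: input.slice(last, at) });
    if (m[1]) tokens.push({ kind: "link", href: m[1] });
    else tokens.push({ kind: "bold", value: m[2] });
    last = at + m[0].length;
  }
  if (last < input.length) tokens.push({ kind: "text", value: input.slice(last) });
  return tokens;
}
```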

Error Handling

  • `useChat` exposes an `error` object. When it is non-null, `AIDrawer` renders the `ErrorMessage` component above the message list.
  • The user can abort an in-flight response at any time by clicking the stop button (■), which calls `stop()` from the `useChat` hook.
  • The route has a hard `maxDuration` of 30 seconds; Vercel will terminate the function after this time.

Conversation History

Every call to `POST /api/chat` includes the full message history in the request body. The model receives the entire conversation on each request — there is no server-side session state.
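As a concrete example, a second-turn request body carries both prior turns plus the new user message. The shape below follows the `UIMessage` format described above, simplified (ids and extra fields omitted; the message text is illustrative):

```typescript
// What the client resends on the second turn: the whole conversation,
// including the assistant's previous reply.
const body = {
  messages: [
    { role: "user", parts: [{ type: "text", text: "What does the portfolio cover?" }] },
    { role: "assistant", parts: [{ type: "text", text: "Skills, projects, and experience." }] },
    { role: "user", parts: [{ type: "text", text: "Tell me about the projects." }] },
  ],
};

// Sent as: fetch("/api/chat", { method: "POST", body: JSON.stringify(body) });
```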
