
Overview

The Pope Bot framework provides three chat-related exports:
  1. thepopebot/chat - React UI components
  2. thepopebot/chat/api - Streaming route handler
  3. thepopebot/chat/actions - Server actions for chat operations

thepopebot/chat

React components for chat interface.

Import

import { ChatInterface, MessageList, ChatInput } from 'thepopebot/chat';

Components

Exports from lib/chat/components/index.js:
  • ChatInterface - Full chat UI (message list + input)
  • MessageList - Display conversation history
  • ChatInput - User message input with file upload
  • MessageBubble - Individual message rendering
  • ToolCallDisplay - Show tool invocations
Usage:
import { ChatInterface } from 'thepopebot/chat';

export default function ChatPage() {
  return <ChatInterface chatId="thread-123" />;
}

thepopebot/chat/api

Streaming chat route handler with session authentication.

Import

import { POST } from 'thepopebot/chat/api';

Route Handler

  • POST (function) - Stream chat responses using the AI SDK’s createUIMessageStream
Usage in app/stream/chat/route.js:
import { POST } from 'thepopebot/chat/api';
export { POST };

How It Works

  1. Validates session via auth()
  2. Extracts user message from AI SDK v5 message format
  3. Processes file attachments (images, PDFs, text files)
  4. Calls chatStream() to invoke LLM
  5. Streams response chunks back to browser
  6. Saves messages to database
// Simplified excerpt of the handler; imports (auth, uuidv4, chatStream,
// createUIMessageStreamResponse) and most error handling are omitted.
export async function POST(request) {
  const session = await auth();
  if (!session?.user?.id) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const body = await request.json();
  const { messages, chatId: rawChatId, trigger, codeMode, repo, branch, workspaceId } = body;

  // Extract the last user message; bail out if there is none
  const lastUserMessage = [...messages].reverse().find((m) => m.role === 'user');
  if (!lastUserMessage) {
    return Response.json({ error: 'No user message found' }, { status: 400 });
  }

  // Process text and file parts
  let userText = lastUserMessage.parts
    ?.filter((p) => p.type === 'text')
    .map((p) => p.text)
    .join('\n') || lastUserMessage.content || '';

  const fileParts = lastUserMessage.parts?.filter((p) => p.type === 'file') || [];
  const attachments = [];

  for (const part of fileParts) {
    const { mediaType, url } = part;
    if (mediaType.startsWith('image/') || mediaType === 'application/pdf') {
      attachments.push({ category: 'image', mimeType: mediaType, dataUrl: url });
    } else if (mediaType.startsWith('text/')) {
      const base64Data = url.split(',')[1];
      const textContent = Buffer.from(base64Data, 'base64').toString('utf-8');
      userText += `\n\nFile: ${part.name}\n\`\`\`\n${textContent}\n\`\`\``;
    }
  }

  const threadId = rawChatId || uuidv4();
  const stream = chatStream(threadId, userText, attachments, {
    userId: session.user.id,
    skipUserPersist: trigger === 'regenerate-message',
  });

  // Stream to browser using AI SDK format
  return createUIMessageStreamResponse({ stream });
}

Request Body

  • messages (array, required) - AI SDK message array with role, parts, and content fields
  • chatId (string) - Thread ID (generated if not provided)
  • trigger (string) - Action trigger: regenerate-message skips user message persistence
  • codeMode (boolean) - Enable code workspace context
  • repo (string) - Repository name for code mode
  • branch (string) - Branch name for code mode
  • workspaceId (string) - Code workspace ID
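A client call might assemble the request body like this (the field values are illustrative; only messages is required):

```javascript
// Assemble a request body for the streaming chat route.
// All fields except `messages` are optional; values here are examples.
const body = {
  messages: [
    {
      role: 'user',
      parts: [{ type: 'text', text: 'Summarize the latest changes' }],
    },
  ],
  chatId: 'thread-123',      // omit to let the server generate a thread ID
  trigger: 'submit-message', // 'regenerate-message' skips user-message persistence
  codeMode: false,
};

const payload = JSON.stringify(body);
console.log(payload.includes('"role":"user"')); // true
```

POST this payload to /stream/chat with fetch and a Content-Type: application/json header; the browser sends the session cookie automatically.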

File Attachments

Supports:
  • Images (image/*) - Passed to LLM for vision analysis
  • PDFs (application/pdf) - Passed to LLM for document analysis
  • Text files (text/*, application/json) - Decoded and inlined into message
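The text-file path can be sketched outside the handler: a file part carries a data URL, whose base64 payload is decoded and inlined into the user message (the sample part below is made up for illustration):

```javascript
// A `file` part as produced by the AI SDK, with a base64 data URL.
const filePart = {
  type: 'file',
  name: 'notes.txt',
  mediaType: 'text/plain',
  url: 'data:text/plain;base64,' + Buffer.from('hello world').toString('base64'),
};

// Same decoding steps the handler uses: split off the base64 payload,
// decode it, and inline it into the message as a fenced block.
const base64Data = filePart.url.split(',')[1];
const textContent = Buffer.from(base64Data, 'base64').toString('utf-8');
const inlined = `\n\nFile: ${filePart.name}\n\`\`\`\n${textContent}\n\`\`\``;
console.log(textContent); // "hello world"
```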

Streaming Format

Uses AI SDK v5 streaming protocol:
// Text chunks
{ type: 'text-start', id: string }
{ type: 'text-delta', id: string, delta: string }
{ type: 'text-end', id: string }

// Tool calls
{ type: 'tool-input-start', toolCallId: string, toolName: string }
{ type: 'tool-input-available', toolCallId: string, toolName: string, input: object }
{ type: 'tool-output-available', toolCallId: string, output: any }

// Message boundaries
{ type: 'start' }
{ type: 'finish' }
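A minimal reducer over these chunk objects (independent of the wire transport, which the AI SDK handles) could accumulate the streamed text like this; the sample chunk sequence is illustrative:

```javascript
// Fold a sequence of UI message chunks into plain text, keyed by text-part id.
function reduceText(chunks) {
  const parts = new Map();
  for (const chunk of chunks) {
    if (chunk.type === 'text-start') parts.set(chunk.id, '');
    if (chunk.type === 'text-delta') {
      parts.set(chunk.id, (parts.get(chunk.id) ?? '') + chunk.delta);
    }
  }
  return [...parts.values()].join('');
}

const sample = [
  { type: 'start' },
  { type: 'text-start', id: 't1' },
  { type: 'text-delta', id: 't1', delta: 'Hello, ' },
  { type: 'text-delta', id: 't1', delta: 'world' },
  { type: 'text-end', id: 't1' },
  { type: 'finish' },
];
console.log(reduceText(sample)); // "Hello, world"
```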

thepopebot/chat/actions

Server actions for chat operations.

Import

import { getChatHistory, deleteChat, updateChatTitle } from 'thepopebot/chat/actions';

Functions

  • getChatHistory (function) - Fetch user’s chat threads with metadata
  • deleteChat (function) - Delete a chat thread and all its messages
  • updateChatTitle (function) - Rename a chat thread
Usage:
'use server';
import { getChatHistory } from 'thepopebot/chat/actions';

export async function loadChats() {
  const chats = await getChatHistory();
  return chats.map(c => ({ id: c.id, title: c.title }));
}

Authentication

The chat API uses session auth, not API keys. The /stream/chat route is separate from the /api/* endpoints: it validates the authjs.session-token cookie, not the x-api-key header.

Environment Variables

  • LLM_PROVIDER (string) - LLM provider: anthropic, openai, google, custom (default: anthropic)
  • LLM_MODEL (string) - Model name (default: claude-sonnet-4-20250514)
  • LLM_MAX_TOKENS (number) - Max output tokens (default: 4096)
  • ANTHROPIC_API_KEY (string) - Required for the anthropic provider
  • OPENAI_API_KEY (string) - Required for the openai provider
  • GOOGLE_API_KEY (string) - Required for the google provider
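A .env for the default Anthropic setup might look like this (all values are placeholders; only the key for the chosen provider is needed):

```shell
# Example .env for the default Anthropic provider
LLM_PROVIDER=anthropic
LLM_MODEL=claude-sonnet-4-20250514
LLM_MAX_TOKENS=4096
ANTHROPIC_API_KEY=sk-ant-placeholder
```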
