Overview
The chat assistant is powered by OpenAI’s GPT models and provides:
- Tool discovery and recommendations from the curated guide
- Side-by-side comparisons (e.g., “Compare Cursor and Replit”)
- Quick answers about specific tools and their use cases
- General hackathon tips and best practices
How It Works
The chat implementation spans three layers:
Architecture
- Frontend (src/components/ChatPanel.tsx) - React modal with markdown rendering
- Shared Logic (shared/chatPolicy.ts) - Message handling, OpenAI API calls, and tool ranking
- Backend (server/chat.ts) - Vercel serverless function endpoint
chatPolicy.ts contains all business logic, ensuring identical behavior in both development (via Vite middleware) and production (via serverless function).
Chat Flow
- User sends a message through the chat UI
- ChatPanel posts to /api/chat with message history
- Backend validates messages and sanitizes input
- System prompt is constructed based on mode and context
- OpenAI API is called with conversation history
- Response is returned and rendered as markdown
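The client side of the flow above can be sketched roughly as follows. The request and response field names here are assumptions, not the actual contract:

```typescript
// Hedged sketch of step 2: a ChatPanel-style POST to /api/chat.
// Body and response field names are assumptions, not the real contract.
interface Message {
  role: "user" | "assistant";
  content: string;
}

async function sendChat(history: Message[]): Promise<string> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: history }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  const data = await res.json();
  return data.reply; // assumed response field; the UI renders it as markdown
}
```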
Using the Chat
Opening the Chat
Click the chat icon in the sidebar or use any “Ask AI” button throughout the guide. The chat opens as a modal overlay.
Example Queries
Chat Features
- Markdown Support - Responses include formatting, lists, and links
- Multi-turn Conversations - Context is maintained across messages
- Auto-scroll - Automatically scrolls to new messages
- Keyboard Support - Press Enter to send, Shift+Enter for new line
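The Enter / Shift+Enter behavior can be captured in a tiny helper; the function name here is hypothetical, not from the codebase:

```typescript
// Sketch of the keyboard behavior described above (helper name is hypothetical).
// Enter sends the message; Shift+Enter inserts a newline instead.
function shouldSendOnKey(key: string, shiftKey: boolean): boolean {
  return key === "Enter" && !shiftKey;
}
```

In a React component, a `keydown` handler would call this and, when it returns true, prevent the default newline and submit the message.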
Setup Instructions
Get an OpenAI API Key
Create an API key at platform.openai.com/api-keys
Configure Environment Variable
For local development, copy .env.example to .env and add your key. Alternatively, export it in your shell.
The chat requires OPENAI_API_KEY to be configured. Without it, you’ll see the error: “OPENAI_API_KEY is not configured”.
Production Deployment
When deploying to Vercel:
Set Environment Variable
Go to your Vercel project settings: Project Settings → Environment Variables. Add OPENAI_API_KEY with your API key as the value.
Make sure to set the environment variable for all environments (Production, Preview, Development) where you want the chat to work.
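The local setup described under Setup Instructions might look like this; the key value is a placeholder:

```shell
# Local development (sketch): copy the template and append your key.
# "sk-..." is a placeholder, not a real key.
cp .env.example .env 2>/dev/null || true
echo 'OPENAI_API_KEY=sk-...' >> .env

# Or export the key for the current shell session only
export OPENAI_API_KEY=sk-...
```

For Vercel deployments, the same variable can also be added from the CLI with `vercel env add OPENAI_API_KEY`, which prompts for the value and target environments.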
Implementation Details
Message Validation
All messages go through sanitization (shared/chatPolicy.ts:231-244):
- Only valid message objects are processed
- Content is properly trimmed
- Empty messages are filtered out
- Only ‘user’ and ‘assistant’ roles are allowed
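The validation rules above could be implemented along these lines. This is a hedged sketch, not the actual code in shared/chatPolicy.ts:231-244:

```typescript
// Hypothetical sketch of the sanitization rules described above.
// The real implementation lives in shared/chatPolicy.ts:231-244.
type Role = "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

function sanitizeMessages(input: unknown): ChatMessage[] {
  if (!Array.isArray(input)) return [];
  const out: ChatMessage[] = [];
  for (const item of input) {
    // Only valid message objects are processed
    if (typeof item !== "object" || item === null) continue;
    const { role, content } = item as { role?: unknown; content?: unknown };
    // Only 'user' and 'assistant' roles are allowed
    if (role !== "user" && role !== "assistant") continue;
    if (typeof content !== "string") continue;
    // Content is trimmed; empty messages are filtered out
    const trimmed = content.trim();
    if (trimmed.length === 0) continue;
    out.push({ role, content: trimmed });
  }
  return out;
}
```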
System Prompt
The default system prompt is defined in shared/chatPolicy.ts:70-71.
Model Selection
The chat uses different models based on the mode:
- Default mode: gpt-5.2 for general questions
- Suggest-stack mode: gpt-5.2 with JSON response format
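The mode-based selection can be sketched as a small options builder; the function and type names are assumptions, not from the codebase:

```typescript
// Hypothetical sketch of mode-based request options (names are assumptions).
type ChatMode = "default" | "suggest-stack";

function buildCompletionOptions(mode: ChatMode) {
  return {
    model: "gpt-5.2", // both modes use the same model per the docs above
    // Suggest-stack mode additionally requests a JSON-formatted response
    ...(mode === "suggest-stack"
      ? { response_format: { type: "json_object" as const } }
      : {}),
  };
}
```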
Tool Context
You can provide tool-specific context when opening the chat (src/components/ChatPanel.tsx:16).
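A plausible shape for that prop might look like this; the interface is hypothetical, and the real declaration at src/components/ChatPanel.tsx:16 may differ:

```typescript
// Hypothetical shape of the tool-context prop (the real interface is
// declared at src/components/ChatPanel.tsx:16 and may differ).
interface ToolContext {
  name: string;      // e.g. "Cursor"
  category?: string; // e.g. "AI code editor"
}

interface ChatPanelProps {
  toolContext?: ToolContext;
}

// Example: opening the chat pre-scoped to a specific tool
const props: ChatPanelProps = {
  toolContext: { name: "Cursor", category: "AI code editor" },
};
```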
Error Handling
The chat handles several error scenarios:
- Missing API Key: Returns 500 with “OPENAI_API_KEY is not configured”
- Invalid Messages: Returns 400 with “messages array is required” or “must contain user/assistant messages”
- OpenAI API Errors: Returns the API error status and message
- Network Errors: Shows “Network error. Please try again.” in the UI
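The server-side branches above might be organized like this. This is a hedged sketch, not the actual handler in server/chat.ts:

```typescript
// Hedged sketch of the error branches described above; the real handler
// lives in server/chat.ts and may differ in detail.
interface ChatError {
  status: number;
  message: string;
}

function classifyRequest(
  apiKey: string | undefined,
  messages: unknown,
): ChatError | null {
  if (!apiKey) {
    return { status: 500, message: "OPENAI_API_KEY is not configured" };
  }
  if (!Array.isArray(messages)) {
    return { status: 400, message: "messages array is required" };
  }
  // At least one user/assistant message must be present
  const hasValid = messages.some(
    (m) =>
      m !== null &&
      typeof m === "object" &&
      ["user", "assistant"].includes((m as { role?: string }).role ?? ""),
  );
  if (!hasValid) {
    return { status: 400, message: "must contain user/assistant messages" };
  }
  return null; // request looks valid; proceed to call OpenAI
}
```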
API Endpoint
Request Format
Response Format
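The original payload examples are not reproduced here. As a hedged sketch, the request and response shapes might look like this, based on the flow described above (field names are assumptions):

```typescript
// Hypothetical request/response shapes for POST /api/chat.
// Field names are assumptions based on the flow described in this doc.
interface ChatRequest {
  messages: { role: "user" | "assistant"; content: string }[];
  mode?: "default" | "suggest-stack";
}

interface ChatResponse {
  reply: string; // assistant message, rendered as markdown in the UI
}

// Example request body
const example: ChatRequest = {
  messages: [{ role: "user", content: "Compare Cursor and Replit" }],
  mode: "default",
};
```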
Related
- Tech Stack Recommendations - Learn about the suggest-stack mode