
Overview

PolyChat-AI features a modern, intuitive chat interface designed for seamless interaction with AI language models. The interface supports multiple simultaneous conversations, message history management, and advanced features like response regeneration.

Key Features

Multi-Window Support

Handle multiple conversations simultaneously without losing context:
  • Independent Chat Windows: Each conversation maintains its own context and settings
  • Quick Switching: Jump between open conversations instantly without losing your place
  • Session Persistence: All conversations are saved locally and persist across sessions
  • Model Per Conversation: Each chat can use a different AI model
Use Ctrl/Cmd + N to quickly start a new conversation while keeping your existing chats active.

Message History Management

Intelligent History
  • Save and manage conversations with automatic timestamp tracking
  • Search functionality to find specific conversations or messages
  • Export conversations for documentation or backup
  • Delete or archive old conversations to maintain organization
Local Storage
// All conversations are stored locally in your browser
// Location: localStorage under 'PolyChat-AI' namespace
// Format: JSON with message content, timestamps, and metadata

Message Controls

Inline Model Information

Every assistant response displays which model generated it:
  • Model name and version
  • Response generation time
  • Character count for tracking usage
Response Regeneration

Not satisfied with a response? Regenerate it with one click:
  • Uses the same model and system prompt
  • Maintains conversation context
  • Previous response is replaced (not deleted)
  • No additional setup required
interface Message {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string | MessageContent[];
  timestamp: Date;
  model?: string; // Which model generated this response
}
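The regeneration flow can be sketched against this interface. The helper names below are illustrative, not the app's actual API, and content is simplified to a plain string:

```typescript
interface Message {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  timestamp: Date;
  model?: string;
}

// Context to resend: everything up to (but not including) the trailing
// assistant response being regenerated.
function contextForRegeneration(messages: Message[]): Message[] {
  const last = messages[messages.length - 1];
  return last && last.role === 'assistant' ? messages.slice(0, -1) : messages;
}

// Swap the old response for the new one in place, keeping its id so the
// message is replaced rather than deleted.
function replaceLastAssistant(messages: Message[], newContent: string, model: string): Message[] {
  const last = messages[messages.length - 1];
  if (!last || last.role !== 'assistant') return messages;
  const replaced: Message = { ...last, content: newContent, model, timestamp: new Date() };
  return [...messages.slice(0, -1), replaced];
}
```

Keeping the original message id is what makes this a replacement rather than a delete-and-append.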

Real-time Features

Streaming Responses

Experience fluid AI responses with real-time streaming:
// From: src/services/openRouter.ts
export async function streamAIResponse(
  messages: Message[],
  apiKey: string,
  model: string,
  onChunk: (delta: string) => void,
  systemPrompt?: string,
  abortController?: AbortController
): Promise<string | MessageContent[]>
Benefits:
  • See responses as they’re generated (character by character)
  • Cancel mid-response if needed with abort controller
  • Live character count and loading animations
  • Automatic fallback to non-streaming if connection fails
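A consumer of this signature typically appends each delta to the visible message and wires an AbortController for cancellation. The stub below stands in for the real network call so the pattern is self-contained:

```typescript
type OnChunk = (delta: string) => void;

// Stub standing in for the real streamAIResponse network call so the
// consumer pattern is runnable on its own.
async function fakeStream(
  chunks: string[],
  onChunk: OnChunk,
  signal?: AbortSignal,
): Promise<string> {
  let full = '';
  for (const chunk of chunks) {
    if (signal?.aborted) break; // cancel mid-response
    onChunk(chunk);             // caller renders each delta as it arrives
    full += chunk;
  }
  return full;
}

// Typical consumer: accumulate deltas into the visible message and keep
// an AbortController handy for a cancel button.
async function consume(chunks: string[]): Promise<{ visible: string; final: string }> {
  const controller = new AbortController();
  let visible = '';
  const final = await fakeStream(chunks, d => { visible += d; }, controller.signal);
  return { visible, final };
}
```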

Loading States

Visual Feedback
  • Typing indicators while AI is generating
  • Loading animations during model processing
  • Character count updates in real-time
  • Progress indication for long responses

Interface Customization

Themes

Choose from 2 distinct visual themes:
Theme         Description
Dark Mode     Elegant interface with dark background
Light Mode    Clean, modern light interface

Accent Colors

Personalize with 8 accent color options:
  • Violet, Blue, Green, Rose
  • Orange, Teal, Red, Cyan
Theme and color preferences are saved locally and persist across sessions.
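A minimal sketch of how such preferences might be persisted; the storage key and field names are assumptions, not the app's actual schema:

```typescript
type Accent = 'violet' | 'blue' | 'green' | 'rose' | 'orange' | 'teal' | 'red' | 'cyan';

// Hypothetical shape for persisted UI preferences.
interface UiPrefs {
  theme: 'dark' | 'light';
  accent: Accent;
}

const PREFS_KEY = 'PolyChat-AI:prefs'; // hypothetical localStorage key

function serializePrefs(prefs: UiPrefs): string {
  return JSON.stringify(prefs);
}

// Fall back to defaults when nothing has been stored yet.
function parsePrefs(raw: string | null): UiPrefs {
  return raw ? (JSON.parse(raw) as UiPrefs) : { theme: 'dark', accent: 'violet' };
}

// Usage in the browser:
// localStorage.setItem(PREFS_KEY, serializePrefs(prefs));
// const prefs = parsePrefs(localStorage.getItem(PREFS_KEY));
```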

Keyboard Shortcuts

Boost your productivity with these shortcuts:
Shortcut        Action
Ctrl/Cmd + N    New conversation
Ctrl/Cmd + S    Save conversation
Ctrl/Cmd + K    Open settings
Ctrl/Cmd + U    Usage dashboard
Ctrl/Cmd + /    Show help
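A dispatcher for the table above might look like this; the action names are illustrative:

```typescript
type Action =
  | 'newConversation'
  | 'saveConversation'
  | 'openSettings'
  | 'usageDashboard'
  | 'showHelp';

// Key-to-action map mirroring the shortcut table.
const SHORTCUTS: Record<string, Action> = {
  n: 'newConversation',
  s: 'saveConversation',
  k: 'openSettings',
  u: 'usageDashboard',
  '/': 'showHelp',
};

// Accepts Ctrl (Windows/Linux) or Cmd via metaKey (macOS).
function resolveShortcut(e: { key: string; ctrlKey: boolean; metaKey: boolean }): Action | undefined {
  if (!e.ctrlKey && !e.metaKey) return undefined;
  return SHORTCUTS[e.key.toLowerCase()];
}

// Wire-up sketch:
// document.addEventListener('keydown', e => { const action = resolveShortcut(e); /* dispatch */ });
```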

Advanced Features

System Instructions

Customize AI behavior with custom system prompts:
Settings → Advanced → System Instructions
Configuration Options:
  • Global system prompt applied to all conversations
  • Per-conversation system prompts via templates
  • Conversation tone: Neutral, Formal, Friendly, Professional, Enthusiastic
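One way to fold the global system prompt and tone setting into the outgoing message list; the function and type names here are a sketch, not the app's actual API:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

type Tone = 'Neutral' | 'Formal' | 'Friendly' | 'Professional' | 'Enthusiastic';

// Prepend a single system message built from the global prompt and the
// selected tone; a Neutral tone adds nothing.
function withSystemPrompt(messages: ChatMessage[], systemPrompt?: string, tone?: Tone): ChatMessage[] {
  const parts: string[] = [];
  if (systemPrompt) parts.push(systemPrompt);
  if (tone && tone !== 'Neutral') parts.push(`Respond in a ${tone.toLowerCase()} tone.`);
  if (parts.length === 0) return messages;
  return [{ role: 'system', content: parts.join('\n') }, ...messages];
}
```

Placing the system message first means it frames the entire conversation that follows.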

Dynamic Model Switching

Change models mid-conversation seamlessly:
  • No context loss when switching models
  • Previous messages remain intact
  • New responses use the selected model
  • Model information displayed per message
Switching is seamless on the interface side, but keep in mind that different models may interpret the earlier context differently.
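The per-conversation model state implied above can be sketched as follows; all names are illustrative:

```typescript
interface ChatTurn {
  role: 'user' | 'assistant';
  content: string;
  model?: string; // which model generated this turn
}

interface Conversation {
  activeModel: string;
  messages: ChatTurn[];
}

// Switching only changes which model future responses use; existing
// messages keep the model tag they were generated with.
function switchModel(conv: Conversation, newModel: string): Conversation {
  return { ...conv, activeModel: newModel };
}

// New responses are tagged with whatever model is active now.
function addAssistantTurn(conv: Conversation, content: string): Conversation {
  const turn: ChatTurn = { role: 'assistant', content, model: conv.activeModel };
  return { ...conv, messages: [...conv.messages, turn] };
}
```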

Best Practices

  • Use descriptive names when saving conversations
  • Archive completed conversations to reduce clutter
  • Export important conversations for backup
  • Regularly review and delete test conversations
  • Clear old conversations periodically (local storage has limits)
  • Use streaming for long responses
  • Enable RAG only when needed for better performance
  • Close unused conversation tabs
  • Keep conversations focused on a single topic
  • Start new conversations for different subjects
  • Use templates for consistent conversation structures
  • Regenerate responses if context seems lost

Technical Details

Message Persistence

Messages are stored using browser localStorage:
// Location: src/services/localStorage.ts
// Storage structure:
{
  conversations: [
    {
      id: string,
      title: string,
      messages: Message[],
      model: string,
      createdAt: Date,
      updatedAt: Date
    }
  ]
}
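Because JSON.stringify flattens Date objects to ISO strings, loading has to revive them. A serialization sketch for the structure above (helper names assumed):

```typescript
interface StoredConversation {
  id: string;
  title: string;
  messages: unknown[];
  model: string;
  createdAt: Date;
  updatedAt: Date;
}

function serialize(conversations: StoredConversation[]): string {
  return JSON.stringify({ conversations });
}

// JSON turns Date objects into ISO strings, so loading must revive
// them into real Dates before use.
function deserialize(raw: string): StoredConversation[] {
  const parsed = JSON.parse(raw) as { conversations: any[] };
  return parsed.conversations.map(c => ({
    ...c,
    createdAt: new Date(c.createdAt),
    updatedAt: new Date(c.updatedAt),
  }));
}
```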

API Integration

All messages are sent to OpenRouter API:
// From: src/services/openRouter.ts
const API_URL = 'https://openrouter.ai/api/v1/chat/completions';

// Headers include:
{
  'Authorization': `Bearer ${apiKey}`,
  'Content-Type': 'application/json',
  'HTTP-Referer': window.location.origin,
  'X-Title': 'PolyChat AI'
}
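Putting the pieces together, a request to this endpoint might be assembled as below. The URL and headers are taken from the snippet above; the body shape (model, messages, stream) follows OpenRouter's chat-completions format, and the helper itself is illustrative:

```typescript
const API_URL = 'https://openrouter.ai/api/v1/chat/completions';

interface OutgoingRequest {
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Assemble the POST request for the chat-completions endpoint.
function buildChatRequest(
  apiKey: string,
  model: string,
  messages: { role: string; content: string }[],
  origin: string,
): OutgoingRequest {
  return {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
      'HTTP-Referer': origin, // lets OpenRouter attribute traffic to the app
      'X-Title': 'PolyChat AI',
    },
    body: JSON.stringify({ model, messages, stream: true }),
  };
}

// Usage in the browser:
// fetch(API_URL, buildChatRequest(apiKey, model, messages, window.location.origin));
```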

Next: Multi-Model Chat

Learn how to run up to 3 models simultaneously for comparison
