The chat endpoint provides an interactive conversational interface with AI-powered search and research capabilities. It returns streaming responses with structured blocks for a rich chat experience.

Endpoint

http://localhost:3000/api/chat
Replace localhost:3000 with your Perplexica instance URL if running on a different host or port.

Request body

message (object, required)
The message object containing the user’s input and metadata.

chatModel (object, required)
Defines the chat model to be used. Get available providers and models from the /api/providers endpoint.

embeddingModel (object, required)
Defines the embedding model for similarity-based searching.

optimizationMode (string, required)
Optimization mode to control the performance and quality balance. Available values: speed, balanced, quality.

sources (array, default: [])
Which search sources to enable. Available values: web, academic, discussions.

history (array, default: [])
An array of message pairs representing the conversation history. Each pair consists of a role (either human or assistant) and the message content.

files (array, default: [])
An array of file IDs to include in the context for this message.

systemInstructions (string, default: "")
Custom instructions to guide the AI’s response. Set to null or an empty string for the default behavior.
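The fields above can be assembled into a typed request payload. The sketch below is an assumption-laden illustration, not part of the official client: the tuple shape for history entries is inferred from the description above, and the providerId values are placeholders you would obtain from /api/providers.

```typescript
// Sketch of a request-body builder for /api/chat, based on the fields
// documented on this page. providerId values are placeholders; fetch
// real ones from /api/providers.
type Role = "human" | "assistant";

interface ChatRequest {
  message: { messageId: string; chatId: string; content: string };
  chatModel: { providerId: string; key: string };
  embeddingModel: { providerId: string; key: string };
  optimizationMode: "speed" | "balanced" | "quality";
  sources: ("web" | "academic" | "discussions")[];
  // Assumed shape: [role, content] pairs, per the history description.
  history: [Role, string][];
  files: string[];
  systemInstructions: string;
}

let seq = 0;
const nextId = (prefix: string): string => `${prefix}-${++seq}`;

function buildChatRequest(
  content: string,
  overrides: Partial<ChatRequest> = {},
): ChatRequest {
  return {
    message: { messageId: nextId("msg"), chatId: nextId("chat"), content },
    chatModel: {
      providerId: "550e8400-e29b-41d4-a716-446655440000", // placeholder
      key: "gpt-4o-mini",
    },
    embeddingModel: {
      providerId: "550e8400-e29b-41d4-a716-446655440000", // placeholder
      key: "text-embedding-3-large",
    },
    optimizationMode: "balanced",
    sources: ["web"],
    history: [],
    files: [],
    systemInstructions: "",
    ...overrides,
  };
}
```

Passing only the fields you want to change through overrides keeps the defaults above in one place.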

Response

The chat endpoint returns a streaming response with Content-Type: text/event-stream. The body is a stream of newline-delimited JSON objects, one event per line.

Stream event types

block (object)
A new content block has been created. Contains the block object with its initial state.

updateBlock (object)
An existing block has been updated.

researchComplete (object)
Indicates that the research phase is complete and the AI is ready to generate the final response.

messageEnd (object)
Indicates the message stream has completed successfully.

error (object)
An error occurred during processing.
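A client typically folds these events into per-block state. The reducer below is a minimal sketch: it assumes updateBlock patches are shallow-merged into the matching block, which is what the response example further down suggests.

```typescript
// Minimal sketch of a client-side reducer for the stream events above.
// Assumption: an updateBlock patch is shallow-merged into the block it
// targets; researchComplete/messageEnd/error carry no block content.
type StreamEvent =
  | { type: "block"; block: { id: string; type: string; content: string } }
  | { type: "updateBlock"; blockId: string; patch: Record<string, unknown> }
  | { type: "researchComplete" }
  | { type: "messageEnd" }
  | { type: "error"; message?: string };

function reduceEvents(lines: string[]): Map<string, Record<string, unknown>> {
  const blocks = new Map<string, Record<string, unknown>>();
  for (const line of lines) {
    if (!line.trim()) continue; // skip blank keep-alive lines
    const event = JSON.parse(line) as StreamEvent;
    switch (event.type) {
      case "block":
        blocks.set(event.block.id, { ...event.block });
        break;
      case "updateBlock": {
        const existing = blocks.get(event.blockId) ?? {};
        blocks.set(event.blockId, { ...existing, ...event.patch });
        break;
      }
      // researchComplete / messageEnd / error are signals, not content.
    }
  }
  return blocks;
}
```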

Request example

curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": {
      "messageId": "msg-123",
      "chatId": "chat-456",
      "content": "What is the latest news about AI?"
    },
    "chatModel": {
      "providerId": "550e8400-e29b-41d4-a716-446655440000",
      "key": "gpt-4o-mini"
    },
    "embeddingModel": {
      "providerId": "550e8400-e29b-41d4-a716-446655440000",
      "key": "text-embedding-3-large"
    },
    "optimizationMode": "balanced",
    "sources": ["web"],
    "history": [],
    "files": [],
    "systemInstructions": ""
  }'

Response example

{"type":"block","block":{"id":"block-1","type":"text","content":""}}
{"type":"updateBlock","blockId":"block-1","patch":{"content":"Here are the latest "}}
{"type":"updateBlock","blockId":"block-1","patch":{"content":"Here are the latest developments in AI..."}}
{"type":"researchComplete"}
{"type":"messageEnd"}
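When reading the stream over HTTP, a single JSON line can arrive split across two chunks, so a consumer needs to buffer partial lines before parsing. A small sketch of that buffering (the helper name is ours, not part of the API):

```typescript
// Splits raw stream chunks into complete lines, buffering any partial
// trailing line until the next chunk completes it.
function createLineSplitter(): (chunk: string) => string[] {
  let buffer = "";
  return (chunk: string): string[] => {
    buffer += chunk;
    const parts = buffer.split("\n");
    buffer = parts.pop() ?? ""; // keep the incomplete tail for later
    return parts.filter((line) => line.trim().length > 0);
  };
}
```

Each returned line is then safe to pass to JSON.parse and dispatch on its type field.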
All request body fields must pass validation. The endpoint uses Zod schemas to validate input and will return detailed error messages for invalid requests.

Error responses

400 Bad Request
Returned if the request body is invalid or missing required fields. The response will include detailed error information with field paths and validation messages.

500 Internal Server Error
Returned if an error occurs while processing the chat request.
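For surfacing 400 responses to users, one option is to flatten the validation issues into readable strings. The exact error payload shape is an assumption here; Zod validation issues conventionally carry a path array and a message string, which matches the "field paths and validation messages" described above.

```typescript
// Hedged sketch: formats assumed Zod-style validation issues from a
// 400 response into human-readable strings.
interface ValidationIssue {
  path: (string | number)[]; // e.g. ["message", "content"]
  message: string;
}

function formatIssues(issues: ValidationIssue[]): string[] {
  return issues.map(
    (issue) => `${issue.path.join(".") || "(root)"}: ${issue.message}`,
  );
}
```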
