Overview
The sessionConfig object defines the conversation behavior for Highway’s OpenAI Realtime API integration. This configuration controls voice detection, audio formats, model behavior, and available functions.
Session Configuration
The session configuration is defined in conversationConfig.js and sent to OpenAI when establishing a WebSocket connection.
conversationConfig.js
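The file’s contents aren’t reproduced in this extract; below is a minimal sketch of what the configuration might look like, assuming the OpenAI Realtime session schema and the parameter values described in the next section (the placeholder system message is an assumption):

```javascript
// conversationConfig.js (sketch) -- field names follow the OpenAI Realtime
// session schema; values not stated in the docs are illustrative assumptions.
const SYSTEM_MESSAGE = "You are a helpful verification assistant."; // placeholder; see System Prompts
const VOICE = "shimmer"; // default voice per the docs

const sessionConfig = {
  turn_detection: { type: "server_vad" }, // VAD: decides when the user stopped speaking
  input_audio_format: "g711_ulaw",        // required for Twilio compatibility
  output_audio_format: "g711_ulaw",       // required for Twilio compatibility
  voice: VOICE,
  instructions: SYSTEM_MESSAGE,
  modalities: ["text", "audio"],          // both are needed for voice calls
  temperature: 0.6,                       // low for consistent verification flows
  tools: [],                              // hang_up_call and call_reflection_data go here
};

module.exports = { sessionConfig };
```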
Configuration Parameters
The parameters below correspond to fields in the OpenAI Realtime session object:

- turn_detection: Voice Activity Detection (VAD) settings for determining when a user has finished speaking.
- input_audio_format: Audio codec for incoming audio from Twilio. Must be "g711_ulaw" for Twilio compatibility.
- output_audio_format: Audio codec for outgoing audio to Twilio. Must be "g711_ulaw" for Twilio compatibility.
- voice: OpenAI voice to use for text-to-speech. Default is "shimmer". See OpenAI’s voice options for alternatives: alloy, echo, fable, onyx, nova, shimmer.
- instructions: System message that defines the AI assistant’s behavior. See System Prompts for details.
- modalities: Communication modes enabled for the conversation. Include both "text" and "audio" for voice calls.
- temperature: Controls randomness in responses (0.0 to 1.0). Lower values (like 0.6) produce more consistent, focused responses, ideal for verification workflows.
- tools: Function definitions available to the AI during the call. See Available Tools below.
Available Tools
Highway provides two built-in functions that the AI can call during conversations:
hang_up_call
Ends the phone call gracefully.

Description:
This function terminates the active phone call. The AI is instructed to only hang up if:
- The customer explicitly asks to hang up
- All system prompts are finished
- The AI has said “thank you” before hanging up

Parameters:
- A boolean flag that must be true to hang up the call.

Behavior:
When this function is called (websocket.js:95-99), Highway closes the WebSocket connection, ending the call.
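The tool definition itself isn’t shown in this extract; the sketch below illustrates how such a function could be declared in the tools array. The parameter name ("confirm") and the exact description strings are assumptions, not Highway’s actual code:

```javascript
// Sketch of a hang_up_call tool definition for the OpenAI Realtime API.
// Parameter name and wording are assumptions based on the docs above.
const hangUpCallTool = {
  type: "function",
  name: "hang_up_call",
  description:
    "End the phone call. Only call this if the customer asks to hang up " +
    "or all system prompts are finished, and only after saying 'thank you'.",
  parameters: {
    type: "object",
    properties: {
      confirm: {
        type: "boolean",
        description: "Must be true to hang up the call.",
      },
    },
    required: ["confirm"],
  },
};

module.exports = { hangUpCallTool };
```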
call_reflection_data
Sends call metadata and status information to the backend after the call completes.

Description:
This function records the outcome of the call. The AI is explicitly instructed NOT to run this function unless told to by the system.

Parameters:
- Call outcome status. Must be one of:
  - "user_hung_up": Customer ended the call
  - "system_error": Technical error occurred
  - "successful_call": Call completed successfully
  - "unsuccessful_call": Call did not achieve its goal
  - "in_progress": Call is still active

Behavior:
When called (websocket.js:86-92), the function updates the call record in Supabase with the provided status.
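As with hang_up_call, the actual definition isn’t reproduced here; the sketch below shows one plausible shape. The parameter name ("status") is an assumption, while the enum values come from the list above:

```javascript
// Sketch of a call_reflection_data tool definition. The parameter name
// is an assumption; the allowed status values come from the docs above.
const CALL_STATUSES = [
  "user_hung_up",
  "system_error",
  "successful_call",
  "unsuccessful_call",
  "in_progress",
];

const callReflectionTool = {
  type: "function",
  name: "call_reflection_data",
  description:
    "Record the outcome of the call. Do NOT call this unless instructed by the system.",
  parameters: {
    type: "object",
    properties: {
      status: {
        type: "string",
        enum: CALL_STATUSES,
        description: "Call outcome status.",
      },
    },
    required: ["status"],
  },
};

module.exports = { callReflectionTool, CALL_STATUSES };
```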
Customizing Conversation Behavior
Adjusting Voice Detection Sensitivity
If users are being cut off mid-sentence, lower the threshold:
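A sketch of what the adjustment might look like, assuming server-side VAD with the OpenAI Realtime turn_detection fields; the concrete numbers are illustrative assumptions, not Highway’s defaults:

```javascript
// Sketch: lowering the VAD threshold so quieter speech still registers.
// Field names follow the OpenAI Realtime turn_detection schema; the
// specific values here are assumptions for illustration.
const turnDetection = {
  type: "server_vad",
  threshold: 0.4,           // more sensitive than a 0.5 baseline
  prefix_padding_ms: 300,   // audio retained before detected speech
  silence_duration_ms: 700, // raise this too if users pause mid-sentence
};

module.exports = { turnDetection };
```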
Changing the Voice
Modify the VOICE constant in config.js:
config.js
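The code block from config.js isn’t reproduced in this extract; a minimal sketch, assuming VOICE is a plain exported constant:

```javascript
// config.js (sketch): switching the assistant voice from the default.
// Any of the documented options can be used:
// alloy, echo, fable, onyx, nova, shimmer.
const VOICE = "nova"; // was "shimmer"

module.exports = { VOICE };
```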
Adjusting Response Creativity
For more consistent responses, set temperature to a lower value such as 0.6.

Adding Custom Functions
Add new tool definitions to the tools array:
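As a sketch, a hypothetical lookup function could be added alongside the built-in ones. The name "check_order_status", its parameter, and the description are invented for illustration:

```javascript
// Sketch: adding a hypothetical custom function to the tools array.
// "check_order_status" and its parameter are invented for illustration.
const checkOrderStatusTool = {
  type: "function",
  name: "check_order_status",
  description: "Look up the current status of a customer's order.",
  parameters: {
    type: "object",
    properties: {
      order_id: {
        type: "string",
        description: "The customer's order number.",
      },
    },
    required: ["order_id"],
  },
};

// Appended to the existing tools array in the session configuration:
const tools = [checkOrderStatusTool]; // plus hang_up_call, call_reflection_data

module.exports = { checkOrderStatusTool, tools };
```

A matching handler for the function-call event would also be needed in websocket.js, alongside the handlers for the built-in functions.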
Usage in Code
The session configuration is sent to OpenAI when the WebSocket connection opens:
websocket.js
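The code block from websocket.js isn’t reproduced in this extract; the sketch below shows the general shape of the handshake. The session.update event type follows the OpenAI Realtime API; how Highway names its variables is an assumption:

```javascript
// Sketch: sending the session configuration when the OpenAI WebSocket opens.
// Abbreviated config; the real object comes from conversationConfig.js.
const sessionConfig = { modalities: ["text", "audio"] };

// Build the session.update event that the Realtime API expects on connect.
function buildSessionUpdate(config) {
  return JSON.stringify({ type: "session.update", session: config });
}

// In websocket.js the serialized event would be sent on open, roughly:
//   openAiWs.on("open", () => openAiWs.send(buildSessionUpdate(sessionConfig)));

module.exports = { buildSessionUpdate };
```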
Related
- System Prompts - Configure AI behavior and instructions
- WebSocket Media Stream - WebSocket implementation details
- OpenAI Realtime API - Official OpenAI documentation