Participant Types
Participant types define the structure for endpoints sharing LLM access within rooms.

ParticipantInfo
Complete participant information including endpoint, model, specs, and status.

- Unique identifier for the participant
- Display name chosen by the participant
- Name of the LLM model being shared (e.g., "llama3:8b", "gpt-4")
- OpenAI-compatible API endpoint URL (Ollama, LM Studio, etc.)
- Generation parameters for this participant's model. See GenerationConfig.
- Hardware specifications of the participant's machine. See MachineSpecs.
- Current availability status: "online", "busy", or "offline"
- Unix timestamp (milliseconds) when the participant joined the room
- Unix timestamp (milliseconds) of the last health check
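The fields above can be sketched as a TypeScript interface. The field names below are illustrative assumptions, since the reference lists only descriptions, not wire-format keys:

```typescript
// Sketch of ParticipantInfo and its related types.
// Field names are assumptions; descriptions come from the reference.
type ParticipantStatus = "online" | "busy" | "offline";

interface MachineSpecs {
  gpu: string;    // GPU model, e.g. "NVIDIA RTX 4090"
  vramGb: number; // video RAM in GB
  ramGb: number;  // system RAM in GB
  cpu: string;    // CPU model, e.g. "AMD Ryzen 9 7950X"
}

interface GenerationConfig {
  temperature?: number;       // 0 to 2
  top_p?: number;             // 0 to 1
  max_tokens?: number;
  stop?: string[];
  frequency_penalty?: number; // -2 to 2
  presence_penalty?: number;  // -2 to 2
  seed?: number;
}

interface ParticipantInfo {
  id: string;                // unique identifier
  name: string;              // display name
  model: string;             // shared model, e.g. "llama3:8b"
  endpoint: string;          // OpenAI-compatible API URL
  config: GenerationConfig;  // generation parameters
  specs: MachineSpecs;       // hardware specifications
  status: ParticipantStatus;
  joinedAt: number;          // Unix ms when the participant joined
  lastHealthCheck: number;   // Unix ms of the last health check
}

// A well-formed example value:
const example: ParticipantInfo = {
  id: "p-1",
  name: "alice",
  model: "llama3:8b",
  endpoint: "http://localhost:11434/v1",
  config: { temperature: 0.7 },
  specs: { gpu: "NVIDIA RTX 4090", vramGb: 24, ramGb: 64, cpu: "AMD Ryzen 9 7950X" },
  status: "online",
  joinedAt: Date.now(),
  lastHealthCheck: Date.now(),
};
```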
ParticipantStatus
Enum representing the availability status of a participant.

- "online" - Participant is available and can accept requests
- "busy" - Participant is currently processing a request
- "offline" - Participant has not sent a health check within the timeout period
MachineSpecs
Hardware specifications for a participant's machine.

- GPU model (e.g., "NVIDIA RTX 4090")
- Video RAM in GB
- System RAM in GB
- CPU model (e.g., "AMD Ryzen 9 7950X")
GenerationConfig
Generation parameters compatible with OpenAI-like APIs. These are the common parameters supported by most providers.

- Sampling temperature between 0 and 2. Higher values make output more random.
- Nucleus sampling parameter between 0 and 1
- Maximum number of tokens to generate
- Array of sequences where generation will stop
- Penalty for token frequency between -2 and 2
- Penalty for token presence between -2 and 2
- Random seed for deterministic generation
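These parameters can be merged directly into an OpenAI-compatible request body. A minimal sketch, assuming the snake_case key names used by OpenAI-style providers (whether the room stores them in exactly this shape is an assumption):

```typescript
// Merging a GenerationConfig into an OpenAI-compatible
// /v1/chat/completions request body.
interface GenerationConfig {
  temperature?: number;       // 0 to 2
  top_p?: number;             // 0 to 1
  max_tokens?: number;
  stop?: string[];
  frequency_penalty?: number; // -2 to 2
  presence_penalty?: number;  // -2 to 2
  seed?: number;
}

function buildRequestBody(
  model: string,
  messages: { role: string; content: string }[],
  config: GenerationConfig,
): Record<string, unknown> {
  // The JSON round-trip drops keys set to undefined, so unset
  // parameters fall back to the provider's defaults.
  return JSON.parse(JSON.stringify({ model, messages, ...config }));
}

const body = buildRequestBody(
  "llama3:8b",
  [{ role: "user", content: "Hello" }],
  { temperature: 0.7, max_tokens: 256, seed: 42 },
);
```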
Health Check Constants
Constants that control participant health monitoring.

- Interval in milliseconds between health checks (10 seconds)
- Timeout in milliseconds before marking a participant as offline (30 seconds, or 3 missed health checks)
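The offline check implied by these constants can be sketched as follows. The constant names are assumptions; the values come from the reference:

```typescript
// Health monitoring constants (names assumed, values from the docs).
const HEALTH_CHECK_INTERVAL_MS = 10_000; // one check every 10 seconds
const HEALTH_CHECK_TIMEOUT_MS = 30_000;  // 30 s, i.e. 3 missed checks

// A participant is considered offline once its last health check
// is older than the timeout.
function isOffline(lastHealthCheckMs: number, nowMs: number): boolean {
  return nowMs - lastHealthCheckMs > HEALTH_CHECK_TIMEOUT_MS;
}
```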
Usage Example
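A hypothetical end-to-end flow: pick an online participant from a room and send a chat completion to its endpoint. The field names, helper names, and the request shape are illustrative assumptions:

```typescript
// Minimal sketch of consuming participant info (assumed field names).
interface Participant {
  id: string;
  model: string;
  endpoint: string; // OpenAI-compatible base URL
  status: "online" | "busy" | "offline";
}

// Only "online" participants can accept new requests.
function pickAvailable(participants: Participant[]): Participant | undefined {
  return participants.find((p) => p.status === "online");
}

// Send a prompt to the first available participant's endpoint.
async function askRoom(participants: Participant[], prompt: string) {
  const p = pickAvailable(participants);
  if (!p) throw new Error("no participant available");
  const res = await fetch(`${p.endpoint}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: p.model,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  return res.json();
}

const room: Participant[] = [
  { id: "a", model: "gpt-4", endpoint: "http://10.0.0.2/v1", status: "busy" },
  { id: "b", model: "llama3:8b", endpoint: "http://10.0.0.3/v1", status: "online" },
];
```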
Related Types
- RoomInfo - Room metadata where participants join
- Protocol Messages - Participant-related SSE events
- LlmMetrics - Metrics for LLM requests