QuantAgent uses two distinct model roles: one for the individual analysis agents and one for the decision-making graph. You can configure each independently, including using different providers for each role.
The two model roles
graph_llm_model
Used by the Indicator, Pattern, Trend (vision analysis step), and Decision agents. This is the primary reasoning model. Must be vision-capable because Pattern and Trend agents pass chart images to it.
agent_llm_model
Used only by the Pattern and Trend agents for the tool-dispatch step (calling chart generation tools). A lighter, more cost-efficient model works well here.
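A minimal sketch of how the two roles sit side by side in a config dict. The two model-role keys come from this page; the `llm_provider` key and the overall dict shape are illustrative assumptions, not QuantAgent's exact schema:

```python
# Sketch of a model config; "llm_provider" is an assumed key,
# the two model-role keys are the ones documented above.
config = {
    "llm_provider": "openai",
    "graph_llm_model": "gpt-4o",       # vision-capable: receives chart images
    "agent_llm_model": "gpt-4o-mini",  # lighter model for tool dispatch
}
```

Because the roles are independent, you could point `graph_llm_model` at a stronger (or even different-provider) model without touching the tool-dispatch role.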
Default models by provider
When you switch providers (OpenAI, Anthropic, or Qwen) via the web UI or the POST /api/update-provider endpoint, QuantAgent automatically sets default models for that provider. For OpenAI, the defaults are:

| Role | Default model |
|---|---|
| agent_llm_model | gpt-4o-mini |
| graph_llm_model | gpt-4o |
gpt-4o-mini supports vision and is significantly cheaper than gpt-4o, making it a good fit for the high-frequency agent calls. gpt-4o provides stronger reasoning for the final decision synthesis.

Overriding model names
You can specify any model name supported by the provider in the config dict. The model name is passed directly to the underlying LangChain client (ChatOpenAI, ChatAnthropic, or ChatQwen).
Model names must exactly match the identifiers used by the provider’s API. An invalid model name will cause the LLM initialization to fail at startup.
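To illustrate what "passed directly" means, here is a hedged sketch: the configured name ends up as the client's `model` argument with no validation in between, which is why an invalid identifier only fails when the client initializes. The `client_kwargs` helper is illustrative, not part of QuantAgent:

```python
# Illustrative: the configured name goes straight into the client's
# "model" argument, so it must exactly match the provider's API identifier.
def client_kwargs(config: dict) -> dict:
    return {
        "model": config["graph_llm_model"],        # e.g. "gpt-4o"; not validated here
        "temperature": config.get("temperature", 0.1),
    }

# These kwargs would then be splatted into the LangChain client,
# e.g. ChatOpenAI(**client_kwargs(config)).
```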
Temperature
Both model roles default to temperature: 0.1. This low value produces consistent, near-deterministic outputs, which matters for trading analysis where you want repeatable results given the same market data. For trading use, keep the temperature at 0.1 or lower.
Switching providers at runtime
You can switch providers without restarting by calling POST /api/update-provider. QuantAgent automatically sets the appropriate default models for the new provider.

Note that if you set agent_llm_model and graph_llm_model explicitly in your config before the switch, the provider update logic only resets model names that don't already match the new provider's prefix (claude, qwen, or gpt); a model that already matches the new prefix is left unchanged.
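The prefix-based reset rule can be sketched as follows. The function and mapping names here are illustrative, not QuantAgent's actual internals:

```python
# Hypothetical sketch of the reset rule: a model name is kept across a
# provider switch only if it starts with the new provider's prefix.
PROVIDER_PREFIXES = {"openai": "gpt", "anthropic": "claude", "qwen": "qwen"}

def resolve_model(current_model: str, new_provider: str, provider_default: str) -> str:
    """Keep the current model if it matches the new provider's prefix,
    otherwise fall back to that provider's default model."""
    if current_model.startswith(PROVIDER_PREFIXES[new_provider]):
        return current_model
    return provider_default
```

For example, an explicitly configured `gpt-4o` survives a switch to OpenAI, but a switch to Anthropic would replace it with the Anthropic default because `gpt-4o` does not start with `claude`.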