
QuantAgent uses two distinct model roles: one for the individual analysis agents and one for the decision-making graph. You can configure each independently, including using different providers for each role.

The two model roles

graph_llm_model

Used by the Indicator, Pattern, Trend (vision analysis step), and Decision agents. This is the primary reasoning model, and it must be vision-capable because the Pattern and Trend agents pass chart images to it.

agent_llm_model

Used only by the Pattern and Trend agents for the tool-dispatch step (calling chart generation tools). A lighter, more cost-efficient model works well here.
Note: choose a model with vision (image input) support for graph_llm_model — the Pattern and Trend agents send chart images to it for visual analysis. agent_llm_model handles only tool dispatch (generating chart images) and does not require vision support.
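Because the two roles are configured independently, you can also mix providers — for example, a cost-efficient OpenAI model for tool dispatch and an Anthropic vision model for analysis. The sketch below is illustrative only: the Anthropic model name and single-key handling are assumptions, not defaults documented on this page.

```python
# Sketch only: mixing providers per role. The Anthropic model name below
# is an illustrative assumption, not a QuantAgent default.
TradingGraph(config={
    "agent_llm_provider": "openai",       # tool dispatch — no vision needed
    "graph_llm_provider": "anthropic",    # analysis/decision — vision required
    "agent_llm_model": "gpt-4o-mini",
    "graph_llm_model": "claude-3-5-sonnet-20241022",  # vision-capable (assumed)
    "api_key": "sk-...",
})
```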

Default models by provider

When you switch providers via the web UI or the POST /api/update-provider endpoint, QuantAgent automatically sets these default models:
| Role | Default model |
| --- | --- |
| `agent_llm_model` | `gpt-4o-mini` |
| `graph_llm_model` | `gpt-4o` |
gpt-4o-mini supports vision and is significantly cheaper than gpt-4o, making it a good fit for the high-frequency agent calls. gpt-4o provides stronger reasoning for the final decision synthesis.
```python
TradingGraph(config={
    "agent_llm_provider": "openai",
    "graph_llm_provider": "openai",
    "agent_llm_model": "gpt-4o-mini",
    "graph_llm_model": "gpt-4o",
    "api_key": "sk-...",
})
```

Overriding model names

You can specify any model name supported by the provider in the config dict. The model name is passed directly to the underlying LangChain client (ChatOpenAI, ChatAnthropic, or ChatQwen).
```python
TradingGraph(config={
    "agent_llm_provider": "openai",
    "graph_llm_provider": "openai",
    "agent_llm_model": "gpt-4o",       # upgrade agent model
    "graph_llm_model": "gpt-4o",
    "api_key": "sk-...",
})
```
Model names must exactly match the identifiers used by the provider’s API. An invalid model name will cause the LLM initialization to fail at startup.

Temperature

Both model roles default to temperature: 0.1. This low value produces consistent, near-deterministic outputs — important for trading analysis where you want repeatable results given the same market data.
```python
TradingGraph(config={
    "agent_llm_temperature": 0.1,   # deterministic agent analysis
    "graph_llm_temperature": 0.1,   # deterministic trade decisions
    # ...other config keys
})
```
Raise the temperature only if you want more varied outputs (e.g., for research or experimentation). For production trading analysis, keep it at 0.1 or lower.

Switching providers at runtime

You can switch providers without restarting by calling POST /api/update-provider. QuantAgent automatically sets the appropriate default models for the new provider:
```http
POST /api/update-provider
Content-Type: application/json

{ "provider": "anthropic" }
```
To keep your current model names when switching providers, update agent_llm_model and graph_llm_model explicitly in your config before the switch — the provider update logic only resets model names if they don’t already match the new provider’s prefix (claude, qwen, or gpt).
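The prefix-based reset rule can be sketched as a small predicate. This is a hypothetical illustration of the behavior described above, not QuantAgent's actual code:

```python
# Hypothetical sketch of the reset rule described above, not QuantAgent's code.
PROVIDER_PREFIX = {"openai": "gpt", "anthropic": "claude", "qwen": "qwen"}

def model_reset_on_switch(model_name: str, new_provider: str) -> bool:
    """True if switching to new_provider would replace model_name with a default."""
    return not model_name.startswith(PROVIDER_PREFIX[new_provider])
```

For example, switching to `anthropic` while `graph_llm_model` is `gpt-4o` would reset the model name, but a name already starting with `claude` would be kept.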
