
Documentation Index

Fetch the complete documentation index at: https://mintlify.com/Y-Research-SBU/QuantAgent/llms.txt

Use this file to discover all available pages before exploring further.

QuantAgent decomposes the trading analysis problem into four focused agents, each responsible for a distinct analytical layer. A LangGraph StateGraph wires them into a sequential pipeline that runs from market data ingestion through to a final LONG/SHORT order.

Graph structure

The compiled graph follows a strict linear topology:
1. START → Indicator Agent: Raw OHLCV data enters the graph. The Indicator Agent computes momentum and oscillator values using five TA-Lib tools.

2. Indicator Agent → Pattern Agent: Indicator results and an indicator_report are written into the shared state. The Pattern Agent reads this state and generates a candlestick chart for visual pattern recognition.

3. Pattern Agent → Trend Agent: A pattern_report is appended to state. The Trend Agent generates a trendline-annotated chart and uses a vision LLM to interpret support and resistance dynamics.

4. Trend Agent → Decision Maker: A trend_report is appended to state. The Decision Maker reads all three reports simultaneously and issues a structured JSON trade decision.

5. Decision Maker → END: The final_trade_decision JSON is written into state and the graph terminates.
In code, the graph is assembled by SetGraph.set_graph() in graph_setup.py:
# Build the pipeline over the shared state schema
graph = StateGraph(IndicatorAgentState)

# Strict linear topology: each agent hands off to the next
graph.add_edge(START, "Indicator Agent")
graph.add_edge("Indicator Agent", "Pattern Agent")
graph.add_edge("Pattern Agent", "Trend Agent")
graph.add_edge("Trend Agent", "Decision Maker")
graph.add_edge("Decision Maker", END)

return graph.compile()
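Conceptually, this linear topology is equivalent to applying each node function in order over a single shared state dict. A stdlib-only sketch of that behavior (the node bodies here are placeholders, not the real agents):

```python
from typing import Any, Callable, Dict

State = Dict[str, Any]

# Placeholder node functions: each reads the shared state and
# returns only the keys it writes, mirroring the node contract.
def indicator_agent(state: State) -> State:
    return {"indicator_report": f"indicators for {state['stock_name']}"}

def pattern_agent(state: State) -> State:
    return {"pattern_report": "pattern analysis"}

def trend_agent(state: State) -> State:
    return {"trend_report": "trend analysis"}

def decision_maker(state: State) -> State:
    return {"final_trade_decision": '{"decision": "LONG"}'}

PIPELINE: list = [indicator_agent, pattern_agent, trend_agent, decision_maker]

def run(state: State) -> State:
    # Merge each node's output into the shared state, in order,
    # like the sequential edge traversal of the compiled graph.
    for node in PIPELINE:
        state = {**state, **node(state)}
    return state

result = run({"kline_data": [], "time_frame": "1h", "stock_name": "BTC/USDT"})
```

This is only an illustration of the data flow; the real nodes call LLMs and tools rather than returning constants.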

Shared state

All four agents communicate exclusively through a single IndicatorAgentState TypedDict defined in agent_state.py. No agent calls another directly — every result is written to state and read by the next node.

Input fields

kline_data, time_frame, stock_name — provided by the caller before graph invocation.

Indicator Agent writes

rsi, macd, macd_signal, macd_hist, stoch_k, stoch_d, roc, willr, indicator_report

Pattern Agent writes

pattern_image, pattern_image_filename, pattern_image_description, pattern_report

Trend Agent writes

trend_image, trend_image_filename, trend_image_description, trend_report
The Decision Maker reads indicator_report, pattern_report, and trend_report and writes final_trade_decision.
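Putting the lists above together, the shared schema can be sketched as one TypedDict. Field names come from this page; the value types are assumptions, and the real agent_state.py may differ:

```python
from typing import TypedDict

class IndicatorAgentState(TypedDict, total=False):
    # Input fields, provided by the caller before graph invocation
    kline_data: list
    time_frame: str
    stock_name: str
    # Indicator Agent outputs
    rsi: float
    macd: float
    macd_signal: float
    macd_hist: float
    stoch_k: float
    stoch_d: float
    roc: float
    willr: float
    indicator_report: str
    # Pattern Agent outputs
    pattern_image: str
    pattern_image_filename: str
    pattern_image_description: str
    pattern_report: str
    # Trend Agent outputs
    trend_image: str
    trend_image_filename: str
    trend_image_description: str
    trend_report: str
    # Decision Maker output
    final_trade_decision: str

# total=False lets nodes populate the state incrementally
state: IndicatorAgentState = {"kline_data": [], "time_frame": "4h", "stock_name": "ETH/USDT"}
```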

Two LLM roles

QuantAgent uses two distinct LLM instances with different capability requirements:
graph_llm
  Config key: graph_llm_model · Default model: gpt-4o
  Primary LLM: used by the Indicator Agent for tool-calling and report generation, by the Pattern and Trend agents for vision-based chart analysis, and by the Decision Maker for final synthesis. Must be vision-capable.

agent_llm
  Config key: agent_llm_model · Default model: gpt-4o-mini
  Tool-dispatch LLM: used only in the Pattern and Trend agents to call the image-generation tools (generate_kline_image, generate_trend_image). A lighter model is sufficient here.
The Indicator Agent uses graph_llm exclusively. The Pattern and Trend agents use agent_llm for the tool-dispatch step (generating charts) and graph_llm for the vision analysis step (interpreting charts). The Decision Agent uses graph_llm for final synthesis.
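The role-to-model resolution can be pictured as a small config lookup. The key names come from this page; the resolve_role helper is illustrative, not the project's actual API:

```python
# Config keys as documented on this page; values are the defaults.
DEFAULT_CONFIG = {
    "graph_llm_provider": "openai",
    "graph_llm_model": "gpt-4o",        # vision-capable: chart analysis + synthesis
    "agent_llm_provider": "openai",
    "agent_llm_model": "gpt-4o-mini",   # lightweight: tool dispatch only
}

def resolve_role(config: dict, role: str) -> tuple:
    """Return (provider, model) for a role: 'graph_llm' or 'agent_llm'."""
    return config[f"{role}_provider"], config[f"{role}_model"]

provider, model = resolve_role(DEFAULT_CONFIG, "graph_llm")
```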

Why vision-capable LLMs are required

The Pattern Agent and Trend Agent both encode candlestick charts as base64 PNG images and pass them directly to the LLM using the image_url content type:
image_prompt = [
    {"type": "text", "text": "This is a candlestick chart..."},
    {
        "type": "image_url",
        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
    },
]
Any model you assign to graph_llm must support multimodal (vision) inputs. OpenAI gpt-4o, Anthropic Claude 3+ models, and Qwen VL models all satisfy this requirement.
Assigning a text-only model to graph_llm will cause runtime errors in the Pattern Agent and Trend Agent nodes. Always verify that your chosen model supports image inputs before running the graph.
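Building that multimodal message from raw PNG bytes takes only the stdlib base64 module. A sketch of the encoding step (the placeholder bytes stand in for a real rendered chart):

```python
import base64

def build_image_prompt(png_bytes: bytes, caption: str) -> list:
    # Encode the chart as a base64 data URL, matching the
    # image_url content type shown above.
    image_b64 = base64.b64encode(png_bytes).decode("ascii")
    return [
        {"type": "text", "text": caption},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{image_b64}"},
        },
    ]

# Placeholder bytes; in practice this is the rendered candlestick chart PNG.
prompt = build_image_prompt(b"\x89PNG...", "This is a candlestick chart...")
```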

Supported LLM providers

TradingGraph supports three providers, configured via the agent_llm_provider and graph_llm_provider config keys:
  • openai: ChatOpenAI (e.g., gpt-4o, gpt-4o-mini)
  • anthropic: ChatAnthropic (e.g., claude-3-5-sonnet-20241022)
  • qwen: ChatQwen (e.g., qwen-vl-max-latest)
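A plain lookup table makes the provider dispatch explicit. Class names are kept as strings here so the sketch runs without any provider SDK installed; the real code constructs the LangChain chat classes directly:

```python
# Provider key -> (LangChain chat class name, example model from this page).
PROVIDERS = {
    "openai": ("ChatOpenAI", "gpt-4o"),
    "anthropic": ("ChatAnthropic", "claude-3-5-sonnet-20241022"),
    "qwen": ("ChatQwen", "qwen-vl-max-latest"),
}

def lookup(provider: str) -> tuple:
    try:
        return PROVIDERS[provider]
    except KeyError:
        # Fail loudly on an unsupported provider key
        raise ValueError(f"Unsupported provider: {provider!r}") from None

cls_name, example_model = lookup("anthropic")
```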
