Set llm_provider in config.toml to select a provider, then fill in the matching fields for that provider.
Only the fields for your chosen provider need to be filled in; all other provider sections can be left empty.
OpenAI

llm_provider = "openai"

Get your API key at platform.openai.com/api-keys. Check available models at platform.openai.com/account/limits.

| Key | Required | Default |
|---|---|---|
| openai_api_key | Yes | — |
| openai_model_name | Yes | "gpt-4o-mini" |
| openai_base_url | No | https://api.openai.com/v1 |
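A minimal config.toml sketch for this provider, using the defaults from the table above (the key value is a placeholder):

```toml
llm_provider = "openai"

openai_api_key = "sk-..."                       # required; placeholder value
openai_model_name = "gpt-4o-mini"               # required
openai_base_url = "https://api.openai.com/v1"   # optional; default shown
```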
Google Gemini

llm_provider = "gemini"

Get your API key at aistudio.google.com.

| Key | Required | Default |
|---|---|---|
| gemini_api_key | Yes | — |
| gemini_model_name | Yes | "gemini-1.0-pro" |
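A minimal config.toml sketch for Gemini (the key value is a placeholder):

```toml
llm_provider = "gemini"

gemini_api_key = "AIza..."                # required; placeholder value
gemini_model_name = "gemini-1.0-pro"      # required
```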
Gemini is also used for TTS when you select a gemini: voice. The same gemini_api_key is shared between LLM and TTS calls.
DeepSeek

llm_provider = "deepseek"

Get your API key at platform.deepseek.com/api_keys.

| Key | Required | Default |
|---|---|---|
| deepseek_api_key | Yes | — |
| deepseek_model_name | Yes | "deepseek-chat" |
| deepseek_base_url | No | https://api.deepseek.com |
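A minimal config.toml sketch for DeepSeek (the key value is a placeholder):

```toml
llm_provider = "deepseek"

deepseek_api_key = "sk-..."                       # required; placeholder value
deepseek_model_name = "deepseek-chat"             # required
deepseek_base_url = "https://api.deepseek.com"    # optional; default shown
```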
Qwen (Alibaba)

llm_provider = "qwen"

Get your API key at dashscope.console.aliyun.com/apiKey. See model options in the DashScope documentation.

| Key | Required | Default |
|---|---|---|
| qwen_api_key | Yes | — |
| qwen_model_name | Yes | "qwen-max" |
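A minimal config.toml sketch for Qwen (the key value is a placeholder):

```toml
llm_provider = "qwen"

qwen_api_key = "sk-..."           # required; placeholder value
qwen_model_name = "qwen-max"      # required
```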
Qwen uses the dashscope Python SDK rather than the OpenAI client. Make sure dashscope is installed in your environment.
Moonshot

llm_provider = "moonshot"

Get your API key at platform.moonshot.cn/console/api-keys.

| Key | Required | Default |
|---|---|---|
| moonshot_api_key | Yes | — |
| moonshot_model_name | Yes | "moonshot-v1-8k" |
| moonshot_base_url | No | https://api.moonshot.cn/v1 |
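A minimal config.toml sketch for Moonshot (the key value is a placeholder):

```toml
llm_provider = "moonshot"

moonshot_api_key = "sk-..."                         # required; placeholder value
moonshot_model_name = "moonshot-v1-8k"              # required
moonshot_base_url = "https://api.moonshot.cn/v1"    # optional; default shown
```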
Azure OpenAI

llm_provider = "azure"

Create a deployment in Azure AI Studio. Your azure_base_url is the endpoint shown in the Azure portal (e.g. https://<resource>.openai.azure.com/), and azure_model_name must match the deployment name you created.

| Key | Required | Default |
|---|---|---|
| azure_api_key | Yes | — |
| azure_base_url | Yes | — |
| azure_model_name | Yes | "gpt-35-turbo" |
| azure_api_version | No | "2024-02-15-preview" |
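A minimal config.toml sketch for Azure OpenAI. The resource name and deployment name below are placeholders; substitute the values from your own Azure portal:

```toml
llm_provider = "azure"

azure_api_key = "..."                                    # required; placeholder value
azure_base_url = "https://my-resource.openai.azure.com/" # required; your endpoint (placeholder)
azure_model_name = "gpt-35-turbo"                        # required; must match your deployment name
azure_api_version = "2024-02-15-preview"                 # optional; default shown
```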
Ollama (local)

llm_provider = "ollama"

Ollama runs models locally with no API key required. Start the Ollama server and pull a model before pointing MoneyPrinterTurbo at it.

| Key | Required | Default |
|---|---|---|
| ollama_model_name | Yes | — |
| ollama_base_url | No | http://localhost:11434/v1 |
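A minimal config.toml sketch for Ollama. The model name is an example; use any model you have already pulled locally (e.g. via ollama pull):

```toml
llm_provider = "ollama"

ollama_model_name = "llama3"                      # required; example of a locally pulled model
ollama_base_url = "http://localhost:11434/v1"     # optional; default shown
```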
Pollinations AI

llm_provider = "pollinations"

Pollinations offers free public access without an account. An API key is optional and only needed for private or higher-rate access.

| Key | Required | Default |
|---|---|---|
| pollinations_model_name | No | "openai-fast" |
| pollinations_base_url | No | https://pollinations.ai/api/v1 |
| pollinations_api_key | No | — |
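Since every Pollinations field is optional, the minimal config.toml is a single line; the defaults are shown commented out:

```toml
llm_provider = "pollinations"

# pollinations_model_name = "openai-fast"               # optional; default shown
# pollinations_base_url = "https://pollinations.ai/api/v1"  # optional; default shown
# pollinations_api_key = "..."                          # only for private or higher-rate access
```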
ModelScope

llm_provider = "modelscope"

Get your API key at modelscope.cn. You must bind an Alibaba Cloud account before the API key activates.

| Key | Required | Default |
|---|---|---|
| modelscope_api_key | Yes | — |
| modelscope_model_name | Yes | "Qwen/Qwen3-32B" |
| modelscope_base_url | No | https://api-inference.modelscope.cn/v1/ |
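A minimal config.toml sketch for ModelScope (the key value is a placeholder):

```toml
llm_provider = "modelscope"

modelscope_api_key = "ms-..."                                      # required; placeholder value
modelscope_model_name = "Qwen/Qwen3-32B"                           # required
modelscope_base_url = "https://api-inference.modelscope.cn/v1/"    # optional; default shown
```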
ModelScope uses streaming responses internally. The enable_thinking parameter is set to false automatically.
OneAPI

llm_provider = "oneapi"

OneAPI is a self-hosted OpenAI-compatible gateway that aggregates multiple providers under one endpoint.

| Key | Required | Default |
|---|---|---|
| oneapi_api_key | Yes | — |
| oneapi_base_url | Yes | — |
| oneapi_model_name | Yes | — |
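A config.toml sketch for OneAPI. All three values depend entirely on your own gateway deployment; the URL and model name below are hypothetical examples:

```toml
llm_provider = "oneapi"

oneapi_api_key = "sk-..."                     # required; key issued by your OneAPI instance (placeholder)
oneapi_base_url = "http://localhost:3000/v1"  # required; hypothetical self-hosted gateway URL
oneapi_model_name = "gpt-4o-mini"             # required; a model name your gateway routes (example)
```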
g4f (gpt4free)

llm_provider = "g4f"

gpt4free provides free, unofficial access to several LLMs. No API key is needed, but availability is not guaranteed.

| Key | Required | Default |
|---|---|---|
| g4f_model_name | No | "gpt-3.5-turbo" |
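Since g4f needs no key, the minimal config.toml is a single line; the default model is shown for reference:

```toml
llm_provider = "g4f"

g4f_model_name = "gpt-3.5-turbo"   # optional; default shown
```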
Choosing a provider
| Provider | Cost | Quality | Setup effort |
|---|---|---|---|
| OpenAI | Paid | High | Low |
| Gemini | Free tier available | High | Low |
| DeepSeek | Low cost | High | Low |
| Ollama | Free (local GPU/CPU) | Depends on model | Medium |
| Pollinations | Free | Moderate | None |
| g4f | Free | Variable | None |