MoneyPrinterTurbo uses an LLM to write the video script and produce the stock-footage search terms. Set `llm_provider` in `config.toml` to select a provider, then fill in the matching fields for that provider.

```toml
llm_provider = "openai"   # change to any provider listed below
```

Only the fields for your chosen provider need to be filled in; all other provider sections can be left empty.

## OpenAI

Get your API key at platform.openai.com/api-keys. Check available models at platform.openai.com/account/limits.
```toml
llm_provider = "openai"
openai_api_key = "sk-..."
openai_model_name = "gpt-4o-mini"

# Optional: set a custom base URL if using a proxy
# openai_base_url = "https://your-proxy.example.com/v1"
openai_base_url = ""
```
| Key | Required | Default |
| --- | --- | --- |
| `openai_api_key` | Yes | |
| `openai_model_name` | Yes | `"gpt-4o-mini"` |
| `openai_base_url` | No | `https://api.openai.com/v1` |
## Gemini

Get your API key at aistudio.google.com.
```toml
llm_provider = "gemini"
gemini_api_key = "AIza..."
gemini_model_name = "gemini-1.0-pro"
```
| Key | Required | Default |
| --- | --- | --- |
| `gemini_api_key` | Yes | |
| `gemini_model_name` | Yes | `"gemini-1.0-pro"` |
Gemini is also used for TTS when you select a `gemini:` voice. The same `gemini_api_key` is shared between LLM and TTS calls.
## DeepSeek

Get your API key at platform.deepseek.com/api_keys.
```toml
llm_provider = "deepseek"
deepseek_api_key = "sk-..."
deepseek_model_name = "deepseek-chat"
deepseek_base_url = "https://api.deepseek.com"
```
| Key | Required | Default |
| --- | --- | --- |
| `deepseek_api_key` | Yes | |
| `deepseek_model_name` | Yes | `"deepseek-chat"` |
| `deepseek_base_url` | No | `https://api.deepseek.com` |
## Qwen

Get your API key at dashscope.console.aliyun.com/apiKey. See model options in the DashScope documentation.
```toml
llm_provider = "qwen"
qwen_api_key = "sk-..."
qwen_model_name = "qwen-max"
```
| Key | Required | Default |
| --- | --- | --- |
| `qwen_api_key` | Yes | |
| `qwen_model_name` | Yes | `"qwen-max"` |
Qwen uses the `dashscope` Python SDK rather than the OpenAI client. Make sure `dashscope` is installed in your environment.
## Moonshot

Get your API key at platform.moonshot.cn/console/api-keys.
```toml
llm_provider = "moonshot"
moonshot_api_key = "sk-..."
moonshot_model_name = "moonshot-v1-8k"
moonshot_base_url = "https://api.moonshot.cn/v1"
```
| Key | Required | Default |
| --- | --- | --- |
| `moonshot_api_key` | Yes | |
| `moonshot_model_name` | Yes | `"moonshot-v1-8k"` |
| `moonshot_base_url` | No | `https://api.moonshot.cn/v1` |
## Azure OpenAI

Create a deployment in Azure AI Studio. Your `azure_base_url` is the endpoint shown in the Azure portal (e.g. `https://<resource>.openai.azure.com/`), and `azure_model_name` must match the deployment name you created.
```toml
llm_provider = "azure"
azure_api_key = "..."
azure_base_url = "https://<resource>.openai.azure.com/"
azure_model_name = "gpt-35-turbo"        # your deployment name
azure_api_version = "2024-02-15-preview"
```
| Key | Required | Default |
| --- | --- | --- |
| `azure_api_key` | Yes | |
| `azure_base_url` | Yes | |
| `azure_model_name` | Yes | `"gpt-35-turbo"` |
| `azure_api_version` | No | `"2024-02-15-preview"` |
`azure_model_name` must be the deployment name in your Azure workspace, not the model family name.
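The distinction matters because Azure requests go to a deployment-specific URL assembled from these fields. A sketch of the standard Azure OpenAI chat-completions URL pattern (the helper name and resource name are illustrative, not the project's code):

```python
def azure_chat_url(base_url: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions URL for a given deployment."""
    return (
        f"{base_url.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

url = azure_chat_url(
    "https://myresource.openai.azure.com/", "gpt-35-turbo", "2024-02-15-preview"
)
print(url)
```

If the deployment segment of that URL does not match a deployment that actually exists in your resource, Azure returns a 404, which is the most common symptom of setting `azure_model_name` to a model family name instead.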
## Ollama

Ollama runs models locally with no API key required. Start the Ollama server and pull a model before pointing MoneyPrinterTurbo at it.
```shell
ollama pull llama3
ollama serve          # starts on http://localhost:11434 by default
```
```toml
llm_provider = "ollama"
ollama_model_name = "llama3"

# Only needed if Ollama is running on a different host or port
# ollama_base_url = "http://localhost:11434/v1"
ollama_base_url = ""
```
| Key | Required | Default |
| --- | --- | --- |
| `ollama_model_name` | Yes | |
| `ollama_base_url` | No | `http://localhost:11434/v1` |
Browse available models at ollama.com/library.
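The `/v1` suffix on the base URL is Ollama's OpenAI-compatible API, so any OpenAI-style client can talk to it. A stdlib-only sketch that builds (but does not send) a chat request against the local server; the helper and prompt are illustrative:

```python
import json
import urllib.request

def build_chat_request(base_url: str = "http://localhost:11434/v1",
                       model: str = "llama3") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local Ollama server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Write a one-sentence video hook."}],
    }
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request()
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` only succeeds once `ollama serve` is running and the model has been pulled.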
## Pollinations

Pollinations offers free public access without an account. An API key is optional and only needed for private or higher-rate access.
```toml
llm_provider = "pollinations"
pollinations_model_name = "openai-fast"
pollinations_base_url = "https://pollinations.ai/api/v1"

# Leave empty for public (free) access
pollinations_api_key = ""
```
| Key | Required | Default |
| --- | --- | --- |
| `pollinations_model_name` | No | `"openai-fast"` |
| `pollinations_base_url` | No | `https://pollinations.ai/api/v1` |
| `pollinations_api_key` | No | |
## ModelScope

Get your API key at modelscope.cn. You must bind an Alibaba Cloud account before the API key activates.
```toml
llm_provider = "modelscope"
modelscope_api_key = "..."
modelscope_model_name = "Qwen/Qwen3-32B"
modelscope_base_url = "https://api-inference.modelscope.cn/v1/"
```
| Key | Required | Default |
| --- | --- | --- |
| `modelscope_api_key` | Yes | |
| `modelscope_model_name` | Yes | `"Qwen/Qwen3-32B"` |
| `modelscope_base_url` | No | `https://api-inference.modelscope.cn/v1/` |
ModelScope uses streaming responses internally. The `enable_thinking` parameter is set to `false` automatically.
## OneAPI

OneAPI is a self-hosted, OpenAI-compatible gateway that aggregates multiple providers behind one endpoint.
```toml
llm_provider = "oneapi"
oneapi_api_key = "..."
oneapi_base_url = "http://your-oneapi-host/v1"
oneapi_model_name = "gpt-4o"
```
| Key | Required | Default |
| --- | --- | --- |
| `oneapi_api_key` | Yes | |
| `oneapi_base_url` | Yes | |
| `oneapi_model_name` | Yes | |
## gpt4free (g4f)

gpt4free provides free, unofficial access to several LLMs. No API key is needed, but availability is not guaranteed.
```toml
llm_provider = "g4f"
g4f_model_name = "gpt-3.5-turbo"
```
| Key | Required | Default |
| --- | --- | --- |
| `g4f_model_name` | No | `"gpt-3.5-turbo"` |
g4f relies on unofficial, reverse-engineered APIs, so availability and reliability may vary; use it for testing only. See the gpt4free model list for supported values.

## Choosing a provider

| Provider | Cost | Quality | Setup effort |
| --- | --- | --- | --- |
| OpenAI | Paid | High | Low |
| Gemini | Free tier available | High | Low |
| DeepSeek | Low cost | High | Low |
| Ollama | Free (local GPU/CPU) | Depends on model | Medium |
| Pollinations | Free | Moderate | None |
| g4f | Free | Variable | None |
For production use, OpenAI (`gpt-4o-mini`) and DeepSeek (`deepseek-chat`) offer a good balance of quality and cost. For local, offline use, Ollama with a capable model (e.g. `llama3`) is the best option.
