Manual installation gives you full control over the Python environment and is a good choice when Docker is not available, or when you want to develop or modify the project locally. System requirements: at least 4 CPU cores and 4 GB of RAM on Windows 10+, macOS 11+, or a modern Linux distribution; a GPU is not required.

Setup

Step 1: Clone the repository

git clone https://github.com/harry0703/MoneyPrinterTurbo.git
cd MoneyPrinterTurbo
Step 2: Create a Python virtual environment

Python 3.11 is recommended to match the Docker image and ensure all dependencies resolve correctly.
conda create -n MoneyPrinterTurbo python=3.11
conda activate MoneyPrinterTurbo
Step 3: Install Python dependencies

pip install -r requirements.txt
This installs all required packages:
moviepy==2.1.2
streamlit==1.45.0
edge_tts==6.1.19
fastapi==0.115.6
uvicorn==0.32.1
openai==1.56.1
faster-whisper==1.1.0
loguru==0.7.3
google.generativeai==0.8.3
dashscope==1.20.14
g4f==0.5.2.2
azure-cognitiveservices-speech==1.41.1
redis==5.2.0
python-multipart==0.0.19
pyyaml
requests>=2.31.0
Step 4: Install ImageMagick

ImageMagick is required for subtitle rendering; without it, video generation will fail. On macOS or Linux, install it with your package manager (for example `brew install imagemagick` or `sudo apt install imagemagick`). On Windows:
  1. Download the static library installer from imagemagick.org/script/download.php. Select the Windows version labeled static, for example: ImageMagick-7.1.1-32-Q16-x64-static.exe.
  2. Run the installer. Do not change the default installation path, and avoid paths containing Chinese characters.
  3. Open config.toml and set imagemagick_path to the actual path of the magick.exe binary:
[app]
imagemagick_path = "C:\\Program Files\\ImageMagick-7.1.1-Q16-HDRI\\magick.exe"
Use double backslashes (\\) for Windows paths in TOML files.
Step 5: Configure the application

Copy the example configuration and fill in your API keys:
cp config.example.toml config.toml
Open config.toml and set at minimum:
[app]
# Get a free key at https://www.pexels.com/api/
pexels_api_keys = ["your_pexels_api_key"]

# LLM provider: "openai", "moonshot", "azure", "ollama", "deepseek", etc.
llm_provider = "openai"
openai_api_key = "sk-..."
openai_model_name = "gpt-4o-mini"
MoneyPrinterTurbo supports many LLM providers. Set llm_provider to the provider you want to use and fill in that provider’s section in config.toml.
Step 6: Start the Web UI

Run the launcher script from the root directory of the project. On Windows, double-click webui.bat or run it from the command prompt; on macOS or Linux, run webui.sh instead:
webui.bat
The script sets PYTHONPATH automatically and launches Streamlit, and your browser should open automatically.
The Web UI runs at http://localhost:8501.
If you need to download the Whisper model and HuggingFace is not accessible in your region, uncomment the mirror line in webui.sh before launching:
export HF_ENDPOINT=https://hf-mirror.com
Step 7: Start the API server (optional)

The API server is separate from the Web UI and exposes a REST interface for programmatic use.
python main.py
Once running, the interactive API documentation (FastAPI's Swagger UI) is served at the /docs path, e.g. http://127.0.0.1:8080/docs if you kept the default port in config.toml.

ffmpeg

ffmpeg is required for video processing. In most environments it is downloaded automatically by the moviepy package at runtime. If automatic download fails, you will see an error like:
RuntimeError: No ffmpeg exe could be found.
Install ffmpeg on your system, or set the IMAGEIO_FFMPEG_EXE environment variable.
To resolve this, download a pre-built binary from gyan.dev/ffmpeg/builds, extract it, and set the path in config.toml:
[app]
# Windows example — use double backslashes
ffmpeg_path = "C:\\Users\\you\\Downloads\\ffmpeg.exe"

# macOS/Linux example
# ffmpeg_path = "/usr/local/bin/ffmpeg"

Subtitle generation modes

The subtitle_provider setting in config.toml controls how subtitles are generated:
| Mode | Speed | Quality | Notes |
| --- | --- | --- | --- |
| edge | Fast | Good | No extra setup required. Recommended default. |
| whisper | Slow | More accurate | Downloads a ~3 GB model from HuggingFace on first use. |
Leave subtitle_provider blank to disable subtitle generation entirely.
Whisper mode requires a reliable internet connection on first run to download the whisper-large-v3 model (~3 GB). If you are in a region where HuggingFace is blocked, download the model manually and place it at ./models/whisper-large-v3/.
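That fallback can be sketched as a small helper: use the manually downloaded model directory if it exists, otherwise hand faster-whisper the model name so it downloads it (optionally through the mirror, as in webui.sh). `whisper_model_source` is illustrative, not project code:

```python
# Sketch: prefer a local whisper-large-v3 copy, else let faster-whisper fetch it.
import os
from pathlib import Path

def whisper_model_source(models_dir: str = "./models") -> str:
    """Return a local model directory if present, else a downloadable model id."""
    local = Path(models_dir) / "whisper-large-v3"
    if local.is_dir():
        return str(local)  # use the manual download, no network needed
    # Route the first-run download through a mirror unless HF_ENDPOINT
    # is already set (same idea as the mirror line in webui.sh):
    os.environ.setdefault("HF_ENDPOINT", "https://hf-mirror.com")
    return "large-v3"  # faster-whisper downloads this model on first use

print(whisper_model_source("/nonexistent"))  # large-v3
```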
