
Prerequisites

Before installing simpE, ensure you have the following:

Required

Python 3.14+

simpE requires Python 3.14 or higher

uv Package Manager

Fast Python package and project manager

LM-Studio

Local LLM inference engine with API support

Git

For cloning the repository

System Requirements

  • Operating System: Linux, macOS, or Windows (with WSL)
  • RAM: 8GB minimum (16GB+ recommended for larger models)
  • Storage: 2GB for simpE + model storage space
  • Network: Internet connection for initial setup

Installation Steps

1. Install uv Package Manager

Install uv using the official installer:
curl -LsSf https://astral.sh/uv/install.sh | sh
Verify the installation:
uv --version
Expected output:
uv 0.x.x (or higher)
2. Install LM-Studio

Download and install LM-Studio from lmstudio.ai. After installation:
  1. Launch LM-Studio
  2. Download a model (e.g., Llama 3.2 1B, Qwen 2.5 3B)
  3. Load the model
  4. Enable the local API server
The default API endpoint is http://127.0.0.1:1234/v1. You can verify it’s running by visiting http://127.0.0.1:1234 in your browser.
3. Clone simpE Repository

Clone the simpE repository from GitHub:
git clone https://github.com/Dariton4000/simpE.git
Navigate to the project directory:
cd simpE
Verify the repository contents:
ls -la
You should see:
-rw-r--r-- README.md
-rw-r--r-- main.py
-rw-r--r-- analyze_results.py
-rw-r--r-- pyproject.toml
-rw-r--r-- uv.lock
4. Install Dependencies

Use uv to install all required dependencies:
uv sync
This will:
  • Create a virtual environment (.venv)
  • Install Python 3.14+ if not already available
  • Install required packages from pyproject.toml:
    • openai - For API communication
    • questionary - For interactive CLI prompts
The uv sync command reads from pyproject.toml:
[project]
name = "simpe"
version = "0.1.0"
requires-python = ">=3.14"
dependencies = [
    "openai",
    "questionary>=2.1.1",
]
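As a quick sanity check, you can confirm both dependencies import inside the environment uv just created. Start a shell with uv run python and paste the following:
# Confirm the two dependencies from pyproject.toml import correctly
# and report their installed versions.
from importlib.metadata import version

import openai
import questionary

print("openai", version("openai"))
print("questionary", version("questionary"))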

Configuration

Basic Configuration

Open main.py in your favorite editor and configure the following parameters near the top of the file (lines 14-23):
main.py
# Model Configuration
llm = ""  # Leave empty to auto-select the loaded LM-Studio model
baseurl = "http://127.0.0.1:1234/v1"  # LM-Studio API endpoint
reasoning_effort = "low"  # Reasoning level: "low", "medium", "high"

# Benchmark Parameters
tries = 100  # Number of tests per benchmark
timeout_time = 400  # Timeout in seconds (not yet implemented)
max_tokens = 512 * 1  # Maximum output tokens

Configuration Options Explained

Type: stringDefault: "" (empty string)Leave empty to automatically use the currently loaded model in LM-Studio. This is the recommended setting.
llm = ""  # Auto-select loaded model
The actual model name will be captured from API responses and used for naming result files.
Type: stringDefault: "http://127.0.0.1:1234/v1"The OpenAI-compatible API endpoint for your LLM server.
baseurl = "http://127.0.0.1:1234/v1"  # LM-Studio default
# baseurl = "http://localhost:8080/v1"  # Custom port
# baseurl = "http://192.168.1.100:1234/v1"  # Remote server
Type: stringDefault: "low"Options: "low", "medium", "high"Controls the reasoning effort for models that support explicit reasoning modes.
reasoning_effort = "low"  # Fast, basic reasoning
# reasoning_effort = "medium"  # Balanced
# reasoning_effort = "high"  # Deep, thorough reasoning
Higher reasoning efforts significantly increase response time and token usage.
tries

Type: int. Default: 100.
Number of tests to run per benchmark type.
tries = 100  # Standard evaluation
# tries = 10  # Quick test
# tries = 1000  # Comprehensive evaluation
Total runtime ≈ tries × 3 benchmarks × average response time
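For a rough worked example (the 2.5-second average is an assumption; actual response times vary by model and hardware):
# Back-of-the-envelope runtime estimate for the default configuration.
tries = 100
benchmarks = 3  # simpE runs three benchmark types
avg_response_seconds = 2.5  # assumption; measure your own setup
print(f"~{tries * benchmarks * avg_response_seconds / 60:.1f} minutes")  # ~12.5 minutes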
max_tokens

Type: int. Default: 512.
Maximum number of tokens the model can generate per response.
max_tokens = 512 * 1  # Standard models
# max_tokens = 512 * 2  # Reasoning models
# max_tokens = 512 * 4  # Deep reasoning models
Set this too low and reasoning models may be cut off mid-response.
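Putting these options together, here is a minimal sketch of a single benchmark request; the empty model string and the reasoning_effort parameter mirror the configuration above, and reasoning_effort support depends on your model, server, and openai SDK version. This is a sketch, not simpE's actual code:
# Sketch of one request using the options above.
# Assumptions: LM-Studio is serving at the default endpoint, and the
# installed openai SDK accepts the reasoning_effort parameter.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")
response = client.chat.completions.create(
    model="",  # empty string: LM-Studio falls back to the loaded model
    messages=[{"role": "user", "content": "Reverse the string 'hello'."}],
    reasoning_effort="low",  # "low", "medium", or "high"
    max_tokens=512,
)
print(response.model)  # served model name, used to name result files
print(response.choices[0].message.content)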

Advanced Configuration

For advanced users, additional configuration options are available:
main.py
# Directory Configuration (lines 10-11)
logs_directory = "logs"  # Where to store execution logs
results_directory = "results"  # Where to save benchmark results

# Timeout Configuration (line 20)
timeout_time = 400  # Seconds before timing out a response (not yet implemented)
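The startup messages shown in the quick test below ("Directory 'logs' created successfully.") suggest a bootstrap along these lines; a sketch of the implied behavior, not simpE's actual code:
# Create the log and result directories if they don't exist yet;
# the names come from the directory configuration above.
import os

logs_directory = "logs"
results_directory = "results"

for directory in (logs_directory, results_directory):
    if not os.path.isdir(directory):
        os.mkdir(directory)
        print(f"Directory '{directory}' created successfully.")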

Verification

Verify your installation is working correctly:
1. Test the CLI Entry Point

Run simpE without executing benchmarks:
uv run python -c "from main import main; print('Import successful')"
Expected output:
Import successful
2. Verify LM-Studio Connection

Test the API endpoint:
curl http://127.0.0.1:1234/v1/models
Expected output (example):
{
  "object": "list",
  "data": [
    {
      "id": "meta-llama-llama-3.2-1b-instruct",
      "object": "model",
      "created": 1709493600,
      "owned_by": "lm-studio"
    }
  ]
}
If you get a connection error, make sure LM-Studio is running with the API server enabled.
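If curl isn't available, the same check works from Python with the openai package simpE depends on (the api_key value is a placeholder; LM-Studio doesn't validate it):
# List the models LM-Studio is serving, mirroring the curl check above.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")
for model in client.models.list().data:
    print(model.id)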
3. Run a Quick Test

Temporarily set tries to 1 in main.py for a quick verification:
tries = 1  # Just for testing
Run the benchmark:
uv run simpe
You should see output similar to:
Directory 'logs' created successfully.
Directory 'results' created successfully.
String Reversal 1/1  0.00%
Thinking... 2.34s
COMPLETE String Reversal: 1/1
Results: 100.00%
Don’t forget to change tries back to 100 (or your preferred value) after testing.
4. Verify Results and Logs

Check that output directories were created:
ls -la
You should see:
drwxr-xr-x logs/
drwxr-xr-x results/
Verify files were created:
ls logs/ results/
Expected output:
logs/:
log_2026-03-03_14-23-45.txt
log_recent.txt

results/:
result_model-name_2026-03-03_14-23-45.json
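To peek inside a result file, a minimal sketch that only prints the top-level structure (the exact JSON schema isn't documented here):
# Load the most recent benchmark result and show its top-level structure.
import json
from pathlib import Path

latest = max(Path("results").glob("result_*.json"), key=lambda p: p.stat().st_mtime)
data = json.loads(latest.read_text())
print(latest.name)
print(list(data) if isinstance(data, dict) else data)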

Troubleshooting

Common Issues

Problem: Shell can’t find the uv command after installation.
Solution: Restart your terminal or manually add uv to your PATH:
# Add to ~/.bashrc or ~/.zshrc
export PATH="$HOME/.local/bin:$PATH"
Then reload:
source ~/.bashrc  # or source ~/.zshrc
Problem: API endpoint not accessible.
Solutions:
  1. Verify LM-Studio is running
  2. Check that a model is loaded
  3. Ensure the API server is enabled (look for the server toggle in LM-Studio)
  4. Try accessing http://127.0.0.1:1234 in your browser
  5. Check if another application is using port 1234
Problem: System Python is older than 3.14.
Solution: uv will automatically download and use Python 3.14+. Verify with:
uv run python --version
Should output:
Python 3.14.x
Problem: Can’t create logs/ or results/ directories.
Solution: Ensure you have write permissions in the simpE directory:
chmod +w .
Or run from a directory where you have write access.
Problem: Missing openai or questionary modules.
Solution: Re-run the sync command:
uv sync --reinstall
This will reinstall all dependencies from scratch.

Next Steps

Quick Start Guide

Run your first benchmark suite

Understanding Benchmarks

Learn about the three benchmark types

Analysis Guide

Deep dive into result analysis

GitHub Repository

View source code and contribute

System Architecture

Understanding how simpE works:
┌─────────────────┐
│   simpE CLI     │
│   (main.py)     │
└────────┬────────┘
         │
         ├──► Logs Directory
         │    (execution logs)
         │
         ├──► Results Directory
         │    (JSON benchmark data)
         │
         └──► LM-Studio API
              (http://127.0.0.1:1234/v1)
                     │
              ┌──────┴──────┐
              │  LM-Studio  │
              │   (Local)   │
              └──────┬──────┘
                     │
              ┌──────┴──────┐
              │  LLM Model  │
              │  (Loaded)   │
              └─────────────┘
simpE is designed to work with any OpenAI-compatible API endpoint, not just LM-Studio. You can point it to other local inference engines or even remote APIs by changing the baseurl parameter.
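Ollama, for example, exposes an OpenAI-compatible endpoint on its default port, so (assuming a default install) switching to it should be a one-line change in main.py:
# main.py: point simpE at a different OpenAI-compatible server.
# Assumption: Ollama running locally on its default port (11434).
baseurl = "http://127.0.0.1:11434/v1"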
