Overview

The --model (or -m) option allows you to specify which AI model Qwen Code should use for the session. Different models offer different capabilities, speeds, and costs.

Syntax

qwen --model <model-name>
qwen -m <model-name>

Available Models

Qwen Models (DashScope)

Best for complex coding tasks
qwen --model qwen-coder-plus
Specifications:
  • Context: 256K tokens
  • Quality: Highest
  • Speed: Medium
  • Cost: $$$
Use for:
  • Multi-file refactoring
  • Complex algorithms
  • Architecture decisions
  • Large codebase navigation

OpenAI Models

When using OpenAI authentication:
# GPT-4 Turbo
qwen --model gpt-4-turbo --auth-type openai

# GPT-4
qwen --model gpt-4 --auth-type openai

# GPT-3.5 Turbo
qwen --model gpt-3.5-turbo --auth-type openai

Anthropic Models

When using Anthropic authentication:
# Claude 3 Opus (most capable)
qwen --model claude-3-opus-20240229 --auth-type anthropic

# Claude 3 Sonnet (balanced)
qwen --model claude-3-sonnet-20240229 --auth-type anthropic

# Claude 3 Haiku (fastest)
qwen --model claude-3-haiku-20240307 --auth-type anthropic

Basic Usage

Specify Model at Launch

qwen --model qwen-coder-plus

With Other Options

# Headless mode with specific model
qwen --model qwen-turbo --prompt "Quick question"

# YOLO mode with powerful model
qwen --model qwen-coder-plus --yolo --prompt "Refactor entire project"

# JSON output with balanced model
qwen --model qwen-max --prompt "Generate code" --output-format json

Model Selection Strategy

For Development Tasks

# Complex refactoring
qwen --model qwen-coder-plus --prompt "Refactor authentication system"

# Quick fixes
qwen --model qwen-turbo --prompt "Fix this type error"

# Code review
qwen --model qwen-max --prompt "Review these changes"

For Different Project Sizes

# Large monorepo (needs large context)
cd large-project
qwen --model qwen-coder-plus

# Small utility (fast model is fine)
cd small-util
qwen --model qwen-turbo

# Medium project (balanced)
cd medium-app
qwen --model qwen-max

Model Comparison

Model            Context  Speed  Quality  Cost   Best For
qwen-coder-plus  256K     ⭐⭐     ⭐⭐⭐      $$$    Complex coding
qwen-max         200K     ⭐⭐     ⭐⭐       $$     General purpose
qwen-turbo       128K     ⭐⭐⭐    ⭐⭐       $      Quick tasks
gpt-4-turbo      128K     ⭐⭐     ⭐⭐⭐      $$$$   Advanced reasoning
claude-3-opus    200K     ⭐⭐     ⭐⭐⭐      $$$$   Complex analysis
claude-3-haiku   200K     ⭐⭐⭐    ⭐⭐       $      Fast responses

Configuration

Via Settings File

Set a default model:
// settings.json
{
  "ai": {
    "model": "qwen-coder-plus",
    "provider": "dashscope"
  }
}

Via Environment Variable

export QWEN_MODEL="qwen-coder-plus"
qwen

Project-Specific Defaults

// .qwen/settings.json
{
  "ai": {
    "model": "qwen-max",  // Override global default
    "provider": "dashscope"
  }
}

Precedence Order

  1. Command-line --model flag (highest priority)
  2. Environment variable QWEN_MODEL
  3. Project settings .qwen/settings.json
  4. Global settings ~/.qwen/settings.json (lowest priority)
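The precedence order can be sketched as a first-non-empty lookup. This is illustrative only; `resolve_model` is a hypothetical helper introduced here, not part of the qwen CLI:

```shell
# Return the first non-empty value, mirroring the precedence order:
# CLI flag > QWEN_MODEL env var > project settings > global settings.
resolve_model() {
  local candidate
  for candidate in "$@"; do
    if [ -n "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1
}

# No --model flag given, but QWEN_MODEL is set: the env var wins.
resolve_model "" "qwen-turbo" "qwen-max" "qwen-coder-plus"   # -> qwen-turbo
```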

Advanced Usage

Switch Models Mid-Session

In interactive mode:
qwen --model qwen-turbo
> Ask quick questions
> /model  # Switch to qwen-coder-plus
> Now do complex refactoring

Model-Specific Prompts

# Use fast model for exploration
qwen --model qwen-turbo --prompt "Explore the codebase and summarize"

# Then use powerful model for implementation
qwen --model qwen-coder-plus --continue --prompt "Implement the feature"

Cost Optimization

#!/bin/bash
# Use appropriate model based on task complexity

TASK_COMPLEXITY=$1
PROMPT=$2

if [ "$TASK_COMPLEXITY" = "simple" ]; then
  MODEL="qwen-turbo"
elif [ "$TASK_COMPLEXITY" = "complex" ]; then
  MODEL="qwen-coder-plus"
else
  MODEL="qwen-max"
fi

qwen --model "$MODEL" --prompt "$PROMPT"
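The complexity-to-model mapping in the script above can also be factored into a small `case`-based function that is easy to reuse and test. A sketch; `pick_model` is a name introduced here, not a qwen built-in:

```shell
# Map a task-complexity label to a model name.
pick_model() {
  case "$1" in
    simple)  echo "qwen-turbo" ;;       # cheap and fast
    complex) echo "qwen-coder-plus" ;;  # largest context, best quality
    *)       echo "qwen-max" ;;         # balanced default
  esac
}

pick_model simple    # -> qwen-turbo
pick_model complex   # -> qwen-coder-plus
pick_model unknown   # -> qwen-max (fallback)
```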

Custom Models

OpenAI-Compatible APIs

Use custom or self-hosted models:
qwen --model custom-model \
     --openai-base-url https://your-api.com/v1 \
     --openai-api-key your-key

Configuration for Custom Models

{
  "ai": {
    "provider": "openai",
    "model": "custom-model",
    "baseUrl": "https://your-api.com/v1",
    "apiKey": "${CUSTOM_API_KEY}"
  }
}

Model Capabilities

Tool Support

All models support Qwen Code tools:
  ✅ File operations (read, write, edit)
  ✅ Shell commands
  ✅ Code search
  ✅ Git operations
  ✅ Web search
Feature Comparison

Feature           qwen-coder-plus  qwen-max   qwen-turbo
Multi-file edits  Excellent        Good       Basic
Code generation   Excellent        Good       Good
Explanations      Excellent        Excellent  Good
Debugging         Excellent        Good       Basic
Speed             Medium           Medium     Fast
Context size      256K             200K       128K

Real-World Examples

Web Development

# Frontend (fast iteration)
qwen --model qwen-turbo --prompt "Add a button to the navbar"

# Backend (complex logic)
qwen --model qwen-coder-plus --prompt "Implement OAuth flow"

# Full-stack (balanced)
qwen --model qwen-max --prompt "Build user profile page"

Data Science

# Quick analysis
qwen --model qwen-turbo --prompt "Plot this data"

# Complex ML model
qwen --model qwen-coder-plus --prompt "Build gradient boosting model"

DevOps

# Simple scripts
qwen --model qwen-turbo --prompt "Write deploy script"

# Complex infrastructure
qwen --model qwen-coder-plus --prompt "Design Kubernetes architecture"

Performance Tips

Match Model to Task

# Don't use powerful model for simple tasks
# Bad:
qwen --model qwen-coder-plus --prompt "What's 2+2?"

# Good:
qwen --model qwen-turbo --prompt "What's 2+2?"

Use Fast Models for Iteration

# Iterate quickly with turbo
for i in {1..5}; do
  qwen --model qwen-turbo --prompt "Try approach $i"
done

# Then implement with coder-plus
qwen --model qwen-coder-plus --prompt "Implement best approach"

Monitor Token Usage

# Check context usage and capture the token count
TOKENS=$(qwen --model qwen-coder-plus --prompt "Task" --output-format json | \
  jq '.usage.totalTokens')

# Switch model if approaching limit
if [ "$TOKENS" -gt 200000 ]; then
  # Context nearly full, compress or switch model
  qwen --prompt "/compress"
fi

Troubleshooting

Model Not Available

Error: Model 'qwen-coder-plus' is not available
Solutions:
  1. Check authentication:
    qwen --auth-type dashscope --dashscope-api-key "$KEY"
    
  2. Verify model name:
    # List available models
    qwen --model invalid 2>&1 | grep "Available models"
    
  3. Check API access:
    # Test with curl
    curl -H "Authorization: Bearer $DASHSCOPE_API_KEY" \
         https://dashscope.aliyuncs.com/api/v1/models
    

Authentication Issues

Error: Authentication type not available
Solution:
# Set auth type matching the model
qwen --model qwen-coder-plus --auth-type dashscope

# Or for OpenAI models
qwen --model gpt-4 --auth-type openai

Invalid Model Name

Error: Unknown model: 'qwen-coder'
Correct names:
  • qwen-coder-plus (not qwen-coder)
  • qwen-turbo (not qwen-fast)
  • qwen-max (not qwen-large)

Best Practices

Begin with fast models, upgrade as needed:
# Explore with turbo
qwen --model qwen-turbo
> Understand the codebase

# Switch to coder-plus for implementation
> /model
# Select qwen-coder-plus
> Implement the feature

Set appropriate defaults per project:
// Large enterprise app
{
  "ai": {
    "model": "qwen-coder-plus"  // Need large context
  }
}

// Small utility
{
  "ai": {
    "model": "qwen-turbo"  // Speed matters more
  }
}

Monitor costs and adjust:
# Track usage
qwen --prompt "Task" --output-format json | \
  jq '.stats.estimatedCost'

# Switch to cheaper model if budget-conscious
qwen --model qwen-turbo

Choose models with appropriate context:
# Large codebase, many files
qwen --model qwen-coder-plus  # 256K context

# Small, focused task
qwen --model qwen-turbo  # 128K sufficient

See Also

/model Command

Switch models in interactive mode

Authentication

Set up model provider authentication

Model Comparison

Detailed model benchmarks and comparison

Configuration

Configure default models and settings
