OpenClaw Model Setup: OpenAI, Anthropic, Ollama


Basic guide to configuring models in OpenClaw. Links to official docs for complete provider documentation.

3 min read · Last updated Feb 18, 2026
Stuck? Check the troubleshooting index or ask in Discord.

Overview

OpenClaw supports multiple model providers including OpenAI, Anthropic, Google, Ollama, and more. This guide covers how to configure models, set up fallbacks, and optimize for your use case.

Prerequisites
You should have OpenClaw installed, and either a local model backend (Ollama, vLLM) running or provider API keys ready.

Quick Start

The fastest way to get started is setting environment variables:

.env
# Option 1: OpenAI
OPENAI_API_KEY=sk-...

# Option 2: Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Option 3: Ollama (local)
MODEL_BACKEND_URL=http://localhost:11434

Then set your model in config:

config.yaml
agents:
  defaults:
    model:
      primary: openai/gpt-5
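To illustrate how the three quick-start options relate, here is a hypothetical sketch of resolving a provider from the environment variables above. The variable names match this guide, but the resolution order and the function itself are illustrative assumptions, not OpenClaw's actual logic:

```python
def resolve_provider(env: dict) -> str:
    """Pick a provider based on which credentials are set.

    Illustrative only: checks keys in a fixed order and treats a
    localhost MODEL_BACKEND_URL as Ollama.
    """
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("MODEL_BACKEND_URL", "").startswith("http://localhost"):
        return "ollama"
    raise RuntimeError("No model credentials found; set one of the options above")

print(resolve_provider({"ANTHROPIC_API_KEY": "sk-ant-..."}))  # anthropic
```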

Model Providers

OpenClaw supports many providers. Here's how to configure each:

OpenAI

env
OPENAI_API_KEY=sk-...
MODEL_BACKEND_URL=https://api.openai.com/v1

Anthropic

env
ANTHROPIC_API_KEY=sk-ant-...
MODEL_BACKEND_URL=https://api.anthropic.com

Ollama (Local)

env
MODEL_BACKEND_URL=http://localhost:11434

Make sure Ollama is running: ollama serve
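If you want to verify the server is up before starting OpenClaw, a quick reachability check works: Ollama's root endpoint responds with HTTP 200 ("Ollama is running") when the server is listening. The helper below is a small sketch using only the standard library:

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # Ollama's root endpoint replies with "Ollama is running"
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if not ollama_reachable():
    print("Ollama is not running; start it with: ollama serve")
```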

Google (Gemini)

env
GOOGLE_API_KEY=your-google-api-key
MODEL_BACKEND_URL=https://generativelanguage.googleapis.com/v1

Azure OpenAI

env
AZURE_OPENAI_API_KEY=your-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT=gpt-4

Configuration Options

Full model configuration in your config file:

config.yaml
agents:
  defaults:
    model:
      primary: anthropic/claude-sonnet-4-5
      imageModel:
        primary: openai/gpt-5o
      thinkingDefault: "low"  # off | low | medium | high
      verboseDefault: "off"   # off | on
      elevatedDefault: "on"   # on | off
      timeoutSeconds: 600
      mediaMaxMb: 5
      contextTokens: 200000
      maxConcurrent: 3
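To make the structure concrete, here is a hypothetical sketch of reading those model defaults out of a parsed config. The dict mirrors the YAML above; the accessor function and its fallback behavior are illustrative, not OpenClaw's API:

```python
# Mirrors the agents.defaults.model block from config.yaml above.
config = {
    "agents": {
        "defaults": {
            "model": {
                "primary": "anthropic/claude-sonnet-4-5",
                "thinkingDefault": "low",
                "timeoutSeconds": 600,
                "contextTokens": 200000,
            }
        }
    }
}

def model_setting(cfg: dict, key: str, default=None):
    """Fetch one key from agents.defaults.model, with a fallback default."""
    return cfg.get("agents", {}).get("defaults", {}).get("model", {}).get(key, default)

print(model_setting(config, "primary"))           # anthropic/claude-sonnet-4-5
print(model_setting(config, "maxConcurrent", 3))  # 3 (falls back to the default)
```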

Fallback Models

Set fallback models in case your primary fails:

config.yaml
agents:
  defaults:
    model:
      primary: anthropic/claude-opus-4-6
      fallbacks:
        - anthropic/claude-sonnet-4-5
        - openai/gpt-5
        - google/gemini-2-pro
How fallbacks work
If the primary model fails (rate limit, error, timeout), OpenClaw automatically tries the next model in the list.
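The fallback behavior described above can be sketched as a simple loop: try the primary, then each fallback in order, and only fail once the whole chain is exhausted. The `complete()` callable here is a stand-in for a real provider call, and all names are illustrative:

```python
def complete_with_fallbacks(prompt, models, complete):
    """Try each model in order; return the first successful completion."""
    errors = {}
    for model in models:
        try:
            return complete(model, prompt)
        except Exception as exc:  # rate limit, timeout, provider error, ...
            errors[model] = exc
    raise RuntimeError(f"All models failed: {errors}")

# Stub provider: the primary always fails, the first fallback succeeds.
def fake_complete(model, prompt):
    if model == "anthropic/claude-opus-4-6":
        raise TimeoutError("primary timed out")
    return f"{model}: ok"

chain = ["anthropic/claude-opus-4-6", "anthropic/claude-sonnet-4-5", "openai/gpt-5"]
print(complete_with_fallbacks("hi", chain, fake_complete))
# anthropic/claude-sonnet-4-5: ok
```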

Model Aliases

Create shortcuts for commonly used models:

config.yaml
agents:
  defaults:
    models:
      "opus":
        provider: anthropic/claude-opus-4-6
      "sonnet":
        provider: anthropic/claude-sonnet-4-5
      "gpt":
        provider: openai/gpt-5
      "mini":
        provider: openai/gpt-5-mini

Now you can switch models with /model opus or /model gpt.

Context Windows & Tuning

Adjust context settings for your needs:

Context Tokens

Set the maximum context window. A larger window retains more conversation history but increases cost per request.

yaml
contextTokens: 200000  # Claude 3 Opus max
contextTokens: 128000  # GPT-4 Turbo max
contextTokens: 32000   # Smaller models

Temperature

Control randomness (0 = deterministic, 1 = creative).

yaml
models:
  "my-model":
    params:
      temperature: 0.7

Max Tokens

Limit response length.

yaml
models:
  "my-model":
    params:
      maxTokens: 4096

Image Models

Configure a separate model for image analysis (used when primary model doesn't support images):

config.yaml
agents:
  defaults:
    model:
      primary: anthropic/claude-opus-4-6
      imageModel:
        primary: openrouter/qwen/qwen-2.5-vl-72b-instruct:free
        fallbacks:
          - openrouter/google/gemini-2.0-flash-vision:free
Free image models
OpenRouter offers several free vision models that work well for basic image analysis.
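The routing described above can be sketched as: if the request contains an image and the primary model isn't vision-capable, hand the request to the image model instead. The vision-capable set and the request shape below are assumptions for illustration, not OpenClaw internals:

```python
# Assumed: only the configured image model handles vision in this setup.
VISION_CAPABLE = {"openrouter/qwen/qwen-2.5-vl-72b-instruct:free"}

def pick_model(request, primary, image_model):
    """Route to image_model when the request has an image the primary can't handle."""
    has_image = any(part.get("type") == "image" for part in request)
    if has_image and primary not in VISION_CAPABLE:
        return image_model
    return primary

request = [{"type": "text", "text": "What is in this picture?"},
           {"type": "image", "url": "photo.png"}]
print(pick_model(request, "anthropic/claude-opus-4-6",
                 "openrouter/qwen/qwen-2.5-vl-72b-instruct:free"))
```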