Run Local AI for Free: OpenClaw + Ollama
Run AI models locally with Ollama. No API costs, complete privacy. Configure OpenClaw to use local models.
Overview
Ollama lets you run AI models locally on your machine. No API costs, complete privacy, and full control. OpenClaw integrates with Ollama's native API, supporting streaming and tool calling.
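Ollama's native API streams responses as newline-delimited JSON: one chunk per line, each carrying a fragment of the assistant message, with a final object marked `"done": true`. A minimal sketch of reassembling such a stream (the helper name and sample payload are illustrative, not part of OpenClaw):

```python
import json

def assemble_stream(ndjson_lines):
    """Join the incremental message chunks from an Ollama /api/chat
    streaming response (one JSON object per line) into the full reply."""
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        # Non-final chunks carry a fragment of the assistant message;
        # the final chunk ("done": true) carries only metadata.
        if not chunk.get("done"):
            parts.append(chunk["message"]["content"])
    return "".join(parts)

# Example chunks in the shape Ollama emits:
stream = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo"}, "done": false}',
    '{"done": true}',
]
print(assemble_stream(stream))  # prints "Hello"
```

OpenClaw handles this stream parsing for you; the sketch only shows the wire format you would see when talking to Ollama directly.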
Quick Start
1. Install Ollama
Download from ollama.ai
2. Pull a model
ollama pull llama3.3
# or
ollama pull mistral
# or
ollama pull qwen2.5-coder
3. Enable in OpenClaw
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
4. Use the model
openclaw models set ollama/llama3.3
Installation
Download and install Ollama from the official website. After installation, Ollama runs as a service on your machine.
# Linux
curl -fsSL https://ollama.ai/install.sh | sh
# macOS / Windows - download the installer from ollama.ai
Verify the installation:
ollama --version
Configuration
OpenClaw auto-discovers Ollama when OLLAMA_API_KEY is set. For custom setups (for example, Ollama listening on a non-default host or port), configure the provider explicitly:
{
"models": {
"providers": {
"ollama": {
"baseUrl": "http://localhost:11434",
"apiKey": "ollama-local"
}
}
}
}
Or use an environment variable:
export OLLAMA_API_KEY="ollama-local"
Available Models
Popular models for OpenClaw:
llama3.3: General purpose, excellent reasoning. ~70B parameters.
mistral: Great balance of speed and capability.
qwen2.5-coder: Optimized for code. Best for programming tasks.
deepseek-r1: Reasoning model with strong capabilities.
See all models:
ollama list
openclaw models list
Troubleshooting
Ollama not detected
Make sure Ollama is running:
ollama serve
# Verify
curl http://localhost:11434/api/tags
No models available
Pull a model:
ollama pull llama3.3
Connection refused
Check if Ollama is running:
ps aux | grep ollama
ollama serve
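These checks can also be scripted. A small Python sketch (the function name is illustrative, not an OpenClaw API) that probes Ollama's /api/tags endpoint, listing installed models or returning None when the server is unreachable:

```python
import json
import urllib.error
import urllib.request

def installed_models(base_url="http://localhost:11434"):
    """Query Ollama's /api/tags endpoint; return the list of installed
    model names, or None if the server is not reachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = installed_models()
if models is None:
    print("Ollama is not running - start it with `ollama serve`")
elif not models:
    print("Ollama is up but has no models - run `ollama pull llama3.3`")
else:
    print("Installed:", ", ".join(models))
```

This distinguishes the two failure modes above: a connection error means the Ollama service is not running, while an empty model list means it is running but nothing has been pulled yet.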