Ollama Models
Run AI models locally: free, private, and on your own hardware.
Why Ollama?
- Free: No API costs, you only pay for electricity
- Private: Your data never leaves your machine
- Fast: No network latency once models are loaded
- Offline: Works without an internet connection
Popular Models
- 🦙 Llama 3.1: Meta's latest. A strong all-around model. Needs 4GB+ RAM.
- 🫧 Mistral: Fast and efficient. Good for coding. Needs 4GB+ RAM.
- 💻 Code Llama: Specialized for code. Great for developers. Needs 4GB+ RAM.
- 🌊 Phi: Microsoft's small model. Runs on just 2GB of RAM.
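
Once Ollama is installed (see Setup below), any of these can be downloaded and tried straight from the terminal. A minimal sketch, assuming the tags below still match what ollama.com/library publishes:

# Download a model from the Ollama library
ollama pull llama3.1

# Chat with it interactively to confirm it works (Ctrl+D to exit)
ollama run llama3.1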
Setup
- Install Ollama:
  curl -fsSL https://ollama.com/install.sh | bash
- Run the onboarding:
  openclaw onboard
- Select "Ollama" as your provider
- Choose your model (or let it auto-detect)
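
If onboarding can't detect Ollama, the server probably isn't running. A quick sanity check, assuming Ollama's default port of 11434:

# Start the server if it isn't already running
# (the installer normally sets it up as a background service)
ollama serve

# In another terminal: the local API should answer with your installed models
curl http://localhost:11434/api/tags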
Manual Setup
Or configure manually:
{
  "models": {
    "default": "ollama/llama3.1",
    "providers": {
      "ollama": {
        "url": "http://localhost:11434"
      }
    }
  }
}
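
The url field tells openclaw where to reach the Ollama API; http://localhost:11434 is Ollama's default. It can also point at another machine, say a GPU box on your LAN. A sketch with a hypothetical address (192.168.1.50), assuming that machine's Ollama is reachable:

# On the remote machine: make Ollama listen on all interfaces
# (by default it binds only to localhost)
OLLAMA_HOST=0.0.0.0 ollama serve

Then set "url" to "http://192.168.1.50:11434" (or whatever the machine's actual address is) in the config above.

Requirements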
- Mac: Apple Silicon (M1 or later) works best
- Linux: Any modern distro with 8GB+ RAM
- Windows: WSL2 recommended
- RAM: 8GB minimum, 16GB recommended
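
Not sure how much RAM a machine has? A quick check (Linux and macOS respectively):

# Linux: total and available memory, human-readable
free -h

# macOS: physical memory in bytes
sysctl hw.memsize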