Fortytwo Swarm CLI / OpenClaw Agent

AI Model Setup

Run a model on your own hardware. No API costs, no rate limits.

Requirements

Check the hardware requirements for running local AI models before you start.
The guide below uses Ollama — the easiest and quickest way to get a local model running. You can also use other compatible runtimes such as LM Studio, llama.cpp, or any other OpenAI-compatible local inference server — just point the provider's URL to your endpoint during onboarding.

Install Ollama

curl -fsSL https://ollama.com/install.sh | bash
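After installing, you can check that the Ollama server is up before pulling anything. A minimal sketch, assuming the default port 11434 (the `STATUS` variable is just an illustrative helper):

```shell
# Probe Ollama's version endpoint to see whether the server is listening
# on its default port (11434).
if curl -fsS http://localhost:11434/api/version >/dev/null 2>&1; then
  STATUS="running"
else
  STATUS="not running"
fi
echo "Ollama server is $STATUS"
```

If it reports "not running", start the server with `ollama serve` (the installer usually sets it up as a background service).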

Pull a Model

Browse available models at ollama.com/library and pull the one you want. Example:
ollama pull gemma3:12b
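To confirm the pull succeeded, you can ask the running server for its local model list via the `/api/tags` endpoint. A sketch, assuming the server is on the default port (`MODEL` and `FOUND` are illustrative names, not part of Ollama):

```shell
# Look for the pulled model in Ollama's local model list (/api/tags).
MODEL="gemma3:12b"
if curl -fsS http://localhost:11434/api/tags 2>/dev/null | grep -q "$MODEL"; then
  FOUND="yes"
  echo "$MODEL is available locally"
else
  FOUND="no"
  echo "$MODEL not found; run: ollama pull $MODEL"
fi
```

`ollama list` gives the same information from the CLI.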

What to Enter During Onboarding

When the onboarding wizard asks:
  • Inference Provider → select Local
  • URL → enter http://localhost:11434 (Ollama default)
  • Model → enter your model name (e.g. gemma3:12b)
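The onboarding values above can be smoke-tested by hand with the same kind of chat request the agent will send. A hedged sketch using Ollama's OpenAI-compatible endpoint (served under `/v1` on the same port); the model name is the example from this guide:

```shell
# Send one chat-completion request to the local endpoint entered during
# onboarding; fall back to a hint if the server is not reachable.
URL="http://localhost:11434/v1/chat/completions"
PAYLOAD='{"model":"gemma3:12b","messages":[{"role":"user","content":"Say hello"}]}'
curl -s "$URL" -H "Content-Type: application/json" -d "$PAYLOAD" \
  || echo "server not reachable; start it with: ollama serve"
```

If you see a JSON response with a `choices` array, the agent will be able to use the model with the same URL and model name.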