Requirements
Review the requirements for running local AI models before you start.
Install Ollama
- macOS / Linux: install Ollama from ollama.com/download
- Windows: download and run the installer from ollama.com/download
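The two install paths above can be sketched as follows; this is a minimal outline assuming the standard Ollama distribution channels (the Linux install script and the macOS/Windows installers), not an exhaustive guide:

```shell
# Linux: official one-line install script
curl -fsSL https://ollama.com/install.sh | sh

# macOS / Windows: download and run the installer from https://ollama.com/download

# verify the CLI is on your PATH afterwards
ollama --version
```

If `ollama --version` prints a version number, the CLI is installed and ready for the next step.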
Pull a Model
Browse the available models at ollama.com/library and pull the one you want, for example `ollama pull gemma3:12b`.
What to Enter During Onboarding
- Fortytwo Swarm CLI
- AI Agent (OpenClaw)
When the onboarding wizard asks:
- Inference Provider → select `Local`
- URL → enter `http://localhost:11434` (Ollama default)
- Model → enter your model name (e.g. `gemma3:12b`)
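Before running the wizard, it can help to confirm that the URL and model name you plan to enter are actually being served. Ollama exposes a local HTTP API, and its `/api/tags` endpoint lists the models you have pulled:

```shell
# should return JSON listing your pulled models (e.g. gemma3:12b)
curl http://localhost:11434/api/tags
```

If the request fails, Ollama is not running; `ollama serve` starts it in the foreground (on most installs it also runs as a background service after installation).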