A step-by-step guide to running OpenClaw (an AI coding agent) locally using Ollama instead of cloud APIs. Covers two setups: a beginner single-machine setup and an advanced two-device setup using a Jetson Nano running OpenClaw 24/7 paired with an old gaming laptop as an Ollama LLM server. Topics include installing Ollama, choosing the right local model (balancing speed vs quality), configuring networking with static IPs, enabling Wake-on-LAN, setting context length, and connecting OpenClaw to a remote Ollama server. Key motivations are cost savings, privacy, and reliability when cloud providers go down.
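The networking side of the two-device setup can be sketched in a few shell commands. This is a hedged illustration, not the guide's exact steps: the IP address, MAC address, and model name below are placeholders, and OpenClaw's own endpoint setting should be taken from its documentation.

```shell
# On the laptop acting as the LLM server: make Ollama listen on all
# interfaces instead of only localhost, and raise the context length.
# OLLAMA_HOST and OLLAMA_CONTEXT_LENGTH are Ollama environment variables;
# the values here are examples.
OLLAMA_HOST=0.0.0.0:11434 OLLAMA_CONTEXT_LENGTH=8192 ollama serve &

# Pull a model sized to the hardware (smaller = faster, larger = higher quality).
# Model name is an example choice, not the guide's recommendation.
ollama pull qwen2.5-coder:7b

# From the always-on Jetson Nano: wake the laptop over the LAN first.
# Requires the `wakeonlan` tool and Wake-on-LAN enabled in the laptop's
# BIOS/firmware; the MAC address is a placeholder.
wakeonlan AA:BB:CC:DD:EE:FF

# Verify the server is reachable at its static IP (placeholder address);
# /api/tags lists the models the server has available.
curl http://192.168.1.50:11434/api/tags
```

OpenClaw would then be pointed at `http://192.168.1.50:11434` as its model endpoint; the static IP matters here so the address survives reboots of either machine.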