A walkthrough of configuring GitHub Copilot CLI to use local LLMs via Ollama instead of cloud-based models. It covers installation, environment-variable setup, non-interactive/CI mode, and model selection (Qwen, Llama, Mistral, DeepSeek Coder). The author concludes that while the setup is technically feasible and appealing for privacy and offline use, it requires at least 48 GB of VRAM to be practically workable, putting it out of reach of most consumer hardware.
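For orientation, the core of the setup looks roughly like the sketch below: Ollama serves an OpenAI-compatible API on localhost, and Copilot CLI is pointed at that endpoint through environment variables. The variable names used here are illustrative placeholders, not confirmed Copilot CLI settings; the Manual Setup section below covers the actual configuration.

```sh
# Start the Ollama server (listens on http://localhost:11434 by default)
ollama serve &

# Pull a coding-oriented model; any of the models discussed later works here
ollama pull qwen2.5-coder

# Point Copilot CLI at the local OpenAI-compatible endpoint.
# NOTE: these variable names are hypothetical placeholders; see the
# Manual Setup (Environment Variables) section for the exact names.
export COPILOT_API_BASE="http://localhost:11434/v1"
export COPILOT_MODEL="qwen2.5-coder"
```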
Table of contents
- Why combine Copilot CLI with Ollama?
- Quick Setup: Copilot CLI + Ollama
- Manual Setup (Environment Variables)
- Non‑interactive mode (CI/CD, Automation)
- Choosing the right local model
- Is it workable?
- Final thoughts
- More information