A walkthrough on configuring GitHub Copilot CLI to use local LLMs via Ollama instead of cloud-based models. Covers installation, environment variable setup, non-interactive/CI mode, and model selection (Qwen, Llama, Mistral, DeepSeek Coder). The author concludes that while the setup is technically feasible and appealing for privacy and offline use, it requires at least 48 GB of VRAM to be practically workable, making it unsuitable for most consumer hardware.
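The setup the article describes can be sketched roughly as follows. The `ollama` commands are real; the environment variable names are assumptions for illustration only — the actual names expected by Copilot CLI should be taken from the article or the official docs.

```shell
# Pull a local coding model (Qwen is one of the models the article evaluates).
ollama pull qwen2.5-coder

# Ollama serves an OpenAI-compatible API on localhost:11434 by default.
# Point the CLI at it via environment variables.
# NOTE: these variable names are hypothetical placeholders, not confirmed.
export COPILOT_API_BASE="http://localhost:11434/v1"   # hypothetical name
export COPILOT_MODEL="qwen2.5-coder"                  # hypothetical name
```

With variables like these exported, the CLI would route completions to the local endpoint instead of the cloud, which is what enables the offline and privacy-preserving use the author is after.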

4 min read · From bartwullems.blogspot.com
Table of contents

- Why combine Copilot CLI with Ollama?
- Quick Setup: Copilot CLI + Ollama
- Manual Setup (Environment Variables)
- Non-interactive mode (CI/CD, Automation)
- Choosing the right local model
- Is it workable?
- Final thoughts
- More information
