A walkthrough of setting up local LLM integration in Visual Studio Code using the Continue extension and Gemma 4 (an 8B-parameter model) via Ollama. Covers installing VS Code from scratch, downloading the Gemma 4 model with Ollama, installing and configuring the Continue extension, adjusting tool permissions for agent mode, and completing simple coding tasks with the local model. Also notes that paid models such as Claude or Gemini can be connected via API keys using the same setup.
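The Continue-to-Ollama hookup described above boils down to a small model entry in Continue's config file. A minimal sketch (the title and model tag here are assumptions; substitute whatever tag `ollama list` reports for your Gemma download):

```json
{
  "models": [
    {
      "title": "Gemma (local)",
      "provider": "ollama",
      "model": "gemma"
    }
  ]
}
```

Swapping in a paid model follows the same shape: change `provider` to the hosted service and add its `apiKey` field, as the video notes.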
• 8m watch time