A comprehensive guide to building a private AI infrastructure on local hardware. The setup involves configuring Proxmox virtualization with GPU passthrough, deploying NixOS VMs with Docker and NVIDIA drivers, and running Ollama with Open WebUI for local LLM access. Tailscale provides secure remote access to the AI stack from anywhere.
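As a rough sketch of the final layer of the stack, the Ollama and Open WebUI containers can be launched along these lines (an illustrative example based on the projects' standard Docker invocations; the guide's actual volume names, ports, and networking may differ):

```shell
# Run Ollama with GPU access (requires the NVIDIA Container Toolkit on the host).
# The named volume "ollama" persists downloaded models across container restarts.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Run Open WebUI and point it at the Ollama API on the Docker host.
# host.docker.internal is mapped to the host gateway so the two containers
# can talk without a shared Docker network; adjust if your setup differs.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

With Tailscale running on the VM, the Open WebUI port (3000 here) can then be reached from any device on the tailnet.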