NVIDIA NemoClaw is an open-source reference stack for deploying a secure, sandboxed AI coding agent locally on NVIDIA DGX Spark hardware. The tutorial covers the full setup: configuring Docker with the NVIDIA container runtime, installing Ollama to serve the Nemotron 3 Super 120B model locally, installing the NemoClaw stack via a one-line installer, and connecting the agent to Telegram for remote access. OpenShell provides sandbox isolation with real-time network policy controls, ensuring all inference stays on-device with no external data dependencies. The guide also covers Web UI access, SSH tunneling for remote machines, policy approval workflows for extending agent network access, and management commands for ongoing operations.
Table of contents
- Quick links to the model and code
- Prerequisites
- Configure the runtimes
- Install Ollama
- Install NemoClaw
- Connect to Telegram
- What commands can I reference for deployment?
- Commands for a clean uninstall
- Extending agent access with policy approvals
- Get started
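The end-to-end flow outlined above can be sketched as a short shell session. This is a minimal sketch, not the tutorial's exact commands: the Ollama model tag and the NemoClaw installer are assumptions, and you should copy the real ones from the guide's quick links.

```shell
# 1. Configure Docker to use the NVIDIA container runtime.
#    (nvidia-ctk ships with the NVIDIA Container Toolkit.)
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# 2. Install Ollama via its official install script, then pull the model.
#    "nemotron" below is a hypothetical tag; use the tag from the guide.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull nemotron

# 3. Install the NemoClaw stack with its one-line installer.
#    Installer URL intentionally omitted; copy it from the official guide.

# 4. Sanity-check GPU access from inside a container.
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```

If step 4 prints the `nvidia-smi` device table, the container runtime is wired up correctly and the stack's containers will be able to see the GPU.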