A step-by-step guide to deploying the NVIDIA AI-Q blueprint with LangChain to build enterprise deep research agents. Covers setup with Docker Compose, configuring shallow and deep research agents using Nemotron and GPT models, monitoring execution traces with LangSmith, and extending the system with custom enterprise data sources via the NeMo Agent Toolkit. The deep agent architecture uses a planner-researcher sub-agent pattern with structured context passing to avoid token bloat and improve multi-step reasoning quality.
Table of contents
- What you’ll build: A deep agent
- Set up
- How to build long-running data agents with NVIDIA and LangChain
- Install and run the blueprint
- Customize AI-Q: Workflow, tracing and model configuration
- Monitor the traces
- Optimize a deep agent
- Add a data source
- Going further
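The planner-researcher sub-agent pattern with structured context passing mentioned above can be sketched in plain Python. This is an illustrative outline only, not the AI-Q blueprint's actual API: the names `ResearchNote`, `plan_topics`, `research`, and `deep_research` are hypothetical, and the LLM calls a real agent would make are stubbed out.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchNote:
    """Structured result a researcher sub-agent returns to the planner."""
    topic: str
    summary: str                 # condensed finding, not raw tool output
    sources: list[str] = field(default_factory=list)

def plan_topics(question: str) -> list[str]:
    """Planner: decompose the question into focused sub-topics.
    (A real agent would call an LLM here.)"""
    return [f"{question} - background", f"{question} - current approaches"]

def research(topic: str) -> ResearchNote:
    """Researcher sub-agent: investigate one topic and return a compact,
    structured note instead of dumping full documents into context."""
    return ResearchNote(topic=topic,
                        summary=f"Key findings on {topic}",
                        sources=["doc://example"])

def deep_research(question: str) -> str:
    """Planner loop: fan out to researchers, then aggregate only the
    structured summaries, which keeps token usage bounded per step."""
    notes = [research(t) for t in plan_topics(question)]
    return "\n".join(f"- {n.topic}: {n.summary}" for n in notes)

print(deep_research("agentic RAG"))
```

Passing back `ResearchNote` objects rather than raw retrieved text is what avoids token bloat: each research step contributes a bounded summary to the planner's context regardless of how much material the sub-agent read.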