Cisco has launched the UCS C880A M8 Rack Server, featuring NVIDIA HGX B300 SXM GPUs and Intel Xeon 6th-Gen CPUs. NVIDIA rates the HGX B300 at up to 11× higher inference throughput and 4× faster training for large language models compared to the previous generation. The server supports up to 86 cores per socket, DDR5-6400 memory, and Intel's built-in AI accelerators. It integrates with Cisco Intersight for centralized management and fits within Cisco AI PODs for scalable deployment. Key use cases include real-time LLM inference, large-scale model training, AI pipelines, AI-native data centers, and HPC simulations.
Table of contents
- NVIDIA: HGX B300 — Unprecedented AI Performance
- Intel: Xeon 6th-Gen CPUs — CPU Power Meets AI Acceleration
- Cisco: Intersight Management + AI POD Integration
- Key Use Cases Enabled by HGX B300 (SXM)
- Summary Table
- Final Thoughts

Discover the power of next-gen AI infrastructure: read the Cisco UCS C880A M8 Data Sheet.