Meta details the rapid evolution of its in-house MTIA (Meta Training and Inference Accelerator) chip family, developed with Broadcom. Four generations — MTIA 300, 400, 450, and 500 — were shipped in under two years, with HBM bandwidth increasing 4.5x and compute FLOPS increasing 25x from MTIA 300 to 500. The strategy centers on

From ai.meta.com · 10-minute read