PyTorch 2.11 has been released with 2723 commits from 432 contributors. Key highlights include differentiable collectives for distributed training (enabling backpropagation through collective operations), FlexAttention with a FlashAttention-4 backend on Hopper and Blackwell GPUs (1.2×–3.2× speedups over Triton), expanded MPS
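
FlexAttention has been part of recent 2.x releases, so as a rough illustration of the API the new FlashAttention-4 backend plugs into, here is a minimal sketch. The shapes and the causal score_mod are illustrative only, and it is assumed that backend selection on Hopper/Blackwell happens automatically under torch.compile rather than through an explicit flag:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# Illustrative shapes: batch, heads, sequence length, head dim.
B, H, S, D = 2, 8, 1024, 64
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)

# score_mod expresses attention variants; this is the standard causal example.
def causal(score, b, h, q_idx, kv_idx):
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

# Compiling flex_attention is what lets PyTorch lower it to a fused kernel
# for the target GPU (assumed here to be where the new backend is chosen).
compiled_flex = torch.compile(flex_attention)
out = compiled_flex(q, k, v, score_mod=causal)
```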
