AMD has released MiniDXNN, an open-source MLP inference library built natively for HLSL and DirectX 12. It targets game developers and graphics programmers who want to integrate small neural networks (e.g., neural radiance caching, neural texture compression) into DX12 rendering pipelines without compute API interop overhead. The library leverages AMD Radeon RX 9000 series matrix cores via cooperative vector APIs, delivering performance comparable to dedicated ML frameworks. This initial release includes optimized MLP inference kernels, unit tests, sample applications, full source code, and documentation. Training support is planned for a future release. Cooperative vector support currently requires AMD developer drivers and RX 9000 series hardware.