This guide covers how to optimize neural network inference on AMD hardware using ONNX Runtime with the DirectML execution provider. The tutorial demonstrates how to leverage DirectX 12 to improve performance when running machine learning models on AMD GPUs.
Table of contents
Introduction
1. Problem description
2. Setting up ONNX Runtime with DirectX 12
3. Mapping DirectX 12 resource into ONNX Runtime
4. Faster preprocessing and postprocessing
5. Conclusion