AutoPatchBench is a new benchmark for evaluating how effectively AI-assisted tools repair security vulnerabilities discovered through fuzzing. By standardizing the evaluation of AI patch generation systems, it promotes transparency and reproducibility in research. AutoPatchBench is available on GitHub and offers insight into AI's role in automating and improving vulnerability repair in software.
Table of contents
- Fixing fuzzing-found vulnerabilities with AI
- Why we developed AutoPatchBench
- Inside AutoPatchBench
- Reference implementation
- A case study with AutoPatchBench-Lite
- Get started with AutoPatchBench