Meta AI's LLM Compiler is an open Llama-architecture model trained on LLVM intermediate representation (IR) and Linux x86-64 assembly, with a 16k-token context window. Compute Heavy Industries reviewed the model with a focus on translating LLVM IR to assembly. They benchmarked it on an RTX 3090 GPU and discussed its four modes of operation. Key findings included limits on the size of functions the model can handle, the need to supply instruction-count and binary-size parameters, and problems when combining generated assembly with existing binaries. Despite these challenges, the model demonstrates the potential of deep learning for code generation, with promising results on smaller functions.
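The instruction-count and binary-size figures mentioned above have to be derived from the code being translated. As a rough illustration (a hypothetical helper, not code from the review), instruction count can be estimated by scanning assembly text and skipping blank lines, labels, directives, and comments:

```python
def count_instructions(asm: str) -> int:
    """Rough x86-64 instruction count: ignore blanks, labels,
    assembler directives (.text, .globl, ...), and comments."""
    count = 0
    for line in asm.splitlines():
        line = line.strip()
        if not line or line.startswith(('#', '.')) or line.endswith(':'):
            continue
        count += 1
    return count

asm = """
.text
.globl add
add:
    movl %edi, %eax
    addl %esi, %eax
    ret
"""
print(count_instructions(asm))  # 3
```

A real pipeline would instead take these numbers from the assembler or object file, but a text-level estimate like this is enough to populate the prompt parameters for experimentation.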
Table of contents

- LLM Compiler
- Setup
- The Prompts
- Our Test Program
- Prompt Parameters
- Inference
- Evaluation
- Results
- Thoughts