Meta Unveils Large Language Model Compiler for Code Optimization

It has been trained on a massive dataset of 546 billion tokens of LLVM-IR and assembly code and fine-tuned to better interpret compiler behavior.

  • Meta introduces LLM Compiler, a collection of pre-trained models designed to enhance code optimization.
  • It is built on Code Llama and trained on extensive datasets of LLVM-IR and assembly code.

Facebook’s parent company, Meta, has introduced the Large Language Model Compiler (LLM Compiler), a collection of powerful, freely available pre-trained models aimed at improving code optimization tasks.

The LLM Compiler, built on Code Llama, improves the understanding of compiler intermediate representations (IRs), assembly language, and optimization techniques. Trained on a massive dataset of 546 billion tokens of LLVM-IR and assembly code, it has been fine-tuned to better interpret compiler behavior.
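
To make this concrete, here is a minimal sketch of how a developer might prompt such a model to reason about LLVM-IR using the Hugging Face transformers library. The checkpoint name (`facebook/llm-compiler-7b`) and the prompt format are assumptions for illustration, not details confirmed in the announcement; consult Meta's model card for the actual identifiers and expected input format.

```python
# Minimal sketch: prompting an LLM Compiler checkpoint to optimize LLVM-IR.
# ASSUMPTIONS: the checkpoint id "facebook/llm-compiler-7b" and the plain-text
# prompt format below are illustrative guesses; verify against Meta's model card.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "facebook/llm-compiler-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A toy LLVM-IR function we would like the model to optimize.
llvm_ir = """
define i32 @square(i32 %x) {
entry:
  %mul = mul nsw i32 %x, %x
  ret i32 %mul
}
"""

prompt = f"Optimize the following LLVM-IR with -O3:\n{llvm_ir}"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```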

Image Source: Meta’s Research Paper

“Large Language Models (LLMs) have demonstrated remarkable capabilities across a variety of software engineering and coding tasks. However, their application in the domain of code and compiler optimization remains underexplored. Training LLMs is resource-intensive, requiring substantial GPU hours and extensive data collection, which can be prohibitive,” reads Meta’s research paper. 

According to the announcement, Meta is releasing the LLM Compiler in 7B and 13B parameter versions under a permissive license for both research and commercial use. The goal is to make it easier for developers and researchers alike to build on the models and advance research in this space.

Meta has been pushing hard on artificial intelligence (AI). The company recently released its AI chatbot, Meta AI, powered by Llama 3, to all users in India. Users can now access Meta AI within feeds, chats, and other areas across the company's apps to accomplish tasks, generate content, and explore topics without switching apps.


Edited by Harshajit Sarmah
