Greening Large Language Models of Code

Detailed Bibliography
Published in: IEEE/ACM International Conference on Software Engineering: Software Engineering in Society (Online), pp. 142-153
Main authors: Shi, Jieke; Yang, Zhou; Kang, Hong Jin; Xu, Bowen; He, Junda; Lo, David
Format: Conference paper
Language: English
Published: ACM, 14.04.2024
ISSN: 2832-7616
Description
Summary: Large language models of code have shown remarkable effectiveness across various software engineering tasks. Despite the availability of many cloud services built upon these powerful models, there remain several scenarios where developers cannot take full advantage of them, stemming from factors such as restricted or unreliable internet access, institutional privacy policies that prohibit external transmission of code to third-party vendors, and more. Therefore, developing a compact, efficient, and yet energy-saving model for deployment on developers' devices becomes essential. To this end, we propose Avatar, a novel approach that crafts a deployable model from a large language model of code by optimizing it in terms of model size, inference latency, energy consumption, and carbon footprint while maintaining a comparable level of effectiveness (e.g., prediction accuracy on downstream tasks). The key idea of Avatar is to formulate the optimization of language models as a multi-objective configuration tuning problem and solve it with the help of a Satisfiability Modulo Theories (SMT) solver and a tailored optimization algorithm. The SMT solver is used to form an appropriate configuration space, while the optimization algorithm identifies the Pareto-optimal set of configurations for training the optimized models using knowledge distillation. We evaluate Avatar with two popular language models of code, i.e., CodeBERT and GraphCodeBERT, on two popular tasks, i.e., vulnerability prediction and clone detection. We use Avatar to produce optimized models with a small size (3 MB), which is 160× smaller than the original large models. On the two tasks, the optimized models significantly reduce the energy consumption (up to 184× less), carbon footprint (up to 157× less), and inference latency (up to 76× faster), with only a negligible loss in effectiveness (1.67%).

Lay Abstract: Large language models of code have proven to be highly effective for various software engineering tasks, such as spotting program defects and helping developers write code. While many cloud services built on these models (e.g., GitHub Copilot) are now accessible, several factors, such as unreliable internet access (e.g., over 20% of GitHub Copilot's issues are related to network connectivity [22]) and privacy concerns (e.g., Apple has banned the internal use of external AI tools to protect confidential data [53]), hinder developers from fully utilizing these services. Therefore, deploying language models of code on developers' devices like laptops appears promising. However, local deployment faces challenges: (1) consumer-grade personal devices typically lack sufficient memory and the high-performance CPUs/GPUs required for efficient model execution; (2) even if the hardware requirements are met, deploying the models on many devices can result in considerable energy consumption and carbon emissions, negatively impacting environmental sustainability. To address these challenges, we present Avatar, an innovative approach that optimizes large language models of code and enables their deployment on consumer-grade devices. Avatar can optimize two popular models from a large size of 481 MB to a compact size of 3 MB, resulting in significant reductions in inference time, energy consumption, and carbon emissions by hundreds of times. Our technique effectively lowers the entry barrier for leveraging large language models of code, making them available to ordinary developers without the need for high-performance computing equipment. Furthermore, it contributes to a more sustainable and user-friendly software development environment.
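As a rough illustration of the configuration-tuning idea described in the summary, the sketch below uses the Z3 SMT solver to enumerate hyperparameter configurations that satisfy a model-size budget. The hyperparameter names, candidate values, and parameter-count formula are illustrative assumptions, not the configuration space actually used by Avatar.

```python
# A minimal sketch of forming a size-constrained configuration space with an
# SMT solver, assuming the hypothetical hyperparameters below (not Avatar's
# actual search space). Requires the z3-solver package.
from z3 import Int, Or, Solver, sat

vocab_size = Int("vocab_size")
hidden_dim = Int("hidden_dim")
num_layers = Int("num_layers")

solver = Solver()
# Each hyperparameter must take one of a few candidate values.
solver.add(Or([vocab_size == v for v in (1000, 2000, 4000, 8000)]))
solver.add(Or([hidden_dim == d for d in (16, 32, 64, 128)]))
solver.add(Or([num_layers == n for n in (1, 2, 3, 6)]))

# Rough parameter count: embedding table plus a simplified per-layer term.
param_count = vocab_size * hidden_dim + num_layers * 12 * hidden_dim * hidden_dim
# Size budget: 3 MB with 4-byte weights (an assumed, simplified constraint).
solver.add(4 * param_count <= 3 * 1024 * 1024)

# Enumerate a handful of feasible configurations.
variables = (vocab_size, hidden_dim, num_layers)
for _ in range(5):
    if solver.check() != sat:
        break
    model = solver.model()
    print({v.decl().name(): model[v].as_long() for v in variables})
    # Block the configuration just found so the next check yields a new one.
    solver.add(Or([v != model[v] for v in variables]))
```

Per the summary, Avatar then applies a tailored multi-objective optimization algorithm over such a constrained space to identify Pareto-optimal configurations, which are trained via knowledge distillation from the original model.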
DOI: 10.1145/3639475.3640097