LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning
| Published in: | Proceedings - International Symposium on Software Reliability Engineering, pp. 647-658 |
|---|---|
| Main authors: | Lu, Junyi; Yu, Lei; Li, Xiaojia; Yang, Li; Zuo, Chun |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 09.10.2023 |
| Subjects: | Automation; Code Review Automation; Codes; Deep Learning; Large Language Models (LLMs); LLaMA; Parameter-Efficient Fine-Tuning (PEFT); Quality assurance; Software engineering; Software Quality Assurance; Software reliability; Task analysis; Tuning |
| ISSN: | 2332-6549 |
| Online access: | https://ieeexplore.ieee.org/document/10299938 |
| Abstract | The automation of code review activities, a long-standing pursuit in software engineering, has been primarily addressed by numerous domain-specific pre-trained models. Despite their success, these models frequently demand extensive resources for pre-training from scratch. In contrast, Large Language Models (LLMs) provide an intriguing alternative, given their remarkable capabilities when supplemented with domain-specific knowledge. However, their potential for automating code review tasks remains largely unexplored. In response to this research gap, we present LLaMA-Reviewer, an innovative framework that leverages the capabilities of LLaMA, a popular LLM, in the realm of code review. Mindful of resource constraints, this framework employs parameter-efficient fine-tuning (PEFT) methods, delivering high performance while using less than 1% of trainable parameters. An extensive evaluation of LLaMA-Reviewer is conducted on two diverse, publicly available datasets. Notably, even with the smallest LLaMA base model consisting of 6.7B parameters and a limited number of tuning epochs, LLaMA-Reviewer equals the performance of existing code-review-focused models. The ablation experiments provide insights into the influence of various fine-tuning process components, including input representation, instruction tuning, and different PEFT methods. To foster continuous progress in this field, the code and all PEFT-weight plugins have been made open-source. |
|---|---|
| Author | Yang, Li; Zuo, Chun; Li, Xiaojia; Yu, Lei; Lu, Junyi |
| Author_xml | 1. Junyi Lu (lujunyi21@mails.ucas.ac.cn), Institute of Software, Chinese Academy of Sciences, Beijing, China; 2. Lei Yu (yulei21@mails.ucas.ac.cn), Institute of Software, Chinese Academy of Sciences, Beijing, China; 3. Xiaojia Li (lixj21@mails.tsinghua.edu.cn), Tsinghua University, School of Software, Beijing, China; 4. Li Yang (yangli2017@iscas.ac.cn), Institute of Software, Chinese Academy of Sciences, Beijing, China; 5. Chun Zuo (zuochun@sinosoft.com.cn), Sinosoft Company Limited, Beijing, China |
| CODEN | IEEPAD |
| ContentType | Conference Proceeding |
| DOI | 10.1109/ISSRE59848.2023.00026 |
| Discipline | Computer Science |
| EISBN | 9798350315943 |
| EISSN | 2332-6549 |
| EndPage | 658 |
| ExternalDocumentID | 10299938 |
| Genre | orig-research |
| GrantInformation_xml | Science and Technology Service Network Plan (funder ID: 10.13039/501100013315) |
| ISICitedReferencesCount | 54 |
| IsPeerReviewed | false |
| IsScholarly | true |
| Language | English |
| PageCount | 12 |
| PublicationDate | 2023-Oct.-9 |
| PublicationTitle | Proceedings - International Symposium on Software Reliability Engineering |
| PublicationTitleAbbrev | ISSRE |
| PublicationYear | 2023 |
| Publisher | IEEE |
| StartPage | 647 |
| SubjectTerms | Automation; Code Review Automation; Codes; Deep Learning; Large Language Models (LLMs); LLaMA; Parameter-Efficient Fine-Tuning (PEFT); Quality assurance; Software engineering; Software Quality Assurance; Software reliability; Task analysis; Tuning |
| Title | LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning |
| URI | https://ieeexplore.ieee.org/document/10299938 |
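The abstract describes fine-tuning a LLaMA base model with parameter-efficient fine-tuning (PEFT) while updating less than 1% of the parameters. As a point of reference only, below is a minimal, hypothetical sketch of one common PEFT technique (LoRA, via the Hugging Face peft library). It is not the authors' released code; the model name, target modules, and hyperparameters are assumptions for illustration.

```python
# Minimal LoRA-style PEFT sketch (illustrative only; not the LLaMA-Reviewer code).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "huggyllama/llama-7b"  # assumed placeholder for a LLaMA base model

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA freezes the base weights and injects small trainable low-rank matrices
# into selected projection layers, so only a tiny fraction of parameters train.
lora_config = LoraConfig(
    r=8,                                   # low-rank dimension (assumed)
    lora_alpha=16,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # typical LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
# Prints trainable vs. total parameter counts; for LoRA on a ~7B model
# the trainable share is well under 1%.
peft_model.print_trainable_parameters()
```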