LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning


Detailed Bibliography
Published in: Proceedings - International Symposium on Software Reliability Engineering, pp. 647-658
Main authors: Lu, Junyi; Yu, Lei; Li, Xiaojia; Yang, Li; Zuo, Chun
Format: Conference paper
Language: English
Publication details: IEEE, 09.10.2023
ISSN: 2332-6549
Online access: Get full text
Abstract The automation of code review activities, a long-standing pursuit in software engineering, has been primarily addressed by numerous domain-specific pre-trained models. Despite their success, these models frequently demand extensive resources for pre-training from scratch. In contrast, Large Language Models (LLMs) provide an intriguing alternative, given their remarkable capabilities when supplemented with domain-specific knowledge. However, their potential for automating code review tasks remains largely unexplored. In response to this research gap, we present LLaMA-Reviewer, an innovative framework that leverages the capabilities of LLaMA, a popular LLM, in the realm of code review. Mindful of resource constraints, this framework employs parameter-efficient fine-tuning (PEFT) methods, delivering high performance while using less than 1% of trainable parameters. An extensive evaluation of LLaMA-Reviewer is conducted on two diverse, publicly available datasets. Notably, even with the smallest LLaMA base model consisting of 6.7B parameters and a limited number of tuning epochs, LLaMA-Reviewer equals the performance of existing code-review-focused models. The ablation experiments provide insights into the influence of various fine-tuning process components, including input representation, instruction tuning, and different PEFT methods. To foster continuous progress in this field, the code and all PEFT-weight plugins have been made open-source.
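The abstract's central technical claim is that adapter-style PEFT keeps the full LLaMA backbone frozen while training under 1% of the parameters. As a hedged illustration only, not the authors' released code, the following minimal sketch applies LoRA, one common PEFT method, to a LLaMA-family checkpoint via the Hugging Face peft library; the checkpoint id, rank, and target modules are illustrative assumptions rather than the paper's exact configuration.

# Minimal PEFT sketch: LoRA adapters on a LLaMA-style causal LM.
# The checkpoint id and all hyperparameters below are assumptions for
# illustration; they are not taken from the LLaMA-Reviewer paper.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # placeholder checkpoint

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank update dimension (assumed)
    lora_alpha=16,                        # adapter scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
)

model = get_peft_model(base, lora_cfg)
# Only the injected low-rank matrices require gradients; with settings like
# these, the trainable share is typically well below 1% of all parameters,
# consistent with the budget the abstract reports.
model.print_trainable_parameters()

An adapter trained this way can be saved and distributed on its own (model.save_pretrained writes only the LoRA weights), which is consistent with the abstract's mention of open-sourcing PEFT-weight plugins rather than full model checkpoints.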
Authors
– Lu, Junyi (lujunyi21@mails.ucas.ac.cn), Institute of Software, Chinese Academy of Sciences, Beijing, China
– Yu, Lei (yulei21@mails.ucas.ac.cn), Institute of Software, Chinese Academy of Sciences, Beijing, China
– Li, Xiaojia (lixj21@mails.tsinghua.edu.cn), Tsinghua University, School of Software, Beijing, China
– Yang, Li (yangli2017@iscas.ac.cn), Institute of Software, Chinese Academy of Sciences, Beijing, China
– Zuo, Chun (zuochun@sinosoft.com.cn), Sinosoft Company Limited, Beijing, China
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/ISSRE59848.2023.00026
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan All Online (POP All Online) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
Discipline Computer Science
EISBN 9798350315943
EISSN 2332-6549
EndPage 658
ExternalDocumentID 10299938
Genre orig-research
GrantInformation_xml – fundername: Science and Technology Service Network Plan
  funderid: 10.13039/501100013315
ISICitedReferencesCount 54
IsPeerReviewed false
IsScholarly true
Language English
PageCount 12
PublicationDate 2023-10-09
PublicationTitle Proceedings - International Symposium on Software Reliability Engineering
PublicationTitleAbbrev ISSRE
PublicationYear 2023
Publisher IEEE
StartPage 647
SubjectTerms Automation
Code Review Automation
Codes
Deep Learning
Large Language Models (LLMs)
LLaMA
Parameter-Efficient Fine-Tuning (PEFT)
Quality assurance
Software engineering
Software Quality Assurance
Software reliability
Task analysis
Tuning
Title LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning
URI https://ieeexplore.ieee.org/document/10299938