Impact of Large Language Models of Code on Fault Localization


Detailed bibliography
Published in: 2025 IEEE Conference on Software Testing, Verification and Validation (ICST), pp. 302-313
Main authors: Ji, Suhwan; Lee, Sanghwa; Lee, Changsup; Han, Yo-Sub; Im, Hyeonseung
Format: Conference paper
Language: English
Publisher: IEEE, 31 March 2025
Online access: Get full text
Abstract Identifying the point of error is imperative in software debugging. Traditional fault localization (FL) techniques rely on executing the program and using the code coverage matrix in tandem with test case results to calculate a suspiciousness score for each method or line. Recently, learning-based FL techniques have harnessed machine learning models to extract meaningful features from the code coverage matrix and improve FL performance. These techniques, however, require compilable source code, existing test cases, and specialized tools for generating the code coverage matrix for each programming language of interest. In this paper, we propose, for the first time, a simple but effective sequence generation approach for fine-tuning large language models of code (LLMCs) for FL tasks. LLMCs have recently received much attention for various software engineering problems. In line with these, we leverage the innate understanding of code that LLMCs have acquired through pre-training on large code corpora. Specifically, we fine-tune 13 representative encoder, encoder-decoder, and decoder-based LLMCs (across 7 different architectures) for FL tasks. Unlike previous approaches, LLMCs can analyze code sequences that do not compile. Still, they have a limitation on the length of the input data. Therefore, for a fair comparison with existing FL techniques, we extract methods with errors from the project-level benchmark, Defects4J, and analyze them at the line level. Experimental results show that LLMCs fine-tuned with our approach successfully pinpoint error positions in 50.6%, 64.2%, and 72.3% of 1,291 methods in Defects4J for Top-1/3/5 prediction, outperforming the best learning-based state-of-the-art technique by up to 1.35, 1.12, and 1.08 times, respectively. We also conduct an in-depth investigation of key factors that may affect the FL performance of LLMCs. Our findings suggest promising research directions for FL and automated program repair tasks using LLMCs.
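The abstract describes framing line-level fault localization as sequence generation over a method's raw (possibly non-compilable) source. As a minimal illustration of what such a fine-tuning example might look like, the sketch below builds an (input, target) pair by tagging each source line and asking the model to emit the tag of the faulty line. The tag scheme and the `make_fl_example` helper are assumptions for illustration, not the paper's actual data format.

```python
# Hedged sketch: one plausible way to turn a buggy method into a
# sequence-generation training pair for an LLMC, assuming each line is
# prefixed with a synthetic tag like <2> so the model can name a line
# directly instead of counting tokens. Illustrative only.

def make_fl_example(method_source: str, faulty_lines: list[int]) -> tuple[str, str]:
    """Build an (input, target) pair for fine-tuning.

    The input is the method text with line tags; the target is the
    tag(s) of the faulty line(s), in ascending order.
    """
    tagged = [
        f"<{i}> {line}"
        for i, line in enumerate(method_source.splitlines(), start=1)
    ]
    model_input = "\n".join(tagged)
    target = " ".join(f"<{i}>" for i in sorted(faulty_lines))
    return model_input, target


# Example: a method that compiles but contains a classic overflow bug on line 2.
buggy = """int mid(int a, int b) {
    return (a + b) / 2;  // may overflow
}"""
x, y = make_fl_example(buggy, [2])
```

Note that nothing here requires the method to compile; the input is treated purely as text, which is the property the abstract highlights over coverage-based FL techniques.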
Authors and affiliations:
1. Ji, Suhwan (shji@yonsei.ac.kr), Yonsei University, Seoul, Republic of Korea
2. Lee, Sanghwa (lion0738@kangwon.ac.kr), Kangwon National University, Chuncheon, Republic of Korea
3. Lee, Changsup (cslee@kangwon.ac.kr), Kangwon National University, Chuncheon, Republic of Korea
4. Han, Yo-Sub (emmous@yonsei.ac.kr), Yonsei University, Seoul, Republic of Korea
5. Im, Hyeonseung (hsim@kangwon.ac.kr), Kangwon National University, Chuncheon, Republic of Korea
ContentType Conference Proceeding
DOI 10.1109/ICST62969.2025.10989036
EISBN 9798331508142
EndPage 313
ExternalDocumentID 10989036
Genre orig-research
IsPeerReviewed false
IsScholarly false
Language English
PageCount 12
PublicationDate 2025-March-31
PublicationTitle 2025 IEEE Conference on Software Testing, Verification and Validation (ICST)
PublicationTitleAbbrev ICST
PublicationYear 2025
Publisher IEEE
StartPage 302
SubjectTerms Benchmark testing
Codes
Computer architecture
Deep Learning
Fault Localization
Feature extraction
Fine-Tuning
Large Language Model of Code
Large language models
Location awareness
Software debugging
Software engineering
Software testing
Source coding
Vulnerability Detection
Title Impact of Large Language Models of Code on Fault Localization
URI https://ieeexplore.ieee.org/document/10989036