Federated Low-Rank Adaptation for Large Models Fine-Tuning Over Wireless Networks



Bibliographic Details
Published in: IEEE Transactions on Wireless Communications, Vol. 24, No. 1, pp. 659-675
Main Authors: Sun, Haofeng; Tian, Hui; Ni, Wanli; Zheng, Jingheng; Niyato, Dusit; Zhang, Ping
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2025
ISSN: 1536-1276, 1558-2248
Description
Summary: The emergence of large language models (LLMs) with multi-task generalization capabilities is expected to improve the performance of artificial intelligence (AI)-as-a-service provision in 6G networks. By fine-tuning LLMs, AI services can become more precise and tailored to the demands of different downstream tasks. However, centralized fine-tuning paradigms pose a potential risk to user privacy, and existing distributed fine-tuning methods incur significant wireless transmission burdens due to the large-scale parameter transmission of LLMs. To tackle these challenges, by leveraging the low-rank feature in LLM fine-tuning, we propose a wireless over-the-air federated learning (AirFL) based low-rank adaptation (LoRA) framework that integrates LoRA and over-the-air computation (AirComp) to achieve efficient fine-tuning and aggregation. Based on multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM), we design a multi-stream AirComp scheme to fulfill the aggregation requirement of AirFL-LoRA. Furthermore, by deriving an optimality gap, we gain theoretical insights into the joint impact of rank selection and gradient aggregation distortion on the fine-tuning performance of AirFL-LoRA. Next, we formulate a non-convex problem to minimize the optimality gap, which is solved iteratively by the proposed backtracking-based alternating algorithm together with a manifold optimization algorithm. Through fine-tuning LLMs for different downstream tasks, experimental results reveal that the AirFL-LoRA framework outperforms the state-of-the-art baselines in both training loss and perplexity, closely approximating the performance of FL with ideal aggregation.
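
To make the aggregation idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes each client fine-tunes hypothetical low-rank LoRA factors B_k and A_k for a frozen pretrained weight W0, and models over-the-air aggregation as a noisy sum of the clients' low-rank updates. The layer dimensions, rank r, scaling alpha, client count K, and noise level sigma are all placeholder choices.

import numpy as np

rng = np.random.default_rng(0)

d_out, d_in = 64, 64      # hypothetical layer dimensions
r, alpha = 4, 8.0         # LoRA rank and scaling (placeholder values)
K, sigma = 8, 0.01        # number of clients and channel-noise level

W0 = rng.standard_normal((d_out, d_in))  # frozen pretrained weight

# Each client k trains only low-rank factors B_k (d_out x r) and A_k (r x d_in),
# so only r * (d_out + d_in) parameters per layer are sent, not d_out * d_in.
deltas = []
for k in range(K):
    B_k = rng.standard_normal((d_out, r)) * 0.01
    A_k = rng.standard_normal((r, d_in)) * 0.01
    deltas.append((alpha / r) * B_k @ A_k)  # client k's low-rank update

# Over-the-air computation: simultaneous analog transmission lets the channel
# itself form the sum of the updates; receiver noise distorts the aggregate.
noise = sigma * rng.standard_normal((d_out, d_in))
W_new = W0 + sum(deltas) / K + noise

print("Frobenius norm of aggregated update:", np.linalg.norm(W_new - W0))

The noise term here stands in for the gradient aggregation distortion whose joint effect with rank selection the paper analyzes through its optimality gap.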
DOI: 10.1109/TWC.2024.3497998