MIRA: A Method of Federated Multi-Task Learning for Large Language Models
Funding Sponsor
European Commission
Author's Department
Computer Science & Engineering Department
Second Author's Department
Electronics & Communications Engineering Department
Third Author's Department
Computer Science & Engineering Department
Fourth Author's Department
Electronics & Communications Engineering Department
Document Type
Research Article
Publication Title
IEEE Networking Letters
Publication Date
1-1-2025
DOI
10.1109/LNET.2025.3539810
Abstract
In this letter, we introduce a method for fine-tuning Large Language Models (LLMs) in a federated manner, inspired by multi-task learning. Our approach leverages the structure of each client's model and enables a learning scheme that accounts for the other clients' tasks and data distributions. To mitigate the extensive computational and communication overhead often associated with LLMs, we use a parameter-efficient fine-tuning method, specifically Low-Rank Adaptation (LoRA), to reduce the number of trainable parameters. Experimental results with different datasets and models demonstrate the proposed method's effectiveness compared to existing frameworks for federated fine-tuning of LLMs in terms of both global and local performance. The proposed scheme outperforms existing baselines by achieving a lower local loss for each client while maintaining comparable global performance.
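The following is a minimal sketch, not the authors' implementation, of the two ingredients the abstract names: a LoRA-adapted layer (a frozen base weight plus a trainable low-rank update) and a federated multi-task local step in which each client's loss is regularized toward other clients' adapter parameters. The names `LoRALinear`, `local_step`, `task_similarity`, and `lam`, and the specific form of the multi-task regularizer, are illustrative assumptions based only on the abstract.

```python
# Hypothetical sketch of LoRA + federated multi-task regularization.
# Not the paper's code; the regularizer form and all names are assumptions.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():   # base weights stay frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Only the low-rank factors A and B are trained and communicated.
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())


def local_step(model, batch, neighbor_params, task_similarity, lam, lr):
    """One client update: task loss plus proximity to similar clients' adapters.

    neighbor_params: list of dicts holding other clients' LoRA tensors.
    task_similarity: nonnegative weight per neighbor (illustrative).
    """
    x, y = batch
    loss = nn.functional.mse_loss(model(x), y)
    # Multi-task regularizer: pull this client's LoRA parameters toward
    # the adapters of clients with related tasks.
    for w, params in zip(task_similarity, neighbor_params):
        for name, p in model.named_parameters():
            if p.requires_grad and name in params:
                loss = loss + lam * w * (p - params[name]).pow(2).sum()
    loss.backward()
    with torch.no_grad():   # plain SGD step on the trainable LoRA factors
        for p in model.parameters():
            if p.requires_grad and p.grad is not None:
                p.add_(p.grad, alpha=-lr)
                p.grad = None
    return loss.item()


# Toy usage: one client, one neighbor, a synthetic regression batch.
model = LoRALinear(16, 4)
batch = (torch.randn(32, 16), torch.randn(32, 4))
neighbors = [{"A": torch.zeros_like(model.A), "B": torch.zeros_like(model.B)}]
print(local_step(model, batch, neighbors, task_similarity=[1.0], lam=0.1, lr=1e-2))
```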
First Page
171
Last Page
175
Recommended Citation
APA Citation
Elbakary, A., Ben Issaid, C., ElBatt, T., Seddik, K., & Bennis, M. (2025). MIRA: A Method of Federated Multi-Task Learning for Large Language Models. IEEE Networking Letters, 7(3), 171–175. https://doi.org/10.1109/LNET.2025.3539810
MLA Citation
Elbakary, Ahmed, et al. "MIRA: A Method of Federated Multi-Task Learning for Large Language Models." IEEE Networking Letters, vol. 7, no. 3, 2025, pp. 171–175. https://doi.org/10.1109/LNET.2025.3539810
