MIRA: A Method of Federated Multi-Task Learning for Large Language Models

Funding Sponsor

European Commission

Author's Department

Computer Science & Engineering Department

Second Author's Department

Electronics & Communications Engineering Department

Third Author's Department

Computer Science & Engineering Department

Fourth Author's Department

Electronics & Communications Engineering Department

All Authors

Ahmed Elbakary, Chaouki Ben Issaid, Tamer ElBatt, Karim Seddik, Mehdi Bennis

Document Type

Research Article

Publication Title

IEEE Networking Letters

Publication Date

1-1-2025

DOI

10.1109/LNET.2025.3539810

Abstract

In this letter, we introduce a method for fine-tuning Large Language Models (LLMs) in a federated manner, inspired by multi-task learning. Our approach leverages the structure of each client’s model and enables a learning scheme that accounts for other clients’ tasks and data distributions. To mitigate the extensive computational and communication overhead often associated with LLMs, we employ a parameter-efficient fine-tuning method, specifically Low-Rank Adaptation (LoRA), to reduce the number of trainable parameters. Experimental results across different datasets and models demonstrate the proposed method’s effectiveness compared to existing frameworks for federated fine-tuning of LLMs in terms of both global and local performance. The proposed scheme outperforms existing baselines by achieving lower local loss for each client while maintaining comparable global performance.
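The abstract's reference to LoRA can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows the general LoRA idea of freezing a pretrained weight matrix and learning a low-rank additive update, which is why the number of trainable (and, in a federated setting, communicated) parameters shrinks. All dimensions, the rank `r`, and the scaling `alpha` are illustrative assumptions.

```python
import numpy as np

# Illustrative dimensions: the full weight is d_out x d_in; the adapter has rank r.
d_out, d_in, r = 512, 512, 8
alpha = 16  # common LoRA scaling hyperparameter (assumed value)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, initialized to zero

# Effective weight used in the forward pass: W + (alpha / r) * B @ A.
# With B = 0 at initialization, the model starts identical to the pretrained one.
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size            # parameters a full fine-tune would update
lora_params = A.size + B.size   # parameters LoRA actually trains/communicates
print(f"trainable: {lora_params} vs full: {full_params} "
      f"({lora_params / full_params:.1%})")
```

With these toy sizes, LoRA trains 8,192 parameters instead of 262,144 (about 3%), which is the kind of reduction that makes exchanging client updates over a network tractable.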

First Page

171

Last Page

175
