How to incentivize multi-tenant federated learning for LLMs?
Incentivizing Multi-Tenant Split Federated Learning for Foundation Models at the Network Edge
March 10, 2025
https://arxiv.org/pdf/2503.04971

This paper proposes PRINCE, a new incentive mechanism for training large language models (LLMs) using split federated learning (SFL) in a multi-tenant environment at the network edge. It addresses the challenge of motivating self-interested devices with varying resources and data quality to contribute to different LLM training tasks simultaneously.
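To make the SFL setup concrete, here is a minimal toy sketch of one round: each device trains the client-side half of a model split at a cut layer, the server finishes forward/backward passes, and the client-side weights are then federated-averaged. The scalar model and all function names (`client_forward`, `server_step`, `fed_average`, etc.) are illustrative assumptions, not the paper's actual architecture.

```python
def client_forward(w1, x):
    """Client-side sub-model: compute the cut-layer activation."""
    return w1 * x

def server_step(w2, a, y, lr=0.01):
    """Server-side sub-model: finish the forward pass, backprop,
    update the server weight, and return the gradient at the cut layer."""
    y_hat = w2 * a
    err = y_hat - y                 # dLoss/dy_hat for 0.5 * (y_hat - y)^2
    grad_w2 = err * a
    grad_a = err * w2               # gradient sent back to the client
    return w2 - lr * grad_w2, grad_a

def client_backward(w1, x, grad_a, lr=0.01):
    """Client finishes backprop through its own sub-model."""
    return w1 - lr * grad_a * x

def fed_average(weights):
    """Federated step: average the client-side sub-models."""
    return sum(weights) / len(weights)

def sfl_round(client_w1s, w2, data):
    """One SFL round: each client trains its split (here, sequentially
    for simplicity), then client-side weights are federated-averaged."""
    new_w1s = []
    for w1, (x, y) in zip(client_w1s, data):
        a = client_forward(w1, x)
        w2, grad_a = server_step(w2, a, y)
        new_w1s.append(client_backward(w1, x, grad_a))
    return [fed_average(new_w1s)] * len(client_w1s), w2
```

Only the small cut-layer activations and gradients cross the device/server boundary, which is what makes the split variant attractive for resource-constrained edge devices.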
Key points for LLM-based multi-agent systems:
- Bias-Resilient SFL: Addresses potential model biases introduced by independent device participation, crucial in multi-agent scenarios where agents (devices) might have uneven participation.
- SFL Convergence Bound: Derives a convergence bound that estimates each heterogeneous device's contribution to LLM performance without running training to completion, enabling effective incentive allocation across agents.
- Congestion Game Modeling: Models competition between different LLM training tasks (tenants) for device resources, a core aspect of multi-agent resource allocation.
- Decentralized Algorithm: Provides a practical implementation for distributing LLM training across multiple agents and tasks, coordinating their actions towards a global goal (optimal LLM performance).
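The congestion-game view above can be sketched with best-response dynamics: each device repeatedly picks the task whose reward budget, shared among that task's participants and net of the device's own cost, is highest, until no device wants to switch (a pure Nash equilibrium). The utility form, budgets, and costs below are illustrative assumptions, not PRINCE's actual mechanism.

```python
def utility(task, counts, budgets, cost):
    """A task's reward budget is shared equally among its participants,
    minus this device's training cost for that task."""
    return budgets[task] / counts[task] - cost[task]

def best_response_dynamics(n_devices, budgets, costs, max_iters=100):
    """Iterate best responses until no device can improve its utility.
    costs[i][t] is device i's cost of serving task (tenant) t."""
    n_tasks = len(budgets)
    choice = [0] * n_devices                    # all devices start on task 0
    for _ in range(max_iters):
        changed = False
        for i in range(n_devices):
            counts = [choice.count(t) for t in range(n_tasks)]
            best_t = choice[i]
            best_u = utility(choice[i], counts, budgets, costs[i])
            for t in range(n_tasks):
                if t == choice[i]:
                    continue
                cnt = counts[:]                 # counts if device i switched
                cnt[choice[i]] -= 1
                cnt[t] += 1
                u = budgets[t] / cnt[t] - costs[i][t]
                if u > best_u + 1e-9:
                    best_t, best_u = t, u
            if best_t != choice[i]:
                choice[i] = best_t
                changed = True
        if not changed:
            return choice                       # pure Nash equilibrium reached
    return choice
```

With symmetric budgets and costs, devices spread evenly across tasks at equilibrium; congestion games of this form always admit a pure Nash equilibrium, which is what makes a decentralized, convergent algorithm possible.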