Abstract
Federated learning is a new paradigm of distributed machine learning that enables a large number of edge computing devices to jointly learn a shared model without sharing their private data. Nodes synchronize only their locally trained models rather than their own private data, which provides a guarantee of privacy and security. However, federated learning faces two challenges of heterogeneity: (1) heterogeneous model architectures among devices, and (2) statistical heterogeneity in real federated datasets, which are not independent and identically distributed (non-IID); both result in poor performance of traditional federated learning algorithms. To solve these problems, this paper proposes FedDistill, a new distributed training method based on knowledge distillation. FedDistill introduces a personalized model on each device, which aims to improve local performance even when the global model fails to adapt to the local dataset, thereby improving the capability and robustness of the global model. The improvement in local performance comes from knowledge distillation, which guides the improvement of the global model through knowledge transfer between heterogeneous networks. Experiments show that FedDistill significantly improves the accuracy of classification tasks and meets the needs of heterogeneous users.
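As a concrete illustration of the kind of local update such a distillation-based scheme performs, the sketch below trains a device's personalized model on its own (possibly non-IID) data while matching the softened predictions of the global model, which may have a different architecture. The PyTorch framing, the function name `local_distillation_step`, and the `temperature` and `alpha` hyperparameters are illustrative assumptions, not the paper's exact FedDistill procedure.

```python
import torch
import torch.nn.functional as F

def local_distillation_step(personalized_model, global_model, batch, optimizer,
                            temperature=2.0, alpha=0.5):
    """One local update combining a supervised loss with a distillation term.

    Hypothetical sketch: the personalized (student) model learns from its
    local labels while also matching the softened outputs of the global
    (teacher) model, which may have a different architecture.
    """
    inputs, labels = batch

    # Soft targets from the global model; no gradients flow through it here.
    global_model.eval()
    with torch.no_grad():
        teacher_logits = global_model(inputs)

    personalized_model.train()
    student_logits = personalized_model(inputs)

    # Standard supervised loss on the local (possibly non-IID) data.
    ce_loss = F.cross_entropy(student_logits, labels)

    # KL divergence between temperature-softened student and teacher outputs.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    loss = alpha * ce_loss + (1 - alpha) * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Scaling the KL term by `temperature ** 2` keeps its gradient magnitude comparable to the cross-entropy term as the temperature grows, a common convention in knowledge distillation; `alpha` balances fitting the local labels against following the global model.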
Original language | English |
---|---|
Pages | 163-167 |
Publication status | Published Online - 1 Mar 2021 |
Event | 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE) |
Conference
Conference | 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE) |
---|---|
Keywords
- Federated learning
- Knowledge distillation
- Non-independent-identical-distribution
- Heterogeneous network