Electronic Thesis and Dissertation Repository

Optimizing federated learning in wireless networks: a knowledge distillation approach to mitigating data heterogeneity and resource scarcity

Yushen Chen, Western University

Abstract

Federated learning (FL), as a privacy-preserving learning paradigm, enables multiple users to collaboratively train a global model without sharing their local data. However, the data distributions across user devices are often non-independent and identically distributed (non-IID), which makes efficient global aggregation of local models difficult. Moreover, when FL is conducted over wireless networks, its training and communication efficiency is often constrained by the limited computational and communication resources of user devices. Existing methods generally focus either on alleviating the impact of non-IID data or on developing more efficient resource allocation schemes, but rarely address both aspects simultaneously. In this thesis, we propose a novel user selection scheme for knowledge distillation-based global aggregation that selects users whose local models can be aggregated more efficiently. Then, by investigating the impact of user resource allocation on FL performance over wireless networks, we propose a resource allocation scheme for the selected users to improve the training and communication efficiency of FL. To further improve communication efficiency, a knowledge distillation-based algorithm is proposed to directly reduce the communication overhead between the server and user devices (e.g., the exchange of global model parameters) without compromising the accuracy of the global model. Finally, extensive experiments demonstrate that the proposed scheme and algorithm achieve superior performance in terms of accuracy, training efficiency, and communication efficiency.
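
To make the idea of knowledge distillation-based global aggregation concrete, the following is a minimal illustrative sketch (in PyTorch), not the thesis's exact algorithm: the selected users' local models act as teachers, and their averaged soft predictions on a small server-side proxy dataset supervise the global (student) model. Names such as proxy_loader, temperature, and distill_epochs are assumptions introduced only for illustration.

    # Illustrative sketch of server-side knowledge-distillation aggregation for FL.
    # Assumed names: proxy_loader, temperature, distill_epochs (not from the thesis).
    import torch
    import torch.nn.functional as F

    def distill_global_model(global_model, local_models, proxy_loader,
                             temperature=3.0, distill_epochs=1, lr=1e-3,
                             device="cpu"):
        """Update the global model by distilling from selected users' local models."""
        global_model.to(device).train()
        for m in local_models:
            m.to(device).eval()
        optimizer = torch.optim.SGD(global_model.parameters(), lr=lr)

        for _ in range(distill_epochs):
            for x, _ in proxy_loader:        # labels unused: purely soft-label distillation
                x = x.to(device)
                with torch.no_grad():
                    # Ensemble the teachers' softened predictions on the proxy batch.
                    teacher_probs = torch.stack(
                        [F.softmax(m(x) / temperature, dim=1) for m in local_models]
                    ).mean(dim=0)
                student_log_probs = F.log_softmax(global_model(x) / temperature, dim=1)
                # KL divergence between student and ensemble teacher distributions,
                # scaled by T^2 as is customary in distillation.
                loss = F.kl_div(student_log_probs, teacher_probs,
                                reduction="batchmean") * temperature ** 2
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return global_model

In such a scheme, only model outputs (soft predictions) need to be exchanged or ensembled rather than full parameter vectors, which is why distillation-style aggregation can reduce communication overhead relative to standard parameter averaging.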