Accuracy-Security Tradeoff with Balanced Aggregation and Artificial Noise for Wireless Federated Learning
In federated learning (FL), devices train local models and upload the corresponding parameters or gradients to the base station (BS) for global model updates. However, an eavesdropper can recover the underlying data from these parameters or gradients, resulting in data leakage. To defend against eavesdropping attacks, in this article we propose an algorithm for wireless FL that divides the transmit power proportionally between the transmitted signal and artificial noise (AN) to counteract the eavesdropper. Because communication resources are limited, the ratio of signal power to total power and the aggregation frequency must be chosen carefully to guarantee model accuracy and security at the same time. To this end, we maximize the secrecy rate subject to system/user power and model-performance constraints. To make this problem tractable, we derive bounds on the secrecy rate and the loss function, which allow us to obtain closed-form expressions for the AN power and the aggregation frequency. Furthermore, to make our analysis more realistic, we consider an FL model with channel fading and additive white Gaussian noise (AWGN) over the uplink and downlink, respectively. Specifically, we discuss the convergence of FL over noisy multiple access channels (MACs). Simulation results confirm the convergence and effectiveness of the proposed algorithm.
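The power-splitting idea in the abstract can be illustrated with a minimal sketch. The model below is an assumption for illustration only (it is not the paper's exact formulation): total power P is split into a signal fraction rho*P and an AN fraction (1-rho)*P, the AN is taken to be nulled at the legitimate BS, and the secrecy rate is the standard difference of the two link rates. The channel gains g_bs, g_eve and the noise level are hypothetical values.

```python
import numpy as np

def secrecy_rate(rho, P=1.0, g_bs=1.0, g_eve=0.8, noise=0.1):
    """Toy secrecy rate [bit/s/Hz] for signal-power fraction rho.

    Assumed model: the AN (power (1-rho)*P) is nulled at the BS,
    so it only degrades the eavesdropper's SINR.
    """
    snr_bs = rho * P * g_bs / noise                                # BS sees no AN
    sinr_eve = rho * P * g_eve / ((1 - rho) * P * g_eve + noise)   # eve jammed by AN
    return max(0.0, np.log2(1 + snr_bs) - np.log2(1 + sinr_eve))

# Sweep rho to expose the tradeoff knob: a larger rho gives a cleaner
# signal (better aggregation accuracy) but leaves less power for AN.
rhos = np.linspace(0.05, 0.95, 19)
rates = [secrecy_rate(r) for r in rhos]
best_rho = rhos[int(np.argmax(rates))]
```

Under this toy model, the secrecy rate is non-monotone in rho, which is why the paper must choose the signal-power ratio (together with the aggregation frequency) rather than simply maximizing signal power.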