Federated Deep Reinforcement Learning for Prediction-Based Network Slice Mobility in 6G Mobile Networks
Network slices are generally coupled with services and face service continuity and availability concerns due to high user mobility and dynamic user requests. Network slice mobility (NSM), which jointly considers user mobility, service migration, and resource allocation from a holistic view, is regarded as a key technology that enables network slices to respond quickly to service degradation. Existing studies on NSM either ignore trigger detection before NSM decision-making or do not consider the prediction of future system information to improve NSM performance, and the training of deep reinforcement learning (DRL) agents also faces challenges under incomplete observations. To cope with these challenges, we let network slices migrate periodically and utilize the prediction of system information to assist NSM decision-making. The periodic NSM problem is further transformed into a Markov decision process, and we propose a prediction-based federated DRL framework to solve it. In particular, the learning processes of the prediction model and the DRL agents are performed in a federated learning paradigm. Extensive simulation results demonstrate that the proposed scheme outperforms the considered baseline schemes in improving long-term profit, reducing communication overhead, and saving transmission time.
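To make the federated learning paradigm mentioned above more concrete, the following is a minimal sketch, not the authors' implementation, of how per-slice model parameters (e.g., the weights of each slice's prediction model or DRL policy) might be aggregated in a FedAvg-style manner once per NSM period. The function name `fed_avg`, the layer shapes, and the sample counts are all hypothetical placeholders.

```python
# Minimal sketch (assumed, not the paper's code): FedAvg-style aggregation of
# per-slice parameters, where each slice reports its local weights as a list
# of NumPy arrays plus the number of local samples used in this NSM period.
import numpy as np

def fed_avg(local_weights, sample_counts):
    """Return the sample-weighted average of the local parameter lists."""
    total = sum(sample_counts)
    n_layers = len(local_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(local_weights, sample_counts))
        for k in range(n_layers)
    ]

# Hypothetical usage: three slices, each with a two-layer model and a
# different amount of locally collected training data.
rng = np.random.default_rng(0)
local_params = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
counts = [120, 80, 200]
global_params = fed_avg(local_params, counts)
print([p.shape for p in global_params])  # -> [(4, 4), (4,)]
```

In such a setup, only model parameters (rather than raw per-slice observations) would be exchanged with the aggregator, which is consistent with the abstract's claim of reduced communication overhead, though the actual aggregation rule used in the paper may differ.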