Age of Information-Aware Resource Management in UAV-Assisted Mobile-Edge Computing Systems
This paper investigates the problem of age of information (AoI)-aware resource management in an unmanned aerial vehicle (UAV)-assisted mobile-edge computing (MEC) system deployed by an infrastructure provider (InP). A service provider leases resources from the InP to serve mobile users (MUs) with sporadic computation requests. Due to the limited number of channels and the finite shared I/O resource of the UAV, the MUs compete to schedule local and remote task computations according to their observations of the system dynamics. Each MU aims to selfishly maximize its expected long-term computation performance. We formulate the non-cooperative interactions among the MUs as a stochastic game. To approach the Nash equilibrium solutions, we propose a novel online deep reinforcement learning (DRL) scheme that enables each MU to make decisions using only its local conjectures. The DRL scheme employs two separate deep Q-networks to approximate the Q-factor and the post-decision Q-factor for each MU. Numerical experiments demonstrate the potential of the online DRL scheme in balancing the tradeoff between AoI and energy consumption.
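To make the two-network structure concrete, the sketch below shows one plausible way a per-MU agent could pair a deep Q-network (Q-factor) with a second network for the post-decision Q-factor. This is a minimal illustration under assumed semantics, not the paper's algorithm: the class names (QNetwork, MobileUserAgent), the use of PyTorch, the state/action dimensions, the epsilon-greedy policy, and the particular update rule coupling the two networks are all illustrative assumptions.

```python
# Hypothetical sketch of an MU agent with two deep Q-networks: one for the
# Q-factor and one for the post-decision Q-factor. Hyperparameters and the
# update rule are illustrative assumptions, not the paper's specification.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small MLP mapping an observed state to per-action values."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

class MobileUserAgent:
    """One MU's online learner: a Q-network plus a post-decision Q-network."""
    def __init__(self, state_dim: int, num_actions: int, lr: float = 1e-3,
                 gamma: float = 0.99, epsilon: float = 0.1):
        self.q_net = QNetwork(state_dim, num_actions)       # Q-factor
        self.pds_q_net = QNetwork(state_dim, num_actions)   # post-decision Q-factor
        self.optimizer = torch.optim.Adam(
            list(self.q_net.parameters()) + list(self.pds_q_net.parameters()),
            lr=lr)
        self.gamma = gamma
        self.epsilon = epsilon
        self.num_actions = num_actions

    def act(self, state: torch.Tensor) -> int:
        """Epsilon-greedy scheduling decision (e.g., local vs. remote computation)."""
        if torch.rand(()) < self.epsilon:
            return int(torch.randint(self.num_actions, ()))
        with torch.no_grad():
            return int(self.q_net(state).argmax())

    def update(self, state: torch.Tensor, action: int, reward: float,
               pds_state: torch.Tensor, next_state: torch.Tensor) -> None:
        """One online step under an assumed decomposition: the post-decision
        Q-factor bootstraps from the next state's greedy Q-value, and the
        Q-factor is tied to the immediate reward plus the post-decision value."""
        q = self.q_net(state)[action]
        pds_q = self.pds_q_net(pds_state)[action]
        with torch.no_grad():
            pds_target = self.gamma * self.q_net(next_state).max()
            q_target = reward + self.pds_q_net(pds_state)[action]
        loss = (pds_q - pds_target).pow(2) + (q - q_target).pow(2)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
```

One design point this sketch reflects: because each MU updates from its own observations and local conjectures about the other MUs, no centralized critic or shared replay is assumed; each agent owns its two networks and learns fully online.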