Meta Reinforcement Learning for Resource Allocation in Aerial Active-RIS-Assisted Networks With Rate-Splitting Multiple Access
Mounting a reconfigurable intelligent surface (RIS) on an unmanned aerial vehicle (UAV) holds promise for improving the performance of traditional terrestrial networks. Unlike conventional approaches that deploy a passive RIS on a UAV, this study examines the efficacy of an aerial active RIS (AARIS). Specifically, the downlink transmission of an AARIS-assisted network is investigated, where the base station (BS) leverages rate-splitting multiple access (RSMA) for effective interference management and benefits from the support of the AARIS, which jointly amplifies and reflects the BS's transmit signals. Considering both the non-trivial energy consumption of the active RIS and the limited energy storage of the UAV, we propose an innovative element selection strategy that optimizes the on/off status of the active RIS elements and thereby adaptively and substantially reduces the system's power consumption. To this end, a resource management problem is formulated that maximizes the system energy efficiency (EE) by jointly optimizing the transmit beamforming at the BS, the element activation, phase shifts, and amplification factors at the active RIS, the RSMA common rate allocation among users, and the UAV's trajectory. Given the fast time-varying dynamics induced by UAV and user mobility, a deep reinforcement learning (DRL) algorithm is designed for resource allocation, with meta-learning enabling rapid adaptation to the changing system conditions. Simulation results show that integrating meta-learning yields a notable 36% increase in system EE. Additionally, substituting the AARIS for a fixed terrestrial active RIS results in a 26% EE enhancement.
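For orientation only, the EE objective described above generically takes a fractional form like the one below; all symbols here are illustrative assumptions, not the paper's exact notation:

\[
\max_{\mathbf{W},\,\boldsymbol{\alpha},\,\boldsymbol{\Theta},\,\mathbf{c},\,\mathbf{q}} \quad \mathrm{EE} \;=\; \frac{\sum_{k=1}^{K} R_k}{P_{\mathrm{BS}} \;+\; \sum_{m=1}^{M} \alpha_m\, P_m^{\mathrm{act}} \;+\; P_{\mathrm{UAV}}},
\]

where \(\mathbf{W}\) collects the BS beamformers, \(\alpha_m \in \{0,1\}\) is the on/off indicator of active-RIS element \(m\) (so only activated elements contribute the per-element power \(P_m^{\mathrm{act}}\)), \(\boldsymbol{\Theta}\) gathers the per-element phase shifts and amplification factors, \(\mathbf{c}\) denotes the RSMA common-rate allocation, \(\mathbf{q}\) is the UAV trajectory, and \(R_k\) is user \(k\)'s achievable rate (its private rate plus its share of the common rate).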
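Likewise, the meta-learning component can be pictured with a short, self-contained sketch. The snippet below implements a Reptile-style outer update around a generic inner adaptation loop, a common stand-in for meta-RL training; the network shape, the synthetic task generator, and the surrogate loss (a proxy for the negative EE reward) are all assumptions for illustration and do not reproduce the paper's algorithm.

```python
import copy
import torch
import torch.nn as nn

# Hypothetical sketch: Reptile-style meta-learning for a policy network that
# maps channel/position features to resource-allocation actions. Dimensions,
# the synthetic "task" generator, and the surrogate loss are illustrative.
STATE_DIM, ACTION_DIM = 16, 8  # e.g., CSI + UAV position -> beams/phases
policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                       nn.Linear(64, ACTION_DIM))

def sample_task():
    """One 'task' = one mobility/channel realization; here a random linear map."""
    return torch.randn(ACTION_DIM, STATE_DIM)

def task_loss(net, task):
    """Surrogate loss standing in for the (negative) EE reward of an episode."""
    s = torch.randn(32, STATE_DIM)
    return ((net(s) - s @ task.T) ** 2).mean()

inner_lr, meta_lr, inner_steps = 1e-2, 1e-1, 5
for meta_iter in range(200):
    task = sample_task()
    fast = copy.deepcopy(policy)                  # task-specific copy
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                  # inner adaptation on the task
        opt.zero_grad()
        task_loss(fast, task).backward()
        opt.step()
    # Reptile outer step: move meta-parameters toward the adapted parameters,
    # so the policy initializes close to good solutions for new dynamics.
    with torch.no_grad():
        for p, q in zip(policy.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
```

The design intuition matches the abstract's claim: instead of retraining the DRL policy from scratch whenever the UAV or users move, the meta-learned initialization needs only a few inner gradient steps to adapt to the new channel and mobility realization.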