Deep Reinforcement Learning for Practical Phase-Shift Optimization in RIS-Aided MISO URLLC Systems
We study joint active/passive beamforming and channel blocklength (CBL) allocation in a nonideal reconfigurable intelligent surface (RIS)-aided ultra-reliable low-latency communication (URLLC) system. The system operates in the finite blocklength (FBL) regime, and the problem is solved with a deep reinforcement learning (DRL) algorithm, namely twin-delayed deep deterministic policy gradient (TD3). First, assuming an industrial automation setting, we derive the signal-to-interference-plus-noise ratio (SINR) and the achievable FBL rate for each actuator. Next, we formulate the joint active/passive beamforming and CBL optimization problem, whose objective is to maximize the total achievable FBL rate over all actuators, subject to the nonlinear amplitude response of the RIS elements, the BS transmit power budget, and the total available CBL. Since the formulated problem is highly nonconvex and nonlinear, we employ an actor-critic policy-gradient DRL algorithm based on TD3. The agent interacts with the industrial automation environment by taking actions, namely the RIS phase shifts, the CBL variables, and the BS beamforming vectors, so as to maximize the expected reward, i.e., the total FBL rate. We quantify the performance loss incurred when the RIS is nonideal, i.e., exhibits a nonlinear amplitude response, relative to an ideal RIS without impairments. Numerical results show that optimizing the RIS phase shifts, BS beamforming, and CBL variables via the TD3 method with a deterministic policy outperforms conventional methods and substantially improves the network's total FBL rate under a finite CBL budget.
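For concreteness, the achievable FBL rate referenced above is commonly taken to be the normal approximation of the finite-blocklength channel coding rate; the abstract does not state the exact expression used, so the following standard form is an assumption. For actuator $k$ with SINR $\gamma_k$, CBL $m_k$ channel uses, and decoding error probability $\epsilon_k$,
\[
R_k \approx \log_2(1+\gamma_k) - \sqrt{\frac{V_k}{m_k}}\,Q^{-1}(\epsilon_k)\log_2 e,
\qquad V_k = 1 - \frac{1}{(1+\gamma_k)^2},
\]
where $V_k$ is the channel dispersion and $Q^{-1}(\cdot)$ is the inverse of the Gaussian Q-function. The "total achievable FBL rate" objective is then the sum of $R_k$ over all actuators, with the CBL budget constraining $\sum_k m_k$.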
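The "nonlinear amplitude response at the RIS elements" points to a practical phase-shift model in which the reflection amplitude depends on the applied phase. A widely used form, assumed here for illustration since the abstract does not specify the model, is
\[
\beta_n(\theta_n) = (1-\beta_{\min})\left(\frac{\sin(\theta_n-\phi)+1}{2}\right)^{\kappa} + \beta_{\min},
\]
where $\theta_n$ is the phase shift of element $n$, $\beta_{\min}\in[0,1]$ is the minimum reflection amplitude, and $\phi,\kappa\geq 0$ are circuit-fitting parameters; the ideal RIS baseline corresponds to $\beta_n \equiv 1$ for all phases.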
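The TD3 machinery invoked above combines twin critics with clipped double-Q targets, target-policy smoothing, and delayed actor updates. The following PyTorch sketch illustrates that update rule; the network sizes, hyperparameters, and the flattening of RIS phase shifts, CBL variables, and BS beamforming into one bounded action vector are illustrative assumptions, not the paper's implementation.

# Minimal TD3 update sketch (PyTorch). Environment interface, network
# sizes, and hyperparameters are assumptions for illustration only.
import copy
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two-hidden-layer network used for both the actor and the critics."""
    def __init__(self, in_dim, out_dim, tanh_out=False):
        super().__init__()
        layers = [nn.Linear(in_dim, 256), nn.ReLU(),
                  nn.Linear(256, 256), nn.ReLU(),
                  nn.Linear(256, out_dim)]
        if tanh_out:
            # Bounded actions, e.g. phase shifts rescaled to [-1, 1].
            layers.append(nn.Tanh())
        self.net = nn.Sequential(*layers)
    def forward(self, x):
        return self.net(x)

class TD3:
    def __init__(self, obs_dim, act_dim, gamma=0.99, tau=0.005,
                 policy_noise=0.2, noise_clip=0.5, policy_delay=2):
        self.actor = MLP(obs_dim, act_dim, tanh_out=True)
        # Twin critics: each scores a (state, action) pair.
        self.critic1 = MLP(obs_dim + act_dim, 1)
        self.critic2 = MLP(obs_dim + act_dim, 1)
        self.actor_t = copy.deepcopy(self.actor)
        self.critic1_t = copy.deepcopy(self.critic1)
        self.critic2_t = copy.deepcopy(self.critic2)
        self.opt_a = torch.optim.Adam(self.actor.parameters(), lr=3e-4)
        self.opt_c = torch.optim.Adam(
            list(self.critic1.parameters()) + list(self.critic2.parameters()),
            lr=3e-4)
        self.gamma, self.tau = gamma, tau
        self.policy_noise, self.noise_clip = policy_noise, noise_clip
        self.policy_delay, self.step = policy_delay, 0

    def update(self, s, a, r, s2, done):
        # s, a: (batch, dim); r, done: (batch, 1) tensors.
        with torch.no_grad():
            # Target-policy smoothing: clipped noise on the target action.
            noise = (torch.randn_like(a) * self.policy_noise
                     ).clamp(-self.noise_clip, self.noise_clip)
            a2 = (self.actor_t(s2) + noise).clamp(-1.0, 1.0)
            q1 = self.critic1_t(torch.cat([s2, a2], dim=-1))
            q2 = self.critic2_t(torch.cat([s2, a2], dim=-1))
            # Clipped double-Q: minimum of the twins curbs overestimation.
            target = r + self.gamma * (1 - done) * torch.min(q1, q2)
        sa = torch.cat([s, a], dim=-1)
        loss_c = ((self.critic1(sa) - target) ** 2
                  + (self.critic2(sa) - target) ** 2).mean()
        self.opt_c.zero_grad(); loss_c.backward(); self.opt_c.step()

        self.step += 1
        # Delayed policy update: refresh actor and targets less frequently.
        if self.step % self.policy_delay == 0:
            loss_a = -self.critic1(
                torch.cat([s, self.actor(s)], dim=-1)).mean()
            self.opt_a.zero_grad(); loss_a.backward(); self.opt_a.step()
            for net, tgt in [(self.actor, self.actor_t),
                             (self.critic1, self.critic1_t),
                             (self.critic2, self.critic2_t)]:
                for p, pt in zip(net.parameters(), tgt.parameters()):
                    pt.data.mul_(1 - self.tau).add_(self.tau * p.data)

A replay buffer, exploration noise added to the actor's outputs during data collection, and a mapping from the bounded action vector back to feasible RIS phases, beamforming powers, and integer CBLs would complete the training loop.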