Deep Reinforcement Learning-Based Deterministic Routing and Scheduling for Mixed-Criticality Flows
Deterministic networking has recently drawn much attention, with deterministic flow scheduling as a key research focus. Combined with artificial intelligence (AI) technologies, it is a promising network technology for facilitating automated network configuration in the Industrial Internet of Things (IIoT). However, the IIoT imposes stricter requirements, namely deterministic and bounded latency for time-critical applications, which pose significant challenges. This article incorporates deep reinforcement learning (DRL) into cycle-specified queuing and forwarding and proposes a DRL-based deterministic flow scheduler (Deep-DFS) to solve the deterministic flow routing and scheduling problem. Novel delay-aware network representations, action masking, and a criticality-aware reward function are proposed to make Deep-DFS more scalable and efficient. Simulation experiments are conducted to evaluate the performance of Deep-DFS, and the results show that Deep-DFS can schedule more flows than benchmark heuristic- and AI-based methods.
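As a rough illustration of the action-masking idea mentioned in the abstract, the sketch below shows one common way to exclude infeasible actions (e.g., next-hop or transmission-cycle choices that would violate a flow's delay bound or overflow a cycle's queue capacity) from a DRL policy's output before selection. It is not the authors' implementation; all names (masked_action_selection, q_values, feasible) are hypothetical.

```python
import torch

def masked_action_selection(q_values: torch.Tensor, feasible: torch.Tensor) -> int:
    """Pick the best action among the feasible ones.

    q_values: (num_actions,) value estimates from the DRL policy network.
    feasible: (num_actions,) boolean mask; False marks actions assumed to
              violate a delay bound or exceed a cycle's queue capacity.
    """
    # Set infeasible actions to -inf so argmax can never select them.
    masked_q = q_values.masked_fill(~feasible, float("-inf"))
    return int(torch.argmax(masked_q).item())

# Toy usage: 4 candidate (next-hop, cycle) actions, one of them infeasible.
q = torch.tensor([0.3, 1.2, 0.9, -0.1])
mask = torch.tensor([True, False, True, True])
print(masked_action_selection(q, mask))  # -> 2, the best feasible action
```

Masking in this way keeps the policy from wasting exploration on actions that can never satisfy the deterministic latency constraints, which is one plausible reason action masking improves scalability in this setting.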