Adaptive Semantic-Spatio-Temporal Graph Convolutional Network for Lip Reading
The goal of this work is to recognize words, phrases, and sentences spoken by a talking face, without access to the audio. Current deep learning approaches to lip reading focus on exploiting the appearance and optical flow information of videos. However, these methods do not fully exploit the characteristics of lip motion. In addition to appearance and optical flow, the deformation of the mouth contour usually conveys significant information that is complementary to both. However, the modeling of dynamic mouth contours has received far less attention than that of appearance and optical flow. In this work, we propose a novel model of dynamic mouth contours, called Adaptive Semantic-Spatio-Temporal Graph Convolutional Network (ASST-GCN), that goes beyond previous methods by automatically learning both spatial and temporal information from videos. To combine the complementary information from appearance and mouth contours, a two-stream visual front-end network is proposed. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art lip reading methods on several large-scale lip reading benchmarks.
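To make the core idea concrete, below is a minimal sketch (not the authors' released code) of an adaptive spatio-temporal graph convolution block over mouth-contour landmarks, written in PyTorch. The node set, the ring-shaped contour adjacency, and all layer sizes are illustrative assumptions; the "adaptive" part is modeled as a learned residual adjacency added to the fixed contour graph.

```python
# Sketch of an adaptive spatio-temporal graph convolution over mouth-contour
# landmarks. All names and sizes are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class AdaptiveSTGCNBlock(nn.Module):
    def __init__(self, in_channels, out_channels, num_nodes, t_kernel=9):
        super().__init__()
        # Fixed ring adjacency from the mouth-contour topology: each landmark
        # is linked to itself and its two neighbours on the contour.
        A = torch.eye(num_nodes)
        idx = torch.arange(num_nodes)
        A[idx, (idx + 1) % num_nodes] = 1.0
        A[idx, (idx - 1) % num_nodes] = 1.0
        self.register_buffer("A", A / A.sum(dim=1, keepdim=True))
        # Learned residual adjacency ("adaptive"): lets the model discover
        # extra landmark relations beyond the hand-crafted contour graph.
        self.A_learned = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        pad = (t_kernel - 1) // 2
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(t_kernel, 1), padding=(pad, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, frames, nodes)
        A = self.A + self.A_learned               # adaptive adjacency
        x = torch.einsum("bctv,vw->bctw", x, A)   # propagate over the graph
        x = self.relu(self.spatial(x))            # per-node feature transform
        x = self.relu(self.temporal(x))           # temporal conv along frames
        return x

# Example: 8 clips, 2-D landmark coordinates, 29 frames, 20 mouth landmarks.
block = AdaptiveSTGCNBlock(in_channels=2, out_channels=64, num_nodes=20)
out = block(torch.randn(8, 2, 29, 20))
print(out.shape)  # torch.Size([8, 64, 29, 20])
```

In a two-stream front-end as described above, features from a block like this would be fused with features from a conventional appearance stream (e.g. a 3D-CNN over cropped mouth frames) before the sequence back-end; the fusion strategy here is an assumption, as the abstract does not specify it.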