Characterizing subtle facial movements via Riemannian manifolds
Characterizing subtle facial movements from videos is one of the most intensively studied topics in computer vision research. It remains challenging because (1) the intensity of subtle facial muscle movements is usually low, (2) their duration is often transient, and (3) datasets containing spontaneous subtle movements with reliable annotations are difficult to obtain and typically small. This article addresses these problems from two aspects: motion elucidation and motion description. First, we propose an efficient method for elucidating hidden and suppressed movements so that they become easier to observe. We explore the feasibility of linearizing motion magnification and temporal interpolation, a property obscured by the architecture of existing methods. On this basis, we propose a consolidated framework, termed MOTEL, that expands temporal duration and amplifies subtle facial movements simultaneously. Second, we contribute to dynamic description. A major challenge is to capture the intrinsic temporal variations caused by the movements themselves while discarding extrinsic variations introduced by different individuals and environments. To diminish the influence of such extrinsic diversity, we propose the tangent delta descriptor, which characterizes the dynamics of short-term movements using the differences between points on the tangent spaces of the manifold rather than the points themselves. We then relax the smooth-trajectory assumption of conventional manifold-based trajectory modeling methods and combine the tangent delta descriptor with sequential inference approaches to cover the full duration of facial movements. The proposed motion modeling approach is validated by a series of experiments on publicly available datasets on the tasks of micro-expression recognition and visual speech recognition.
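To make the tangent delta idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes per-frame features living on the SPD (symmetric positive definite) manifold with the affine-invariant log map, a single reference point `mu`, and consecutive-frame differencing in the tangent space; the function names and the toy data are illustrative only.

```python
# Hedged sketch of a tangent-delta-style descriptor on the SPD manifold.
# Assumptions (not from the source): SPD frame features, affine-invariant
# metric, a common reference point mu, and consecutive-frame differences.
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def log_map(mu, x):
    """Riemannian log map of an SPD matrix x at base point mu."""
    mu_half = sqrtm(mu)
    mu_inv_half = inv(mu_half)
    return mu_half @ logm(mu_inv_half @ x @ mu_inv_half) @ mu_half

def tangent_delta_descriptor(trajectory, mu):
    """Differences between consecutive tangent vectors, vectorized per step.

    Using deltas rather than the tangent vectors themselves removes the
    constant offset shared by all frames of one subject/scene, keeping
    only the short-term motion component.
    """
    tangents = [log_map(mu, x) for x in trajectory]
    deltas = [t1 - t0 for t0, t1 in zip(tangents[:-1], tangents[1:])]
    iu = np.triu_indices(mu.shape[0])          # upper-triangular vectorization
    return np.stack([d[iu].real for d in deltas])

# Toy usage: random SPD matrices standing in for per-frame facial features.
rng = np.random.default_rng(0)
frames = []
for _ in range(5):
    a = rng.normal(size=(6, 6))
    frames.append(a @ a.T + 6 * np.eye(6))     # guaranteed SPD
mu = sum(frames) / len(frames)                  # simple arithmetic reference point
desc = tangent_delta_descriptor(frames, mu)
print(desc.shape)                               # (4, 21): one delta per frame pair
```

The deltas can then be fed to any sequential inference model over the movement period; the choice of SPD features and of a single reference point here is only one plausible instantiation of the manifold setting described above.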