MyoKey
Seamless text input in Augmented Reality (AR) is challenging yet essential for enabling user-friendly AR applications. Existing approaches such as speech input and vision-based gesture recognition suffer from environmental interference, while default virtual keyboards sacrifice the majority of the screen's real estate in AR. In this paper, we propose MyoKey, a system that enables users to input text effectively and unobtrusively in the constrained environment of AR by jointly leveraging surface electromyography (sEMG) and Inertial Measurement Unit (IMU) signals transmitted from wearable sensors on the user's forearm. MyoKey adopts a deep learning-based classifier to infer hand gestures from sEMG. To show the feasibility of our approach, we implement a mobile AR application using the Unity framework. We present novel interaction and system designs that combine hand-gesture information from sEMG with arm-motion information from the IMU to provide a seamless text-entry solution. We demonstrate the applicability of MyoKey through a series of experiments, achieving an accuracy of 0.91 in identifying five gestures in real time (inference time: 97.43 ms).
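To give a concrete sense of the sEMG gesture-classification stage, the following is a minimal sketch, not the paper's actual model: it assumes an 8-channel sEMG armband (a Myo-style device), a 52-sample input window, and a small 1-D convolutional network over the five target gestures; all of these hyperparameters (GestureCNN, NUM_CHANNELS, WINDOW_LEN) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative assumptions (not from the paper): an 8-channel sEMG
# armband, windows of 52 samples, and five hand gestures that MyoKey
# maps to keyboard actions.
NUM_CHANNELS = 8
WINDOW_LEN = 52
NUM_GESTURES = 5

class GestureCNN(nn.Module):
    """Small 1-D CNN over raw sEMG windows shaped (batch, channels, time)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one feature vector
        )
        self.classifier = nn.Linear(64, NUM_GESTURES)

    def forward(self, x):
        h = self.features(x).squeeze(-1)  # (batch, 64)
        return self.classifier(h)         # gesture logits

if __name__ == "__main__":
    model = GestureCNN().eval()
    window = torch.randn(1, NUM_CHANNELS, WINDOW_LEN)  # one sEMG window
    with torch.no_grad():
        gesture = model(window).argmax(dim=1)
    print(f"predicted gesture id: {gesture.item()}")
```

In a full system along MyoKey's lines, the predicted gesture id would be fused with IMU-derived arm motion to select keys on the AR keyboard; that fusion logic is outside the scope of this sketch.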