
Attention modeling with temporal shift in sign language recognition

Ahmet Faruk Çelimli, Oğulcan Özdemir, Lale Akarun

2022 30th Signal Processing and Communications Applications Conference (SIU)

Abstract

Sign languages are visual languages expressed through multiple cues, including facial expressions and upper-body and hand gestures. These visual cues can be used together or at different instants to convey a message. To recognize sign languages, it is therefore crucial to model what, where, and when to attend. In this study, we developed a model that exploits different visual cues simultaneously by combining Temporal Shift Modules (TSMs) with attention modeling. Our experiments were conducted on the BosphorusSign22k dataset. Our system achieved 92.46% recognition accuracy, improving on the baseline study's 78.85% accuracy by approximately 14 percentage points.
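The abstract does not detail the paper's architecture, but the core Temporal Shift Module operation (Lin et al., 2019) that it builds on can be sketched generically. The snippet below is a minimal NumPy illustration, not the paper's implementation: it assumes per-frame feature vectors of shape `(T, C)` and shifts a fraction of channels one step forward or backward in time, letting a frame's features mix with its neighbors' at zero extra parameter cost.

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """Shift a fraction of channels along the time axis (generic TSM sketch).

    x         : array of shape (T, C), one feature vector per video frame.
    shift_div : 1/shift_div of channels are shifted in each direction;
                the value 8 follows the original TSM paper's default.
    """
    T, C = x.shape
    fold = C // shift_div
    out = np.zeros_like(x)
    # First fold of channels: pull features from the NEXT frame (shift back in time).
    out[:-1, :fold] = x[1:, :fold]
    # Second fold: pull features from the PREVIOUS frame (shift forward in time).
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]
    # Remaining channels pass through unchanged.
    out[:, 2 * fold:] = x[:, 2 * fold:]
    return out

# Tiny example: 4 frames, 8 channels -> one channel shifts each way.
x = np.arange(4 * 8, dtype=float).reshape(4, 8)
y = temporal_shift(x)
```

Boundary frames receive zero padding where no neighbor exists; in a full model this operation would be inserted before convolutions inside a 2D CNN backbone, which is what lets a frame-wise network reason about temporal context.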