A team of researchers has improved the accuracy of artificial intelligence in recognizing sign language at the word level by feeding it additional data: information about the signer’s hand and facial expressions, as well as skeletal details showing how the hands are positioned relative to the body.
Sign languages have developed independently in countries around the world, and each contains thousands of distinct signs, which makes them difficult to learn and understand. A research team from Osaka Metropolitan University has now made notable progress in improving the accuracy with which AI translates individual signs into words, a task known as word-level sign language recognition.
Previous methods focused on capturing the signer’s overall movements. Their accuracy suffered, however, because subtle differences in hand shape, and in where the hands are held relative to the body, can change a sign’s meaning.
Associate Professors Katsufumi Inoue and Masakazu Iwamura from the Graduate School of Informatics teamed up with researchers from the Indian Institute of Technology Roorkee to improve the AI’s recognition accuracy. They augmented the conventional data on the signer’s upper-body movements with information on hand and facial expressions, along with skeletal data on the position of the hands relative to the body.
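To illustrate the general idea of combining several streams of signer data, the sketch below shows a minimal multi-stream classifier in PyTorch. The stream names, feature sizes, and network layout are illustrative assumptions for this example, not the team’s published architecture: per-frame body, hand, and face features are each encoded, fused, and summarized over time to predict a word label.

```python
# Minimal sketch (PyTorch) of multi-stream fusion for word-level sign
# language recognition. Feature dimensions and layer choices are assumed
# for illustration only; they are not the researchers' actual model.
import torch
import torch.nn as nn

class MultiStreamSignClassifier(nn.Module):
    def __init__(self, num_words: int,
                 body_dim: int = 50,    # assumed upper-body keypoint features per frame
                 hand_dim: int = 126,   # assumed 2 hands x 21 landmarks x 3 coordinates
                 face_dim: int = 140,   # assumed facial landmark features per frame
                 hidden: int = 256):
        super().__init__()
        # One small encoder per stream keeps modalities separate before fusion.
        self.body_enc = nn.Linear(body_dim, hidden)
        self.hand_enc = nn.Linear(hand_dim, hidden)
        self.face_enc = nn.Linear(face_dim, hidden)
        # Temporal model over the fused per-frame features.
        self.temporal = nn.GRU(input_size=3 * hidden, hidden_size=hidden,
                               batch_first=True)
        self.classifier = nn.Linear(hidden, num_words)

    def forward(self, body, hands, face):
        # Each input has shape (batch, frames, feature_dim).
        fused = torch.cat([torch.relu(self.body_enc(body)),
                           torch.relu(self.hand_enc(hands)),
                           torch.relu(self.face_enc(face))], dim=-1)
        _, last_hidden = self.temporal(fused)           # summarize the clip
        return self.classifier(last_hidden.squeeze(0))  # one word label per clip

# Example: classify a batch of 2 clips, 60 frames each, into 1,000 sign words.
model = MultiStreamSignClassifier(num_words=1000)
logits = model(torch.randn(2, 60, 50), torch.randn(2, 60, 126),
               torch.randn(2, 60, 140))
print(logits.shape)  # torch.Size([2, 1000])
```

The point of the fusion step is that hand shape and facial expression carry meaning that whole-body motion alone cannot distinguish, so each stream contributes its own encoded features before the final classification.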
“We have improved the accuracy of word-level sign language recognition by 10-15% compared to previous methods,” said Professor Inoue. “Additionally, we believe that our approach can be applied to various sign languages, which could enhance communication with deaf or hard-of-hearing individuals in many countries.”