Author: Ahmed Soliman
Mentor: Mohammad Azhar
Institution: BMCC
Abstract: The COVID-19 pandemic has changed the way we learn and receive an education. One of the unique challenges that people with hearing impairments have faced during the pandemic is communication in remote education environments. Current e-learning virtual reality platforms do not provide user-friendly support, such as American Sign Language (ASL), for people with hearing or speech disabilities. Our research explores how Augmented Reality (AR) and Virtual Reality (VR) can be applied to assist learners with hearing or speech disabilities. We developed a prototype that recognizes and translates English-alphabet sign language letters from a real-time computer camera feed based on ASL standards. This ASL interpreter application can be used by deaf people to translate their thoughts or to ask for medical assistance. Our prototype demonstrated that a computer camera can be used to translate hand gestures. Currently, we are integrating our ASL interpreter with the VR environment. We are also exploring ways to extract sign language hand gestures from camera input and use machine learning to translate these gestures into written or spoken words, which would greatly help deaf and hearing people communicate freely. In the future, we will work toward a mobile application prototype that uses artificial intelligence to recognize patterns specific to hand and body gestures and to translate spoken words into sign language and vice versa. The overall goal of the project is to enhance communication for deaf people on e-learning VR platforms.
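As a minimal sketch of the letter-recognition step described above (all names and template values here are hypothetical illustrations, not the prototype's actual implementation), hand-landmark features extracted from a camera frame could be classified against per-letter templates with a simple nearest-centroid rule:

```python
import numpy as np

# Hypothetical per-letter templates: each ASL letter is represented by a
# centroid of normalized hand-landmark feature vectors. Toy 3-D features
# are used here; a real system would use many more landmark coordinates
# extracted from the camera feed by a hand-tracking model.
TEMPLATES = {
    "A": np.array([0.9, 0.1, 0.1]),
    "B": np.array([0.1, 0.9, 0.1]),
    "C": np.array([0.1, 0.1, 0.9]),
}

def normalize(features):
    """Scale a feature vector to unit length so hand size does not matter."""
    v = np.asarray(features, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def classify_letter(features):
    """Return the template letter whose centroid is nearest to the input."""
    v = normalize(features)
    return min(
        TEMPLATES,
        key=lambda letter: np.linalg.norm(normalize(TEMPLATES[letter]) - v),
    )
```

For example, `classify_letter([0.8, 0.2, 0.1])` returns `"A"` because the normalized input lies closest to the "A" centroid. A trained machine learning classifier would replace the hand-built templates in a real pipeline.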