Growing up, I always experienced a communication barrier with my grandfather’s brother, who is hard of hearing. At family gatherings, only the select few who understood Arabic Sign Language (ArSL) could communicate with him. This has been frustrating, as he has many adventures and stories that remain untold and misunderstood by most of our family.
While systems exist that translate American Sign Language (ASL) into English, Arabic Sign Language remains underrepresented in sign language translation technology. This disparity not only limits communication within diverse linguistic communities but also highlights the urgent need for inclusive technology that bridges linguistic and sensory gaps.
My goal is to develop a real-time translation system for Arabic Sign Language that combines pose estimation with proximity sensing, enabling direct communication for ArSL users by translating their signs into written Arabic. I plan to investigate machine learning models that specialize in pose estimation, which will require further research.
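To make the idea concrete, one simple way a pose-estimation pipeline could recognize a sign is to represent each sign as a sequence of keypoint frames and classify an observed sequence by its distance to stored templates. The sketch below is purely illustrative: the sign labels, templates, and keypoints are made-up toy data, and a real system would obtain landmarks from an actual pose-estimation model rather than hard-coded coordinates.

```python
import math

# Hypothetical sketch: classify a sign from a sequence of hand-keypoint
# frames via nearest-neighbor distance to stored template sequences.
# In a real system, keypoints would come from a pose-estimation model;
# here they are toy 2-D points, and the labels are invented examples.

def frame_distance(a, b):
    """Mean Euclidean distance between corresponding keypoints of two frames."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def sequence_distance(seq_a, seq_b):
    """Average per-frame distance (assumes equal-length, aligned sequences)."""
    return sum(frame_distance(f, g) for f, g in zip(seq_a, seq_b)) / len(seq_a)

def classify_sign(sequence, templates):
    """Return the label of the template sequence closest to the observation."""
    return min(templates, key=lambda label: sequence_distance(sequence, templates[label]))

# Toy templates: two "signs", each a 2-frame sequence of 2 keypoints.
templates = {
    "shukran": [[(0.1, 0.1), (0.2, 0.2)], [(0.3, 0.1), (0.4, 0.2)]],
    "marhaba": [[(0.8, 0.8), (0.9, 0.9)], [(0.7, 0.8), (0.6, 0.9)]],
}

observed = [[(0.12, 0.09), (0.21, 0.19)], [(0.31, 0.12), (0.39, 0.21)]]
print(classify_sign(observed, templates))  # prints "shukran"
```

A production system would need far more: temporal alignment (e.g. dynamic time warping) for sequences of different lengths, a learned classifier instead of raw templates, and a large labeled ArSL dataset, but the nearest-neighbor structure illustrates the core recognition step.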