Developing Convolutional Neural Networks for Recognition of American Sign Language

Abstract
Rather than using speech, deaf and mute people communicate through a set of gestures known as sign language. However, interacting through signs is difficult for those who do not know the language. To facilitate communication with the deaf community, applications that can identify sign language gestures are needed. Given the importance of this problem, existing approaches recognize American Sign Language (ASL) with varying degrees of accuracy. This study aims to improve on current ASL recognition approaches by proposing a deep-learning model: a convolutional neural network (CNN) developed and trained to recognize the hand gestures representing the ASL letters (A-Z). The proposed model performs exceptionally well, attaining a test accuracy of 99.97% on the dataset. Since the results show that it can reliably distinguish between distinct ASL hand signs, the model is a promising tool for practical assistive-technology applications for the hearing impaired.
Keywords: hand gesture, American Sign Language, CNN, sign recognition, ASL letters.
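
The full architecture is only available in the article itself, so the following is a minimal, hypothetical sketch of the kind of CNN letter classifier the abstract describes, assuming TensorFlow/Keras, 64x64 grayscale inputs, and 26 output classes (A-Z); the authors' actual layer configuration, input size, and training setup may differ.

    # Hypothetical CNN sketch for ASL letter classification (A-Z).
    # Assumptions (not from the paper): TensorFlow/Keras, 64x64
    # grayscale inputs, 26 classes; the published model may differ.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 26           # letters A-Z
    INPUT_SHAPE = (64, 64, 1)  # assumed grayscale input size

    def build_model():
        model = models.Sequential([
            layers.Input(shape=INPUT_SHAPE),
            # Three conv/pool stages extract increasingly abstract
            # hand-shape features while shrinking spatial size.
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),  # guard against overfitting
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_model()
    model.summary()
    # Training would then look like:
    # model.fit(x_train, y_train, validation_split=0.1, epochs=20)

A softmax output over 26 classes with cross-entropy loss is the standard choice for single-letter classification; the dropout layer is a common guard against overfitting on comparatively small gesture datasets.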

Farah Jawad Al-Ghanim1, Salwa Shakir Baawi2 and Nisreen Ryadh Hamza3
1,3 Department of Computer Science, College of Computer Science and Information Technology, University of Al-Qadisiyah.
2 Department of Computer Information Systems, College of Computer Science and Information Technology, University of Al-Qadisiyah.
Corresponding authors: farah.jawad@qu.edu.iq, salwa.baawi@qu.edu.iq, nesreen.readh@qu.edu.iq
Received 12 Feb. 2025; accepted 21 Mar. 2025; published 30 June 2025.
