| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | GUERGOUR, Ghada malak | |
| dc.date.accessioned | 2025-10-15T14:23:01Z | |
| dc.date.available | 2025-10-15T14:23:01Z | |
| dc.date.issued | 2025 | |
| dc.identifier.uri | https://dspace.univ-guelma.dz/jspui/handle/123456789/18260 | |
| dc.description.abstract | Effective communication between Deaf and hearing individuals remains a major societal challenge, particularly in contexts where sign language is not understood by the general population. Sign languages are complete natural languages, yet the lack of shared linguistic knowledge continues to hinder accessibility and inclusion in vital domains such as education, healthcare, and employment. In response to this issue, this thesis presents a deep learning-based system for real-time, bidirectional communication between Deaf and hearing users, using hand gesture sign language as a primary medium. The proposed system integrates computer vision and 3D animation technologies to translate between sign language and spoken/written language. Three model architectures were implemented and evaluated: CNN-LSTM, MediaPipe-Bi-LSTM, and MediaPipe-GCN-BERT. While the MediaPipe-Bi-LSTM model achieved over 98% accuracy on isolated gesture recognition tasks, it exhibited limitations in handling longer sequences due to its memory-based structure. To overcome this, a graph-based approach was adopted, in which spatial relationships between hand landmarks were modeled using Graph Convolutional Networks (GCNs), combined with BERT embeddings for semantic context. This resulted in improved generalization and performance on complex and continuous gestures. The system was deployed as a mobile application built with React Native and Expo, integrating real-time speech recognition and sign-to-text translation. Experimental evaluations using cross-validation, confusion matrices, and Word Error Rate (WER) confirmed the robustness, accuracy, and usability of the platform in real-time scenarios. This work contributes a significant step toward accessible and inclusive communication technology for the Deaf and hard-of-hearing communities. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | University of Guelma | en_US |
| dc.subject | Sign Language Recognition, Deep Learning, MediaPipe, LSTM, Graph Convolutional Network, BERT, Real-Time Communication, Accessibility, Human-Centered AI | en_US |
| dc.title | A Deep Learning-Based System for Bidirectional Communication between Deaf and Hearing Users using Hand Gesture Sign Language | en_US |
| dc.type | Working Paper | en_US |
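
The abstract describes modeling the spatial relationships between hand landmarks with Graph Convolutional Networks. The sketch below illustrates that idea under stated assumptions: a small PyTorch GCN over the 21 MediaPipe Hands landmarks with a fixed hand-skeleton adjacency. The layer widths, classifier head, and 30-sign vocabulary are illustrative assumptions, not the thesis's actual architecture (which also incorporates BERT embeddings, omitted here).

```python
# Minimal sketch of a GCN over the 21 MediaPipe Hands landmarks.
# Hyperparameters and the classifier head are illustrative assumptions.
import torch
import torch.nn as nn

# Bone connections between MediaPipe Hands' 21 landmarks.
HAND_EDGES = [
    (0, 1), (1, 2), (2, 3), (3, 4),                    # thumb
    (0, 5), (5, 6), (6, 7), (7, 8),                    # index
    (5, 9), (9, 10), (10, 11), (11, 12),               # middle
    (9, 13), (13, 14), (14, 15), (15, 16),             # ring
    (13, 17), (0, 17), (17, 18), (18, 19), (19, 20),   # pinky + palm
]

def normalized_adjacency(num_nodes: int, edges) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = torch.eye(num_nodes)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class HandGCN(nn.Module):
    """Two graph-convolution layers over landmark coordinates, mean-pooled into gesture logits."""

    def __init__(self, num_classes: int, in_feats: int = 3, hidden: int = 64):
        super().__init__()
        self.register_buffer("adj", normalized_adjacency(21, HAND_EDGES))
        self.gc1 = nn.Linear(in_feats, hidden)
        self.gc2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 21, 3) landmark coordinates from MediaPipe Hands.
        h = torch.relu(self.adj @ self.gc1(x))   # aggregate features along hand bones
        h = torch.relu(self.adj @ self.gc2(h))
        return self.head(h.mean(dim=1))          # pool over landmarks -> class logits

model = HandGCN(num_classes=30)                  # 30-sign vocabulary is assumed
logits = model(torch.randn(8, 21, 3))
print(logits.shape)                              # torch.Size([8, 30])
```

Because the adjacency is fixed by the hand skeleton, it can be precomputed once and registered as a buffer, which is why the graph structure adds essentially no per-frame cost in a real-time setting.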
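
The abstract also cites Word Error Rate (WER) as an evaluation metric for the sign-to-text output. As a reference point, here is a minimal, generic WER computation (word-level Levenshtein distance divided by reference length); it is the standard definition of the metric, not code from the thesis.

```python
# Standard WER: word-level edit distance / reference word count.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substitution over a four-word reference -> WER = 0.25.
print(word_error_rate("I am very happy", "I am so happy"))
```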