Please use this identifier to cite or link to this item: https://dspace.univ-guelma.dz/jspui/handle/123456789/18260
Full metadata record
DC Field [Language]: Value
dc.contributor.author: GUERGOUR, Ghada malak
dc.date.accessioned: 2025-10-15T14:23:01Z
dc.date.available: 2025-10-15T14:23:01Z
dc.date.issued: 2025
dc.identifier.uri: https://dspace.univ-guelma.dz/jspui/handle/123456789/18260
dc.description.abstract [en_US]: Effective communication between Deaf and hearing individuals remains a major societal challenge, particularly in contexts where sign language is not understood by the general population. Sign languages are complete natural languages, yet the lack of shared linguistic knowledge continues to hinder accessibility and inclusion in vital domains such as education, healthcare, and employment. In response to this issue, this thesis presents a deep learning-based system for real-time, bidirectional communication between Deaf and hearing users, using hand gesture sign language as a primary medium.

The proposed system integrates computer vision and 3D animation technologies to translate between sign language and spoken/written language. Three model architectures were implemented and evaluated: CNN-LSTM, MediaPipe-Bi-LSTM, and MediaPipe-GCN-BERT. While the MediaPipe-Bi-LSTM model achieved over 98% accuracy on isolated gesture recognition tasks, it exhibited limitations in handling longer sequences due to its memory-based structure. To overcome this, a graph-based approach was adopted, in which spatial relationships between hand landmarks were modeled using Graph Convolutional Networks (GCNs), combined with BERT embeddings for semantic context. This resulted in improved generalization and performance on complex and continuous gestures.

The system was deployed as a mobile application built with React Native and Expo, integrating real-time speech recognition and sign-to-text translation. Experimental evaluations using cross-validation, confusion matrices, and Word Error Rate (WER) confirmed the robustness, accuracy, and usability of the platform in real-time scenarios. This work contributes a significant step toward accessible and inclusive communication technology for the Deaf and hard-of-hearing communities.
dc.language.iso [en_US]: en
dc.publisher [en_US]: University of Guelma
dc.subject [en_US]: Sign Language Recognition, Deep Learning, MediaPipe, LSTM, Graph Convolutional Network, BERT, Real-Time Communication, Accessibility, Human-Centered AI
dc.title [en_US]: A Deep Learning-Based System for Bidirectional Communication between Deaf and Hearing Users using Hand Gesture Sign Language
dc.type [en_US]: Working Paper
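
Note on the approach described in the abstract: the graph-based model treats the 21 MediaPipe hand landmarks as nodes of a skeleton graph and applies graph convolutions to their coordinates. As a rough illustration only (a minimal PyTorch sketch, not the thesis's implementation; the class name, feature dimensions, and single-layer structure are assumptions), one graph convolution over the hand skeleton could look like this:

import torch
import torch.nn as nn

NUM_LANDMARKS = 21  # MediaPipe Hands returns 21 (x, y, z) points per hand

# Skeleton edges following MediaPipe's hand-landmark topology
# (thumb, four fingers, and the palm connections).
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4),
         (0, 5), (5, 6), (6, 7), (7, 8),
         (5, 9), (9, 10), (10, 11), (11, 12),
         (9, 13), (13, 14), (14, 15), (15, 16),
         (13, 17), (17, 18), (18, 19), (19, 20),
         (0, 17)]

def normalized_adjacency(n, edges):
    """Build A_hat = D^{-1/2} (A + I) D^{-1/2}, the symmetric
    normalization used in standard (Kipf & Welling) GCNs."""
    a = torch.eye(n)                  # self-loops
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d = a.sum(dim=1).rsqrt()          # D^{-1/2} as a vector
    return d[:, None] * a * d[None, :]

class HandGCNLayer(nn.Module):
    """One graph convolution over the hand skeleton: X' = ReLU(A_hat X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.register_buffer("a_hat",
                             normalized_adjacency(NUM_LANDMARKS, EDGES))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):             # x: (batch, 21, in_dim)
        return torch.relu(self.a_hat @ self.linear(x))

# Usage: 3-D landmark coordinates in, 64-dim node features out.
layer = HandGCNLayer(in_dim=3, out_dim=64)
frames = torch.randn(8, NUM_LANDMARKS, 3)    # a batch of 8 hand poses
print(layer(frames).shape)                   # torch.Size([8, 21, 64])

Stacking such layers and feeding the pooled node features to a sequence model or BERT-augmented classifier would follow the pipeline the abstract sketches; the actual architecture and hyperparameters are those of the thesis, not this illustration.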
Appears in Collections: Master

Files in This Item:
File | Description | Size | Format
F5_8_GUERGOUR_Ghada malak_1751925194 (1).pdf | - | 6,23 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.