DSpace/Manakin Repository

A Deep Learning Framework for Bilingual Sign Language Interpretation


dc.contributor.author Bouchama Amel, Chihaoui Aya
dc.date.accessioned 2025-10-15T13:37:22Z
dc.date.available 2025-10-15T13:37:22Z
dc.date.issued 2025
dc.identifier.uri https://dspace.univ-guelma.dz/jspui/handle/123456789/18254
dc.description.abstract Sign language is one of the oldest and most widespread means of communication, used primarily by deaf or mute individuals. These individuals nevertheless face significant barriers to communication because of the general lack of awareness of sign language and the shortage of qualified interpreters. To address this issue, we developed a continuous recognition method based on neural networks, capable of recognizing both Arabic Sign Language (ArSL) and American Sign Language (ASL). Our system analyzes hand images directly, using a classification model to predict the correct sign category. We adopted three approaches to achieve high recognition accuracy: Convolutional Neural Networks (CNN), a combination of CNN and Long Short-Term Memory networks (CNN-LSTM), and the YOLO (You Only Look Once) detector. CNNs are deep learning models that apply learnable convolutional filters to images, enabling them to learn discriminative visual features and distinguish between signs. While CNNs are highly effective at extracting spatial features from individual frames, they do not model the temporal order of signs in continuous translation scenarios. We therefore combined CNN with LSTM, a type of recurrent neural network capable of learning long-term dependencies. The CNN-LSTM model first extracts spatial features through CNN layers, then feeds these features into LSTM layers to capture the temporal dynamics of the sign sequence. This architecture is particularly effective for continuous sign language recognition, where the order and flow of gestures are key factors. In addition, we used the YOLO algorithm for real-time object detection, allowing fast and accurate localization and classification of hand signs within a video stream. Our integrated approach delivers high accuracy and real-time responsiveness, providing an effective tool to bridge the communication gap with the deaf and hard-of-hearing community. (Illustrative sketches of the three approaches appear after this record.) en_US
dc.language.iso en en_US
dc.publisher University of Guelma en_US
dc.subject A Deep Learning Framework; Bilingual Sign Language Interpretation en_US
dc.title A Deep Learning Framework for Bilingual Sign Language Interpretation en_US
dc.type Working Paper en_US
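The abstract above names three model families but includes no implementation details. As a minimal sketch of the first, a plain CNN classifier for static hand-sign crops could look like the following (Python/Keras; the 64x64 RGB input size and the 32-class output are assumptions, not values stated in the record):

    # Minimal CNN classifier for static hand-sign images (sketch).
    # Assumptions not in the record: 64x64 RGB crops, 32 sign classes.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn(num_classes=32, input_shape=(64, 64, 3)):
        return models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),  # learnable filters pick up edges/shapes
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),  # one probability per sign
        ])

    model = build_cnn()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])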
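For continuous signing, the abstract describes a CNN-LSTM in which CNN layers extract per-frame spatial features and LSTM layers then model their temporal order. A minimal sketch of that pattern, assuming 16-frame clips (the clip length is not given in the record):

    # Minimal CNN-LSTM for continuous sign sequences (sketch).
    # A TimeDistributed CNN encodes each frame; an LSTM models gesture order.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn_lstm(num_classes=32, frames=16, frame_shape=(64, 64, 3)):
        frame_encoder = models.Sequential([
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),      # one feature vector per frame
        ])
        return models.Sequential([
            layers.Input(shape=(frames, *frame_shape)),
            layers.TimeDistributed(frame_encoder),  # apply the CNN to every frame
            layers.LSTM(64),                        # capture long-term temporal dependencies
            layers.Dense(num_classes, activation="softmax"),
        ])

    model = build_cnn_lstm()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])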
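Finally, the abstract uses YOLO for real-time localization and classification of hand signs in a video stream. A sketch using the ultralytics package (the record names no specific YOLO version, and "signs.pt" is a hypothetical weights file trained on hand-sign bounding boxes):

    # Real-time hand-sign detection with YOLO (sketch).
    # "signs.pt" is a hypothetical trained weights file, not from the record.
    from ultralytics import YOLO

    model = YOLO("signs.pt")                              # load trained detector weights
    for result in model.predict(source=0, stream=True):   # webcam, frame by frame
        for box in result.boxes:
            cls_id = int(box.cls[0])                      # predicted sign class
            conf = float(box.conf[0])                     # detection confidence
            print(result.names[cls_id], f"{conf:.2f}")

Because YOLO localizes and classifies in a single forward pass per frame, it is well suited to the real-time responsiveness the abstract claims.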

