Please use this identifier to cite or link to this item: https://dspace.univ-guelma.dz/jspui/handle/123456789/18254
Title: A Deep Learning Framework for Bilingual Sign Language Interpretation
Authors: Bouchama Amel, Chihaoui Aya
Keywords: Deep learning framework ; Bilingual sign language interpretation
Issue Date: 2025
Publisher: University of Guelma
Abstract: Sign language is one of the oldest and most widely used means of communication, used primarily by deaf and hard-of-hearing individuals. These individuals nevertheless face significant communication barriers, owing to the general lack of awareness of sign language and the shortage of qualified interpreters. To address this, we developed a continuous recognition method based on neural networks, capable of recognizing both Arabic Sign Language (ArSL) and American Sign Language (ASL). Our system analyzes hand images directly with a classification model to predict the correct sign category.

We adopted three approaches to achieve high recognition accuracy: Convolutional Neural Networks (CNN), a combination of CNN and Long Short-Term Memory networks (CNN-LSTM), and the YOLO (You Only Look Once) detector.

CNNs are deep learning models that learn convolutional filter weights directly from images, enabling them to extract discriminative spatial features and distinguish between signs. However, while CNNs are highly effective at extracting spatial features from individual frames, they do not model the temporal order of signs in continuous translation scenarios.

We therefore combined CNN with LSTM, a type of recurrent neural network capable of learning long-term dependencies. The CNN-LSTM model first extracts spatial features through CNN layers, then feeds these features into LSTM layers to capture the temporal dynamics of the sign sequence. This architecture is particularly effective for continuous sign language recognition, where the order and flow of gestures are key factors.

Finally, we used the YOLO algorithm for real-time object detection, allowing fast and accurate localization and classification of hand signs within a video stream. The integrated approach delivers accurate recognition in real time, providing an effective tool to bridge the communication gap for the deaf and hard-of-hearing community.
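A note on the CNN-LSTM pipeline: the abstract names the architecture but not a framework or input format. The sketch below is a minimal, illustrative Keras version, assuming fixed-length 16-frame clips of 64x64 RGB hand images and a hypothetical 32 sign classes; it is not the authors' implementation.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 32           # hypothetical number of sign categories
    FRAMES, H, W = 16, 64, 64  # assumed clip length and frame size

    # CNN feature extractor, applied to each frame independently.
    cnn = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
    ])

    model = models.Sequential([
        # TimeDistributed runs the CNN over every frame in a clip,
        # yielding one spatial feature vector per frame.
        layers.TimeDistributed(cnn, input_shape=(FRAMES, H, W, 3)),
        # The LSTM then models the temporal order of those vectors.
        layers.LSTM(128),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Here the CNN answers "what does each frame look like" and the LSTM answers "in what order do the frames occur", which is the division of labor the abstract describes for continuous recognition.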
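For the real-time detection stage, the abstract names YOLO but not a version or library. A hedged sketch of a live inference loop using the ultralytics package and a hypothetical custom weights file "signs.pt" (neither is confirmed by the source):

    from ultralytics import YOLO
    import cv2

    model = YOLO("signs.pt")   # hypothetical weights trained on hand-sign boxes
    cap = cv2.VideoCapture(0)  # live webcam stream

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Detect and classify hand signs in the current frame.
        results = model(frame, verbose=False)
        annotated = results[0].plot()  # draw boxes and class labels
        cv2.imshow("Sign detection", annotated)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()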
URI: https://dspace.univ-guelma.dz/jspui/handle/123456789/18254
Appears in Collections: Master

Files in This Item:
File                    Description   Size      Format
F5_8_BOUCHAMA_AMEL.pdf                4,45 MB   Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.