Real-Time Vision-Based Sign Language Bilateral Communication Device for Signers and Non-Signers using Convolutional Neural Network

Uriah Sampaga *, Andrea Louise J. Toledo, Mikayla Assyria L. Dela Peret, Luisito M. Genodiala, Sheika Rania D. Aguilar, Gellie Anne M. Antoja, Charles G. Juarizo and Eufemia A. Garcia

Department of Electronics Engineering, College of Engineering, Pamantasan ng Lungsod ng Maynila, Manila, Philippines.

Research Article
 

World Journal of Advanced Research and Reviews, 2023, 18(03), 934–943
Article DOI: 10.30574/wjarr.2023.18.3.1169
DOI url: https://doi.org/10.30574/wjarr.2023.18.3.1169

Received on 07 May 2023; revised on 15 June 2023; accepted on 17 June 2023

Sign language is an important means of communication for individuals with hearing and speech impairments, but communication barriers can still arise because grammatical rules differ across sign languages. To address these barriers, this study developed a real-time two-way communication device that uses image processing and recognition to translate two-handed Filipino Sign Language (FSL) gestures and facial expressions into speech; the system recognizes gestures that correspond to specific words and phrases. The researchers used Convolutional Neural Networks (CNNs) to improve the device's processing speed and accuracy. The system also includes a speech-to-text (STT) feature that lets non-signers communicate with deaf individuals without relying on an interpreter. The device achieved a 93% accuracy rate in recognizing facial expressions and FSL gestures with the CNN, indicating high accuracy. It also performed in real time, with overall average conversion times of 1.84 seconds for sign language to speech and 2.74 seconds for speech to text. Finally, the device was well received by both signers and non-signers, earning a total approval rating of 85.50% from participants at Manila High School, which suggests that it effectively facilitates two-way communication and can help break down communication barriers.
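As context for the recognition approach the abstract describes, the sketch below shows a minimal image-classification CNN in Keras of the kind that could map a preprocessed camera frame of an FSL gesture to a word or phrase label. It is an illustration only: the layer sizes, 64x64 input resolution, and ten-class gesture vocabulary are assumptions, not the architecture reported in the paper.

# Minimal sketch of a CNN gesture classifier (illustrative; not the paper's model).
# Assumptions: 64x64 RGB input frames and a 10-word FSL vocabulary.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10        # assumed number of recognized FSL words/phrases
IMG_SIZE = (64, 64)     # assumed frame resolution after preprocessing

def build_model():
    model = models.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.Rescaling(1.0 / 255),                      # normalize pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per gesture class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_model()
    # Dummy frame standing in for a preprocessed camera image of a two-handed gesture.
    frame = np.random.rand(1, *IMG_SIZE, 3).astype("float32")
    probs = model.predict(frame)
    print("predicted class:", int(np.argmax(probs)))

In the published device, a classification step of this kind would be paired with a text-to-speech stage for the signer-to-non-signer direction and a speech-to-text engine for the reverse direction, as described in the abstract.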

Keywords: Filipino Sign Language; Two-Way Communication; Facial Expression Recognition; Convolutional Neural Networks

https://wjarr.co.in/sites/default/files/fulltext_pdf/WJARR-2023-1169.pdf


Uriah Sampaga, Andrea Louise J. Toledo, Mikayla Assyria L. Dela Peret, Luisito M. Genodiala, Sheika Rania D. Aguilar, Gellie Anne M. Antoja, Charles G. Juarizo and Eufemia A. Garcia. Real-Time Vision-Based Sign Language Bilateral Communication Device for Signers and Non-Signers using Convolutional Neural Network. World Journal of Advanced Research and Reviews, 2023, 18(03), 934–943. Article DOI: https://doi.org/10.30574/wjarr.2023.18.3.1169

Copyright © 2023. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.
