Sign Language Recognition

Authors

  • Tanaya Ingle, Shwetali Daware*, Neha Kumbhar, Komal Raut, Prachi Waghmare, Dhammjyoti Dhawase

Keywords

Hand Gestures; Sign Language Recognition; Convolutional Neural Networks

Abstract

It is difficult to imagine a world without communication, whether it takes the form of speech, text, or visual expression. Gesture-based communication also has the advantage of being private and discreet. Because some gestures are composites of multiple static gestures, sign language recognition (SLR) falls within the category of dynamic hand gesture recognition. There are numerous sign languages throughout the world, each typically used in a particular country. People with hearing and speech impairments connect with others through several modes of communication, of which sign language is one of the most widely used. The main goal of this project is to build a machine-learning model that classifies the hand gestures used for fingerspelling in sign language. In this user-independent model, the American Sign Language dataset is used to train the machine-learning classification algorithms, and the entire dataset is used for testing. This problem can be addressed with a technique for classifying fingerspelling in sign language. In this study, multiple machine-learning algorithms are employed, and their accuracies are recorded and compared. The objective of this project is to take a first important step toward using sign language recognition to bridge the communication gap between hearing and deaf or hard-of-hearing individuals.
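As a concrete illustration of the kind of classifier the abstract describes, the sketch below builds a small convolutional network for static ASL fingerspelling images. It is a minimal, hypothetical example, not the authors' implementation: the 28x28 grayscale input shape and the 24 output classes (static letters, excluding the dynamic J and Z) follow the common Sign Language MNIST layout and are assumptions, as is the use of TensorFlow/Keras.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 24  # assumption: static ASL letters, J and Z excluded as dynamic gestures

def build_model(input_shape=(28, 28, 1)):
    # Small CNN: two conv/pool stages followed by a dense classifier head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),  # regularization against overfitting on small datasets
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=10)
# x_train: array of shape (N, 28, 28, 1), pixel values scaled to [0, 1];
# y_train: integer class labels in [0, NUM_CLASSES).

The same train/evaluate loop can be repeated with other classifiers (e.g., scikit-learn's SVM or k-NN on flattened pixels) to compare accuracies, as the abstract describes.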

Published

2023-03-14

How to Cite

Tanaya Ingle, Shwetali Daware*, Neha Kumbhar, Komal Raut, Prachi Waghmare, Dhammjyoti Dhawase. (2023). Sign Language Recognition. SJIS-P, 35(1), 294–298. Retrieved from http://sjis.scandinavian-iris.org/index.php/sjis/article/view/284

Issue

Vol. 35 No. 1 (2023)

Section

Articles