Empowering Inclusivity: Real-time Sign Language Processing

Abstract

True inclusivity means ensuring every voice, spoken or signed, is heard; this thesis empowers the deaf and hard-of-hearing by translating American Sign Language (ASL) into text in real time. It presents a machine-learning system that recognizes static gestures (the ASL alphabet, 100 images per letter) with a Random Forest classifier and dynamic gestures ("hello," etc., 30 videos per sign) with an LSTM network. MediaPipe and OpenCV extract key hand and body landmarks that feed both models, enabling robust real-time recognition of both static and dynamic ASL signs.
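
The static branch of the pipeline can be sketched roughly as follows: MediaPipe Hands extracts 21 hand landmarks per image, OpenCV handles image loading and colour conversion, and a Random Forest learns the ASL alphabet from the flattened landmark coordinates. This is a minimal sketch under stated assumptions: the scikit-learn classifier, the data/<letter>/ folder layout, the extract_landmarks helper, and the hyperparameters are illustrative, not taken from the thesis.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

mp_hands = mp.solutions.hands

def extract_landmarks(image_bgr, hands):
    """Flatten the 21 MediaPipe hand landmarks of one hand into [x0, y0, x1, y1, ...]."""
    results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    landmarks = results.multi_hand_landmarks[0].landmark
    return np.array([c for p in landmarks for c in (p.x, p.y)])

# Hypothetical dataset layout: data/<letter>/<image>.jpg, roughly 100 images per letter.
samples = [("data/A/img_001.jpg", "A"), ("data/B/img_001.jpg", "B")]  # ... and so on

features, labels = [], []
with mp_hands.Hands(static_image_mode=True, max_num_hands=1,
                    min_detection_confidence=0.5) as hands:
    for path, label in samples:
        image = cv2.imread(path)
        if image is None:          # skip unreadable files
            continue
        vec = extract_landmarks(image, hands)
        if vec is not None:        # skip frames where no hand was detected
            features.append(vec)
            labels.append(label)

# Train and evaluate the static-gesture classifier on the landmark vectors.
X_train, X_test, y_train, y_test = train_test_split(
    np.array(features), np.array(labels), test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Static-gesture accuracy:", clf.score(X_test, y_test))
```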

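For the dynamic gestures, the description above maps to a stacked-LSTM sequence classifier over per-frame landmark vectors, sketched here with TensorFlow/Keras (TensorFlow appears in the keywords). The sequence length, feature dimensionality, class list, layer sizes, and the random placeholder data are illustrative assumptions, not the thesis's actual configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 30       # frames sampled per video clip (assumed)
N_FEATURES = 126   # e.g. 2 hands x 21 landmarks x (x, y, z) coordinates (assumed)
CLASSES = ["hello", "thanks", "iloveyou"]   # example dynamic signs

# Stacked LSTM layers read the landmark sequence; dense layers map to sign classes.
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for roughly 30 landmark sequences per sign.
X = np.random.rand(90, SEQ_LEN, N_FEATURES).astype("float32")
y = tf.keras.utils.to_categorical(
    np.random.randint(0, len(CLASSES), size=90), num_classes=len(CLASSES))
model.fit(X, y, epochs=5, batch_size=8, validation_split=0.2)
```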
Keywords
Sign Language Recognition, American Sign Language, Image Processing, LSTM Networks, Neural Networks, Random Forest Classifier, Isolated Sign Language, Continuous Sign Language, Computer Vision, MediaPipe, OpenCV, TensorFlow