Authors: Fazekas, Attila; Dakhli, Wiem
Dates: 2025-06-30; 2025-06-30; 2025-04-30
URI: https://hdl.handle.net/2437/395084
Abstract: True inclusivity means ensuring that every voice, spoken or signed, is heard; this thesis empowers the deaf and hard of hearing by translating ASL into digital language. It presents a machine-learning system that recognizes static gestures (the ASL alphabet, 100 images each) with a Random Forest classifier and dynamic gestures ("hello," etc., 30 videos each) with an LSTM network. MediaPipe and OpenCV extract key hand and body landmarks to feed both models, achieving robust real-time recognition of both static and dynamic ASL.
Extent: 65
Language: en
Keywords: Sign Language recognition; American Sign Language; Image Processing; LSTM networks; Neural Networks; Random Forest Classifier; Isolated Sign language; Continuous Sign language; Computer Vision; MediaPipe; OpenCV; TensorFlow
Title: Empowering Inclusivity: Real-time Sign Language Processing
Subject: Informatics::Computer Science
Access note: Accessible under the December 2022 amendment to the Higher Education Act.
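The pipeline the abstract describes — landmarks extracted per frame, with a single flattened frame feeding the Random Forest and a fixed-length frame sequence feeding the LSTM — can be sketched roughly as below. This is an illustrative shape-level sketch only, not the thesis's code: the 21-landmark, 3-coordinate layout follows MediaPipe's hand-landmark convention, and the 30-frame window per dynamic gesture is an assumption.

```python
# Illustrative sketch: shaping MediaPipe-style hand landmarks into the
# two model inputs described in the abstract (static -> Random Forest,
# dynamic -> LSTM). Dummy data stands in for real MediaPipe output.

NUM_LANDMARKS = 21   # MediaPipe Hands yields 21 points per detected hand
COORDS = 3           # each landmark has (x, y, z)
SEQ_LEN = 30         # frames per dynamic-gesture clip (assumed here)

def flatten_frame(landmarks):
    """One frame of landmarks -> flat feature vector (static classifier input)."""
    return [coord for point in landmarks for coord in point]

def stack_sequence(frames):
    """SEQ_LEN frames -> (SEQ_LEN, features) array-of-lists (LSTM input)."""
    assert len(frames) == SEQ_LEN, "dynamic gestures use fixed-length clips"
    return [flatten_frame(frame) for frame in frames]

# Dummy landmarks in place of a real MediaPipe detection result:
frame = [(0.0, 0.0, 0.0)] * NUM_LANDMARKS

static_features = flatten_frame(frame)            # 21 * 3 = 63 features
dynamic_input = stack_sequence([frame] * SEQ_LEN)  # 30 x 63 sequence
```

In practice the static vector would go to a fitted Random Forest's `predict`, and the stacked sequence (batched) to an LSTM's input layer; only the input shaping is shown here.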