Korean Sign Language Classification via Convolutional Neural Network
Abstract – Sign language communication consists of complex hand movements that remain challenging for current technology to recognize. This study proposes a Korean Sign Language recognition model that classifies images using deep learning, specifically a convolutional neural network (CNN). Although sign language is one of the most essential forms of communication within the deaf community, non-signers are rarely exposed to it, and the resulting cultural gap between the hearing and deaf communities persists, in part because educational resources are lacking. Meanwhile, gesture recognition continues to advance alongside progress in computer vision and deep learning. This study not only proposes a sign language detection model but also describes the construction of a Korean Sign Language database of 1,900 images. The model achieves an accuracy of 99.6% on Korean Sign Language consonants and 100% on numbers and conversational phrases. These results demonstrate the potential of technology to serve the deaf community in Korea, where sign language education needs nationwide support. The recognition models developed in this study successfully classify sign language images, showing that the technology can help bridge language barriers.
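As background for the CNN-based classification the abstract describes, the sketch below illustrates the two core operations of a convolutional network, convolution (feature extraction) and max pooling (downsampling), in plain Python on a toy input. The image, kernel, and function names are illustrative assumptions, not the paper's actual architecture or data.

```python
# Minimal sketch of the convolution and pooling building blocks of a CNN.
# The 5x5 toy image and the edge-detecting kernel are illustrative only.

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

def relu(fmap):
    """Element-wise rectified linear activation."""
    return [[max(0.0, x) for x in row] for row in fmap]

def max_pool2(fmap):
    """2x2 max pooling with stride 2 (drops a trailing odd row/column)."""
    return [
        [max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
         for j in range(0, len(fmap[0]) - 1, 2)]
        for i in range(0, len(fmap) - 1, 2)
    ]

# Toy image: dark left half, bright right half (a vertical edge at column 2).
image = [[0, 0, 1, 1, 1] for _ in range(5)]
kernel = [[-1.0, 1.0], [-1.0, 1.0]]  # responds to dark-to-bright transitions

features = max_pool2(relu(conv2d(image, kernel)))
print(features)  # -> [[2.0, 0.0], [2.0, 0.0]]
```

In a full classifier such as the one the paper proposes, several of these convolution-activation-pooling stages (with learned kernels) would be stacked and followed by a fully connected softmax layer over the sign classes.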