May 17, 2021

Sign Language Recognition

A real-time computer vision system that interprets hand gestures into readable text using CNN-based image classification.

Paper Image 01

Year

2021

Author

A Abhishek

Category

Research Paper

Institution

Reva University
Paper Title

Sign Language Recognition using Convolutional Neural Networks in Machine Learning.

Other Authors

Anusha S.N, Arshia George & Aishwarya Girish Menon.

Project Type

Mini Project

Conference

3rd International Conference on Advances in Computing & Information Technology (IACIT - 2021)

Paper Image 02
Abstract

This research explores a real-time system that translates sign language gestures into readable text using computer vision and deep learning. By applying convolutional neural networks to camera input, the system recognizes hand gestures and converts them into English alphabet letters or words, improving communication accessibility between hearing-impaired individuals and others.

Research Context

Communication barriers exist between sign language users and the general population due to the lack of shared understanding. The project aimed to bridge this gap by developing an assistive AI-based solution capable of recognizing gestures from live video feeds and converting them into natural language output. The work contributes to accessibility-focused technology using machine learning.

Contribution

Role: Team Member (4-member group)

Responsibilities included:

  • Assisting in model research and algorithm understanding

  • Supporting implementation using Python-based ML tools

  • Participating in dataset preparation and preprocessing

  • Contributing to testing and validation

  • Documentation and project reporting

Paper Image 03
Methodology
  • Image/video captured via camera input

  • Preprocessing performed using OpenCV and NumPy

  • Feature extraction and classification using CNN

  • Model built using Python, TensorFlow, and Keras

  • Training with open-source or custom sign-gesture datasets

  • Prediction outputs mapped to ASL letters/numbers

  • NLP used to construct readable text output
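
The capture-preprocess-classify-decode flow above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real pipeline uses OpenCV for grayscale conversion and resizing and a trained Keras CNN for classification, while here those steps are simulated with NumPy (the `ASL_LABELS` set and the 64×64 input size are assumptions for the example).

```python
import numpy as np

# Hypothetical label set: one class per ASL letter (the paper's exact
# label set is not specified in this summary).
ASL_LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def preprocess_frame(frame: np.ndarray, size: int = 64) -> np.ndarray:
    """Turn an RGB camera frame into a normalized grayscale square.

    In the actual system this step would use OpenCV
    (cv2.cvtColor + cv2.resize); it is simulated with NumPy here
    to keep the sketch dependency-light.
    """
    # Luminance-weighted grayscale conversion.
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour downsample to size x size.
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = gray[np.ix_(rows, cols)]
    # Scale pixel values into [0, 1] as CNN input.
    return (small / 255.0).astype(np.float32)

def decode_prediction(probs: np.ndarray) -> str:
    """Map the CNN's softmax output vector to an ASL letter."""
    return ASL_LABELS[int(np.argmax(probs))]

# Example: a dummy 480x640 RGB frame stands in for camera input,
# and a dummy probability vector stands in for the CNN's output.
frame = np.random.randint(0, 256, (480, 640, 3)).astype(np.float32)
x = preprocess_frame(frame)
probs = np.zeros(26)
probs[7] = 1.0  # pretend the model is confident it saw "H"
letter = decode_prediction(probs)
```

In the full system, `x` would be fed to the trained CNN and the decoded letters accumulated into words for the readable text output.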

Key Findings
  • Demonstrated real-time gesture classification

  • Accurate recognition of ASL characters from visual input

  • Showed feasibility of deep learning for assistive communication tools

  • Validated CNN effectiveness for image-based gesture recognition

Impact
  • Developed practical understanding of computer vision pipelines

  • Gained exposure to neural network training workflows

  • Learned importance of dataset quality and preprocessing

  • Strengthened collaboration and documentation skills

  • Sparked interest in human-centered technology design

Paper Image 04

Let's Work

Together

Based in Bengaluru,

Karnataka, India

UI/UX Designer

+Framer Developer

I design clean, user-centered digital experiences as a UI/UX designer and Framer developer. If you’re building something meaningful, let’s connect and bring it to life.
