NithiishSD/SIGN-LANGUAGE-TRANSLATOR


Sign Language Translator using MediaPipe, TensorFlow, and OpenCV

Overview

Hey there! 👋 Welcome to the Sign Language Translator project. This application uses your webcam to recognize hand gestures in real time and translates them into text. Whether it's a static pose or a dynamic motion, this translator has you covered!

It leverages:

  • MediaPipe for precise hand landmark detection 🖐️
  • TensorFlow for robust gesture classification 🧠
  • OpenCV for seamless video processing 🎥
  • CustomTkinter to make the interface smooth and simple

🚀 Features You’ll Love

  • 👁️ Real-time gesture recognition using your webcam
  • 🖖 Support for static gestures, with dynamic gesture support in development
  • 💬 Translates gestures into text (words & sentences)
  • 🎯 High accuracy with advanced normalization techniques
  • 🖼️ User-friendly GUI built with CustomTkinter
  • ⚡ Multiprocessing support for smooth performance

Tech Stack

  • Python 🐍
  • OpenCV 📸
  • MediaPipe ✋
  • TensorFlow 🧠
  • NumPy 🔢
  • CustomTkinter (CTk) 🖥️

🛠️ Get Started

Ready to give it a spin? Follow these simple steps:

  1. Clone the Repository
git clone https://github.com/your_username/sign-language-translator.git
cd sign-language-translator

  2. Set Up Your Environment
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate

  3. Install Dependencies
pip install -r requirements.txt

  4. Run the App
Ensure your webcam is connected, then:
python main.py

  5. Start Gesturing!
  • Choose between live video or recorded video input.
  • Perform sign language gestures in front of the camera.
  • Watch your gestures transform into text on the screen!

Note: Make sure you update the image and other file paths in the code to avoid missing-file errors.

🧑‍🔬 How the Model Works

The project uses a custom dataset of hand gestures, captured using MediaPipe's Hand Landmark Model. The dataset was labeled and used to train a neural network model (MLP or LSTM) in TensorFlow for accurate classification.
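To make the dataset idea concrete, here is a minimal sketch of how MediaPipe's 21 (x, y, z) hand landmarks can be flattened into a fixed-size feature vector for a classifier. The names here (GESTURE_LABELS, landmarks_to_features) are illustrative assumptions, not the project's actual identifiers:

```python
import numpy as np

# Hypothetical label set for illustration only.
GESTURE_LABELS = ["hello", "yes", "no"]

def landmarks_to_features(landmarks):
    """Flatten a (21, 3) array of hand landmarks into a 63-dim vector.

    MediaPipe's Hand Landmark Model yields 21 (x, y, z) points per hand;
    a static-gesture classifier can consume them as one flat vector.
    """
    pts = np.asarray(landmarks, dtype=np.float32)
    assert pts.shape == (21, 3), "expected 21 (x, y, z) hand landmarks"
    return pts.reshape(-1)

# A static-gesture dataset is then a matrix of such vectors plus labels:
X = np.stack([landmarks_to_features(np.random.rand(21, 3)) for _ in range(4)])
y = np.array([0, 1, 2, 0])  # indices into GESTURE_LABELS
print(X.shape)  # (4, 63)
```

An MLP trained on such vectors handles static poses; dynamic gestures would instead feed a sequence of these vectors to an LSTM.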

🔎 Normalization Techniques

To improve accuracy, the model applies four key normalization techniques:

  1. Wrist-Relative Normalization 🏁
    • Aligns all landmarks relative to the wrist to remove positional bias.
  2. Bone Length Normalization 🦴
    • Scales landmarks using the distance from the wrist to the middle finger MCP, so hand size doesn't affect recognition.
  3. Z-Score Normalization 📊
    • Standardizes landmark values to reduce noise and improve model performance.
  4. Finger Spread Normalization ✋
    • Normalizes using fingertip distances, making gestures more comparable.
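The four techniques above can be sketched as small NumPy functions over a (21, 3) landmark array. Indices follow MediaPipe's hand-landmark order (0 = wrist, 9 = middle-finger MCP, 4/8/12/16/20 = fingertips); the function names are my own, not the project's:

```python
import numpy as np

WRIST, MIDDLE_MCP = 0, 9
FINGERTIPS = [4, 8, 12, 16, 20]  # thumb..pinky tips in MediaPipe order

def wrist_relative(pts):
    """1. Translate all landmarks so the wrist sits at the origin."""
    return pts - pts[WRIST]

def bone_length_scale(pts):
    """2. Scale by the wrist-to-middle-MCP distance to cancel hand size."""
    scale = np.linalg.norm(pts[MIDDLE_MCP] - pts[WRIST])
    return pts / (scale + 1e-8)

def z_score(pts):
    """3. Standardize each coordinate axis to zero mean, unit variance."""
    return (pts - pts.mean(axis=0)) / (pts.std(axis=0) + 1e-8)

def finger_spread_scale(pts):
    """4. Normalize by the mean fingertip-to-wrist distance."""
    spread = np.linalg.norm(pts[FINGERTIPS] - pts[WRIST], axis=1).mean()
    return pts / (spread + 1e-8)

pts = np.random.rand(21, 3).astype(np.float32)
norm = z_score(bone_length_scale(wrist_relative(pts)))
print(norm.shape)  # (21, 3)
```

How the project actually chains these steps is not specified here; the composition shown (wrist-relative, then bone-length, then z-score) is just one plausible ordering.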

Contact

Have questions or suggestions? Drop me an email at nithiishdanasekar@gmail.com

Final note: This project is still in development; it currently works for static gestures only, not for dynamic gestures. Please contact me with any queries, I am always available! Happy Coding! 💻✨
