
🌟 MoodLyft Mirror Input Emotion Analyzer 🌟

Elevating your mood with intelligent emotion detection



πŸ“± What is MoodLyft Mirror Input Emotion Analyzer?

The MoodLyft Mirror Input Emotion Analyzer is a comprehensive emotion detection tool that leverages AI to:

  • Analyze Emotions in Media: Detect emotions in static images and videos by processing files placed in designated input folders.
  • Provide Uplifting and Personalized Compliments: Generate tailored compliments based on detected emotions to enhance user experience.
  • Organize Media with a Structured Input/Output Workflow: Place inputs in dedicated directories and retrieve analyzed outputs effortlessly.

"Enhance your day by visualizing and understanding your emotions through your media!"


πŸ“š Table of Contents

  1. ✨ Features
  2. 🦾 Tech Stack
  3. πŸ“Έ Screenshots
  4. 🧩 Try the App
  5. πŸ‘¨β€πŸ”§ Setup Instructions
  6. 🎯 Target Audience
  7. 🀝 Contributing
  8. πŸ“œ License

✨ Features

Emotion Detection

  • Analyze Emotions in Media: Detect emotions in uploaded images and videos using advanced AI algorithms.
  • Display Dominant Emotions: Identify and display emotions such as happiness, sadness, anger, and more (see the sketch below).
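
A minimal sketch of how the FER library can surface a dominant emotion for a single image; the file path is illustrative:

    # Detect the dominant emotion in one image with FER.
    import cv2
    from fer import FER

    detector = FER(mtcnn=True)  # MTCNN yields more accurate face boxes
    image = cv2.imread("Input/Images/sample.jpg")  # illustrative path

    emotion, score = detector.top_emotion(image)   # e.g. ("happy", 0.93)
    if emotion is not None:
        print(f"Dominant emotion: {emotion} ({score:.0%} confidence)")
    else:
        print("No face detected")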

Personalized Compliments

  • Tailored Uplifting Messages: Generate intelligent compliments based on the detected emotions.
  • Text-to-Speech (TTS) Functionality: Deliver compliments audibly for an enhanced interactive experience (see the sketch below).
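
One way the spoken compliment could be produced with pyttsx3; the compliment table here is a hypothetical stand-in, not the project's actual wording:

    # Speak a compliment matched to the detected emotion via pyttsx3.
    import pyttsx3

    COMPLIMENTS = {  # hypothetical mapping for illustration
        "happy": "Your smile lights up the room!",
        "sad": "Tough moments pass; you're stronger than you think.",
        "angry": "Take a breath; your calm focus is powerful.",
    }

    def speak_compliment(emotion: str) -> None:
        text = COMPLIMENTS.get(emotion, "You're doing great today!")
        engine = pyttsx3.init()   # selects the platform's default TTS driver
        engine.say(text)
        engine.runAndWait()       # blocks until speech finishes

    speak_compliment("happy")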

Organized Input/Output Workflow

  • Structured Directories: Place your input media in designated folders and retrieve analyzed outputs from organized directories.
  • Seamless Processing: Easily manage and process multiple media files with minimal effort (a directory-walking sketch follows).
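
A sketch of how the batch workflow could walk the folders, assuming the directory names used later in this README; process_image is a hypothetical stand-in for the analysis step:

    # Mirror every image from the input folder into the output folder.
    from pathlib import Path

    INPUT_IMAGES = Path("Input/Images")
    OUTPUT_IMAGES = Path("Output/analyzedImages")

    def process_all_images(process_image) -> None:
        OUTPUT_IMAGES.mkdir(parents=True, exist_ok=True)
        for src in sorted(INPUT_IMAGES.iterdir()):
            if src.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
                continue  # skip non-image files
            dst = OUTPUT_IMAGES / src.name
            process_image(src, dst)  # analyze and write the annotated copy
            print(f"Analyzed {src.name} -> {dst}")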

🦾 Tech Stack

🌐 Core Technologies

  • Python: Core programming language.
  • OpenCV: For image and video processing and face detection.
  • FER: Facial Expression Recognition library for emotion analysis.
  • Pillow: For enhanced text rendering and UI effects (see the overlay sketch below).
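
As an example of Pillow's role, a sketch of overlaying a compliment banner on an analyzed image; the font and placement are illustrative, not the project's exact styling:

    # Draw a compliment banner onto an image with Pillow.
    from PIL import Image, ImageDraw, ImageFont

    def annotate(path_in: str, path_out: str, text: str) -> None:
        img = Image.open(path_in).convert("RGB")
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()  # swap in a TTF for nicer output
        draw.rectangle((0, 0, img.width, 40), fill=(0, 0, 0))  # banner strip
        draw.text((10, 10), text, font=font, fill=(255, 255, 255))
        img.save(path_out)

    annotate("Input/Images/sample.jpg",
             "Output/analyzedImages/sample.jpg",
             "You look radiant!")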

Additional Libraries

  • Pyttsx3: For TTS functionality.
  • NumPy: For numerical operations and efficient data processing.

πŸ“Έ Screenshots

The screenshots pair each input (Input 1–6) with its analyzed output (Output 1–6).

🧩 Try the App

Want to Experience MoodLyft Mirror?

Clone the repository and follow the setup instructions to run the project locally.
Stay tuned for future releases!


πŸ‘¨β€πŸ”§ Setup Instructions

Prerequisites

  • Python 3.11 or higher installed on your system.
  • Sample images or videos to analyze (the app processes files from the input folders rather than a live webcam feed).
  • Install required Python packages listed in requirements.txt.

Steps to Run the Project

  1. Clone the Repository

    git clone https://github.com/alienx5499/MoodLyft-Mirror-Input-Emotion-Analyzer.git
    cd MoodLyft-Mirror-Input-Emotion-Analyzer
  2. Set Up a Virtual Environment

    A virtual environment isolates the project's dependencies from your global Python installation, preventing version conflicts and keeping the development environment clean.

    For macOS/Linux

    1. Create a virtual environment:
    python3 -m venv moodlyft_env
    2. Activate the virtual environment:
    source moodlyft_env/bin/activate

    For Windows

    1. Create a virtual environment:
    python -m venv moodlyft_env
    2. Activate the virtual environment:
    moodlyft_env\Scripts\activate
  3. Install Dependencies

    For macOS/Linux

    pip install -r requirements-macos.txt

    For Windows

    pip install -r requirements-windows.txt
  4. Add Your Media

    • Images: Place your input images in the Input/Images directory.
    • Videos: Place your input videos in the Input/Videos directory.
  5. Run the Application

    python main.py
  6. View Results

    • Processed images will be saved in Output/analyzedImages.
    • Processed videos will be saved in Output/analyzedVideos (a sketch of the video pipeline follows).
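
For reference, the video path likely follows a per-frame loop like this sketch; the codec, FPS fallback, and annotation style are assumptions rather than the project's exact code:

    # Analyze a video frame by frame and write an annotated copy.
    import cv2
    from fer import FER

    def analyze_video(src: str, dst: str) -> None:
        detector = FER()
        cap = cv2.VideoCapture(src)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30          # fall back if unknown
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            emotion, _ = detector.top_emotion(frame)   # slow but simple
            if emotion:
                cv2.putText(frame, emotion, (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            out.write(frame)
        cap.release()
        out.release()

    analyze_video("Input/Videos/sample.mp4", "Output/analyzedVideos/sample.mp4")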

🎯 Target Audience

  1. Individuals: Track your mood and uplift your spirits daily.
  2. Therapists: Utilize emotion detection as part of therapy sessions.
  3. Developers: Enhance and expand the project with additional features.

🀝 Contributing

We ❀️ open source! Contributions are welcome to make this project even better.

  1. Fork the repository.
  2. Create your feature branch.
    git checkout -b feature/new-feature
  3. Commit your changes.
    git commit -m "Add a new feature"
  4. Push to the branch and open a pull request.

πŸ“œ License

This project is licensed under the MIT License. See the LICENSE file for details.


πŸ“¬ Feedback & Suggestions

We value your input! Share your thoughts through GitHub Issues.

πŸ’‘ Let's work together to uplift emotions and create positivity!
