The MoodLyft Mirror Input Emotion Analyzer is a comprehensive emotion detection tool that leverages AI to:
- Analyze Emotions in Media: Detect emotions in static images and videos by processing files placed in designated input folders.
- Provide Uplifting and Personalized Compliments: Generate tailored compliments based on detected emotions to enhance user experience.
- Organize Your Input/Output Workflow: Manage your media easily by placing inputs in designated directories and retrieving analyzed outputs effortlessly.
"Enhance your day by visualizing and understanding your emotions through your media!"
- ✨ Features
- 🦾 Tech Stack
- 📸 Screenshots
- 🧩 Try the App
- 👨‍🔧 Setup Instructions
- 🎯 Target Audience
- 🤝 Contributing
- 📜 License
- Analyze Emotions in Media: Detect emotions in uploaded images and videos using advanced AI algorithms.
- Display Dominant Emotions: Identify and display emotions such as happiness, sadness, anger, and more.
- Tailored Uplifting Messages: Generate intelligent compliments based on the detected emotions.
- Text-to-Speech (TTS) Functionality: Deliver compliments audibly for an enhanced interactive experience.
- Structured Directories: Place your input media in designated folders and retrieve analyzed outputs from organized directories.
- Seamless Processing: Easily manage and process multiple media files with minimal effort.
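The compliment and TTS features above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the compliment strings are invented examples, and the `pyttsx3` call falls back to printing when audio is unavailable.

```python
import random

# Illustrative compliment bank; these strings are examples,
# not the project's actual messages.
COMPLIMENTS = {
    "happy": ["Your smile lights up the room!"],
    "sad": ["Tough moments pass; you are stronger than you know."],
    "angry": ["Take a breath; your calm is your superpower."],
    "neutral": ["Steady and composed suits you well."],
}

def compliment_for(emotion: str) -> str:
    """Pick an uplifting message for the detected emotion (neutral fallback)."""
    return random.choice(COMPLIMENTS.get(emotion, COMPLIMENTS["neutral"]))

def speak(text: str) -> None:
    """Read the compliment aloud with pyttsx3, falling back to print."""
    try:
        import pyttsx3
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    except Exception:
        print(text)  # TTS unavailable; show the compliment instead
```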
- Python: Core programming language.
- OpenCV: For real-time video processing and face detection.
- FER: Facial Expression Recognition library for emotion analysis.
- Pillow: For enhanced text rendering and UI effects.
- Pyttsx3: For TTS functionality.
- NumPy: For numerical operations and efficient data processing.
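As a rough sketch of how the stack above fits together, the snippet below runs FER's emotion detector over an image with OpenCV. The input path is hypothetical, and the import guard lets the snippet degrade gracefully when `fer` or `opencv-python` is not yet installed (see Setup Instructions).

```python
def dominant_emotion(scores: dict) -> str:
    """Return the highest-scoring emotion label from a FER score dict."""
    return max(scores, key=scores.get)

try:
    import cv2
    from fer import FER

    detector = FER()  # default: OpenCV Haar-cascade face detector
    image = cv2.imread("Input/Images/sample.jpg")  # hypothetical input file
    if image is not None:
        # detect_emotions returns one dict per face: {"box": ..., "emotions": ...}
        for face in detector.detect_emotions(image):
            print("Dominant emotion:", dominant_emotion(face["emotions"]))
except ImportError:
    print("Install fer and opencv-python (see Setup Instructions) to run this demo.")
```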
| ![]() | ![]() |
|:-----:|:------:|
| Input 1 | Output 1 |
| ![]() | ![]() |
| Input 2 | Output 2 |
| ![]() | ![]() |
| Input 3 | Output 3 |
| ![]() | ![]() |
| Input 4 | Output 4 |
| ![]() | ![]() |
| Input 5 | Output 5 |
| ![]() | ![]() |
| Input 6 | Output 6 |
Clone the repository and follow the setup instructions to run the project locally.
Stay tuned for future releases!
- Python 3.11 or higher installed on your system.
- A webcam (optional) for real-time emotion detection.
- The Python packages listed in the platform-specific requirements file (`requirements-macos.txt` or `requirements-windows.txt`), installed in the steps below.
- **Clone the Repository**

  ```bash
  git clone https://github.com/alienx5499/MoodLyft-Mirror-Input-Emotion-Analyzer.git
  cd MoodLyft-Mirror-Input-Emotion-Analyzer
  ```
- **Set Up a Virtual Environment**

  A virtual environment isolates the project's dependencies from your global Python installation, preventing version conflicts.

  For macOS/Linux:
  ```bash
  python3 -m venv moodlyft_env
  source moodlyft_env/bin/activate
  ```

  For Windows:
  ```bash
  python -m venv moodlyft_env
  moodlyft_env\Scripts\activate
  ```
- **Install Dependencies**

  For macOS/Linux:
  ```bash
  pip install -r requirements-macos.txt
  ```

  For Windows:
  ```bash
  pip install -r requirements-windows.txt
  ```
- **Add Your Media**
  - Images: place your input images in the `Input/Images` directory.
  - Videos: place your input videos in the `Input/Videos` directory.
- **Run the Application**

  ```bash
  python main.py
  ```
- **View Results**
  - Processed images are saved in `Output/analyzedImages`.
  - Processed videos are saved in `Output/analyzedVideos`.
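The input-to-output folder mapping described above can be sketched as a small path-routing helper. The directory names follow the README; the function itself is illustrative, not the project's actual implementation.

```python
from pathlib import Path

# Input folders and the output folders their analyzed results land in,
# per the README's workflow.
ROUTES = {
    "Input/Images": "Output/analyzedImages",
    "Input/Videos": "Output/analyzedVideos",
}

def output_path(media_file: Path, base: Path = Path(".")) -> Path:
    """Map an input media file to its destination in the analyzed-output tree."""
    for src, dst in ROUTES.items():
        try:
            rel = media_file.relative_to(base / src)
        except ValueError:
            continue  # not under this input folder; try the next route
        return base / dst / rel
    raise ValueError(f"{media_file} is not inside a known input folder")
```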
- Individuals: Track your mood and uplift your spirits daily.
- Therapists: Utilize emotion detection as part of therapy sessions.
- Developers: Enhance and expand the project with additional features.
We ❤️ open source! Contributions are welcome to make this project even better.
- Fork the repository.
- Create your feature branch:
  ```bash
  git checkout -b feature/new-feature
  ```
- Commit your changes:
  ```bash
  git commit -m "Add a new feature"
  ```
- Push to the branch and open a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.
We value your input! Share your thoughts through GitHub Issues.
💡 Let's work together to uplift emotions and create positivity!