<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="google-site-verification" content="G4gClB66Jr2h0XiecdD1OyTE0HCHwwS707_GIuEMCfU" />
<meta name="description"
    content="DRISHTI-ROBOCON is a student initiative at SVNIT to build robots to participate in ABU ROBOCON">
<meta name="keywords" content="Drishti,ROBOCON, robocon,WE,ME">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>
TUNEX
</title>
<link rel="apple-touch-icon" sizes="180x180" href="favicon_io/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="favicon_io/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="favicon_io/favicon-16x16.png">
<link rel="manifest" href="favicon_io/site.webmanifest">
<link href="css/all.min.css" rel="stylesheet">
<link href="css/fontawesome.min.css" rel="stylesheet">
<link rel="stylesheet" href="css/bootstrap.min.css">
<link rel="stylesheet" type="text/css" media="screen" href="footer.css">
<link rel="stylesheet" type="text/css" media="screen" href="tunex.css">
</head>
<body>
<div class="main-div">
<nav class="navbar navbar-expand-md navbar-light" style="background-color: rgba(2,2,2,0.4);">
<a href="http://svnit.ac.in/" class="navbar-brand">
<img class="nit_photo" src="images/svnit.png" alt="SVNIT logo">
</a>
<div class="navbar-brand v1"></div>
<a href="https://drishti-svnit.github.io/drishti/" class="navbar-brand">
<img src="images/logoleft.jpg" class="svnit_photo" alt="Drishti logo">
</a>
<button type="button" class="navbar-toggler" data-toggle="collapse" data-target="#menu">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="menu">
<ul class="navbar-nav ml-auto">
<li class="nav-item" style="font-weight: bold; margin-right: 20px;"><a href="index.html"
class="nav-link">Home</a></li>
<li class="nav-item" style="font-weight: bold; margin-right: 20px;"><a href="gallery.html"
class="nav-link">Gallery</a></li>
<li class="nav-item dropdown " style="font-weight: bold; margin-right: 20px;">
<a href="#" class="nav-link dropdown-toggle" data-toggle="dropdown">Projects</a>
<div class="dropdown-menu" style="background-color: rgb(220,220,220);">
<a href="tunex.html" class="dropdown-item">
Tunex
</a>
<a href="virtuon.html" class="dropdown-item">
Virtuon
</a>
<a href="gisa.html" class="dropdown-item">
GISA
</a>
<a href="planet.html" class="dropdown-item">
PlaNet
</a>
<a href="acs.html" class="dropdown-item">
Automated-Check-in-System
</a>
<a href="aicolorization.html" class="dropdown-item">
Ai-image-Colorization
</a>
</div>
</li>
<li class="nav-item" style="font-weight: bold; margin-right: 40px;"><a href="#down"
class="nav-link">Contact</a></li>
</ul>
</div>
</nav>
<div class="ref-image">
<div class="ref-title">
<h1>
TUNEX
</h1>
<h3>2020-2021</h3>
</div>
</div>
</div>
<!-- -------------------------------------------------------------------------------------------------------- -->
<br><br><br>
<div class="ref-aim">
<h1>INTRODUCTION</h1>
<p>
    A song expresses a mood and an emotion, and as listeners we enjoy it most when the song we are
    hearing echoes our own emotion, lifting our mood with enthusiasm, love, and compassion. We
    therefore built <b>TunEx (Tunes for Expressions)</b>, a model that detects emotions in real
    time from a webcam feed, classifies one's playlist into genres, and plays the song that best
    matches the emotion inferred from facial expressions.</p>
<br>
<hr>
<p><b>Emotion Detection</b></p>
<p> For detecting emotion we chose facial expression recognition, for two main reasons: the face
    conveys our emotion and mood most effectively, and detection requires no sophisticated
    hardware beyond a camera. Since music suggestions are delivered on devices that already have a
    camera attached, obtaining input for the model is straightforward. Once detected, the emotion
    is mapped to music genres, based on the kind of music a person generally likes to listen to in
    that emotional state.</p>
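<p>As an illustration, the emotion-to-genre mapping can be sketched as a simple lookup. The genre lists and function names below are hypothetical examples, not the actual TunEx code:</p>

```python
import random

# Hypothetical mapping from detected emotion to music genres;
# the genre lists actually used by TunEx may differ.
EMOTION_TO_GENRES = {
    "afraid":    ["ambient", "classical"],
    "angry":     ["metal", "rock"],
    "disgust":   ["blues"],
    "happy":     ["pop", "dance"],
    "neutral":   ["indie", "acoustic"],
    "sad":       ["soft rock", "ballad"],
    "surprised": ["electronic"],
}

def pick_song(emotion, playlist_by_genre):
    """Pick a random song from a genre mapped to the detected emotion."""
    genres = EMOTION_TO_GENRES.get(emotion, ["pop"])
    # Only consider mapped genres for which the playlist has songs.
    available = [g for g in genres if playlist_by_genre.get(g)]
    genre = random.choice(available)
    return random.choice(playlist_by_genre[genre])
```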
<p><b>Emotions being Detected:</b></p>
<ul>
<li>Afraid</li>
<li> Angry</li>
<li>Disgust</li>
<li>Happy</li>
<li>Neutral</li>
<li>Sad</li>
<li>Surprised</li>
</ul>
<p><b>Models used:</b></p>
<ul>
<li>Haarcascade</li>
<li>Basic Convolutional Neural Networks (CNNs)</li>
<li>XGB Classifier (for detecting genre of newly added song)</li>
</ul>
<p><b>Workflow</b></p>
<ul>
    <li>The camera feed is used to capture the user's face; a 10-second video feed is taken.</li>
    <li>The region of interest is derived from each frame using a Haar cascade.</li>
    <li>The image is further pre-processed by cropping it to the bounding boxes obtained from the
        Haar cascade, converting it to grayscale, and resizing it to 180x180 pixels to keep the
        model's input uniform.
    </li>
    <li>Predictions are made on each frame, and the emotion with the highest confidence across all
        frames of the 10-second window is labelled as the detected emotion.</li>
    <li>Based on the detected emotion, we map to genres and play randomly selected songs from
        those genres.</li>
    <li>When the song ends, we again capture a 10-second webcam feed and all the above steps are
        repeated.</li>
</ul>
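<p>The per-frame aggregation step can be sketched in pure Python. The function name and the shape of the per-frame predictions (a dict of emotion to confidence per frame) are assumptions for illustration, not the actual TunEx implementation:</p>

```python
# Hypothetical sketch: each frame yields a dict of emotion -> confidence;
# the emotion with the highest total confidence over the 10-second
# window is taken as the detected emotion.
def label_window(frame_predictions):
    totals = {}
    for prediction in frame_predictions:
        for emotion, confidence in prediction.items():
            totals[emotion] = totals.get(emotion, 0.0) + confidence
    # Return the emotion with the highest accumulated confidence.
    return max(totals, key=totals.get)
```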
<div class="image">
<img src="images/tunex1.png">
</div>
<p><b>Result</b></p>
<ul>
    <li>A test accuracy of 65% was obtained with a plain CNN model classifying images into 7
        emotions. To make real-time deployment more robust, we decided to aggregate predictions
        over a 10-second webcam feed.</li>
    <li>Genre classification was also tried: it achieved 78% accuracy in classifying the genre of
        a song, so an unlabelled song can also be used with our model. It was excluded from the
        web page because of its database requirements.</li>
</ul>
<div class="image">
<img src="images/tunex2.png">
</div>
</div>
<!-- ------------------------------------------------------------------------ -->
<script src="https://unpkg.com/aos@next/dist/aos.js"></script>
<script>
    AOS.init();
</script>
<script src="js/jquery.min.js"></script>
<script src="js/bootstrap.min.js"></script>
</body>
</html>