Added Temporal Score functions #72
base: master
@@ -14,4 +14,5 @@ Contributors
* Rohit Patil <[email protected]>
* Parantak Singh <[email protected]>
* Abheesht Sharma <[email protected]>
* Harshit Pandey <[email protected]>
@@ -0,0 +1,109 @@
import torch
import torch.nn as nn


class RankWords:
    """
    Accepts a feature-vector torch tensor and outputs a temporal ranking of each word.
    Inputs are of shape (number_of_words, feature_vector_size).

    Methods
    -------
    temporal_score(self, model, sentence)
        - calculates the temporal score of each word in a sentence (considering the first i words)
    temporal_tail_score(self, model, sentence)
        - calculates the temporal tail score of each word in a sentence (considering the last n - i words)
    combined_score(self, model, sentence, lambda_)
        - calculates the combined score from the temporal score and the temporal tail score, with lambda_ as the weight on the tail score
    """

    def temporal_score(self, model, sentence):
        """
        Considering an input sequence x1, x2, ..., xn,
        we calculate T(xi) = F(x1, x2, ..., xi) - F(x1, x2, ..., xi-1),
        where F is one pass through an RNN cell.

        Example:

        >>> import temporal_metrics
        >>> rw = temporal_metrics.RankWords()
        >>> embedded_word = embed("i like dogs")  # embed should convert the sentence to a (sent_length, embedding_dim) tensor
        >>> print(rw.temporal_score(model, embedded_word))  # seed = 99
        tensor([[1.1921e-07], [2.2054e-05], [8.4886e-01]])

        :param model: The model must output a torch tensor of shape (1, 1, 1)
        :param sentence: A tensor of shape (sent_length, embedding_dim) in which each word has been embedded

        :returns: temporal score of each word
        :rtype: torch.Tensor
        """
        inputs = sentence.reshape(len(sentence), 1, -1)
        losses = torch.zeros(inputs.shape[0:2])
        with torch.no_grad():
            pred = model(inputs)
            for i in range(inputs.size(0)):
                # Zero out the embedding of the i-th word and measure how far
                # the model's output moves; the larger the shift, the more the
                # word contributes to the prediction.
                tempinputs = inputs.clone()
                tempinputs[i, :, :].zero_()
                tempoutput = model(tempinputs)
                losses[i] = torch.dist(
                    tempoutput.squeeze(1), pred.squeeze(1)
                )  # L2 norm
        return losses

    def temporal_tail_score(self, model, sentence):
        """
        Considering an input sequence x1, x2, ..., xn,
        we calculate T(xi) = F(xi, xi+1, ..., xn) - F(xi+1, xi+2, ..., xn),
        where F is one pass through an RNN cell.

        Example:

        >>> import temporal_metrics
        >>> rw = temporal_metrics.RankWords()
        >>> embedded_word = embed("i like dogs")  # embed should convert the sentence to a (sent_length, embedding_dim) tensor
        >>> print(rw.temporal_tail_score(model, embedded_word))  # seed = 99
        tensor([[5.9605e-08], [1.9789e-05], [0.0000e+00]])

        :param model: The model must output a torch tensor of shape (1, 1, 1)
        :param sentence: A tensor of shape (sent_length, embedding_dim) in which each word has been embedded

        :returns: temporal tail score of each word
        :rtype: torch.Tensor
        """
        inputs = sentence.reshape(len(sentence), 1, -1)
        losses = torch.zeros(inputs.shape[0:2])
        with torch.no_grad():
            for i in range(inputs.size(0) - 1):
                # Compare the output on the suffix starting at word i with the
                # output on the suffix starting at word i + 1.
                pred = model(inputs[i:, :, :])
                tempinputs = inputs[i + 1 :, :, :].clone()
                tempoutput = model(tempinputs)
                losses[i] = torch.dist(tempoutput, pred)  # L2 norm
        return losses

    def combined_score(self, model, sentence, lambda_):
        """
        Computes the combination of temporal_score and temporal_tail_score:

        combined score = temporal_score + λ * temporal_tail_score

        Example:

        >>> import temporal_metrics
        >>> rw = temporal_metrics.RankWords()
        >>> embedded_word = embed("i like dogs")  # embed should convert the sentence to a (sent_length, embedding_dim) tensor
        >>> print(rw.combined_score(model, embedded_word, 0.5))  # seed = 99
        tensor([[6.8545e-07], [3.8507e-03], [8.4886e-01]])

        :param model: The model must output a torch tensor of shape (1, 1, 1)
        :param sentence: A tensor of shape (sent_length, embedding_dim) in which each word has been embedded
        :param lambda_: (float), 0 <= lambda_ <= 1

        :returns: combined score of each word
        :rtype: torch.Tensor
        """
        return self.temporal_score(
            model, sentence
        ) + lambda_ * self.temporal_tail_score(model, sentence)

Review comment on temporal_tail_score: Docstring needed
Review comment on combined_score: Proper docstrings needed
Reply: I'll add all of the docstrings
Review comment: Docstrings in our preferred format needed here too? Including information about params and return values?
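All three methods assume a model that maps a (sequence_length, 1, embedding_dim) tensor to a single (1, 1, 1) output, as the docstrings state. A minimal sketch of a compatible model follows; ToyClassifier, its layer sizes, and the sigmoid head are illustrative assumptions, not part of this PR:

import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Hypothetical binary classifier satisfying the (1, 1, 1) output contract."""

    def __init__(self, embedding_dim=50, hidden_dim=32):
        super().__init__()
        # nn.LSTM expects input of shape (seq_len, batch, embedding_dim)
        self.rnn = nn.LSTM(embedding_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        _, (h_n, _) = self.rnn(x)           # h_n: (1, 1, hidden_dim)
        return torch.sigmoid(self.fc(h_n))  # output: (1, 1, 1)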
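An end-to-end usage sketch, assuming the hypothetical ToyClassifier above and random vectors standing in for a real embedding lookup (so the printed values will not match the docstring examples):

import torch
from temporal_metrics import RankWords

torch.manual_seed(99)
model = ToyClassifier(embedding_dim=50)
model.eval()

# Stand-in for embed("i like dogs"): three "words", 50 dimensions each.
embedded_sentence = torch.randn(3, 50)

rw = RankWords()
t = rw.temporal_score(model, embedded_sentence)          # shape (3, 1)
tail = rw.temporal_tail_score(model, embedded_sentence)  # shape (3, 1)
combined = rw.combined_score(model, embedded_sentence, 0.5)

# combined_score is an element-wise weighted sum of the two rankings.
assert torch.allclose(combined, t + 0.5 * tail)

With lambda_ = 0 the ranking reduces to temporal_score alone; larger values of lambda_ give the suffix-based tail score more influence over which words rank highest.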