About three weeks ago, I asked some questions about downloading the dataset. I have now downloaded about 18,000 video clips, but about 2,000 videos are not available. If possible, I would like to ask for your help. I have sent an email with the missing video IDs to your mailbox.
Excuse me, I'm having some trouble reimplementing your model in PyTorch. Could you please explain the effect of the `group_by_window` function you apply to the training dataset? I've read the official documentation for the function, and my guess is that it buckets the training set by the length of the reference video divided by `bucket_span`. Is that right? I also noticed that some reference videos with length larger than 300 are dropped after this processing, which confuses me a lot. I'd really appreciate it if you could help me.
@tanghaoyu258
`group_by_window` is used to improve running speed; it is safe to remove it. `dynamic_rnn` runs for n steps, where n is the length of the longest sequence in the batch, so grouping sequences of similar length into the same batch reduces the computation wasted on padding.
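For reference, here is a minimal sketch of what such length-based bucketing typically looks like with `tf.data.experimental.group_by_window`. The toy dataset and the `bucket_span` and `batch_size` values are assumptions for illustration, not the repo's actual pipeline:

```python
import tensorflow as tf

bucket_span = 30   # assumed bucket width (hyperparameter)
batch_size = 16

# Toy dataset of variable-length sequences with feature dimension 4.
lengths = tf.random.uniform([1000], minval=5, maxval=400, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices(lengths)
dataset = dataset.map(lambda n: tf.random.uniform(tf.stack([n, 4])))

def key_func(seq):
    # Assign each sequence to a bucket by length // bucket_span.
    return tf.cast(tf.shape(seq)[0] // bucket_span, tf.int64)

def reduce_func(key, window):
    # Pad each window only up to the longest sequence in that window,
    # so dynamic_rnn unrolls fewer steps per batch on average.
    return window.padded_batch(batch_size, padded_shapes=[None, 4])

dataset = dataset.apply(
    tf.data.experimental.group_by_window(key_func, reduce_func, batch_size))
```

Note that `group_by_window` itself does not discard elements (leftover groups at the end of an epoch are emitted as smaller windows), so if sequences longer than 300 are dropped in the original pipeline, that likely comes from a separate length filter rather than from this transformation.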