
Can the fast generation step really synthesize 1 s of audio in about 0.8 s, as the paper says? #2

Open
azraelkuan opened this issue Jul 20, 2018 · 5 comments

azraelkuan commented Jul 20, 2018

Avg time per step inference on CPU: 0.002576863145828247
Avg time per step inference on GPU: 0.003681471061706543

0.003681471061706543 s/sample × 16000 samples ≈ 59 s per second of audio

So how can we reproduce the result reported in the paper?
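The arithmetic behind these numbers can be sketched as a real-time-factor check (my own illustration, using the timings posted above):

```python
# Back-of-the-envelope real-time factor from the timings above.
SAMPLE_RATE = 16000  # 16 kHz audio: samples generated per second of output

avg_step_cpu = 0.002576863145828247  # seconds per generated sample (CPU)
avg_step_gpu = 0.003681471061706543  # seconds per generated sample (GPU)

for name, per_step in [("CPU", avg_step_cpu), ("GPU", avg_step_gpu)]:
    seconds_per_audio_second = per_step * SAMPLE_RATE
    print(f"{name}: {seconds_per_audio_second:.1f} s to generate 1 s of audio")
```

On the GPU this works out to roughly 58.9 s per second of audio, far from the ~0.8 s the paper claims.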

erogol commented Jul 24, 2018

I don't know. I asked the authors via email but got no response. Maybe you could reimplement it with a dilated convolution instead of two convolutions for each part of the input. But right now I have other things to do besides FFTNet, and I'll return to it shortly.
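As a sketch of the dilated-convolution idea (my own illustration, not code from this repo): an FFTNet-style layer applies two 1×1 convolutions to the two time-shifted halves of the input and sums them, which is numerically the same as one kernel-size-2 convolution with dilation equal to the shift. Weights here are hypothetical scalars for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 16, 4            # sequence length and shift/dilation (hypothetical values)
x = rng.standard_normal(T)
w_l, w_r = 0.7, -0.3    # hypothetical scalar weights of the two 1x1 convolutions

# FFTNet-style layer: two 1x1 convs on the shifted halves, then sum
two_conv = w_l * x[:-d] + w_r * x[d:]

# Equivalent single dilated convolution: kernel [w_l, w_r], dilation d
dilated = np.array([w_l * x[t] + w_r * x[t + d] for t in range(T - d)])

assert np.allclose(two_conv, dilated)
```

Fusing the two convolutions into one dilated op mainly helps because it halves the number of kernel launches per layer during sample-by-sample generation.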

@azraelkuan
Author

@erogol Yes, I have finished the basic implementation. Thanks!


erogol commented Jul 24, 2018

@azraelkuan Let me know if you have any updates, questions, or comments. Feel free to discuss :)

@DLZML001

Hi, any update on this issue?


erogol commented Nov 2, 2018

@DLZML001 Nope, I have moved on to WaveRNN now.
