Training problem #14
Hello. use
Hello @vlomme
File "encoder_train.py", line 46, in
Hello, I'm getting the same error with torch==1.5.0.
After that, if we use clip_grad_norm_ from torch, it performs the operation on all of the parameters, two of which are on cpu and the rest on cuda:0, which throws the error. [UPDATE]
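For reference, a minimal sketch of the workaround, assuming the mismatch comes from parameters that were never moved to the GPU; the tiny model below is a placeholder, not the repo's actual encoder:

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Moving the whole module first guarantees every parameter lives on one
# device, so the gradient-clipping call never mixes cpu and cuda:0 tensors.
model = nn.Linear(10, 1).to(device)

x = torch.randn(4, 10, device=device)
loss = model(x).pow(2).mean()
loss.backward()

# clip_grad_norm_ iterates over all parameters; this is the call that
# raised the RuntimeError when two of them were still on the cpu.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)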
Hello,
RuntimeError: CUDA out of memory. Tried to allocate 118.00 MiB (GPU 0; 4.00 GiB total capacity; 2.87 GiB already allocated; 10.61 MiB free; 32.29 MiB cached)
How can I solve this error?
Not enough video memory. Reduce the batch size.
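In forks of Real-Time-Voice-Cloning the encoder batch is usually defined in encoder/params_model.py; if this repo follows the same layout (an assumption, check your copy), lowering these values shrinks the memory used per step:

# encoder/params_model.py (assumed path, mirroring the upstream layout)
# The effective batch is speakers_per_batch * utterances_per_speaker, so
# halving either one roughly halves the tensors allocated per training step.
speakers_per_batch = 32        # e.g. down from the default 64
utterances_per_speaker = 10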
Thanks @vlomme
First of all, thank you for open-sourcing Multi-Tacotron-Voice-Cloning. I have just started learning about natural language processing, and I am also new to Python programming.
- I put the software in the directory D:\SV2TTS
- I put the dataset in the directory D:\Datasets; I have D:\Datasets\book and D:\Datasets\LibriSpeech
When using the code you provided, I had some training issues, and the result is:
Arguments:
datasets_root: D:\Datasets
out_dir: D:\Datasets\SV2TTS\encoder
datasets: ['preprocess_voxforge']
skip_existing: False
Done preprocessing book.
because this notice appeared:
C:\Users\Admin\anaconda3\envs\[Test_Voice]\lib\site-packages\umap\spectral.py:4: NumbaDeprecationWarning: No direct replacement for 'numba.targets' available. Visit https://gitter.im/numba/numba-dev to request help. Thanks!
import numba.targets
usage: encoder_train.py [-h] [--clean_data_root CLEAN_DATA_ROOT]
[-m MODELS_DIR] [-v VIS_EVERY] [-u UMAP_EVERY]
[-s SAVE_EVERY] [-b BACKUP_EVERY] [-f]
[--visdom_server VISDOM_SERVER] [--no_visdom]
run_id
encoder_train.py: error: unrecognized arguments: D:\Datasets
My question: How can I fix this problem?
Thanks again for sharing!!!
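From the usage string above, run_id is the only positional argument, so the dataset path has to be passed through --clean_data_root rather than positionally. An invocation like the following should parse (my_run is a placeholder run name, and the path assumes the encoder output directory produced by the preprocessing step shown earlier):

python encoder_train.py my_run --clean_data_root D:\Datasets\SV2TTS\encoder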