Hi, I get an OOM error with an RTX 3060 (12 GB) for the following scripts: 'ruDALLE-image-prompts-A100.ipynb' and 'ruDALLE-image-prompts-dress-mannequins-V100.ipynb'.
Unfortunately I could not make them work.
I also looked at the other script, 'Malevich-3.5GB-vRAM-usage.ipynb', and tried to adapt the code, but I still get an error:
RuntimeError: CUDA out of memory. Tried to allocate 1.98 GiB (GPU 0; 11.77 GiB total capacity; 6.63 GiB already allocated; 1.08 GiB free; 11.00 GiB allowed; 7.77 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Any help on how to tweak the code would be appreciated!
Thanks!
Gero
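The error message itself suggests one tweak worth trying before touching the notebook code: setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable to reduce fragmentation. A minimal sketch (128 MiB is an illustrative starting value, not a tuned one):

```python
import os

# The CUDA caching allocator reads PYTORCH_CUDA_ALLOC_CONF once, when CUDA is
# initialized, so this must run before the first CUDA allocation — in practice,
# set it at the very top of the script, before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# ...then import torch and load the model as in the notebooks, e.g.:
# import torch
# from rudalle import get_rudalle_model
# dalle = get_rudalle_model('Malevich', pretrained=True, fp16=True, device='cuda')
```

This only helps when reserved memory is much larger than allocated memory (fragmentation); it will not shrink the model's actual footprint.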
Limbicnation changed the title from "CUDA out of memory on 'ruDALLE-image-prompts-A100.ipynb' & 'ruDALLE-image-prompts-dress-mannequins-V100.ipynb'" to "CUDA out of memory with 'ruDALLE-image-prompts-A100.ipynb' & 'ruDALLE-image-prompts-dress-mannequins-V100.ipynb'" on Feb 4, 2022.
Traceback (most recent call last):
File "D:\code\ru-dalle\py\test.py", line 17, in <module>
dalle = get_rudalle_model('Malevich', pretrained=True, fp16=True, device=device)
File "C:\Python310\lib\site-packages\rudalle\dalle\__init__.py", line 148, in get_rudalle_model
model = model.to(device)
File "C:\Python310\lib\site-packages\rudalle\dalle\fp16.py", line 63, in to
self.module.to(device)
File "C:\Python310\lib\site-packages\rudalle\dalle\model.py", line 165, in to
return super().to(device, *args, **kwargs)
File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 907, in to
return self._apply(convert)
File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
module._apply(fn)
File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
module._apply(fn)
File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 578, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 601, in _apply
param_applied = fn(param)
File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 905, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 2.00 GiB total capacity; 1.69 GiB already allocated; 0 bytes free; 1.72 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
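This second traceback comes from a 2 GiB card, which cannot hold the fp16 Malevich weights at all. A minimal guard, assuming the `get_rudalle_model` call shown in the traceback, that falls back to CPU when the GPU is clearly too small (the 6 GiB threshold is a rough assumption for fp16 inference headroom, not a measured figure):

```python
import torch


def pick_device(min_total_gib: float = 6.0) -> str:
    """Choose 'cuda' only when the GPU plausibly fits the fp16 model.

    min_total_gib is an assumed lower bound on total VRAM; tune it
    for your model. Falls back to 'cpu' otherwise.
    """
    if not torch.cuda.is_available():
        return "cpu"
    total_gib = torch.cuda.get_device_properties(0).total_memory / 2**30
    return "cuda" if total_gib >= min_total_gib else "cpu"


# Usage with the call from the traceback:
# from rudalle import get_rudalle_model
# device = pick_device()
# dalle = get_rudalle_model('Malevich', pretrained=True,
#                           fp16=(device == 'cuda'), device=device)
```

CPU inference is far slower, but it avoids the hard failure during `model.to(device)` seen above.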