Replies: 7 comments 6 replies
-
Do it and submit a PR! It shouldn't be hard. I know a few people have gotten it working.
-
Works with ROCm on Ubuntu 20.04 over here. All you need is the ROCm build of Torch and, of course, a ROCm installation on your system. I set it up in a venv. You may need to run Python with certain environment variables, replacing the versions with whatever is appropriate for your GPU:
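The comment above omits the exact variables, so here is a hypothetical setup sketch only: the ROCm wheel index URL, the `HSA_OVERRIDE_GFX_VERSION=10.3.0` override (commonly reported for gfx1030-class RDNA2 cards like the 6800XT), and the `do_tts.py` flags are assumptions, not taken from this thread.

```shell
# Hypothetical sketch; adjust the ROCm version and GFX override to your GPU.
# HSA_OVERRIDE_GFX_VERSION makes ROCm treat the card as a supported gfx
# target when no prebuilt kernels exist for the exact chip.
python3 -m venv tortoise-venv
source tortoise-venv/bin/activate
pip install torch torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
HSA_OVERRIDE_GFX_VERSION=10.3.0 python tortoise/do_tts.py --text "hello" --voice random
```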
-
Works on AMD for me. On Linux with PyTorch + ROCm, it treats AMD cards as CUDA devices, so existing CUDA code should work.
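To illustrate the point above: on a ROCm build of PyTorch, the usual CUDA device check works unchanged. A minimal sketch (guarded so it also runs, and falls back to CPU, where PyTorch is not installed):

```python
# On ROCm builds of PyTorch, AMD GPUs show up through the CUDA API,
# so torch.cuda.is_available() and device="cuda" work as-is.
def pick_device():
    try:
        import torch
        if torch.cuda.is_available():
            # On ROCm this reports the AMD card, e.g. a Radeon RX 6800 XT.
            print("GPU:", torch.cuda.get_device_name(0))
            return "cuda"
    except ImportError:
        pass
    return "cpu"  # no PyTorch, or no usable GPU

print(pick_device())
```

This is also why a silent fall-back to CPU (as reported later in the thread) is easy to miss: the same code runs either way, just much slower.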
-
Managed to make it work on Windows with a 6800XT.
-
Would that work on a 6700XT? How did you manage to make it work?
-
I feel stupid, it only runs on CPU :(

> I have tested with a 6800XT, and the 6700XT is extremely close. I assume it will work; you can give it a try with a smaller batch size, based on your VRAM.
-
Will try, thanks a lot.

> You can try this fork for using an AMD GPU under Windows:
> https://github.com/Chapoly1305/tortoise-tts-directml
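For the Windows/DirectML route, device selection works differently from CUDA. A sketch, assuming the `torch-directml` package (which the fork mentioned above targets); it falls back to CPU when the package is absent:

```python
# Sketch of DirectML device selection for AMD GPUs on Windows.
# Assumes the torch-directml package; everything else is illustrative.
def pick_device():
    try:
        import torch_directml  # exposes AMD/Intel GPUs via DirectML
        return torch_directml.device()  # DirectML device object
    except ImportError:
        return "cpu"  # fall back when DirectML isn't available

print(pick_device())
```

Tensors and models are then moved with `.to(device)` as usual; the difference is only in how the device handle is obtained.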
-
While obviously not a priority feature, opening the project up to a broader range of GPUs would benefit its long-term adoption and development.