Hello! I tried using the sota-16z model to reconstruct a 1024×1024, 24-frame video with chunk=8, but it requires 94 GB of GPU memory. Does the model support multi-GPU inference or tiled-VAE inference to reduce memory usage?
Additionally, I tried reconstructing with chunk=4, but the results show noticeable artifacts.
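For reference, the usual tiled-VAE idea is to decode the latent in overlapping spatial tiles and cross-fade the overlaps, so peak memory scales with the tile size rather than the full 1024×1024 frame. Below is a generic, hedged sketch of that scheme in NumPy — `decode_tiled`, `_starts`, and the toy upsampling decoder are all hypothetical names for illustration, not this repo's API, and the tile/overlap sizes are arbitrary:

```python
import numpy as np

def _starts(size, tile, step):
    """Tile start offsets covering [0, size); the last tile is clamped to the edge."""
    last = max(size - tile, 0)
    s = list(range(0, last + 1, step))
    if s[-1] != last:
        s.append(last)
    return s

def decode_tiled(latent, decode_fn, tile=64, overlap=16, scale=8):
    """Decode a (C, H, W) latent tile-by-tile, blending the overlaps.

    `decode_fn` maps a (C, h, w) latent patch to a (3, h*scale, w*scale)
    image patch. Peak memory scales with the tile size, not the frame size.
    (Hypothetical helper -- a generic sketch, not this repo's interface.)
    """
    C, H, W = latent.shape
    out = np.zeros((3, H * scale, W * scale), dtype=np.float64)
    acc = np.zeros((1, H * scale, W * scale), dtype=np.float64)
    step = max(tile - overlap, 1)
    for y0 in _starts(H, tile, step):
        for x0 in _starts(W, tile, step):
            dec = decode_fn(latent[:, y0:y0 + tile, x0:x0 + tile])
            h, w = dec.shape[1:]
            # Linear ramp on each axis so neighbouring tiles cross-fade
            # instead of leaving a hard seam at the tile boundary.
            ramp = lambda n: np.minimum(np.arange(1, n + 1),
                                        np.arange(n, 0, -1)).clip(max=overlap * scale)
            mask = (ramp(h)[:, None] * ramp(w)[None, :]).astype(np.float64)[None]
            oy, ox = y0 * scale, x0 * scale
            out[:, oy:oy + h, ox:ox + w] += dec * mask
            acc[:, oy:oy + h, ox:ox + w] += mask
    return out / acc

# Toy stand-in "decoder": nearest-neighbour x2 upsampling, purely pointwise,
# so the tiled result should match a full-frame decode exactly.
rng = np.random.default_rng(0)
latent = rng.standard_normal((3, 16, 16))
up = lambda p: np.repeat(np.repeat(p, 2, axis=1), 2, axis=2)
tiled = decode_tiled(latent, up, tile=8, overlap=4, scale=2)
full = up(latent)
print(np.allclose(tiled, full))  # prints True: blending is lossless here
```

With a real convolutional VAE decoder the tiles are not independent (receptive fields cross tile borders), which is why an overlap and cross-fade are needed — and why too small a chunk/tile, as with chunk=4, can still introduce visible artifacts.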