The current version of inference.py keeps the text encoder resident in VRAM for the whole run; even the 24 GB on an RTX 4090 gets overwhelmed and starts throttling. Try the updated version at https://github.com/KT313/LTX_Video_better_vram/tree/test, which evicts the text encoder once it's done. Much faster.
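For anyone curious, the fix in that branch amounts to offloading the text encoder as soon as the prompt embeddings are computed, instead of keeping it on the GPU through the whole diffusion loop. A minimal stdlib-only sketch of that pattern (the `TextEncoder` class, device strings, and function names here are illustrative stand-ins, not the actual LTX-Video API; in real code the marked lines would be calls like `encoder.to("cpu")` and `torch.cuda.empty_cache()`):

```python
import gc


class TextEncoder:
    """Stand-in for the text encoder; tracks which device it lives on."""

    def __init__(self):
        self.device = "cuda"  # weights land on the GPU at load time

    def to(self, device: str):
        self.device = device  # real code: nn.Module.to(device)
        return self

    def encode(self, prompt: str):
        # Dummy "embedding": one float per character, just for the sketch.
        return [float(ord(c)) for c in prompt]


def encode_then_evict(prompt: str):
    encoder = TextEncoder()
    embeddings = encoder.encode(prompt)  # the encoder's only job
    encoder.to("cpu")                    # move weights off the GPU...
    del encoder                          # ...or drop them entirely
    gc.collect()                         # real code: torch.cuda.empty_cache()
    return embeddings                    # the diffusion loop only needs these


emb = encode_then_evict("a cat surfing")
```

The key point is that the embeddings are tiny compared to the encoder weights, so once they exist there is no reason to keep the encoder in VRAM while the transformer and VAE run.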
`--height 704 --width 480 --num_frames 121` on an RTX 4090 needs 2.5 hours???