Replies: 1 comment
Hello, and thank you for your feedback. We take your comments very seriously. Our method is designed to be extremely simple: it requires no preprocessing steps, offline profiling, or extensive parameter tuning, and this is a core design philosophy. While our approach does not include a separate correction step, our extensive testing shows that the simple input/output difference criterion is effective across the majority of tested models.

It is also important to point out that the core contribution of EasyCache is the runtime-adaptive caching criterion, i.e., deciding when to reuse computation. The specific mechanism for reusing the computation (e.g., whether to introduce a correction) is an orthogonal issue that does not conflict with our primary goal.

We have conducted extensive verification on our open-sourced code, demonstrating that it achieves significant acceleration while keeping the visual results as close as possible to the original video. Please follow our official tutorial and settings to test it on advanced models such as Wan2.1 and HunyuanVideo, and if possible share the visual results from your tests so that we can analyze the issue and offer further insights. We believe there is still significant room for improvement in training-free methods, and we hope to contribute to the progress of the field with our work.
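To make the criterion concrete, here is a minimal PyTorch sketch of a runtime-adaptive caching decision of the kind described above. It is illustrative only, not our actual implementation: the class name, the `threshold` value, and the accumulated-change rule are assumptions for the example. The point is simply that a cheap difference measured at runtime decides, step by step, whether a cached result can be reused instead of running the full forward pass.

```python
import torch

class AdaptiveCache:
    """Illustrative sketch of a runtime-adaptive caching criterion.

    Reuses the previously computed output whenever the accumulated
    relative change of the inputs since the last full computation
    stays below a threshold. Names and update rule are hypothetical,
    not EasyCache's actual code.
    """

    def __init__(self, threshold: float = 0.05):
        self.threshold = threshold
        self.prev_input = None
        self.cached_output = None
        self.accumulated_change = 0.0

    def step(self, x: torch.Tensor, compute_fn) -> torch.Tensor:
        if self.prev_input is not None and self.cached_output is not None:
            # Relative input difference between consecutive steps.
            change = (x - self.prev_input).norm() / (self.prev_input.norm() + 1e-8)
            self.accumulated_change += change.item()
            if self.accumulated_change < self.threshold:
                # Inputs have barely drifted since the last full forward
                # pass: skip recomputation and reuse the cached output.
                self.prev_input = x
                return self.cached_output
        # Drift exceeded the threshold (or first step): recompute,
        # refresh the cache, and reset the accumulated change.
        out = compute_fn(x)
        self.prev_input = x
        self.cached_output = out
        self.accumulated_change = 0.0
        return out
```

In a denoising loop, something like `cache.step(latent, lambda z: model(z, t))` would replace the direct model call; the threshold trades acceleration against fidelity to the uncached result.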