[issue] Compatibility with torch.compile() if torch >= 2.0 #26
Comments
Hi @jyoung105, DeepCache is a dynamic model inference algorithm, which makes it incompatible with torch.compile(). One possible solution (though I'm not certain) is to split the pipeline into two models: one for full network inference and the other for partial network inference (the shallow one). That way, both models are static and can each be converted with torch.compile().
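The two-model split described above could be sketched roughly as follows. This is a minimal toy illustration, not DeepCache's actual code: the module names (`FullUNet`, `ShallowUNet`), the cached-feature wiring, and the step schedule are all assumptions; `backend="eager"` is used only to keep the sketch portable (drop it to use the default inductor backend).

```python
# Hypothetical sketch of splitting DeepCache-style inference into two
# static modules so each can be compiled separately with torch.compile().
import torch
import torch.nn as nn


class FullUNet(nn.Module):
    """Full pass: runs every block and returns the deep feature to cache."""

    def __init__(self, dim=16):
        super().__init__()
        self.down = nn.Linear(dim, dim)
        self.mid = nn.Linear(dim, dim)
        self.up = nn.Linear(dim, dim)

    def forward(self, x):
        h = torch.relu(self.down(x))
        deep = torch.relu(self.mid(h))  # feature reused on cached steps
        return self.up(deep + h), deep


class ShallowUNet(nn.Module):
    """Partial pass: skips the middle blocks, reusing the cached feature."""

    def __init__(self, dim=16):
        super().__init__()
        self.down = nn.Linear(dim, dim)
        self.up = nn.Linear(dim, dim)

    def forward(self, x, cached_deep):
        h = torch.relu(self.down(x))
        return self.up(cached_deep + h)


# Each module has a fixed graph with no data-dependent branching,
# so torch.compile can trace each one as a static model.
full = torch.compile(FullUNet(), backend="eager")
shallow = torch.compile(ShallowUNet(), backend="eager")

x = torch.randn(1, 16)
out, cache = full(x)       # full step: refresh the cached feature
for _ in range(4):         # cached steps: only the shallow path runs
    out = shallow(out, cache)
```

The point of the split is that the "run full or run shallow" decision moves out of the model's forward pass into the Python-level scheduling loop, so neither compiled graph contains the dynamic branch that breaks compilation.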
I think the approach you describe is implemented in the 'onediff' project.
Hello, do you have any solution now?
Hi, I think you should check onediff (https://github.com/siliconflow/onediff).
Thanks for the great work once again.
I would like to ask whether DeepCache can work with torch.compile().
If it can, it might run even faster.
I got the error below when I combined the two.