Is there an existing issue for this?
I have searched the existing issues and checked the recent builds/commits
What would your feature do?
Is it possible to add tensorRT like here: https://github.com/ddPn08/Lsmith ?
If so, why not add it? This method reportedly speeds up generation on 20/30/40-series graphics cards by 2x or more.
Proposed workflow
Add support for TensorRT
Additional information
No response
I imagine this may come around the same time as any Vlad merging? Looking forward to it. I bought a 4070 Ti, and while it is doing better than my 1080 Ti was, I know I am not getting the best out of it. Considering doing some compiling myself... but we know how that goes... ;-)
The TensorRT feature is of very limited use right now, TBH. Automatic has an extension that supports it, but the conversion process has a limit on "size", and the converted TensorRT UNet file is then restricted to the image size/batch it was converted with; LoRA doesn't work on it, and neither does hires fix. So unless you're generating plain images as-is, it isn't very useful yet. Wait for NVIDIA's release of the implementation. I went from 47-49 it/s up to 87 it/s on a batch of 4, so performance isn't really the 2x being hyped.
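For context on why the converted engine is locked to one resolution/batch: building a TensorRT engine from an ONNX export bakes the input shapes into the optimization profile. The sketch below is a minimal illustration, not the extension's actual code; it assumes TensorRT 8.x and an already-exported "unet.onnx" whose only dynamic input is named "sample" (the latent tensor, batch x 4 x H/8 x W/8), all of which are assumptions on my part.

```python
# Minimal sketch: build a fixed-shape TensorRT engine from an ONNX UNet.
# Assumptions: TensorRT 8.x Python API; "unet.onnx" exists and its only
# dynamic input is "sample". This is NOT the webui extension's code, just
# an illustration of why the resulting engine is locked to one shape.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("unet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 is where most of the speedup comes from

# The optimization profile pins the latent shape: here 512x512 images
# (64x64 latents) at batch 2 (cond + uncond). Using min == opt == max means
# the engine accepts exactly this shape and nothing else, which is the
# restriction described above.
profile = builder.create_optimization_profile()
profile.set_shape("sample", (2, 4, 64, 64), (2, 4, 64, 64), (2, 4, 64, 64))
config.add_optimization_profile(profile)

serialized_engine = builder.build_serialized_network(network, config)
with open("unet.plan", "wb") as f:
    f.write(serialized_engine)
```

A wider min/max range in the profile would allow multiple resolutions at some cost in performance, but as shipped the conversion is effectively per-shape, which is why each resolution/batch combination needs its own engine.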