
[Feature Request]: TensorRT support. #77

Open · 1 task done
bropines opened this issue Apr 24, 2023 · 3 comments
Labels: enhancement (New feature or request)

Comments

@bropines

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do?

Is it possible to add TensorRT support, as done here: https://github.com/ddPn08/Lsmith?
If so, why not add it, since this method speeds up generation on RTX 20/30/40 series cards by 2x or more.

Proposed workflow

Add support for TensorRT.
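For context, the usual path (which Lsmith and similar projects follow) is to first export the Stable Diffusion UNet to ONNX and then build a TensorRT engine from it. Below is a minimal sketch of the export step, assuming a diffusers SD 1.5 pipeline; the model ID, file name, and shapes are placeholders, not anything this repo ships.

```python
# Hypothetical sketch: export the SD UNet to ONNX as the first step toward TensorRT.
import torch
from diffusers import StableDiffusionPipeline


class UNetWrapper(torch.nn.Module):
    """Return a plain tensor so torch.onnx.export can trace the UNet."""

    def __init__(self, unet):
        super().__init__()
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states):
        return self.unet(sample, timestep, encoder_hidden_states, return_dict=False)[0]


pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
unet = UNetWrapper(pipe.unet.eval()).to("cuda")

# Dummy inputs: batch 2 (classifier-free guidance), 64x64 latents (a 512x512 image),
# and 77-token CLIP text embeddings.
sample = torch.randn(2, 4, 64, 64, device="cuda")
timestep = torch.tensor([981], dtype=torch.float32, device="cuda")
text_emb = torch.randn(2, 77, 768, device="cuda")

with torch.no_grad():
    torch.onnx.export(
        unet,
        (sample, timestep, text_emb),
        "unet.onnx",                      # placeholder output path
        input_names=["sample", "timestep", "encoder_hidden_states"],
        output_names=["noise_pred"],
        opset_version=17,
        dynamic_axes={
            "sample": {0: "batch", 2: "height", 3: "width"},
            "encoder_hidden_states": {0: "batch"},
        },
    )
```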

Additional information

No response

bropines added the enhancement label on Apr 24, 2023
@anapnoe (Owner) commented Apr 26, 2023

Yes, this is really cool. I need to find some more time to look into it, thanks.

@midcoastal

I imagine this may come around the same time as any Vlad merging? Looking forward to it. I bought a 4070 Ti, and while it is certainly doing better than my 1080 Ti was, I know I am not getting the best out of it. I'm considering doing some compiling myself... but we know how that goes... ;-)

@rushuna86

The TensorRT feature is of very limited use for generation right now, TBH. Automatic1111 has an extension that supports it, but the conversion process has a limit on "size" during conversion, and the resulting TensorRT UNet file is then restricted at generation time to the image size/batch it was "converted" with; LoRA doesn't work with it, and neither does hires. fix. So unless you're generating images as-is, it's not very useful yet. Wait for NVIDIA's release of the implementation. I went from 47-49 it/s up to 87 it/s on a batch of 4, so performance isn't really the 2x it's hyped to be.
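For reference, the fixed size/batch limitation comes from the optimization profile baked into the engine at build time: if the min, opt, and max shapes are all set to one value, the engine only accepts that value. Below is a minimal sketch of the build step, assuming an ONNX UNet exported with dynamic batch/spatial axes; file names, input names, and shapes are placeholders.

```python
# Hypothetical sketch: build a fixed-shape TensorRT engine from an exported UNet.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("unet.onnx", "rb") as f:           # placeholder input path
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)        # half precision on RTX 20/30/40 cards

# min == opt == max is exactly what locks the engine to one batch size and one
# latent resolution (the limitation described above): batch 2 at 64x64 latents.
profile = builder.create_optimization_profile()
profile.set_shape("sample", (2, 4, 64, 64), (2, 4, 64, 64), (2, 4, 64, 64))
profile.set_shape("encoder_hidden_states", (2, 77, 768), (2, 77, 768), (2, 77, 768))
config.add_optimization_profile(profile)

engine = builder.build_serialized_network(network, config)
with open("unet.plan", "wb") as f:           # placeholder output path
    f.write(engine)
```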
