AlpinDale/exllamav2

Forked from turboderp-org/exllamav2.

About

A fast inference library for running LLMs locally on modern consumer-class GPUs.
Languages
- Python 62.9%
- Cuda 20.7%
- C++ 13.9%
- C 2.5%