@tye-singwa I just pushed a new version to PyPI with a build flag that lets you force a cmake installation; you can use `FORCE_CMAKE=1 pip install --upgrade llama-cpp-python`
Hi!
I've tried to install the Python package, but it seems the AVX / AVX2 / SSE3 optimizations were not detected. As per codewars/runner#118 (comment) and per the makefile ggml-org/llama.cpp@872c365#diff-76ed074a9305c04054cdebb9e9aad2d818052b07091de1f20cad0bbac34ffb52R79-R82, they are not always enabled.
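To see whether the host CPU actually advertises these instruction sets (as opposed to the build simply not enabling them), you can read the kernel's CPU flags on Linux. This is a hypothetical helper, not part of llama-cpp-python; `cpu_simd_flags` is an assumed name:

```python
# Report which SIMD instruction sets the CPU advertises.
# Linux-only: parses the "flags" line of /proc/cpuinfo.
def cpu_simd_flags(path="/proc/cpuinfo"):
    wanted = ("sse3", "avx", "avx2", "avx512f")
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {name for name in wanted if name in flags}
    except OSError:
        pass  # not Linux, or /proc unavailable
    return set()

print(cpu_simd_flags())
```

If the flags show up here but the compiled library still runs without them, the problem is in the build configuration rather than the hardware.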
Also, I see the cmake build uses the makefile (https://github.com/abetlen/llama-cpp-python/blob/main/CMakeLists.txt#L8) — maybe it's possible to change it back?
Thanks