I believe that, in order to resolve mudler/LocalAI#1446, go-llama.cpp needs to be built against at least commit 799a1cb13b0b1b560ab0ceff485caed68faa8f1f of llama.cpp to enable Mixtral support.
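A minimal sketch of what pinning the llama.cpp submodule to that commit might look like. The clone URL, submodule path, and `libbinding.a` Makefile target are assumptions based on the usual go-llama.cpp layout; adjust to the actual repository structure.

```shell
# Clone go-llama.cpp together with its llama.cpp submodule.
git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp
cd go-llama.cpp/llama.cpp

# Pin the submodule to the commit that adds Mixtral (MoE) support.
git fetch origin
git checkout 799a1cb13b0b1b560ab0ceff485caed68faa8f1f

# Rebuild the Go binding against the pinned commit
# (assumed Makefile target; on Apple Silicon you may also need BUILD_TYPE=metal).
cd ..
make libbinding.a
```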
sfxworks changed the title from "Build with >799a1cb13b0b1b560ab0ceff485caed68faa8f1f" to "Build with >799a1cb13b0b1b560ab0ceff485caed68faa8f1f to support Mixtral" on Dec 15, 2023.
@mudler how can we help get this in? I get an error when building on Apple Metal with the current master and llama.cpp commit 799a1cb13b0b1b560ab0ceff485caed68faa8f1f.