
fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build #2697

Merged (1 commit), Jul 2, 2024

Conversation

@cryptk (Collaborator) commented Jul 1, 2024

Description

Builds were not using the requested number of threads when building llama.cpp. This change ensures the GNU Make jobserver is passed through to the cmake-driven llama.cpp build, so the top-level `-j` setting takes effect.

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.
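For context, the mechanism behind the fix can be sketched as follows. This is a hypothetical Makefile fragment for illustration, not the actual diff in this PR: GNU Make only shares its jobserver with recipe lines it recognizes as recursive make invocations, i.e. lines that reference `$(MAKE)` or are prefixed with `+`. A cmake-driven sub-build launched without either runs with a degraded `MAKEFLAGS` and ignores the parallelism requested at the top level.

```make
# Hypothetical fragment illustrating jobserver passthrough (not the PR's diff).

# Without '+', make treats the line as an ordinary command and does not pass
# the jobserver file descriptors along, so the sub-build may run serially
# even when the top-level make was invoked with -jN.
build-llama-serial:
	cmake --build build/llama

# The '+' prefix marks the line as a sub-make invocation, so the jobserver
# is inherited via MAKEFLAGS and `make -j8` parallelism reaches the
# Makefiles-generator build that `cmake --build` drives underneath.
build-llama:
	+cmake --build build/llama
```

This only applies when `cmake --build` ultimately drives GNU Make; generators such as Ninja manage their own parallelism and do not consume the jobserver in the same way.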

netlify bot commented Jul 1, 2024

Deploy Preview for localai ready!

| Name | Link |
|------|------|
| 🔨 Latest commit | 0aa8567 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/localai/deploys/668326b44c60a800081b3abb |
| 😎 Deploy Preview | https://deploy-preview-2697--localai.netlify.app |

@mudler mudler merged commit c047c19 into mudler:master Jul 2, 2024
33 checks passed