Running the example fails after following the build steps in README.md #295

Open
Description

@pnsvk
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD go run ./examples -m /Users/pnsvk/Downloads/mistral-7b-v0.1.Q4_K_M.gguf -t 14
# github.com/go-skynet/go-llama.cpp
binding.cpp:333:67: warning: format specifies type 'size_t' (aka 'unsigned long') but the argument has type 'int' [-Wformat]
binding.cpp:809:5: warning: deleting pointer to incomplete type 'llama_model' may cause undefined behavior [-Wdelete-incomplete]
./llama.cpp/llama.h:60:12: note: forward declaration of 'llama_model'
# github.com/go-skynet/go-llama.cpp/examples
/usr/local/go/pkg/tool/darwin_amd64/link: running clang++ failed: exit status 1
ld: warning: -no_pie is deprecated when targeting new OS versions
Undefined symbols for architecture x86_64:
  "_ggml_metal_add_buffer", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_free", referenced from:
      llama_context::~llama_context() in libbinding.a(llama.o)
  "_ggml_metal_get_concur_list", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_graph_compute", referenced from:
      llama_eval_internal(llama_context&, int const*, float const*, int, int, int, char const*) in libbinding.a(llama.o)
  "_ggml_metal_graph_find_concurrency", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_host_free", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
      llm_load_tensors(llama_model_loader&, llama_model&, int, int, int, float const*, bool, bool, ggml_type, bool, void (*)(float, void*), void*) in libbinding.a(llama.o)
      llama_model::~llama_model() in libbinding.a(llama.o)
      llama_context::~llama_context() in libbinding.a(llama.o)
  "_ggml_metal_host_malloc", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
      llm_load_tensors(llama_model_loader&, llama_model&, int, int, int, float const*, bool, bool, ggml_type, bool, void (*)(float, void*), void*) in libbinding.a(llama.o)
  "_ggml_metal_if_optimized", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_init", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_log_set_callback", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_set_n_cb", referenced from:
      llama_eval_internal(llama_context&, int const*, float const*, int, int, int, char const*) in libbinding.a(llama.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Could you please help me check this error?

Following are the steps I followed to build and run it, as described in the README:

MAC-CBBH4ACVpp:code-checkouts pnsvk$ git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp

MAC-CBBH4ACVpp:code-checkouts pnsvk$ cd go-llama.cpp
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ make libbinding.a

MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD go run ./examples -m "/model/path/here" -t 14
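All of the undefined symbols are _ggml_metal_* functions, so it looks like libbinding.a was compiled with Metal support while the final Go link step is not being given the Metal objects or Apple frameworks. As a sketch of what I would try next (assuming the Makefile supports a BUILD_TYPE=metal variant and that these -framework flags are what the Metal build expects; I have not verified either):

# assumption: BUILD_TYPE=metal and the -framework flags below come from my reading of the README, not verified
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ BUILD_TYPE=metal make libbinding.a
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ CGO_LDFLAGS="-framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders" LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD go run ./examples -m "/model/path/here" -t 14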
