Recent CLBlast release build llama-master-d7d2e6a-bin-win-clblast-x64 no longer works on Windows 11
The following output is received when trying to run the inference command: main -t 24 --color -c 2048 ...
main: build = 775 (d7d2e6a)
main: seed = 1688407136
ggml_opencl: selecting platform: 'NVIDIA CUDA'
ggml_opencl: selecting device: 'NVIDIA GeForce RTX 4090 Laptop GPU'
ggml_opencl: device FP16 support: false
13 errors generated.
ggml_opencl: kernel compile error:
:2:14138: error: expected expression
:2:14230: error: expected expression
:2:14269: error: redefinition of 'is'
:2:14222: note: previous definition is here
:2:14282: error: expected expression
:2:14353: error: use of undeclared identifier 'l0'
:2:14419: error: use of undeclared identifier 'l0'
:2:14612: error: use of undeclared identifier 'ql_offset'; did you mean 'qh_offset'?
:2:14333: note: 'qh_offset' declared here
:2:14766: error: expected expression
:2:15451: error: use of undeclared identifier 'sum'
:2:15456: error: expected expression
:2:15507: error: use of undeclared identifier 'sum'
:2:15875: error: use of undeclared identifier 'sum'
:2:15880: error: expected expression
My machine setup appears to be configured correctly, because other builds from the same release are working fine.
Tested and verified correct functionality of:
llama-master-d7d2e6a-bin-win-openblas-x64.zip
llama-master-d7d2e6a-bin-win-cublas-cu12.1.0-x64.zip
Older CLBlast release binaries of llama.cpp from May 2023 also work fine on the same machine.