generation hangs forever #659

Open
@YerongLi

Description


Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

The following code should print the generated output:

from llama_cpp import Llama
llm = Llama(model_path="/scratch/yerong/.cache/pyllama/Llama-2-7b/ggml-model-q4_0.gguf", n_gpu_layers=100, n_ctx=100)
output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
print(output)

Current Behavior

Currently, it hangs forever after printing the following startup log:

llm_load_print_meta: model ftype    = mostly Q4_0                                                                                                             
llm_load_print_meta: model size     = 6.74 B                                                                                                                  
llm_load_print_meta: general.name   = LLaMA v2                                                                                                                
llm_load_print_meta: BOS token = 1 '<s>'                                                                                                                      
llm_load_print_meta: EOS token = 2 '</s>'                                                                                                                     
llm_load_print_meta: UNK token = 0 '<unk>'                                                                                                                    
llm_load_print_meta: LF token  = 13 '<0x0A>'                                                                                                                  
llm_load_tensors: ggml ctx size =    0.09 MB                                                                                                                  
llm_load_tensors: using CUDA for GPU acceleration                                                                                                             
llm_load_tensors: mem required  =   70.41 MB (+   50.00 MB per state)                                                                                         
llm_load_tensors: offloading 32 repeating layers to GPU                                                                                                       
llm_load_tensors: offloading non-repeating layers to GPU                                                                                                      
llm_load_tensors: offloading v cache to GPU                                                                                                                   
llm_load_tensors: offloading k cache to GPU                                                                                                                   
llm_load_tensors: offloaded 35/35 layers to GPU                                                                                                               
llm_load_tensors: VRAM used: 3628 MB                                                                                                                          
..................................................................................................                                                            
llama_new_context_with_model: kv self size  =   50.00 MB                                                                                                      
llama_new_context_with_model: compute buffer total size =   15.24 MB                                                                                          
llama_new_context_with_model: VRAM scratch buffer: 13.77 MB                                                                                                   
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |

[screenshot]

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • SDK version, e.g. for Linux:
(lla) [yerong2@ccc0351 self-instruct]$ python3 --version
Python 3.11.4
(lla) [yerong2@ccc0351 self-instruct]$ make --version
GNU Make 3.82
Built for x86_64-redhat-linux-gnu
Copyright (C) 2010  Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
(lla) [yerong2@ccc0351 self-instruct]$ g++ --version
g++ (GCC) 11.2.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

make clean; make LLAMA_CUBLAS=1 -j libllama.so
cp libllama.so /scratch/yerong/.conda/envs/lla/lib/python3.11/site-packages/llama_cpp

from llama_cpp import Llama
llm = Llama(model_path="/scratch/yerong/.cache/pyllama/Llama-2-7b/ggml-model-q4_0.gguf", n_gpu_layers=100, n_ctx=100)
output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
print(output)
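
Before running the Python snippet, it can help to confirm that the freshly built libllama.so actually landed in the installed package directory and that the import resolves to that environment. A minimal sanity check (a sketch; the paths are the ones used in this report):

# check the copied shared library is present and recent
ls -l /scratch/yerong/.conda/envs/lla/lib/python3.11/site-packages/llama_cpp/libllama.so
# check which installed package the interpreter actually imports
python3 -c "import llama_cpp; print(llama_cpp.__file__)"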

Note: Many issues seem to concern functional or performance differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your Python package, and which parameters you're passing to the context.
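
For example, a quick way to see which llama-cpp-python build is being imported (a sketch; it assumes the installed package exposes __version__, which current releases do):

python3 -c "import llama_cpp; print(llama_cpp.__version__)"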

Try the following:

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python setup.py develop
  5. cd ./vendor/llama.cpp
  6. Follow llama.cpp's instructions to build it with CMake
  7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue (a sketch of an equivalent invocation is given after this list). If you can, log an issue with llama.cpp.
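
For reference, an equivalent ./main invocation for the configuration in this report might look roughly like the sketch below. This is only an approximation: flag names can differ between llama.cpp revisions, and -ngl, -c, -n, and -r are used here to stand in for n_gpu_layers, n_ctx, max_tokens, and one of the stop sequences.

./main -m /scratch/yerong/.cache/pyllama/Llama-2-7b/ggml-model-q4_0.gguf \
       -p "Q: Name the planets in the solar system? A: " \
       -n 32 -c 100 -ngl 100 -r "Q:"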

Failure Logs

Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.

Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use GitHub's markdown to cleanly format your logs for easy readability.
