Running Custom Model with isinstance(model, VllmModelForTextGeneration) problem #15858
yesilcagri announced in Q&A
Replies: 2 comments
- Make sure you have implemented all of the required methods for that interface.
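A quick way to act on this is to check, before starting the engine, which of the expected methods the custom class actually defines. A minimal sketch, assuming the class is importable; the import path and class name are placeholders, and the exact method set depends on the vLLM version (the two extra names below are the ones called out in the next comment):

```python
# Sanity check (sketch): list which interface methods the custom model class
# does not define yet. MyModelForCausalLM and its import path are placeholders;
# the required method names can differ between vLLM versions.
from my_model.modeling import MyModelForCausalLM  # hypothetical import

EXPECTED_METHODS = ("forward", "compute_logits", "sample")

missing = [name for name in EXPECTED_METHODS
           if not callable(getattr(MyModelForCausalLM, name, None))]
print("Missing interface methods:", missing or "none")
```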
- You need to implement both the compute_logits and sample interfaces: vllm/vllm/model_executor/models/interfaces_base.py, lines 95 to 112 in cb84e45.
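For reference, here is a minimal sketch of what those two methods usually look like in the built-in decoder models of that vLLM generation (e.g. Llama). The class name, constructor, and lm_head wiring are illustrative only; real vLLM models are built from a VllmConfig, and the import paths may shift between versions:

```python
from typing import Optional

import torch
from torch import nn

from vllm.model_executor.layers.logits_processor import LogitsProcessor
from vllm.model_executor.layers.sampler import SamplerOutput, get_sampler
from vllm.model_executor.layers.vocab_parallel_embedding import ParallelLMHead
from vllm.model_executor.sampling_metadata import SamplingMetadata


class MyModelForCausalLM(nn.Module):
    """Hypothetical custom model; __init__/forward/load_weights are trimmed to
    the pieces needed by the two interface methods below."""

    def __init__(self, vocab_size: int, hidden_size: int) -> None:
        super().__init__()
        # Assumed components; a real vLLM model builds these from its config
        # (and is constructed from a VllmConfig rather than raw sizes).
        self.lm_head = ParallelLMHead(vocab_size, hidden_size)
        self.logits_processor = LogitsProcessor(vocab_size)
        self.sampler = get_sampler()

    def compute_logits(
        self,
        hidden_states: torch.Tensor,
        sampling_metadata: SamplingMetadata,
    ) -> Optional[torch.Tensor]:
        # Project the final hidden states to vocabulary logits.
        return self.logits_processor(self.lm_head, hidden_states,
                                     sampling_metadata)

    def sample(
        self,
        logits: torch.Tensor,
        sampling_metadata: SamplingMetadata,
    ) -> Optional[SamplerOutput]:
        # Delegate token selection to vLLM's sampler.
        return self.sampler(logits, sampling_metadata)
```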
- I have a custom pretrained model. I prepared my custom model classes and registered them in the _TEXT_GENERATION_MODELS dictionary in registry.py. The model loads successfully, but when I call the generate function as below, I get the error "ValueError: LLM.generate() is only supported for (conditional) generation models (XForCausalLM, XForConditionalGeneration)."
llm = LLM(model="my_model", trust_remote_code=True, dtype="half")
outputs = llm.generate(prompts, sampling_params)
I did a lot of debugging. Comparing my model with Llama, I see that the isinstance(model, VllmModelForTextGeneration) check (in vllm/model_executor/models/interfaces_base.py) returns False for my model but True for Llama. I don't know how to handle this. Can you help me?
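One way to narrow this down is to run the same interface check outside the engine, directly on the registered class. A sketch, assuming is_text_generation_model is available in the interfaces_base.py module mentioned above (its name and location can vary by vLLM version) and using a placeholder import for the custom class:

```python
# Reproduce vLLM's interface check on the class without starting the engine.
# is_text_generation_model lives in the same interfaces_base.py module
# referenced above (availability may vary by vLLM version).
from vllm.model_executor.models.interfaces_base import is_text_generation_model

from my_model.modeling import MyModelForCausalLM  # placeholder import path

# Expected to print False until the interface methods (e.g. compute_logits and
# sample) are implemented on the class, and True afterwards.
print(is_text_generation_model(MyModelForCausalLM))
```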