
[Bug]: Getting OPENAI_KEY error despite using different LLM? #17379

Open
Leo-Boon opened this issue Dec 27, 2024 · 2 comments
Labels
bug Something isn't working triage Issue needs to be triaged/prioritized

Comments

@Leo-Boon

Bug Description

```python
from llama_index.core import PromptTemplate
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader


# Transform a string into zephyr-specific input
def completion_to_prompt(completion):
    return f"<|system|>\n\n<|user|>\n{completion}\n<|assistant|>\n"


# Transform a list of chat messages into zephyr-specific input
def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == "system":
            prompt += f"<|system|>\n{message.content}\n"
        elif message.role == "user":
            prompt += f"<|user|>\n{message.content}\n"
        elif message.role == "assistant":
            prompt += f"<|assistant|>\n{message.content}\n"

    # ensure we start with a system prompt, insert blank if needed
    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n</s>\n" + prompt

    # add final assistant prompt
    prompt = prompt + "<|assistant|>\n"

    return prompt


import torch
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core import Settings

Settings.llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    context_window=3900,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.7, "top_k": 50, "top_p": 0.95},
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    device_map="auto",
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
```

Running this in my llama_index folder throws an error explaining that I need an OpenAI API key, despite not using an OpenAI model. This code is ripped straight from an example on the LlamaIndex site.
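For what it's worth, the two prompt helpers are plain string formatting, so their output can be checked without loading any model. A minimal standalone sketch (using a hypothetical `Msg` dataclass in place of llama_index's `ChatMessage`, purely for illustration):

```python
from dataclasses import dataclass


@dataclass
class Msg:
    # Hypothetical stand-in for llama_index's ChatMessage (illustration only).
    role: str
    content: str


def messages_to_prompt(messages):
    # Same logic as the helper above: tag each message with its role marker.
    prompt = ""
    for m in messages:
        if m.role == "system":
            prompt += f"<|system|>\n{m.content}\n"
        elif m.role == "user":
            prompt += f"<|user|>\n{m.content}\n"
        elif m.role == "assistant":
            prompt += f"<|assistant|>\n{m.content}\n"
    # ensure we start with a system prompt, insert blank if needed
    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n</s>\n" + prompt
    # add final assistant prompt
    return prompt + "<|assistant|>\n"


print(messages_to_prompt([Msg("user", "Hi")]))
# prints the zephyr-formatted prompt with a blank system block prepended
```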

Version

0.12.8

Steps to Reproduce

Run the above code in your llama index folder

Relevant Logs/Tracebacks

No response

@Leo-Boon Leo-Boon added bug Something isn't working triage Issue needs to be triaged/prioritized labels Dec 27, 2024

dosubot bot commented Dec 27, 2024

Hey there, @Leo-Boon! I'm here to help you out with any bugs or questions you have. Let's figure out what's going on with your code together.

I found a similar issue where users encountered an OPENAI_KEY error despite using a different LLM. When no custom embedding model is specified, LlamaIndex defaults to OpenAI's text-embedding-ada-002, which triggers the API key check even though your LLM is local [1].

To resolve this, you should specify a custom embedding model to ensure that OpenAI's services are not being used inadvertently. You can refer to the documentation for custom embeddings: Custom Embeddings Documentation.

Additionally, ensure that your environment variables and configuration do not include an OpenAI API key, and verify that no part of your code or its dependencies is attempting to use OpenAI's services.
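To stay fully local, the usual fix is to set a local embedding model alongside the local LLM. A sketch, assuming the `llama-index-embeddings-huggingface` package is installed (the model name here is just one common choice, not something prescribed by the issue):

```python
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# A local embedding model, so index construction never calls OpenAI.
# "BAAI/bge-small-en-v1.5" is an assumption; any sentence-embedding model works.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```

With this set before `VectorStoreIndex.from_documents(...)`, no OpenAI key should be required.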




@logan-markewich
Collaborator

@Leo-Boon there's an embedding model and an LLM. You only set the LLM on the settings.

For example:

```python
from llama_index.embeddings.openai import OpenAIEmbedding

Settings.embed_model = OpenAIEmbedding()
```
