Bug Description

```python
from llama_index.core import PromptTemplate
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Transform a string into zephyr-specific input
def completion_to_prompt(completion):
    return f"<|system|>\n\n<|user|>\n{completion}\n<|assistant|>\n"

# Transform a list of chat messages into zephyr-specific input
def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == "system":
            prompt += f"<|system|>\n{message.content}\n"
        elif message.role == "user":
            prompt += f"<|user|>\n{message.content}\n"
        elif message.role == "assistant":
            prompt += f"<|assistant|>\n{message.content}\n"
    # ensure we start with a system prompt, insert blank if needed
    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n</s>\n" + prompt
    # add final assistant prompt
    prompt = prompt + "<|assistant|>\n"
    return prompt

import torch
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core import Settings

Settings.llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
    context_window=3900,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.7, "top_k": 50, "top_p": 0.95},
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    device_map="auto",
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
```

Running this in my llama_index folder throws an error saying that I need an OpenAI API key, despite not using an OpenAI model. This code is taken straight from an example on the LlamaIndex site.

Version
0.12.8

Steps to Reproduce
Run the above code in your llama_index folder.

Relevant Logs/Tracebacks
No response
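For reference, the prompt formatters in the example are plain string templating and can be exercised on their own. A standalone sketch with a stand-in message class (the real code receives LlamaIndex `ChatMessage` objects, but only `.role` and `.content` are used):

```python
from dataclasses import dataclass

# Stand-in for LlamaIndex's ChatMessage; only .role and .content matter here.
@dataclass
class ChatMsg:
    role: str
    content: str

def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == "system":
            prompt += f"<|system|>\n{message.content}\n"
        elif message.role == "user":
            prompt += f"<|user|>\n{message.content}\n"
        elif message.role == "assistant":
            prompt += f"<|assistant|>\n{message.content}\n"
    # ensure we start with a system prompt, insert blank if needed
    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n</s>\n" + prompt
    # add final assistant prompt
    prompt = prompt + "<|assistant|>\n"
    return prompt

# A lone user message gets a blank system prompt prepended and an
# assistant tag appended for the model to continue from.
print(messages_to_prompt([ChatMsg("user", "What is in the data?")]))
# → <|system|>
#   </s>
#   <|user|>
#   What is in the data?
#   <|assistant|>
```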
Hey there, @Leo-Boon! I'm here to help you out with any bugs or questions you have. Let's figure out what's going on with your code together.
I found a similar issue where users encountered an OpenAI API key error despite using a different LLM. When no custom embedding model is specified, LlamaIndex defaults to OpenAI's text-embedding-ada-002, which is likely what triggers the error here [1].
To resolve this, specify a custom embedding model so that OpenAI's services are not used inadvertently; see the Custom Embeddings documentation.
Additionally, ensure that your environment variables and configuration do not include an OpenAI API key, and verify that no part of your code or its dependencies is attempting to use OpenAI's services.
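A minimal sketch of that fix, assuming the `llama-index-embeddings-huggingface` package is installed (the model name below is illustrative, not part of the original example):

```python
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Point LlamaIndex at a local HuggingFace embedding model so that
# VectorStoreIndex.from_documents() no longer falls back to OpenAI's
# text-embedding-ada-002 (which is what demands an OPENAI_API_KEY).
Settings.embed_model = HuggingFaceEmbedding(
    model_name="BAAI/bge-small-en-v1.5"  # any local embedding model works
)
```

With both `Settings.llm` and `Settings.embed_model` set globally, index construction and querying should run fully locally.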