Describe the bug
When I run the LLM-based evaluators with local LLMs, I get an error saying that the OPENAI_API_KEY is missing, even though no key is needed.
Error message
...
None of the following authentication environment variables are set: ('OPENAI_API_KEY',)
Expected behavior
The api_key param should arguably be optional. It works when I initialize the evaluator with an explicit api_key (the value is ignored by the local server).
To Reproduce
Run the code below. It's taken from the docs.

```python
from haystack.components.evaluators import FaithfulnessEvaluator

questions = ["Who created the Python language?"]
contexts = [
    [(
        "Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming "
        "language. Its design philosophy emphasizes code readability, and its language constructs aim to help "
        "programmers write clear, logical code for both small and large-scale software projects."
    )],
]
predicted_answers = [
    "Python is a high-level general-purpose programming language that was created by George Lucas."
]

local_endpoint = "http://localhost:11434/v1"
evaluator = FaithfulnessEvaluator(api_params={"api_base_url": local_endpoint, "model": "llama3"})
result = evaluator.run(questions=questions, contexts=contexts, predicted_answers=predicted_answers)
```
I debated bringing this up a while back, but I'm unsure whether this is a bug or "working as expected". Since we use the OpenAI generator as a backend, it needs the logic to resolve an API key either from the OPENAI_API_KEY environment variable or passed in directly as a secret. We could make the API key optional, but there are also use cases where people host their own OpenAI-API-compatible servers (like vLLM) and still need a key. There is also the issue that the OpenAI package requires an api_key parameter to be passed, otherwise an error is raised.
I see two possible solutions:

1. Leave as is and update the documentation to ask users to explicitly provide an api_key (any input would work, it would just be ignored by the server).
2. If an api_base_url is passed and no API key is provided, set a default value to prevent the OpenAI package from raising an error.
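Solution 2 could be sketched roughly as follows. This is a hypothetical standalone helper, not Haystack's actual resolution code; the function name and the placeholder key value are assumptions for illustration:

```python
import os


def resolve_api_key(api_key=None, api_base_url=None):
    """Sketch of solution 2: fall back to a placeholder key when a custom
    base URL is given and no real key is available."""
    if api_key is not None:
        # An explicitly provided key always wins.
        return api_key
    env_key = os.environ.get("OPENAI_API_KEY")
    if env_key is not None:
        return env_key
    if api_base_url is not None:
        # Self-hosted servers ignore the key, but the openai client still
        # requires one, so substitute a harmless placeholder.
        return "sk-no-key-required"
    # No key, no env var, no custom endpoint: reproduce the current error.
    raise ValueError(
        "None of the following authentication environment variables are set: "
        "('OPENAI_API_KEY',)"
    )
```

With this behavior, the repro above would work unchanged against a local endpoint, while users who pass neither a key nor a custom base URL would still get the current error.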