
LLM-based Evaluators ask for an API key when run with local LLMs #8818

Open · bilgeyucel opened this issue Feb 5, 2025 · 1 comment
Labels
type:bug Something isn't working

Comments

@bilgeyucel (Contributor)

Describe the bug
When I run the LLM-based evaluators with local LLMs, I get an error saying that the OpenAI API key is missing, even though no key is needed.

Error message

...
None of the following authentication environment variables are set: ('OPENAI_API_KEY',)

Expected behavior
Maybe the api_key param should be optional?
It works when I initialize the evaluator like this (with Secret imported from haystack.utils):

evaluator = FaithfulnessEvaluator(api_key=Secret.from_token("just-a-placeholder"),
                                  api_params={"api_base_url": local_endpoint, "model": "llama3"})

or like this:

evaluator = FaithfulnessEvaluator(api_key=Secret.from_env_var("...", strict=False),
                                  api_params={"api_base_url": local_endpoint, "model": "llama3"})
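
For what it's worth, the second form seems to work because a non-strict env-var secret resolves to None instead of raising when the variable is unset (a minimal check, assuming haystack.utils.Secret behaves as documented):

from haystack.utils import Secret

secret = Secret.from_env_var("SOME_UNSET_VAR", strict=False)
print(secret.resolve_value())  # prints None, no error raised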

To Reproduce
Run the code below. It's taken from the docs.

from haystack.components.evaluators import FaithfulnessEvaluator

questions = ["Who created the Python language?"]
contexts = [
    [(
        "Python, created by Guido van Rossum in the late 1980s, is a high-level general-purpose programming "
        "language. Its design philosophy emphasizes code readability, and its language constructs aim to help "
        "programmers write clear, logical code for both small and large-scale software projects."
    )],
]
predicted_answers = [
    "Python is a high-level general-purpose programming language that was created by George Lucas."
]
local_endpoint = "http://localhost:11434/v1"  # Ollama's default OpenAI-compatible endpoint

evaluator = FaithfulnessEvaluator(api_params={"api_base_url": local_endpoint, "model": "llama3"})

result = evaluator.run(questions=questions, contexts=contexts, predicted_answers=predicted_answers)

FAQ Check

System:

  • OS: Not relevant
  • GPU/CPU: CPU
  • Haystack version (commit or version number): 2.9.0
@bilgeyucel bilgeyucel added the type:bug Something isn't working label Feb 5, 2025
@lbux (Contributor)

lbux commented Feb 5, 2025

I debated bringing this up a while back, but I'm unsure whether this is a bug or "working as expected". Since we use the OpenAI generator as a backend, it needs the logic to resolve an API key, either from the OPENAI_API_KEY environment variable or passed in directly as a Secret. We could make the API key optional, but there are also use cases where people host their own OpenAI-API-compatible servers (like vLLM) and still need an API key. There is also the issue of the openai package requiring an api_key parameter to be passed; otherwise an error is raised.
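
For reference, the openai client itself refuses to construct without a key, even when pointed at a local server; a quick illustration (assuming OPENAI_API_KEY is unset):

from openai import OpenAI

# Raises openai.OpenAIError asking for the api_key client option,
# even though the base URL points at a local server.
client = OpenAI(base_url="http://localhost:11434/v1")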

I see two possible solutions:

  1. Leave it as is and update the documentation to ask users to explicitly provide an api_key (any input would work; it would just be ignored by the server)
  2. If an api_base_url is passed and no API key is provided, set a default value to prevent an error from being raised by the openai package (rough sketch below)
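
A rough sketch of what (2) could look like inside the generator's init (names here are illustrative assumptions, not Haystack's actual internals):

import os

def resolve_openai_api_key(api_key, api_base_url):
    # Hypothetical helper: prefer an explicit key, then the environment,
    # and only fall back to a placeholder when a custom base URL is set,
    # since local OpenAI-compatible servers typically ignore the value.
    if api_key is not None:
        return api_key
    if "OPENAI_API_KEY" in os.environ:
        return os.environ["OPENAI_API_KEY"]
    if api_base_url is not None:
        return "placeholder-api-key"
    raise ValueError(
        "None of the following authentication environment variables are set: ('OPENAI_API_KEY',)"
    )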
