
The custom .toml file does not configure #1581

Open
aramfaghfouri opened this issue Nov 13, 2024 · 2 comments
@aramfaghfouri

r2r cannot work with Azure embedding
I am trying to use Azure OpenAI models for completion and embedding in a full deployment using Docker.

To Reproduce
Here is my config file:

```toml
[auth]
provider = "r2r-full"

[completion]
  [completion.generation_config]
  model = "azure/gpt-4o"
  api_key = "my API key"
  api_base = "https://xyz.openai.azure.com/"
  api_version = "2023-05-15"
  temperature = 0.1
  top_p = 1
  max_tokens_to_sample = 1_024
  stream = false
  add_generation_kwargs = { }

[embedding]
base_model = "azure/text-embedding-3-large"
base_dimension = 3072
batch_size = 128
add_title_as_prefix = false
rerank_model = "None"
concurrent_request_limit = 256
api_key = "my API key"
api_version = "2023-05-15"
api_base = "https://embedding101.openai.azure.com/"
```

Here is my command:

```shell
r2r serve --docker --config-path=config/azure_full_new.toml
```

Expected behavior
The Docker deployment starts perfectly and I can upload a .pdf file.
However, the workflow then fails. When I inspected the Docker logs, this is what I got:

```
2024-11-13 03:11:57,207 - ERROR - root - Error getting embeddings: litellm.APIError: APIError: OpenAIException - Connection error.
2024-11-13 03:11:57,208 - WARNING - root - Request failed (attempt 8): Error getting embeddings: litellm.APIError: APIError: OpenAIException - Connection error.
```

I should mention that the exact same configuration works perfectly with litellm directly, for both embedding and completion.
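For reference, the direct litellm check mentioned above looks roughly like this sketch. The credential values are the placeholders from the config, and the actual request needs network access plus valid Azure credentials, so it is wrapped in a function rather than run at import time:

```python
# Sketch of verifying the Azure embedding settings directly with litellm,
# independent of r2r. All credential values are placeholders from the issue.
azure_kwargs = {
    "api_key": "my API key",  # placeholder
    "api_base": "https://embedding101.openai.azure.com/",
    "api_version": "2023-05-15",
}

def check_azure_embedding():
    """Issue one embedding request; requires network and valid credentials."""
    import litellm  # pip install litellm

    resp = litellm.embedding(
        model="azure/text-embedding-3-large",
        input=["hello world"],
        **azure_kwargs,
    )
    # text-embedding-3-large returns 3072-dimensional vectors by default,
    # matching base_dimension = 3072 in the config above
    return len(resp.data[0]["embedding"])
```

If this call succeeds standalone but the same values fail inside r2r, that would support the theory that the settings are not being passed through to litellm by the server.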

I am on a Macbook Pro.

I would appreciate your help.
Thanks.

@emrgnt-cmplxty
Contributor

I see; I think the issue is that the API base is not flowing through to LiteLLM.

Can you double-check the documentation on configuring LLMs and make sure you are following all the steps for Azure? https://r2r-docs.sciphi.ai/documentation/configuration/llm

If this does not resolve the issue for you, we can look into replicating it on our end.
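Since the suspicion above is that the API base is not flowing through to LiteLLM, one possible workaround (not from this thread, and an assumption about this deployment) is to supply the Azure settings through the environment variables that litellm reads for `azure/` models:

```shell
# Sketch of an environment-variable workaround: litellm reads Azure
# credentials from AZURE_API_KEY / AZURE_API_BASE / AZURE_API_VERSION,
# which can bypass config plumbing. Values are placeholders from the issue.
export AZURE_API_KEY="my API key"
export AZURE_API_BASE="https://embedding101.openai.azure.com/"
export AZURE_API_VERSION="2023-05-15"
# then start the server as before:
# r2r serve --docker --config-path=config/azure_full_new.toml
```

Note that with `--docker` these variables would need to reach the container's environment (for example via an env file); how r2r forwards them is an assumption here, not something confirmed in this thread.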

@aramfaghfouri
Author

Thank you for your response.
I have tried that in a few different ways, and none of them worked.
I would appreciate it if you could try replicating it on your end.
Thanks again.
