
[AGE-348] [bug] Errors are not correctly handled in the LLM applications #1805

Open

mmabrouk opened this issue Jun 20, 2024 · 2 comments

Labels: bug (Something isn't working), Medium priority, qa, SDK

Comments


mmabrouk commented Jun 20, 2024

Often, exceptions raised inside the application code are not handled appropriately and are displayed in the playground as ordinary responses rather than as errors.

For example, if you create an application in OSS without setting the Mistral API key and then call it from the playground, you receive the opaque message 'string indices must be integers' rather than a clear exception.
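
A plausible mechanism for that opaque message (my assumption, not confirmed in this issue): the stringified exception ends up somewhere that expects a dict-shaped response, and indexing a str with a str key raises exactly this TypeError:

```python
# Hypothetical reproduction of the opaque message (assumption: the SDK
# hands the stringified exception to code that expects a parsed dict).
response = '{"message":"no api key supplied"}'  # a str, not a dict
response["message"]  # TypeError: string indices must be integers
```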

The aim of this issue is to examine how we handle exceptions in the SDK, address the problem, and incorporate tests into the QA process to ensure ongoing proper handling of exceptions.
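
One possible direction for the fix, as a minimal sketch: have the SDK-side entrypoint wrapper catch exceptions from the user's app code and return a structured error payload the playground can render. `forward_exceptions` and the payload shape below are hypothetical, not the actual Agenta SDK API; this assumes an async, FastAPI-style route function.

```python
import traceback
from functools import wraps

def forward_exceptions(func):
    """Hypothetical SDK-side wrapper: catch exceptions raised by the
    user's app code and return a structured error instead of letting
    them leak out as malformed responses."""
    @wraps(func)
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception as exc:
            # Payload shape is an assumption; the point is that the
            # playground receives the real error and traceback verbatim.
            return {
                "error": type(exc).__name__,
                "message": str(exc),
                "stacktrace": traceback.format_exc(),
            }
    return wrapper
```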

Example:

Here is the container's log; the error shown in the UI, however, is string indices must be integers:

11:42:20 - LiteLLM:ERROR: main.py:399 - litellm.acompletion(): Exception occured - litellm.APIConnectionError: {"message":"no api key supplied"}
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 1525, in completion
    model_response = cohere.completion(
  File "/usr/local/lib/python3.10/site-packages/litellm/llms/cohere.py", line 188, in completion
    raise CohereError(message=response.text, status_code=response.status_code)
litellm.llms.cohere.CohereError: {"message":"no api key supplied"}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 392, in acompletion
    response = await loop.run_in_executor(None, func_with_context)  # type: ignore
  File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 625, in wrapper
    result = original_function(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/litellm/main.py", line 2577, in completion
    raise exception_type(
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7223, in exception_type
    raise e
  File "/usr/local/lib/python3.10/site-packages/litellm/utils.py", line 7187, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: {"message":"no api key supplied"}

Another instance: when the OpenAI API key is not provided and the completion fails, we get the following:

[Screenshot: CleanShot 2024-06-20 at 14.18.23@2x.png]


mmabrouk added the bug, Medium priority, qa, and SDK labels on Jun 20, 2024

roldugin commented Jun 23, 2024

As a workaround, find the container name of your application and tail its logs:

$ docker ps

CONTAINER ID  IMAGE         COMMAND            CREATED         STATUS         PORTS    NAMES
17fb8659c46f  27f00e3db305  "./entrypoint.sh"  28 minutes ago  Up 28 minutes  80/tcp   mytest-app-66781d006ab5214eac0cfd37
...

$ docker logs -f mytest-app-66781d006ab5214eac0cfd37

...
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
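
Tip (not from the thread, but standard Docker CLI): `docker ps --filter "name=mytest-app"` narrows the listing to containers whose name matches, which saves scanning the full `docker ps` output when many containers are running.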

@mmabrouk Is this the easiest way to see the app's logs?

mmabrouk commented

Hi @roldugin

Yes, this workaround is currently the easiest way to see the app's logs and diagnose problems. The correct behavior, however, is to forward the error message to the playground. This worked in previous versions, but we seem to have broken it when we introduced the new observability decorators.
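
For illustration, here is the kind of decorator bug that produces this regression (an assumed pattern, not the actual Agenta code): the tracing wrapper catches the exception for the span, then returns it as a plain value, so the caller sees a normal-looking response instead of an error.

```python
import functools

def observe(func):
    """Illustrative buggy tracing decorator (assumed pattern)."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception as exc:
            # The span gets the error, but the caller gets a plain string:
            # downstream code treating it as a dict then fails with
            # 'string indices must be integers'.
            return str(exc)
    return wrapper

# The fix: record the error on the span, then `raise` so it propagates
# to the route handler, which can turn it into an explicit error payload.
```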
