
Feedback for “Log Levels” #1295

Open
subbiah-cape opened this issue Feb 20, 2025 · 3 comments
Labels
question Further information is requested

Comments

@subbiah-cape

I was working on extracting the article title and abstract from a PDF. The code is already implemented and working correctly, using GPT-4o on Azure. However, when I integrated this code with Langfuse, the model cost and the input and output token counts were not being tracked.

For the API request, I called the endpoint via a raw request URL and implemented parallel processing against OpenAI's API directly. Additionally, I imported AzureOpenAI from the OpenAI Python package.

My question is: if I integrate my code with Langfuse without using Langfuse's modules (i.e., from langfuse.openai import AzureOpenAI), will it still track tokens, model cost, and performance? I also used the Langfuse decorator, but tracking did not work when executing parallel requests through the raw request URL.

Is it mandatory to use Langfuse modules and adjust the implementation according to Langfuse documentation for tracking features to work?

dosubot added the question label on Feb 20, 2025
@marcklingen
Member

For auto-instrumentation to work, you need to use the import from langfuse.openai. Have you tried this?
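A minimal sketch of the drop-in swap (hedged: it assumes the langfuse package is installed and the LANGFUSE_* and Azure credentials are set in the environment; all endpoint and deployment values below are placeholders, not from this thread). Only the import path changes; the client API is identical, and each call is then traced with model, tokens, and cost.

```python
# Sketch of Langfuse's drop-in auto-instrumentation: only the import changes.
try:
    from langfuse.openai import AzureOpenAI  # instrumented client (traced)
except ImportError:
    # Placeholder fallback so this sketch runs even without langfuse installed;
    # in real code, install langfuse and drop this stub.
    class AzureOpenAI:
        def __init__(self, **kwargs):
            self.kwargs = kwargs

client = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",  # placeholder endpoint
    azure_deployment="gpt-4o",                          # placeholder deployment
    api_key="<key>",                                    # placeholder key
    api_version="2024-02-01",                           # placeholder version
)
# client.chat.completions.create(...) is now auto-traced when langfuse is present.
```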

@subbiah-cape
Author

When using langfuse.openai, everything is tracked correctly. However, if I import from langfuse.openai but make requests using the request URL, tracking does not work.

from langfuse.openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=GPT4o_BASE_URL,
    azure_deployment=GPT4o_ENGINE_Name,
    api_key=GPT4o_KEY,
    api_version=GPT4o_API_VERSION,
)

# _base_url is a private attribute of the OpenAI client, used here to rebuild
# the raw endpoint for direct HTTP requests.
base_url = str(client._base_url).rstrip("/")
GPT4o_Request_URL = f"{base_url}/chat/completions?api-version={GPT4o_API_VERSION}"

Is there a separate module or function in Langfuse that enables auto-instrumentation when making requests using GPT4o_Request_URL?

Member

That's expected: the request is not going through the instrumented SDK, so Langfuse never sees the call. Is there a reason you can't use the SDK?
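For readers who must keep the raw request URL: calls can still be reported to Langfuse by creating observations manually with the low-level client. This is a hedged sketch assuming the Langfuse v2 Python SDK (langfuse.generation / generation.end); the helper maps OpenAI-style token counts to Langfuse's usage format, and all request details (GPT4o_Request_URL, messages, headers) are illustrative placeholders.

```python
# Sketch: manual tracking of a raw HTTP call (assumes Langfuse v2 Python SDK;
# variable names and request details are illustrative).

def usage_from_response(response_json):
    """Map an OpenAI-style usage block to Langfuse's usage dict."""
    u = response_json["usage"]
    return {
        "input": u["prompt_tokens"],
        "output": u["completion_tokens"],
        "total": u["total_tokens"],
    }

# The commented lines show where the low-level Langfuse calls would go:
#
# import requests
# from langfuse import Langfuse
#
# langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST
# generation = langfuse.generation(name="extract-title", model="gpt-4o", input=messages)
# resp = requests.post(GPT4o_Request_URL,
#                      headers={"api-key": GPT4o_KEY},
#                      json={"messages": messages}).json()
# generation.end(output=resp["choices"][0]["message"]["content"],
#                usage=usage_from_response(resp))
# langfuse.flush()
```

The trade-off versus the instrumented SDK: token counts and model names must be extracted and reported by hand, and cost inference then depends on the model name matching a known pricing entry.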
