I was working on extracting the article title and abstract from a PDF. The code is already implemented and works correctly, using a GPT-4 deployment on Azure OpenAI. However, when I integrated this code with Langfuse, the model cost and the input/output token counts were not tracked.
For the API requests, I called the endpoint URL directly and implemented the parallel processing against OpenAI's API myself, importing `AzureOpenAI` from the OpenAI Python package.
My question is: if I integrate my code with Langfuse without using the Langfuse modules (i.e., without `from langfuse.openai import AzureOpenAI`), will it still track tokens, model cost, and performance? I also tried the Langfuse `@observe()` decorator, but tracking did not work when running the parallel requests through the endpoint URL.
Is it mandatory to use the Langfuse modules and adjust the implementation according to the Langfuse documentation for tracking to work?
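For reference, the pattern the Langfuse docs describe for automatic tracking is the drop-in client import. A minimal sketch, assuming `langfuse` and `openai` are installed, the standard `LANGFUSE_*` credentials are configured, and the Azure endpoint/key environment variables and the deployment name are placeholders:

```python
import os

# Drop-in replacement for openai.AzureOpenAI: same call interface,
# but each completion is traced (tokens, cost, latency) in Langfuse.
from langfuse.openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],        # assumed env var
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env var
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # hypothetical Azure deployment name
    messages=[{"role": "user", "content": "Extract the title and abstract."}],
)
print(response.choices[0].message.content)
```

The key point is that tracking happens inside the wrapped client's methods, so only calls made through this client object are traced.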
When using `langfuse.openai`, everything is tracked correctly. However, if I import from `langfuse.openai` but then make the requests directly against the endpoint URL, tracking does not work.
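That behavior follows from how the integration works: Langfuse instruments the client object's methods, so a raw HTTP POST to the endpoint URL never passes through the instrumented code path, even if `langfuse.openai` is imported elsewhere. A sketch of the two paths (endpoint, key, and deployment name are hypothetical placeholders):

```python
import os
import requests

# Path 1: raw HTTP request to the Azure endpoint URL.
# This bypasses the Langfuse wrapper entirely, so no tokens,
# cost, or latency are recorded -- importing langfuse.openai
# in the same file does not change that.
resp = requests.post(
    f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/deployments/"
    "my-gpt4-deployment/chat/completions?api-version=2024-02-01",
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)

# Path 2: the same call made through the wrapped client.
# Here the request goes through Langfuse's instrumented method,
# so the generation is traced automatically.
from langfuse.openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-01",
)
resp2 = client.chat.completions.create(
    model="my-gpt4-deployment",
    messages=[{"role": "user", "content": "Hello"}],
)
```

For parallel processing, the wrapped client can still be used from a thread pool or with the async `AzureOpenAI` variant; what matters is that every request goes through the wrapper rather than a hand-built URL call.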