Cannot get LLM usage tokens/cost for each task in Guardrails (Jailbreak, Hallucination, Custom Prompt Check...) #41

@thuanng-a11y

Description

import json
import os

# Import path assumed; adjust to match your installed guardrails package.
from guardrails import GuardrailsAsyncOpenAI, JsonString

# guardrails_config and req come from the surrounding application code.
client = GuardrailsAsyncOpenAI(
    config=JsonString(json.dumps(guardrails_config))
)
response = await client.chat.completions.create(
    model=os.getenv("OPENAI_MODEL_NAME", "gpt-4.1-mini"),
    messages=[{"role": "user", "content": req.input}],
    suppress_tripwire=True,  # don't raise when a tripwire trips
)

I've checked the response object, and it doesn't contain the LLM usage (tokens/cost) for each guardrail task, even though the guardrails ran and the tripwire triggered successfully. It only shows the tokens used for the main LLM response.
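For illustration, this is roughly what I was hoping to find. The attribute paths here are my assumptions, not confirmed API:

# Token usage I can see today (attribute path may differ slightly):
print(response.usage)

# What I was hoping for: per-check usage for Jailbreak, Hallucination,
# Custom Prompt Check, etc. A `guardrail_results` attribute with a
# usage/cost field per result is an assumption on my part, so this
# loop finds nothing useful today.
for result in getattr(response, "guardrail_results", []):
    print(result)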

How can I get those values? Or has this not been implemented yet in this version?

Thanks.
