Evaluation failed when AIFoundry created Identity Based #39000

Open
duongthaiha opened this issue Dec 30, 2024 · 2 comments
Labels
  • customer-reported: Issues that are reported by GitHub users external to the Azure organization.
  • Evaluation: Issues related to the client library for Azure AI Evaluation.
  • needs-team-attention: Workflow: This issue needs attention from Azure service team or SDK team.
  • question: The issue doesn't require a change to the product in order to be resolved. Most issues start as that.
  • Service Attention: Workflow: This issue is responsible by Azure service team.

Comments

@duongthaiha

  • Package Name: azure-ai-evaluation
  • Package Version: 1.1.0
  • Operating System: Any
  • Python Version: 3.11

Describe the bug
When the AI Foundry hub is created with identity-based storage access, calling an evaluation function from the SDK (e.g. with GroundednessEvaluator) fails with the error EvaluationException: (InternalError) The workspace datastore secrets request failed with HTTP 400.

To Reproduce
Steps to reproduce the behavior:

  1. Create an AI Foundry hub and AI Project with identity-based storage access and SharedKeyAccess disabled (a verification sketch is shown after this list).
  2. Execute the steps in the example [notebook](https://github.com/Azure-Samples/azureai-samples/blob/5269dfb7843d50186217d64a9f593e41e70d1061/scenarios/evaluate/Simulators/Simulate_Evaluate_Groundedness/Simulate_Evaluate_Groundedness.ipynb).
  3. The last step fails with the error stated above.
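
A hedged way to verify the precondition in step 1 (assuming azure-mgmt-storage is installed; the resource names below are placeholders, not values from the original report):

# Sketch: confirm that the hub's storage account has shared key access disabled.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"            # placeholder
resource_group = "<resource-group-name>"         # placeholder
storage_account = "<hub-storage-account-name>"   # placeholder

storage_client = StorageManagementClient(DefaultAzureCredential(), subscription_id)
account = storage_client.storage_accounts.get_properties(resource_group, storage_account)

# allow_shared_key_access should be False for an identity-based setup.
print(f"Shared key access enabled: {account.allow_shared_key_access}")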

Expected behavior
When the AI Foundry hub and project are created using identity-based access, evaluation should work the same as with credential (account key) based access.

Screenshots
(Screenshots of the error are attached to the original issue.)

Additional context

The error occurs in _eval_run.py at line 424:
self._management_client.workspace_get_default_datastore(self._workspace_name, True)
This call throws an error when it tries to fetch the secrets for the default datastore, because no key-based secret is available on an identity-based workspace. A diagnostic sketch reproducing the underlying request is shown below.
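
The same underlying request can be reproduced with the public azure-ai-ml client. This is a diagnostic sketch under the assumption that the internal workspace_get_default_datastore(..., True) call corresponds to MLClient.datastores.get_default(include_secrets=True); resource names are placeholders:

# Sketch: reproduce the datastore secrets request directly against the project.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",          # placeholder
    resource_group_name="<resource-group-name>",  # placeholder
    workspace_name="<ai-project-name>",           # placeholder
)

# On a workspace with shared key access disabled, this request is expected to
# fail with HTTP 400, matching the EvaluationException above.
datastore = ml_client.datastores.get_default(include_secrets=True)
print(datastore.name, type(datastore.credentials))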

Here is the sample code that was used:

import logging
import os
import sys
import uuid
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import GroundednessEvaluator, evaluate

# Set up logging
logger = logging.getLogger('azure')
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logging_enable=True
print(
    f"Logger enabled for ERROR={logger.isEnabledFor(logging.ERROR)}, "
    f"WARNING={logger.isEnabledFor(logging.WARNING)}, "
    f"INFO={logger.isEnabledFor(logging.INFO)}, "
    f"DEBUG={logger.isEnabledFor(logging.DEBUG)}"
)

# Ensure Azure credentials are set up
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")
print(f"Token acquired: {token.token[:10]}...")

# project_scope, model_config and output_file are defined in earlier notebook cells
# Ensure project scope and model config are correctly set
assert "subscription_id" in project_scope, "Subscription ID is missing in project_scope"
assert "resource_group_name" in project_scope, "Resource group name is missing in project_scope"
assert "project_name" in project_scope, "Project name is missing in project_scope"
assert "azure_endpoint" in model_config, "Azure endpoint is missing in model_config"
assert "azure_deployment" in model_config, "Azure deployment is missing in model_config"

# Initialize the evaluator
groundedness_evaluator = GroundednessEvaluator(model_config=model_config)

# Run the evaluation
eval_output = evaluate(
    data=output_file,
    evaluators={
        "groundedness": groundedness_evaluator,
    },
    azure_ai_project=project_scope,
)
print(eval_output)
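
As a possible workaround (my assumption, not confirmed by the SDK team): the datastore secrets request appears to be made only while uploading results to the AI project, so omitting azure_ai_project keeps the run local and avoids it:

# Sketch: run the same evaluation locally, without logging to the AI project.
# output_path is a hypothetical local file name for the results.
local_eval_output = evaluate(
    data=output_file,
    evaluators={
        "groundedness": groundedness_evaluator,
    },
    output_path="groundedness_eval_results.json",
)
print(local_eval_output)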
github-actions bot added the customer-reported, needs-triage, and question labels on Dec 30, 2024
@duongthaiha
Author

@ralph-msft Hi, I think I saw your commit that fixes this bug. Do you know when it will be released, or how I can get the fix in the meantime?
Thank you very much in advance.

xiangyan99 added the Service Attention and Evaluation labels and removed the needs-triage label on Dec 30, 2024
github-actions bot added the needs-team-attention label on Dec 30, 2024

Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @luigiw @needuv @singankit.
