[dynamo exporter] Support string in dynamic_shapes #1631 (Closed)

Commit d58a008 ("address reviews")
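The PR title says the dynamo exporter should accept dimension names given as plain strings in dynamic_shapes. The change itself is not shown on this page, so the sketch below only illustrates the general idea at the torch.export level; the model, the input name, and the string form noted in the final comment are assumptions for illustration, not code from this PR.

```python
import torch
from torch.export import Dim, export


class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x + 1


model = TinyModel()
example_inputs = (torch.randn(2, 8),)

# The verbose way to declare a dynamic batch dimension: a torch.export.Dim object.
batch = Dim("batch")
exported = export(model, example_inputs, dynamic_shapes={"x": {0: batch}})
print(exported)

# What the PR title suggests (assumption): the dynamo exporter path would also
# accept a plain string as the dimension name and build the Dim internally, e.g.
# dynamic_shapes={"x": {0: "batch"}}
```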
Azure Pipelines / Olive CI succeeded Feb 21, 2025 in 19m 37s

Build #20250221.5 had test failures

Details

Tests

  • Failed: 2 (0.08%)
  • Passed: 2,477 (94.76%)
  • Other: 135 (5.16%)
  • Total: 2,614
Code coverage

  • 8,556 of 37,578 lines covered (22.77%)

Annotations

Check failure on line 1 in test_evaluate_model[HfModel-get_huggingface_model-get_hf_accuracy_metric-0.1]


azure-pipelines / Olive CI

test_evaluate_model[HfModel-get_huggingface_model-get_hf_accuracy_metric-0.1]

huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/nyu-mll/glue/paths-info/bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c
Raw output
self = <test.integ_test.evaluator.local_eval.test_local_evaluation.TestLocalEvaluation object at 0x7f9db6f98df0>
type = 'HfModel'
model_config_func = <function get_huggingface_model at 0x7f9db6f7bf40>
metric_func = <function get_hf_accuracy_metric at 0x7f9db6f7bd90>
expected_res = 0.1

    @pytest.mark.parametrize(
        ("type", "model_config_func", "metric_func", "expected_res"),
        EVALUATION_TEST_CASE,
    )
    def test_evaluate_model(self, type, model_config_func, metric_func, expected_res):  # noqa: A002
        model_config = model_config_func()
        metric = metric_func()
        model_config = ModelConfig.parse_obj({"type": type, "config": model_config})
        evaluator_config = OliveEvaluatorConfig(metrics=[metric])
>       actual_res = LocalSystem().evaluate_model(model_config, evaluator_config, DEFAULT_CPU_ACCELERATOR)

test/integ_test/evaluator/local_eval/test_local_evaluation.py:63: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
olive/systems/local.py:44: in evaluate_model
    return evaluator.evaluate(
olive/evaluator/olive_evaluator.py:315: in evaluate
    dataloader, eval_func, post_func = OliveEvaluator.get_user_config(model.framework, metric)
olive/evaluator/olive_evaluator.py:136: in get_user_config
    dataloader = dc.create_dataloader()
olive/data/container/data_container.py:45: in create_dataloader
    dataset = self.load_dataset()
olive/data/container/data_container.py:26: in load_dataset
    return self.config.load_dataset(**self.config.load_dataset_params)
test/integ_test/evaluator/local_eval/user_script.py:46: in tiny_bert_dataset_for_local_eval
    dataset = load_dataset("glue", "mrpc", split="validation")
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:2556: in load_dataset
    builder_instance = load_dataset_builder(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:2228: in load_dataset_builder
    dataset_module = dataset_module_factory(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:1879: in dataset_module_factory
    raise e1 from None
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:1861: in dataset_module_factory
    ).get_module()
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:1239: in get_module
    data_files = DataFilesDict.from_patterns(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/data_files.py:700: in from_patterns
    DataFilesList.from_patterns(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/data_files.py:605: in from_patterns
    resolve_pattern(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/data_files.py:354: in resolve_pattern
    fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/fsspec/core.py:640: in get_fs_token_paths
    paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)]
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py:521: in glob
    return super().glob(path, **kwargs)
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/fsspec/spec.py:602: in glob
    allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py:556: in find
    return super().find(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/fsspec/spec.py:493: in find
    out[path] = self.info(path)
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py:719: in info
    paths_info = self._api.get_paths_info(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114: in _inner_fn
   

Check failure on line 1 in test_evaluate_model[HfModel-get_huggingface_model-get_hf_latency_metric-0.001]


azure-pipelines / Olive CI

test_evaluate_model[HfModel-get_huggingface_model-get_hf_latency_metric-0.001]

huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/glue
Raw output
self = <test.integ_test.evaluator.local_eval.test_local_evaluation.TestLocalEvaluation object at 0x7f9db6f98d30>
type = 'HfModel'
model_config_func = <function get_huggingface_model at 0x7f9db6f7bf40>
metric_func = <function get_hf_latency_metric at 0x7f9db6f7be20>
expected_res = 0.001

    @pytest.mark.parametrize(
        ("type", "model_config_func", "metric_func", "expected_res"),
        EVALUATION_TEST_CASE,
    )
    def test_evaluate_model(self, type, model_config_func, metric_func, expected_res):  # noqa: A002
        model_config = model_config_func()
        metric = metric_func()
        model_config = ModelConfig.parse_obj({"type": type, "config": model_config})
        evaluator_config = OliveEvaluatorConfig(metrics=[metric])
>       actual_res = LocalSystem().evaluate_model(model_config, evaluator_config, DEFAULT_CPU_ACCELERATOR)

test/integ_test/evaluator/local_eval/test_local_evaluation.py:63: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
olive/systems/local.py:44: in evaluate_model
    return evaluator.evaluate(
olive/evaluator/olive_evaluator.py:315: in evaluate
    dataloader, eval_func, post_func = OliveEvaluator.get_user_config(model.framework, metric)
olive/evaluator/olive_evaluator.py:136: in get_user_config
    dataloader = dc.create_dataloader()
olive/data/container/data_container.py:45: in create_dataloader
    dataset = self.load_dataset()
olive/data/container/data_container.py:26: in load_dataset
    return self.config.load_dataset(**self.config.load_dataset_params)
test/integ_test/evaluator/local_eval/user_script.py:46: in tiny_bert_dataset_for_local_eval
    dataset = load_dataset("glue", "mrpc", split="validation")
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:2556: in load_dataset
    builder_instance = load_dataset_builder(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:2228: in load_dataset_builder
    dataset_module = dataset_module_factory(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:1879: in dataset_module_factory
    raise e1 from None
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:1861: in dataset_module_factory
    ).get_module()
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/datasets/load.py:1191: in get_module
    hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info(
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114: in _inner_fn
    return fn(*args, **kwargs)
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2592: in dataset_info
    hf_raise_for_status(r)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

response = <Response [504]>, endpoint_name = None

    def hf_raise_for_status(response: Response, endpoint_name: Optional[str] = None) -> None:
        """
        Internal version of `response.raise_for_status()` that will refine a
        potential HTTPError. Raised exception will be an instance of `HfHubHTTPError`.
    
        This helper is meant to be the unique method to raise_for_status when making a call
        to the Hugging Face Hub.
    
    
        Example:
        ```py
            import requests
            from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError
    
            response = get_session().post(...)
            try:
                hf_raise_for_status(response)
            except HfHubHTTPError as e:
                print(str(e)) # formatted message
                e.request_id, e.server_message # details returned by server
    
                # Complete the error message with additional information once it's raised
                e.append_to_message("\n`create_commit` expects the repository to exist.")
                raise
        ```
    
        Args:
            r
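
Judging from the annotation titles and the URLs in the error messages, both failures are the same transient infrastructure issue: a 504 Gateway Time-out from huggingface.co while the integration test loads the glue/mrpc validation split, not something introduced by the dynamic_shapes change. Below is a minimal sketch of how such a Hub-backed load could be retried to tolerate transient errors; the helper name, retry count, and backoff values are illustrative assumptions, not Olive code.

```python
import time

from datasets import load_dataset
from huggingface_hub.errors import HfHubHTTPError


def load_dataset_with_retry(*args, retries=3, backoff_s=5.0, **kwargs):
    """Retry transient Hugging Face Hub errors (e.g. 504 Gateway Time-out)."""
    for attempt in range(retries):
        try:
            return load_dataset(*args, **kwargs)
        except HfHubHTTPError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(backoff_s * (attempt + 1))  # linear backoff between retries


# Hypothetical usage mirroring the failing test's call:
# dataset = load_dataset_with_retry("glue", "mrpc", split="validation")
```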