[FEATURE] Add mean to metrics API #10961
base: develop
Conversation
Codecov report: all modified and coverable lines are covered by tests ✅. All tests successful; no failed tests found.

    @@ Coverage Diff @@
    ##           develop  #10961    +/-  ##
    ===========================================
    - Coverage    80.84%  78.45%  -2.39%
    ===========================================
      Files          471     472      +1
      Lines        40790   40797      +7
    ===========================================
    - Hits         32976   32008    -968
    - Misses        7814    8789    +975
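As a sanity check, the percentages in the Codecov table appear to be simply `hits / lines`, truncated (not rounded) to two decimal places; a small sketch reproduces the reported figures:

```python
# Reproduce the Codecov percentages from the Hits/Lines counts above.
# Codecov appears to truncate to two decimals rather than round.

def coverage_pct(hits: int, lines: int) -> float:
    """Coverage percentage, truncated to two decimal places."""
    return int(hits / lines * 10_000) / 100

base = coverage_pct(32_976, 40_790)  # develop: 80.84
head = coverage_pct(32_008, 40_797)  # this PR: 78.45
delta = round(head - base, 2)        # -2.39, matching the diff row
print(base, head, delta)
```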
    batch_setup = request.getfixturevalue(setup_datasource)
    with batch_setup.batch_test_context() as batch:
        metric = ColumnValuesMean(batch_id=batch.id, column="number")
I will file a ticket to remove the batch_id argument when instantiating a metric.
    @pytest.fixture
    def setup_spark(dataframe: pandas.DataFrame, tmp_path: Path) -> SparkFilesystemCsvBatchTestSetup:
        return SparkFilesystemCsvBatchTestSetup(
            config=SparkFilesystemCsvDatasourceTestConfig(),
            data=dataframe,
            base_dir=tmp_path,
        )
You may need to pass `column_types` to `SparkFilesystemCsvDatasourceTestConfig` to get the Spark error test passing. Maybe it is coercing types somehow; I'm not sure why it doesn't fail.
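One possible reason the error test passes anyway: a CSV round-trip loses type information, so a permissive reader can coerce numeric-looking strings back to floats instead of surfacing a type error. A stdlib sketch of that effect (illustrative only, not the Spark reader's actual behavior):

```python
import csv
import io

# Write a value to CSV and read it back with no declared column types.
# Everything comes back as a string, and permissive coercion to float
# succeeds, which can mask an error an explicit schema would raise.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["number"])
writer.writerow(["1.5"])  # stored as text, indistinguishable from a float
buf.seek(0)

raw = next(csv.DictReader(buf))["number"]  # always a str after the round-trip
coerced = float(raw)                       # coercion silently succeeds
print(type(raw).__name__, coerced)         # str 1.5
```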
It looks to be a bug: we use a different metric name when the mean fails on Spark. See the inline comment on the failure test below.
    @pytest.fixture
    def setup_snowflake(dataframe: pandas.DataFrame) -> SnowflakeBatchTestSetup:
I've added Snowflake tests since it is already supported in expectai.
Run `invoke lint` (uses `ruff format` + `ruff check`). For more information about contributing, visit our community resources.
After you submit your PR, keep the page open and monitor the statuses of the various checks made by our continuous integration process at the bottom of the page. Please fix any issues that come up and reach out on Slack if you need help. Thanks for contributing!