[DBMON-6334] Add ClickHouse unit tests to track all config default value changes #23057
Merged: sangeetashivaji merged 3 commits into `master` from `sangeeta.shivajirao/test_config_defaults` on Apr 1, 2026
`test_config_defaults.py` (new file, +189 lines):

```python
# (C) Datadog, Inc. 2026-present
# All rights reserved
# Licensed under a 3-clause BSD style license (see LICENSE)

"""
The goal of these tests is to verify that our default configuration values are correct and remain consistent.
Failing tests indicate a regression in our defaults and should be inspected carefully.
Default values are duplicated within this file by design so that the tests fail if a value changes unexpectedly.
"""

import pytest

from datadog_checks.clickhouse import ClickhouseCheck
from datadog_checks.clickhouse.config_models.instance import InstanceConfig

EXPECTED_DEFAULTS = {
    # === Required fields (no defaults) ===
    'server': None,
    # === Connection configuration ===
    'port': 8123,
    'db': 'default',
    'username': 'default',
    'password': '',
    'connect_timeout': 10,
    'read_timeout': 10,
    'compression': None,
    'tls_verify': False,
    'tls_ca_cert': None,
    'verify': True,
    # === Database identifier ===
    'database_identifier': {
        'template': '$server:$port:$db',
    },
    # === DBM toggle ===
    'dbm': False,
    'single_endpoint_mode': False,
    # === DBM: Query metrics ===
    'query_metrics': {
        'enabled': True,
        'collection_interval': 10,
        'run_sync': False,
        'full_statement_text_cache_max_size': 10000,
        'full_statement_text_samples_per_hour_per_query': 1,
    },
    # === DBM: Query samples ===
    'query_samples': {
        'enabled': True,
        'collection_interval': 1,
        'payload_row_limit': 1000,
        'run_sync': False,
    },
    # === DBM: Query completions ===
    'query_completions': {
        'enabled': True,
        'collection_interval': 10,
        'samples_per_hour_per_query': 15,
        'seen_samples_cache_maxsize': 10000,
        'max_samples_per_collection': 1000,
        'run_sync': False,
    },
    # === Tagging ===
    'tags': (),
    'disable_generic_tags': False,
    'enable_legacy_tags_normalization': True,
    # === Custom queries ===
    'custom_queries': None,
    'only_custom_queries': False,
    'use_global_custom_queries': 'true',
    # === Agent standard fields ===
    'min_collection_interval': 15,
    'empty_default_hostname': False,
    # === User-provided / no default ===
    'service': None,
    'metric_patterns': None,
}

pytestmark = pytest.mark.unit


@pytest.fixture
def minimal_instance():
    """Minimal instance configuration with only the required field."""
    return {'server': 'localhost'}


def test_all_config_defaults(minimal_instance):
    """
    Verify that all InstanceConfig fields have the expected default values when only
    the required 'server' field is provided.

    If a field is missing from EXPECTED_DEFAULTS, the test fails with instructions on how to fix it.
    Failures on existing fields indicate a potential config regression for existing users.
    """
    check = ClickhouseCheck('clickhouse', {}, [minimal_instance])
    config = check._config

    all_fields = set(InstanceConfig.__annotations__.keys())

    missing_from_expected = all_fields - set(EXPECTED_DEFAULTS.keys())
    if missing_from_expected:
        pytest.fail(
            f"\n\n{'=' * 80}\n"
            f"MISSING EXPECTED DEFAULTS!\n"
            f"{'=' * 80}\n\n"
            f"The following fields exist in InstanceConfig but are missing from EXPECTED_DEFAULTS:\n"
            f"  {sorted(missing_from_expected)}\n\n"
            f"To fix this, add each field to EXPECTED_DEFAULTS in test_config_defaults.py\n"
            f"with its expected default value. Examples:\n"
            f"  - Simple field: 'field_name': default_value,\n"
            f"  - No default: 'field_name': None,\n"
            f"  - Nested object: 'field_name': {{'key': 'value'}},\n"
            f"{'=' * 80}\n"
        )

    extra_in_expected = set(EXPECTED_DEFAULTS.keys()) - all_fields
    if extra_in_expected:
        pytest.fail(
            f"\n\nThe following fields are in EXPECTED_DEFAULTS but don't exist in InstanceConfig:\n"
            f"  {sorted(extra_in_expected)}\n"
            f"Remove them from EXPECTED_DEFAULTS in test_config_defaults.py."
        )

    failures = []

    for field_name, expected_value in EXPECTED_DEFAULTS.items():
        if field_name == 'server':
            continue

        actual_value = getattr(config, field_name)

        if isinstance(expected_value, dict):
            _compare_nested_object(actual_value, expected_value, field_name, failures)
        elif isinstance(expected_value, (list, tuple)):
            if isinstance(actual_value, (list, tuple)):
                if type(expected_value)(actual_value) != expected_value:
                    failures.append(f"{field_name}: expected {expected_value!r}, got {actual_value!r}")
            elif actual_value is None:
                failures.append(f"{field_name}: expected {expected_value!r}, got None")
            else:
                failures.append(f"{field_name}: expected {expected_value!r}, got {actual_value!r}")
        else:
            if actual_value != expected_value:
                failures.append(f"{field_name}: expected {expected_value!r}, got {actual_value!r}")

    if failures:
        pytest.fail(
            f"\n\n{'=' * 80}\n"
            f"DEFAULT VALUE MISMATCHES DETECTED!\n"
            f"{'=' * 80}\n\n"
            f"The following fields have unexpected default values:\n\n"
            + "\n".join(f"  • {f}" for f in failures)
            + f"\n\n"
            f"This indicates either:\n"
            f"  1. A regression in default values (BAD — investigate carefully!)\n"
            f"  2. EXPECTED_DEFAULTS needs updating to reflect intentional new behaviour\n"
            f"{'=' * 80}\n"
        )


def _compare_nested_object(actual_obj, expected_dict, field_path, failures):
    """Recursively compare a nested config object against an expected dict."""
    actual_fields = set(type(actual_obj).model_fields.keys())
    expected_fields = set(expected_dict.keys())
    missing = actual_fields - expected_fields
    if missing:
        failures.append(
            f"{field_path}: nested fields exist in model but are missing from EXPECTED_DEFAULTS: {sorted(missing)}"
        )

    for key, expected_value in expected_dict.items():
        try:
            actual_value = getattr(actual_obj, key)
        except AttributeError:
            failures.append(f"{field_path}.{key}: field not accessible on config object")
            continue

        if isinstance(expected_value, dict):
            _compare_nested_object(actual_value, expected_value, f"{field_path}.{key}", failures)
        elif isinstance(expected_value, (list, tuple)):
            if isinstance(actual_value, (list, tuple)):
                if type(expected_value)(actual_value) != expected_value:
                    failures.append(f"{field_path}.{key}: expected {expected_value!r}, got {actual_value!r}")
            elif actual_value is None:
                failures.append(f"{field_path}.{key}: expected {expected_value!r}, got None")
            else:
                failures.append(f"{field_path}.{key}: expected {expected_value!r}, got {actual_value!r}")
        else:
            if actual_value != expected_value:
                failures.append(f"{field_path}.{key}: expected {expected_value!r}, got {actual_value!r}")
```
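The core pattern here — duplicating defaults into a snapshot dict and failing when the model drifts in either direction — can be illustrated with a minimal, self-contained sketch. This is not the Datadog code: it uses a plain dataclass in place of the generated pydantic `InstanceConfig`, and all names are hypothetical.

```python
from dataclasses import dataclass, fields


# Hypothetical stand-in for a generated config model.
@dataclass
class FakeInstanceConfig:
    port: int = 8123
    username: str = 'default'
    dbm: bool = False


# Snapshot of expected defaults, duplicated on purpose so that any
# change to the model forces an explicit update here.
EXPECTED = {'port': 8123, 'username': 'default', 'dbm': False}


def drifted_fields(config_cls, expected):
    """Return 'name: ...' mismatch strings for any default that drifted."""
    instance = config_cls()
    failures = []
    # Fields present on the model but absent from the snapshot.
    model_fields = {f.name for f in fields(config_cls)}
    for name in sorted(model_fields - expected.keys()):
        failures.append(f"{name}: missing from EXPECTED")
    # Snapshotted values that no longer match the model's defaults.
    for name, want in expected.items():
        got = getattr(instance, name, None)
        if got != want:
            failures.append(f"{name}: expected {want!r}, got {got!r}")
    return failures


# All defaults match the snapshot, so no drift is reported.
assert drifted_fields(FakeInstanceConfig, EXPECTED) == []
```

Changing any default on `FakeInstanceConfig`, or adding a field without updating `EXPECTED`, makes `drifted_fields` return a non-empty list — the same failure mode the PR's test turns into a `pytest.fail` message.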
The nested comparison only loops over
expected_dict.items()and never checks for extra attributes present on the actual nested model, so new fields added under objects likequery_metricsorquery_samplescan slip in without failing this test. In that scenario,missing_from_expectedstill passes (top-level field unchanged) and_compare_nested_objectalso passes, which breaks the stated guarantee that every default change is explicitly tracked.Useful? React with 👍 / 👎.
Addressed: added a nested fields check as well.
Example: added the check for nested fields in query metrics.
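The drift the reviewer describes — a new nested field whose snapshotted siblings all still match — can be sketched in isolation. Again a plain dataclass stands in for the pydantic nested model, and every name here is hypothetical.

```python
from dataclasses import dataclass, fields


@dataclass
class FakeQueryMetrics:
    """Hypothetical stand-in for a nested config model like query_metrics."""
    enabled: bool = True
    collection_interval: int = 10
    new_option: int = 5  # a freshly added field the snapshot doesn't know about


def nested_drift(obj, expected, path, failures):
    """Flag nested model fields that are absent from the expected snapshot."""
    model_fields = {f.name for f in fields(obj)}
    missing = model_fields - expected.keys()
    if missing:
        failures.append(f"{path}: missing from EXPECTED_DEFAULTS: {sorted(missing)}")
    # Value comparison alone would pass here, since every snapshotted
    # field still matches its default.
    for key, want in expected.items():
        got = getattr(obj, key)
        if got != want:
            failures.append(f"{path}.{key}: expected {want!r}, got {got!r}")


failures = []
nested_drift(FakeQueryMetrics(), {'enabled': True, 'collection_interval': 10}, 'query_metrics', failures)
# 'new_option' is caught even though every snapshotted value still matches.
```

Without the `missing` check, `failures` would stay empty and the new field would slip through — exactly the gap the review comment points out.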