Update fix/analytics data export cleanup #3831
Conversation
Updates from airqo staging
📝 Walkthrough

The changes in this pull request involve updates to two Kubernetes configuration files, `k8s/analytics/values-prod.yaml` and `k8s/analytics/values-stage.yaml`.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
k8s/analytics/values-prod.yaml (1)
Line range hint `54-57`: **Review autoscaling configuration with new resource limits.**

With the increased resource limits, the current autoscaling configuration might need adjustment:
- Current settings: 1-3 replicas with 70% memory target
- Consider adjusting these parameters based on:
  - Expected load patterns
  - New resource profile
  - Cost optimization goals
Recommendation: Monitor the scaling behavior after deployment and adjust these values if needed based on actual usage patterns.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- `k8s/analytics/values-prod.yaml` (2 hunks)
- `k8s/analytics/values-stage.yaml` (2 hunks)
🔇 Additional comments (4)
k8s/analytics/values-prod.yaml (2)
`2-3`: **LGTM: Name override configurations are properly formatted.**

The empty string values for `nameOverride` and `fullnameOverride` are correctly formatted, maintaining Kubernetes naming conventions.
`20-21`: **Verify resource allocation increase justification.**
The significant increase in resource limits (CPU: 10x, Memory: ~3.3x) represents a substantial change in resource allocation:
- CPU: 100m → 1000m (1 full core)
- Memory: 600Mi → 2000Mi
While this aligns with the PR objective to enhance analytics API performance, please ensure:
- These increases are justified by performance metrics or monitoring data
- The cluster has sufficient capacity to handle these increased limits
- Cost implications have been considered
Consider implementing horizontal pod autoscaling (HPA) based on custom metrics if not already in place. The current autoscaling configuration (lines 54-57) might need adjustment to better handle the new resource profile.
k8s/analytics/values-stage.yaml (2)
`2-3`: **LGTM: Explicit empty string declarations.**

The explicit empty string declarations for `nameOverride` and `fullnameOverride` follow YAML best practices.
`20-21`: **Consider adjusting resource requests to maintain a balanced limits-to-requests ratio.**
While increasing resource limits aligns with the PR objective, the current configuration shows a significant disparity between limits and requests:
- CPU ratio: 50:1 (500m limit : 10m request)
- Memory ratio: 4:1 (1000Mi limit : 250Mi request)
Such high ratios might lead to:
- Inefficient resource allocation in the cluster
- Potential scheduling issues
- Less effective HPA decisions based on resource utilization
Consider adjusting the requests to maintain a more balanced ratio (typically 2:1 to 3:1):
```diff
 resources:
   limits:
     cpu: 500m
     memory: 1000Mi
   requests:
-    cpu: 10m
-    memory: 250Mi
+    cpu: 200m
+    memory: 400Mi
```
Let's verify if similar resource configurations exist in other deployments:
Hello @NicholasTurner23
Regarding this 'increasing resources' PR, I am awaiting detailed analysis from @Psalmz777 and @BenjaminSsempala.
Given our current workload and potential infrastructure implications, I am not prepared to approve/merge this PR at present.
Thanks!
Updates from airqo staging
Codecov Report

All modified and coverable lines are covered by tests ✅
Additional details and impacted files

```
@@            Coverage Diff             @@
##           staging    #3831      +/-   ##
===========================================
+ Coverage    11.77%   12.28%    +0.50%
===========================================
  Files          114       51       -63
  Lines        15205     4607    -10598
  Branches       274      274
===========================================
- Hits          1791      566     -1225
+ Misses       13414     4041     -9373
```
Actionable comments posted: 7
🧹 Outside diff range and nitpick comments (13)
src/analytics/api/utils/http.py (2)
Line range hint `1-190`: **Consider enhancing error handling and logging**

The error handling could be improved to provide more context and better debugging capabilities.
- Add request context to error logs:
```diff
- logger.exception(f"HTTPError: {ex}")
+ logger.exception(
+     f"HTTPError for {method.upper()} request to {url}: {ex}",
+     extra={
+         "endpoint": endpoint,
+         "method": method,
+         "params": params,
+     }
+ )
```
- Add specific error handling for common scenarios:
```python
except urllib3.exceptions.TimeoutError as ex:
    logger.exception("Request timed out")
    return self.create_response("Request timed out", success=False)
except urllib3.exceptions.MaxRetryError as ex:
    logger.exception("Max retries exceeded")
    return self.create_response("Service temporarily unavailable", success=False)
```
- Consider implementing circuit breaker pattern for better resilience:
```python
from functools import wraps
from datetime import datetime, timedelta

def circuit_breaker(failure_threshold=5, reset_timeout=300):
    def decorator(func):
        failures = 0
        last_failure = None

        @wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal failures, last_failure
            if failures >= failure_threshold:
                if datetime.now() - last_failure < timedelta(seconds=reset_timeout):
                    return {
                        "status": "error",
                        "message": "Service temporarily disabled due to multiple failures",
                    }
                failures = 0
            try:
                result = func(*args, **kwargs)
                failures = 0
                return result
            except Exception as e:
                failures += 1
                last_failure = datetime.now()
                raise
        return wrapper
    return decorator
```
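A hedged usage sketch: the decorator applied to the client's `request` method (the signature mirrors the hunk quoted later in this review; the wiring itself is illustrative):

```python
# Illustrative only: guarding the HTTP entry point with the breaker above.
@circuit_breaker(failure_threshold=5, reset_timeout=300)
def request(self, endpoint, params=None, body=None, method="get", base_url=None):
    ...  # existing request logic
```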
Line range hint `77-165`: **Consider optimizing retry strategy**

The current retry strategy could be enhanced to be more sophisticated and configurable.
Consider implementing a more detailed retry configuration:
```diff
 retry_strategy = Retry(
     total=5,
     backoff_factor=5,
+    status_forcelist=[408, 429, 500, 502, 503, 504],
+    allowed_methods=["GET", "POST", "PUT"],
+    raise_on_status=False,
+    respect_retry_after_header=True
 )
```
Also consider making retry parameters configurable through environment variables:
```python
retry_strategy = Retry(
    total=int(Configuration.HTTP_RETRY_ATTEMPTS or 5),
    backoff_factor=float(Configuration.HTTP_RETRY_BACKOFF or 5),
    status_forcelist=[int(s) for s in Configuration.HTTP_RETRY_STATUS_CODES.split(',')]
    if Configuration.HTTP_RETRY_STATUS_CODES
    else [408, 429, 500, 502, 503, 504],
)
```
src/analytics/api/utils/data_formatters.py (3)
`3-3`: **Remove unused import**

The `BadRequest` exception is imported but never used in the code. Consider removing this import or implementing it in the error handling if it was intended to be used.
```diff
-from werkzeug.exceptions import BadRequest
```
🧰 Tools
🪛 Ruff
3-3: `werkzeug.exceptions.BadRequest` imported but unused. Remove unused import: `werkzeug.exceptions.BadRequest` (F401)
Line range hint `298-328`: **Update docstring and improve error handling in filter_non_private_sites**

The function's documentation and error handling need improvements:
- The docstring doesn't document the new `filter_type` parameter
- The error response could be more informative

Consider updating the docstring and error handling:
```diff
 def filter_non_private_sites(filter_type: str, sites: List[str]) -> Dict[str, Any]:
     """
     Filters out private site IDs from a provided array of site IDs.

     Args:
+        filter_type(str): The type of filter to apply (e.g., 'site_ids', 'site_names')
         sites(List[str]): List of site ids to filter against.

     Returns:
         a response dictionary object that contains a list of non-private site ids if any.
     """
```
Line range hint `331-360`: **Fix typo in docstring and improve parameter documentation**

There's a capitalization error in the docstring ("FilterS"), and the parameter documentation needs updating:
- The docstring has a typo in "FilterS"
- The `entities` parameter is documented but the actual parameter is `devices`
- The new `filter_type` parameter is not documented

Consider updating the docstring:
```diff
 def filter_non_private_devices(filter_type: str, devices: List[str]) -> Dict[str, Any]:
     """
-    FilterS out private device IDs from a provided array of device IDs.
+    Filters out private device IDs from a provided array of device IDs.

     Args:
-        entities(List[str]): List of device/site ids to filter against.
+        filter_type(str): The type of filter to apply (e.g., 'device_ids', 'device_names')
+        devices(List[str]): List of device IDs to filter against.

     Returns:
         a response dictionary object that contains a list of non-private device ids if any.
     """
```
src/analytics/api/views/data.py (3)
`68-72`: **Enhancement: Addition of new filter parameters for flexible querying**

The inclusion of `site_ids`, `site_names`, `device_ids`, and `device_names` as optional filters allows users to filter data more precisely, improving the API's usability. Please ensure that the API documentation is updated to reflect these new parameters.
`235-235`: **Update error message to include all valid filter options**

The error message in the `ValueError` exception does not list all the valid filter options (`site_ids`, `site_names`, `device_ids`, `device_names`). To improve clarity for API users, consider updating the error message to include all valid options.

Apply this diff to update the error message:
```diff
 raise ValueError(
-    "Specify exactly one of 'airqlouds', 'sites', 'device_names', or 'devices' in the request body."
+    "Specify exactly one of 'airqlouds', 'sites', 'site_names', 'site_ids', 'devices', 'device_names', or 'device_ids' in the request body."
 )
```
`248-249`: **Offer assistance in cleaning up the code**

There's a `TODO` comment indicating that this section needs cleanup. I'd be happy to help refactor this part to improve readability and maintainability.

Would you like me to suggest some refactoring changes or open a GitHub issue to track this task?
src/analytics/api/models/events.py (5)
`69-77`: **Clarify the Purpose of `device_info_query_airqloud`.**

The docstring for `device_info_query_airqloud` could provide more context on why it excludes `site_id` compared to `device_info_query`. This will help developers understand when to use each property.
`160-237`: **Refactor `build_query` Method for Maintainability.**

The `build_query` method handles multiple cases based on `filter_type`, making it lengthy and complex. Refactoring it into smaller helper methods for each filter type can improve readability and ease future modifications, as in the sketch below.
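A minimal sketch of that decomposition using a dispatch table; the helper names and WHERE-fragment details below are hypothetical illustrations, not code from the PR:

```python
# Sketch only: _sites_filter, _devices_filter, and build_filter are hypothetical.
from typing import Callable, Dict, List, Tuple

def _sites_filter(values: List[str]) -> Tuple[str, List[str]]:
    # Return a WHERE fragment plus the values to bind as parameters.
    placeholders = ", ".join(f"@site_{i}" for i in range(len(values)))
    return f"site_id IN ({placeholders})", values

def _devices_filter(values: List[str]) -> Tuple[str, List[str]]:
    placeholders = ", ".join(f"@device_{i}" for i in range(len(values)))
    return f"device_name IN ({placeholders})", values

# One small builder per filter_type replaces a long if/elif chain in build_query.
FILTER_BUILDERS: Dict[str, Callable[[List[str]], Tuple[str, List[str]]]] = {
    "sites": _sites_filter,
    "site_ids": _sites_filter,
    "devices": _devices_filter,
    "device_names": _devices_filter,
}

def build_filter(filter_type: str, values: List[str]) -> Tuple[str, List[str]]:
    """Dispatch to the helper registered for filter_type."""
    try:
        return FILTER_BUILDERS[filter_type](values)
    except KeyError:
        raise ValueError(f"Unsupported filter_type: {filter_type}")
```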
`248-266`: **Update Method Signature and Docstring with `weather_fields`.**

The `download_from_bigquery` method utilizes `weather_fields`, but this parameter is not included in the method signature or documented in the docstring. Adding it will enhance clarity.

Apply this change to the method signature:
```diff
 def download_from_bigquery(
     cls,
     filter_type,
     filter_value,
     start_date,
     end_date,
     frequency,
     pollutants,
     data_type,
+    weather_fields,
 ) -> pd.DataFrame:
```
And update the docstring to include `weather_fields` in the Args section.
`286-290`: **Simplify Assignment with Ternary Operator.**

Replace the `if`-`else` block with a ternary operator for conciseness and improved readability.

Apply this change:
```python
key = pollutant if pollutant == "raw" else f"{pollutant}_{data_type}"
```
🧰 Tools
🪛 Ruff
286-289: Use ternary operator `key = pollutant if pollutant == "raw" else f"{pollutant}_{data_type}"` instead of `if`-`else` block. Replace `if`-`else` block with `key = pollutant if pollutant == "raw" else f"{pollutant}_{data_type}"` (SIM108)
Line range hint `406-408`: **Consistent Rounding of Values in DataFrames.**

When rounding values in DataFrames, ensure the method used is consistent across the codebase to prevent discrepancies in data precision.
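One way to enforce that consistency is a single shared helper that every export path calls; a sketch assuming pandas (the constant and helper name are illustrative, not from the codebase):

```python
import pandas as pd

# Illustrative precision constant; the project's actual setting may differ.
DECIMAL_PLACES = 2

def round_numeric_columns(df: pd.DataFrame, decimals: int = DECIMAL_PLACES) -> pd.DataFrame:
    """Round all numeric columns identically so every export shares one precision."""
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = df[numeric_cols].round(decimals)
    return df

# Example
frame = pd.DataFrame({"pm2_5": [12.3456, 7.8912], "site": ["a", "b"]})
print(round_numeric_columns(frame))  # pm2_5 becomes 12.35 and 7.89
```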
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
- `k8s/analytics/values-prod.yaml` (2 hunks)
- `src/analytics/api/models/events.py` (5 hunks)
- `src/analytics/api/utils/data_formatters.py` (5 hunks)
- `src/analytics/api/utils/http.py` (1 hunks)
- `src/analytics/api/utils/pollutants/pm_25.py` (1 hunks)
- `src/analytics/api/views/data.py` (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- k8s/analytics/values-prod.yaml
🧰 Additional context used
🪛 Ruff
src/analytics/api/models/events.py
286-289: Use ternary operator `key = pollutant if pollutant == "raw" else f"{pollutant}_{data_type}"` instead of `if`-`else` block. Replace `if`-`else` block with `key = pollutant if pollutant == "raw" else f"{pollutant}_{data_type}"` (SIM108)

src/analytics/api/utils/data_formatters.py

3-3: `werkzeug.exceptions.BadRequest` imported but unused. Remove unused import: `werkzeug.exceptions.BadRequest` (F401)
🔇 Additional comments (8)
src/analytics/api/utils/pollutants/pm_25.py (1)
`64-69`: **LGTM! The separation of calibrated and raw values improves clarity.**
The restructuring of the mapper keys provides better organization and makes the distinction between calibrated and raw values more explicit. However, let's verify that all dependent code has been updated to handle this change.
Also applies to: 72-77
src/analytics/api/utils/data_formatters.py (1)
`319-324`: **Verify response structure consistency**
The success response structure has been standardized across both filter functions. Let's verify this matches the API contract and other endpoints.
Also applies to: 351-356
✅ Verification successful
The previous search revealed multiple create_response implementations across different modules. Let's verify the response structure definition and consistency.
Let's examine the response structure implementation and its usage in the relevant files to verify consistency.
Response structure is consistent with API standards
The response structure in the code follows the standardized pattern used throughout the codebase:
- Uses the centralized `create_response` method from the `AirQoRequests` class
- Maintains consistent fields: message, data, and success flag
- Properly handles both success and error cases with appropriate status checks
The implementation in data_formatters.py correctly maps the external response to this standard structure, ensuring API consistency.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for consistent response structure across the codebase
# Search for create_response usage patterns
rg -A 2 'create_response\(' --type py
# Search for similar endpoint response handling
rg -B 2 -A 4 '"status"\s*==\s*"success"' --type py
```
Length of output: 19495
Script:
```bash
#!/bin/bash
# Let's check the implementation of create_response in analytics/api/utils/http.py
ast-grep --pattern 'def create_response($$$)'
# Also check for any response structure definitions or schemas
rg -B 2 -A 4 'class.*Response|response_model' --type py
```
Length of output: 105
Script:
```bash
#!/bin/bash
# Let's check the specific implementation in analytics/api/utils/http.py
rg -A 5 'class AirQoRequests' src/analytics/api/utils/http.py
# And check the actual create_response method
rg -A 10 'def create_response' src/analytics/api/utils/http.py
# Also look for any response usage patterns in data_formatters.py
rg -B 2 -A 4 'airqo_requests\.create_response' src/analytics/api/utils/data_formatters.py
```
Length of output: 1740
src/analytics/api/models/events.py (6)
`18-21`: **Well-documented Class Description.**

The addition of the class-level docstring enhances the clarity and understanding of the `EventsModel` class. This is beneficial for future maintenance and for other developers interacting with the code.
`45-51`: **Clear Constructor Docstring Added.**

The detailed docstring for the `__init__` method improves code readability by explaining the purpose and parameters of the constructor.
`53-56`: **Efficient Assignment of BigQuery Table References.**

Assigning BigQuery table references to instance variables streamlines the code and makes future updates easier.
`59-67`: **Well-Defined `device_info_query` Property.**

The `device_info_query` property neatly encapsulates the SQL fragment needed for device information, promoting reusability and cleanliness in SQL query construction.
`87-91`: **Consistent Naming for `airqloud_info_query` Property.**

Ensure that the property name `airqloud_info_query` is used consistently throughout the codebase. This promotes maintainability and reduces the risk of errors due to naming discrepancies.
`301-305`: **Verify BAM Pollutant Columns Alignment with Database Schema.**

Ensure that the `bam_pollutant_columns` constructed align with the actual column names in the BAM data table. This prevents potential mismatches and query errors.

Run the following script to verify the BAM data table columns:
✅ Verification successful
The previous command failed as `bq` is not available in the sandbox. Let's try to find schema information directly from the codebase using alternative approaches:
- Looking for SQL table definitions
- Searching for BAM-related column definitions in Python files
- Finding BAM data structure initialization
This will help us verify the column naming conventions and structure used in the codebase.
BAM pollutant column construction is consistent with codebase patterns
The code's construction of BAM pollutant columns is consistent with the established patterns in the codebase. The search results show that:
- The column naming follows a consistent pattern across the codebase where:
  - Raw pollutant values use the format `ROUND(pollutant, decimal_places) AS key_value`
  - This aligns with the existing BAM data queries seen in both `src/insights/models/events.py` and `src/analytics/api/models/events.py`
- The pollutants being handled (`pm2_5`, `pm10`, `no2`) are consistently referenced across multiple implementations, indicating these are indeed the expected column names in the BAM data table.
The column construction is safe and follows the established conventions in the codebase.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: List the column names of the BAM data table.
bq show --schema --format=prettyjson ${PROJECT_ID}:${DATASET}.${BIGQUERY_BAM_DATA} | jq '.[].name'
```
Length of output: 127
Script:
```bash
#!/bin/bash
# Search for SQL table definitions or schema references
rg -i "CREATE TABLE.*bam" -g "*.sql" -A 10
# Search for BAM-related column definitions or mappings
rg -i "bam.*columns?" -g "*.py" -A 5
# Search for any BAM data structure definitions
ast-grep --pattern 'bam_pollutant_columns = $_'
```
Length of output: 7810
```diff
@@ -163,7 +163,7 @@ def request(self, endpoint, params=None, body=None, method="get", base_url=None)
                 success=True,
             )
         else:
-            return self.create_response(f"Error: {response.status}", success=False)
+            return self.create_response(f"Error: {response.data}", success=False)
```
🛠️ Refactor suggestion
**Consider sanitizing error response data before returning**

The change to return `response.data` instead of `response.status` provides more detailed error information, which is beneficial for debugging. However, the raw response data might contain sensitive information.
Consider implementing a sanitization step:
```diff
- return self.create_response(f"Error: {response.data}", success=False)
+ sanitized_error = self.sanitize_error_response(response.data)
+ return self.create_response(f"Error: {sanitized_error}", success=False)
```
Add this helper method:
```python
@staticmethod
def sanitize_error_response(error_data):
    """
    Sanitize error response data to remove sensitive information.

    Args:
        error_data: Raw error response data

    Returns:
        dict | str: Sanitized error payload
    """
    try:
        # Parse the error data if it's JSON
        error_dict = simplejson.loads(error_data)
        # Extract only necessary fields
        return {
            'message': error_dict.get('message', 'Unknown error'),
            'code': error_dict.get('code'),
            'type': error_dict.get('type')
        }
    except (simplejson.JSONDecodeError, AttributeError):
        # If not JSON or parsing fails, return a safe string representation
        return str(error_data)[:200]  # Limit length of raw error messages
```
```python
            filter_type, filter_value, error_message = self._get_validated_filter(
                json_data
            )
            if error_message:
                return error_message, AirQoRequests.Status.HTTP_400_BAD_REQUEST
        except Exception as e:
            logger.exception(f"An error has occured; {e}")
```
🛠️ Refactor suggestion
**Ensure proper error handling in exception block**

In the `except Exception as e:` block, after logging the exception, the function does not return an error response to the client. This could lead to unintended behavior, as the function may continue execution after an exception has occurred. Consider returning an appropriate error response to ensure the API behaves as expected.
Apply this diff to fix the error handling:
```diff
 except Exception as e:
     logger.exception(f"An error has occurred; {e}")
+    return (
+        AirQoRequests.create_response(
+            "An internal error occurred while processing your request. Please contact support.",
+            success=False,
+        ),
+        AirQoRequests.Status.HTTP_500_INTERNAL_SERVER_ERROR,
+    )
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
            filter_type, filter_value, error_message = self._get_validated_filter(
                json_data
            )
            if error_message:
                return error_message, AirQoRequests.Status.HTTP_400_BAD_REQUEST
        except Exception as e:
            logger.exception(f"An error has occured; {e}")
            return (
                AirQoRequests.create_response(
                    "An internal error occurred while processing your request. Please contact support.",
                    success=False,
                ),
                AirQoRequests.Status.HTTP_500_INTERNAL_SERVER_ERROR,
            )
```
```python
def add_device_join(self, data_query, filter_clause=""):
    """
    Joins device information with a given data query based on device_name.

    Args:
        data_query (str): The data query to join with device information.
        filter_clause (str): Optional SQL filter clause.

    Returns:
        str: Modified query with device join.
    """
```
**Sanitize `filter_clause` to Prevent SQL Injection.**

In the `add_device_join` method, the `filter_clause` is directly appended to the SQL query. To enhance security, ensure that `filter_clause` is properly sanitized or parameterized to prevent SQL injection vulnerabilities.

Consider modifying the method to use parameterized queries or sanitizing inputs before inclusion, as in the sketch below.
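A minimal sketch of the parameterized alternative using the google-cloud-bigquery client; the table path and column names are placeholders, not the method's actual internals:

```python
from google.cloud import bigquery

def fetch_device_rows(client: bigquery.Client, device_names: list) -> list:
    """Filter by device names without splicing values into the SQL text."""
    query = """
        SELECT device_name, site_id
        FROM `project.dataset.devices`  -- placeholder table path
        WHERE device_name IN UNNEST(@device_names)
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ArrayQueryParameter("device_names", "STRING", device_names)
        ]
    )
    # Values travel as typed query parameters, so no quoting or escaping is needed.
    return list(client.query(query, job_config=job_config).result())
```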
src/analytics/api/models/events.py
Outdated
```python
instance = cls("build_query")
query = instance.build_query(
    data_table,
    filter_type,
    filter_value,
    pollutants_query,
    bam_pollutants_query,
    start_date,
    end_date,
    frequency=frequency,
)
```
**Use Appropriate Tenant Identifier When Instantiating `EventsModel`.**

Passing `"build_query"` as the `tenant` parameter may not be appropriate and could lead to issues. Ensure that the correct tenant identifier is used when creating an instance.
Apply this change:
```diff
-instance = cls("build_query")
+instance = cls(cls.tenant)
```
Replace `cls.tenant` with the appropriate tenant identifier available in the class context.
Committable suggestion skipped: line range outside the PR's diff.
Updates from airqo staging
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
src/analytics/api/views/data.py (1)
`215-246`: **Simplify filter validation logic**

The current implementation could be simplified using sets for better readability and maintainability. Also, the TODO comment indicates technical debt that should be tracked.

Consider this refactoring:
```diff
- # TODO Lias with device registry to cleanup this makeshift implementation
- devices = ["devices", "device_ids", "device_names"]
- sites = ["sites", "site_names", "site_ids"]
-
- valid_filters = [
-     "sites",
-     "site_names",
-     "site_ids",
-     "devices",
-     "device_ids",
-     "airqlouds",
-     "device_names",
- ]
+ # Filter type sets
+ DEVICE_FILTERS = {"devices", "device_ids", "device_names"}
+ SITE_FILTERS = {"sites", "site_names", "site_ids"}
+ VALID_FILTERS = DEVICE_FILTERS | SITE_FILTERS | {"airqlouds"}
```
Would you like me to create a GitHub issue to track the device registry cleanup TODO?
src/analytics/api/models/events.py (2)
`336-346`: **Address TODO comment and improve datetime handling**

- The TODO comment suggests a need for improvement in the implementation.
- The string replacement approach for datetime formatting could be made more robust by using parameterized queries.

Would you like help implementing a more robust solution for datetime handling? A sketch of one option follows.
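One hedged option, assuming the dates arrive as ISO-8601 strings: validate them once up front, then hand BigQuery typed `TIMESTAMP` parameters instead of splicing formatted strings into the query text (the helper name is illustrative):

```python
from datetime import datetime
from google.cloud import bigquery

def time_range_config(start_date: str, end_date: str) -> bigquery.QueryJobConfig:
    """Validate the range, then expose it as @start/@end TIMESTAMP parameters."""
    start = datetime.fromisoformat(start_date)  # raises ValueError on bad input
    end = datetime.fromisoformat(end_date)
    if start >= end:
        raise ValueError("start_date must be earlier than end_date")
    return bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("start", "TIMESTAMP", start),
            bigquery.ScalarQueryParameter("end", "TIMESTAMP", end),
        ]
    )
```

The query itself would then filter with `WHERE timestamp BETWEEN @start AND @end`, letting BigQuery handle formatting and time zones.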
`380-388`: **Consider using list comprehension with conditional expression**

The method could be more concise by using a single list comprehension with a conditional expression:
```diff
- if frequency in ["weekly", "monthly", "yearly"]:
-     return [
-         f"ROUND(AVG({data_table}.{col}), {decimal_places}) AS {col}"
-         for col in mapping
-     ]
- return [
-     f"ROUND({data_table}.{col}, {decimal_places}) AS {col}" for col in mapping
- ]
+ return [
+     f"ROUND({'AVG' if frequency in ['weekly', 'monthly', 'yearly'] else ''}"
+     f"({data_table}.{col}), {decimal_places}) AS {col}"
+     for col in mapping
+ ]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- `k8s/analytics/values-prod.yaml` (2 hunks)
- `src/analytics/api/models/events.py` (5 hunks)
- `src/analytics/api/utils/pollutants/pm_25.py` (2 hunks)
- `src/analytics/api/views/data.py` (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- k8s/analytics/values-prod.yaml
- src/analytics/api/utils/pollutants/pm_25.py
🧰 Additional context used
🪛 Ruff
src/analytics/api/models/events.py
439-442: Use ternary operator `key = pollutant if pollutant == "raw" else f"{pollutant}_{data_type}"` instead of `if`-`else` block. Replace `if`-`else` block with `key = pollutant if pollutant == "raw" else f"{pollutant}_{data_type}"` (SIM108)
🔇 Additional comments (8)
src/analytics/api/views/data.py (4)
`3-3`: **LGTM: Good practices with typing and logging!**

The addition of type hints and proper logging setup enhances code maintainability and debugging capabilities.
Also applies to: 36-38
`98-104`: Refer to previous review comment about error handling

`192-192`: Refer to previous review comment about exception exposure
`90-90`: **Verify BigQuery support for new frequencies**
The addition of weekly, monthly, and yearly frequencies enhances the API's capabilities. However, we should verify that the underlying BigQuery queries support these new frequencies.
✅ Verification successful
BigQuery support for new frequencies is properly implemented
The codebase shows comprehensive support for weekly, monthly, and yearly frequencies:
- The `BIGQUERY_FREQUENCY_MAPPER` in `analytics/api/utils/pollutants/pm_25.py` includes mappings for all new frequencies using `COMMON_POLLUTANT_MAPPING`
- The Events model in `analytics/api/models/events.py` implements specific handling for these frequencies:
  - Proper time-based grouping using `TIMESTAMP_TRUNC` for weekly and monthly, and `EXTRACT(YEAR)` for yearly
  - Appropriate aggregation with `GROUP BY ALL` clauses
  - Correct timestamp formatting for each frequency
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for BigQuery frequency handling code
ast-grep --pattern 'BIGQUERY_FREQUENCY_MAPPER = {
  $$$
}'
# Search for frequency-related SQL queries
rg -g '*.py' -A 5 'frequency.*sql|sql.*frequency'
```
Length of output: 112
Script:
```bash
#!/bin/bash
# Search for frequency-related code in Python files
rg -g '*.py' -B 2 -A 2 'frequency.*=.*\[.*hourly.*daily.*raw'
# Search for SQL query generation or execution with frequency
rg -g '*.py' -B 2 -A 2 'def.*query.*frequency|frequency.*query'
# Look for frequency-related constants or mappings
rg -g '*.py' 'FREQUENCY|Frequency' --type py
# Search for time-based aggregation functions
ast-grep --pattern 'def $_(frequency$_):
  $$$
'
```
Length of output: 7500
Script:
```bash
#!/bin/bash
# Let's check the content of the frequency mappers and BigQuery handlers
rg -g '*.py' -A 10 'BIGQUERY_FREQUENCY_MAPPER = \{'
# Check for any weekly/monthly/yearly frequency handling
rg -g '*.py' -B 2 -A 2 'weekly|monthly|yearly'
# Look for time-based aggregation implementations
ast-grep --pattern 'def aggregate_$_($$$):
  $$$
'
```
Length of output: 28192
src/analytics/api/models/events.py (4)
`18-21`: **Well-structured docstrings!**
The class and constructor docstrings are clear, informative, and follow good documentation practices.
Also applies to: 45-51
`82-116`: Reiterating previous SQL injection concern
`150-167`: **Excellent use of dictionary mapping!**

The implementation is clean, efficient, and easily extensible. The use of a dictionary for mapping frequencies to SQL expressions is a good practice; a representative sketch follows.
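For context, a minimal sketch of this pattern. The expressions mirror the `TIMESTAMP_TRUNC` / `EXTRACT(YEAR)` usage noted in the frequency verification above, but the exact mapping in `events.py` may differ:

```python
# Representative mapping, not copied from events.py.
FREQUENCY_GROUPING = {
    "weekly": "TIMESTAMP_TRUNC(timestamp, WEEK)",
    "monthly": "TIMESTAMP_TRUNC(timestamp, MONTH)",
    "yearly": "EXTRACT(YEAR FROM timestamp)",
}

def grouping_expression(frequency: str) -> str:
    """Look up the SQL grouping expression; fall back to the raw timestamp column."""
    return FREQUENCY_GROUPING.get(frequency, "timestamp")

assert grouping_expression("monthly") == "TIMESTAMP_TRUNC(timestamp, MONTH)"
```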
`59-80`: **Consider parameterizing SQL fragments**
While the properties are well-structured, ensure that any user-provided data that might be concatenated with these SQL fragments is properly parameterized to prevent SQL injection vulnerabilities.
Run the following script to check for potential SQL injection vulnerabilities:
```python
        # TODO Clean up by use using `get_columns` helper method
        if pollutant in {"pm2_5", "pm10", "no2"}:
            if frequency in ["weekly", "monthly", "yearly"]:
                bam_pollutant_columns.extend(
                    [f"ROUND(AVG({pollutant}), {decimal_places}) AS {key}_value"]
                )
            else:
                bam_pollutant_columns.extend(
                    [f"ROUND({pollutant}, {decimal_places}) AS {key}_value"]
                )
        # TODO Fix query when weather data is included. Currently failing
```
🛠️ Refactor suggestion
**Extract pollutant handling logic into helper method**
The pollutant handling logic contains duplicate code and a TODO comment. Consider extracting this into a helper method:
```diff
+ def _format_bam_pollutant_columns(self, pollutant, frequency, decimal_places):
+     if frequency in ["weekly", "monthly", "yearly"]:
+         return [f"ROUND(AVG({pollutant}), {decimal_places}) AS {pollutant}_{self.data_type}_value"]
+     return [f"ROUND({pollutant}, {decimal_places}) AS {pollutant}_{self.data_type}_value"]
+
```
Committable suggestion skipped: line range outside the PR's diff.