Showing 78 changed files with 8,493 additions and 903 deletions.

@@ -1,7 +1,6 @@
.mypy_cache
__pycache__
.local.env
docs
venv
.idea
osbot_utils.egg-info

346 changes: 346 additions & 0 deletions
docs/code/osbot_utils/testing/performance/cls-kwargs-performance-mistery.md
Large diffs are not rendered by default.

152 changes: 152 additions & 0 deletions
...code/osbot_utils/testing/performance/tech-doc__performance-testing-framework.md
@@ -0,0 +1,152 @@
# OSBot_Utils Performance Testing Framework

## Introduction

The Performance Testing Framework in OSBot_Utils provides a system for measuring and validating Python class instantiation performance.

Unlike simple timing mechanisms, this framework employs statistical analysis and dynamic normalization to deliver highly stable and reliable measurements suitable for automated testing environments.

## Quick Start

Here's a simple example of using the framework to measure class instantiation performance:

```python
from unittest import TestCase

from osbot_utils.testing.performance import Performance_Measure__Session
from osbot_utils.helpers.Random_Guid import Random_Guid
from osbot_utils.type_safe.Type_Safe import Type_Safe      # import path may differ between osbot_utils versions

# Basic performance test
def test_instantiation_performance():
    with Performance_Measure__Session() as session:
        # Measure built-in type instantiation
        session.measure(str).assert_time(200)

        # Measure custom class instantiation
        session.measure(Random_Guid).assert_time(10_000)

# Complex measurement example
class ComplexTest(TestCase):
    def test_type_safe_performance(self):
        class SimpleClass(Type_Safe):
            value: int = 42

        with Performance_Measure__Session() as session:
            result = session.measure(SimpleClass)
            result.assert_time(20_000)  # Assert normalized time

            # Access detailed metrics
            measurements = result.result.measurements
            for loop_size, metric in measurements.items():
                print(f"Loop size {loop_size}: {metric.avg_time}ns avg")
```

## Performance Characteristics

The framework has been tested extensively in both local development environments and CI/CD pipelines.

Here's a comparison of performance measurements across different scenarios _(all values in nanoseconds)_:

| Class Type | Local Stable | Local Raw | GH.Actions Stable | GH.Actions Raw | What is being tested |
|------------|--------------|-----------|-------------------|----------------|----------------------|
| str        | 100          | 79        | 200               | 190            | Python built-in string type instantiation |
| Random_Guid| 6,000        | 5,541     | 10,000            | 11,638         | Simple utility class with minimal logic |
| An_Class_1 | 100          | 132       | 200               | 239            | Empty class, baseline overhead |
| An_Class_2 | 6,000        | 5,632     | 20,000            | 15,781         | Basic Type_Safe inheritance |
| An_Class_3 | 20,000       | 16,217    | 50,000            | 53,261         | Type_Safe with uninitialized int |
| An_Class_4 | 20,000       | 15,642    | 50,000            | 50,806         | Type_Safe with default int value |
| An_Class_5 | 20,000       | 16,278    | 50,000            | 53,439         | Type_Safe with string annotation |
| An_Class_6 | 20,000       | 15,460    | 50,000            | 50,508         | Type_Safe with multiple primitives |

These measurements reveal several important patterns:

1. Environment Impact: CI/CD environments typically show 2-3x higher instantiation times compared to local execution, likely due to virtualization and shared resources.

2. Type_Safe Overhead: The Type_Safe base class adds consistent overhead (about 6,000ns locally, 20,000ns in CI), reflecting the cost of type checking infrastructure.

3. Annotation Cost: Type annotations add measurable overhead (increasing to 20,000ns locally, 50,000ns in CI), but this cost doesn't increase significantly with additional annotations.

4. Stability: Despite absolute time differences, the relative performance patterns remain consistent across environments, validating the framework's normalization strategy.

## Core Architecture

The framework is built around three primary components that work together to provide comprehensive performance analysis:

### Performance_Measure__Session

The Performance_Measure__Session class serves as the primary interface for conducting performance measurements. It implements both a context manager pattern for resource management and method chaining for a fluent API design. This design choice allows for clean, readable test code while ensuring proper cleanup of resources.

The session manager orchestrates the entire measurement process, from raw data collection through statistical processing to final score normalization. Its modular design separates concerns between data collection, analysis, and result presentation, making it both maintainable and extensible.

### Measurement Models

The framework uses two specialized model classes for data organization:

The Model__Performance_Measure__Measurement class encapsulates individual measurement metrics, including minimum, maximum, median, and standard deviation values. This granular data provides insights into performance variability and helps identify potential issues.

The Model__Performance_Measure__Result class aggregates multiple measurements and computes final performance scores. It maintains the relationship between raw measurements and normalized results, facilitating both detailed analysis and high-level performance validation.
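
To make these data shapes concrete, here is a minimal illustrative sketch of the kind of fields the two models carry. The class and field names below are assumptions for illustration only and may not match the actual model definitions:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class MeasurementMetrics:          # illustrative stand-in for Model__Performance_Measure__Measurement
    avg_time   : float = 0.0       # mean instantiation time in ns
    min_time   : float = 0.0       # fastest observed instantiation in ns
    max_time   : float = 0.0       # slowest observed instantiation in ns
    median_time: float = 0.0       # median instantiation time in ns
    stddev_time: float = 0.0       # standard deviation across the loop

@dataclass
class MeasurementResult:           # illustrative stand-in for Model__Performance_Measure__Result
    measurements: Dict[int, MeasurementMetrics] = field(default_factory=dict)  # keyed by loop size
    final_score : float = 0.0      # normalized score used by assert_time()
```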

## Measurement Methodology

### Data Collection Strategy

The framework employs a Fibonacci sequence for iteration counts: [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]. This sequence provides exponential coverage across different scales while keeping the total measurement time manageable. The exponential progression maps well to common performance characteristics in software systems.

Each measurement iteration uses Python's high-precision performance counter (perf_counter_ns) to capture object instantiation time. The framework collects comprehensive measurement data across all iterations, providing a rich dataset for statistical analysis.
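
A simplified sketch of the collection loop this strategy implies is shown below; the function and variable names are illustrative rather than the framework's actual internals:

```python
from time import perf_counter_ns

LOOP_SIZES = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610]   # Fibonacci iteration counts

def collect_raw_timings(target):
    """Time `target()` instantiation for each loop size, returning ns samples per loop."""
    raw = {}
    for loop_size in LOOP_SIZES:
        samples = []
        for _ in range(loop_size):
            start = perf_counter_ns()
            target()                                   # instantiate the class under test
            samples.append(perf_counter_ns() - start)
        raw[loop_size] = samples
    return raw

timings = collect_raw_timings(str)                     # e.g. measure built-in str instantiation
```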

### Statistical Processing

The framework implements a three-phase statistical processing approach to ensure reliable results:

First, the system performs outlier removal by sorting measurements and trimming the top and bottom 10% of values. This step helps eliminate system noise, garbage collection pauses, and other environmental artifacts that could skew results.

Next, it calculates a base performance score using a weighted combination of statistical metrics: 60% median and 40% trimmed mean. This weighting provides a balance between outlier resistance (from the median) and distribution awareness (from the trimmed mean).
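
The first two phases can be sketched as follows; whether the median is taken over the raw or the trimmed samples is not specified here, so the sketch applies both metrics to the trimmed set:

```python
from statistics import median, mean

def base_score(samples):
    """Trim the top and bottom 10% of samples, then blend median and trimmed mean."""
    ordered = sorted(samples)
    trim    = len(ordered) // 10                              # 10% of samples on each side
    trimmed = ordered[trim: len(ordered) - trim] or ordered   # keep everything if too few samples
    return 0.60 * median(trimmed) + 0.40 * mean(trimmed)
```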

Finally, the framework applies dynamic normalization based on the magnitude of measurements:

- Measurements under 1µs are normalized to 100ns precision
- Measurements under 10µs are normalized to 500ns precision
- Measurements under 100µs are normalized to 1,000ns precision
- Measurements over 100µs are normalized to 5,000ns precision

This adaptive precision ensures meaningful comparisons across different performance scales while avoiding false positives from natural measurement variation.
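
A sketch of this normalization step, assuming rounding to the nearest step (the framework's exact rounding rule, nearest versus ceiling, is not specified here):

```python
def normalize_score(score_ns):
    """Round a raw score (in ns) to the precision tier that matches its magnitude."""
    if score_ns < 1_000:          # under 1µs   -> 100ns steps
        step = 100
    elif score_ns < 10_000:       # under 10µs  -> 500ns steps
        step = 500
    elif score_ns < 100_000:      # under 100µs -> 1,000ns steps
        step = 1_000
    else:                         # 100µs and above -> 5,000ns steps
        step = 5_000
    return round(score_ns / step) * step

normalize_score(79)        # sub-1µs measurement, snapped to the 100ns grid
normalize_score(5_541)     # sub-10µs measurement, snapped to the 500ns grid
```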

## Practical Implementation

### Test Integration

The framework is designed for seamless integration into existing test suites. Here's a typical implementation:

```python
def test_performance():
    with Performance_Measure__Session() as session:
        session.measure(str).assert_time(100)
        session.measure(Random_Guid).assert_time(5500)
```

This code demonstrates the framework's clean API and straightforward assertion mechanism. The assert_time method automatically applies appropriate normalization based on the measurement scale.

### Performance Characteristics

Through extensive testing, we've identified several typical performance patterns:

Basic Python types show highly optimized instantiation times of around 100ns. Simple utility classes like Random_Guid typically measure around 5,500ns. Type_Safe classes with basic type annotations usually range from 15,000ns to 16,000ns, with default values having minimal impact on performance.

These patterns provide useful benchmarks for evaluating new class implementations and detecting potential performance regressions.

## Best Practices

### Continuous Integration

When integrating the framework into CI/CD pipelines, consider these recommendations:

Set baseline measurements during quiet periods to establish reliable benchmarks. Run performance tests in isolation from other intensive processes to minimize environmental interference. Use appropriate precision levels based on your performance requirements: tighter bounds for critical paths, looser bounds for less critical components.

### Performance Optimization

The framework provides valuable data for optimization efforts. Use the detailed measurements to identify costly operations, quantify improvements, and make data-driven optimization decisions. The statistical processing helps ensure that measured improvements represent real performance changes rather than measurement artifacts.

## Conclusion

The OSBot_Utils Performance Testing Framework provides a robust solution for measuring and validating Python class instantiation performance. Its combination of sophisticated statistical processing, dynamic normalization, and clean API design makes it particularly valuable for automated testing environments where reliability and ease of use are paramount.

The framework's ability to provide stable, meaningful measurements across different performance scales, combined with its straightforward integration path, makes it an essential tool for maintaining and improving Python code performance.

140 changes: 140 additions & 0 deletions
docs/code/osbot_utils/testing/performance/type-safe-dependencies-performance.md
@@ -0,0 +1,140 @@
# Type_Safe Dependencies Performance Analysis

## Introduction

This document presents a detailed performance analysis of the core dependencies and underlying operations that power the Type_Safe system. Through comprehensive benchmarking of Python native operations and OSBot_Utils utilities, we can better understand the performance characteristics of Type_Safe's foundational components.

## Core Operations Performance Map

### Python Native Type Operations

These baseline operations form the foundation of Type_Safe's type checking system:

| Operation | Time (ns) | Context |
|-----------|-----------|---------|
| isinstance() check | 0-100 | Type verification baseline |
| issubclass() check | 0-100 | Inheritance verification |
| type() check | 0 | Type identification |

The near-zero overhead of these operations demonstrates that Python's native type system provides an efficient foundation for Type_Safe's enhanced type checking.
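
Figures of this kind can be reproduced with a simple perf_counter_ns loop such as the one below; the helper is illustrative and not part of OSBot_Utils, and the lambda-call overhead it includes makes the results rough upper bounds:

```python
from time import perf_counter_ns

def time_op_ns(op, iterations=1_000):
    """Average wall-clock cost of `op()` in nanoseconds over `iterations` runs."""
    start = perf_counter_ns()
    for _ in range(iterations):
        op()
    return (perf_counter_ns() - start) // iterations

print(time_op_ns(lambda: isinstance(42, int)))     # native type check
print(time_op_ns(lambda: issubclass(bool, int)))   # inheritance check
print(time_op_ns(lambda: type(42) is int))         # type identification
```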

### Attribute Access Operations

Basic attribute manipulation shows minimal overhead:

| Operation | Time (ns) | Context |
|-----------|-----------|---------|
| getattr() | 0-100 | Basic attribute retrieval |
| hasattr() | 0-100 | Attribute existence check |
| setattr() | 100 | Attribute assignment |
| getattr with default | 100 | Safe attribute access |
| getattr missing | 300 | Exception handling cost |

The slightly higher cost for setattr() reflects the complexity of Python's attribute assignment mechanism.
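
A small illustration of the operations in the table; the example class is hypothetical and the timings in the comments restate the table's figures rather than guaranteed values:

```python
class Target:
    name = "osbot"

target = Target()

getattr(target, "name")              # basic attribute retrieval: ~0-100ns
hasattr(target, "name")              # attribute existence check: ~0-100ns
setattr(target, "name", "utils")     # attribute assignment: ~100ns
getattr(target, "missing", None)     # safe access with a default: ~100ns

try:
    getattr(target, "missing")       # missing attribute: raising and catching costs ~300ns
except AttributeError:
    pass
```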

### Reflection and Introspection

Class and object inspection operations show varying costs:

| Operation | Time (ns) | Context |
|-----------|-----------|---------|
| vars() | 0-100 | Object attribute dictionary |
| __annotations__ access | 0-100 | Type hints retrieval |
| MRO traversal | 100 | Inheritance chain analysis |
| dir() | 2,000 | Complete attribute listing |
| class __dict__ access | 0-100 | Class attribute access |
| class __bases__ access | 0-100 | Base class access |

The higher cost of dir() means full object inspection should be used with care.

### Typing Module Operations

Type hint processing shows consistent overhead:

| Operation | Time (ns) | Context |
|-----------|-----------|---------|
| get_origin (simple) | 200-300 | Basic type extraction |
| get_origin (complex) | 300-500 | Nested type handling |
| get_args (simple) | 300 | Type argument extraction |
| get_args (complex) | 500-600 | Nested type arguments |

The increased cost for complex types reflects the recursive nature of type argument processing.
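
These are the standard typing-module calls involved; a quick illustration of simple versus nested extraction:

```python
from typing import Dict, List, Optional, get_args, get_origin

simple_hint  = List[int]
complex_hint = Dict[str, Optional[List[int]]]

get_origin(simple_hint)                 # -> list
get_args(simple_hint)                   # -> (int,)

get_origin(complex_hint)                # -> dict
get_args(complex_hint)                  # -> (str, Optional[List[int]])
get_args(get_args(complex_hint)[1])     # nested extraction -> (List[int], NoneType)
```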

### Dictionary Operations

Dictionary manipulation shows efficient performance:

| Operation | Time (ns) | Context |
|-----------|-----------|---------|
| dict.get() | 0-100 | Key retrieval |
| dict contains check | 0-100 | Key existence |
| dict length check | 0-100 | Size determination |
| dict iteration | 100 | Key traversal |
| dict items() | 200 | Key-value pair access |
| dict update() | 100 | Bulk modification |

These operations demonstrate Python's optimized dictionary implementation.

### OSBot_Utils Core Operations

Higher-level utilities show expected overhead from their enhanced functionality:

| Operation | Time (ns) | Context |
|-----------|-----------|---------|
| obj_data() | 8,000-9,000 | Complete object analysis |
| default_value() | 100 | Type default creation |
| all_annotations() | 300-500 | Annotation collection |
| json_dumps() | 4,000 | Serialization |
| json_parse() | 700-800 | Deserialization |

The higher cost of obj_data() reflects its comprehensive object analysis capabilities.
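
A short usage sketch of these utilities follows; the import paths and the example class are assumptions based on the usual osbot_utils module layout and may differ between versions:

```python
# Import paths assumed from the osbot_utils.utils modules; adjust for your version.
from osbot_utils.utils.Objects import obj_data, default_value, all_annotations
from osbot_utils.utils.Json    import json_dumps, json_parse

class An_Example:                       # hypothetical class for illustration
    an_int : int = 0
    a_str  : str = ""

example = An_Example()

obj_data(example)                       # complete attribute/value analysis (~8,000-9,000ns)
default_value(int)                      # default instance for a type       (~100ns)
all_annotations(example)                # annotation collection             (~300-500ns)
json_dumps({"a": 1})                    # serialization                     (~4,000ns)
json_parse('{"a": 1}')                  # deserialization                   (~700-800ns)
```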

### Type Checking Operations

Specialized type checking shows varying complexity:

| Operation | Time (ns) | Context |
|-----------|-----------|---------|
| obj_is_type_union_compatible() | 300-400 | Union type validation |
| obj_is_attribute_annotation_of_type() | 200 | Annotation type check |
| value_type_matches_obj_annotation_for_attr() | 800-900 | Full type validation |
| value_type_matches_obj_annotation_for_union() | 700 | Union validation |
| check_none_value | 1,000 | None handling |
| check_missing_annotation | 500 | Missing annotation handling |
| check_complex_union | 700-800 | Complex union validation |

These operations form the core of Type_Safe's runtime type checking system.

## Performance Patterns and Observations

1. Native Operation Efficiency
   - Python's native type operations (isinstance, issubclass) show negligible overhead (0-100ns)
   - Basic attribute access operations maintain good performance (0-100ns)
   - Dictionary operations are highly optimized (0-200ns range)

2. Typing System Overhead
   - Simple type operations cost 200-300ns
   - Complex type operations (nested types, unions) cost 500-600ns
   - Full type validation can cost up to 900ns

3. Utility Operation Costs
   - Basic utilities maintain sub-microsecond performance
   - Complex operations (obj_data, json_dumps) show expected higher costs
   - Exception handling adds consistent overhead (300ns)

4. Scaling Characteristics
   - Type complexity correlates with processing time
   - Dictionary operations scale well with size
   - Reflection operations show consistent performance

## Time Threshold Categories

| Category | Time Range (ns) | Operations |
|----------|-----------------|------------|
| Zero-Cost | 0-100 | Native type checks, basic attribute access |
| Very Fast | 100-300 | Simple type operations, dict operations |
| Fast | 300-500 | Basic type validation, annotation handling |
| Medium | 500-1000 | Complex type validation, union checking |
| Higher-Cost | 1000-5000 | JSON operations, comprehensive analysis |
| Complex | 5000+ | Full object analysis (obj_data) |