⚡️ Speed up function `_uniquity_file` by 406% #276
Open
codeflash-ai[bot] wants to merge 1 commit into main from
Conversation
The optimization replaces expensive regex operations with fast string operations, delivering a **406% speedup** by eliminating the main performance bottlenecks in the original code.

**Key Performance Changes:**

1. **Eliminated regex pattern matching** - The original code used `re.match(pattern, f)` on every file in the list, then sorted matching files with a regex-based key function (`_sorting_key`). The optimized version uses simple string operations (`startswith()`, `endswith()`, `isdigit()`) to identify matching files, which are **significantly faster** than compiling and executing regex patterns.
2. **Removed redundant sorting** - The original sorted all matching files by `_sorting_key`, then extracted numbers from them again using `re.search()`. The optimized version extracts numbers directly during the initial scan and sorts only the integer list (which is much smaller than the file list).
3. **Single-pass extraction** - Instead of two regex passes (one for filtering, one for number extraction), the optimized code extracts numbers in a single pass using string slicing and `isdigit()`.

**Why This Works:**

- String methods like `startswith()` and `endswith()` are implemented in C and operate on contiguous memory, making them roughly 10-20x faster than regex for simple prefix/suffix checks.
- `isdigit()` is faster than a regex `\d+` pattern match for validating numeric strings.
- Sorting a list of integers is much cheaper than sorting strings with a custom key function that invokes the regex engine.

**Impact Based on Context:**

The function is called by `_get_non_duplicated_filename()`, which uses `os.listdir()`, so it runs during file operations that may involve many files. The test results show dramatic improvements, especially for large file lists (439-447% faster with 500 files), making this optimization particularly valuable when:

- processing directories with many duplicate files (common in data-processing pipelines)
- generating unique filenames in batch operations
- working with archival or versioned file systems

The optimization preserves exact behavior, including edge cases (files with numbers in their names, special characters, multiple dots), and is most effective on larger file lists, where regex overhead compounds.
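The actual diff is not shown on this page. As a sketch of the two strategies being contrasted above — function names, the `name (N).ext` duplicate-naming convention, and the signature are assumptions for illustration, not the real `_uniquity_file` code:

```python
import re


def uniquity_file_regex(filenames, filename):
    """Original-style approach: regex filtering plus regex-based extraction."""
    stem, dot, ext = filename.rpartition(".")
    if not dot:
        stem, ext = filename, ""
    # Compile and run a pattern against every candidate filename
    pattern = re.compile(rf"^{re.escape(stem)} \((\d+)\){re.escape(dot + ext)}$")
    numbers = sorted(
        int(m.group(1)) for f in filenames if (m := pattern.match(f))
    )
    next_n = (numbers[-1] + 1) if numbers else 1
    return f"{stem} ({next_n}){dot}{ext}"


def uniquity_file_fast(filenames, filename):
    """Optimized-style approach: startswith/endswith/isdigit in a single pass."""
    stem, dot, ext = filename.rpartition(".")
    if not dot:
        stem, ext = filename, ""
    prefix = f"{stem} ("
    suffix = f"){dot}{ext}"
    numbers = []
    for f in filenames:
        # Cheap C-level prefix/suffix checks replace the regex match
        if f.startswith(prefix) and f.endswith(suffix):
            middle = f[len(prefix):len(f) - len(suffix)]
            if middle.isdigit():  # validates the counter without regex
                numbers.append(int(middle))
    next_n = (max(numbers) + 1) if numbers else 1
    return f"{stem} ({next_n}){dot}{ext}"
```

Both variants scan the file list for `stem (N).ext` names and return the next free counter; the fast variant avoids ever entering the regex engine.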
📄 406% (4.06x) speedup for `_uniquity_file` in `unstructured/metrics/utils.py`

⏱️ Runtime: 8.79 milliseconds → 1.74 milliseconds (best of 125 runs)

📝 Explanation and details (see the PR description above)
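The 8.79 ms → 1.74 ms figures come from the Codeflash harness. A self-contained micro-benchmark in the same spirit might look like the following; the `report (N).csv` naming, the list sizes, and both helper functions are illustrative assumptions, and timings will vary by machine:

```python
import re
import timeit

PATTERN = re.compile(r"^report \((\d+)\)\.csv$")


def next_number_regex(filenames):
    # Regex-based variant: pattern match on every filename
    nums = sorted(int(m.group(1)) for f in filenames if (m := PATTERN.match(f)))
    return (nums[-1] + 1) if nums else 1


def next_number_str(filenames):
    # String-op variant: prefix/suffix checks plus isdigit validation
    prefix, suffix = "report (", ").csv"
    nums = [
        int(mid)
        for f in filenames
        if f.startswith(prefix)
        and f.endswith(suffix)
        and (mid := f[len(prefix):len(f) - len(suffix)]).isdigit()
    ]
    return (max(nums) + 1) if nums else 1


# 500 matching files plus 500 non-matching ones, as in the large-list tests
files = [f"report ({i}).csv" for i in range(1, 501)] + [f"other_{i}.txt" for i in range(500)]

t_regex = timeit.timeit(lambda: next_number_regex(files), number=200)
t_str = timeit.timeit(lambda: next_number_str(files), number=200)
print(f"regex: {t_regex:.3f}s  string ops: {t_str:.3f}s  ratio: {t_regex / t_str:.1f}x")
```

On typical CPython builds the string-operation variant wins by a wide margin, which is the effect the reported speedup quantifies.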
✅ Correctness verification report:
⚙️ Existing Unit Tests: `metrics/test_utils.py::test_uniquity_file`

🌀 Generated Regression Tests
To edit these changes, run `git checkout codeflash/optimize-_uniquity_file-mks4vsp5` and push.