⚡️ Speed up function sort_objects_by_score by 19% #36
Open
codeflash-ai[bot] wants to merge 1 commit into main from codeflash/optimize-sort_objects_by_score-mkos98rz
Conversation
The optimization replaces a lambda function `lambda k: k["score"]` with `operator.itemgetter("score")` as the key function for sorting.
**What changed:**
- Added `from operator import itemgetter` import
- Changed `key=lambda k: k["score"]` to `key=itemgetter("score")`
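Put together, the optimized function presumably looks like the sketch below (the exact signature and the `reverse` default in `table_postprocess.py` are assumptions; only the key-function change is confirmed by this PR):

```python
from operator import itemgetter

def sort_objects_by_score(objects, reverse=True):
    """Sort detection dicts by their "score" value.

    itemgetter("score") is a C-implemented callable, so the sort avoids
    the per-element overhead of invoking a Python-level lambda.
    """
    return sorted(objects, key=itemgetter("score"), reverse=reverse)
```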
**Why it's faster:**
In Python, a `lambda` key goes through the interpreter's generic function call machinery every time the sort invokes it — once per element being sorted. `itemgetter` is implemented in C and avoids this overhead: it is a specialized callable designed specifically for item access. Eliminating the per-element Python-level call makes the sort faster, especially as the list size grows.
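The two key functions are interchangeable for this purpose: both return the value stored under `"score"`, and both raise `KeyError` when the key is missing. A quick check (the dict contents are illustrative):

```python
from operator import itemgetter

row = {"score": 0.75, "label": "cell"}  # illustrative keys, not from the PR
get_score = itemgetter("score")

# Same result as the original lambda key
assert get_score(row) == (lambda k: k["score"])(row) == 0.75
```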
The line profiler shows the optimization reduces per-hit time from ~65.2μs to ~18.0μs (3.6x faster per call), with an overall 18% speedup across all test workloads.
**Performance characteristics:**
- Small lists (2-10 objects): 4-15% slower — with so few elements the fixed setup cost (import, constructing the itemgetter) dominates and the per-element savings never amortize
- Medium lists (~100 objects): The optimization starts showing benefits
- Large lists (500-1000 objects): **15-57% faster** - the optimization shines here, with particularly strong gains for lists with many duplicate scores (57% speedup for 1000 identical scores) where comparison count is high
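These numbers are machine-dependent, but the large-list effect is easy to reproduce with a `timeit` sketch (list size and repeat count are arbitrary choices, not taken from the PR's benchmark):

```python
import random
import timeit
from operator import itemgetter

# 1000 score dicts, mirroring a large detection result
objects = [{"score": random.random()} for _ in range(1000)]

t_lambda = timeit.timeit(lambda: sorted(objects, key=lambda k: k["score"]), number=500)
t_getter = timeit.timeit(lambda: sorted(objects, key=itemgetter("score")), number=500)
print(f"lambda:     {t_lambda:.3f}s")
print(f"itemgetter: {t_getter:.3f}s")  # typically noticeably lower
```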
**Impact on existing workloads:**
Looking at `function_references`, this function is called in critical hot paths:
- `nms()` - Non-maxima suppression for object detection, processes potentially large object lists
- `slot_into_containers()` - Called inside nested loops over packages/containers, may sort repeatedly
- `nms_by_containment()`, `nms_supercells()`, `header_supercell_tree()` - All sort lists during table structure analysis
These are performance-sensitive post-processing operations in table detection pipelines where objects (cells, rows, columns) can number in the hundreds. The optimization provides meaningful speedups when processing tables with many detected elements, with negligible impact on smaller tables.
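For context, a greedy NMS loop of the kind `nms()` performs typically consumes the score-sorted list like this. This is an illustrative sketch, not the library's actual code; the bounding-box format, dict keys, and threshold are all assumptions:

```python
from operator import itemgetter

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms_sketch(objects, thresh=0.5):
    # Greedy NMS: walk objects from highest to lowest score, keeping each
    # one unless it overlaps an already-kept object beyond the threshold.
    kept = []
    for obj in sorted(objects, key=itemgetter("score"), reverse=True):
        if all(iou(obj["bbox"], k["bbox"]) <= thresh for k in kept):
            kept.append(obj)
    return kept
```

Because every pass starts from a full score sort, a cheaper key function benefits each call of such a routine.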
📄 19% (0.19x) speedup for `sort_objects_by_score` in `unstructured_inference/models/table_postprocess.py`
⏱️ Runtime: 760 microseconds → 640 microseconds (best of 208 runs)
To edit these changes, run `git checkout codeflash/optimize-sort_objects_by_score-mkos98rz` and push.