control over event loop batch sizes #706
Labels
api: datastore
priority: p3
type: feature request
Feature Request: I think a use case we have would benefit from more control over how many concurrent requests are used when fetching large numbers of entities by key. Specifically, my idea is to add the ability to control this batch size.
We have some cases where we have several thousand keys to look up, and from what we've observed, python-ndb makes a small number of 1000-key calls. I've tried various ways to make smaller overlapping async calls (including writing tasklets, sketched below) without any success. I believe this is because python-ndb keeps adding to the existing batch, all the way up to 1000 keys per lookup.
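For concreteness, this is roughly the kind of chunked lookup I've been attempting (the chunk size of 200 and the client setup are just illustrative; as noted above, the event loop seems to coalesce these back into 1000-key batches anyway):

```python
from google.cloud import ndb

client = ndb.Client()

def chunked_get(keys, chunk_size=200):
    """Fire one get_multi_async per chunk so the RPCs can overlap,
    then block on all of the per-key futures.

    In practice this has no effect today: python-ndb's event loop
    appears to fold the pending lookups back into batches of up to
    1000 keys regardless of how the calls are issued.
    """
    with client.context():
        per_chunk_futures = [
            ndb.get_multi_async(keys[i:i + chunk_size])
            for i in range(0, len(keys), chunk_size)
        ]
        # get_multi_async returns one Future per key; flatten and wait.
        return [
            future.get_result()
            for chunk in per_chunk_futures
            for future in chunk
        ]
```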
Smaller overlapping async calls would (IMO) perform better in our case, or at least allow us to tune the balance between the concurrency of requests and the overhead of making more requests.
So my first question is: is there already a good way to do what I'm trying to do? If not, I'd be happy to contribute a PR if someone could provide some guidance on how to add more control over this behavior (one possible shape for such a knob is sketched below).
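Purely as a strawman, and to be clear, none of this exists in python-ndb today: I imagine something like an explicit batch-size option on the multi-get calls, e.g.:

```python
from google.cloud import ndb

client = ndb.Client()
keys = [ndb.Key("SomeKind", i) for i in range(1, 5001)]  # illustrative

with client.context():
    # Hypothetical batch_size parameter: cap how many keys the event
    # loop folds into a single Lookup RPC (currently fixed at the
    # 1000-key Datastore limit). NOT a real parameter today.
    entities = ndb.get_multi(keys, batch_size=200)
```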
Thanks!