Conversation
coderabbitai bot commented

No actionable comments were generated in the recent review. 🎉

Walkthrough: Removed the environment variable.

🚥 Pre-merge checks: 4 passed. ✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Force-pushed from c8db24d to a2916f7 (Compare)

Force-pushed from a2916f7 to 7437bd8 (Compare)
Pull Request Test Coverage Report for Build 23751210641 (Details)
💛 - Coveralls |
Force-pushed from 7437bd8 to bbe7012 (Compare)
Kong easily gets overwhelmed when receiving many requests, resulting in an error complaining about not having enough workers. This was due to KONG_NGINX_WORKER_PROCESSES=1 being set. Kong automatically determines the right number of worker processes when this is not specified, which resolves the issue.
Force-pushed from bbe7012 to 3d8519c (Compare)
avallete
left a comment
Thanks for the contribution and for digging into this! The diagnosis is spot-on: a single Nginx worker can definitely get saturated under parallel load (bulk Storage uploads being the classic case).
For context, KONG_NGINX_WORKER_PROCESSES=1 was set intentionally to reduce Kong's memory footprint in the local dev stack. This is a common pattern for projects embedding Kong in Docker (see edgexfoundry/edgex-compose#177 for a similar rationale). With auto, Kong spawns one worker per visible CPU core, which on a modern dev machine (8–16 cores) means 8–16 Nginx workers, each with its own Lua VM and connection pools. Given that supabase start already launches ~12 containers, the extra memory pressure adds up.
Rather than removing the line entirely, would you consider setting it to 2 instead? That should resolve the concurrency bottleneck you're hitting (two workers can handle a substantial number of parallel connections) while keeping the local stack reasonably lightweight for the majority of users who aren't doing heavy parallel workloads.
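For illustration, the compromise could look like this in the Kong service's environment block (the service name and compose layout here are assumptions, not the actual repo structure):

```yaml
# Hypothetical compose excerpt — names are illustrative, not from this repo.
# KONG_NGINX_WORKER_PROCESSES maps onto Nginx's worker_processes directive:
#   "1"    -> one worker: minimal memory, but saturates under parallel load
#   "2"    -> two workers: the compromise suggested here
#   unset  -> Kong falls back to its default "auto": one worker per CPU core
services:
  kong:
    environment:
      KONG_NGINX_WORKER_PROCESSES: "2"
```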
For users who genuinely need auto (e.g., load testing against the local stack), this could also be a good candidate for a config.toml override down the line (or a passed-down env config), but that's a separate effort and shouldn't block this fix.
What do you think?
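If that override route is ever explored, it might look something like the sketch below in config.toml. To be clear, this key does not exist today; both the section and the key name are purely hypothetical:

```toml
# Hypothetical, NOT a real supabase/cli option — illustration only.
[api]
kong_worker_processes = 2
```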
Kong easily gets overwhelmed when receiving many requests, resulting in an error complaining about not having enough workers. This was due to KONG_NGINX_WORKER_PROCESSES=1 being set. Kong automatically determines the right number of worker processes when this is not specified, which resolves the issue.
What kind of change does this PR introduce?
Bug fix (removing a bad env var for the Kong container).
What is the current behavior?
If many parallel requests are made to the Storage API, Kong will quickly stop responding and start terminating socket connections.
What is the new behavior?
Kong is able to handle many parallel connections without choking.
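One way to sanity-check the new behavior is to fire a burst of parallel requests at the local gateway and tally the status codes. A rough sketch, assuming a locally running stack — the URL, port, and endpoint below are assumptions for illustration, not values taken from this PR:

```shell
# Fire 200 requests, 50 at a time, and tally the HTTP status codes.
# Before the fix, a burst like this made Kong terminate connections;
# afterwards the tally should be dominated by non-5xx codes.
KONG_URL="${KONG_URL:-http://localhost:54321}"   # assumed local gateway
seq 1 200 \
  | xargs -P 50 -I{} curl -s -o /dev/null -w '%{http_code}\n' \
      "$KONG_URL/storage/v1/bucket" \
  | sort | uniq -c
```

The `xargs -P 50` fan-out is what recreates the parallel-connection pressure; a plain loop of sequential curls would never saturate even a single worker.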