
fix: kong stops responding#4857

Open
unlair wants to merge 1 commit into supabase:develop from unlair:fix/kong-stops-responding

Conversation

@unlair
Contributor

@unlair unlair commented Feb 15, 2026

Kong easily gets overwhelmed when receiving many requests, resulting in an error complaining about not having enough workers. This was due to KONG_NGINX_WORKER_PROCESSES=1 being set. Kong automatically determines the right number of worker processes when this is not specified, which resolves the issue.

What kind of change does this PR introduce?

Bug fix (removing a bad env var for the Kong container).

What is the current behavior?

If many parallel requests are made to the Storage API, Kong will quickly stop responding and start terminating socket connections.

What is the new behavior?

Kong is able to handle many parallel connections without choking.
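The fix itself is a one-line deletion from the environment slice passed to the Kong container. As a minimal, self-contained Go sketch of the idea (the `dropEnv` helper and the sample slice are illustrative, not the actual code in internal/start/start.go):

```go
package main

import (
	"fmt"
	"strings"
)

// dropEnv removes every KEY=VALUE entry whose key matches name from an
// environment slice, leaving the rest in order.
func dropEnv(env []string, name string) []string {
	out := env[:0:0]
	for _, e := range env {
		if strings.HasPrefix(e, name+"=") {
			// Skip the pinned worker count; Kong then falls back to its
			// own default and sizes workers automatically.
			continue
		}
		out = append(out, e)
	}
	return out
}

func main() {
	env := []string{
		"KONG_DATABASE=off",
		"KONG_NGINX_WORKER_PROCESSES=1",
	}
	fmt.Println(dropEnv(env, "KONG_NGINX_WORKER_PROCESSES"))
	// prints [KONG_DATABASE=off]
}
```

In the real change there is no filtering step at all: the literal `"KONG_NGINX_WORKER_PROCESSES=1"` entry is simply deleted from the slice.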

@unlair unlair requested a review from a team as a code owner February 15, 2026 04:37
@coderabbitai

coderabbitai bot commented Feb 15, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 94499ee9-703d-4660-b427-efaba939e5c4

📥 Commits

Reviewing files that changed from the base of the PR and between a2916f7 and 7437bd8.

📒 Files selected for processing (1)
  • internal/start/start.go
💤 Files with no reviewable changes (1)
  • internal/start/start.go

📝 Walkthrough

Summary by CodeRabbit

  • Chores
    • Removed a fixed worker-process setting from the Kong container environment so it will use the platform's default worker configuration.

Walkthrough

Removed the environment variable KONG_NGINX_WORKER_PROCESSES=1 from the Kong container environment in internal/start/start.go. This single-line deletion modifies the environment passed to the Kong container at startup.

🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main change: removing a problematic environment variable that was causing Kong to stop responding under load. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Merge Conflict Detection | ✅ Passed | No merge conflicts detected when merging into develop |


@unlair unlair force-pushed the fix/kong-stops-responding branch from c8db24d to a2916f7 on February 28, 2026 at 01:49
@unlair unlair force-pushed the fix/kong-stops-responding branch from a2916f7 to 7437bd8 on March 9, 2026 at 22:35
@coveralls

coveralls commented Mar 9, 2026

Pull Request Test Coverage Report for Build 23751210641

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 7 unchanged lines in 2 files lost coverage.
  • Overall coverage decreased (-0.02%) to 63.114%

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| internal/storage/rm/rm.go | 2 | 80.61% |
| internal/utils/git.go | 5 | 57.14% |

Totals Coverage Status

  • Change from base Build 23735607557: -0.02%
  • Covered Lines: 9202
  • Relevant Lines: 14580

💛 - Coveralls

@unlair unlair force-pushed the fix/kong-stops-responding branch from 7437bd8 to bbe7012 on March 15, 2026 at 19:08
Kong easily gets overwhelmed when receiving many requests, resulting in an error complaining about not having enough workers. This was due to KONG_NGINX_WORKER_PROCESSES=1 being set. Kong automatically determines the right number of worker processes when this is not specified, which resolves the issue.
@unlair unlair force-pushed the fix/kong-stops-responding branch from bbe7012 to 3d8519c on March 30, 2026 at 14:52
Member

@avallete avallete left a comment


Thanks for the contribution and for digging into this! The diagnosis is spot on: a single Nginx worker can definitely get saturated under parallel load (bulk Storage uploads being the classic case).

For context, KONG_NGINX_WORKER_PROCESSES=1 was set intentionally to reduce Kong's memory footprint in the local dev stack. This is a common pattern for projects embedding Kong in Docker; see edgexfoundry/edgex-compose#177 for a similar rationale. With auto, Kong spawns one worker per visible CPU core, which on a modern dev machine (8–16 cores) means 8–16 Nginx workers, each with its own Lua VM and connection pools. Given that supabase start already launches ~12 containers, the extra memory pressure adds up.

Rather than removing the line entirely, would you consider setting it to 2 instead? That should resolve the concurrency bottleneck you're hitting (two workers can handle a solid amount of parallel connections) while keeping the local stack reasonably lightweight for the majority of users who aren't doing heavy parallel workloads.

For users who genuinely need auto (e.g., load testing against the local stack), this could also be a good candidate for a config.toml override down the line (or a passed-down env config), but that's a separate effort and shouldn't block this fix.

What do you think?
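If that compromise is adopted, the change would become an edit rather than a deletion. Sketched as a hypothetical diff against the env slice in internal/start/start.go (the surrounding formatting is illustrative):

```diff
-		"KONG_NGINX_WORKER_PROCESSES=1",
+		"KONG_NGINX_WORKER_PROCESSES=2",
```

Two workers would double the available Nginx event loops while keeping memory usage close to the current single-worker footprint.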

