
Request/Response Buffering API support #3706

Open
luvk1412 opened this issue Jun 29, 2024 · 3 comments
Labels: area/api (API-related issues), area/policy, kind/feature (new feature)

Comments

luvk1412 (Contributor) commented Jun 29, 2024

Feature

I would like a direct API in Envoy Gateway to enable request and response buffering. Today this can be achieved with an EnvoyPatchPolicy, using either the Buffer filter (request buffering only) or the File System Buffer filter (both request and response buffering).

I don't want to use EnvoyPatchPolicy in production, hence would prefer a first-class API in Envoy Gateway.
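For context, the workaround today looks roughly like the following EnvoyPatchPolicy, which injects the Buffer HTTP filter into a listener's filter chain. This is a sketch: the policy name, target Gateway name, generated listener name, and the JSON patch path are placeholders that depend on the Envoy config Envoy Gateway actually generates for your setup.

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: request-buffering          # hypothetical name
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg                       # hypothetical Gateway name
  type: JSONPatch
  jsonPatches:
    - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
      name: default/eg/http        # generated listener name; depends on your setup
      operation:
        op: add
        # insert the Buffer filter ahead of the other HTTP filters
        path: "/filter_chains/0/filters/0/typed_config/http_filters/0"
        value:
          name: envoy.filters.http.buffer
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.buffer.v3.Buffer
            max_request_bytes: 1048576   # buffer up to 1 MiB per request
```

Maintaining a raw JSON patch like this against generated config is exactly the fragility that makes a first-class API preferable.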

Use Case

One of our main reasons for setting up Envoy Gateway is to have a layer between our upstream servers and the AWS ALB that can buffer both requests and responses, to protect against slow DDoS attacks. The reasons are explained in more detail in points 1 and 2 of the “What is it Good For?” section of the File System Buffer documentation:

  1. To shield a server from intentional or unintentional denial of service via slow requests. Normal requests open a connection and stream the request. If the client streams the request very slowly, the server may have its limited resources held by that connection for the duration of the slow stream. With one of the “always buffer” configurations for requests, the connection to the server is postponed until the entire request has been received by Envoy, guaranteeing that from the server’s perspective the request will be as fast as Envoy can deliver it, rather than at the speed of the client.
  2. Similarly, to shield a server from clients receiving a response slowly. For this case, an “always buffer” configuration is not a requirement. The standard Envoy behaviour already implements a configurable memory buffer for this purpose, that will allow the server to flush until that buffer hits the “high watermark” that provokes a request for the server to slow down.

Some of our upstream servers run a thread-per-request model, so the above types of attack are especially critical for us: they can quickly lead to server downtime.
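An "always buffer" setup with the File System Buffer filter would look roughly like the Envoy filter config below. This is a sketch based on the filter's documented fields; the thread count, buffer limits, and storage path are illustrative values, not recommendations.

```yaml
name: envoy.filters.http.file_system_buffer
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.http.file_system_buffer.v3.FileSystemBufferFilterConfig
  manager_config:
    thread_pool:
      thread_count: 2
  storage_buffer_path: /tmp/envoy-buffer     # illustrative spill-to-disk location
  request:
    behavior:
      fully_buffer: {}                       # don't contact upstream until the whole request is received
    memory_buffer_bytes_limit: 1048576       # 1 MiB in memory before spilling to disk
    storage_buffer_bytes_limit: 33554432     # up to 32 MiB on disk
  response:
    behavior:
      stream_when_possible: {}               # buffer only when the client reads slowly
```

The `fully_buffer` request behavior covers point 1 above (slow-request protection); for responses, per point 2, full buffering is not required, so `stream_when_possible` suffices.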

@zhaohuabing zhaohuabing added kind/feature new feature area/api API-related issues area/policy and removed triage labels Jun 30, 2024

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

@github-actions github-actions bot added the stale label Jul 31, 2024
@arkodg arkodg removed the stale label Aug 16, 2024
arkodg (Contributor) commented Sep 4, 2024

luvk1412 (Contributor, Author) commented Sep 16, 2024

@arkodg I think the bufferLimit field in ClientConnection defines the buffer size for the connection with the downstream (32 KiB by default, ref). I believe it corresponds to per_connection_buffer_limit_bytes in config.listener.v3.Listener. My guess is based on

irConnection.BufferLimitBytes = ptr.To(uint32(bufferLimit))
and
bufferLimitBytes := buildPerConnectionBufferLimitBytes(connection)

The Buffer / File System Buffer filters, on the other hand, act on the requests arriving over those connections: they decide whether a request should be buffered (in memory or on disk) until the whole request, including the body, has been received, and the sizes they specify apply to requests, not to the connection buffer.
Please correct me if I am wrong here.
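To illustrate the distinction, the existing knob sets a per-connection buffer via ClientTrafficPolicy, not request buffering. A sketch, with the policy and Gateway names as placeholders and field placement following my reading of the ClientTrafficPolicy API:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: connection-buffer          # hypothetical name
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg                       # hypothetical Gateway name
  connection:
    bufferLimit: 64Ki              # maps to per_connection_buffer_limit_bytes on the listener
```

This limits how much Envoy holds per connection, but it does not delay contacting the upstream until the request body is complete, which is what the requested buffering API would do.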

3 participants