This repository has been archived by the owner on Sep 20, 2023. It is now read-only.

Add sendfile_max_chunk #8

Open

tumb1er opened this issue Dec 29, 2020 · 0 comments

Comments


tumb1er commented Dec 29, 2020

We recently had a very nice evening with large corrupted files on a specific hardware setup.

Here is an issue describing a situation like ours:

  • We have a virtual machine running Linux on network-attached disk storage
  • The network is faster than the disks
  • The files being served are ~10-50 GB

Like in that issue, we had curl terminating with an incomplete file read and a "client timed out" message in the nginx logs. The nginx debug logs show 2GB chunks being sent via sendfile. The network is faster than the disks, so sendfile does not block, and a single 2GB sendfile call lasts more than a minute if the disk read speed is below ~36 MBytes per second (2GB / 60s). One minute is the default value of nginx's send_timeout.
After adding the sendfile_max_chunk setting, the sendfile calls become smaller and more frequent, so the nginx send_timeout is never reached.

We solved our issue by adding sendfile_max_chunk 1m; to the nginx configuration, without any further performance tuning. I propose adding some reasonable value of this directive to the webdav nginx config.
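A minimal sketch of what the proposed setting could look like; the location block and the 1m value are illustrative assumptions, not the repository's actual config:

```nginx
# Illustrative sketch only; the location and value are placeholders.
location / {
    sendfile           on;
    # Cap each sendfile() call so a slow disk read cannot stall a single
    # call past send_timeout (60s by default) when the network is fast.
    sendfile_max_chunk 1m;
}
```

With 1m chunks, each sendfile() call completes in well under a second even at modest disk read speeds, and since send_timeout applies between successive write operations, the timeout is never hit.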
