too many files open #17

Open
titanturtles opened this issue Oct 31, 2022 · 1 comment

titanturtles commented Oct 31, 2022

First of all, thanks to everyone who contributed to sarpedon. This server has helped our cyber teams a lot; we have been using it for a while and it runs great, with very fast scoring. We've just noticed that the server hangs once in a while, and these are the errors we saw:

Oct 30 22:51:15 hpdesk sarpedon[2805868]: 2022/10/30 22:51:15 http: Accept error: accept tcp [::]:4013: accept4: too many open files; retrying in 1s
Oct 30 22:51:16 hpdesk sarpedon[2805868]: 2022/10/30 22:51:16 http: Accept error: accept tcp [::]:4013: accept4: too many open files; retrying in 1s
Oct 30 22:51:17 hpdesk sarpedon[2805868]: 2022/10/30 22:51:17 http: Accept error: accept tcp [::]:4013: accept4: too many open files; retrying in 1s

We did some research, and it seems that Go's net/http package does not set any timeouts by default (https://stackoverflow.com/questions/52456506/getting-too-many-open-files-during-load-test-with-gin-gonic). The package used in this repo is gin-gonic, which uses net/http under the hood, so each stalled connection holds its file descriptor open indefinitely. We tried adding timeouts in our local server (line 79, main.go):

	// Previously: r.Run(":4013"), which starts an http.Server with no timeouts.
	// Wrapping the gin engine in an explicit http.Server lets us disconnect
	// slow or stalled clients instead of letting them hold file descriptors
	// open forever. (Requires "net/http", "time", and "log" in the imports.)
	s := &http.Server{
		Addr:           ":4013",
		Handler:        r,
		ReadTimeout:    10 * time.Second, // max time to read the full request
		WriteTimeout:   10 * time.Second, // max time to write the response
		MaxHeaderBytes: 1 << 20,          // 1 MiB cap on request headers
	}
	log.Fatal(s.ListenAndServe()) // report why the server stopped
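
(One detail worth noting, from the net/http documentation rather than anything sarpedon-specific: when http.Server.IdleTimeout is zero, as in the snippet above, keep-alive connections fall back to ReadTimeout, so idle connections are also bounded by the same 10 seconds.)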

This seems to be working so far, but we haven't tested it extensively yet.
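
For anyone who wants to watch descriptor usage while testing, here is a minimal sketch (an illustrative helper of ours, not part of sarpedon; Linux-only, since it counts entries in /proc/self/fd):

	// countFDs reports how many file descriptors this process currently
	// has open by listing /proc/self/fd. The count includes the descriptor
	// used to read the directory itself. (Requires the "os" import.)
	func countFDs() (int, error) {
		fds, err := os.ReadDir("/proc/self/fd")
		if err != nil {
			return 0, err
		}
		return len(fds), nil
	}

Logging this number periodically makes it easy to confirm that connections are actually being reclaimed once the timeouts fire.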

sourque (Collaborator) commented Nov 1, 2022

Hi titanturtles, glad to hear this software has been helpful :) And thanks for the contribution!

We've added a timeout to the web server (9c5baf9); hopefully that upstreams your fix correctly. PRs are always welcome as well if you have any future fixes.

That said, it's strange that you would run into this issue unless you have a ton of competitors who all have slow uplinks. If you find people hitting timeouts now, you may want to increase the timeout values and raise the open file limit for the system and for sarpedon (with sysctl/ulimit).
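
If changing ulimits is awkward in your setup, the soft limit can also be raised from inside the process at startup. A minimal sketch of that approach, not something sarpedon does today (assumes Linux and the standard syscall package; needs "log" and "syscall" imported):

	// Raise this process's soft RLIMIT_NOFILE to the hard limit so the
	// listener can hold more simultaneous connections before accept fails.
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		log.Fatal(err)
	}
	rl.Cur = rl.Max // the soft limit may not exceed the hard limit
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		log.Fatal(err)
	}

Note that Go 1.19+ already raises the soft RLIMIT_NOFILE to the hard limit at startup for programs that import package os, so this mainly matters on older toolchains; under systemd, LimitNOFILE= in the unit file achieves the same thing without code changes.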
