
Benchmarks not reproducible #6

Open
Gobd opened this issue Dec 21, 2024 · 0 comments

Comments


Gobd commented Dec 21, 2024

How is this benchmark valid if the port is randomly generated and only retrievable from a log? It seems like you should benchmark port 8080, since that's what would actually be used in reality. It would also be nice if these benchmarks included the steps to reproduce them.

In my tests with Encore.ts I get 44k req/s with the regular API and 4k req/s with the raw API (how is the raw API 10x worse?), compared to 260k req/s with Elysia on Bun and 150k req/s with Hono on Bun.

I ran my test on my MacBook Pro with Docker, so maybe arm64 or Docker has something to do with it. Here's what I did: https://github.com/Gobd/nodebench/blob/main/runtimes/encore/build.sh. After that I ran the test using Bombardier against port 8080 from another Docker container, using a Compose file.
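For anyone trying to reproduce this, the setup was roughly along these lines. This is a sketch, not my exact files: the service names, the endpoint path, and the bombardier image are illustrative, and the real build steps are in the build.sh linked above.

```yaml
# docker-compose.yml -- minimal sketch of the setup described above
services:
  encore:
    build: ./runtimes/encore   # image produced by build.sh
    ports:
      - "8080:8080"            # the app must actually listen on 8080
  bench:
    image: alpine/bombardier   # illustrative; any container with bombardier works
    depends_on:
      - encore
    # 125 concurrent connections for 30 seconds against the app container;
    # /hello is a placeholder for whatever endpoint the benchmark hits
    command: ["-c", "125", "-d", "30s", "http://encore:8080/hello"]
```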

[image: "reqs" chart of benchmark results]

As you can see from these results, Encore is one of the worst-performing frameworks one could pick. The instructions here are not useful because I could not actually use that random port in production.
