-
It would be a great opportunity to see uWebSockets in the TechEmpower benchmarks! IMHO it could grow the uWebSockets community and introduce it to more people.
-
The TechEmpower benchmarks are not web server benchmarks. They are essentially database benchmarks, as all of their tests involve database lookups.
-
Those are web server benchmarks as well, note the plaintext tests.
-
Yep, the plaintext benchmarks are the only ones that qualify. But their execution is poor and tainted. So the overall conclusion is: we do our own benchmarks better, as we have done since 2016.
-
@AmirHmZz @szmarczak @alexhultman Just out of curiosity: the PyPy3 binding is on the same level as Drogon C++ (a little behind), but got better results than Fibers. Today I was testing with the TechEmpower plaintext test. On my machine at least, none of these gets past 200k req/s with JSON (which is limited by the JSON framework); the PyPy JSON implementation is really good and got 227k req/s, more than 10% over Drogon and Fibers, which got up to 197k and 199k respectively. But this is basically a JSON framework test, not an HTTP load test. I will test the Ruby extension, the Node.js extension and the C++ version today and will post the results here. I think posting to TechEmpower is a matter of marketing to get more users; the uSockets load tests here are way better for finding performance bottlenecks than TechEmpower, which uses wrk + Lua for pipelining.
-
I think at least people will say WTF is this uWebSockets that tops the charts using C++, Python, Ruby, Lua. @alexhultman your work is really amazing :)
-
@alexhultman would you please reconsider joining the TechEmpower benchmarks? :(
-
TechEmpower is not scientific. You talk about JSON - why? JSON has literally no relation to uWS whatsoever. This is scientific nonsense, and anyone with the slightest eye for statistics can see that their plaintext test is full of 20-or-so "winners" with statistically the same result. They cap out on network bandwidth and their pipelined tests are unrealistic.

I can write a small C server using uSockets and win that list with ease, but it will just be one of the 20-or-so random winners. Comparing a standards-compliant server with a URL router, WebSocket support and header getters and setters against raw benchmark winners that do nothing other than just win with minimal effort says nothing. Putting uWS in that list is just going to put it among the 20 winners. Then some non-scientific interpretation of that list will conclude some one winner and everyone will freak out about it even though it does nothing more than win benchmarks.

Therefore it makes no sense to add uWS, but rather to add uSockets with the most minimal hack to get through that test. And that's not something I have time for or interest in. This project is 7 years old and it is really not in a position where it needs to win benchmarks; that's already established.
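As a concrete reference point for what "a standards-compliant server with a URL router, WebSocket support and header getters and setters" looks like in application code, here is a minimal hypothetical sketch using the uWS::App C++ API from this repo. The include path, port and routes are illustrative assumptions, not anything prescribed in this thread:

```cpp
// Hypothetical sketch only: roughly what application code looks like with a
// standards-compliant server that has a URL router, header getters/setters and
// WebSocket support (here, uWS::App).
#include <App.h>        // uWebSockets; the exact include path depends on your setup
#include <string>
#include <string_view>

struct PerSocketData {};

int main() {
    uWS::App()
        .get("/*", [](auto *res, auto *req) {
            // Request header getter and response header setter on a routed handler
            res->writeHeader("Content-Type", "text/plain")
               ->end("Hello! Your User-Agent was: " +
                     std::string(req->getHeader("user-agent")));
        })
        .ws<PerSocketData>("/*", {
            // WebSocket support on the same port: a trivial echo handler
            .message = [](auto *ws, std::string_view message, uWS::OpCode opCode) {
                ws->send(message, opCode);
            }
        })
        .listen(3000, [](auto *listenSocket) {
            // listenSocket is nullptr if port 3000 could not be bound
        })
        .run();
}
```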
-
The more I look at this, the more I confirm my utter lack of respect for these tests:
-
Let me ask you a simple question: do you think the application code below would be comparable with the kind of application code that uWS allows? The answer is clear as day; no fucking way in hell these are comparable...
-
Final conclusion regarding TechEmpower tests:
Instead of blindly falling for the nice colorful graphs and the exciting "gladiator style" competitive tournament reporting, with one single winner, you should stop, use your analytical brain, actually look into what these tests do, and think:
1. TechEmpower is a database test disguised as a web server test.
Almost all tests include some database, and all of the 119 top performers use PostgreSQL while most of the bottom performers use MySQL. Any analytical brain would conclude: if 119 of the top "web servers" use PostgreSQL, then we can group those winners under the category "PostgreSQL users" rather than under their respective web server. It becomes obvious that the only common trait between the 119 top performers is..... PostgreSQL, so it makes no scientific sense to attribute the victory to the web server (which TechEmpower does!). This would also be obvious if they reported CPU usage per web server, which they don't.
2. The few tests that actually only test the web server itself are cheated.
Looking at the kind of solution that wins the plaintext tests, it should be obvious that these are nothing but unsafe, corner-cutting hacks which blatantly ignore the given rules of the game.
3. Any proper study must refrain from drawing significant conclusions based on insignificant data!
This is the thing that drives me mad. Because TechEmpower is a "gladiator tournament" where there must be only and exactly one winner, scientific sacrilege must be performed. This is because the actual data clearly shows that there are about 10 or 20 "draws" at the top, where the differences are so small they cannot even be concluded to be actual differences rather than random noise. You cannot, absolutely not, draw significant conclusions based on insignificant data! You must not declare one single "winner" from a spectrum of results so close they must be observed with an electron microscope to even spot an edge.
4. Adding yet another such hack to win their benchmark is pointless and childish.
Not only is it pointless, but it also further gives TechEmpower credibility, something they clearly should not be given, seeing how utterly non-scientific their tests are. Anyone can win such a test! You don't even need to parse HTTP! You can just count the number of CRLFCRLF terminators you receive, sending back already pre-formatted and pre-laid-out buffers in one single call (a rough sketch of this kind of hack follows below). Heck, you can even figure out the size of the request they send us, roughly divide the incoming data by that number, and send back that many responses from an already pre-formatted and pre-replicated long buffer. You can optimize this to almost nothing at all, which is exactly what these "solutions" already have done! None of this is usable in any realistic app. We could easily add yet another benchmark-winning hack to TechEmpower, easily scoring among the top 5 "winners", but who cares? None of this is realistic in any way and none of those hacks have any real production usage.
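To make point 4 concrete, here is a hypothetical sketch of the kind of corner-cutting "plaintext winner" described above: no HTTP parsing at all, just counting CRLFCRLF terminators and replying from a pre-formatted buffer. The function names, response body and missing socket layer are all made up for illustration; this is not any actual TechEmpower submission:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <string_view>

// One canned, pre-serialized HTTP response ("Hello, World!" is 13 bytes).
static constexpr std::string_view kCannedResponse =
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/plain\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, World!";

// Count complete pipelined requests by counting CRLFCRLF terminators, then
// build one buffer holding that many pre-formatted responses to send in a
// single write. A real hack would keep a pre-replicated buffer around instead
// of rebuilding it for every read.
std::string answerPipelinedBatch(std::string_view incoming) {
    std::size_t requests = 0;
    for (std::size_t pos = 0;
         (pos = incoming.find("\r\n\r\n", pos)) != std::string_view::npos;
         pos += 4) {
        ++requests;
    }

    std::string out;
    out.reserve(requests * kCannedResponse.size());
    for (std::size_t i = 0; i < requests; ++i) {
        out.append(kCannedResponse);
    }
    return out;
}

int main() {
    // Two pipelined GETs arriving in one read; the "server" never parses them.
    std::string_view batch =
        "GET /plaintext HTTP/1.1\r\nHost: x\r\n\r\n"
        "GET /plaintext HTTP/1.1\r\nHost: x\r\n\r\n";
    std::cout << answerPipelinedBatch(batch).size() / kCannedResponse.size()
              << " responses generated\n";
}
```

A loop like this does no validation, no routing and no header handling, which is exactly why it is not comparable to real application code.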