Write performance problem, cannot reach 1000 QPS #27
Comments
The server, as it currently is, is still at a very early stage of development; I believe there are still a few issues (cluster-wise), and it can definitely be optimized in many areas. I'd love to have more help figuring out where the bottlenecks are. Depending on the data being indexed, my tests show it can be faster than Elasticsearch (indexing some datasets), but it certainly lags well behind with others. Welcome on board!
We plan to use Xapiand to replace Elasticsearch in production, and we are working on it.
That is awesome!
BTW, why did you guys develop Xapiand?
Why? And how's it going?
I ran Xapiand as a test, and found the write throughput can hardly reach 1000 QPS, even as a cluster. We write with Golang and MessagePack, and the same services can sustain about 800k QPS when writing to Elasticsearch. With Xapiand the rate is about 900 QPS at the beginning, and it slows down to 500 QPS as the document count grows to 500k.
Meanwhile, the CPU usage of the Xapiand service stays below 1000% (the machine has 64 cores). The server is a dual Xeon Gold 5218 with 256 GB RAM and two Intel SSD DC P4510 2.0 TB drives, running Debian 9, so the hardware does not seem to be the bottleneck.
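For reference, here is a minimal sketch of the kind of load generator described above: 64 goroutines writing MessagePack-encoded documents over HTTP and reporting overall QPS. The port, URL layout, and content type are assumptions for illustration (the issue does not include the actual client code), and `github.com/vmihailenco/msgpack/v5` stands in for whatever MessagePack library the services use.

```go
// Hypothetical load generator approximating the test described above.
// The endpoint (http://127.0.0.1:8880/test_index/<id>) and the
// application/x-msgpack content type are assumptions, not taken
// from the issue.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync"
	"sync/atomic"
	"time"

	"github.com/vmihailenco/msgpack/v5"
)

func main() {
	const (
		workers = 64
		docs    = 100_000
		base    = "http://127.0.0.1:8880/test_index" // assumed endpoint
	)

	var done int64
	var wg sync.WaitGroup
	start := time.Now()

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			client := &http.Client{Timeout: 5 * time.Second}
			for i := w; i < docs; i += workers {
				// Encode a small document as MessagePack.
				body, err := msgpack.Marshal(map[string]interface{}{
					"user": fmt.Sprintf("user-%d", i),
					"text": "hello xapiand",
				})
				if err != nil {
					continue
				}
				url := fmt.Sprintf("%s/%d", base, i)
				req, err := http.NewRequest(http.MethodPut, url, bytes.NewReader(body))
				if err != nil {
					continue
				}
				req.Header.Set("Content-Type", "application/x-msgpack") // assumed
				resp, err := client.Do(req)
				if err != nil {
					continue
				}
				resp.Body.Close()
				atomic.AddInt64(&done, 1)
			}
		}(w)
	}
	wg.Wait()

	elapsed := time.Since(start)
	fmt.Printf("indexed %d docs in %v (%.0f QPS)\n",
		done, elapsed, float64(done)/elapsed.Seconds())
}
```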
I tried to profile the Xapiand service with perf (flame graph SVG attached to this issue), and found the `doc_preparer` threads waiting on a spin lock in `enqueue` and `dequeue`. I read the code and tried to make some changes, but my C++ skills are very poor. Is there anywhere we could optimize? Thank you for your awesome work, and I look forward to your help.
xapian-perf.svg.zip
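Xapiand itself is C++, but the contention pattern the flame graph points at translates to any language: many producer threads spinning on one lock around a shared queue keep burning CPU cycles without making progress, while a blocking queue parks waiters instead. Here is a small Go sketch of the difference; it is purely illustrative, the names and sizes are made up, and it is not Xapiand's actual queue code.

```go
// Illustrative only: contrast a spin-lock-guarded queue with a blocking
// channel under heavy producer contention. Not Xapiand's actual code.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
	"time"
)

// spinLock is a naive test-and-set spin lock, similar in spirit to a
// C++ std::atomic_flag spin lock guarding a shared queue.
type spinLock struct{ state int32 }

func (l *spinLock) Lock() {
	for !atomic.CompareAndSwapInt32(&l.state, 0, 1) {
		runtime.Gosched() // waiters stay runnable instead of parking
	}
}

func (l *spinLock) Unlock() { atomic.StoreInt32(&l.state, 0) }

func main() {
	const (
		producers = 64      // mirrors the 64-core box in the issue
		items     = 128_000 // 2000 documents per producer
	)

	// Variant 1: shared slice guarded by the spin lock. Every producer
	// contends on the same lock for every enqueue.
	var (
		lock  spinLock
		queue []int
		wg    sync.WaitGroup
	)
	start := time.Now()
	for p := 0; p < producers; p++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < items/producers; i++ {
				lock.Lock()
				queue = append(queue, i)
				lock.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Printf("spin lock:        %v for %d items\n", time.Since(start), len(queue))

	// Variant 2: buffered channel, i.e. a blocking queue that parks
	// producers when full instead of spinning.
	ch := make(chan int, 1024)
	drained := make(chan struct{})
	go func() { // single consumer, standing in for the dequeue side
		for range ch {
		}
		close(drained)
	}()
	start = time.Now()
	for p := 0; p < producers; p++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < items/producers; i++ {
				ch <- i
			}
		}()
	}
	wg.Wait()
	close(ch)
	<-drained
	fmt.Printf("blocking channel: %v for %d items\n", time.Since(start), items)
}
```

The point of the sketch is only the pattern: under heavy producer contention a spin lock keeps all waiters on-CPU, which is consistent with threads showing up in the profile as waiting in `enqueue` and `dequeue`.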