
Write performance problem, can not reach 1000qps #27

Open
samuelncui opened this issue Nov 1, 2019 · 5 comments

Comments


samuelncui commented Nov 1, 2019

I ran Xapiand as a test and found that write throughput can hardly reach 1000 qps, even as a cluster. The writers use Golang and msgpack, and the same services sustain about 800k qps when writing to Elasticsearch. The rate is about 900 qps at the beginning and slows down to 500 qps as the document count grows to 500k.

Meanwhile, CPU usage of the Xapiand service stays below 1000% (the machine has 64 cores), so the hardware does not seem to be the bottleneck. The server is a dual Xeon Gold 5218 with 256 GB RAM and two Intel SSD DC P4510 2.0 TB drives, running Debian 9.

I tried to perf the Xapiand service (SVG attached to this issue) and found the doc_preparer threads waiting on a spin lock in enqueue and dequeue. I read the code and tried to change a few things, but my C++ skills are very poor. Is there anywhere we could optimize?

Thank you for your awesome work; I look forward to your help.

xapian-perf.svg.zip

@samuelncui samuelncui changed the title write performance problem, can not reach 1000qps Write performance problem, can not reach 1000qps Nov 1, 2019
Kronuz (Owner) commented Nov 1, 2019

The server, as it currently stands, is still at a very early stage of development; I believe there are still a few issues (cluster-wise), and it can definitely be optimized in many areas.

There are lots of places to push optimizations and many opportunities to improve. I'd definitely love to have more help figuring out where the bottlenecks are.

Depending on the data being indexed, my tests show it can be faster than Elasticsearch (indexing some datasets), but it certainly lags well behind with others.

Welcome on board!

zhanglistar commented

> The server, as it currently stands, is still at a very early stage of development; I believe there are still a few issues (cluster-wise), and it can definitely be optimized in many areas.
>
> There are lots of places to push optimizations and many opportunities to improve. I'd definitely love to have more help figuring out where the bottlenecks are.
>
> Depending on the data being indexed, my tests show it can be faster than Elasticsearch (indexing some datasets), but it certainly lags well behind with others.
>
> Welcome on board!

We will use Xapiand to replace Elasticsearch in production, and we are working on it.

Kronuz (Owner) commented Jan 16, 2020

> We will use Xapiand to replace Elasticsearch in production, and we are working on it.

That is awesome!

zhanglistar commented

> > We will use Xapiand to replace Elasticsearch in production, and we are working on it.
>
> That is awesome!

BTW, why did you develop Xapiand?

kelly6 commented Nov 18, 2020

> > The server, as it currently stands, is still at a very early stage of development; I believe there are still a few issues (cluster-wise), and it can definitely be optimized in many areas.
> > There are lots of places to push optimizations and many opportunities to improve. I'd definitely love to have more help figuring out where the bottlenecks are.
> > Depending on the data being indexed, my tests show it can be faster than Elasticsearch (indexing some datasets), but it certainly lags well behind with others.
> > Welcome on board!
>
> We will use Xapiand to replace Elasticsearch in production, and we are working on it.

Why? And how's it going?
