Elasticsearch "Too Many Requests" #73
Comments
hmmm.... not so easy
You can try to override the
This definitely sounds like a problem; it could be solved on the driver side as well (some kind of batch ops mechanism is better anyway). I will inspect it deeper in the coming days as I don't have the time currently; a written test to test against would be of great help.
Was thinking a bit on the problem. Question: does it make sense to change it to something like: ... This will make the replay take a bit longer (3-4 seconds more in my case). Some benchmarking (with one event at a time, 5 at a time, and uncontrolled): The reasoning behind this is: --Fabio
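The batching idea above can be sketched as follows. This is a hypothetical helper, not the actual `collection.js` code; `handleEvent` stands in for whatever sends the POST to Elasticsearch:

```javascript
// Hypothetical sketch: replay events in fixed-size batches instead of
// firing every request at once. `handleEvent` is a placeholder for the
// function that actually POSTs one event's result to Elasticsearch.
async function replayInBatches(events, handleEvent, batchSize = 5) {
  const results = [];
  for (let i = 0; i < events.length; i += batchSize) {
    const batch = events.slice(i, i + batchSize);
    // Wait for the whole batch before starting the next one, which
    // caps the number of in-flight requests at `batchSize`.
    results.push(...await Promise.all(batch.map(handleEvent)));
  }
  return results;
}
```

With `batchSize = 1` this matches the "one event at a time" benchmark case; with `batchSize = 5`, the "5 at a time" case.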
I don't think the order is relevant as those are different VMs; this also means that the error occurs when there are a lot of VMs, and not events (even though VMs result from events, it is important to point it out). Btw, you could try playing with the "refresh" option in the elasticsearch6 driver (i.e. repository) options; it is 'true' by default, maybe you can see what happens if it is set to Again, the best way to solve this would be with batch operations (not only for ES btw, for all DBs that support such operations) on the driver side; this way it will be far more effective and safe.
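For context, such repository options are usually passed as a plain object. The option names below (`type`, `host`, `port`) are assumptions for illustration and not verified against the elasticsearch6 driver; only `refresh` is taken from the comment above:

```javascript
// Hypothetical repository options sketch. Per the comment above,
// `refresh` defaults to true in the elasticsearch6 driver; turning
// per-write index refresh off trades read-after-write visibility
// for indexing throughput under heavy replay load.
const repositoryOptions = {
  type: "elasticsearch6", // assumed driver name, per this thread
  host: "localhost",      // assumed connection settings
  port: 9200,
  refresh: false          // assumption: disable per-operation refresh
};
```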
i.e. this for mongodb? https://docs.mongodb.com/manual/core/bulk-write-operations/
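The MongoDB bulk API linked above takes an array of operation documents. A pure helper that builds such an array from view models can be sketched and tested without a live database; the vm shape (`{ id, attributes }`) is an assumption for illustration:

```javascript
// Sketch: turn an array of view models into MongoDB bulkWrite operations.
// The vm shape ({ id, attributes }) is assumed, not the library's actual one.
function toBulkOps(vms) {
  return vms.map((vm) => ({
    replaceOne: {
      filter: { _id: vm.id },
      replacement: { _id: vm.id, ...vm.attributes },
      upsert: true, // create the document if it does not exist yet
    },
  }));
}

// With the official MongoDB driver this would then be executed as:
//   await collection.bulkWrite(toBulkOps(vms), { ordered: false });
```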
FYI: with this change, cqrs-eventdenormalizer is now able to call the more performant bulk function: d853967
This is great, exactly what I had in mind! I am on the ES implementation!
Hi, do you need any help? -fS
elasticsearch6 implementation included in v1.14.3, thanks to @nanov
There is still some work to be done in order to ensure writes in case of a huge amount of VMs and to provide more accurate errors. A first step would be to write a test for a 1000+ VMs bulkcommit and take it from there.
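A starting point for such a test could generate the view models and assert that none are lost. This is only a sketch: `bulkCommit` here is a stubbed placeholder, and the real test would point it at the elasticsearch6 repository instead:

```javascript
// Sketch of a load test: generate 1000+ fake view models and verify that
// a (stubbed) bulkCommit receives and writes all of them.
function makeVms(count) {
  return Array.from({ length: count }, (_, i) => ({
    id: `vm-${i}`,
    attributes: { value: i },
  }));
}

async function testBulkCommit(bulkCommit, count = 1000) {
  const vms = makeVms(count);
  const written = await bulkCommit(vms);
  if (written !== count) {
    throw new Error(`expected ${count} writes, got ${written}`);
  }
  return written;
}
```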
Hi Gents, :) I tried the change and got into trouble. I debugged a bit, and it looks like when a bulk operation is made with an empty array of vms it fails both in Mongo and in Elasticsearch. After handling this, everything runs, but I started getting "ConcurrencyError" on some events. I could use some advice on this one :) There was also a typo in this function: vm -> vms (check --2--) --1--
--2--
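The empty-array failure described above is usually handled with a guard clause before the driver call. A minimal sketch, with assumed function names (`safeBulkCommit`, `driverBulk` are placeholders, not the library's API):

```javascript
// Sketch: skip the driver's bulk call entirely when there is nothing to
// write. Bulk APIs typically reject an empty operations list, so the
// guard reports success with an empty result instead of calling through.
async function safeBulkCommit(driverBulk, vms, callback) {
  if (!vms || vms.length === 0) {
    return callback(null, []); // nothing to do, report success
  }
  try {
    const res = await driverBulk(vms);
    return callback(null, res);
  } catch (err) {
    return callback(err);
  }
}
```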
try v1.14.4
Hi, thanks. We are checking :) So far so good :) Thanks
@fabiomssilva, you have mentioned you get some ConcurrencyErrors; is that an issue with the implementation, or were the errors right? I did some testing during the weekend, and there are two things that I want to implement this week in order to make the driver more stable in high-concurrency scenarios. One would be to set a maximum bulk operation size and, when it is exceeded, to split the operations into chunks. The second would be to implement some sort of buffering mechanism for the normal event handling; this way the driver will be able to handle a huge amount of concurrent events. The way I thought this could be implemented is by setting a max capacity and a timeout, and then the operations will be executed (in a bulk manner) when the capacity is reached OR the timeout is exceeded.
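Both ideas above can be sketched in a few lines. This is a hedged illustration, not the driver's actual implementation; names and sizes are assumptions:

```javascript
// Idea 1 sketch: split a large list of operations into chunks no bigger
// than maxSize, so each bulk request stays within what the server accepts.
function chunk(ops, maxSize) {
  const chunks = [];
  for (let i = 0; i < ops.length; i += maxSize) {
    chunks.push(ops.slice(i, i + maxSize));
  }
  return chunks;
}

// Idea 2 sketch: buffer incoming ops and flush them as one bulk request
// when either the capacity is reached OR the timeout expires.
function makeBuffer(flush, capacity, timeoutMs) {
  let buf = [];
  let timer = null;
  const drain = () => {
    if (timer) { clearTimeout(timer); timer = null; }
    if (buf.length) { const out = buf; buf = []; flush(out); }
  };
  return (op) => {
    buf.push(op);
    if (buf.length >= capacity) drain();           // capacity reached
    else if (!timer) timer = setTimeout(drain, timeoutMs); // or wait
  };
}
```

The timeout guarantees that a trickle of events still gets written promptly, while the capacity cap bounds memory use and request size under bursts.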
Hi, from what I can understand it is fixed with the latest patch. Thanks a lot.
Hi,
This issue is related to issue #70
While replaying a large number of events to Elasticsearch, I find that at a certain point Elasticsearch replies with:
HTTP/1.1 429 Too Many Requests
Checking the issue, I was able to find that all the 'POST' requests to Elasticsearch are done at one point in time. I was able to count over 100 requests in every 10ms time period.
Checking the replay code I see that all the requests are done in:
cqrs-eventdenormalizer/lib/definitions/collection.js, lines 475-490
If we change this block to:
this will implement rate control on the requests.
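Since the original snippet did not survive in this thread, here is a hedged sketch of what such rate control could look like: a small concurrency limiter that caps the number of simultaneous POSTs. All names here are assumptions, not the actual collection.js code:

```javascript
// Sketch: cap the number of concurrent requests during replay with a
// counter-based limiter instead of issuing all POSTs at once.
function limitConcurrency(maxConcurrent) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxConcurrent || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => { active--; next(); }); // free a slot, start next task
  };
  return (task) => new Promise((resolve, reject) => {
    queue.push({ task, resolve, reject });
    next();
  });
}

// Hypothetical usage: wrap each per-vm save so that at most 5 requests
// are in flight at any moment during replay.
//   const limit = limitConcurrency(5);
//   vms.forEach((vm) => limit(() => saveVm(vm)));
```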
But it would be good to be able to control the rate while instantiating the repository (or in any other suitable place).
Is there any possibility to implement this in the lib?
Please let me know.
Thanks
--Fabio