ThreadPool: optional limit for jobs queue #1741
Conversation
@vmaffione thanks for the pull request. Could you update the README to explain the return value? Since it's a breaking change, it will be merged in 0.15.0 after the pending minor releases (0.14.x) are published.
Done.
EXPECT_NO_THROW(task_queue->shutdown());
EXPECT_EQ(queued_count, count.load());
}
Could you insert EXPECT_TRUE(queued_count < number_of_task); at line 6540?
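For context, here is a minimal sketch of the kind of test being discussed. The two-argument ThreadPool constructor (worker count plus queue limit) and the boolean return of enqueue are assumptions based on this pull request; number_of_task, queued_count and count mirror the names used in the snippet above, and the concrete values are hypothetical.

#include <atomic>
#include <memory>
#include <gtest/gtest.h>
#include <httplib.h>

TEST(TaskQueueTest, MaxQueuedRequests) {
  constexpr int number_of_task = 64;  // hypothetical value
  std::atomic<int> count{0};

  // Assumption: one worker thread, at most 2 jobs waiting in the queue.
  auto task_queue = std::make_unique<httplib::ThreadPool>(1, 2);

  int queued_count = 0;
  for (int i = 0; i < number_of_task; i++) {
    // enqueue is assumed to return false when the queue is full.
    if (task_queue->enqueue([&] { count++; })) { queued_count++; }
  }

  // Suggested assertion: at least one enqueue must have been rejected.
  EXPECT_TRUE(queued_count < number_of_task);

  EXPECT_NO_THROW(task_queue->shutdown());
  EXPECT_EQ(queued_count, count.load());
}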
Thanks for the adjustment. But I suggested < instead of <=, so that we can confirm that task_queue->enqueue fails at least once.
Although extremely unlikely, it is still possible that enqueue never fails. The worker thread loop may be faster than the enqueue loop, so the latter never finds the queue full. You can reproduce this condition by adding something like std::this_thread::sleep_for(std::chrono::microseconds(100)); in the enqueue loop.
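As a rough illustration (same hypothetical loop as in the sketch above, not code from the PR, and requiring <thread> and <chrono>): slowing down the producer gives the worker thread time to drain the queue, so enqueue may never observe a full queue.

for (int i = 0; i < number_of_task; i++) {
  // With the producer slowed down, the single worker keeps the queue
  // almost empty, and enqueue may never fail.
  std::this_thread::sleep_for(std::chrono::microseconds(100));
  if (task_queue->enqueue([&] { count++; })) { queued_count++; }
}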
What we can add, instead, is EXPECT_TRUE(queued_count >= 2), because even in the worst case (the worker thread starts after the enqueue loop) two enqueue operations will always succeed.
To be honest, I would like to see something simpler that clearly shows that this edge case is properly handled. The original IncreaseAtomicInteger test isn't intended for this purpose, and I don't think it's a good idea to reuse it as a base...
clang-format done.
What about the same test idea you proposed, but with proper synchronization?
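Something along these lines, for example (a sketch under the same assumptions and headers as the earlier sketch, plus <mutex> and <condition_variable>; not necessarily the test that was merged): the first job blocks the only worker until the main thread has finished probing, so the number of accepted and rejected enqueues becomes deterministic.

TEST(TaskQueueTest, MaxQueuedRequestsDeterministic) {
  constexpr int kMaxQueued = 2;  // hypothetical queue limit
  std::mutex mtx;
  std::condition_variable cv;
  bool worker_busy = false;
  bool release_worker = false;
  std::atomic<int> count{0};

  // Assumption: one worker thread, at most kMaxQueued queued jobs.
  httplib::ThreadPool pool(1, kMaxQueued);

  // First job: signal that the worker picked it up, then block.
  bool first_accepted = pool.enqueue([&] {
    std::unique_lock<std::mutex> lk(mtx);
    worker_busy = true;
    cv.notify_one();
    cv.wait(lk, [&] { return release_worker; });
    count++;
  });
  ASSERT_TRUE(first_accepted);

  // Wait until the worker has dequeued the first job, so the queue is empty
  // (this assumes the pool removes a job from the queue before running it).
  {
    std::unique_lock<std::mutex> lk(mtx);
    cv.wait(lk, [&] { return worker_busy; });
  }

  // Exactly kMaxQueued further jobs fit in the queue...
  for (int i = 0; i < kMaxQueued; i++) {
    EXPECT_TRUE(pool.enqueue([&] { count++; }));
  }
  // ...and the next one must be rejected.
  EXPECT_FALSE(pool.enqueue([&] { count++; }));

  // Unblock the worker and let the queued jobs run.
  {
    std::lock_guard<std::mutex> lk(mtx);
    release_worker = true;
  }
  cv.notify_all();
  pool.shutdown();

  EXPECT_EQ(kMaxQueued + 1, count.load());
}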
Looks much nicer!
Could you take a look at the failure in 'build (macOS-latest)' on GitHub Actions?
https://github.com/yhirose/cpp-httplib/actions/runs/7309110277/job/19916387047?pr=1741
There was a logic bug in the test. It is now fixed.
I've now run the test 10000 times and it always succeeded.
Force-pushed from db4bbb6 to 75b05c3.
For very busy servers, the internal jobs queue where accepted sockets are enqueued can grow without limit. This is a problem for two reasons:
- Queueing too much work causes the server to respond with huge latency, resulting in repeated timeouts on the clients; it is definitely better to reject the connection early, so that the client receives the backpressure signal as soon as the queue is becoming too large.
- The jobs list can eventually cause an out-of-memory condition.
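For illustration, a hedged sketch of how a server could opt into the bounded queue. It assumes the limit is exposed as a second ThreadPool constructor argument, as proposed here (the parameter name and values are illustrative), and uses Server::new_task_queue to install the custom pool.

#include <httplib.h>

int main() {
  httplib::Server svr;

  // Install a thread pool whose jobs queue holds at most 512 accepted
  // sockets; once the queue is full, further connections are rejected
  // early instead of piling up latency and memory.
  svr.new_task_queue = [] {
    return new httplib::ThreadPool(/*num_threads=*/8, /*max_queued=*/512);
  };

  svr.Get("/hi", [](const httplib::Request &, httplib::Response &res) {
    res.set_content("Hello World!", "text/plain");
  });

  svr.listen("0.0.0.0", 8080);
  return 0;
}

With a limit in place, a client that connects while the queue is full should see its connection closed quickly rather than timing out, which is the backpressure behaviour described above.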
@vmaffione thanks for your fine contribution!