AioPipe and AioQueue performance #27
I noticed the line

Alternatively, an (unlikely) source could be caused by the

@kaotika Any update on this? I'd like to understand the latency with aioprocessing before making the decision to use it for a project. Cheers
@kaotika, @Joshuaalbert To identify the performance of the queue mechanism, you may find it helpful to minimise the impact of your test process. Rather than doing some processing on each data item sent via the queues, try sending 1,000,000 items from a generator and time how long that takes overall. Compare your result to the time taken by the generator alone, sending the same data to your timing system without using queues. The difference will be the queue overhead.

On a round trip from main() to an echo worker, via one queue in each direction, I have no problem getting 25k round-trip messages sent. The rate-limiting factor in my case is still the infrastructure and not the queues. The latency you obtain in practice will depend on how long the queue is (i.e. how many items are in it).

You can further accelerate things if you short-cut sending and receiving by presuming the queues are neither empty nor full. For example, you might insert items into the queue like this:
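(The snippet originally posted with this comment is not preserved in this copy of the thread. Below is a minimal sketch of the idea, not the original code; it assumes aioprocessing's `AioQueue` exposes the underlying `multiprocessing.Queue` methods `put_nowait()`/`get_nowait()` alongside its `coro_put()`/`coro_get()` wrappers.)

```python
# Hedged sketch, not the commenter's original snippet: optimistic non-blocking
# put/get with a fall-back to the awaitable coro_* calls when the queue really
# is full/empty. Assumes AioQueue passes put_nowait()/get_nowait() through to
# the underlying multiprocessing.Queue.
import queue  # for the Full/Empty exceptions raised by multiprocessing queues


async def fast_put(q, item):
    try:
        q.put_nowait(item)          # fast path: presume the queue is not full
    except queue.Full:
        await q.coro_put(item)      # slow path: yield to the loop until there is space


async def fast_get(q):
    try:
        return q.get_nowait()       # fast path: presume the queue is not empty
    except queue.Empty:
        return await q.coro_get()   # slow path: yield to the loop until an item arrives
```

The pay-off is that the common case avoids a trip through the event loop machinery; the slower awaitable path only runs when the optimistic assumption fails.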
In general, for a processing chain using queues, the framework of your application will have more impact on the throughput and latency than the performance of the queues themselves. Options for improving performance are fairly well covered in the literature: ensure that producer and consumer tasks use "pull" rather than "push" techniques, wait somewhere sensible, and hand control back to the asyncio loop cooperatively, even using `await asyncio.sleep(0)` if required, etc. If the queues really do become the rate-limiting factor in your system, consider batching items rather than sending them one by one, or consider using multiple queues, so that the data rate reaches the pace you seek.
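As a concrete illustration of the batching suggestion, here is a hedged sketch. The `producer`/`consumer` names and the `process()` handler are hypothetical, and it assumes aioprocessing's `coro_put()`/`coro_get()` coroutines:

```python
# Hypothetical batching sketch: amortise per-item queue overhead by sending
# lists of items instead of single items. None is used as an end-of-stream sentinel.
async def producer(q, items, batch_size=100):
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) >= batch_size:
            await q.coro_put(batch)      # one queue operation carries many items
            batch = []
    if batch:
        await q.coro_put(batch)          # flush the final partial batch
    await q.coro_put(None)               # sentinel: no more data


async def consumer(q, process):
    while True:
        batch = await q.coro_get()
        if batch is None:                # sentinel received, stop consuming
            break
        for item in batch:
            process(item)                # process() is a placeholder per-item handler
```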
Hi,
I wanted to measure the time it takes to send/receive some basic values (floats) from one process to another: one test with a pipe and another with a queue.
The code in short:
- a `manager` (`taskmanager`) starts as many `worker` processes as defined
- each `worker` starts a listener task for the queue and the pipe
- `taskmanager` sends `time.time()` each second to the queue/pipe

On my dev machine (i7-3520M, 3.6 GHz, 16 GB RAM) I got around 0.5-1.9 ms for pipes and 0.9-1.1 ms for queues. I expected pipes and queues to be faster than ~1 ms. Are my expectations or my test code wrong?
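For reference, a minimal sketch of this kind of one-way latency test over a queue is shown below. It is not the issue author's code; it assumes aioprocessing's documented `AioQueue`/`AioProcess` API with `coro_put()` and `coro_join()`, and follows the pattern from the aioprocessing README of awaiting `coro_*` calls in the parent while the child process uses the normal blocking API.

```python
# Hedged sketch of a one-way latency measurement over aioprocessing's AioQueue.
# Not the issue author's code: the parent awaits coro_put(), the child process
# uses the plain blocking get(), as in the aioprocessing README example.
import asyncio
import time

import aioprocessing


def worker(queue):
    # Child process: blocking gets, just as with a plain multiprocessing.Queue.
    while True:
        sent = queue.get()
        if sent is None:                       # sentinel: stop the worker
            break
        print(f"queue latency: {(time.time() - sent) * 1000:.3f} ms")


async def main():
    queue = aioprocessing.AioQueue()
    proc = aioprocessing.AioProcess(target=worker, args=(queue,))
    proc.start()
    for _ in range(5):                         # one timestamp per second, as in the issue
        await queue.coro_put(time.time())
        await asyncio.sleep(1)
    await queue.coro_put(None)                 # tell the worker to exit
    await proc.coro_join()


if __name__ == "__main__":
    asyncio.run(main())
```

An equivalent pipe test would presumably swap the queue for the connection pair returned by `aioprocessing.AioPipe()` and its `coro_send()`/`coro_recv()` wrappers.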