Migrate BatchExporterProcessor to async #5838
base: main
Conversation
Can you describe the overall idea of how this will be implemented? This is core functionality, and there cannot be any breaking changes at all. It is intentional that the current design spins up its own thread and does not rely on async tasks/the thread pool.
The exporter loop is kept but now runs on the thread pool using a simple Task.Run. The idea of a thread pool is to avoid the cost of creating/deleting a thread as well as the cost of the resources associated with the thread. But most importantly, it avoids having to perform a context switch to yield execution to another task. In an ideal application, the thread pool has as many threads as CPU cores, and each thread never leaves its core. Each context switch impacts the throughput of the thread pool. If only a few extra threads are used, it's fine, but if all libraries start spawning dedicated threads for their background jobs, that can become an issue. Here, 3 threads are spawned, and maybe a 4th one will be added for the profiling signal.

I won't have any figures to share, but the OTEL library could become a better citizen in the .NET ecosystem by using the thread pool. It is also important to note that currently, only part of the work done by the export is performed on the dedicated thread. When HttpClient, or any other network class, is used, the thread pool will still be used.

Additionally, this will ease having an asynchronous flush method in the future. I believe a user request for that will appear some day, because you never want to block the thread pool, and currently it does.

BONUS1: we get to remove synchronousSendSupportedByCurrentPlatform.
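(Editor's note: a minimal sketch of what a thread-pool-based exporter loop could look like. All names here, such as ThreadPoolExporterLoop and ExporterProcAsync, are illustrative assumptions rather than the PR's actual code.)

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch of a thread-pool-based exporter loop; names and structure are
// illustrative assumptions, not the actual BatchExportProcessor implementation.
internal sealed class ThreadPoolExporterLoop
{
    private readonly CancellationTokenSource cts = new();
    private readonly TimeSpan scheduledDelay;
    private readonly Func<CancellationToken, Task> exportBatchAsync;
    private Task exporterTask = Task.CompletedTask;

    public ThreadPoolExporterLoop(TimeSpan scheduledDelay, Func<CancellationToken, Task> exportBatchAsync)
    {
        this.scheduledDelay = scheduledDelay;
        this.exportBatchAsync = exportBatchAsync;
    }

    public void Start()
    {
        // Task.Run queues the loop on the thread pool; no dedicated thread is created.
        this.exporterTask = Task.Run(this.ExporterProcAsync);
    }

    public async Task ShutdownAsync()
    {
        this.cts.Cancel();
        await this.exporterTask.ConfigureAwait(false);
    }

    private async Task ExporterProcAsync()
    {
        while (!this.cts.IsCancellationRequested)
        {
            try
            {
                // While waiting for the next export interval, the thread goes back to the
                // pool instead of sleeping on a dedicated thread.
                await Task.Delay(this.scheduledDelay, this.cts.Token).ConfigureAwait(false);
            }
            catch (OperationCanceledException)
            {
                break;
            }

            await this.exportBatchAsync(this.cts.Token).ConfigureAwait(false);
        }
    }
}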
src/OpenTelemetry/BaseExporter.cs
/// </summary>
/// <param name="batch">Batch of telemetry objects to export.</param>
/// <returns>Result of the export operation.</returns>
public virtual Task<ExportResult> ExportAsync(Batch<T> batch)
A related discussion on this topic we had in OTel Rust community : open-telemetry/opentelemetry-rust#2027
OTel Rust only has an async API for the exporters. This is causing perf issues when the exporter does not benefit from an async API (like ETW exporters, which have no async needs as they don't talk to disk/network, etc.).
We are also considering adding both options in OTel Rust, with one calling the other automatically!
Just shared as these are similar, though in diff. languages!
The allocation overhead of calling ExportAsync instead of Export should just be the Task allocation. AFAIK, state machines are kept on the stack while the method returns synchronously. If a method needs to yield execution because of a blocking operation (e.g. reading a socket), then the state machine is moved to the heap.

In this short snippet we can see that the state machine is a struct, and when the async method doesn't complete synchronously (i.e. in the if (!awaiter.IsCompleted) branch), AwaitUnsafeOnCompleted is called, which boxes the state machine.

So I'm not very concerned about the performance impact for synchronous exporters, especially since the method should only be called every X seconds.

One thing that could be done is to use ValueTask to avoid the Task allocation. It is used, for example, in Stream.ReadAsync(Memory). Though ValueTask has more constraints than Task, so it can be a tough decision to make. That's why I always use Task, and it has never been a bottleneck. More reading from our savior Stephen Toub: https://devblogs.microsoft.com/dotnet/understanding-the-whys-whats-and-whens-of-valuetask.
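(Editor's note: to make the ValueTask option concrete, here is a hedged sketch of how a synchronous exporter could complete without any Task allocation. The class name and signature are assumptions, not the proposed BaseExporter API; Batch<T> and ExportResult are the existing OpenTelemetry SDK types.)

using System.Threading;
using System.Threading.Tasks;
using OpenTelemetry;

// Hypothetical sketch: returning ValueTask<ExportResult> avoids the Task allocation
// when the exporter completes synchronously.
public abstract class ValueTaskExporterSketch<T>
    where T : class
{
    // Existing synchronous contract.
    public abstract ExportResult Export(in Batch<T> batch);

    // A synchronous exporter wraps the result in the ValueTask struct: no heap allocation.
    // An async-capable exporter would override this and await its I/O instead.
    public virtual ValueTask<ExportResult> ExportAsync(Batch<T> batch, CancellationToken cancellationToken)
    {
        return new ValueTask<ExportResult>(this.Export(in batch));
    }
}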
So I'm not very concerned about the performance impact for synchronous exporters, especially since the method should only be called every X seconds.
The export method is called for every log and every span when using sync exporters like ETW (they don't do any batching).
It could make sense to use ValueTask in that case. Could you give me a pointer to the ETW exporter? I can't seem to find it.
https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/main/src/OpenTelemetry.Exporter.Geneva/Internal/ReentrantExportProcessor.cs - This is the exporting processor used with ETW exporters.
This exporter will continue to call the synchronous export, so there won't be any performance change here.
src/OpenTelemetry/BaseExporter.cs
/// </summary>
/// <param name="batch">Batch of telemetry objects to export.</param>
/// <returns>Result of the export operation.</returns>
public virtual Task<ExportResult> ExportAsync(Batch<T> batch)
If we are going to do this (and that is a big IF), it MUST be:

public virtual Task<ExportResult> ExportAsync(Batch<T> batch, CancellationToken cancellationToken)

Because this is an opportunity to clean up a mistake 😄 That cancellationToken should be built from https://github.com/open-telemetry/opentelemetry-dotnet/blob/1c01770882bb5113ff308b2b4398347fab6e0404/src/OpenTelemetry/BatchExportProcessor.cs#L24C27-L24C54.

I would also like to see:

public virtual ValueTask<ExportResult>

There are only two ExportResult values. We could have two cached instances of Task<ExportResult>. But I think ValueTask is better for implementations which may always execute in sync. Imagine an exporter writing to a Stream: it could be async when hooked up to something like a NetworkStream, but could also always run synchronously when hooked up to a MemoryStream.
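(Editor's note: a hedged sketch of the "two cached Task<ExportResult> instances" idea mentioned above. This helper is not part of the SDK and is shown only for illustration.)

using System.Threading.Tasks;
using OpenTelemetry;

// Hypothetical helper: a Task-returning ExportAsync that always completes synchronously
// could return one of these cached instances instead of allocating a new Task per call.
internal static class CachedExportResultTasks
{
    public static readonly Task<ExportResult> Success = Task.FromResult(ExportResult.Success);
    public static readonly Task<ExportResult> Failure = Task.FromResult(ExportResult.Failure);

    public static Task<ExportResult> FromResult(ExportResult result) =>
        result == ExportResult.Success ? Success : Failure;
}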
Good idea for the CancellationToken!

Regarding the ValueTask, given the constraints of that type, I find it a hard decision to make. I've briefly discussed it here: #5838 (comment). I think I'll be able to give more info once I've implemented ExportAsync for most exporters.
I would love to have ExportAsync, but I think in general this may be the wrong direction. Our telemetry pipeline is not asynchronous. It is synchronous. And the specification says it must be so. Because of that, we're always going to run into some place where we have to jump from sync to async and back to sync. Async only works when everything is async. The telemetry pipeline being synchronous is a good thing. We don't want to block real work being done on a thread because it tried to write some telemetry and the SDK has to wait on a busy thread pool.
Here are some questions:

- How will SimpleExporterProcessor work with ExportAsync? Are we expecting SimpleExporterProcessor to continue calling Export, or will it call ExportAsync? The latter will probably be a non-starter. We can't tie up two threads for our telemetry pipelines.

- Depending on that answer, can we see some of the existing exporters (maybe starting with Otlp) updated as a demonstration? We require Export to be implemented via abstract. Authors will have to code something synchronous. If we offer ExportAsync, how does that help with this? I don't want to see this...
public override ExportResult Export(in Batch<T> batch)
{
return this.ExportAsync(in batch).Result;
}
Because that will tie up two threads 👎 Or lead to an infinite loop in the event ExportAsync is not overridden 🤣 Also, how do we prevent either of these things from happening?

What I'm trying to figure out is: if I'm an exporter author and I have to do a synchronous version anyway, why would I bother with an asynchronous version?
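(Editor's note: one possible way to avoid both the sync-over-async wrapper and the infinite-loop risk is to keep Export abstract and give ExportAsync a default that wraps it. The sketch below only illustrates that direction; SketchBaseExporter is a hypothetical name, not the PR's implementation.)

using System.Threading;
using System.Threading.Tasks;
using OpenTelemetry;

// Hypothetical sketch: Export stays abstract and the virtual ExportAsync defaults to
// calling it, so exporters that only implement Export never block a second thread and
// there is no path where Export calls ExportAsync back.
public abstract class SketchBaseExporter<T>
    where T : class
{
    public abstract ExportResult Export(in Batch<T> batch);

    public virtual Task<ExportResult> ExportAsync(Batch<T> batch, CancellationToken cancellationToken)
    {
        // Default: run the synchronous export inline and return an already-completed task.
        // Asynchronous exporters override this method instead of Export.
        return Task.FromResult(this.Export(in batch));
    }
}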
I don't think that's true. Here is a quote from the spec:
I've started implementing the
When a context switch occurs to work on the OTEL thread, it actually prevents a thread pool thread from doing its job. Using the thread pool and async exports would greatly reduce that overhead.
Either should be fine. If a network exporter is used with SimpleExporterProcessor, there are obviously no guarantees about the performance, so I don't think […]. It is also worth noting that because of the synchronous export API, some exporters need to do some sync-over-async on some platforms.
For OTLP, the
I think it's very similar to how the […]. Here is a very concrete example, dotutils/streamutils#2, where some benchmarks showed that this […]. In the case of […]
This PR was marked stale due to lack of activity and will be closed in 7 days. Commenting or pushing will instruct the bot to automatically remove the label. This bot runs once per day.
Here is what the implementations of BaseExporter.ExportAsync could look like: verdie-g@02eb450
This PR was marked stale due to lack of activity and will be closed in 7 days. Commenting or pushing will instruct the bot to automatically remove the label. This bot runs once per day.
Closed as inactive. Feel free to reopen if this PR is still being worked on.
Kind of awkward that the PR gets automatically closed when it's waiting for a review 🤔 @CodeBlanch @cijothomas could you have a look at it again? The TL;DR is that you don't create threads in .NET. Some platforms don't even support it. And OTEL, which will probably become the most downloaded NuGet package, creates 3-4 of them when it could use the thread pool.
Sorry @verdie-g, the short version is this isn't a priority right now. Sorry! That doesn't mean we don't ❤️ the passion and the contribution. We really do, we've just got a lot to get done before we release 1.10.0 stable next month. This is a big change so I'm not going to consider it for inclusion this late in the cycle. We can come back to it for the next release though.
this.exporterThread.Start();
this.exportTask = Task.CompletedTask;
this.exporterTaskCancellation = new CancellationTokenSource();
this.exporterTask = Task.Run(this.ExporterProc);
I think we shouldn't change the default behavior here. If there are scenarios where a thread pool must be leveraged, then it can be an optional, opt-in behavior - either a modification to the BatchExportProcessor (UseDedicatedThread=false) or a new BatchExportProcessorThreadPool.

Having said that, I won't have bandwidth to do a detailed review at the moment, but I believe the first step is to get some guidance from the maintainers about which direction should be taken:
- Provide an opt-in feature flag to BatchExportProcessor so it can use a dedicated thread or the thread pool.
- Make a new BatchExportProcessor that uses the thread pool - either in this repo OR in the contrib repo.
- Something else.
That's a good idea. I'll do some experimentation.
Left a comment about keeping current behavior as-is.
It would be amazing to have this merged at some point. A bit frustrating that OTEL doesn't even work with Blazor because of missing threading support in the browser :)
@CodeBlanch Any chance this can be reopened now?
Reopening to bring this to @CodeBlanch's attention ;)
I've added a boolean to the BatchExportProcessor […]. I first tried to create a new async implementation of BatchExportProcessor […]. Alternatively, we could also use AppContext.TryGetSwitch to have a global flag instead of a per-instance one.
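(Editor's note: a small sketch of what the AppContext-based global flag could look like; the switch name below is purely an assumption for illustration, not a real OpenTelemetry switch.)

using System;

// Hypothetical sketch of a global opt-in switch for the thread-pool-based processor.
internal static class BatchExportProcessorDefaults
{
    private const string UseThreadPoolSwitch = "OpenTelemetry.BatchExportProcessor.UseThreadPool";

    // Defaults to false (current dedicated-thread behavior) unless the host opts in.
    public static bool UseThreadPool =>
        AppContext.TryGetSwitch(UseThreadPoolSwitch, out bool enabled) && enabled;
}

// A host application would opt in at startup with:
// AppContext.SetSwitch("OpenTelemetry.BatchExportProcessor.UseThreadPool", true);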
This PR was marked stale due to lack of activity and will be closed in 7 days. Commenting or pushing will instruct the bot to automatically remove the label. This bot runs once per day.
Still awaiting review.
One suggestion to make this unblocked:

BatchExportProcessor is one of the most critical components of this repo, so any changes to it would require thorough review. If the PR can be modified to add a new exporting processor (BatchExportProcessorWithTask or similar) and leave the existing one untouched, it may be easier to review. The new one can be marked experimental.

People who are blocked by the current thread approach can try the new one, and once it is battle tested, it can be incorporated into the existing one or made non-experimental.

(Metrics also spins up its own thread, so that'd be another area to fix so as to fully unblock browser scenarios.)

Just a suggestion to unblock progress.

(I am trying to solve this exact problem in a different language right now, so I'll review this, but I don't have enough bandwidth to do a thorough review and sign off at the moment, sorry!)
This PR was marked stale due to lack of activity and will be closed in 7 days. Commenting or pushing will instruct the bot to automatically remove the label. This bot runs once per day.
Still not stale.
This PR was marked stale due to lack of activity and will be closed in 7 days. Commenting or pushing will instruct the bot to automatically remove the label. This bot runs once per day.
Fixes #5778
Changes
This makes BatchExporterProcessor async to avoid having dedicated threads that are idle 99% of the time.
I first tried to keep the implementation close to the original synchronous one, where only the ExportProc method was allowed to call BaseExporter.ExportAsync, but it was an unreviewable deadlock fest. This proposed implementation allows any method (Export, Flush, Shutdown) to also call ExportAsync, but always limits it to a single concurrent export. This greatly simplifies the code and only took me 30 minutes to implement.

It is very important that we first discuss how BaseExporter.ExportAsync will be implemented on all available exporters before merging this change; otherwise, this is going to push synchronous jobs onto the thread pool, which will make the situation worse than before.

Merge requirement checklist
- CHANGELOG.md files updated for non-trivial changes