Call for feedback: I've implemented an AsyncHttpAppender #4020
Replies: 2 comments
Hi @mlangc, Sorry for the delayed response, and thanks for sharing your work: it’s great to see experimentation around this area.
I haven’t benchmarked it myself, but that assessment wouldn’t surprise me. Your appender looks quite interesting, especially the batching support, which directly tackles the main drawback of HTTP: per-request overhead. Am I correct in assuming that this relies on server-side APIs that can accept multiple log events per request? Or do the backends you target simply treat logs as a raw character stream? Regarding the asynchronous aspect: have you already experimented with the asynchronous facilities that are part of Log4j Core itself (see Asynchronous loggers)? They aim to solve the same problem: minimizing latency on the logging call path. When asynchronous loggers are in use, additional asynchronous behavior inside appenders can sometimes become more of a liability than a benefit:
**Improvement proposal**

It might be worth exploring the following approach:
Overall, this looks like a promising direction, and I’m curious to see how it evolves. In the past, we have discussed generalizing …
Thanks a lot for your feedback!
Yes, that's correct: all log ingestion APIs that I looked into support batching. The …
Yes, this was one of the first things I did. However, the fundamental problem is that pushing logs to an HTTP API one by one is inefficient and slow by design. Asynchronous loggers can hide the slowness, but only for very low average log throughputs. Without batching, even one log event per second can add significant overhead.
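To illustrate why batching matters here: a hypothetical sketch (the name `toBatches` and the batch-size parameter are made up for illustration, not part of the appender's API) that partitions buffered events into bounded batches, so one HTTP request amortizes its fixed overhead over many events instead of paying it per event.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchingSketch {

    /**
     * Splits buffered log events into batches of at most maxBatchSize events,
     * so a single HTTP request carries many events and the per-request
     * overhead (connection, headers, round trip) is paid once per batch.
     */
    static List<List<String>> toBatches(List<String> events, int maxBatchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < events.size(); i += maxBatchSize) {
            batches.add(new ArrayList<>(
                    events.subList(i, Math.min(i + maxBatchSize, events.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> events = List.of("e1", "e2", "e3", "e4", "e5");
        // 5 events with maxBatchSize=2 -> 3 HTTP requests instead of 5.
        System.out.println(toBatches(events, 2));
    }
}
```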
I did my best to make the implementation efficient and robust. The … Last but not least, doing synchronous network I/O in the logging thread, potentially traversing multiple time zones, is quite problematic as well. If retries are involved, it might take seconds until a message is finally delivered. During this time, the affected application threads are blocked from doing anything useful. In extreme cases, the entire application might be blocked from making progress.
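To make the retry cost concrete: a sketch of a capped exponential backoff schedule (the constants and the name `backoffMillis` are hypothetical, not the appender's actual configuration). Even a handful of attempts accumulate seconds of waiting, which is exactly why retries belong on a background thread rather than in the caller.

```java
public class BackoffSketch {
    static final long BASE_DELAY_MS = 500;   // hypothetical base delay
    static final long MAX_DELAY_MS = 30_000; // hypothetical cap

    /** Delay before retry number `attempt` (0-based): base * 2^attempt, capped. */
    static long backoffMillis(int attempt) {
        long delay = BASE_DELAY_MS << Math.min(attempt, 20); // clamp shift to avoid overflow
        return Math.min(delay, MAX_DELAY_MS);
    }

    public static void main(String[] args) {
        long total = 0;
        for (int attempt = 0; attempt < 6; attempt++) {
            total += backoffMillis(attempt);
            System.out.println("attempt " + attempt + " -> wait " + backoffMillis(attempt)
                    + " ms (cumulative " + total + " ms)");
        }
    }
}
```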
Yes, but only per batch, not per log line. Normally, the log event is rendered and appended to the current batch synchronously.
I'm only using the …
So you mean that we could still batch log events, potentially by inspecting …

Regarding using …
I like the idea. Ideally, this advanced … I'm happy to explore and experiment in this direction and share my findings for discussion if you are interested.
I've spent a considerable part of my Christmas holidays putting together an `AsyncHttpAppender` that I want to release in the more-log4j2 library that I'm maintaining.
My personal use-case is pushing logs to the Dynatrace Ingest API without relying on proprietary Java agents or OpenTelemetry, but the appender is generic and can be integrated with other log monitoring solutions like Datadog and Grafana.
In theory, pushing logs to these APIs is also possible with the regular `HttpAppender`, however its performance is not acceptable even for toy projects, since logging a few lines per second ties up an entire thread due to the synchronous nature of the `HttpAppender`.

Thanks to compression and batching, the `AsyncHttpAppender` can handle log throughputs that are multiple orders of magnitude higher than what you can achieve with the regular `HttpAppender`. The implementation features different strategies to deal with overload situations, and retries with exponential backoff.

I would like to release this new appender as part of `more-log4j2-2.0.0` in the near future, but before doing so, I want to collect and address any feedback you may have. You can use the snapshot release that I've pushed to Maven Central if you want some hands-on experience without checking out the project locally.
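The overload strategies mentioned above can be sketched with a bounded queue: a lossy policy drops events when the buffer is full, while a blocking policy applies backpressure to the logging thread. The policy names and the `OverloadSketch` class below are purely illustrative, not the appender's actual configuration options.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class OverloadSketch {
    enum OverflowPolicy { DROP_NEWEST, BLOCK } // illustrative names

    private final BlockingQueue<String> buffer;
    private final OverflowPolicy policy;
    private long dropped;

    OverloadSketch(int capacity, OverflowPolicy policy) {
        this.buffer = new ArrayBlockingQueue<>(capacity);
        this.policy = policy;
    }

    /** Enqueue one rendered log event according to the overflow policy. */
    void append(String event) {
        if (policy == OverflowPolicy.BLOCK) {
            try {
                buffer.put(event); // backpressure: caller waits for free space
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        } else if (!buffer.offer(event)) {
            dropped++; // lossy: count and discard the event on overflow
        }
    }

    long droppedCount() { return dropped; }
    int buffered()      { return buffer.size(); }
}
```

A separate consumer thread would drain `buffer` into batches and send them over HTTP; the choice between the two policies trades log completeness against application latency under sustained overload.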