Performance Considerations
LiteBus is designed to be lightweight and performant, with minimal reflection and allocation overhead during message mediation. However, understanding how it operates can help you write high-performance applications.
LiteBus heavily optimizes the process of finding handlers for a given message.
- Startup Analysis: Handler and message types are analyzed once at startup when you call AddLiteBus. The relationships between messages and their handlers are stored in a MessageRegistry.
- Descriptor Caching: The MessageRegistry creates and caches descriptors for each message type, containing pre-filtered lists of potential handlers. This avoids repeated reflection lookups at runtime.
- Lazy Instantiation: Handlers are resolved from the DI container only when they are about to be executed. This lazy resolution minimizes upfront object creation, especially in applications with many handlers.
Recommendation: Register all your handlers during application startup. While LiteBus supports runtime registration, it is most performant when the registry is built once.
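To illustrate, a startup registration might look like the sketch below. It assumes the AddLiteBus configuration pattern with per-module assembly registration; the command type name is illustrative, and the exact module methods may vary by LiteBus version:

```csharp
using LiteBus.Messaging.Extensions.MicrosoftDependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Register all handlers once at startup so the MessageRegistry is
// built a single time, before any messages are mediated.
builder.Services.AddLiteBus(liteBus =>
{
    liteBus.AddCommandModule(module =>
    {
        // CreateOrderCommand is a placeholder for any type in the
        // assembly that contains your handlers.
        module.RegisterFromAssembly(typeof(CreateOrderCommand).Assembly);
    });
});

var app = builder.Build();
```

Registering by assembly scan keeps the startup cost in one place and avoids the registry rebuilds that per-handler runtime registration can cause.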
The core mediation pipeline is optimized to reduce allocations. The primary sources of allocations in a LiteBus application are typically within the user-defined handlers themselves.
- Message Objects: For very high-throughput scenarios, consider using record struct for your messages to avoid heap allocations, or use an object pool to reuse message objects.
- Async/Await: Be mindful of the state machine generated by async/await. In extremely performance-sensitive code, you may need to manage tasks manually, though this is rarely necessary.
- Closures: Avoid closures (lambda expressions that capture variables) in hot paths, as they can cause hidden allocations.
Properly handling cancellation is crucial for building responsive and resilient applications.
- Propagate CancellationToken: Always pass the CancellationToken down through your business logic and to any async library calls (e.g., database queries, HTTP requests).
- Check for Cancellation: In long-running, CPU-bound handlers, periodically check the token and abort if cancellation is requested.
public async Task HandleAsync(ProcessLargeFileCommand command, CancellationToken cancellationToken = default)
{
    foreach (var line in command.Lines)
    {
        // Check for cancellation before processing each item.
        cancellationToken.ThrowIfCancellationRequested();

        // Simulate work
        await Task.Delay(10, cancellationToken);
    }
}
The Event Module allows for parallel execution of handlers, which can significantly improve performance for events with many independent subscribers.
- PriorityGroupsConcurrencyMode.Parallel: Executes different priority groups concurrently.
- HandlersWithinSamePriorityConcurrencyMode.Parallel: Executes handlers within the same priority group concurrently.
Recommendation: Use parallel execution for I/O-bound event handlers (e.g., sending emails, calling external APIs, updating separate database projections). Avoid it for CPU-bound handlers unless you have a multi-core environment and have measured the benefit. Be aware that parallel execution makes the order of operations non-deterministic.
See the Event Module documentation for configuration details.
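As a rough sketch, enabling both modes might look like the following. This is illustrative only: the settings type, its property names, and the publish call are assumptions, not the confirmed LiteBus API; only the two enum values come from this page, so check the Event Module documentation for the real shape:

```csharp
// Hypothetical configuration sketch -- the EventMediationSettings type
// and property names below are placeholders, not the actual API surface.
var settings = new EventMediationSettings
{
    // Run different priority groups at the same time.
    PriorityGroupsConcurrencyMode = PriorityGroupsConcurrencyMode.Parallel,

    // Also run handlers inside each priority group concurrently.
    HandlersWithinSamePriorityConcurrencyMode = HandlersWithinSamePriorityConcurrencyMode.Parallel
};

await eventMediator.PublishAsync(orderPlacedEvent, settings);
```

With both modes set to Parallel, no ordering guarantee remains between any two handlers, so every handler must be safe to run alongside the others.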
As with any performance-related work, measurement is key.
- BenchmarkDotNet: Use BenchmarkDotNet to measure the performance of your handlers in isolation. This can help you identify and optimize hot paths.
- Profiling: Use a profiler (like Visual Studio's built-in profiler or JetBrains dotTrace) to analyze the performance of your application under load and identify any bottlenecks within the LiteBus pipeline or your own code.
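A minimal BenchmarkDotNet harness for a handler could look like this. The handler and command types are the hypothetical ones from the cancellation example above; the BenchmarkDotNet attributes and runner are standard, but constructing the handler directly (bypassing DI) is a simplification for isolated measurement:

```csharp
using System.Threading;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // reports allocated bytes per operation alongside timings
public class HandlerBenchmarks
{
    private ProcessLargeFileCommandHandler _handler = null!;
    private ProcessLargeFileCommand _command = null!;

    [GlobalSetup]
    public void Setup()
    {
        // Construct the handler and a representative command once,
        // outside the measured code path.
        _handler = new ProcessLargeFileCommandHandler();
        _command = new ProcessLargeFileCommand(/* representative sample data */);
    }

    [Benchmark]
    public Task Handle() => _handler.HandleAsync(_command, CancellationToken.None);
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<HandlerBenchmarks>();
}
```

Run it in Release configuration (benchmarks compiled in Debug produce misleading numbers), and use the MemoryDiagnoser column to spot allocation regressions as well as timing ones.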