# Transactional Outbox
The transactional outbox solves the dual-write problem: your business data and your message enqueue happen in the same database transaction. Either both commit, or neither does. Your service cannot write to the database without the message following, and cannot send a message without the database write persisting.
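For contrast, here is the dual-write hazard the pattern eliminates. This is a hypothetical sketch, not this library's API: two independent writes with a crash window between them.

```csharp
// Naive dual write (illustrative only): two systems, two failure points.
// A crash between these calls saves the order but never publishes the
// event -- or, if reordered, publishes an event for an order that was
// never saved.
await dbContext.SaveChangesAsync(ct);           // write 1: database commit
await kafkaProducer.ProduceAsync("pizzas", evt); // write 2: broker send, may never run
```

With the outbox, the second write becomes a row in the same database transaction as the first, so the crash window disappears.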
## How it works

- **Atomic write.** When `ProduceAsync` is called within a transaction scope, the outbox entry is written to the database in the same transaction as your business data. Both commit together.
- **Background delivery.** A daemon polls the outbox, picks up pending entries, and delivers them to the message broker. Your endpoint returns immediately after the commit.
- **Delete on success.** Once delivery is confirmed, the entry is deleted. No status fields, no state machine: entry present means pending, entry absent means delivered.
- **Automatic retry.** If delivery fails, the entry stays in place and is retried on the next poll cycle. The outbox guarantees at-least-once delivery.
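Conceptually, the steps above combine into a poll-deliver-delete loop. This is a hedged sketch of the daemon's behavior; `IOutboxStore` and `IBroker` are illustrative names, not part of this library's public API.

```csharp
// Illustrative sketch of the delivery daemon's cycle, not actual internals.
while (!ct.IsCancellationRequested)
{
    // Fetch the oldest pending entries (entry present == pending).
    var batch = await store.FetchPendingAsync(batchSize: 100, ct);

    foreach (var entry in batch)
    {
        try
        {
            await broker.DeliverAsync(entry, ct);
            await store.DeleteAsync(entry.Id, ct); // delete on success
        }
        catch
        {
            break; // leave the entry in place; retried next poll cycle
        }
    }

    await Task.Delay(pollingInterval, ct);
}
```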
## Choosing your approach
Pick the approach that fits your context:
| You’re writing a… | Provider | Recommended approach |
|---|---|---|
| Kafka consumer or mediator handler | EF Core or MongoDB | [Transactional] attribute |
| Controller, endpoint, or simple service | EF Core | Implicit (SaveChangesAsync) |
| Hosted service or background job | EF Core or MongoDB | IUnitOfWork |
| Anything needing conditional commit logic | EF Core or MongoDB | IUnitOfWork |
### `[Transactional]` attribute
The simplest option for consumers and handlers. Add the attribute to your class and the pipeline handles begin, commit, and rollback automatically. Each retry attempt gets a fresh transaction.
```csharp
[Transactional]
public sealed class MyHandler : IRequestHandler<MyCommand>
{
    public async Task HandleAsync(MyCommand request, CancellationToken ct)
    {
        await eventRepository.InsertAsync(request.Event, ct);
        await producer.ProduceAsync(..., ct);
        // Transaction committed automatically by middleware.
    }
}
```

### Implicit (EF Core only)
Produce messages and call SaveChangesAsync. The outbox entry and your business entities share the same EF Core change tracker, so they’re flushed atomically in a single implicit database transaction. No ceremony needed.
```csharp
public async Task HandleAsync(PlacePizzaOrder request, CancellationToken ct)
{
    dbContext.PizzaOrders.Add(new PizzaOrder(request));
    await producer.ProduceAsync(..., ct);
    await dbContext.SaveChangesAsync(ct);
}
```

### IUnitOfWork
For scenarios outside the pipeline, such as hosted services or background jobs, or when you need explicit control over when to commit or roll back. Inject IUnitOfWork, call BeginAsync, do your work, then CommitAsync. Works with both EF Core and MongoDB.
```csharp
await using var tx = await unitOfWork.BeginAsync(ct);
// ... business writes and produce calls ...
await tx.CommitAsync(ct);
```

If anything throws before `CommitAsync`, `await using` aborts the transaction automatically. For provider-specific examples, see EF Core and MongoDB.
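Because disposal via `await using` aborts any uncommitted transaction, conditional commit logic (the last row of the table above) can simply skip `CommitAsync`. A sketch, reusing the `unitOfWork` and `producer` names from the snippet above and a hypothetical validity check:

```csharp
await using var tx = await unitOfWork.BeginAsync(ct);

dbContext.PizzaOrders.Add(order);
await producer.ProduceAsync(..., ct);

// Commit only when the order passes validation (illustrative check);
// otherwise disposal rolls back both the entity and the outbox entry.
if (order.IsValid)
{
    await tx.CommitAsync(ct);
}
```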
## Setup
Call UseOutbox() on your persistence provider to enable the outbox infrastructure. Once it is configured, all producers route through the outbox by default. Call UseDirect() on any producer that should bypass it.
```csharp
emit.AddMongoDb(mongo =>
{
    mongo.Configure((sp, ctx) => { ... })
        .UseOutbox(options =>
        {
            options.PollingInterval = TimeSpan.FromSeconds(5); // default
            options.BatchSize = 100;                           // default
        });
});
```

```csharp
emit.AddKafka(kafka =>
{
    kafka.ConfigureClient(cfg => cfg.BootstrapServers = "localhost:9092");

    kafka.Topic<string, PizzaOrdered>("pizzas", topic =>
    {
        topic.Producer(); // routes through outbox by default
    });

    kafka.Topic<string, AnalyticsEvent>("analytics", topic =>
    {
        topic.Producer(p => p.UseDirect()); // bypasses the outbox
    });
});
```

| Option | Default | Range | Description |
|---|---|---|---|
| `PollingInterval` | 5 seconds | minimum 1 second | Time between poll cycles. The next poll starts immediately if the previous batch took longer. |
| `BatchSize` | 100 | 1 to 10,000 | Maximum entries fetched per poll cycle, ordered by sequence (FIFO). |
For provider-specific registration details, see EF Core and MongoDB.
## Ordering guarantees
Messages are ordered by group key. The default group key for a Kafka producer is `kafka:{topic}` (for example, `kafka:pizzas`). When a partition key is present, the group key is `kafka:{topic}:{partitionKey}`. All entries within the same group are delivered strictly in sequence.
The worker processes different groups in parallel, so two topics make progress independently. Within a single group, entries are delivered one at a time in enqueue order.
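The group-key rule above can be expressed as a small function. This is an illustrative sketch, not a public API of the library; the example partition key is invented.

```csharp
// Illustrative: derives the ordering group key per the rules above.
static string GroupKey(string topic, string? partitionKey) =>
    partitionKey is null
        ? $"kafka:{topic}"                 // e.g. "kafka:pizzas"
        : $"kafka:{topic}:{partitionKey}"; // e.g. "kafka:pizzas:store-42" (hypothetical key)
```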
### Single-worker guarantee
The outbox worker runs as a daemon: exactly one instance in your cluster processes the outbox at any time. If the assigned node goes down, the leader reassigns the daemon to another node within one heartbeat cycle (default: 15 seconds).
This prevents concurrent workers from racing on the same group and producing duplicate deliveries.
## Failure behavior
When delivery to the message broker fails, the worker logs the error and leaves the entry in place. It will be retried on the next poll cycle. There is no exponential backoff at the outbox level; retries at the broker layer (such as Kafka producer retries) handle that.
The outbox is stubborn persistence, not smart retry logic. It keeps trying until it succeeds. If you need dead-lettering or routing after repeated failures, that belongs at the broker layer. The outbox guarantees at-least-once delivery; everything else is upstream.