Dead Letter Queue

Some messages cannot be processed: the payload is malformed, the handler has a bug, or an external dependency is permanently down. Retrying won’t help. These messages need to go somewhere safe where you can inspect them later without blocking the consumer.

That is the dead-letter queue. Emit forwards the original message bytes (untouched, not re-serialized) to a dedicated topic and attaches diagnostic headers that tell you exactly what went wrong, where, and when. The consumer moves on to the next message.

Configuring the DLQ sink

Register the dead-letter topic on the Kafka builder:

emit.AddKafka(kafka =>
{
    kafka.DeadLetter("orders.dlt");
});

This registers a byte[] topic under the hood, which means the DLQ topic participates in topic verification at startup and in auto-provisioning if you have that enabled. You can configure provisioning options the same way as any other topic:

kafka.DeadLetter("orders.dlt", dlq =>
{
    dlq.Provisioning(options =>
    {
        options.NumPartitions = 3;
        options.Retention = TimeSpan.FromDays(30);
    });
});

You register one DLQ topic per Kafka builder. All consumer groups that dead-letter messages route to this single topic, regardless of their source topic. The diagnostic headers identify which group and consumer produced each DLQ entry.

Where dead-letter actions appear

The .DeadLetter() terminal action shows up in three places, all configured on the consumer group:

  • Error policies route handler exceptions to the DLQ. This is the most common case. See Error Policies for configuration.
  • Validation routes invalid messages to the DLQ when you configure onFailure => onFailure.DeadLetter(). See Validation for the available approaches.
  • Deserialization errors route unparseable messages to the DLQ via OnDeserializationError. These messages never made it past deserialization, so there is no typed message to work with.

In all three cases, the forwarding mechanism is the same: raw bytes go to the DLQ topic with diagnostic headers attached.
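As an illustration of how the three sites fit together on one consumer group, a sketch follows. Only onFailure.DeadLetter() and OnDeserializationError appear elsewhere in these docs; the ConsumerGroup, OnError, Retry, and Validation names here are hypothetical stand-ins for the actual fluent API described on the linked pages.

emit.AddKafka(kafka =>
{
    kafka.DeadLetter("orders.dlt");

    // Hypothetical group registration; see the linked pages for real names.
    kafka.ConsumerGroup("orders-processing", group =>
    {
        group.AddConsumer<OrderConsumer>();

        // 1. Error policy: handler exceptions reach the DLQ once retries
        //    are exhausted (OnError/Retry are illustrative; see Error Policies).
        group.OnError(policy => policy.Retry(3).DeadLetter());

        // 2. Validation: invalid messages are dead-lettered instead of retried.
        group.Validation(onFailure => onFailure.DeadLetter());

        // 3. Deserialization: unparseable bytes never reach the handler.
        group.OnDeserializationError(error => error.DeadLetter());
    });
});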

What gets forwarded

The dead-letter sink receives the raw bytes of the original message: both key and value, byte-for-byte identical to what was consumed from the source topic. Emit does not deserialize and re-serialize the message. This is a deliberate design choice.

Re-serialization would risk data loss. Schema evolution, encoding quirks, or fields your application does not model would be silently dropped. By forwarding raw bytes, Emit guarantees that the message in the DLQ is an exact copy of what the broker delivered. When you replay it later, you can re-publish to the original topic without any transformation.

Diagnostic headers

When forwarding to the DLQ, Emit preserves the original message headers first, then appends its own diagnostic headers:

  • (original headers): All headers from the source message, preserved in order
  • emit.dlq.original_traceparent: W3C traceparent from the source message, for linking the DLQ entry back to the original distributed trace
  • emit.dlq.original_topic: Source topic name
  • x-emit-exception-type: Fully qualified name of the exception type (e.g. System.TimeoutException or Emit.MessageValidationException)
  • x-emit-exception-message: The exception message text. For validation failures, this contains the concatenated validation error strings.
  • x-emit-source-topic: Source topic name
  • x-emit-source-partition: Source partition number
  • x-emit-source-offset: Source offset within the partition
  • x-emit-timestamp: ISO 8601 UTC timestamp of when the DLQ forward occurred
  • x-emit-consumer-group: Consumer group ID that failed to process the message
  • x-emit-consumer: Consumer identifier within the group
  • x-emit-consumer-type: Fully qualified type name of the consumer handler
  • x-emit-route-key: The matched route key, if content-based routing is in use
  • x-emit-retry-count: Number of retry attempts before the message was dead-lettered (0 if no retries were configured)

Between the original headers and the diagnostic metadata, you have everything you need to understand the failure without digging through logs.
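Because the DLQ is an ordinary topic, any Kafka client can read these headers. A minimal inspection sketch using the Confluent.Kafka client follows; the bootstrap address and group ID are placeholders, and it assumes Emit writes header values as UTF-8 text (the common Kafka convention).

using System.Text;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "dlq-inspection",
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using var consumer = new ConsumerBuilder<byte[], byte[]>(config).Build();
consumer.Subscribe("orders.dlt");

var result = consumer.Consume(TimeSpan.FromSeconds(10));
if (result is not null)
{
    // Header values are raw bytes; decode as UTF-8 text.
    string? Header(string key) =>
        result.Message.Headers.TryGetLastBytes(key, out var bytes)
            ? Encoding.UTF8.GetString(bytes)
            : null;

    Console.WriteLine($"exception: {Header("x-emit-exception-type")}");
    Console.WriteLine($"source:    {Header("x-emit-source-topic")}" +
                      $"[{Header("x-emit-source-partition")}]" +
                      $"@{Header("x-emit-source-offset")}");
    Console.WriteLine($"group:     {Header("x-emit-consumer-group")}, " +
                      $"retries: {Header("x-emit-retry-count")}");
}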

Transactional consumers and the DLQ

When a consumer decorated with [Transactional] fails and the message is dead-lettered, the transaction for the final attempt is rolled back before the message is forwarded. Any database writes from that failed attempt are not persisted. The DLQ forward itself is not transactional; it happens independently after the rollback.

This ordering matters. You will never end up in a state where partial business writes from a failed attempt are committed while the message is also sitting in the DLQ.

When things go wrong

No DLQ configured

If your error policy (or validation, or deserialization handler) specifies .DeadLetter() but you never called kafka.DeadLetter(...), Emit logs an error and discards the message. The consumer continues processing. This is a configuration mistake, not a runtime failure, so you will see it immediately in your logs at startup or on the first failure.

DLQ produce failure

If the DLQ topic becomes unavailable after startup (deleted, broker outage), the DLQ producer retries internally with exponential backoff (up to 5 attempts over roughly 30 seconds). If all attempts fail, the error is logged and the message is discarded. The source consumer is not blocked; it continues processing the next message.

Both scenarios record distinct metrics (dead_letter_no_sink and dead_letter_failed respectively) so you can alert on them.

Replaying DLQ messages

Emit does not include a built-in replay mechanism, and that is intentional. A message in the DLQ means something went wrong that retries could not fix. Blindly replaying without understanding why is rescheduling the same failure.

The intended workflow:

  1. Inspect DLQ messages using your Kafka tooling (Kafka UI, kcat, Conduktor, etc.).
  2. Read the diagnostic headers to understand the failure: exception type, source topic, consumer group, retry count.
  3. Identify the root cause: a code bug, missing configuration, schema mismatch, or external dependency issue.
  4. Fix the root cause and deploy the fix.
  5. Re-publish the raw bytes back to the original topic.

Because the raw bytes are preserved and Emit’s headers are additive metadata, step 5 is straightforward. Read the key and value from the DLQ message, publish them to the topic in emit.dlq.original_topic, and let your (now fixed) consumer process them normally.
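A minimal replay sketch using the Confluent.Kafka client follows. It assumes you have already identified the DLQ partition and offset to replay (the values below are placeholders) and that header values are UTF-8 text; in production you may also want to strip or keep the emit.dlq.* and x-emit-* headers deliberately, for audit purposes.

using System.Text;
using Confluent.Kafka;

var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "dlq-replay"
};
using var consumer = new ConsumerBuilder<byte[], byte[]>(consumerConfig).Build();

// Seek directly to the DLQ entry you inspected (partition/offset are placeholders).
consumer.Assign(new TopicPartitionOffset("orders.dlt", new Partition(0), new Offset(1234)));
var dlqEntry = consumer.Consume(TimeSpan.FromSeconds(10))
               ?? throw new InvalidOperationException("DLQ entry not found");

// The original topic travels with the message, so no external bookkeeping is needed.
dlqEntry.Message.Headers.TryGetLastBytes("emit.dlq.original_topic", out var topicBytes);
var originalTopic = Encoding.UTF8.GetString(topicBytes);

var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" };
using var producer = new ProducerBuilder<byte[], byte[]>(producerConfig).Build();

// Key and value are the exact bytes the broker originally delivered.
await producer.ProduceAsync(originalTopic, new Message<byte[], byte[]>
{
    Key = dlqEntry.Message.Key,
    Value = dlqEntry.Message.Value
});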

You can also register a consumer group directly on the DLQ topic to automate inspection or alerting:

kafka.DeadLetter("orders.dlt", dlq =>
{
    dlq.ConsumerGroup("dlq-alerting", group =>
    {
        group.AddConsumer<DlqAlertConsumer>();
    });
});

This is useful for feeding DLQ entries into a monitoring system, but the replay decision should remain a human one until you are confident in the automation.
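A sketch of what such an alerting consumer could look like. The handler signature, MessageContext, and IAlertService here are all hypothetical stand-ins: Emit's actual consumer contract may differ, and IAlertService represents whatever monitoring integration (PagerDuty, Slack, etc.) you use.

// Hypothetical handler shape; adapt to Emit's actual consumer contract.
public class DlqAlertConsumer
{
    private readonly IAlertService _alerts; // hypothetical monitoring dependency

    public DlqAlertConsumer(IAlertService alerts) => _alerts = alerts;

    public Task HandleAsync(byte[] message, MessageContext context)
    {
        // Surface the diagnostic headers rather than the payload: the payload
        // may be the very thing that could not be parsed.
        _alerts.Raise(
            title: $"DLQ entry from {context.Header("x-emit-source-topic")}",
            detail: $"{context.Header("x-emit-exception-type")}: " +
                    $"{context.Header("x-emit-exception-message")} " +
                    $"(group {context.Header("x-emit-consumer-group")}, " +
                    $"retries {context.Header("x-emit-retry-count")})");
        return Task.CompletedTask;
    }
}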