tl;dr: I am trying to understand how a single consumer that is assigned multiple partitions handles consuming records for each partition.

For example:

  • Completely process a single partition before moving to the next.
  • Process a chunk of available records from each partition every poll.
  • Process a batch of N records from the first available partitions.
  • Process a batch of N records from partitions in round-robin rotation.

I found the partition.assignment.strategy configuration for the Range or RoundRobin assignors, but this only determines how partitions are assigned to consumers, not how a consumer consumes from the partitions it is assigned.
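As far as I can tell, the consumer settings that shape how much data a single poll can return per partition are these (shown with what I believe are their defaults), but none of them say in what order the partitions are drained:

```properties
# upper bound on records returned by a single poll(), across all partitions
max.poll.records=500
# upper bound on data fetched per partition in one fetch request
max.partition.fetch.bytes=1048576
# upper bound on data returned in one fetch response, across partitions
fetch.max.bytes=52428800
```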

I started digging into the KafkaConsumer source: #poll() led me to #pollForFetches(), which in turn led me to fetcher#fetchedRecords() and fetcher#sendFetches().

From there I tried to follow the entire Fetcher class, and maybe it is just late or maybe I just didn't dig in far enough, but I am having trouble untangling exactly how a consumer processes multiple assigned partitions.
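My best reading so far, sketched as a toy model (the class, method, and numbers below are mine, not the real Fetcher code): completed fetches are queued as they arrive from the brokers, and poll() drains that queue in arrival order up to max.poll.records, finishing one partition's buffered batch before touching the next. If that reading is right, a single poll is greedy per partition rather than round-robin, and fairness only emerges over time because (I believe) partitions that still have buffered data are excluded from the next fetch request.

```java
import java.util.*;

// Toy model of my reading of Fetcher#fetchedRecords -- NOT the real code.
// Each completed fetch is (partition name -> buffered record offsets);
// drain() plays the role of one poll() with max.poll.records = 5.
class FetcherModel {
    static final int MAX_POLL_RECORDS = 5;

    static Map<String, List<Integer>> drain(Deque<Map.Entry<String, List<Integer>>> completedFetches) {
        Map<String, List<Integer>> polled = new LinkedHashMap<>();
        int remaining = MAX_POLL_RECORDS;
        while (remaining > 0 && !completedFetches.isEmpty()) {
            Map.Entry<String, List<Integer>> fetch = completedFetches.peekFirst();
            List<Integer> batch = fetch.getValue();
            int take = Math.min(remaining, batch.size());
            polled.computeIfAbsent(fetch.getKey(), k -> new ArrayList<>())
                  .addAll(batch.subList(0, take));
            if (take == batch.size()) {
                completedFetches.pollFirst(); // batch exhausted, move to next completed fetch
            } else {
                fetch.setValue(new ArrayList<>(batch.subList(take, batch.size()))); // partial drain
            }
            remaining -= take;
        }
        return polled;
    }

    public static void main(String[] args) {
        Deque<Map.Entry<String, List<Integer>>> fetches = new ArrayDeque<>();
        fetches.add(new AbstractMap.SimpleEntry<>("p0", new ArrayList<>(List.of(0, 1, 2))));
        fetches.add(new AbstractMap.SimpleEntry<>("p1", new ArrayList<>(List.of(0, 1, 2))));
        // first "poll": all of p0 plus the start of p1 -- greedy, not round-robin
        System.out.println(drain(fetches)); // {p0=[0, 1, 2], p1=[0, 1]}
        System.out.println(drain(fetches)); // {p1=[2]}
    }
}
```

If someone can confirm or correct this model against the real Fetcher, that would answer my question.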


I am working on a data pipeline backed by Kafka Streams.

At several stages in this pipeline, as records are processed by different Kafka Streams applications, the stream is joined against compacted topics fed by external data sources; these provide the data used to augment the records before they continue to the next stage of processing.

Along the way there are several dead letter topics for records that could not be matched to the external data sources that would have augmented them. This could be because the data is just not available yet (the Event or Campaign is not live yet) or because it is bad data that will never match.

The goal is to republish records from the dead letter topic whenever new augmenting data is published, so that previously unmatched records can be matched, updated, and sent downstream for additional processing.

Records may have failed to match on several attempts and could have multiple copies in the dead letter topic, so we only want to reprocess records that already existed when the application started (before the latest offset at startup), plus any records sent to the dead letter topic since the last time the application ran (after the previously committed consumer group offsets).

This works well: my consumer filters out any records arriving after the application has started, and my producer manages the consumer group offsets by committing them as part of the publishing transaction.
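Conceptually, the filtering step looks like this (a simplified sketch; the class and method names are mine, and in the real application the snapshot would come from something like KafkaConsumer#endOffsets at startup):

```java
import java.util.*;

// Sketch of the "only reprocess pre-existing records" filter described above.
// endOffsetsAtStart maps a partition to its end offset snapshotted at startup;
// the end offset is the offset of the *next* record to be written, so records
// with offset < end already existed when the application started.
class PreexistingFilter {
    private final Map<String, Long> endOffsetsAtStart;

    PreexistingFilter(Map<String, Long> endOffsetsAtStart) {
        this.endOffsetsAtStart = endOffsetsAtStart;
    }

    boolean shouldReprocess(String topicPartition, long offset) {
        Long end = endOffsetsAtStart.get(topicPartition);
        return end != null && offset < end;
    }

    public static void main(String[] args) {
        PreexistingFilter f = new PreexistingFilter(Map.of("dlq-0", 10L, "dlq-1", 3L));
        System.out.println(f.shouldReprocess("dlq-0", 9));  // true  (existed at startup)
        System.out.println(f.shouldReprocess("dlq-0", 10)); // false (arrived after startup)
        System.out.println(f.shouldReprocess("dlq-2", 0));  // false (unknown partition)
    }
}
```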

But I want to make sure that I will eventually consume from all partitions, because I have run into an odd edge case: unmatched records get reprocessed, land in the same dead letter partition as before, and get filtered out by the consumer. So even though the consumer is no longer getting new batches of records to process, there are still partitions that have not been reprocessed.

Any help understanding how a single consumer processes multiple assigned partitions would be greatly appreciated.
