
I have a consumer script that processes each message and manually commits its offset back to the topic.

import json

from kafka import KafkaConsumer, TopicPartition, OffsetAndMetadata

CONSUMER = KafkaConsumer(
    'my-topic',                    # placeholder topic name
    group_id='my-group',           # placeholder group id
    enable_auto_commit=False,      # offsets are committed manually below
    # Use the RoundRobinPartition method
    value_deserializer=lambda x: json.loads(x.decode('utf-8')))

count = 0
while True:
    count += 1
    print("--------------Poll {0}---------".format(count))
    for msg in CONSUMER:
        # Process msg.value
        # Commit offset to topic
        tp = TopicPartition(msg.topic, msg.partition)
        offsets = {tp: OffsetAndMetadata(msg.offset, None)}
        CONSUMER.commit(offsets)

The time taken to process each message is under 1 second.

I get this error:

kafka.errors.CommitFailedError: CommitFailedError: Commit cannot be completed since the group has already
            rebalanced and assigned the partitions to another member.
            This means that the time between subsequent calls to poll()
            was longer than the configured max_poll_interval_ms, which
            typically implies that the poll loop is spending too much
            time message processing. You can address this either by
            increasing the rebalance timeout with max_poll_interval_ms,
            or by reducing the maximum size of batches returned in poll()
            with max_poll_records.

Process finished with exit code 1
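The error message suggests two remedies: increase max_poll_interval_ms so the group coordinator tolerates longer gaps between poll() calls, or reduce max_poll_records so each poll returns a smaller batch. As a minimal sketch (the topic and group names are placeholders, and the specific values are illustrative, not recommendations), these settings can be passed as keyword arguments to kafka-python's KafkaConsumer:

```python
# Illustrative consumer settings giving the poll loop more headroom
# before the group rebalances. All keys are real kafka-python
# KafkaConsumer configuration parameters; the values are assumptions.
consumer_config = {
    "max_poll_interval_ms": 600000,  # allow up to 10 min between poll() calls
    "max_poll_records": 50,          # smaller batches -> shorter loop iterations
    "session_timeout_ms": 30000,     # heartbeat-based liveness window
    "heartbeat_interval_ms": 10000,  # keep well below session_timeout_ms
    "enable_auto_commit": False,     # offsets are committed manually
}

# These would be passed through when constructing the consumer, e.g.:
# consumer = KafkaConsumer('my-topic', group_id='my-group', **consumer_config)
```

If processing a single message really takes under 1 second, the rebalance usually points to batch size: a full poll of the default 500 records at ~1 s each easily exceeds the default max_poll_interval_ms of 5 minutes, so lowering max_poll_records is often enough on its own.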


a) How do I fix this error?

b) How can I ensure my manual commit is working properly?

c) What is the correct way to commit an offset?

I have gone through this and Difference between and for Kafka and later versions to understand my problem; any help on tuning the poll, session, or heartbeat timeouts is much appreciated.

Apache Kafka: 2.11-2.1.0
kafka-python: 1.4.4
