Messages dropping between spout and bolt


I've implemented a Heron topology that reads messages from a Kafka queue. My topology has a Kafka spout and a bolt that counts the number of messages read from the queue.

When I emit, say, 10,000 messages into the Kafka queue, I can see all of them being received by the Kafka spout in the Heron topology; however, a few messages are lost at the bolt.

Following are the topology settings for Heron:

    Config config = Config.newBuilder()
            .setUserConfig("topology.max.spout.pending", 100000)
            .setUserConfig("topology.message.timeout.secs", 100000)
            .setNumContainers(1)
            .setPerContainerCpu(3)
            .setPerContainerRamInGigabytes(4)
            .setDeliverySemantics("ATLEAST_ONCE")
            .build();

Any pointers would be helpful.

EDIT: I'm using Heron's Streamlet API. I replaced the count bolt with the log bolt, but I'm seeing the same message drops in the log bolt's logs:

processingGraphBuilder.newSource(kafkaSource)
                      .log();

EDIT 2: I resolved the issue by removing the Streamlet API entirely. I reimplemented everything using the basic spout and bolt API, with acking at the spout, and this fixed the issue. I suspect the drops happened because no acking was taking place at the spout under the Streamlet API.
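To illustrate what "acking at the spout" buys you: the spout tracks each emitted tuple by a message ID until the downstream bolt acks it, and replays it on a fail or timeout. The sketch below models only this mechanism in plain Java; the class and method names are hypothetical and are not the actual Heron spout API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of the at-least-once acking pattern (mechanism only, not Heron API).
public class AckingSpoutSketch {
    private final Deque<String> queue = new ArrayDeque<>();    // messages waiting to be emitted
    private final Map<Long, String> pending = new HashMap<>(); // emitted but not yet acked
    private long nextMsgId = 0;

    public void enqueue(String message) {
        queue.addLast(message);
    }

    // Emit the next message, anchoring it to a unique message ID so a later
    // ack or fail can be matched back to the original message.
    public Long nextTuple() {
        String msg = queue.pollFirst();
        if (msg == null) return null;
        long id = nextMsgId++;
        pending.put(id, msg);
        return id;
    }

    // Downstream processing succeeded: forget the message.
    public void ack(long msgId) {
        pending.remove(msgId);
    }

    // Downstream processing failed or timed out: put the message back on the
    // queue for replay. This replay is what gives at-least-once delivery.
    public void fail(long msgId) {
        String msg = pending.remove(msgId);
        if (msg != null) queue.addFirst(msg);
    }

    public int pendingCount() { return pending.size(); }
    public int queuedCount()  { return queue.size(); }
}
```

Without the ack/fail bookkeeping, a tuple that dies between spout and bolt simply disappears, which matches the drops observed with the Streamlet API above.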

There are 2 answers

Ning Wang

Simple answer: messages shouldn't be dropped.

A few questions:

- In Heron UI, what are the all-time emit and ack counts of your spout?
- In Heron UI, what are the all-time execute, ack, and failure counts of your bolt?

Tom Cooper

When you say that messages are dropped, are you seeing fails registered in the fail count metric, or just that the execute count of the bolt does not tally with the emit count of the spout?

In Storm-compatibility mode, metrics are calculated from a sample (the default rate is 5%, I think). Counts can therefore be off by that margin: depending on when the stream is sampled, you could send 100 tuples through and see an execute count of 80 or 120.
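The sampling effect is easy to see with a toy counter. This is a sketch of count-based sampling, not Heron's actual metrics implementation: every 20th tuple is observed and the increment is scaled by 20, so the reported count moves in steps of 20 and lags or jumps around the true count.

```java
// Sketch of sample-based metric counting (hypothetical, not the Heron
// metrics code). At a 5% sample rate, one tuple in 20 is observed and
// each observation adds 20 to the reported count.
public class SampledCounter {
    private static final int SAMPLE_EVERY = 20; // 5% sample rate
    private int seen = 0;      // true number of tuples processed
    private long reported = 0; // what the metric would report

    public void onTuple() {
        // Observe every 20th tuple and scale the increment back up by 20.
        if (++seen % SAMPLE_EVERY == 0) {
            reported += SAMPLE_EVERY;
        }
    }

    public long reportedCount() { return reported; }
}
```

After 90 tuples this counter reports 80, even though nothing was dropped; the discrepancy is purely a sampling artifact, which is why a missing fail count matters more than a mismatched execute count.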