Error while fetching metadata with correlation id 32


I am using the S3 source connector to import data from S3 into an auto-created topic. I am using an AWS MSK cluster.

[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,127] WARN [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Error while fetching metadata with correlation id 1 : {source-topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1119)
[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,127] INFO [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Cluster ID: J4cTke2TRmOSsoYBO5uZdA (org.apache.kafka.clients.Metadata:279)
[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,478] WARN [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Error while fetching metadata with correlation id 3 : {source-topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1119)
[Worker-001d22b042f681d7a] [2023-10-21 17:49:44,581] WARN [source-connector|task-0] [Producer clientId=connector-producer-source-connector-0] Error while fetching metadata with correlation id 4 : {source-topic=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient:1119)

My S3 source connector configuration:

connector.class=io.confluent.connect.s3.source.S3SourceConnector
useAccelerateMode=true
s3.region=us-east-1
confluent.topic.bootstrap.servers=b-4.sink.2uql3y.c4.kafka.us-east-1.amazonaws.com:9092
auto.create.topics.enable=true
flush.size=7
schema.compatibility=NONE
tasks.max=2
topics=target-topic
pathStyleAccess=true
schema.enable=false
key.converter.schemas.enable=false
format.class=io.confluent.connect.s3.format.json.JsonFormat
aws.region=us-east-1
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
value.converter=org.apache.kafka.connect.storage.StringConverter
storage.class=io.confluent.connect.s3.storage.S3Storage
errors.log.enable=true
s3.bucket.name=bucket-name
key.converter=org.apache.kafka.connect.storage.StringConverter
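Since the producer warnings above report UNKNOWN_TOPIC_OR_PARTITION, a first sanity check is to confirm which topics actually exist on the cluster. A sketch using the standard Kafka CLI tools already used later in this question; `<bootstrap-server>` is a placeholder for the MSK broker endpoint:

```shell
# List existing topics to confirm whether source-topic / target-topic exist
bin/kafka-topics.sh --list --bootstrap-server <bootstrap-server>

# Describe a topic to check its partitions are healthy (leader elected, ISR populated)
bin/kafka-topics.sh --describe --topic target-topic --bootstrap-server <bootstrap-server>
```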

After creating the default topic _confluent-command, I received messages from S3 in the new auto-created topic.

 bin/kafka-console-consumer.sh --topic _confluent-command --consumer.config /home/ec2-user/kafka_2.12-3.5.1/config/consumer.properties --from-beginning --bootstrap-server <bootstrap-server>
�
�eyJhbGciOiJub25lIn0.eyJpc3MiOiJDb25mbHVlbnQiLCJhdWQiOiJ0cmlhbCIsImV4cCI6MTcwMDUwNTg3OCwianRpIjoiamoySWFWQTV6ckVjSG94ZUw5X1dsUSIsImlhdCI6MTY5NzkxMzg3NywibmJmIjoxNjk3OTEzNzU3LCJzdWIiOiJDb25mbHVlbnQgRW50ZXJwcmlzZSIsIm1vbml0b3JpbmciOnRydWUsImxpY2Vuc2VUeXBlIjoidHJpYWwifQ.
1

1 Answer

OneCricketeer

The content of the _confluent-command topic is Protobuf-serialized and is not human-readable. Only Confluent maintains the Protobuf schema for reading that data, to ensure their licensing is not bypassed.

If you want to read your S3 data, look at your config:

topics=target-topic
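If the connector is running successfully, the restored data should be readable from that topic with the console consumer rather than from _confluent-command. A sketch, reusing the consumer invocation from the question; `<bootstrap-server>` is a placeholder:

```shell
# Consume the topic named in the connector config, not the internal license topic
bin/kafka-console-consumer.sh --topic target-topic \
  --from-beginning --bootstrap-server <bootstrap-server>
```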

Besides this, your actual error message says that source-topic doesn't exist, or that it has no healthy partitions:

source-topic=UNKNOWN_TOPIC_OR_PARTITION
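MSK brokers frequently ship with auto.create.topics.enable=false (the auto.create.topics.enable line in the connector config above is a broker setting and has no effect there), so a missing topic may need to be created manually before the connector's producer can write to it. A sketch, assuming source-topic really is the intended destination; the partition and replication counts are illustrative:

```shell
# Create the topic the producer is failing to find;
# adjust --partitions / --replication-factor for your cluster
bin/kafka-topics.sh --create --topic source-topic \
  --partitions 2 --replication-factor 2 \
  --bootstrap-server <bootstrap-server>
```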