Docker-compose Kafka: no brokers available


I'm running Kafka, ZooKeeper, and client/producer services on local Docker and getting "no brokers available" errors from both Kafka and the connected services. The `Timed out waiting for a node assignment. Call: createTopics` line in the error below makes me wonder whether the create-topics.sh script or the bash command is the cause, but I am not sure why that would be the case.

What I have done so far:

  • Ran chmod +x create-topics.sh
  • Deleted and rebuilt everything.
  • Added a manual start command to the Kafka service in docker-compose (shown below)
  • ChatGPT
  • Google
  • Searched Stack Overflow (similar questions but no applicable resolutions)

Error in Kafka container:

WARN [AdminClient clientId=adminclient-1] Connection to node -1 (kafka1/172.31.0.8:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

Then eventually:

2024-03-21 13:30:53 [2024-03-21 17:30:53,527] ERROR org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopics
2024-03-21 13:30:53  (kafka.admin.TopicCommand$)

Error on the connecting services:

2024-03-21 13:32:55   File "/usr/local/lib/python3.11/site-packages/kafka/client_async.py", line 927, in check_version
2024-03-21 13:32:55     raise Errors.NoBrokersAvailable()
2024-03-21 13:32:55 kafka.errors.NoBrokersAvailable: NoBrokersAvailable

docker-compose.yml

version: '3.8'

services:
  database-mysql:
    image: mysql:latest
    env_file:
      - .env
    restart: always
    hostname: oaken-mysql
    container_name: oaken-mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
    ports:
      - 3306:3306
    expose:
      - 3306
    volumes: 
      - oaken-mysql:/var/lib/mysql
      - ./app/mysql/init.sql:/data/application/init.sql
    networks:
      - OakenSpirits

  mysql-kafka-processor:
    image: oaken-mysql-kafka
    env_file:
      - .env
    restart: always
    hostname: mysql-kafka-processor
    container_name: mysql-kafka-processor
    volumes: 
      - oaken-api:/app
    networks:
      - OakenSpirits

  shipping-processor:
    image: oaken-shipping
    env_file:
      - .env
    restart: always
    hostname: shipping-processor
    container_name: shipping-processor
    volumes: 
      - oaken-shipping:/app
    networks:
      - OakenSpirits

  accounting-processor:
    image: oaken-accounting
    env_file:
      - .env
    restart: always
    hostname: accounting-processor
    container_name: accounting-processor
    volumes: 
      - oaken-accounting:/app
    networks:
      - OakenSpirits

  cloudbeaver:
    image: dbeaver/cloudbeaver:latest
    container_name: oaken-dbeaver
    hostname: oaken-dbeaver
    restart: on-failure:5
    ports:
      - 8978:8978
    volumes: 
      - oaken-cloudbeaver:/opt/cloudbeaver/workspace
    networks:
      - OakenSpirits

  zoo1:
    image: confluentinc/cp-zookeeper:7.3.2
    hostname: zoo1
    container_name: zoo1
    restart: on-failure:5
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zoo1:2888:3888
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - OakenSpirits

  kafka1:
    image: confluentinc/cp-kafka:7.3.2
    hostname: kafka1
    container_name: kafka1
    depends_on:
      - zoo1
    restart: on-failure:5
    ports:
      - "9092:9092"
      - "29092:29092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:9092,EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092,DOCKER://host.docker.internal:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9999
      KAFKA_JMX_HOSTNAME: ${DOCKER_HOST_IP:-127.0.0.1}
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    volumes:
      - oaken-kafka:/kafka
      - ./create-topics.sh:/create-topics.sh
    command: ["/bin/bash", "-c", "/create-topics.sh && /bin/kafka-server-start /etc/kafka/server.properties"]
    networks:
      - OakenSpirits

networks:
  OakenSpirits:

volumes:
  oaken-mysql:
    name: oaken-mysql
  oaken-api:
    name: oaken-api
  oaken-shipping:
    name: oaken-shipping
  oaken-accounting:
    name: oaken-accounting
  oaken-cloudbeaver:
    name: oaken-cloudbeaver
  oaken-kafka:
    name: oaken-kafka

.env

KAFKA_SERVER=kafka
MYSQL_TOPIC=mysql
SHIPPING_TOPIC=shipping
INVOICES_TOPIC=invoices

MYSQL_HOST=oaken-mysql
MYSQL_ROOT_PASSWORD=mysql
MYSQL_USER=mysql
MYSQL_PASSWORD=mysql
MYSQL_DATABASE=oaken

AWS_ACCESS_KEY_ID=my-access-id
AWS_SECRET_ACCESS_KEY=my-access-key
MYSQL_LOG_BUCKET=oaken-spirits
SHIPPING_LOG_BUCKET=oaken-spirits
ACCOUNTING_LOG_BUCKET=oaken-spirits
MYSQL_LOG_BUCKET=docker

create-topics.sh

#!/bin/bash

# Create Kafka topics
/bin/kafka-topics --create --topic mysql --bootstrap-server kafka1:9092 --replication-factor 1 --partitions 1
/bin/kafka-topics --create --topic invoices --bootstrap-server kafka1:9092 --replication-factor 1 --partitions 1
/bin/kafka-topics --create --topic shipping --bootstrap-server kafka1:9092 --replication-factor 1 --partitions 1
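
A thought on the script: since it only runs kafka-topics against kafka1:9092, it presumably needs the broker to already be up before it runs. For reference, below is a sketch of how I think the same script could run from a separate one-off compose service instead of replacing the broker's command; I have not tried this, and the kafka-init name and the wait loop are my own guesses, not something from the Confluent docs.

  kafka-init:
    image: confluentinc/cp-kafka:7.3.2
    depends_on:
      - kafka1
    volumes:
      - ./create-topics.sh:/create-topics.sh
    networks:
      - OakenSpirits
    # one-shot container: poll until the broker answers, run the script, then exit
    entrypoint: ["/bin/bash", "-c", "until kafka-topics --bootstrap-server kafka1:9092 --list >/dev/null 2>&1; do sleep 5; done; /create-topics.sh"]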

shipping.yml (the Dockerfile for the shipping service; all 3 services are about the same, just different .py files)

FROM python:3.11

RUN pip install kafka-python mysql-connector-python s3fs boto3

RUN mkdir -p /app

COPY app/shipping/shipping.py /app

WORKDIR /app

CMD ["python", "shipping.py"]

1 Answer

Answered by Gregory Morris:

Two issues were identified:

  1. As OneCricketeer mentioned above, the command: entry is overriding the container's default command, so the broker itself never starts.
  2. Kafka is set to auto-create topics, so the bash script is not needed at all; it was simply waiting on a broker that never came up because of issue 1.
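
For completeness, here is roughly what the kafka1 service looks like with the override removed, so the image's own entrypoint starts the broker. This is a sketch, not the exact final config: I trimmed the listener setup to the in-network and host listeners (the original EXTERNAL listener advertised the same 9092 port as INTERNAL), and the KAFKA_AUTO_CREATE_TOPICS_ENABLE line is only there to make the auto-create behaviour explicit; cp-kafka defaults it to true, so treat that as an assumption rather than a required change.

  kafka1:
    image: confluentinc/cp-kafka:7.3.2
    hostname: kafka1
    container_name: kafka1
    depends_on:
      - zoo1
    restart: on-failure:5
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      # listener setup trimmed for brevity
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:9092,DOCKER://host.docker.internal:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"   # assumption: this is the default anyway
    # no command: override, so the default entrypoint starts the broker itself
    volumes:
      - oaken-kafka:/kafka
    networks:
      - OakenSpirits

With that in place, the topics listed in create-topics.sh are created automatically the first time a producer writes to them, so the script and its volume mount can be dropped entirely.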