I'm trying to install a Kafka cluster on Kubernetes (in my case I'm using minikube just for testing) using Bitnami charts for Kafka.
https://github.com/bitnami/charts/tree/main/bitnami/kafka
I start the installation by doing:
helm install cluster-kafka bitnami/kafka -f values.yaml
My values.yaml file is as follows:
# values.yaml for Kafka on Kubernetes using Helm and the Bitnami chart in KRaft mode.
# Basic configuration of the Kafka cluster.
replicaCount: 3 # Number of Kafka broker replicas, for high availability.
# Docker image configuration.
image:
  registry: docker.io # Docker registry the image is pulled from.
  repository: bitnami/kafka # Kafka image repository.
  tag: 3.6 # Image tag to use ('latest' for the most recent version).
# Authentication configuration.
auth:
  clientProtocol: plaintext # Client communication protocol, unencrypted.
  interBrokerProtocol: plaintext # Inter-broker communication protocol, unencrypted.
# Kubernetes service configuration to expose Kafka internally.
service:
  type: ClusterIP # Kubernetes service type to expose Kafka inside the cluster.
# Configuration for external access to the Kafka brokers.
externalAccess:
  enabled: true # Enables external access.
  controller:
    service:
      type: NodePort
      nodePorts: [31090, 31091, 31092] # NodePort ports for each broker.
# Liveness and readiness probes for the Kafka brokers.
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 2
readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 15
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 2
# Persistence configuration to store Kafka data.
persistence:
  enabled: true # Enables persistence.
  storageClass: "standard" # Storage class to use.
  accessModes:
    - ReadWriteOnce # Volume access mode.
  size: 2Gi # Storage volume size.
# Zookeeper disabled, since KRaft does not require it.
zookeeper:
  enabled: false
# KRaft mode configuration.
kraft:
  enabled: true
  clusterId: "vKaEBaltQuqktgAA3wkccA" # KRaft cluster identifier.
  controllerQuorumVoters: "0@cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9093,1@cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9093,2@cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9093"
# Listener configuration.
listeners:
  client:
    name: CLIENT
    containerPort: 9092
    protocol: PLAINTEXT
  controller:
    name: CONTROLLER
    containerPort: 9093
    protocol: PLAINTEXT
  interbroker:
    name: INTERNAL
    containerPort: 9094
    protocol: PLAINTEXT
  external:
    name: EXTERNAL
    containerPort: 9095
    protocol: PLAINTEXT
  advertisedListeners:
    - CLIENT://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9093
    - CLIENT://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9093
    - CLIENT://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9093
  # overrideListeners: "CLIENT://:9092,CONTROLLER://:9093,INTERNAL://:9094,EXTERNAL://:9095"
  securityProtocolMap: "CLIENT:PLAINTEXT,CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
I created the file myself after reviewing the Bitnami chart documentation for Kafka, so it very likely contains more than one error. What I need is a Kafka cluster with at least 3 brokers running in KRaft mode, but all the examples I found use Zookeeper. Once I run helm install I get the following output:
NAME: cluster-kafka
LAST DEPLOYED: Wed Feb 14 21:59:02 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 26.8.5
APP VERSION: 3.6.1
---------------------------------------------------------------------------------------------
WARNING
By specifying "serviceType=LoadBalancer" and not configuring the authentication
you have most likely exposed the Kafka service externally without any
authentication mechanism.
For security reasons, we strongly suggest that you switch to "ClusterIP" or
"NodePort". As alternative, you can also configure the Kafka authentication.
---------------------------------------------------------------------------------------------
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
cluster-kafka.default.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092
cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092
cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092
To create a pod that you can use as a Kafka client run the following commands:
kubectl run cluster-kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.6 --namespace default --command -- sleep infinity
kubectl exec --tty -i cluster-kafka-client --namespace default -- bash
PRODUCER:
kafka-console-producer.sh \
--broker-list cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092,cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092,cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092 \
--topic test
CONSUMER:
kafka-console-consumer.sh \
--bootstrap-server cluster-kafka.default.svc.cluster.local:9092 \
--topic test \
--from-beginning
To connect to your Kafka controller+broker nodes from outside the cluster, follow these instructions:
Kafka brokers domain: You can get the external node IP from the Kafka configuration file with the following commands (Check the EXTERNAL listener)
1. Obtain the pod name:
kubectl get pods --namespace default -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=cluster-kafka,app.kubernetes.io/component=kafka"
2. Obtain pod configuration:
kubectl exec -it KAFKA_POD -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners
Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:
echo "$(kubectl get svc --namespace default -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=cluster-kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"
WARNING: Rolling tag detected (bitnami/kafka:3.6), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/
And if I do:
kubectl get pods
What I get is:
NAME READY STATUS RESTARTS AGE
cluster-kafka-controller-0 0/1 CrashLoopBackOff 8 (4m12s ago) 21m
cluster-kafka-controller-1 0/1 CrashLoopBackOff 8 (4m20s ago) 21m
cluster-kafka-controller-2 0/1 CrashLoopBackOff 8 (4m21s ago) 21m
When I review the logs of some of the pods, I see the following:
2024-02-15T01:10:35.689Z | kafka 01:10:35.68 INFO ==>
2024-02-15T01:10:35.690Z | kafka 01:10:35.69 INFO ==> Welcome to the Bitnami kafka container
2024-02-15T01:10:35.691Z | kafka 01:10:35.69 INFO ==> Subscribe to project updates by watching https://github.com/bitnami/containers
2024-02-15T01:10:35.692Z | kafka 01:10:35.69 INFO ==> Submit issues and feature requests at https://github.com/bitnami/containers/issues
2024-02-15T01:10:35.693Z | kafka 01:10:35.69 INFO ==>
2024-02-15T01:10:35.694Z | kafka 01:10:35.69 INFO ==> ** Starting Kafka setup **
2024-02-15T01:10:35.742Z | kafka 01:10:35.74 INFO ==> Initializing KRaft storage metadata
2024-02-15T01:10:35.745Z | kafka 01:10:35.74 INFO ==> Formatting storage directories to add metadata...
2024-02-15T01:10:37.269Z | Exception in thread "main" java.lang.IllegalArgumentException: Error creating broker listeners from '[CLIENT://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-0.cluster-kafka-controller-headless.default.svc.cluster.local:9093 CLIENT://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-1.cluster-kafka-controller-headless.default.svc.cluster.local:9093 CLIENT://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9092,CONTROLLER://cluster-kafka-controller-2.cluster-kafka-controller-headless.default.svc.cluster.local:9093]': No security protocol defined for listener [CLIENT
2024-02-15T01:10:37.269Z | at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:266)
2024-02-15T01:10:37.269Z | at kafka.server.KafkaConfig.effectiveAdvertisedListeners(KafkaConfig.scala:2154)
2024-02-15T01:10:37.269Z | at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:2275)
2024-02-15T01:10:37.269Z | at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:2233)
2024-02-15T01:10:37.269Z | at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1603)
2024-02-15T01:10:37.269Z | at kafka.tools.StorageTool$.$anonfun$main$1(StorageTool.scala:50)
2024-02-15T01:10:37.269Z | at scala.Option.flatMap(Option.scala:271)
2024-02-15T01:10:37.270Z | at kafka.tools.StorageTool$.main(StorageTool.scala:50)
2024-02-15T01:10:37.270Z | at kafka.tools.StorageTool.main(StorageTool.scala)
2024-02-15T01:10:37.270Z | Caused by: java.lang.IllegalArgumentException: No security protocol defined for listener [CLIENT
2024-02-15T01:10:37.270Z | at kafka.cluster.EndPoint$.$anonfun$createEndPoint$2(EndPoint.scala:49)
2024-02-15T01:10:37.270Z | at scala.collection.immutable.Map$Map4.getOrElse(Map.scala:450)
2024-02-15T01:10:37.270Z | at kafka.cluster.EndPoint$.securityProtocol$1(EndPoint.scala:49)
2024-02-15T01:10:37.270Z | at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:57)
2024-02-15T01:10:37.270Z | at kafka.utils.CoreUtils$.$anonfun$listenerListToEndPoints$10(CoreUtils.scala:263)
2024-02-15T01:10:37.270Z | at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
2024-02-15T01:10:37.270Z | at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
2024-02-15T01:10:37.270Z | at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
2024-02-15T01:10:37.270Z | at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
2024-02-15T01:10:37.270Z | at scala.collection.TraversableLike.map(TraversableLike.scala:286)
2024-02-15T01:10:37.270Z | at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
2024-02-15T01:10:37.270Z | at scala.collection.AbstractTraversable.map(Traversable.scala:108)
2024-02-15T01:10:37.270Z | at kafka.utils.CoreUtils$.listenerListToEndPoints(CoreUtils.scala:263)
2024-02-15T01:10:37.270Z | ... 8 more
I get an error:
No security protocol defined for listener
I tried all the configurations I could think of, reviewing the possible parameters to add to the values.yaml file, but I still get the same type of error.
What could be the problem? Why is this happening? Is there any other, separate problem that I need to fix?
For now I don't care about security; I simply need PLAINTEXT because this is going onto a local server for testing. I want a working, functional cluster before starting to deal with security and ACL issues.
I tried changing the listener configuration and using overrideListeners instead of client, controller, etc., as you can see in the file above (that's why it's commented out). What I need is for the listeners' security protocol to be configured in some way so that I can bring up the cluster and start testing connectivity between pods, or connect to a topic from a server outside the Kubernetes cluster.
Thanks for the help
If you define your own advertised listeners, the value must be a comma-separated string, not a YAML list. Notice that your error includes an opening bracket, which doesn't appear anywhere in your values, meaning the value is being rendered as a list object...
If it is not defined, it will be built from the listeners mapping as a comma-separated string.
Source - https://github.com/bitnami/charts/blob/main/bitnami/kafka/templates/_helpers.tpl#L517
More specifically, the advertised listeners are set per broker and cannot be set for the whole cluster at once, so the order of the list you've tried to define isn't taken into account, and one broker shouldn't advertise the address of any other.
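As a minimal sketch (assuming chart version 26.8.5, the one shown in your output), you can drop the advertisedListeners, overrideListeners, and securityProtocolMap overrides entirely and keep only the listener definitions; the helper linked above then builds the advertised listeners per broker, and the security protocol map should likewise be derived from the listener protocols:

# Hypothetical listeners section; everything else in values.yaml stays as you have it.
listeners:
  client:
    name: CLIENT
    containerPort: 9092
    protocol: PLAINTEXT
  controller:
    name: CONTROLLER
    containerPort: 9093
    protocol: PLAINTEXT
  interbroker:
    name: INTERNAL
    containerPort: 9094
    protocol: PLAINTEXT
  external:
    name: EXTERNAL
    containerPort: 9095
    protocol: PLAINTEXT
  # advertisedListeners, overrideListeners and securityProtocolMap intentionally left unset:
  # the chart derives them from the mapping above for each broker.

You can check what would actually be rendered before installing with helm template cluster-kafka bitnami/kafka -f values.yaml and reviewing the generated listener settings.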