Set Kafka & ZooKeeper volume sizes using the Bitnami Kafka Helm chart


I have set up the following Helm chart values for Bitnami Kafka & ZooKeeper:

kafka:
  persistence:
    enabled: true
    accessModes: ["ReadWriteOnce"]
    size: 50M
    mountPath: /bitnami/kafka
    storageClass: default
    existingClaim: ""
  zookeeper:
    volumePermissions:
      enabled: true
    persistence:
      enabled: true
      storageClass: default
      existingClaim: ""
      accessModes: [ "ReadWriteOnce" ]
      size: 50M
      dataLogDir:
        size: 50M
        existingClaim: ""

It seems the PVC created for Kafka is 16 GiB. Is there a way to set a very small disk size for testing purposes?

There are 2 answers

Deniss M.

Once storageClass: default is changed to storageClass: "" or storageClass: "-", the chart starts to take the supplied size values into account. Locally I could go as low as 50M, but on the cluster I was only able to go as low as 1Gi.

I think this relates to the existing PV setup.
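For illustration, a minimal values fragment along these lines (the size is the 50M from the question; whether your cluster's provisioner honors sizes that small depends on its StorageClass setup):

```yaml
persistence:
  enabled: true
  storageClass: "-"   # or "" — stops forcing the default class so the supplied size takes effect
  size: 50M
```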

abinet

According to the values.yaml of the Bitnami Kafka Helm chart (https://github.com/bitnami/charts/blob/main/bitnami/kafka/values.yaml), you don't need to wrap the values in a top-level "kafka" field. So, removing "kafka:" and adjusting the indentation accordingly should work:

persistence:
  enabled: true
  accessModes: ["ReadWriteOnce"]
  size: 50M
  mountPath: /bitnami/kafka
  storageClass: default
  existingClaim: ""
zookeeper:
  volumePermissions:
    enabled: true
  persistence:
    enabled: true
    storageClass: default
    existingClaim: ""
    accessModes: [ "ReadWriteOnce" ]
    size: 50M
    dataLogDir:
      size: 50M
      existingClaim: ""
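To see why the extra nesting matters, here is a rough Python sketch of the lookup Helm effectively performs: the chart reads .Values.persistence.size, so values wrapped under an extra "kafka:" key never reach it and the chart's built-in default wins. (The "8Gi" default and the effective_size helper are illustrative placeholders, not the chart's actual merge logic.)

```python
# Assumed chart default, for illustration only.
chart_defaults = {"persistence": {"size": "8Gi"}}

def effective_size(user_values: dict) -> str:
    """Return the size the chart would actually use (shallow illustration)."""
    return user_values.get("persistence", {}).get(
        "size", chart_defaults["persistence"]["size"]
    )

wrapped = {"kafka": {"persistence": {"size": "50M"}}}  # nested one level too deep
fixed = {"persistence": {"size": "50M"}}               # top-level, as the chart expects

print(effective_size(wrapped))  # falls back to the default: 8Gi
print(effective_size(fixed))    # 50M
```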