What is the correct (typical) value for elasticsearch.hosts in kibana.config in Kubernetes?


In the OpenDistro Helm README.md, the Example Secure Kibana Config With Custom Certs defines:

    elasticsearch.hosts: https://elasticsearch.example.com:443

This would imply a DNS hostname external to the Kubernetes cluster. However, when using the defaults (and not using custom certs), the generated value is:

    # If no custom configuration provided, default to internal DNS
    - name: ELASTICSEARCH_HOSTS
      value: https://opendistro-es-client-service:9200

which comes from this line in kibana-deployment.yaml:

    value: https://{{ template "opendistro-es.fullname" . }}-client-service:9200

Shouldn't a typical Kibana config (the rendered kibana.yml) also use the internal DNS, and therefore still be opendistro-es-client-service:9200, or opendistro-es-client-service.default.svc.cluster.local:9200 (assuming, for example, the default namespace)? Why would you not use the internal DNS?
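For illustration, here is a sketch of what that would look like in the chart's kibana.config; the service name and default namespace are assumptions based on the defaults quoted above:

    kibana:
      config:
        # Assumes the chart default fullname and the "default" namespace
        elasticsearch.hosts: https://opendistro-es-client-service.default.svc.cluster.local:9200
        # The cert served by the client service would then need this name
        # (or the short service name) in its SANs for TLS verification to pass.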

UPDATE: There is a similar question about opendistro_security.nodes_dn for the elasticsearch.config (which is copied into elasticsearch.yml):

    # See: https://github.com/opendistro-for-elasticsearch/security/blob/master/securityconfig/elasticsearch.yml.example#L17
    opendistro_security.nodes_dn:
      - 'CN=nodes.example.com'

It is not spelled out anywhere that I can find, but I am assuming this is the CN from the Subject of the cert defined by elasticsearch.ssl.transport.existingCertSecret. Again, shouldn't these be, if anything, the internal Kubernetes DNS names?
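For what it's worth, a sketch of checking that assumption against the cert in the existing secret (the secret name and data key below are hypothetical):

    # Print the Subject of the transport cert stored in the secret:
    #   kubectl get secret es-transport-certs \
    #     -o jsonpath='{.data.transport-crt\.pem}' | base64 -d \
    #     | openssl x509 -noout -subject
    # Then list that Subject DN here:
    opendistro_security.nodes_dn:
      - 'CN=nodes.example.com'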

Or does it not matter if opendistro_security.ssl.transport.enforce_hostname_verification is false? (A sketch of setting this explicitly follows the list below.)

  • The default is true.
  • The value in the default elasticsearch.yml (according to the Helm README.md) is false.
  • The actual example (further down in the README.md) does not set it, so presumably it is true.
  • But the actual values.yaml has a commented-out value set to false. (I presume you are supposed to uncomment that when defining config, which you must do when adding your own certs.)
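If you do rely on relaxed verification, a sketch of making that explicit via the chart's elasticsearch.config (structure inferred from the question, rather than depending on the commented-out value in values.yaml):

    elasticsearch:
      config:
        opendistro_security.ssl.transport.enforce_hostname_verification: false
        # With verification off, nodes_dn matching still authenticates nodes,
        # but the hostname in the peer cert is not checked against the connection.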

1 Answer

Answered by JohnMops:

You should create your own certificates (best practice is to create separate certificates for node/admin/rest) and inject them into the master pods. Use those certificates in the kibana.yml to establish the connection to your ES cluster.
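For concreteness, a rough sketch of values that reference such secrets; only elasticsearch.ssl.transport.existingCertSecret appears in the question above, so the rest/admin key names are assumptions patterned after it, and the secret names are hypothetical:

    elasticsearch:
      ssl:
        transport:
          existingCertSecret: es-transport-certs  # hypothetical secret name
        rest:
          existingCertSecret: es-rest-certs       # hypothetical secret name
        admin:
          existingCertSecret: es-admin-certs      # hypothetical secret name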

The value for the host is created automatically and matches the ES client Service object in your cluster. This is how Kibana knows where to connect, since that Service routes the requests to your master pods.
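As a concrete illustration, an abridged sketch of how the Service and the injected address line up (the name and port are taken from the defaults quoted in the question):

    # Service object created by the chart (abridged sketch):
    apiVersion: v1
    kind: Service
    metadata:
      name: opendistro-es-client-service
    spec:
      ports:
        - port: 9200
    # ...which is exactly the host Kibana is pointed at:
    #   ELASTICSEARCH_HOSTS=https://opendistro-es-client-service:9200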