Fluentd JSON logs truncated/split after 16385 characters - how to concatenate?


I have deployed the Bitnami EFK stack in a K8s environment:

  repository: bitnami/fluentd
  tag: 1.12.1-debian-10-r0

Currently, one of the modules/applications inside my namespaces is configured to generate JSON logs, and I can see the logs in Kibana in JSON format.

But the logs are split/truncated after 16385 characters, so I cannot see the full log trace. I have tested some of the concat plugins, but they have not given the expected results so far, or maybe I implemented them incorrectly (see the sketch after the configuration below).

    fluentd-inputs.conf: |
      # Get the logs from the containers running in the node
      <source>
        @type tail
        path /var/log/containers/*.log
        tag kubernetes.*
        <parse>
          @type json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </parse>
      </source>
      # enrich with kubernetes metadata
      <filter kubernetes.**>
        @type kubernetes_metadata
      </filter>
      <filter kubernetes.**>
        @type parser
        key_name log
        reserve_data true
        <parse>
          @type json
        </parse>
      </filter>
      <filter kubernetes.**>
        @type concat
        key log
        stream_identity_key @timestamp
        #multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d+ .*/
        multiline_start_regexp /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}/
        flush_interval 5
      </filter>

    fluentd-output.conf: |
      <match **>
        @type forward
        # Elasticsearch forward
        <buffer>
          @type file
          path /opt/bitnami/fluentd/logs/buffers/logs.buffer
          total_limit_size 1024MB
          chunk_limit_size 16MB
          flush_mode interval
          retry_type exponential_backoff
          retry_timeout 30m
          retry_max_interval 30
          overflow_action drop_oldest_chunk
          flush_thread_count 2
          flush_interval 5s
        </buffer>
      </match>
      {{- else }}
      # Send the logs to the standard output
      <match **>
        @type stdout
      </match>
      {{- end }}
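
For context, my current understanding is that the container runtime itself splits any log line longer than ~16K into several partial records before fluentd sees them, which the existing concat filter (joining multiline traces by timestamp prefix) probably does not handle. One variant I have been trying instead is based on the fluent-plugin-concat README's Docker example (just a sketch, assuming the Docker json-file log driver, where only the last part of a split line ends with a newline):

      <filter kubernetes.**>
        @type concat
        key log
        # Docker's json-file driver splits long lines into ~16K parts and only
        # the final part ends with "\n", so keep joining until a newline is seen
        multiline_end_regexp /\n$/
        separator ""
        use_first_timestamp true
        flush_interval 5
      </filter>

If I understand the plugin docs correctly, this filter would also have to run before the parser filter that parses the log key, because the individual split parts are not valid JSON on their own.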

I am not sure, but one reason could be that the fluentd configuration already uses some plugins to parse the JSON data, so perhaps the concat plugin has to be wired in differently, or configured in another way? https://github.com/fluent-plugins-nursery/fluent-plugin-concat
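
For reference, the plugin README also documents a mode for the containerd/CRI-O log format, where every line carries a logtag of P (partial) or F (full). If the cluster runs a CRI runtime instead of Docker, something along these lines might be the direction to take (again only a sketch copied from the README, not verified in my setup, and it would also require parsing the source files with the CRI format instead of @type json):

      <filter kubernetes.**>
        @type concat
        key log
        # CRI log lines are tagged P (partial) or F (full); the plugin uses the
        # logtag plus the stream (stdout/stderr) to stitch the pieces together
        use_partial_cri_logtag true
        partial_cri_logtag_key logtag
        partial_cri_stream_key stream
        separator ""
      </filter>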

Can anyone please help? Thanks.
