I've noticed log drops when the Fluentd agent pod's CPU utilisation exceeds 1 core. Could anyone suggest how to implement multi-threading (or otherwise use more than one core) to address this log drop issue? Thanks in advance.
fluentd_config:
<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/*app*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag k8s.*
  read_from_head true
  <parse>
    @type json
    time_key @timestamp
    time_format %Y-%m-%dT%H:%M:%S.%N%z
    keep_time_key true
  </parse>
</source>

<match k8s.**>
  @type copy
  @id k8s
  <store>
    @type elasticsearch
    @id k8s_es
    @log_level debug
    scheme http
    host "es.app.host"
    port "80"
    log_es_400_reason true
    logstash_dateformat %Y.%m.%d.%p
    logstash_format true
    logstash_prefix ${$.kubernetes.labels.app}
    reconnect_on_error true
    reload_on_failure true
    reload_connections false
    suppress_type_name true
    sniffer_class_name Fluent::Plugin::ElasticsearchSimpleSniffer
    request_timeout 2147483648
    compression_level best_compression
    include_timestamp true
    utc_index false
    time_key_format "%Y-%m-%dT%H:%M:%S.%N%z"
    time_key time
    id_key _hash
    remove_keys _hash
    <buffer tag, $.kubernetes.labels.app>
      @type file
      flush_mode interval
      flush_thread_count 16
      path /var/log/fluentd-buffers/k8s.buffer
      chunk_limit_size 48MB
      queue_limit_length 512
      flush_interval 5s
      overflow_action drop_oldest_chunk
      retry_max_interval 30s
      retry_forever false
      retry_type exponential_backoff
      retry_timeout 1h
      retry_wait 20s
      retry_max_times 30
    </buffer>
  </store>
  <store>
    @type prometheus
    @id k8s_pro
    <metric>
      name fluentd_output_status_num_records_total
      type counter
      desc The total number of outgoing records
      <labels>
        tag ${tag}
        hostname ${hostname}
      </labels>
    </metric>
  </store>
</match>
How can I use multi-threading in Fluentd so it uses more than one core and mitigates the log drops?
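From what I understand, flush_thread_count only adds flush threads for I/O, and a single Fluentd Ruby process won't use more than one core for parsing, so I'm wondering whether the multi-process workers feature (the <system> workers directive) is the right way to go. Below is a rough sketch of what I have in mind; the worker count of 2, the *app-a*/*app-b* path globs, and the per-worker pos_file names are placeholders I made up, not values from my current setup:

<system>
  # spawn multiple worker processes so Fluentd can use more than one core
  workers 2
</system>

# in_tail cannot be shared across workers, so give each worker its own
# tail source and split the file glob between them
# (*app-a* / *app-b* are hypothetical patterns)
<worker 0>
  <source>
    @type tail
    path /var/log/containers/*app-a*.log
    pos_file /var/log/fluentd-containers-w0.log.pos
    tag k8s.*
    read_from_head true
    <parse>
      @type json
    </parse>
  </source>
</worker>

<worker 1>
  <source>
    @type tail
    path /var/log/containers/*app-b*.log
    pos_file /var/log/fluentd-containers-w1.log.pos
    tag k8s.*
    read_from_head true
    <parse>
      @type json
    </parse>
  </source>
</worker>

# the existing <match k8s.**> block would stay at the top level; each worker
# runs its own copy of the output with its own buffer

If this is roughly the right direction, do I also need to make the file buffer path unique per worker? I believe recent Fluentd versions append a per-worker suffix to the buffer path automatically, but I'm not certain, and I'd also like to know whether splitting the tail globs like this is the usual way to spread in_tail load across workers.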