Logstash is retaining messages for a few hours in Kubernetes


I'm trying to use the Logstash JDBC input plugin to synchronize data from a Postgres database to some outputs (Elasticsearch, RabbitMQ).

The problem: Logstash runs fine every hour (that's how I configured the schedule), and it reads the database successfully, but for some reason it doesn't send the messages right away. Only after a few hours are the messages actually sent.
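The schedule itself is just an hourly cron expression injected through an environment variable; the exact value isn't shown below, but it's something along these lines (illustrative value only):

# Assumed value of LOGSTASH_CRONEXPRESSION - fires at minute 0 of every hour
schedule => "0 * * * *"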

At first I thought it was a performance issue caused by a high volume of messages, but even with just one message the behavior is the same.

I tested locally with Docker Compose and I don't have the issue there, but in a Kubernetes pod it doesn't work.

pipeline.conf

input {
  jdbc {
    jdbc_driver_class => "${JDBC_DRIVER_CLASS}"
    jdbc_connection_string => "${JDBC_CONNECTION_STRING}"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
    jdbc_paging_enabled => false
    codec => "json"
    tracking_column => "sync_unix_ts"
    use_column_value => true
    tracking_column_type => "numeric"
    schedule => "${LOGSTASH_CRONEXPRESSION}"
    statement_filepath => "/usr/share/logstash/query/elasticsearch-query.sql"
  }
}

filter {
  json {
    source => "fieldjson"
  }
  mutate {
    remove_field => ["fieldjson", "sync_unix_ts"]
  }
}

output {
  elasticsearch {
    index => "index_name"
    document_id => "%{id}"
    hosts => ["${ELASTICSEARCH_HOST}"]
  }
}
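The SQL file itself isn't shown here; roughly, it selects the rows to sync and filters on the tracking column using the jdbc input's :sql_last_value placeholder. Table and column names other than sync_unix_ts are placeholders:

-- elasticsearch-query.sql (sketch, not the exact query)
SELECT id,
       fieldjson,
       sync_unix_ts
FROM   my_table
WHERE  sync_unix_ts > :sql_last_value
ORDER  BY sync_unix_ts ASC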

logstash.yaml

http.host: "0.0.0.0"
xpack.monitoring.enabled: false

I don't know whether this is a buffering issue or not...
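If it is buffering, the only knobs I know of are the pipeline batch settings and the queue type in the Logstash settings file. I haven't overridden them, so they should still be at their documented defaults (values below are those defaults, listed as an assumption):

# Not set explicitly in my logstash.yaml, so these defaults should apply
pipeline.batch.size: 125   # events a worker collects before flushing to outputs
pipeline.batch.delay: 50   # ms to wait for a full batch before flushing anyway
queue.type: memory         # in-memory queue between the input and filter/output stages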

Thanks for your answers
