Debugging Elasticsearch ingest pipelines with a grok processor


I have an Elasticsearch ingest pipeline with a grok processor defined, along with error handling:

{
  "my_ingest" : {
    "description" : "parse multiple patterns",
    "processors" : [
      {
        "grok" : {
          "field" : "message",
          "patterns" : [
            """^\[end  ] %{DATA:method} \'%{GREEDYDATA:url}' %{DATA:status} :: Duration: %{DATA:duration} ms""",
            """^\[start] %{DATA:method} \'%{GREEDYDATA:url}' :: Start Time:%{GREEDYDATA:starttime}""",
            "%{GREEDYDATA:message}"
          ],
          "on_failure" : [
            {
              "set" : {
                "field" : "failure",
                "value" : "{{_ingest.on_failure_processor_type }}-{{ _ingest.on_failure_message }}"
              }
            }
          ]
        }
      }
    ],
    "on_failure" : [
      {
        "set" : {
          "field" : "_index",
          "value" : "failedindex"
        }
      }
    ]
  }
}
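
For reference, this is the kind of simulate call the question refers to; the sample document is a made-up example, and `?verbose=true` (a real parameter of the simulate API) shows the result of each processor individually:

```
POST _ingest/pipeline/my_ingest/_simulate?verbose=true
{
  "docs": [
    {
      "_source": {
        "message": "[end  ] GET '/api/users' 200 :: Duration: 123 ms"
      }
    }
  ]
}
```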

I am referring to this pipeline in my filebeat.yml, and the grok patterns work when I run a simulate in Dev Tools. But when I ship the actual logs, I do not see the log statements; it looks like they are failing to get parsed and are not visible in Kibana. I also don't see a new index created where I was hoping to see the errors logged, as defined in the pipeline-level on_failure block. Can someone please suggest pointers for debugging the issue?
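
For context, this is roughly how the pipeline is wired up in filebeat.yml (the host is a placeholder; `pipeline` is the standard Filebeat setting for naming an ingest pipeline on the Elasticsearch output):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]   # placeholder, adjust to your cluster
  pipeline: my_ingest
```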

How do I access on_failure_processor_type and on_failure_message from the ingest metadata?
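
For what it's worth, those fields are only populated inside an on_failure block, via the _ingest metadata, and only when a processor actually fails; since the last pattern in the list (%{GREEDYDATA:message}) matches almost any line, the grok processor here may never fail at all. A sketch of reading them at the pipeline level (the error.* field names are just an illustration):

```
"on_failure" : [
  {
    "set" : {
      "field" : "error.message",
      "value" : "{{ _ingest.on_failure_message }}"
    }
  },
  {
    "set" : {
      "field" : "error.processor_type",
      "value" : "{{ _ingest.on_failure_processor_type }}"
    }
  }
]
```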

Thanks

There is 1 answer

xeraa answered:

The _simulate endpoint is generally the best starting point for debugging.
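
When simulate passes but real traffic fails, it can also help to sanity-check the pattern against a raw log line outside of Elasticsearch. A minimal sketch, assuming DATA behaves like the lazy regex `.*?` and GREEDYDATA like the greedy `.*` (grok compiles to Oniguruma regexes, so plain Python re is close enough for a smoke test); the sample line is hypothetical, so substitute one copied verbatim from your logs:

```python
import re

# Hand-translation of the question's "[end  ]" grok pattern:
# DATA -> lazy .*?, GREEDYDATA -> greedy .*
end_pattern = (
    r"^\[end  ] (?P<method>.*?) "
    r"'(?P<url>.*)' "
    r"(?P<status>.*?) :: Duration: (?P<duration>.*?) ms"
)

# Hypothetical sample line; replace with a real one from your logs.
line = "[end  ] GET '/api/users' 200 :: Duration: 123 ms"

m = re.search(end_pattern, line)
fields = m.groupdict() if m else None
print(fields)
```

If this fails on a real line while simulate succeeds, the line reaching the pipeline probably differs from the one you pasted into Dev Tools (extra whitespace, multiline events, or Filebeat processors modifying the message).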

If that doesn't solve the issue, please post a sample document; otherwise we won't be able to help.

Also, regarding "i also don't see a new index created": are you sure the data is being sent to Elasticsearch at all? The logs from Filebeat, or whatever shipper you are using, would be worth a check (or a share).
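
One way to confirm that documents are actually flowing through the pipeline is the ingest section of the node stats API, which counts successes and failures per pipeline (the pipeline name below is taken from the question; `filter_path` is a standard response-filtering parameter):

```
GET _nodes/stats/ingest?filter_path=nodes.*.ingest.pipelines.my_ingest

GET failedindex/_search
```

If the pipeline's `count` stays at zero while Filebeat is running, the documents are not reaching the pipeline at all, which points at the Filebeat output configuration rather than the grok patterns.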