Unable to serialize JSON-type logs in Fluentd (logging-operator)


This is my actual log:

{
    "level": "info",
    "time": "2024-03-28T10:34:44.345Z",
    "req": {
        "id": 6,
        "method": "POST",
        "url": "/xx/xx/xxx",
        "query": {},
        "headers": {
            "x-request-id": "91d4b3e2fcdf23f1c6ccccccccc90cc",
            "x-real-ip": "10.100.00.000",
            "x-forwarded-for": "10.100.00.000",
            "x-forwarded-host": "xxxx-sit.xxxx.cn",
            "x-forwarded-port": "443",
            "x-forwarded-proto": "https",
            "x-forwarded-scheme": "https",
            "x-scheme": "https",
            "x-original-forwarded-for": "10.100.00.000, 10.100.00.000",
            "content-length": "59",
            "user-agent": "Dart/3.1 (dart:io)",
            "content-type": "application/json"
        }
    },
    "context": "MessageService",
    "error": "RESTEASY003210: Could not find resource for full path: https://ccc.ccc.ccc.com/api/v1/users.info?username=cccc",
    "msg": "message log"
}

It is JSON-formatted log data, and my Flow configuration (logging-operator) is as follows:


    spec:
      filters:
        - tag_normaliser: {}
        - parser:
            key_name: message
            parse:
              type: json
            remove_key_name_field: true
            reserve_data: true
        - record_transformer:
            enable_ruby: true
            records:
              - app: ${record["kubernetes"]["labels"]["app"]}
              - node: ${record["kubernetes"]["host"]}
              - namespace: ${record["kubernetes"]["namespace_name"]}
            remove_keys: $.kubernetes.host,$.kubernetes.namespace_name
      globalOutputRefs: []
      localOutputRefs:
        - output-alies
      match:
        - select:
            container_names: []
            hosts: []
            labels:
              idp.app.logging: 'true'
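As a rough sketch of what the two filters above do, here is a plain-Python approximation (not the actual Fluentd plugin code, and the input record is hypothetical): the parser filter parses the `message` field as JSON and merges the result into the record, and the record_transformer lifts kubernetes metadata into top-level fields and removes the copied source keys.

```python
import json

def apply_parser(record, key_name="message"):
    """Approximate Fluentd's parser filter with remove_key_name_field
    and reserve_data enabled: drop the source key and merge the parsed
    JSON into the record."""
    raw = record.pop(key_name)
    record.update(json.loads(raw))  # raises JSONDecodeError on non-JSON input
    return record

def apply_record_transformer(record):
    """Approximate the record_transformer: lift kubernetes metadata to
    top-level keys, then remove the fields listed in remove_keys."""
    k8s = record["kubernetes"]
    record["app"] = k8s["labels"]["app"]
    record["node"] = k8s.pop("host")
    record["namespace"] = k8s.pop("namespace_name")
    return record

# Hypothetical input record, shaped like the Kibana example below.
record = {
    "message": '{"name":"John","age":30}',
    "kubernetes": {
        "labels": {"app": "ginoneuat"},
        "host": "uat-xxx-worker",
        "namespace_name": "idp",
    },
}
out = apply_record_transformer(apply_parser(record))
print(out["app"], out["namespace"], out["age"])
```

If `json.loads` fails in the first step, the whole record is rejected, which matches the observation that a non-parsable `message` never reaches Elasticsearch.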


After the logs are forwarded to Elasticsearch, nothing is displayed in Kibana. However, with the Flow configuration unchanged, if I send this test log instead:

{"name":"John","age":30,"city":"New York","colors":{"first":"red","second":"blue"}}

then the log is displayed and serialized in Kibana:

{
    "_index": "uat-2024-03",
    "_type": "_doc",
    "_id": "xxx",
    "_version": 1,
    "_score": null,
    "_source": {
        "stream": "stderr",
        "logtag": "F",
        "kubernetes": {
            "pod_name": "ginoneuat-xxxx-q8ht2",
            "pod_id": "b146652",
            "container_name": "ginoneuat",
            "docker_id": "b6f8c3cc",
            "container_hash": "xxxx",
            "container_image": "/go/goone:v15"
        },
        "name": "John",
        "age": 30,
        "city": "New York",
        "colors": {
            "first": "red",
            "second": "blue"
        },
        "app": "ginoneuat",
        "node": "uat-xxx-worker",
        "namespace": "idp",
        "@timestamp": "2024-03-28T23:30:14.336201933+00:00"
    },
    "fields": {
        "@timestamp": ["2024-03-28T23:30:14.336Z"]
    },
    "highlight": {
        "app": ["@kibana-highlighted-field@ginoneuat@/kibana-highlighted-field@"]
    },
    "sort": [1711668614336]
}
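One way to see the difference between the two inputs is to run each through a plain JSON parser (a Python illustration, not Fluentd code, and assuming the application really emits the first log pretty-printed across multiple lines as shown): the one-line test record parses as a complete JSON document, while any single line of a pretty-printed document does not parse on its own.

```python
import json

# The single-line test record from the question: one complete JSON
# document on one line, which a line-oriented JSON parser can handle.
test_line = ('{"name":"John","age":30,"city":"New York",'
             '"colors":{"first":"red","second":"blue"}}')
parsed = json.loads(test_line)
print(parsed["colors"]["second"])  # -> blue

# By contrast, a pretty-printed document arrives from the container
# runtime one line at a time, so a line-by-line parser sees fragments
# like this one, which are not valid JSON on their own:
fragment = '    "level": "info",'
try:
    json.loads(fragment)
except json.JSONDecodeError:
    print("fragment is not valid JSON by itself")
```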

I confirm that I only changed the content of the logs and did not change any other configuration. Could you please help me understand why my actual system logs are not displayed? Thank you!

The `xx` placeholders don't matter; this is the configuration file that logging-operator generates for Fluentd:

    <filter **>
      @type parser
      @id flow:connection-hub:flow-dev-pods:1
      key_name message
      remove_key_name_field true
      reserve_data true
      <parse>
        @type json
      </parse>
    </filter>

    <filter **>
      @type record_transformer
      @id flow:connection-hub:flow-dev-pods:2
      enable_ruby true
      remove_keys $.kubernetes.labels,$.kubernetes.host,$.kubernetes.namespace_name
      <record>
        app ${record["kubernetes"]["labels"]["app"]}
      </record>
      <record>
        node ${record["kubernetes"]["host"]}
      </record>
      <record>
        namespace ${record["kubernetes"]["namespace_name"]}
      </record>
    </filter>

I want my actual log to show up in Kibana the way the test log does. Thank you!
