When tailing a log file as input with Fluent Bit and loading it into S3, the data format differs from the original log file


First of all, this is my log data.

{ "hostname" : "test", "serverTimestamp" : "2023-10-18T18:37:42.048", "serverIp" : "-", "jobCode" : "-", "jobName" : "-" }

And this is the log data after it has been loaded into S3:

{"date":"2023-10-18T10:02:38.614367Z","appname":"30327b422843400d953a2852ec2ac0d3.log","filename":"/fluent-bit/logs/SEARCH_LOGS/20231018/2023101819/aa1b d99e095de771-30327b422843400d953a2852ec2ac0d3. log","log":"{"} {"date":"2023-10-18T10:02:38.614369Z","appname":"30327b422843400d953a2852ec2ac0d3.log","filename":"/fluent-bit/logs/SEARCH_LOGS/20231018/2023101819/aa1b d99e095de771-30327b422843400d953a2852ec2ac0d3. log","log":" \"hostname\" : \"test\","}

....

(The remaining records follow the same pattern; the full output is too long, so I've truncated it here.)
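For readability, here is the first record above, pretty-printed:

{
  "date": "2023-10-18T10:02:38.614367Z",
  "appname": "30327b422843400d953a2852ec2ac0d3.log",
  "filename": "/fluent-bit/logs/SEARCH_LOGS/20231018/2023101819/aa1bd99e095de771-30327b422843400d953a2852ec2ac0d3.log",
  "log": "{"
}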

As shown above, each record in S3 carries its own date, appname, and filename fields plus only one line of the original log at a time. When the records are pasted back together like this, the result does not match the original log.

Can you tell me why?
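My guess at what is happening: the tail input reads the file one physical line at a time, so the json parser sees each line of the pretty-printed record on its own instead of the whole record. Lining the S3 output up against the file seems to confirm this (my reading of the output above, not verified):

file line 1:  {                        ->  "log":"{"
file line 2:   "hostname" : "test",    ->  "log":" \"hostname\" : \"test\","
...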

Below is my Fluent Bit configuration.

-------------------Fluent-bit.conf -----------

[SERVICE]
   Flush        10
   Log_Level    info
   Daemon       off

[INPUT]
   Name         tail
   Path         /fluent-bit/logs/SEARCH_LOGS/*/*/*.log
   Path_Key     filename
   Tag          SEARCH_LOGS.*
   Parser       json
   Refresh_Interval  10
   Buffer_Chunk_Size 10M
   Buffer_Max_Size   10G

[FILTER]
   Name    lua
   Match   *
   Script  /fluent-bit/etc/helper.lua
   Call    extract_app_fields

[OUTPUT]
   Name                s3
   Match               SEARCH_LOGS.*
   region              ap-northeast-2
   bucket              dev-fluentbit
   upload_timeout      20s
   use_put_object      On
   static_file_path    On
   store_dir           /fluent-bit/logs/fluentbit
   upload_chunk_size   3m
   s3_key_format       /logs/$TAG[3]/%Y%m%d/%Y%m%d%H/$TAG[6].log.gz
   s3_key_format_tag_delimiters .
   compression         gzip
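
My current guess, after reading the Fluent Bit multiline parser docs: because each record in the source file spans several lines, the tail input would need a multiline parser to join those lines back into one event before the json parser runs. Below is a rough, untested sketch of what I think that would look like; the parser name multiline_json and the regexes are my own guesses, and the parsers file would still need to be registered with Parsers_File in the [SERVICE] section.

-------------------parsers_multiline.conf (sketch) -----------

[MULTILINE_PARSER]
   name           multiline_json
   type           regex
   flush_timeout  1000
   # a record starts with a line whose first character is "{"
   rule  "start_state"  "/^\{/"    "cont"
   # every following line that does not start a new record continues it
   rule  "cont"         "/^[^{]/"  "cont"

and in the [INPUT] section, replacing the Parser line:

   multiline.parser  multiline_json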