How to format log data as JSON before forwarding it to elasticsearch?

I log all events on my system to a JSON file with syslog-ng, as follows:

destination d_json { file("/var/log/all_syslog_in_json.log" perm(0666) template("{\"@timestamp\": \"$ISODATE\", \"facility\": \"$FACILITY\", \"priority\": \"$PRIORITY\", \"level\": \"$LEVEL\", \"tag\": \"$TAG\", \"host\": \"$HOST\", \"program\": \"$PROGRAM\", \"message\": \"$MSG\"}\n")); };

log { source(s_src); destination(d_json); };
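For illustration only, a minimal Python sketch of what the template above produces: one JSON object per line, with the syslog-ng macros replaced by the event's values (the sample values here are hypothetical; syslog-ng itself performs the substitution):

```python
import json

# Hypothetical sample event, mirroring the macros used in the template.
event = {
    "@timestamp": "2015-10-21T20:14:05+02:00",
    "facility": "auth",
    "priority": "info",
    "level": "info",
    "tag": "26",
    "host": "eu2",
    "program": "sshd",
    "message": "Disconnected from 10.8.100.112",
}

# The template emits one JSON object per line ("\n"-delimited).
line = json.dumps(event)
print(line)
```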

The file is monitored by logstash (2.0 beta), which forwards its content to elasticsearch (2.0 RC1):

input
{
  file
  {
    path => "/var/log/all_syslog_in_json.log"
    start_position => "beginning"
    codec => json
    sincedb_path => "/etc/logstash/db_for_watched_files.db"
    type => "syslog"
  }

}

output {
    elasticsearch {
        hosts => ["elk.example.com"]
        index => "logs"
    }
}

I then visualize the results in kibana.

This setup works fine, except that kibana does not expand the message part:

[screenshot: kibana showing the message field as a single, unexpanded string]

Is it possible to tweak any element of the processing chain so that the messages are expanded (i.e. so that their components end up at the same level as path and type)?

EDIT: as requested, a few lines from /var/log/all_syslog_in_json.log:

{"@timestamp": "2015-10-21T20:14:05+02:00", "facility": "auth", "priority": "info", "level": "info", "tag": "26", "host": "eu2", "program": "sshd", "message": "Disconnected from 10.8.100.112"}
{"@timestamp": "2015-10-21T20:14:05+02:00", "facility": "authpriv", "priority": "info", "level": "info", "tag": "56", "host": "eu2", "program": "sshd", "message": "pam_unix(sshd:session): session closed for user nagios"}
{"@timestamp": "2015-10-21T20:14:05+02:00", "facility": "authpriv", "priority": "info", "level": "info", "tag": "56", "host": "eu2", "program": "systemd", "message": "pam_unix(systemd-user:session): session closed for user nagios"}
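As a quick sanity check (a sketch, not part of the pipeline), each of these lines should parse as a standalone JSON object whose `message` value is a flat string:

```python
import json

# One of the sample lines from /var/log/all_syslog_in_json.log above.
sample = ('{"@timestamp": "2015-10-21T20:14:05+02:00", "facility": "auth", '
          '"priority": "info", "level": "info", "tag": "26", "host": "eu2", '
          '"program": "sshd", "message": "Disconnected from 10.8.100.112"}')

event = json.loads(sample)                # each line is valid JSON on its own
assert isinstance(event["message"], str)  # message is a plain string, not nested JSON
print(sorted(event.keys()))
```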

Answer 1

I believe you are using the wrong codec on your input; you need json_lines. From the documentation:

If you are streaming JSON messages delimited by \n then see the json_lines codec.

Use that codec instead. Alternatively, you can omit the codec on the input and send the events through a json filter, which is how I have always done it:

filter {
    json {
        source => "message"
    }
}
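Conceptually, the filter parses the JSON string held in the `message` field and promotes its keys to the top level of the event. A rough Python sketch of that behavior (an illustration with hypothetical field values, not logstash's actual implementation):

```python
import json

def apply_json_filter(event, source="message"):
    """Sketch of what logstash's json filter does: parse the JSON string
    in `source` and merge its keys into the event at top level (here the
    parsed keys simply overwrite any colliding ones)."""
    parsed = json.loads(event[source])
    return {**event, **parsed}

# Hypothetical event as the file input would emit it without a codec:
raw_event = {
    "path": "/var/log/all_syslog_in_json.log",
    "type": "syslog",
    "message": '{"host": "eu2", "program": "sshd", '
               '"message": "Disconnected from 10.8.100.112"}',
}

result = apply_json_filter(raw_event)
# "host" and "program" now sit at the same level as "path" and "type",
# which is what makes the fields expandable in kibana.
```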
