ElasticSearch delayed indexing


I currently have the following setup:

syslog-ng servers --> Logstash --> ElasticSearch

The syslog-ng servers are load balanced and write to a SAN location; Logstash simply tails the files and ships them to ES. I'm currently receiving about 1,300 events per second into the syslog cluster for network logs. The problem I'm running into is a gradually growing delay before the logs are actually searchable in ES. When I brought the cluster up (4 nodes) it was dead-on, then it fell a few minutes behind, and now, 4 days later, it's about 35 minutes behind. I can confirm the logs are written in real time on the syslog-ng servers, and I can also confirm that 4 other indices using the same approach but different Logstash instances are staying current. They are, however, significantly lower volume (about 500 events/sec).
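For scale, those lag figures imply only a small steady shortfall. A back-of-the-envelope check (my assumption, not stated in the question: the 35-minute lag accrued roughly linearly over the 4 days at a steady 1,300 events/s inbound):

```python
# Rough deficit estimate from the figures in the question.
incoming_rate = 1300                    # events/s arriving at syslog-ng
lag_seconds = 35 * 60                   # current indexing lag: 35 minutes
elapsed_seconds = 4 * 24 * 3600         # lag accumulated over 4 days

backlog_events = lag_seconds * incoming_rate      # events not yet indexed
deficit_rate = backlog_events / elapsed_seconds   # shortfall in events/s
effective_rate = incoming_rate - deficit_rate     # what Logstash sustains

print(round(backlog_events), round(deficit_rate, 1), round(effective_rate))
# → 2730000 7.9 1292
```

In other words, Logstash only needs to pick up roughly 8 more events per second to stop falling behind, which is why small tuning changes (more workers, cheaper filters) can plausibly close the gap.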

It looks like the Logstash instance reading these flat files just can't keep up. I've already split the files out once and spawned 2 Logstash instances to help, but I'm still falling behind.

Any help would be greatly appreciated.

--

Typical input is ASA logs, mostly denies and VPN connections:

Jan  7 00:00:00 firewall1.domain.com Jan 06 2016 23:00:00 firewall1 : %ASA-1-106023: Deny udp src outside:192.168.1.1/22245 dst DMZ_1:10.5.1.1/33434 by access-group "acl_out" [0x0, 0x0]
Jan  7 00:00:00 firewall2.domain.com %ASA-1-106023: Deny udp src console_1:10.1.1.2/28134 dst CUSTOMER_094:2.2.2.2/514 by access-group "acl_2569" [0x0, 0x0]
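As a rough illustration of how the first grok pattern in the config below splits these lines, here is an approximate Python equivalent. This is a sketch only: the real SYSLOGTIMESTAMP, SYSLOGHOST, and CISCOTAG grok patterns are more permissive than these hand-written groups.

```python
import re

# Approximation of: %{SYSLOGTIMESTAMP} %{SYSLOGHOST} ... %%{CISCOTAG}: %{GREEDYDATA}
# The lazy ".*?" skips the firewall's own repeated timestamp/hostname prefix
# (present in the firewall1 sample line) before the %ASA tag.
pattern = re.compile(
    r"^(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) ?.*?%(?P<ciscotag>ASA-\d-\d+): "
    r"(?P<message>.*)$"
)

line = ('Jan  7 00:00:00 firewall2.domain.com %ASA-1-106023: Deny udp src '
        'console_1:10.1.1.2/28134 dst CUSTOMER_094:2.2.2.2/514 by '
        'access-group "acl_2569" [0x0, 0x0]')
m = pattern.match(line)
print(m.group("host"), m.group("ciscotag"))
# → firewall2.domain.com ASA-1-106023
```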

Here is my Logstash config:

input {
  file {
    type => "network-syslog"
    exclude => ["*.gz"]
    start_position => "end"
    path => [ "/location1/*.log","/location2/*.log","/location2/*.log"]
    sincedb_path => "/etc/logstash/.sincedb-network"
  }
}

filter {
    grok {
      overwrite => [ "message", "host" ]
      patterns_dir => "/etc/logstash/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns"
      match => [
        "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:host} %%{CISCOTAG:ciscotag}: %{GREEDYDATA:message}",
        "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:host} %{GREEDYDATA:message}"
      ]
     }
    grok {
      match => [
        "message", "%{CISCOFW106001}",
        "message", "%{CISCOFW106006_106007_106010}",
        "message", "%{CISCOFW106014}",
        "message", "%{CISCOFW106015}",
        "message", "%{CISCOFW106021}",
        "message", "%{CISCOFW106023}",
        "message", "%{CISCOFW106100}",
        "message", "%{CISCOFW110002}",
        "message", "%{CISCOFW302010}",
        "message", "%{CISCOFW302013_302014_302015_302016}",
        "message", "%{CISCOFW302020_302021}",
        "message", "%{CISCOFW305011}",
        "message", "%{CISCOFW313001_313004_313008}",
        "message", "%{CISCOFW313005}",
        "message", "%{CISCOFW402117}",
        "message", "%{CISCOFW402119}",
        "message", "%{CISCOFW419001}",
        "message", "%{CISCOFW419002}",
        "message", "%{CISCOFW500004}",
        "message", "%{CISCOFW602303_602304}",
        "message", "%{CISCOFW710001_710002_710003_710005_710006}",
        "message", "%{CISCOFW713172}",
        "message", "%{CISCOFW733100}",
        "message", "%{GREEDYDATA}"
      ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss",
                 "MMM dd HH:mm:ss" ]
      target => "@timestamp"
    }
    mutate {
      remove_field => [ "syslog_facility", "syslog_facility_code", "syslog_severity", "syslog_severity_code"]
    }
}

output {
  elasticsearch {
    hosts => ["server1","server2","server3"]
    index => "network-%{+YYYY.MM.dd}"
    template => "/etc/logstash/logstash-2.1.1/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.2.0-java/lib/logstash/outputs/elasticsearch/elasticsearch-network.json"
    template_name => "network"
  }
}
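A side note on the two patterns in the date filter above: BSD-style syslog pads single-digit days with an extra space ("Jan  7") but not double-digit ones ("Jan 17"), and Logstash's date parsing is strict about that, hence both "MMM  d" and "MMM dd". Python's strptime happens to tolerate the extra whitespace, so one format covers both in this sketch:

```python
from datetime import datetime

# Syslog timestamps for single- and double-digit days; Python's strptime
# treats whitespace in the format flexibly, so "%b %d" parses both.
for stamp in ("Jan  7 00:00:00", "Jan 17 00:00:00"):
    parsed = datetime.strptime(stamp, "%b %d %H:%M:%S")
    print(parsed.day)
# → 7, then 17
```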

Answer 1

You can tell LS to start more workers per instance with the -w N command-line option, where N is a number.

That should substantially increase your effective throughput.

I don't know your exact server layout, but starting a number of workers equal to half the cores on the LS box is probably safe; adjust it based on whatever else the box is doing.
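For example, on a 16-core box that would be 8 workers. The config filename below is hypothetical; the install path matches the one already used in the question's config:

```shell
# -w sets the number of filter workers in Logstash 2.x.
# /etc/logstash/network.conf is a placeholder for your actual config file.
/etc/logstash/logstash-2.1.1/bin/logstash -f /etc/logstash/network.conf -w 8
```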
