I am trying to configure rsyslog to send logs to logstash on CentOS, so I followed a tutorial. But after finishing the setup, nothing happens: everything runs fine and no errors occur, yet there are no logs in elasticsearch.
Here is my /etc/rsyslog.conf:
#### MODULES ####
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog
# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf
# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on
# File to store the position in the journal
$IMJournalStateFile imjournal.state
#### RULES ####
*.info;mail.none;authpriv.none;cron.none /var/log/messages
# The authpriv file has restricted access.
authpriv.* /var/log/secure
# Log all the mail messages in one place.
mail.* -/var/log/maillog
# Log cron stuff
cron.* /var/log/cron
# Everybody gets emergency messages
*.emerg :omusrmsg:*
# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler
# Save boot messages also to boot.log
local7.* /var/log/boot.log
*.*;\
local3.none -/var/log/syslog
*.*;\
local3.none -/var/log/messages
*.* @@10.0.15.25:10514
And /etc/rsyslog.d/loghost.conf:
$ModLoad imfile
$InputFileName /var/log/devops_training.log
$InputFileTag devops
$InputFileStateFile stat-devops
$InputFileSeverity debug
$InputFileFacility local3
$InputRunFileMonitor
This is my logstash configuration:
input {
  syslog {
    type => rsyslog
    port => 10514
  }
}
filter { }
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => 'rsyslog-%{+YYYY.MM.dd}'
      document_type => "rsyslog"
    }
  }
}
The rsyslog configuration appears to have no errors:
rsyslogd: version 7.4.7, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
And the logstash logs do not show any errors either:
[2017-06-07T20:11:48,004][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2017-06-07T20:11:48,188][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"adf934f1-caf5-48be-b65c-b2907c0d6336", :path=>"/var/lib/logstash/uuid"}
[2017-06-07T20:11:49,438][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-06-07T20:11:49,439][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-06-07T20:11:49,604][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x3fdb353a URL:http://localhost:9200/>}
[2017-06-07T20:11:49,623][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-06-07T20:11:49,744][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-06-07T20:11:49,758][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2017-06-07T20:11:49,880][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x8dcaeed URL://localhost:9200>]}
[2017-06-07T20:11:49,883][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-06-07T20:11:50,623][INFO ][logstash.pipeline ] Pipeline main started
[2017-06-07T20:11:50,644][INFO ][logstash.inputs.syslog ] Starting syslog udp listener {:address=>"0.0.0.0:10514"}
[2017-06-07T20:11:50,660][INFO ][logstash.inputs.syslog ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
[2017-06-07T20:11:50,827][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
The problem is not just that I don't know how to fix it; I can't figure out what the problem actually is or how to troubleshoot it.
Answer 1
I suggest splitting the troubleshooting into two parts:
1.) Test whether remote rsyslog forwarding works. Stop logstash and open a TCP listener with:
nc -l 10514
On the client, write something to syslog with logger and see whether it arrives at your logstash server. You can also restart the rsyslog daemon to generate some log traffic.
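For example, with logstash stopped and nc -l 10514 running on the server, a test message sent from the client should appear in the nc output. A minimal sketch (the message text is arbitrary; local3 is only chosen to match the facility used in loghost.conf above):
logger -p local3.info "rsyslog forwarding test"
If nothing arrives, the problem is on the rsyslog or network side rather than in logstash.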
2.) Test whether the connection between logstash and elasticsearch works. For this, define a simple file input in the logstash configuration and write some log lines to that file.
input {
  file {
    path => "/tmp/test_log"
    type => "rsyslog"
  }
}
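With that input in place, appending a line to the file should produce an event almost immediately. A quick sketch (/tmp/test_log is the path used in the example above):
echo "logstash file input test $(date)" >> /tmp/test_log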
Then check whether your rsyslog index is created correctly in elasticsearch.
Answer 2
I set up something similar, so here are a few steps I followed while troubleshooting. Check whether the index has been created:
curl -XGET 'http://localhost:9200/rsyslog-*/_search?q=*&pretty'
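If that search returns nothing, it can also help to list all indices first to confirm whether an rsyslog-* index exists at all (the _cat API ships with Elasticsearch):
curl -XGET 'http://localhost:9200/_cat/indices?v'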
Instead of starting logstash via systemctl, start it from the CLI so you can see what is happening. The usual advice is to use a stdin input and a stdout output in the logstash conf, but I simply appended the following line to the output block:
stdout { codec => rubydebug }
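For reference, with that line appended, the output block from the question would look roughly like this (same elasticsearch settings as above):
output {
  stdout { codec => rubydebug }
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => 'rsyslog-%{+YYYY.MM.dd}'
      document_type => "rsyslog"
    }
  }
}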
and started logstash from the command line:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf
You will then be able to see the events that logstash receives and processes.