rsyslog: messages logged twice in my file

I have configured my firewall to send its logs to rsyslog, but every log entry shows up twice in the output file:

cat fortigate100D | grep "sessionid=97294098"
Jun 27 11:24:16 date=2016-06-27 time=11:24:16 logid=0000000013 type=traffic subtype=forward level=notice vd=root poluuid=49990e52-2bfa-51e5-e6c1-523e6d971a8b sessionid=97294098 ... 
Jun 27 11:24:16 date=2016-06-27 time=11:24:16 logid=0000000013 type=traffic subtype=forward level=notice vd=root poluuid=49990e52-2bfa-51e5-e6c1-523e6d971a8b sessionid=97294098 ...

I have configured my FortiGate to log to the local0 facility.

Here is my rsyslog.conf:

# rsyslog configuration file

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

#### GLOBAL DIRECTIVES ####
#Default file permission
$umask 0000
$FileCreateMode 0600

# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on

# File to store the position in the journal
$IMJournalStateFile imjournal.state

# AccessRights
action(type="omfile" fileOwner="root" fileGroup="logstash" FileCreateMode="0640" File="/var/log/fortigate100D")

#### RULES ####

# Save Forti100D to fortigate100D
local0.*                                                /var/log/fortigate100D
&~

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog

# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log

# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList   # run asynchronously
#$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###
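One detail worth noting in the config above: the `action(type="omfile" ...)` line under "AccessRights" has no selector in front of it, so it is attached to the default ruleset and fires for *every* message, while `local0.*` writes the same file a second time in the RULES section. That alone is enough to produce the doubled lines. A consolidated form (a sketch, untested against this setup) that keeps the ownership and permission settings but writes the file only once would bind the action directly to the selector:

```
# Save Forti100D to fortigate100D, with explicit owner/group/mode,
# then discard so the messages do not also reach /var/log/messages
local0.*    action(type="omfile" fileOwner="root" fileGroup="logstash"
                   fileCreateMode="0640" file="/var/log/fortigate100D")
& stop
```

`& stop` is the modern equivalent of the legacy `&~` discard used in the original RULES section.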

Answer 1

I ended up handling it on the Logstash side. https://discuss.elastic.co/t/logstash-duplicate-message/26096/3 Thanks.
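The linked thread deduplicates in Logstash rather than in rsyslog. A common recipe (a sketch with assumed host and field names, not the exact config from the thread) computes a fingerprint of each message and uses it as the Elasticsearch document ID, so a repeated line overwrites itself instead of creating a second document:

```
filter {
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA1"
    key    => "dedup"               # HMAC key; any fixed string works
  }
}
output {
  elasticsearch {
    hosts       => ["localhost:9200"]          # assumption: local ES node
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```

Note this hides the duplicates in Elasticsearch but does not stop rsyslog from writing them twice.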
