Fluentd tail config for HashiCorp Vault audit/server logs from stdout

I'm trying to figure out a way to pick up both the audit and the server logs from a HashiCorp Vault container (both logs go to the same container's standard output streams, and they have different structures):

Audit example:

2024-02-11T15:24:37.298119887Z stdout F {"time":"2024-02-11T15:24:37.297967585Z","type":"request","auth":{"client_token":"xxxxxxxxxxxxxxxxxxxxxx","accessor":"xxxxxxxxxxxxxxxxxxx","display_name":"xxxxxxxxxxxxxxxxxxxx","policies":["default","xxxxxxxxxxxxxxxx"],"token_policies":["default","xxxxxxxxxxxxxxxx"],"policy_results":{"allowed":true,"granting_policies":[{"name":"default","namespace_id":"root","type":"acl"}]},"metadata":{"role":"xxxxxxxxxxx","service_account_name":"xxxxx","service_account_namespace":"xxxxxxxxxx","service_account_secret_name":""},"namespace":{"id":"root"},"path":"auth/token/lookup-self","remote_address":"XXXXXXX","remote_port":XXXXX}}

Server example:

2024-02-11T15:50:06.857506784Z stderr F {"@level":"info","@message":"pipelining replication","@module":"storage.raft","@timestamp":"2024-02-11T15:50:06.857384Z","peer":{"Suffrage":0,"ID":"vault-0","Address":"vault-0.vault-internal:8201"}}
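Incidentally, the CRI prefix already differs between the two samples: the audit entry arrives on `stdout` and the server entry on `stderr`, and the tail regexp below captures that into a `stream` field. A hedged sketch of how records could be retagged based on that field (using `rewrite_tag_filter` here is my assumption, not something from the original setup):

```
# Sketch: retag by the CRI stream captured by the tail <parse> regexp.
# Assumes fluent-plugin-rewrite-tag-filter is installed.
<match kubernetes.**>
  @type rewrite_tag_filter
  <rule>
    key stream
    pattern /^stdout$/
    tag vault.audit      # audit device writes to stdout
  </rule>
  <rule>
    key stream
    pattern /^stderr$/
    tag vault.server     # server/operational log goes to stderr
  </rule>
</match>
```

Records that were parsed by the first (pure JSON) `multi_format` pattern have no `stream` field, so if both patterns can match in practice, splitting on a record key instead (audit records carry `type`, server records carry `@level`) may be the more robust variant.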

Current configuration (from the Helm chart values):

fileConfigs:
  01_sources.conf: |-
    ## logs from podman
    <source>
      @type tail
      @id in_vault_logs
      @label @KUBERNETES
      path /var/log/containers/vault-1_vault_vault*.log,/var/log/containers/vault-2_vault_vault*.log,/var/log/containers/vault-3_vault_vault*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      refresh_interval 10
      max_line_size 500000
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_type string
          time_format "%Y-%m-%dT%H:%M:%S.%L"
          keep_time_key false
        </pattern>
        <pattern>
          format regexp
          expression /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
          time_format "%Y-%m-%dT%H:%M:%S.%L"
          keep_time_key false
        </pattern>
      </parse>
      emit_unmatched_lines true
    </source>
  02_filters.conf: |-
    <label @KUBERNETES>
      <match kubernetes.var.log.containers.fluentd**>
        @type relabel
        @label @FLUENT_LOG
      </match>

      <match kubernetes.var.log.containers.**_kube-system_**>
        @type null
        @id ignore_kube_system_logs
      </match>

      <filter kubernetes.**>
        @type parser
        key_name log
        format json
        reserve_data true
      </filter>

      <filter kubernetes.**>
        @type kubernetes_metadata
        @id filter_kube_metadata
        skip_labels false
        skip_container_metadata false
        skip_namespace_metadata true
        skip_master_url true
      </filter>

      <match **>
        @type relabel
        @label @DISPATCH
      </match>
    </label>
  03_dispatch.conf: |-
    <label @DISPATCH>
      <filter **>
        @type prometheus
        <metric>
          name fluentd_input_status_num_records_total
          type counter
          desc The total number of incoming records
          <labels>
            tag ${tag}
            hostname ${hostname}
          </labels>
        </metric>
      </filter>

      <match **>
        @type relabel
        @label @OUTPUT
      </match>
    </label>
  04_outputs.conf: |-
    <label @OUTPUT>
      <match **>
        @type opensearch
        host ${fluentd_opensearch_host}
        port ${fluentd_opensearch_port}
        user ${fluentd_opensearch_username}
        password ${fluentd_opensearch_password}
        logstash_format true
        logstash_prefix vault.${tag}
        scheme https
        ssl_verify false
      </match>
    </label>
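For separate indices, the `@OUTPUT` label could match each log type with its own `opensearch` block and a fixed `logstash_prefix`. A sketch under the assumption that records were already retagged `vault.audit` / `vault.server` somewhere upstream (e.g. with `rewrite_tag_filter` on the `stream` field, which is not part of the config shown above):

```
<label @OUTPUT>
  <match vault.audit>
    @type opensearch
    host ${fluentd_opensearch_host}
    port ${fluentd_opensearch_port}
    user ${fluentd_opensearch_username}
    password ${fluentd_opensearch_password}
    logstash_format true
    logstash_prefix vault.audit    # -> vault.audit-YYYY.MM.DD indices
    scheme https
    ssl_verify false
  </match>
  <match vault.server>
    @type opensearch
    host ${fluentd_opensearch_host}
    port ${fluentd_opensearch_port}
    user ${fluentd_opensearch_username}
    password ${fluentd_opensearch_password}
    logstash_format true
    logstash_prefix vault.server   # -> vault.server-YYYY.MM.DD indices
    scheme https
    ssl_verify false
  </match>
</label>
```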

By now I think I've tried everything, but I still can't get it working. I want to match each log type separately and send them to separate indices, vault.audit- and vault.server-. The output is OpenSearch, but that part is fine; I'm just stuck on where exactly I have to do what. It should be simple, and probably is, but I just can't figure it out. Fluentd is deployed as a daemonset via Helm chart version 0.5.0.
