I am trying to connect Prometheus to InfluxDB in order to keep data for more than 15 days. Grafana and Prometheus work fine as long as Prometheus is not connected to InfluxDB.
Below I have posted the error along with all configuration and parameters:
Running the query in Prometheus with a 1h range works, with 1w it fails: Error executing query: many-to-many matching not allowed: matching labels must be unique on one side
Grafana query:
node_time{instance=~"$node:$port"} - node_boot_time{instance=~"$node:$port"}
Grafana error response:
{
  "status": "error",
  "errorType": "execution",
  "error": "many-to-many matching not allowed: matching labels must be unique on one side",
  "message": "many-to-many matching not allowed: matching labels must be unique on one side"
}
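For reference, my understanding is that the subtraction uses Prometheus' default one-to-one vector matching on all labels, so this error is raised as soon as one side returns two series with an identical label set, for example the same series arriving both from local storage and from the InfluxDB remote read. A diagnostic sketch to check for such duplicates over the failing range (only the instance value from the failing request is assumed; node_time can be checked the same way):

count by (instance, job) (node_boot_time{instance=~"btzhlprom01:9100"})

Any result greater than 1 would mean that side of the subtraction is not unique.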
Grafana error request:
URL api/datasources/proxy/1/api/v1/query_range?
query=node_time%7Binstance%3D~%22btzhlprom01%3A9100%22%7D%20-%20node_boot_time%7Binstance%3D~%22btzhlprom01%3A9100%22%7D&start=1520764826&end=1520851226&step=1800
Method GET
X-Grafana-Org-Id 1
Accept application/json, text/plain, */*
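URL-decoded, that request evaluates the same expression as above over a 24-hour range with a 1800 s step:

node_time{instance=~"btzhlprom01:9100"} - node_boot_time{instance=~"btzhlprom01:9100"}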
Prometheus build information:
Version 2.0.0
Revision 0a74f98628a0463dddc90528220c94de5032d1a0
Branch HEAD
BuildUser root@615b82cb36b6
BuildDate 20171108-07:11:59
GoVersion go1.9.2
Prometheus command-line flags:
alertmanager.notification-queue-capacity 10000
alertmanager.timeout 10s
completion-bash false
completion-script-bash false
completion-script-zsh false
config.file /opt/prom/prometheus/prometheus.yml
help false
help-long false
help-man false
log.level info
query.lookback-delta 5m
query.max-concurrency 20
query.timeout 2m
storage.tsdb.max-block-duration 36h
storage.tsdb.min-block-duration 2h
storage.tsdb.no-lockfile false
storage.tsdb.path /prometheus/
storage.tsdb.retention 15d
version false
web.console.libraries /opt/prom/prometheus/console_libraries
web.console.templates /opt/prom/prometheus/consoles
web.enable-admin-api false
web.enable-lifecycle false
web.external-url http://prometheus.apl.dom
web.listen-address 127.0.0.1:9090
web.max-connections 512
web.read-timeout 5m
web.route-prefix /
web.user-assets
Prometheus configuration:
global:
  scrape_interval: 10s
  scrape_timeout: 10s
  evaluation_interval: 10s
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - alertmanager.apl.dom:443
    tls_config:
      ca_file: /opt/prom/prometheus/ApprenticeLab.pem
      insecure_skip_verify: true
    scheme: https
    timeout: 10s
rule_files:
- /opt/prom/prometheus/alerts/disk.yml
- /opt/prom/prometheus/alerts/uptime.yml
scrape_configs:
- job_name: prometheus
  scrape_interval: 10s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  static_configs:
  - targets:
    - prometheus.apl.dom
  tls_config:
    ca_file: /opt/prom/prometheus/TrustedRootCertificate.cer
    insecure_skip_verify: true
- job_name: grafana
  scrape_interval: 10s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  static_configs:
  - targets:
    - grafana.apl.dom:443
  tls_config:
    ca_file: /opt/prom/prometheus/TrustedRootCertificate.cer
    insecure_skip_verify: true
- job_name: nodes
  scrape_interval: 5s
  scrape_timeout: 5s
  metrics_path: /metrics
  scheme: http
  file_sd_configs:
  - files:
    - /opt/prom/prometheus/targets/infra.json
    - /opt/prom/prometheus/targets/testing.json
    refresh_interval: 5m
remote_write:
- url: http://btzhlinflx01:8086/api/v1/prom/write?u=prometheus&p=prometheus&db=prometheus
  remote_timeout: 30s
  queue_config:
    capacity: 100000
    max_shards: 1000
    max_samples_per_send: 100
    batch_send_deadline: 5s
    max_retries: 10
    min_backoff: 30ms
    max_backoff: 100ms
remote_read:
- url: http://btzhlinflx01:8086/api/v1/prom/read?u=prometheus&p=prometheus&db=prometheus
  remote_timeout: 1m
  read_recent: true
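As far as I understand, read_recent: true makes Prometheus query InfluxDB even for time ranges that local storage has complete data for, so every query merges local and remote samples. To rule that merge out as the source of the duplicate series, the only variation of this block I can think of is the following sketch (same URL, only read_recent changed, not yet verified):

remote_read:
- url: http://btzhlinflx01:8086/api/v1/prom/read?u=prometheus&p=prometheus&db=prometheus
  remote_timeout: 1m
  read_recent: false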