Kubernetes: horizontal autoscaling based on a metric from another namespace

I want to set up horizontal autoscaling for a deployment based on the metrics of an ingress controller that is deployed in another namespace.

I have a deployment (petclinic) deployed in one namespace (petclinic).

I have an ingress controller (nginx-ingress) deployed in another namespace (nginx-ingress).

The ingress controller was deployed with Helm and Tiller, so I have the following ServiceMonitor entity:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"creationTimestamp":"2019-08-19T10:48:00Z","generation":5,"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.12.1","component":"controller","heritage":"Tiller","release":"nginx-ingress"},"name":"nginx-ingress-controller","namespace":"nginx-ingress","resourceVersion":"7391237","selfLink":"/apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller","uid":"0217c466-5b78-4e38-885a-9ee65deb2dcd"},"spec":{"endpoints":[{"interval":"30s","port":"metrics"}],"namespaceSelector":{"matchNames":["nginx-ingress"]},"selector":{"matchLabels":{"app":"nginx-ingress","component":"controller","release":"nginx-ingress"}}}}
  creationTimestamp: "2019-08-21T13:12:00Z"
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: nginx-ingress
  resourceVersion: "7663160"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller
  uid: 33421be7-108b-4b81-9673-05db140364ce
spec:
  endpoints:
  - interval: 30s
    port: metrics
  namespaceSelector:
    matchNames:
    - nginx-ingress
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: nginx-ingress

I also have a Prometheus Operator instance; it discovered this entity and updated the Prometheus configuration with this stanza:

- job_name: nginx-ingress/nginx-ingress-controller/0
  honor_labels: false
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - nginx-ingress
  scrape_interval: 30s
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_app
    regex: nginx-ingress
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_component
    regex: controller
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_release
    regex: nginx-ingress
  - action: keep
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: metrics
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Node;(.*)
    replacement: ${1}
    target_label: node
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Pod;(.*)
    replacement: ${1}
    target_label: pod
  - source_labels:
    - __meta_kubernetes_namespace
    target_label: namespace
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: service
  - source_labels:
    - __meta_kubernetes_pod_name
    target_label: pod
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: job
    replacement: ${1}
  - target_label: endpoint
    replacement: metrics

I also have a Prometheus-Adapter instance, so custom.metrics.k8s.io appears in the list of available APIs.

The metrics are being collected and exposed, so the following command:

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests" | jq .

yields this result:

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Ingress",
        "namespace": "nginx-ingress",
        "name": "petclinic",
        "apiVersion": "extensions/v1beta1"
      },
      "metricName": "nginx_ingress_controller_requests",
      "timestamp": "2019-08-20T12:56:50Z",
      "value": "11"
    }
  ]
}

So far so good, right?

Now I need to set up an HPA entity for my deployment, something like this:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic
  namespace: petclinic
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metricName: nginx_ingress_controller_requests
      target:
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: petclinic
      targetValue: 10k

Of course, this is not correct: nginx_ingress_controller_requests is related to the nginx-ingress namespace, so it doesn't work (well, just as expected):

    annotations:
      autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"SucceededGetScale","message":"the
        HPA controller was able to get the target''s current scale"},{"type":"ScalingActive","status":"False","lastTransitionTime":"2019-08-19T18:55:26Z","reason":"FailedGetObjectMetric","message":"the
        HPA was unable to compute the replica count: unable to get metric nginx_ingress_controller_requests:
        Ingress on petclinic petclinic/unable to fetch metrics
        from custom metrics API: the server could not find the metric nginx_ingress_controller_requests
        for ingresses.extensions petclinic"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"DesiredWithinRange","message":"the
        desired count is within the acceptable range"}]'
      autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":""},{"type":"Resource","resource":{"name":"cpu","currentAverageUtilization":1,"currentAverageValue":"10m"}}]'
      autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Object","object":{"target":{"kind":"Ingress","name":"petclinic","apiVersion":"extensions/v1beta1"},"metricName":"nginx_ingress_controller_requests","targetValue":"10k"}}]'
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"petclinic","namespace":"petclinic"},"spec":{"maxReplicas":10,"metrics":[{"object":{"metricName":"nginx_ingress_controller_requests","target":{"apiVersion":"extensions/v1beta1","kind":"Ingress","name":"petclinic"},"targetValue":"10k"},"type":"Object"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"petclinic"}}}

Here is what I see in the Prometheus-Adapter's log file:

I0820 15:42:13.467236       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/petclinic/ingresses.extensions/petclinic/nginx_ingress_controller_requests: (6.124398ms) 404 [[kube-controller-manager/v1.15.1 (linux/amd64) kubernetes/4485c6f/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.103.98.0:37940]

The HPA looks for this metric in the deployment's namespace, while I need it to be fetched from the nginx-ingress namespace, like this:

I0820 15:44:40.044797       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests: (2.210282ms) 200 [[kubectl/v1.15.2 (linux/amd64) kubernetes/f627830] 10.103.97.0:35142]

Alas, the autoscaling/v2beta1 API has no spec.metrics.object.target.namespace field, so there is no way to "ask" it to fetch the value from another namespace. :-(
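For what it's worth, the newer autoscaling/v2beta2 schema doesn't offer one either; the same metric would be written like this (a sketch of the equivalent manifest, still with no namespace on the described object):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic
  namespace: petclinic
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:        # CrossVersionObjectReference: no namespace field here either
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: petclinic
      metric:
        name: nginx_ingress_controller_requests
      target:
        type: Value
        value: 10k
```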

Can anyone help me solve this puzzle? Is there a way to set up autoscaling based on a custom metric that belongs to another namespace?

Or maybe there is a way to make this metric available in the same namespace that the ingress.extension belongs to?

Thanks in advance for any clues and hints.

Answer 1

Ah, I figured it out. Here is the part of the prometheus-adapter configuration I needed:

    rules:
    - seriesQuery: '{__name__=~"^nginx_ingress_.*",namespace!=""}'
      seriesFilters: []
      resources:
        template: <<.Resource>>
        overrides:
          exported_namespace:
            resource: "namespace"
      name:
        matches: ""
        as: ""
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)

Voilà! :-)
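Some context on why this works: Prometheus scrapes the controller with honor_labels: false, so the controller's own namespace label (the namespace of the backend the request was routed to, here petclinic) is renamed to exported_namespace, while namespace holds nginx-ingress. The override maps exported_namespace onto the namespace resource, so the adapter exposes the metric under the petclinic namespace, right where the HPA looks for it. As for where this snippet lives: assuming the adapter reads its rules from a ConfigMap (the name, namespace, and key below are illustrative; match them to your prometheus-adapter deployment), the full object would look roughly like:

```yaml
# Sketch: ConfigMap name/namespace/key are assumptions — align them with the
# --config flag / volume mount of your prometheus-adapter deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: monitoring
data:
  config.yaml: |
    rules:
    - seriesQuery: '{__name__=~"^nginx_ingress_.*",namespace!=""}'
      seriesFilters: []
      resources:
        template: <<.Resource>>
        overrides:
          # map the exported_namespace label (the backend's namespace)
          # onto the "namespace" resource
          exported_namespace:
            resource: "namespace"
      name:
        matches: ""
        as: ""
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
```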

Answer 2

My choice was to export external metrics from Prometheus, since those are not tied to a namespace.

@Volodymyr Melnyk You need the prometheus-adapter to export the custom metric into the petclinic namespace, but I don't see that addressed in your configuration — perhaps you made other configuration changes that you forgot to mention?
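To flesh out the external-metrics idea: assuming the prometheus-adapter is given an externalRules entry for this series, an HPA could then consume it as a type: External metric and narrow it down by label selector instead of by namespace. A sketch (the label names and the target value are assumptions; adjust them to the labels your adapter actually exposes):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic
  namespace: petclinic
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: nginx_ingress_controller_requests
        selector:
          matchLabels:
            # assumes the series keeps an exported_namespace label;
            # adjust to whatever labels your adapter exposes
            exported_namespace: petclinic
      target:
        type: AverageValue
        averageValue: "50"
```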
