Istio CNI blocks traffic in application init containers

After installing Istio CNI for an ambient mesh, I ran into the problem of Istio CNI blocking traffic in application init containers. I am familiar with the workarounds for this issue proposed in the documentation (https://istio.io/latest/docs/setup/additional-setup/cni/#compatibility-with-application-init-containers), but those instructions did not help me; the plugin seems to ignore them.
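
For context, as I read that page, the workarounds boil down to two things, both of which are mirrored in my values file below: exclude the destination from traffic redirection with a pod annotation, or run the init container as the proxy's UID. A minimal sketch (the pod name and command here are illustrative only; 172.20.0.1 is my cluster's kube-apiserver ClusterIP):

apiVersion: v1
kind: Pod
metadata:
  name: workaround-example        # hypothetical pod, for illustration only
  annotations:
    # Workaround 1: exclude the kube-apiserver ClusterIP from outbound redirection
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "172.20.0.1/32"
spec:
  initContainers:
    - name: init
      image: bitnami/kubectl
      command: ['kubectl', 'version']
      securityContext:
        # Workaround 2: UID 1337 is skipped by the CNI redirect rules
        runAsUser: 1337
  containers:
    - name: app
      image: bitnami/kubectl
      command: ['sleep', 'infinity']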

I tried to run the redis helm chart with an init container, using a values file like this:

helm install redis bitnami/redis --version 18.2.0 --namespace test --values <(cat redis-values.yaml)

<redis-values.yaml>

image:
  registry: quay.io
  repository: opstree/redis
  tag: latest

master:
  livenessProbe:
    enabled: true
    initialDelaySeconds: 40
    periodSeconds: 15
    timeoutSeconds: 15
    successThreshold: 1
    failureThreshold: 5

  podSecurityContext:
    enabled: true
    fsGroup: 1337

  podAnnotations:
    proxy.istio.io/config: |
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
    traffic.sidecar.istio.io/includeOutboundIPRanges: "*"
    traffic.sidecar.istio.io/includeInboundPorts: "*"
    traffic.sidecar.istio.io/excludeOutboundPorts: "443"
    traffic.sidecar.istio.io/excludeInboundPorts: "443"
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "172.20.0.1/32"

  initContainers:
    - name: init-celery-workers-restart
      image: bitnami/kubectl
      command: ['sh', '-c', 'sleep 30; kubectl delete pods -n test -l app.kubernetes.io/component=celery-worker']
      securityContext:
        runAsUser: 1337

  resources:
    requests:
      cpu: 80m
      memory: 60Mi
    limits:
      cpu: 80m
      memory: 60Mi
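
A quick way to confirm the annotations and the runAsUser actually make it onto the rendered pod (redis-master-0 is the pod name that appears in the CNI logs below):

kubectl get pod redis-master-0 -n test \
  -o jsonpath='{.metadata.annotations}{"\n"}{.spec.initContainers[0].securityContext}{"\n"}'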

When the pod starts, I get logs like the following in the init container's output. At the same time, istio-cni, ztunnel, and istiod produce the logs shown below.

“init container”

E0118 09:57:46.188460       8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
E0118 09:57:56.195553       8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
E0118 09:58:06.202729       8 memcache.go:265] couldn't get current server API group list: Get "https://172.20.0.1:443/api?timeout=32s": EOF
Unable to connect to the server: EOF
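
The repeated EOF looks like the connection is being cut below the HTTP layer rather than kubectl misbehaving. A minimal init container to isolate that (curlimages/curl is a stand-in image, not part of my actual setup) could look like:

initContainers:
  - name: api-probe
    image: curlimages/curl
    # -v shows whether TCP connect and the TLS handshake complete before the EOF;
    # '|| true' keeps the init container from blocking the pod while testing
    command: ['sh', '-c', 'curl -skv --max-time 5 https://172.20.0.1:443/version || true']
    securityContext:
      runAsUser: 1337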

“istio cni”

info    cni istio-cni ambient cmdAdd podName: redis-master-0 podIPs: [{IP:10.0.95.14 Mask:ffffffff}]
info    cni Adding pod 'redis-master-0/test' (a9573a59-6adc-48e8-9d41-a9bd08fdc40c) to ipset
info    cni Adding route for redis-master-0/test: [table 100 10.0.95.14/32 via 192.168.126.2 dev istioin src 10.0.81.204]

“ztunnel”

INFO xds{id=2}: ztunnel::xds::client: received response type_url="type.googleapis.com/istio.workload.Address" size=1
INFO xds{id=2}: ztunnel::xds::client: received response type_url="type.googleapis.com/istio.workload.Address" size=1
WARN outbound{id=b395fe07254d28f5ec84ffc0850994eb}: ztunnel::proxy::outbound: failed dur=96.464µs err=unknown source: 10.0.82.219
WARN outbound{id=a1c3ba7180c313fa148f1899baa136e4}: ztunnel::proxy::outbound: failed dur=87.892µs err=unknown source: 10.0.82.219
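
Notably, the source IP ztunnel rejects (10.0.82.219) is not the redis pod's IP (10.0.95.14 in the CNI log above), so it is worth identifying which workload that address belongs to, e.g.:

kubectl get pods -A -o wide --field-selector status.podIP=10.0.82.219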

“istiod”

info    ads Push debounce stable[44] 3 for config ServiceEntry/pslabs/redis-master.pslabs.svc.cluster.local and 1 more configs: 100.244777ms since last change, 103.240829ms since last push, full=true
info    ads XDS: Pushing Services:95 ConnectedEndpoints:6 Version:2024-01-18T10:05:13Z/37
info    validationController    Not ready to switch validation to fail-closed: dummy invalid config not rejected
info    validationController    validatingwebhookconfiguration istio-validator-istio-system (failurePolicy=Ignore, resourceVersion=14938048) is up-to-date. No change required.
error   controllers error handling istio-validator-istio-system, retrying (retry count: 1004): webhook is not ready, retry  controller=validation
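
The last two istiod lines look like a separate symptom: the validation controller cannot reach its own webhook. Whether the webhook's backing service has endpoints can be checked independently of the redis pod:

kubectl -n istio-system get endpoints istiod
kubectl get validatingwebhookconfiguration istio-validator-istio-system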

I also tried using a ServiceEntry and a DestinationRule to let this traffic bypass the istio mesh on its way to the Kubernetes API service.

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: k8s-api
  namespace: test
spec:
  hosts:
    - kubernetes.default.svc.cluster.local
  addresses:
    - 172.20.0.1
  endpoints:
    - address: 172.20.0.1
  exportTo:
    - "*"
  location: MESH_EXTERNAL
  resolution: STATIC
  ports:
    - number: 443
      name: https-k8s
      protocol: HTTPS
---

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: k8s-destrule
  namespace: test
spec:
  host: kubernetes.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
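
With both resources applied, a quick end-to-end check from a throwaway pod in the same namespace: any HTTP response from the apiserver, even a 401/403, would mean the TCP path works, while an immediate EOF reproduces the problem.

kubectl run api-check -n test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -skv --max-time 5 https://172.20.0.1:443/version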
