How to make Kubernetes network bridging work like it does on a hypervisor

I've been trying to get this working for three days and just can't come up with a proper solution. I have an existing environment with virtual machines, and I want to do something very simple:

  • Create a container (pod)
  • Put it on a specific VLAN interface on a bridge that exists, identically configured, on every Kubernetes node

That's it! Just like you would do with any VM on any hypervisor: you create the VM, specify which VLAN it should sit on, and boot it.

For some reason this feels like the holy grail of Kubernetes, and I don't understand why. Whenever a pod needs an IP, either the IP has to be set up on the worker node and forwarded to the pod, or somebody wants to sell me an internal NAT with load balancing and the like. That can't be it.

I tried the following, without success:

  • calico in several configurations
  • flannel in several configurations
  • Created things like "CustomResourceDefinitions" and "NetworkAttachmentDefinitions", with several sample configurations and tests
  • A lot of manual scripting on the nodes before configuring the pod

The CustomResourceDefinition:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: network-attachment-definitions
    singular: network-attachment-definition
    kind: NetworkAttachmentDefinition
    shortNames:
    - net-attach-def
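
For context: this CRD is the one normally shipped with the Multus CNI meta-plugin, and creating the CRD by hand only registers the API type; nothing acts on the k8s.v1.cni.cncf.io/networks annotation unless a meta-plugin such as Multus is actually deployed on the nodes. A minimal install sketch (the manifest path is the upstream default and may differ per release):

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml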

The NetworkAttachmentDefinition, with the variants I tested commented out:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "name": "mynet",
    "type": "flannel",
    "delegate": {
      "bridge": "mynet0",
      "mtu": 1400
    }
  }'

#  config: '{
#    "cniVersion": "0.3.1",
#    "name": "mynet",
#    "type": "bridge",
#    "bridge": "mynet0",
#    "vlan": 100,
#    "ipam": {}
#  }'

#  config: '{
#    "cniVersion": "0.3.1",
#    "type": "vlan",
#    "master": "enp0s3",
#    "mtu": 1500,
#    "vlanId": 5,
#    "linkInContainer": false,
#    "ipam": {
#      "type": "host-local",
#      "subnet": "172.31.0.0/24"
#    },
#    "dns": {
#      "nameservers": [ "10.1.1.1", "8.8.8.8" ]
#    }
#  }'

Sample pod:

apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
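
Whether an attachment actually happened can be checked from inside the pod; a sketch (Multus names secondary interfaces net1, net2, ... by default):

kubectl exec samplepod -- ip addr
# Expect an extra interface (typically net1) besides lo and eth0
# if the networks annotation was honored.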

I feel like I'm missing something really simple here. I verified that vlan and bridge interfaces can be used on my nodes. I don't see any errors in the logs or in journalctl on the nodes. It looks like the macvlan-conf is simply being ignored.
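
A quick way to see why the annotation gets ignored is to inspect the node's CNI directories; a sketch, assuming containerd's default paths. If no meta-plugin config (e.g. Multus) sits in /etc/cni/net.d, the runtime only ever calls the primary CNI (Calico here, as the logs below show) and the annotation is silently dropped:

# CNI configs the runtime loads (it uses the lexicographically first file):
ls -l /etc/cni/net.d/
# Plugin binaries available to the runtime (bridge, vlan, macvlan, multus, ...):
ls -l /opt/cni/bin/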

Does anyone have such an environment, or know what I'm missing?

Thanks!

Edit: logs from creating the pod

Oct 06 08:31:53 debian kubelet[793]: I1006 08:31:53.995353     793 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hbf8\" (UniqueName: \"kubernetes.io/projected/d7c2b05a-a1a6-424d-b626-0d7f5b1ffebc-kube-api-access-5hbf8\") pod \"samplepod2\" (UID: \"d7c2b05a-a1a6-424d-b626-0d7f5b1ffebc\") " pod="default/samplepod2"
Oct 06 08:31:54 debian containerd[721]: time="2023-10-06T08:31:54.165145664+02:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:samplepod2,Uid:d7c2b05a-a1a6-424d-b626-0d7f5b1ffebc,Namespace:default,Attempt:0,}"
Oct 06 08:31:54 debian kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct 06 08:31:54 debian kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali68b9b56a958: link becomes ready
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.244 [INFO][358123] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {debian-k8s-samplepod2-eth0  default  d7c2b05a-a1a6-424d-b626-0d7f5b1ffebc 245039 0 2023-10-06 08:31:53 +0200 CEST <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  debian  samplepod2 eth0 default [] []   [kns.default>
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.244 [INFO][358123] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Namespace="default" Pod="samplepod2" WorkloadEndpoint="debian-k8s-samplepod2-eth0"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.275 [INFO][358141] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" HandleID="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Workload="debian-k8s-samplepod2-eth0"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.287 [INFO][358141] ipam_plugin.go 268: Auto assigning IP ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" HandleID="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Workload="debian-k8s-samplepod2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000298240), Attrs:map[string]string{"namespace":"default", "node":"debian", "pod":"samplepod2",>
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.287 [INFO][358141] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.287 [INFO][358141] ipam_plugin.go 371: Acquired host-wide IPAM lock.
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.288 [INFO][358141] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'debian'
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.290 [INFO][358141] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.294 [INFO][358141] ipam.go 372: Looking up existing affinities for host host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.299 [INFO][358141] ipam.go 489: Trying affinity for 192.168.245.192/26 host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.301 [INFO][358141] ipam.go 155: Attempting to load block cidr=192.168.245.192/26 host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.303 [INFO][358141] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.245.192/26 host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.303 [INFO][358141] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.245.192/26 handle="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.305 [INFO][358141] ipam.go 1682: Creating new handle: k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.308 [INFO][358141] ipam.go 1203: Writing block in order to claim IPs block=192.168.245.192/26 handle="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.315 [INFO][358141] ipam.go 1216: Successfully claimed IPs: [192.168.245.206/26] block=192.168.245.192/26 handle="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.315 [INFO][358141] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.245.206/26] handle="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" host="debian"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.315 [INFO][358141] ipam_plugin.go 377: Released host-wide IPAM lock.
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.315 [INFO][358141] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.245.206/26] IPv6=[] ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" HandleID="k8s-pod-network.7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Workload="debian-k8s-samplepod2-eth0"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.317 [INFO][358123] k8s.go 383: Populated endpoint ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Namespace="default" Pod="samplepod2" WorkloadEndpoint="debian-k8s-samplepod2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"debian-k8s-samplepod2-eth0", GenerateName:"", Namespace:"default", SelfLink>
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.317 [INFO][358123] k8s.go 384: Calico CNI using IPs: [192.168.245.206/32] ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Namespace="default" Pod="samplepod2" WorkloadEndpoint="debian-k8s-samplepod2-eth0"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.317 [INFO][358123] dataplane_linux.go 68: Setting the host side veth name to cali68b9b56a958 ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Namespace="default" Pod="samplepod2" WorkloadEndpoint="debian-k8s-samplepod2-eth0"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.318 [INFO][358123] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Namespace="default" Pod="samplepod2" WorkloadEndpoint="debian-k8s-samplepod2-eth0"
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.356 [INFO][358123] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Namespace="default" Pod="samplepod2" WorkloadEndpoint="debian-k8s-samplepod2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"debian-k8s-samplepod2-eth0", Ge>
Oct 06 08:31:54 debian containerd[721]: 2023-10-06 08:31:54.365 [INFO][358123] k8s.go 489: Wrote updated endpoint to datastore ContainerID="7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681" Namespace="default" Pod="samplepod2" WorkloadEndpoint="debian-k8s-samplepod2-eth0"
Oct 06 08:31:54 debian containerd[721]: time="2023-10-06T08:31:54.388979383+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 06 08:31:54 debian containerd[721]: time="2023-10-06T08:31:54.389077005+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 06 08:31:54 debian containerd[721]: time="2023-10-06T08:31:54.389088977+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 06 08:31:54 debian containerd[721]: time="2023-10-06T08:31:54.389223823+02:00" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681 pid=358182 runtime=io.containerd.runc.v2
Oct 06 08:31:54 debian systemd[1]: Started cri-containerd-7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681.scope - libcontainer container 7f1e4e55ae12bd9d257a47ecb7630ea59029f5da8ef89b3937dec19cbf11d681.

Answer 1

A few things to consider:

  1. A Pod hosting containers in Kubernetes is a resource internal to a node.
  2. A VLAN is Layer 2 (a broadcast domain).
  3. You appear to be trying to solve a cloud-native scenario with a solution from the VM world. (Not impossible, but possibly a bit of a struggle.)
  4. Keep in mind that Calico is a Layer 3 based networking and security solution for cloud-native environments.

VLAN-based solution using the vlan CNI plugin (the config below uses "type": "vlan"). Your CNI config should look like this:

{
    "name": "mynet",
    "cniVersion": "0.3.1",
    "type": "vlan",
    "master": "eth0",
    "mtu": 1500,
    "vlanId": 5, 
    "linkInContainer": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.1.0/24"
    },
    "dns": {
        "nameservers": [ "10.1.1.1", "8.8.8.8" ]
    }
}
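
For the annotation below to resolve to this config, it needs to be stored in a NetworkAttachmentDefinition whose metadata.name matches the annotation value. A minimal sketch, assuming Multus is installed as the meta-plugin:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: mynet
spec:
  config: '{
    "name": "mynet",
    "cniVersion": "0.3.1",
    "type": "vlan",
    "master": "eth0",
    "mtu": 1500,
    "vlanId": 5,
    "linkInContainer": true,
    "ipam": {
      "type": "host-local",
      "subnet": "10.1.1.0/24"
    },
    "dns": {
      "nameservers": [ "10.1.1.1", "8.8.8.8" ]
    }
  }'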

Annotate your deployment with:

  annotations:
    k8s.v1.cni.cncf.io/networks: mynet

At this point, your pod should be attached to that VLAN.

Note: if you prefer the vlan to be assigned directly to your pod, set linkInContainer to true and add the appropriate annotation to your deployment.


The cloud-native way, with Calico: while the previous solution works, I'd recommend a truly cloud-native way to accomplish your task. Advantages:

  • Simpler management
  • Better scalability

Install Calico and configure an IPPool.

Note: natOutgoing is disabled in this IPPool.

kubectl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ippool-1
spec:
  cidr: <CIDR>
  ipipMode: Always
  natOutgoing: false
EOF

Note: natOutgoing is off, which means each pod sends outbound requests with its own IP; if the gateway doesn't know how to route back to that IP, the traffic is dropped.
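
To verify the pool afterwards, a sketch (assuming calicoctl is configured; kubectl also works if the Calico API server is installed):

calicoctl get ippool ippool-1 -o yaml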

Use BGP peering to establish Layer 3 routing with your gateway. (Static routes also work.)
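
A minimal global BGPPeer sketch, in the same kubectl heredoc style as above; peerIP and asNumber are placeholders for your gateway's values:

kubectl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: gateway-peer
spec:
  peerIP: 10.1.1.1   # placeholder: your gateway's address
  asNumber: 64512    # placeholder: your gateway's AS number
EOF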

Consider using the Calico eBPF data plane if you want to preserve the client IP inside the cluster.
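
Switching data planes takes more than one step (kube-proxy has to be taken out of the path first, per the Calico docs), but the central toggle is the bpfEnabled Felix setting; a sketch, assuming the default FelixConfiguration exists and is reachable via kubectl:

kubectl patch felixconfiguration default --type merge -p '{"spec":{"bpfEnabled":true}}'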
