How do I configure a bare-metal Kubernetes cluster with Magnum using only one VLAN?

There seem to be no guides on how to deploy a bare-metal Kubernetes cluster with Magnum. I have gotten the bare-metal server to power on and issue PXE requests, but the dnsmasq server does not respond to the BOOTP requests. What steps are needed to make this work?

Update:
Not sure who closed this question. If anything here is off-topic, please let me know.

After setting fixed-network to the bare-metal provisioning network, Magnum was able to initiate PXE requests. However, the external-network flag is also required, and there is only one VLAN in this setup (already used as the bm-provision network). I tried creating a public network like this (without any underlying physical network device):

openstack network create public --provider-network-type vxlan \
                                  --external \
                                  --project service

openstack subnet create public-subnet --network public \
                                  --subnet-range 172.16.10.0/24 \
                                  --gateway 172.16.10.1 \
                                  --ip-version 4 

openstack coe cluster template create bmt \
--image fa27 \
--keypair mykey \
--external-network public \
--fixed-network bm-provision \
--fixed-subnet bm-provision-subnet \
--master-flavor bm.dev \
--flavor bm.dev \
--network-driver calico \
--coe kubernetes

openstack coe cluster create bm \
                        --cluster-template bmt \
                        --master-count 1 \
                        --node-count 1 \
                        --keypair mykey 

This provisioned an OS onto the bare metal via Ironic over PXE, but then ran into the following problem:

{
  "default-master": "Resource CREATE failed: NotFound: resources.kube_masters.resources[0].resources.kube_master_floating: External network fa5174bb-d01d-48ca-a564-bbff283b1141 is not reachable from subnet 3e68266a-e28b-4d2c-8cd6-4042ac5a38ac.  Therefore, cannot associate Port 4950a517-31c6-4ed2-b7f8-03c3286063b3 with a Floating IP.\nNeutron server returns request_ids: ['req-bc587047-bf37-49ba-a46c-4d299f85812b']",
  "default-worker": "Resource CREATE failed: NotFound: resources.kube_masters.resources[0].resources.kube_master_floating: External network fa5174bb-d01d-48ca-a564-bbff283b1141 is not reachable from subnet 3e68266a-e28b-4d2c-8cd6-4042ac5a38ac.  Therefore, cannot associate Port 4950a517-31c6-4ed2-b7f8-03c3286063b3 with a Floating IP.\nNeutron server returns request_ids: ['req-bc587047-bf37-49ba-a46c-4d299f85812b']"
}
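The Neutron error means the fixed subnet has no router path to the external network, so a floating IP cannot be associated. A hedged diagnostic sketch (names follow this setup; the IDs in your error output will differ):

```shell
# Sketch: diagnose the "External network is not reachable from subnet" error.
# Assumes the network/subnet names used above; adjust to your deployment.

# Confirm the network is actually flagged external:
openstack network show public -c "router:external" -c provider:network_type

# A floating IP can only be associated if a router connects the fixed
# subnet to the external network; with no such router, Neutron raises
# exactly the error shown above:
openstack router list
```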


For this experiment I do not need an external network; bm-provision has a gateway pointing at a NAT, so it already has internet access. I am trying to achieve this with a single VLAN. Is that possible?

Let me know if you need any other information.

Answer
Since the question was closed, this could not be posted as a regular answer, but here is essentially what was done:

# add some more interfaces facing into the same vlan
# note: macvlan was attempted too, but BOOTP requests did not go through for some reason
ip link add kolla_i type veth peer name kolla_b
for i in `seq 1 10`; do ip link add p${i}_i type veth peer name p${i}_b; done
ip link add eno2_br type bridge
ip link set eno2_br up
ip link set eno2 master eno2_br
ip link set kolla_b master eno2_br
ip link set kolla_b up
ip link set kolla_i up 
ip a add 10.0.0.4/16 dev kolla_i
for i in `seq 1 10`; do ip link set p${i}_b master eno2_br; done
for i in `seq 1 10`; do ip link set p${i}_b up; done
for i in `seq 1 10`; do ip link set p${i}_i up; done

In the globals.yml file (for the kolla-ansible configuration):

kolla_internal_vip_address: "10.0.0.4"
network_interface: "kolla_i"
neutron_external_interface: "p1_i,p2_i,p3_i,p4_i,p5_i,p6_i,p7_i,p8_i,p9_i,p10_i"
# this option does not exist so just add it into globals.yml
neutron_bridge_name: "br-ex1,br-ex2,br-ex3,br-ex4,br-ex5,br-ex6,br-ex7,br-ex8,br-ex9,br-ex10"
ironic_dnsmasq_interface: "p1_i"
ironic_dnsmasq_dhcp_range: "10.0.2.1,10.0.2.5"

This essentially gives you 10 physical networks to use within the same VLAN. After installation, the OVS config file looks like this:

docker exec -it --user root neutron_openvswitch_agent bash -c "cat /etc/neutron/plugins/ml2/openvswitch_agent.ini" 
[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
bridge_mappings = physnet1:br-ex1,physnet2:br-ex2,physnet3:br-ex3,physnet4:br-ex4,physnet5:br-ex5,physnet6:br-ex6,physnet7:br-ex7,physnet8:br-ex8,physnet9:br-ex9,physnet10:br-ex10
datapath_type = system
ovsdb_connection = tcp:127.0.0.1:6640
local_ip = 10.0.0.4
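To confirm the bridges behind bridge_mappings were actually created and wired to the veth uplinks, something like the following can be used (container names follow kolla-ansible defaults; adjust if yours differ):

```shell
# Sketch: verify the OVS bridges listed in bridge_mappings exist.
# kolla-ansible runs ovs-vsctl inside the openvswitch_vswitchd container.
docker exec openvswitch_vswitchd ovs-vsctl list-br

# Check that a given br-exN has its veth uplink (p1_i .. p10_i) attached:
docker exec openvswitch_vswitchd ovs-vsctl list-ports br-ex1
```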

The next step is to create the networks:

openstack network create \
--share \
--provider-network-type flat \
--provider-physical-network physnet1 \
--external \
provision

openstack subnet create \
--network provision \
--allocation-pool start=10.0.2.6,end=10.0.2.230 \
--gateway 10.0.0.10 \
--subnet-range 10.0.0.0/16 \
provision-subnet 


openstack network create \
--share \
--provider-network-type flat \
--provider-physical-network physnet2 \
--external \
public

openstack subnet create \
--network public \
--allocation-pool start=10.1.0.1,end=10.1.0.10 \
--allocation-pool start=10.1.0.12,end=10.1.0.250 \
--gateway 10.1.0.11 \
--subnet-range 10.1.0.0/16 \
public-subnet 

Network topology: create a router between the two networks

openstack router create provision-public
openstack router set provision-public --external-gateway public
openstack router add subnet provision-public provision-subnet
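To verify the router ended up with both the external gateway on public and an interface in provision-subnet (this is exactly the reachability the earlier floating-IP error complained about), a sketch:

```shell
# Sketch: confirm the router connects provision-subnet to the public network.
openstack router show provision-public -c external_gateway_info

# List the router's ports; one should sit in provision-subnet:
openstack port list --router provision-public
```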

Register the bare-metal nodes:

openstack flavor create --ram 1048576 --disk 100 --vcpus 64 bm.dev
openstack flavor set --property baremetal=true bm.dev
openstack flavor set --property resources:CUSTOM_BAREMETAL_DEV=1 bm.dev
openstack flavor set --property resources:VCPU=0 bm.dev
openstack flavor set --property resources:MEMORY_MB=0 bm.dev
openstack flavor set --property resources:DISK_GB=0 bm.dev


openstack baremetal node create --name dev02 \
--driver ipmi \
--driver-info ipmi_username=<user> \
--driver-info ipmi_password=<pass> \
--driver-info ipmi_address=<ipmi_addr> \
--driver-info deploy_kernel=http://10.0.0.4:8089/ironic-agent.kernel \
--driver-info deploy_ramdisk=http://10.0.0.4:8089/ironic-agent.initramfs \
--driver-info cleaning_network=provision \
--driver-info provisioning_network=provision \
--deploy-interface=direct \
--network-interface=flat \
--driver-info force_persistent_boot_device=True \
--property capabilities=boot_mode:uefi \
--property cpu_arch=x86_64 \
--property local_gb=1000 \
--resource-class baremetal.dev

openstack baremetal port create <mac_addr> --node <id>
openstack baremetal node manage dev02
openstack baremetal node provide dev02

openstack baremetal node create --name dev03 \
--driver ipmi \
--driver-info ipmi_username=<user> \
--driver-info ipmi_password=<pass> \
--driver-info ipmi_address=<ipmi_addr> \
--driver-info deploy_kernel=http://10.0.0.4:8089/ironic-agent.kernel \
--driver-info deploy_ramdisk=http://10.0.0.4:8089/ironic-agent.initramfs \
--driver-info cleaning_network=provision \
--driver-info provisioning_network=provision \
--deploy-interface=direct \
--network-interface=flat \
--driver-info force_persistent_boot_device=True \
--property capabilities=boot_mode:uefi \
--property cpu_arch=x86_64 \
--property local_gb=1000 \
--resource-class baremetal.dev

openstack baremetal port create <mac_addr> --node <id>
openstack baremetal node manage dev03 
openstack baremetal node provide dev03

docker exec --user root  nova_conductor bash -c "nova-manage cell_v2 discover_hosts --by-service"
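After the manage/provide steps and host discovery, the nodes should be schedulable by Nova. A hedged sanity check (container and node names as used above):

```shell
# Sketch: nodes should progress to "available" after cleaning.
openstack baremetal node list

# After discover_hosts, each Ironic node should appear as a hypervisor
# so Nova can place the Magnum-created instances on it:
openstack hypervisor list
```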

Create the template and deploy:

openstack coe cluster template create bmt \
--image bm \
--keypair mykey \
--external-network public \
--fixed-network provision \
--fixed-subnet provision-subnet \
--master-flavor bm.dev \
--flavor bm.dev \
--network-driver calico \
--coe kubernetes

openstack coe cluster create bm \
                        --cluster-template bmt \
                        --master-count 1 \
                        --node-count 1 \
                        --keypair mykey
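Once the stack is created, the cluster status can be watched and a kubeconfig fetched roughly like this (the config file name written by the client is an assumption; check the directory after running it):

```shell
# Sketch: watch cluster creation and fetch credentials when done.
openstack coe cluster show bm -c status -c status_reason

# When status reaches CREATE_COMPLETE, write the kubeconfig locally:
openstack coe cluster config bm --dir .
export KUBECONFIG=$PWD/config
kubectl get nodes -o wide
```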

Note that this is not suitable for production, since all networks share the same broadcast domain, but it is fine for experimentation.
