Mounting Azure virtual hard disks on the file system during Azure Linux VM creation using Terraform

I am using Terraform to automate the creation of virtual machines on Microsoft Azure. The Terraform script should accept a data-disk configuration like the one below (see data_disk_size_gb), create the virtual hard disks, and mount them at the given file system paths.

module "jumphost" {
    count = 1
    source = "../modules/services/jumphost"
    prefix = "${module.global-vars.project}-${var.environment}"
    rg = azurerm_resource_group.rg
    vm_size = "Standard_D2s_v4"
    subnet = azurerm_subnet.web-subnet
    private_ip_address = "10.0.1.250"
    data_disk_size_gb = [ 
        ["/data", 100],
        ["/data2" , 200]
    ]
    admin_username = "zaidwaqi"
    admin_public_key_path = "../id_rsa.pub"
    nsg_allow_tcp_ports = [22]
    public_ip_address = true
}
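
For context, inside the module data_disk_size_gb is declared as a list of (mount path, size) pairs. A minimal sketch of what that variable declaration might look like (the exact type constraint here is illustrative; the module may declare it differently):

variable "data_disk_size_gb" {
  # Illustrative sketch only. Each element is a
  # [mount path, disk size in GB] pair.
  type    = list(tuple([string, number]))
  default = []
}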

The virtual hard disks are created and attached to the VM as follows. I believe this part works well and is not the cause of the problem.

resource "azurerm_managed_disk" "data-disk" {
    count                = length(var.data_disk_size_gb)
    name                 = "${var.prefix}-${var.service_name}-data-disk-${count.index}"
    location             = var.rg.location
    resource_group_name  = var.rg.name
    storage_account_type = "Standard_LRS"
    create_option        = "Empty"
    disk_size_gb         = var.data_disk_size_gb[count.index][1]
}

resource "azurerm_virtual_machine_data_disk_attachment" "external" {
    count       = length(azurerm_managed_disk.data-disk)
    managed_disk_id  = "${azurerm_managed_disk.data-disk[count.index].id}"  
    virtual_machine_id = azurerm_linux_virtual_machine.vm.id  
    lun        = "${count.index + 10}"  
    caching      = "ReadWrite"  
}
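
The attachment assigns LUNs 10 and 11. Assuming the Azure udev rules shipped with WALinuxAgent are installed in the image, each attached disk should also be reachable through a stable symlink keyed by LUN, independent of sdb/sdc ordering; a quick check on the VM:

# Assumes WALinuxAgent's udev rules (66-azure-storage.rules) are present
ls -l /dev/disk/azure/scsi1/
# lun10 and lun11 should appear here, pointing at the attached data disks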

To make use of the provisioned data disks, a cloud-init configuration handles partitioning, filesystem creation, and mounting. The Terraform configuration supplies the relevant information through a template_cloudinit_config data source, whose rendered output is passed to the VM's custom_data attribute.

data "template_cloudinit_config" "config" {
  gzip = true
  base64_encode = true
  part {
      filename = "init-cloud-config"
      content_type = "text/cloud-config"
      content = file("../modules/services/${var.service_name}/init.yaml")
  }
  part {
      filename = "init-shellscript"
      content_type = "text/x-shellscript"
      content = templatefile("../modules/services/${var.service_name}/init.sh",
        { 
          hostname = "${var.prefix}-${var.service_name}"
          data_disk_size_gb = var.data_disk_size_gb
        }
      )
  }
}
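
For completeness, a minimal sketch of how the rendered payload is wired into the VM inside the module (all other azurerm_linux_virtual_machine arguments elided):

resource "azurerm_linux_virtual_machine" "vm" {
  # ... other arguments elided ...

  # custom_data must be base64-encoded; base64_encode = true on the
  # data source above already takes care of that.
  custom_data = data.template_cloudinit_config.config.rendered
}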

The cloud-init shell script init.sh, which receives those parameters, is as follows:

#!/bin/bash

hostnamectl set-hostname ${hostname}

%{ for index, disk in data_disk_size_gb ~}
parted /dev/sd${ split("","bcdef")[index] } --script mklabel gpt mkpart xfspart xfs 0% 100%
mkfs.xfs /dev/sd${ split("","bcdef")[index] }1
partprobe /dev/sd${ split("","bcdef")[index] }1
mkdir -p ${ disk[0] }
mount /dev/sd${ split("","bcdef")[index] }1 ${ disk[0] }
echo UUID=\"`(blkid /dev/sd${ split("","bcdef")[index] }1 -s UUID -o value)`\" ${ disk[0] }        xfs     defaults,nofail         1       2 >> /etc/fstab
%{ endfor ~}
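
As an aside, the same loop could address the disks by LUN instead of by drive letter; sdb/sdc ordering is not guaranteed on Azure (the temporary resource disk can claim a letter), whereas the LUN symlinks follow the lun = count.index + 10 assignment above. A sketch, assuming the WALinuxAgent udev rules create /dev/disk/azure/scsi1/lun<N> (and lun<N>-part<M>) symlinks:

%{ for index, disk in data_disk_size_gb ~}
# Stable path keyed by LUN rather than by discovery order
dev="/dev/disk/azure/scsi1/lun${index + 10}"
parted "$dev" --script mklabel gpt mkpart xfspart xfs 0% 100%
udevadm settle          # wait for the new partition node before mkfs
mkfs.xfs "$dev-part1"
mkdir -p ${disk[0]}
mount "$dev-part1" ${disk[0]}
%{ endfor ~}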

After terraform apply completes, /data and /data2 are not visible in the df output. I expected to see entries for /dev/sdb1 and /dev/sdc1, with mount points /data and /data2 respectively.

[zaidwaqi@starter-stage-jumphost ~]$ ls /
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[zaidwaqi@starter-stage-jumphost ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  8.6M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2        30G  1.8G   28G   7% /
/dev/sda1       496M   73M  424M  15% /boot
/dev/sda15      495M  6.9M  488M   2% /boot/efi
tmpfs           798M     0  798M   0% /run/user/1000

Diagnostic information

/var/lib/cloud/instance/scripts/

#!/bin/bash

hostnamectl set-hostname starter-stage-jumphost

parted /dev/sdb --script mklabel gpt mkpart xfspart xfs 0% 100%
mkfs.xfs /dev/sdb1
partprobe /dev/sdb1
mkdir -p /data
mount /dev/sdb1 /data
echo UUID=\"`(blkid /dev/sdb1 -s UUID -o value)`\" /data        xfs     defaults,nofail         1       2 >> /etc/fstab 
parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
mkfs.xfs /dev/sdc1
partprobe /dev/sdc1
mkdir -p /data2
mount /dev/sdc1 /data2
echo UUID=\"`(blkid /dev/sdc1 -s UUID -o value)`\" /data2        xfs     defaults,nofail         1       2 >> /etc/fstab
[zaidwaqi@starter-stage-jumphost scripts]$ 

/var/log/cloud-init.log

Partial contents of the log; hopefully the relevant section is shown below.

2021-07-03 05:42:43,635 - cc_disk_setup.py[DEBUG]: Creating new partition table/disk
2021-07-03 05:42:43,635 - util.py[DEBUG]: Running command ['udevadm', 'settle'] with allowed return codes [0] (shell=False, capture=True)
2021-07-03 05:42:43,651 - util.py[DEBUG]: Creating partition on /dev/disk/cloud/azure_resource took 0.016 seconds
2021-07-03 05:42:43,651 - util.py[WARNING]: Failed partitioning operation
Device /dev/disk/cloud/azure_resource did not exist and was not created with a udevadm settle.
2021-07-03 05:42:43,651 - util.py[DEBUG]: Failed partitioning operation
Device /dev/disk/cloud/azure_resource did not exist and was not created with a udevadm settle.
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cloudinit/config/cc_disk_setup.py", line 140, in handle
    func=mkpart, args=(disk, definition))
  File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 2539, in log_time
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/cloudinit/config/cc_disk_setup.py", line 769, in mkpart
    assert_and_settle_device(device)
  File "/usr/lib/python3.6/site-packages/cloudinit/config/cc_disk_setup.py", line 746, in assert_and_settle_device
    "with a udevadm settle." % device)
RuntimeError: Device /dev/disk/cloud/azure_resource did not exist and was not created with a udevadm settle.
2021-07-03 05:42:43,672 - cc_disk_setup.py[DEBUG]: setting up filesystems: [{'filesystem': 'ext4', 'device': 'ephemeral0.1'}]
2021-07-03 05:42:43,672 - cc_disk_setup.py[DEBUG]: ephemeral0.1 is mapped to disk=/dev/disk/cloud/azure_resource part=1
2021-07-03 05:42:43,672 - cc_disk_setup.py[DEBUG]: Creating new filesystem.
2021-07-03 05:42:43,672 - util.py[DEBUG]: Running command ['udevadm', 'settle'] with allowed return codes [0] (shell=False, capture=True)
2021-07-03 05:42:43,684 - util.py[DEBUG]: Creating fs for /dev/disk/cloud/azure_resource took 0.012 seconds
2021-07-03 05:42:43,684 - util.py[WARNING]: Failed during filesystem operation
Device /dev/disk/cloud/azure_resource did not exist and was not created with a udevadm settle.
2021-07-03 05:42:43,684 - util.py[DEBUG]: Failed during filesystem operation
Device /dev/disk/cloud/azure_resource did not exist and was not created with a udevadm settle.
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cloudinit/config/cc_disk_setup.py", line 158, in handle
    func=mkfs, args=(definition,))
  File "/usr/lib/python3.6/site-packages/cloudinit/util.py", line 2539, in log_time
    ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/cloudinit/config/cc_disk_setup.py", line 871, in mkfs
    assert_and_settle_device(device)
  File "/usr/lib/python3.6/site-packages/cloudinit/config/cc_disk_setup.py", line 746, in assert_and_settle_device
    "with a udevadm settle." % device)
RuntimeError: Device /dev/disk/cloud/azure_resource did not exist and was not created with a udevadm settle.
2021-07-03 05:42:43,684 - handlers.py[DEBUG]: finish: init-network/config-disk_setup: SUCCESS: config-disk_setup ran successfully
2021-07-03 05:42:43,685 - stages.py[DEBUG]: Running module mounts (<module 'cloudinit.config.cc_mounts' from '/usr/lib/python3.6/site-packages/cloudinit/config/cc_mounts.py'>) with frequency once-per-instance
2021-07-03 05:42:43,685 - handlers.py[DEBUG]: start: init-network/config-mounts: running config-mounts with frequency once-per-instance
2021-07-03 05:42:43,685 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/b7e003ce-7ad3-4840-a4f7-06faefed9cb0/sem/config_mounts - wb: [644] 24 bytes

/var/log/cloud-init-output.log

Partial contents of the log; hopefully the relevant section is shown below.

Complete!
Cloud-init v. 19.4 running 'modules:config' at Sat, 03 Jul 2021 05:42:46 +0000. Up 2268.33 seconds.
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=6553472 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=26213888, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=12799, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
meta-data=/dev/sdc1              isize=512    agcount=4, agsize=13107072 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=52428288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=25599, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Cloud-init v. 19.4 running 'modules:final' at Sat, 03 Jul 2021 05:45:28 +0000. Up 2430.85 seconds.
Cloud-init v. 19.4 finished at Sat, 03 Jul 2021 05:45:34 +0000. Datasource DataSourceAzure [seed=/dev/sr0].  Up 2436.88 seconds

Answer 1

When using parted, I had to reboot the VM before the disks became visible. In addition, the UUID values did not necessarily match what I had originally written into /etc/fstab. After rebooting, I checked with lsblk and blkid to make sure the information in /etc/fstab was correct. This is of course not ideal for automation. LVM does not seem to require a reboot. I suspect partprobe was not working properly either.
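
One way to avoid the reboot in an automated setup is to force a partition-table re-read and wait for udev before creating the filesystem and reading the UUID. A sketch of that idea for the first disk (note that partprobe takes the whole disk, not the partition, which may be why it appeared not to work):

parted /dev/sdb --script mklabel gpt mkpart xfspart xfs 0% 100%
partprobe /dev/sdb                           # re-read the table on the whole disk
udevadm settle --timeout=30                  # wait for udev to create the device nodes
while [ ! -b /dev/sdb1 ]; do sleep 1; done   # extra guard for the partition node
mkfs.xfs /dev/sdb1
mkdir -p /data
mount /dev/sdb1 /data
echo "UUID=$(blkid /dev/sdb1 -s UUID -o value) /data xfs defaults,nofail 1 2" >> /etc/fstab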
