The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created

I am trying to create an EKS cluster with a managed node group, and I want to run a shell script that hardens the worker nodes and configures proxy settings before the cluster bootstrap runs.

Here is my "main.tf" file:

module "eks" {
  source                                = "./modules/eks"
  cluster_name                          = "xxxxxxxxxxxxxxxxxx"
  cluster_version                       = "1.27"
  vpc_id                                = "xxxxxxxxxxxxxxxxxxxxxx"
  control_plane_subnet_ids              = ["subnet-xxxxxxxxxx", "subnet-xxxxxxxxxxx", "subnet-xxxxxxxxxxxx"]
  subnet_ids                            = ["subnet-xxxxxxxxxxxxx", "subnet-xxxxxxxxxxxxx", "subnet-xxxxxxxxxxxxx", "subnet-xxxxxxxxxxxxxxxx", "subnet-xxxxxxxxxxxxxxxx", "subnet-xxxxxxx"]
  cluster_endpoint_public_access        = false
  create_aws_auth_configmap             = false
  manage_aws_auth_configmap             = false
  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::xxxxxxxxxxx:user/test-user"
      username = "test-user"
      groups   = ["system:masters"]
    },
  ]
  eks_managed_node_groups = {
    test = {
      ami_id                     = "ami-xxxxxxxxxxxxxxx"
      enable_bootstrap_user_data = false
      pre_bootstrap_user_data    = templatefile("${path.module}/userdata.tpl", {
        cluster_endpoint    = module.eks.cluster_endpoint
        cluster_certificate = module.eks.cluster_certificate_authority_data
        cluster_name        = module.eks.cluster_name
      })
      instance_types             = ["t2.medium"]
      min_size                   = 1
      max_size                   = 1
      desired_size               = 1
      capacity_type              = "ON_DEMAND"
      labels = {
        app  = "test"
      }
      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            name                  = "disk-DR"
            volume_size           = 100
            volume_type           = "gp3"
            delete_on_termination = true
            tags = {
              "Environment"  = "Testing"
            }
          }
        }
      }
      tags = {
        "Environment"  = "Testing"
      }
    }
  }
  node_security_group_additional_rules = {
    ingress_443 = {
      description                   = "Cluster node internal communication"
      protocol                      = "tcp"
      from_port                     = 443
      to_port                       = 443
      type                          = "ingress"
      self                          = true
    }
  }
}

Here is my "userdata.tpl" file (referenced by the templatefile() call above):

#!/bin/bash
yum update -y
chmod 600 /etc/kubernetes/kubelet
ls -ld /etc/kubernetes/kubelet >> /var/log/vaca.log
chmod 600 /etc/kubernetes/kubelet/kubelet-config.json
ls -l /etc/kubernetes/kubelet/kubelet-config.json >> /var/log/vaca.log
sed -i 's/--hostname-override=[^ ]*//g' /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
sed -i '5s/^/ /;3i\"eventRecordQPS":0,' /etc/kubernetes/kubelet/kubelet-config.json
systemctl daemon-reload
systemctl restart kubelet
systemctl is-active kubelet >> /var/log/vaca.log
grep "evenRecordQPS" /etc/kubernetes/kubelet/kubelet-config.json >> /var/log/vaca.log
grep "--hostname-override" /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf >> /var/log/vaca.log
echo "tmpfs /tmp tmpfs defaults,rw,nosuid,nodev,noexec,relatime 0 0" >> /etc/fstab
mount -a
/etc/eks/bootstrap.sh --apiserver-endpoint ${cluster_endpoint} --b64-cluster-ca ${cluster_certificate} ${cluster_name}
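
Note that ${cluster_endpoint}, ${cluster_certificate} and ${cluster_name} are Terraform template placeholders, not shell variables: templatefile() substitutes them when Terraform renders the plan, not when the node boots. A minimal sketch (with hypothetical literal values) to illustrate that the template itself renders fine whenever every value is known at plan time:

output "rendered_user_data" {
  value = templatefile("${path.module}/userdata.tpl", {
    cluster_endpoint    = "https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com" # placeholder
    cluster_certificate = "LS0tLS1CRUdJTi4uLg=="                            # placeholder base64 CA
    cluster_name        = "test-cluster"                                    # placeholder
  })
}

The problem only appears when those values come from module.eks outputs, as in my main.tf above.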

When I run "terraform plan" and "terraform apply", I get the following error:

Error: Invalid count argument
 on modules/eks/modules/_user_data/main.tf line 67, in data "cloudinit_config" "linux_eks_managed_node_group":
 67: count = var.create && var.platform == "linux" && var.is_eks_managed_node_group && !var.enable_bootstrap_user_data && var.pre_bootstrap_user_data != "" && var.user_data_template_path == "" ? 1 : 0

 The "count" value depends on resource attributes that cannot be determined
 until apply, so Terraform cannot predict how many instances will be created. 
 To work around this, use the -target argument to first apply only
 the resources that the count depends on.

##[Warning]Can't find loc string for key: TerraformPlanFailed
##[error]: Error: TerraformPlanFailed 1
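
I suspect the trigger is the self-reference in my main.tf: pre_bootstrap_user_data is rendered from module.eks.cluster_endpoint, module.eks.cluster_certificate_authority_data and module.eks.cluster_name, i.e. from outputs of the very module being configured. Those outputs are unknown until apply, so the module's internal check var.pre_bootstrap_user_data != "" cannot be evaluated during the plan. A minimal sketch (hypothetical resource names) that reproduces the same class of error:

resource "null_resource" "example" {
  # aws_eks_cluster.this.endpoint is unknown until the cluster exists, so the
  # comparison below yields an unknown bool and Terraform cannot resolve count.
  count = aws_eks_cluster.this.endpoint != "" ? 1 : 0
}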

For quick reference, here is the data block from main.tf in the _user_data submodule of the EKS Terraform module.

data "cloudinit_config" "linux_eks_managed_node_group" {
  count = var.create && var.platform == "linux" && var.is_eks_managed_node_group && !var.enable_bootstrap_user_data && var.pre_bootstrap_user_data != "" && var.user_data_template_path == "" ? 1 : 0

  base64_encode = true
  gzip          = false
  boundary      = "//"

  # Prepend to existing user data supplied by AWS EKS
  part {
    content_type = "text/x-shellscript"
    content      = var.pre_bootstrap_user_data
  }
}
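
Since the count condition only needs var.pre_bootstrap_user_data != "" to be decidable at plan time, a workaround I am considering (a sketch, untested, assuming the node role is allowed to call eks:DescribeCluster, the AMI ships the AWS CLI, and userdata.tpl is adjusted accordingly) is to pass only plan-time-known values into the template and let the node resolve the endpoint and CA itself at boot:

locals {
  cluster_name = "xxxxxxxxxxxxxxxxxx" # a literal, known at plan time
}

module "eks" {
  source       = "./modules/eks"
  cluster_name = local.cluster_name
  # ... other inputs unchanged ...

  eks_managed_node_groups = {
    test = {
      # No module.eks.* outputs here, so the rendered value is known at plan
      # time. In userdata.tpl the node would fetch the endpoint and CA itself:
      #   aws eks describe-cluster --name ${cluster_name} --query 'cluster.endpoint' --output text
      #   aws eks describe-cluster --name ${cluster_name} --query 'cluster.certificateAuthority.data' --output text
      pre_bootstrap_user_data = templatefile("${path.module}/userdata.tpl", {
        cluster_name = local.cluster_name
      })
    }
  }
}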
