Unable to create Kubernetes resources with Terraform

I have a working GKE Kubernetes cluster. I am trying to deploy new resources to it with Terraform, but terraform apply returns the following:

Error: Error applying plan:

2 error(s) occurred:

* kubernetes_pod.test: 1 error(s) occurred:

* kubernetes_pod.test: pods is forbidden: User "client" cannot create pods in the namespace "default"
* module.helm.kubernetes_service_account.tiller: 1 error(s) occurred:

* kubernetes_service_account.tiller: serviceaccounts is forbidden: User "client" cannot create serviceaccounts in the namespace "kube-system"

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

My Terraform files are as follows:

main.tf for the GKE resources:

data "terraform_remote_state" "project" {
  backend = "gcs"

  config {
    bucket = "<bucket>"
    prefix = "terraform/testing-project"
  }
}

data "terraform_remote_state" "gke" {
  backend = "gcs"

  config {
    bucket = "<bucket>"
    prefix = "terraform/testing-gke-cluster"
  }
}

locals {
  host                   = "${data.terraform_remote_state.gke.endpoint}"
  client_certificate     = "${base64decode(data.terraform_remote_state.gke.client_certificate)}"
  client_key             = "${base64decode(data.terraform_remote_state.gke.client_key)}"
  cluster_ca_certificate = "${base64decode(data.terraform_remote_state.gke.cluster_ca_certificate)}"
}

provider "kubernetes" {
  host                   = "${local.host}"
  client_certificate     = "${local.client_certificate}"
  client_key             = "${local.client_key}"
  cluster_ca_certificate = "${local.cluster_ca_certificate}"
}

module "helm" {
  source                 = "../../../Modules/GKE/Helm/"
  host                   = "${local.host}"
  client_certificate     = "${local.client_certificate}"
  client_key             = "${local.client_key}"
  cluster_ca_certificate = "${local.cluster_ca_certificate}"
}


resource "kubernetes_pod" "test" {
  metadata {
    name = "terraform-example"
  }

  spec {
    container {
      image = "nginx:1.7.9"
      name  = "example"

      env {
        name  = "environment"
        value = "test"
      }
    }
  }
}

And the helm module:

resource "kubernetes_service_account" "tiller" {
  automount_service_account_token = true

  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = "${kubernetes_service_account.tiller.metadata.0.name}"
    api_group = ""
    namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
  }
}

# initialize Helm provider
provider "helm" {
  install_tiller = true
  service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
  tiller_image = "gcr.io/kubernetes-helm/tiller:v2.11.0"

  kubernetes {
    host                   = "${var.host}"
    client_certificate     = "${var.client_certificate}"
    client_key             = "${var.client_key}"
    cluster_ca_certificate = "${var.cluster_ca_certificate}"
  }
}

I have been trying to find a solution. My best guess is that it is related to the fact that I previously managed resources on this cluster with kubectl using my Google account, and that something is now wrong with the Terraform service account, but I don't know how to fix it. The Terraform service account has the Owner and Editor roles on the project.

Answer 1

You need to create a RoleBinding or ClusterRoleBinding for the user "client" so that the operations above are authorized under RBAC. When you enable legacy authorization, you are enabling the less secure ABAC instead.
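A minimal sketch of such a binding in Terraform, assuming the kubernetes provider is already configured and that granting cluster-admin to the certificate user "client" is acceptable (a narrower ClusterRole would be safer in practice):

```hcl
# Illustrative binding for the CN that the GKE client certificate
# presents ("client"). cluster-admin is used here only as an example;
# prefer a role scoped to what Terraform actually needs to create.
resource "kubernetes_cluster_role_binding" "client" {
  metadata {
    name = "client-cluster-admin"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "User"
    name      = "client"
    api_group = "rbac.authorization.k8s.io"
  }
}
```

Note the chicken-and-egg problem: this binding must be applied with credentials that already pass RBAC (for example your gcloud user), because the "client" certificate user cannot grant permissions to itself.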

Answer 2

Apparently this started working once I enabled legacy authorization. If someone can explain why, I will mark their answer as accepted.
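For reference, GKE's legacy (ABAC) authorization corresponds to the `enable_legacy_abac` argument on the cluster resource. A hedged fragment, assuming the cluster is managed with Terraform's google provider (name and zone are hypothetical):

```hcl
# Illustrative fragment only: with legacy ABAC enabled, the client
# certificate user is authorized without any RBAC bindings, at the
# cost of much coarser access control than RBAC.
resource "google_container_cluster" "cluster" {
  name               = "testing-gke-cluster" # hypothetical name
  zone               = "europe-west1-b"      # hypothetical zone
  initial_node_count = 1

  enable_legacy_abac = true
}
```

This is why enabling legacy authorization made the apply succeed: it bypasses the RBAC check that was rejecting user "client". The RBAC binding from Answer 1 achieves the same result without disabling RBAC.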
