Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Error: Get "http://localhost/api/v1/namespaces/devcluster-ns": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.
Error: Get "http://localhost/api/v1/namespaces/devcluster-ns/secrets/devcluster-storage-secret": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.
I provisioned an Azure AKS cluster with Terraform. After the initial deployment I made some code changes. Changes to other parts apply fine, but when we increase or decrease the number of nodes in "default_node_pool", we hit the errors shown above.
Specifically, the kubernetes provider problem seems to appear whenever "os_disk_size_gb" or "node_count" in default_node_pool is changed.
resource "azurerm_kubernetes_cluster" "aks" {
  name                                = "${var.cluster_name}-aks"
  location                            = var.location
  resource_group_name                 = data.azurerm_resource_group.aks-rg.name
  node_resource_group                 = "${var.rgname}-aksnode"
  dns_prefix                          = "${var.cluster_name}-aks"
  kubernetes_version                  = var.aks_version
  private_cluster_enabled             = var.private_cluster_enabled
  private_cluster_public_fqdn_enabled = var.private_cluster_public_fqdn_enabled
  private_dns_zone_id                 = var.private_dns_zone_id

  default_node_pool {
    name                         = "syspool01"
    vm_size                      = var.agents_size
    os_disk_size_gb              = var.os_disk_size_gb
    node_count                   = var.agents_count
    vnet_subnet_id               = data.azurerm_subnet.subnet.id
    zones                        = [1, 2, 3]
    kubelet_disk_type            = "OS"
    os_sku                       = "Ubuntu"
    os_disk_type                 = "Managed"
    ultra_ssd_enabled            = false
    max_pods                     = var.max_pods
    only_critical_addons_enabled = true
    # enable_auto_scaling = true
    # max_count           = var.max_count
    # min_count           = var.min_count
  }
}
I declared the kubernetes and helm providers correctly in provider.tf, as shown below, and the first deployment went through without any problems.
data "azurerm_kubernetes_cluster" "credentials" {
  name                = azurerm_kubernetes_cluster.aks.name
  resource_group_name = data.azurerm_resource_group.aks-rg.name
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
}
provider "helm" {
  kubernetes {
    host                   = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
    # username             = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
    # password             = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
    client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
    client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
    cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
  }
}
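As an aside, one variant I have been considering (not yet verified in this setup) is to drop the intermediate data source and feed the providers straight from the resource's own outputs, so the provider configuration never depends on a data source read:

```hcl
# Hypothetical sketch: azurerm_kubernetes_cluster exports kube_config itself,
# so the separate data source lookup could be skipped entirely.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
}
```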
terraform refresh, terraform plan, and terraform apply all need to run without errors.
One other finding: if I add config_path = "~/.kube/config" to the kubernetes provider in provider.tf and deploy, everything seems to work fine. But what I want is for it to work without referencing ~/.kube/config.
provider "kubernetes" {
  config_path            = "~/.kube/config" # <=== add this
  host                   = data.azurerm_kubernetes_cluster.credentials.kube_config[0].host
  username               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].username
  password               = data.azurerm_kubernetes_cluster.credentials.kube_config[0].password
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config[0].cluster_ca_certificate)
}
Is the problem, as I suspect, that data.azurerm_kubernetes_cluster.credentials cannot be read during terraform plan / terraform refresh / terraform apply while the cluster itself is being modified?
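For reference, if the data source really is deferred to apply time whenever the cluster resource changes, a workaround I have seen mentioned (untested here) is a two-phase apply, so the credentials are only read after the node pool change has already finished:

```shell
# Hypothetical workaround: apply the cluster change first, then everything else.
terraform apply -target=azurerm_kubernetes_cluster.aks
terraform apply
```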