Using Terraform, I want to build infrastructure consisting of an external load balancer (LB) and a MIG with 3 VMs. Each VM in the MIG should run a server listening on port `80`. Additionally, I want to set up a health check for the MIG. I also want an extra VM in the subnet so that I can ssh into it and check whether I can establish a connection to the VMs inside the MIG.

To achieve this, I used the Terraform modules `"GoogleCloudPlatform/lb-http/google"` and `"terraform-google-modules/vm/google//modules/mig"`. Unfortunately, after running `terraform apply`, all health checks fail and the LB is not reachable via its external IP.

I will put my code later in this post, but first I would like to understand the different attributes of the modules I referenced above:
- Does the MIG module's `named_ports` attribute refer to the port my server runs on? In my case, `80`?
- Does the MIG module's `health_check` attribute refer to the VMs inside the MIG? If so, I assume the `port` attribute of `health_check` should refer to the port the server runs on, again `80`?
- Does the LB module's `backends` attribute refer to the VMs inside the MIG? Should the `port` attribute of `default` again point to `80`?
- Finally, the LB module's `health_check` attribute is the same as the MIG module's, right? Again, the port specified there should be `80`?
- What do the attributes `target_tags` and `firewall_networks` refer to? The documentation says: "Names of the networks to create firewall rules in". I don't understand this. How does a load balancer configuration determine which firewall rules are added to a network? Also, which firewall rules are added to the named networks? If I add my network there, which firewall rules will be added to it?
- Using the VM named `ssh-vm`, I want to curl the VMs inside the MIG. For this I created the firewall rules `allow-ssh` and `allow-internal`. Unfortunately, when I ssh into the VM and curl one of the VMs inside the MIG, I get: `connection refused`.

EDIT: I was asked to provide details on how I ssh into the VM and curl the machines inside the MIG. All MIG VMs and `ssh-vm` are within 10.0.101.0/24. `ssh-vm` has an external IP, say X. To establish a connection, I open a terminal and run `ssh -i /my_key $USER@X`. Then I pick the internal IP of one of the machines, e.g. `10.0.101.3`, and run `curl 10.0.101.3:80`. I get: `Failed to connect to 10.0.101.3 port 80: Connection refused`.
Here is the `main.tf` file:
```hcl
data "external" "my_ip_addr" {
  program = ["/bin/bash", "${path.module}/getip.sh"]
}

resource "google_project_service" "project" {
  // ...
}

resource "google_service_account" "service-acc" {
  // ...
}

resource "google_compute_network" "vpc-network" {
  project                 = var.pro
  name                    = var.network_name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnetwork" {
  name          = "subnetwork"
  ip_cidr_range = "10.0.101.0/24"
  region        = var.region
  project       = var.pro
  stack_type    = "IPV4_ONLY"
  network       = google_compute_network.vpc-network.self_link
}

resource "google_compute_firewall" "allow-internal" {
  name    = "allow-internal"
  project = var.pro
  network = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
  source_ranges = ["10.0.101.0/24"]
}

resource "google_compute_firewall" "allow-ssh" {
  project   = var.pro
  name      = "allow-ssh"
  direction = "INGRESS"
  network   = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  target_tags   = ["allow-ssh"]
  source_ranges = [format("%s/%s", data.external.my_ip_addr.result["internet_ip"], 32)]
}

resource "google_compute_firewall" "allow-hc" {
  name          = "allow-health-check"
  project       = var.pro
  direction     = "INGRESS"
  network       = google_compute_network.vpc-network.self_link
  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
  target_tags   = [var.network_name]
  allow {
    ports    = ["80"]
    protocol = "tcp"
  }
}

resource "google_compute_address" "static" {
  project = var.pro
  region  = var.region
  name    = "ipv4-address"
}

resource "google_compute_instance" "ssh-vm" {
  name         = "ssh-vm"
  machine_type = "e2-standard-2"
  project      = var.pro
  tags         = ["allow-ssh"]
  zone         = "europe-west1-b"
  boot_disk {
    initialize_params {
      image = "ubuntu-2004-focal-v20221213"
    }
  }
  network_interface {
    subnetwork = google_compute_subnetwork.subnetwork.self_link
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }
  metadata = {
    startup-script = <<-EOF
      #!/bin/bash
      sudo snap install docker
      sudo docker version > file1.txt
      sleep 5
      sudo docker run -d --rm -p ${var.server_port}:${var.server_port} \
        busybox sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
        echo 'yo'; } | nc -l -p ${var.server_port}; done"
    EOF
  }
}

module "instance_template" {
  source     = "terraform-google-modules/vm/google//modules/instance_template"
  version    = "7.9.0"
  region     = var.region
  project_id = var.pro
  network    = google_compute_network.vpc-network.self_link
  subnetwork = google_compute_subnetwork.subnetwork.self_link
  service_account = {
    email  = google_service_account.service-acc.email
    scopes = ["cloud-platform"]
  }
  name_prefix    = "webserver"
  tags           = ["template-vm"]
  machine_type   = "e2-standard-2"
  startup_script = <<-EOF
    #!/bin/bash
    sudo snap install docker
    sudo docker version > docker_version.txt
    sleep 5
    sudo docker run -d --rm -p ${var.server_port}:${var.server_port} \
      busybox sh -c "while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; \
      echo 'yo'; } | nc -l -p ${var.server_port}; done"
  EOF
  source_image = "https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20221213"
  disk_size_gb = 10
  disk_type    = "pd-balanced"
  preemptible  = true
}

module "vm_mig" {
  source            = "terraform-google-modules/vm/google//modules/mig"
  version           = "7.9.0"
  project_id        = var.pro
  region            = var.region
  target_size       = 3
  instance_template = module.instance_template.self_link
  named_ports = [{
    name = "http"
    port = 80
  }]
  health_check = {
    type                = "http"
    initial_delay_sec   = 30
    check_interval_sec  = 30
    healthy_threshold   = 1
    timeout_sec         = 10
    unhealthy_threshold = 5
    response            = ""
    proxy_header        = "NONE"
    port                = 80
    request             = ""
    request_path        = "/"
    host                = ""
  }
  network    = google_compute_network.vpc-network.self_link
  subnetwork = google_compute_subnetwork.subnetwork.self_link
}

module "gce-lb-http" {
  source            = "GoogleCloudPlatform/lb-http/google"
  version           = "~> 4.4"
  project           = var.pro
  name              = "group-http-lb"
  target_tags       = ["template-vm"]
  firewall_networks = [google_compute_network.vpc-network.name]
  backends = {
    default = {
      description                     = null
      port                            = 80
      protocol                        = "HTTP"
      port_name                       = "http"
      timeout_sec                     = 10
      enable_cdn                      = false
      custom_request_headers          = null
      custom_response_headers         = null
      security_policy                 = null
      connection_draining_timeout_sec = null
      session_affinity                = null
      affinity_cookie_ttl_sec         = null
      health_check = {
        check_interval_sec  = null
        timeout_sec         = null
        healthy_threshold   = null
        unhealthy_threshold = null
        request_path        = "/"
        port                = 80
        host                = null
        logging             = null
      }
      log_config = {
        enable      = true
        sample_rate = 1.0
      }
      groups = [
        {
          # Each node pool instance group should be added to the backend.
          group                        = module.vm_mig.instance_group
          balancing_mode               = null
          capacity_scaler              = null
          description                  = null
          max_connections              = null
          max_connections_per_instance = null
          max_connections_per_endpoint = null
          max_rate                     = null
          max_rate_per_instance        = null
          max_rate_per_endpoint        = null
          max_utilization              = null
        },
      ]
      iap_config = {
        enable               = false
        oauth2_client_id     = null
        oauth2_client_secret = null
      }
    }
  }
}
```
Answer 1
- Does the MIG module's `named_ports` attribute refer to the port my server runs on? In my case, `80`?

Yes, the named port defines the destination port used for the TCP connection between the proxy (GFE or Envoy) and the backend instances. In your case, that is port 80.
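As a minimal sketch (reusing the values from the configuration above), the MIG's named port and the LB backend's `port_name` must match, and both refer to the port the server actually listens on:

```hcl
# In the MIG module: expose the service port under a name.
named_ports = [{
  name = "http"
  port = 80        # the port your server listens on
}]

# In the LB module's backend: refer to that name.
backends = {
  default = {
    port_name = "http"  # must match the named port above
    port      = 80
    # ...
  }
}
```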
For questions 2-4, all of your assumptions are correct.
- What do the attributes `target_tags` and `firewall_networks` refer to? The documentation says: "Names of the networks to create firewall rules in".

A network tag is a string you can add to a Compute Engine VM. When you create a firewall rule, you specify which Google Cloud VMs the rule applies to; these are the `target_tags`. In this case, the target tags point to the network tags you applied to your VMs. `firewall_networks` is simply the list of VPC networks in which the firewall rules will be enforced.
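As an illustration with hypothetical names (`web`, `my-network`): a tag only has an effect when the same string appears both in the VM's `tags` and in a firewall rule's `target_tags`, and the rule is enforced only inside the VPC named in its `network` argument:

```hcl
resource "google_compute_instance" "example" {
  name = "tagged-vm"
  tags = ["web"]  # network tag attached to the VM
  # ...
}

resource "google_compute_firewall" "allow-web" {
  name        = "allow-web"
  network     = "my-network"  # VPC the rule is enforced in
  direction   = "INGRESS"
  target_tags = ["web"]       # rule applies only to VMs carrying this tag
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
  source_ranges = ["0.0.0.0/0"]
}
```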
How does the load balancer configuration determine which firewall rules are added to the network?

The load balancer itself does not determine which firewall rules are added to a network. When you create a firewall rule, you are asked to choose the VPC network in which it will be enforced.
Also, which firewall rules are added to the named networks? If I add my-network there, which firewall rules will be added to this network?

Take one of the firewall rules in your `main.tf` file:
```hcl
resource "google_compute_firewall" "allow-ssh" {
  project   = var.pro
  name      = "allow-ssh"
  direction = "INGRESS"
  network   = google_compute_network.vpc-network.self_link
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  target_tags   = ["allow-ssh"]
  source_ranges = [format("%s/%s", data.external.my_ip_addr.result["internet_ip"], 32)]
}
```
This firewall rule is applied to the network `google_compute_network.vpc-network.self_link`. If you want to add another VPC named `my-network`, simply create another `google_compute_firewall` resource and set `my-network` under its `network` argument.
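For example, a rule enforced in that second VPC would differ only in its `network` argument (a sketch, assuming the hypothetical network exists as `google_compute_network.my-network`):

```hcl
resource "google_compute_firewall" "allow-ssh-my-network" {
  project   = var.pro
  name      = "allow-ssh-my-network"
  direction = "INGRESS"
  network   = google_compute_network.my-network.self_link  # hypothetical second VPC
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  target_tags = ["allow-ssh"]
}
```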
For question 6, please update your question and show us (with screenshots) how you curl the VMs inside the managed instance group.