Configuring a two-node Pacemaker cluster

I am trying to configure a two-node Pacemaker cluster in which the services of two resource groups should only be placed on the same host when one of the nodes is down. Can this be done with a two-node cluster, or does it have to be active/active in this case?
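As a rough sketch of one way to express that kind of policy (the group names groupA and groupB and the score are illustrative assumptions, not taken from your setup): a colocation constraint with a finite negative score keeps the two groups on different nodes while still allowing them to share the surviving node when only one node is left. With -INFINITY they would never share a node, not even in a failover.

## Keep groupB away from groupA under normal conditions; a finite negative score
## (unlike -INFINITY) still lets both groups run on the same node when it is the only one left
# pcs constraint colocation add groupB with groupA -1000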

Answer 1

Something like this might work. In this example I have two resources - the systemd service httpd and the floating IP 192.168.0.10 - and I only want the floating IP to be on the same host where httpd is running:

## Install pacemaker...
# ...

## Start and enable pcsd service
# systemctl enable --now pcsd.service

## Authenticate to all nodes (user hacluster will be created automatically after installing pacemaker)
# pcs host auth 192.168.0.5 192.168.0.6 -u hacluster -p <password>

## Setup cluster, enable and start it
# pcs cluster setup --enable --start mycluster 192.168.0.5 192.168.0.6

## Disable stonith feature
# pcs property set stonith-enabled=false

## Ignore quorum policy
# pcs property set no-quorum-policy=ignore

## Setup virtual IP
# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.0.10

## Set up the httpd resource via systemd. By default a resource runs on one node at a time, so clone it (a cloned resource defaults to running on all nodes at the same time)
## https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-advancedresource-haar
# pcs resource create httpd systemd:httpd clone

## Add a colocation constraint so the ClusterIP is assigned to the same node where the application is running
# pcs constraint colocation add ClusterIP with httpd-clone INFINITY

But I have not yet found a way to tell Pacemaker to keep restarting the httpd resource after it fails/crashes and no longer starts. It just becomes stopped, and even the instance that was still working gets shut down - the whole cluster goes down.
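One direction that might help here (not verified against this exact setup; the interval, timeout and threshold values below are illustrative assumptions) is to make sure the resource has a recurring monitor operation and to set failure-handling meta attributes such as failure-timeout and migration-threshold, so that recorded failures eventually expire and Pacemaker retries the resource instead of leaving it stopped:

## Ensure there is a recurring monitor so Pacemaker detects crashes
## (a default monitor may already exist; the interval here is illustrative)
# pcs resource op add httpd monitor interval=10s

## Let recorded failures expire so the resource is retried instead of staying stopped,
## and allow several local restarts before it is moved away (values are illustrative)
# pcs resource meta httpd-clone failure-timeout=60s migration-threshold=5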
