I'm managing a 3-node Ovirt 4.3.7 cluster with a hosted engine appliance; the nodes are also glusterfs nodes. The systems are:
- ovirt1 (node, at 192.168.40.193)
- ovirt2 (node, at 192.168.40.194)
- ovirt3 (node, at 192.168.40.195)
- ovirt-engine (engine, at 192.168.40.196)
The services ovirt-ha-agent and ovirt-ha-broker keep restarting on ovirt1 and ovirt3, which does not seem right (we first noticed the problem when these services' logs filled up on those systems).
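For reference, the restart churn is visible with the standard systemd tools (assuming the stock unit names), e.g.:

systemctl status ovirt-ha-agent ovirt-ha-broker
journalctl -u ovirt-ha-agent -u ovirt-ha-broker --since "1 hour ago"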
Every indication from the GUI console is that ovirt-engine is running on ovirt3. I tried to migrate ovirt-engine to ovirt2, but that failed with no further explanation.
Users can create, start, and stop VMs on all three nodes without any problem.
On each node, gluster-eventsapi status and hosted-engine --vm-status produce the following output:
ovirt1:
[root@ovirt1 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-engine.low.mdds.tcs-sec.com:80/ovirt-engine/services/glusterevents
+---------------+-------------+-----------------------+
| NODE | NODE STATUS | GLUSTEREVENTSD STATUS |
+---------------+-------------+-----------------------+
| 192.168.5.194 | UP | OK |
| 192.168.5.195 | UP | OK |
| localhost | UP | OK |
+---------------+-------------+-----------------------+
[root@ovirt1 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
ovirt2:
[root@ovirt2 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-engine.low.mdds.tcs-sec.com:80/ovirt-engine/services/glusterevents
+---------------+-------------+-----------------------+
| NODE | NODE STATUS | GLUSTEREVENTSD STATUS |
+---------------+-------------+-----------------------+
| 192.168.5.195 | UP | OK |
| 192.168.5.193 | UP | OK |
| localhost | UP | OK |
+---------------+-------------+-----------------------+
[root@ovirt2 ~]# hosted-engine --vm-status
--== Host ovirt2.low.mdds.tcs-sec.com (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt2.low.mdds.tcs-sec.com
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score : 0
stopped : False
Local maintenance : False
crc32 : e564d06b
local_conf_timestamp : 9753700
Host timestamp : 9753700
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=9753700 (Wed Mar 25 17:45:50 2020)
    host-id=1
    score=0
    vm_conf_refresh_time=9753700 (Wed Mar 25 17:45:50 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Thu Apr 23 21:29:10 1970
--== Host ovirt3.low.mdds.tcs-sec.com (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt3.low.mdds.tcs-sec.com
Host ID : 3
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : 620c8566
local_conf_timestamp : 1208310
Host timestamp : 1208310
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1208310 (Mon Dec 16 21:14:24 2019)
    host-id=3
    score=3400
    vm_conf_refresh_time=1208310 (Mon Dec 16 21:14:24 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=GlobalMaintenance
    stopped=False
ovirt3:
[root@ovirt3 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-engine.low.mdds.tcs-sec.com:80/ovirt-engine/services/glusterevents
+---------------+-------------+-----------------------+
| NODE | NODE STATUS | GLUSTEREVENTSD STATUS |
+---------------+-------------+-----------------------+
| 192.168.5.193 | DOWN | NOT OK: N/A |
| 192.168.5.194 | UP | OK |
| localhost | UP | OK |
+---------------+-------------+-----------------------+
[root@ovirt3 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
The steps I have taken so far:
- Discovered that the logs for the ovirt-ha-agent and ovirt-ha-broker services were not being rotated properly on nodes ovirt1 and ovirt3; the logs show the same failure on both nodes. broker.log contains this line, repeated frequently (see the storage checks after this list):
MainThread::WARNING::2020-03-25 18:03:28,846::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: [Errno 5] Input/output error: '/rhev/data-center/mnt/glusterSD/ovirt2:_engine/182a4a94-743f-4941-89c1-dc2008ae1cf5/ha_agent/hosted-engine.lockspace'
- Found the RHEV documentation recommending hosted-engine --vm-status to understand the problem; its output (above) suggests that ovirt1 is not fully part of the cluster.
- Asked on the Ovirt forums yesterday morning, but as a new user my question is held for moderator review, which has not happened yet (I would not mind waiting a few days if this cluster's users had not all suddenly started working from home and come to depend on it).
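Since the broker.log error is an I/O error on the lockspace file under the gluster mount, checking the mount and gluster's heal state seems like a reasonable sanity check ("engine" as the volume name is my inference from the ovirt2:_engine mount point):

ls -l /rhev/data-center/mnt/glusterSD/ovirt2:_engine/182a4a94-743f-4941-89c1-dc2008ae1cf5/ha_agent/
gluster volume status engine
gluster volume heal engine info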
How do I recover from this situation? (I suspect I need to repair something in the glusterfs cluster first, but I cannot find a hint, or lack the vocabulary to form the right search query.)
Update: after restarting glusterd on ovirt3, the glusterfs cluster looks healthy, but there is no change in the behavior of the ovirt services.
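(For completeness: the restart was simply systemctl restart glusterd on ovirt3, and by "healthy" I mean that gluster peer status on each node now reports both peers as State: Peer in Cluster (Connected).)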
Answer 1
Recovering from the situation above came down to running the following commands on ovirt3:
hosted-engine --vm-shutdown
hosted-engine --reinitialize-lockspace
hosted-engine --vm-start
This got ovirt-engine started on ovirt2. Afterwards, I restarted the ovirt-ha-broker.service and ovirt-ha-agent.service services on ovirt3.
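For anyone hitting the same state: ovirt3's stale metadata above still showed state=GlobalMaintenance, so it may be worth confirming afterwards that global maintenance is cleared and that one host now reports the engine as up. A sketch of that verification (my addition, not part of the original fix):

hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status

The service restart on ovirt3 was the usual systemctl restart ovirt-ha-broker ovirt-ha-agent.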