GlusterFS v3.10 fails to mount at boot on CentOS 7.3-1611


Fixed, at least for me. I don't know what happened: I compared the logs from when it wasn't working against the logs now that it works, and I can't see any difference. The one difference, and I don't know if it is a coincidence, is that when I run

gluster volume status

on both nodes, they both now show Task Status of Volume glustervol1, whereas previously on server2 it showed the box's hostname instead. I don't know how that happened, but it did... I don't know whether this is what fixed the problem, but it resolved itself after several reboots.

Good luck.

Still a thing?! There are plenty of posts about this going back to 2014, with the same problem on Ubuntu 14.04 using init. I am running CentOS 7.3-1611, fully patched, kernel 3.10.0-514.10.2.el7, with LVM-backed bricks and the client volume mounted on the same servers that run the bricks, and the gluster volume still fails to mount after a reboot.

I have 3 boxes:

  • server1: (server, peer 1) and client
  • server2: (server, peer 2) and client
  • server3: client only

They use an LVM backend, and glustervol should mount at /data/glusterfs. The problem does not exist on server3, because it is only a client; it connects and mounts using the same fstab rule as the other servers. I have dug through the gluster logs, SELinux, and the boot logs and cannot find a fix. I have considered CTDB and tried autofs, to no avail.

Gluster version

glusterfs 3.10.0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

fstab entries

/dev/vg_gluster/brick1 /data/bricks/brick1 xfs defaults 0 0
gluster1:/glustervol1  /data/glusterfs     glusterfs defaults,_netdev 0 0

Expected result

sdb                   LVM2_member 6QrvQI-v5L9-bds3-BUn0-ySdB-hDmz-nVojpX
└─vg_gluster-brick1   xfs         d181747c-8ed3-430c-bd1c-0b7968666dfe   /data/bricks/brick1

and

gluster1:/glustervol1   49G   33M   49G   1%   /data/glusterfs

This works when I run a manual mount -t glusterfs ... or execute mount -a against the rule in my fstab, but it does not work at boot. I have read that this is related to the mount being attempted before the daemon has started. What is the best workaround? Is it editing the systemd unit file? Does anyone know a fix?
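If systemd unit ordering is indeed the right lever, one sketch (assuming the mount point /data/glusterfs, whose generated unit would be named data-glusterfs.mount, and that glusterd runs on the same box) is a drop-in that delays the mount until glusterd and the network are up. The path and unit names below are illustrative, not taken from the post:

```ini
# Hypothetical drop-in: /etc/systemd/system/data-glusterfs.mount.d/order.conf
[Unit]
After=glusterd.service network-online.target
Wants=network-online.target
```

After creating the drop-in, a systemctl daemon-reload is needed so systemd picks it up; the next boot should then attempt the mount only after glusterd has started.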

Here is a snippet from a fresh boot attempting the mount via fstab, which indicates that no brick process is running:

[2017-04-03 16:35:47.353523] I [MSGID: 100030] [glusterfsd.c:2460:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.10.0 (args: /usr/sbin/glusterfs --volfile-server=gluster1 --volfile-id=/glustervol1 /data/glusterfs)
[2017-04-03 16:35:47.456915] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-04-03 16:35:48.711381] I [afr.c:94:fix_quorum_options] 0-glustervol1-replicate-0: reindeer: incoming qtype = none
[2017-04-03 16:35:48.711398] I [afr.c:116:fix_quorum_options] 0-glustervol1-replicate-0: reindeer: quorum_count = 0
[2017-04-03 16:35:48.712437] I [socket.c:4120:socket_init] 0-glustervol1-client-1: SSL support on the I/O path is ENABLED
[2017-04-03 16:35:48.712451] I [socket.c:4140:socket_init] 0-glustervol1-client-1: using private polling thread
[2017-04-03 16:35:48.712892] E [socket.c:4201:socket_init] 0-glustervol1-client-1: failed to open /etc/ssl/dhparam.pem, DH ciphers are disabled
[2017-04-03 16:35:48.713139] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2017-04-03 16:35:48.759228] I [socket.c:4120:socket_init] 0-glustervol1-client-0: SSL support on the I/O path is ENABLED
[2017-04-03 16:35:48.759243] I [socket.c:4140:socket_init] 0-glustervol1-client-0: using private polling thread
[2017-04-03 16:35:48.759308] E [socket.c:4201:socket_init] 0-glustervol1-client-0: failed to open /etc/ssl/dhparam.pem, DH ciphers are disabled
[2017-04-03 16:35:48.759596] W [MSGID: 101174] [graph.c:361:_log_if_unknown_option] 0-glustervol1-readdir-ahead: option 'parallel-readdir' is not recognized
[2017-04-03 16:35:48.759680] I [MSGID: 114020] [client.c:2352:notify] 0-glustervol1-client-0: parent translators are ready, attempting connect on transport
[2017-04-03 16:35:48.762408] I [MSGID: 114020] [client.c:2352:notify] 0-glustervol1-client-1: parent translators are ready, attempting connect on transport
[2017-04-03 16:35:48.904234] E [MSGID: 114058] [client-handshake.c:1538:client_query_portmap_cbk] 0-glustervol1-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2017-04-03 16:35:48.904286] I [MSGID: 114018] [client.c:2276:client_rpc_notify] 0-glustervol1-client-0: disconnected from glustervol1-client-0. Client process will keep trying to connect to glusterd until brick's port is available
Final graph:
+------------------------------------------------------------------------------+
  1: volume glustervol1-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host gluster1
  5:     option remote-subvolume /data/bricks/brick1/brick
  6:     option transport-type socket
  7:     option transport.address-family inet
  8:     option username xxx
  9:     option password xxx
 10:     option transport.socket.ssl-enabled on
 11:     option send-gids true
 12: end-volume
 13:
 14: volume glustervol1-client-1
 15:     type protocol/client
 16:     option ping-timeout 42
 17:     option remote-host gluster2
 18:     option remote-subvolume /data/bricks/brick1/brick
 19:     option transport-type socket
 20:     option transport.address-family inet
 21:     option username xxx
 22:     option password xxx
 23:     option transport.socket.ssl-enabled on
 24:     option send-gids true
 25: end-volume
 26:
 27: volume glustervol1-replicate-0
 28:     type cluster/replicate
 29:     option afr-pending-xattr glustervol1-client-0,glustervol1-client-1
 30:     option use-compound-fops off
 31:     subvolumes glustervol1-client-0 glustervol1-client-1
 32: end-volume
 33:
 34: volume glustervol1-dht
 35:     type cluster/distribute
 36:     option lock-migration off
 37:     subvolumes glustervol1-replicate-0
 38: end-volume
 39:
 40: volume glustervol1-write-behind
 41:     type performance/write-behind
 42:     subvolumes glustervol1-dht
 43: end-volume
 44:
 45: volume glustervol1-read-ahead
 46:     type performance/read-ahead
 47:     subvolumes glustervol1-write-behind
 48: end-volume
 49:
 50: volume glustervol1-readdir-ahead
 51:     type performance/readdir-ahead
 52:     option parallel-readdir off
 53:     option rda-request-size 131072
 54:     option rda-cache-limit 10MB
 55:     subvolumes glustervol1-read-ahead
 56: end-volume
 57:
 58: volume glustervol1-io-cache
 59:     type performance/io-cache
 60:     subvolumes glustervol1-readdir-ahead
 61: end-volume
 62:
 63: volume glustervol1-quick-read
 64:     type performance/quick-read
 65:     subvolumes glustervol1-io-cache
 66: end-volume
 67:
 68: volume glustervol1-open-behind
 69:     type performance/open-behind
 70:     subvolumes glustervol1-quick-read
 71: end-volume
 72:
 73: volume glustervol1-md-cache
 74:     type performance/md-cache
 75:     subvolumes glustervol1-open-behind
 76: end-volume
 77:
 78: volume glustervol1
 79:     type debug/io-stats
 80:     option log-level INFO
 81:     option latency-measurement off
 82:     option count-fop-hits off
 83:     subvolumes glustervol1-md-cache
 84: end-volume
 85:
 86: volume meta-autoload
 87:     type meta
 88:     subvolumes glustervol1
 89: end-volume
 90:
+------------------------------------------------------------------------------+
[2017-04-03 16:35:48.949500] I [rpc-clnt.c:1964:rpc_clnt_reconfig] 0-glustervol1-client-1: changing port to 49152 (from 0)
[2017-04-03 16:35:49.105087] I [socket.c:348:ssl_setup_connection] 0-glustervol1-client-1: peer CN = <name>
[2017-04-03 16:35:49.105103] I [socket.c:351:ssl_setup_connection] 0-glustervol1-client-1: SSL verification succeeded (client: <ip>:24007)
[2017-04-03 16:35:49.106999] I [MSGID: 114057] [client-handshake.c:1451:select_server_supported_programs] 0-glustervol1-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-04-03 16:35:49.109591] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-glustervol1-client-1: Connected to glustervol1-client-1, attached to remote volume '/data/bricks/brick1/brick'.
[2017-04-03 16:35:49.109609] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-glustervol1-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2017-04-03 16:35:49.109713] I [MSGID: 108005] [afr-common.c:4756:afr_notify] 0-glustervol1-replicate-0: Subvolume 'glustervol1-client-1' came back up; going online.
[2017-04-03 16:35:49.110987] I [fuse-bridge.c:4146:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2017-04-03 16:35:49.111004] I [fuse-bridge.c:4831:fuse_graph_sync] 0-fuse: switched to graph 0
[2017-04-03 16:35:49.112283] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-glustervol1-client-1: Server lk version = 1
[2017-04-03 16:35:52.547781] I [rpc-clnt.c:1964:rpc_clnt_reconfig] 0-glustervol1-client-0: changing port to 49152 (from 0)
[2017-04-03 16:35:52.558003] I [socket.c:348:ssl_setup_connection] 0-glustervol1-client-0: peer CN = <name>
[2017-04-03 16:35:52.558015] I [socket.c:351:ssl_setup_connection] 0-glustervol1-client-0: SSL verification succeeded (client: <ip>:24007)
[2017-04-03 16:35:52.558167] I [MSGID: 114057] [client-handshake.c:1451:select_server_supported_programs] 0-glustervol1-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2017-04-03 16:35:52.558592] I [MSGID: 114046] [client-handshake.c:1216:client_setvolume_cbk] 0-glustervol1-client-0: Connected to glustervol1-client-0, attached to remote volume '/data/bricks/brick1/brick'.
[2017-04-03 16:35:52.558604] I [MSGID: 114047] [client-handshake.c:1227:client_setvolume_cbk] 0-glustervol1-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2017-04-03 16:35:52.558781] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-glustervol1-client-0: Server lk version = 1

Answer 1

First, a recommended solution

Maybe you can try:

ip:/volume /dir glusterfs defaults,noauto,x-systemd.automount,x-systemd.device-timeout=30,_netdev 0 0

Reference: ArchWiki, fstab#Remote file systems

Since my OS is CentOS 6.9, which has no systemd, this does not work for me. (Maybe there is some init option; if you know one, please tell me :))
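Lacking systemd, one common (if inelegant) init-era workaround is a late-boot retry from /etc/rc.local, giving glusterd time to come up before the mount is attempted again. A sketch; the mount point, retry count, and delay below are illustrative, not from the post:

```shell
# Hypothetical /etc/rc.local fragment: retry the fstab mount a few times.
MOUNTPOINT=/data/glusterfs
for i in 1 2 3 4 5; do
    mountpoint -q "$MOUNTPOINT" && break  # stop once it is mounted
    mount "$MOUNTPOINT"                   # reuses the existing fstab entry
    sleep 5
done
```

This does not fix the ordering problem, it only papers over it, but it survives package updates that would overwrite init scripts or unit files.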

Second, the problem description

I added the rule to fstab, but glusterfs does not mount automatically after boot. Version 3.10.

If I run mount -a, the filesystem mounts fine.

Checking the log file /var/log/boot.log shows that the filesystem failed to mount.

Checking the log file /var/log/glusterfs/<your gluster volume name>.log shows that connecting to the gluster server failed (yet pinging the server works fine).

I suspect the network may not yet be ready at mount time?
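If that hunch is right, one thing worth checking on systemd-based systems (such as the asker's CentOS 7) is whether network-online.target actually waits for connectivity: _netdev mounts are ordered after it, but it does nothing unless a wait-online service is enabled. A sketch assuming NetworkManager is the network stack:

```shell
# Make network-online.target block until the network is really up
systemctl enable NetworkManager-wait-online.service
systemctl list-dependencies network-online.target   # verify it is wired in
```

On CentOS 6 with init this does not apply; there the netfs service is what mounts _netdev entries late in boot.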

Third, an inelegant solution

I have searched through many questions, blogs, and forums, but the problem remains unsolved...

Answer 2

# gluster volume set glustervol1 performance.cache-size 32MB

This sets a smaller read cache for low-memory systems.
