I am trying to bring up a glusterfs cluster on two nodes. The only possibly unusual part of my setup is that the two nodes are connected to each other with a crossover cable, so there is no switch between them. The private IP addresses are configured on each node as shown below. Note that the 10.0.0.0/24 network has no default gateway, which may be part of my trouble:
[idf@node1 ~]$ uname -a
Linux node1.synctrading 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 27 03:04:26 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[idf@node1 ~]$
[idf@node2 ~]$ uname -a
Linux node2.synctrading 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[idf@node2 ~]$
Each node has a "config" directory, but I don't remember touching any of the following files:
[idf@node1 ~]$ ls /etc/glusterfs/
glusterd.vol glusterfs-logrotate gluster-rsyslog-7.2.conf logger.conf.example
glusterfs-georep-logrotate gluster-rsyslog-5.8.conf group-virt.example
[idf@node1 ~]$
On each node I have a drive mounted to serve as my "brick":
[idf@node1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 6.9G 44G 14% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 140K 3.9G 1% /dev/shm
tmpfs 3.9G 9.1M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sdb 932G 33M 932G 1% /data/brick1
/dev/mapper/centos-home 408G 4.9G 403G 2% /home
/dev/sda1 497M 162M 336M 33% /boot
10.0.0.61:/var/nfsshare 50G 43G 7.7G 85% /mnt/nfs/var/nfsshare
[idf@node2 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 43G 7.7G 85% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 140K 3.9G 1% /dev/shm
tmpfs 3.9G 9.1M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sdb 932G 33M 932G 1% /data/brick1
/dev/mapper/centos-home 408G 17G 391G 5% /home
/dev/sda1 497M 133M 365M 27% /boot
[idf@node2 ~]$
My network configuration is as follows:
[idf@node1 ~]$ more /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
IPADDR0=10.0.0.60
NETMASK=255.255.255.0
PREFIX0=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
DEVICE=enp1s0f1
ONBOOT=yes
[idf@node2 ~]$ more /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
IPADDR0=10.0.0.61
NETMASK=255.255.255.0
PREFIX0=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
DEVICE=enp1s0f1
ONBOOT=yes
My hosts files look like this:
[idf@node1 ~]$ more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 glusterfs2
10.0.0.61 glusterfs1
10.0.0.60 glusterfs2
[idf@node2 ~]$ more /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 glusterfs1
10.0.0.61 glusterfs1
10.0.0.60 glusterfs2
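For comparison, a hosts file without the extra loopback alias for the local gluster name would look like the fragment below (shown for node1; this assumes the `127.0.0.1 glusterfs2` line is not needed by anything else):

```
# /etc/hosts on node1 (sketch): the gluster names resolve only to the
# private interconnect addresses, never to 127.0.0.1
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.61   glusterfs1
10.0.0.60   glusterfs2
```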
I have added these firewall rules on each node:
sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp -s 10.0.0.0/24 --dport 111 -j ACCEPT
sudo iptables -A INPUT -m state --state NEW -m udp -p udp -s 10.0.0.0/24 --dport 111 -j ACCEPT
sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp -s 10.0.0.0/24 --dport 2049 -j ACCEPT
sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp -s 10.0.0.0/24 --dport 24007 -j ACCEPT
sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp -s 10.0.0.0/24 --dport 38465:38469 -j ACCEPT
sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp -s 10.0.0.0/24 --dport 49152 -j ACCEPT
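Note that rules appended with `iptables -A` this way do not survive a reboot, and on CentOS 7 firewalld (if it is running) owns the chains and can override them. A hedged sketch of the firewalld equivalent, assuming firewalld is active and mirroring the port list above:

```shell
# firewalld equivalents of the iptables rules above (CentOS 7).
# --permanent writes the configuration; --reload applies it.
sudo firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
sudo firewall-cmd --permanent --add-port=2049/tcp --add-port=24007/tcp
sudo firewall-cmd --permanent --add-port=38465-38469/tcp --add-port=49152/tcp
sudo firewall-cmd --reload
```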
The output of `ip route`:
[idf@node2 ~]$ ip route
default via xxx.xxx.xx.xx dev enp1s0f0 proto static metric 100
10.0.0.0/24 dev enp1s0f1 proto kernel scope link src 10.0.0.61 metric 100
192.168.0.0/24 dev ib0 proto kernel scope link src 192.168.0.1 metric 150
[idf@node2 ~]$
[idf@node1 ~]$ ip route
default via xxx.xxx.xx.xx dev enp1s0f0 proto static metric 100
10.0.0.0/24 dev enp1s0f1 proto kernel scope link src 10.0.0.60 metric 100
192.168.0.0/24 dev ib0 proto kernel scope link src 192.168.0.2 metric 150
[idf@node1 ~]$
The contents of /etc/sysconfig/network are effectively empty (only the anaconda comment):
[idf@node1 ~]$ sudo more /etc/sysconfig/network
# Created by anaconda
[idf@node1 ~]$
When I run:
[idf@node1 ~]$ service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Sat 2015-05-09 23:32:06 EDT; 26min ago
Process: 5561 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 5562 (glusterd)
CGroup: /system.slice/glusterd.service
└─5562 /usr/sbin/glusterd -p /var/run/glusterd.pid
[idf@node1 ~]$ sudo gluster peer probe glusterfs1
peer probe: failed: Probe returned with unknown errno 107
[idf@node1 ~]$
The contents of the log file under /var/log/glusterfs:
[idf@node1 ~]$ sudo tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
[2015-05-10 04:22:43.599679] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=588 max=0 total=0
[2015-05-10 04:22:43.599693] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=124 max=0 total=0
[2015-05-10 04:28:53.944473] I [glusterd-handler.c:1015:__glusterd_handle_cli_probe] 0-glusterd: Received CLI probe req node2 24007
[2015-05-10 04:28:53.945762] I [glusterd-handler.c:3165:glusterd_probe_begin] 0-glusterd: Unable to find peerinfo for host: node2 (24007)
[2015-05-10 04:28:53.985030] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2015-05-10 04:28:53.990323] I [glusterd-handler.c:3098:glusterd_friend_add] 0-management: connect returned 0
**[2015-05-10 04:28:53.990605] E [socket.c:2276:socket_connect_finish] 0-management: connection to 10.0.0.61:24007 failed (No route to host)**
[2015-05-10 04:28:53.990665] I [MSGID: 106004] [glusterd-handler.c:4365:__glusterd_peer_rpc_notify] 0-management: Peer 00000000-0000-0000-0000-000000000000, in Establishing Connection state, has disconnected from glusterd.
[2015-05-10 04:28:53.990849] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=588 max=0 total=0
[2015-05-10 04:28:53.990867] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=124 max=0 total=0
I get this error on both nodes, and I am not sure what I am doing wrong. Could it be that I have not set a default gateway for the 10.0.0.0/24 network?
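For reference, the checks I can think of running from node1 look like this (a sketch; `glusterfs1` and port 24007 are taken from the output above):

```shell
# Basic reachability over the crossover link
ping -c 2 glusterfs1
# Does anything answer on the glusterd management port?
nc -zv glusterfs1 24007
# Show the effective INPUT chain with rule numbers, to see whether a
# REJECT rule (e.g. CentOS's icmp-host-prohibited) precedes the ACCEPTs
sudo iptables -L INPUT -n --line-numbers
```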