GlusterFS is a fine distributed filesystem, but there is almost no way to monitor its integrity. Servers may come and go, bricks may wear out or fail, and I am afraid that by the time I find out about it, it may already be too late.
Recently we had a strange failure where everything appeared to be working, but one brick had fallen out of the volume (discovered by pure coincidence).
Is there a simple, reliable way (a cron script?) to keep an eye on the health of my GlusterFS 3.2 volumes?
Answer 1
This has been a long-standing request to the GlusterFS developers, and there is no ready-made solution available yet. It is not impossible with a few scripts, though.
Almost the entire Gluster system is managed through the single gluster command, and with a few of its options you can write your own health-monitoring scripts. See here for listing information about bricks and volumes -- http://gluster.org/community/documentation/index.php/Gluster_3.2:_Displaying_Volume_Information
To monitor performance, have a look at this link -- http://gluster.org/community/documentation/index.php/Gluster_3.2:_Monitoring_your_GlusterFS_Workload
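For example, a rough, cron-able sketch of both ideas could look like the following; it is not taken from those pages, the volume name myvol is a placeholder, and you should check the profile commands against your 3.2 CLI before relying on them:
#!/bin/bash
# Hedged sketch: flag any volume that is not in the "Started" state, then dump
# workload statistics for one volume. Assumes the gluster CLI is in PATH and
# that this runs as root (e.g. from cron); "myvol" is a placeholder name.
for vol in $(gluster volume info all | awk '/^Volume Name:/ {print $3}'); do
    status=$(gluster volume info "$vol" | awk '/^Status:/ {print $2}')
    if [ "$status" != "Started" ]; then
        echo "volume $vol is in state '$status'"
        exit 2
    fi
done

# Performance side: profiling must be started once per volume, after which
# "profile ... info" prints per-brick latency and throughput figures.
gluster volume profile myvol start 2>/dev/null   # ignore "already started" noise
gluster volume profile myvol info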
Update: please consider upgrading to http://gluster.org/community/documentation/index.php/About_GlusterFS_3.3
It is always better to be on the latest release, since more bugs seem to be fixed and it is well supported. Of course, do your own testing before migrating to the newer version -- http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/ :)
Chapter 10 of the administration guide has a section dedicated to monitoring a GlusterFS 3.3 installation -- http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf
See here for another Nagios script -- http://code.google.com/p/glusterfs-status/
Answer 2
There is a Nagios plugin available for monitoring. You may have to edit it to match your version, though.
Answer 3
Take a look at the script attached to https://www.gluster.org/pipermail/gluster-users/2012-June/010709.html. It works with gluster 3.3 and can probably be adapted to gluster 3.2 without much effort.
#!/bin/bash
# This Nagios script was written against version 3.3 of Gluster. Older
# versions will most likely not work at all with this monitoring script.
#
# Gluster currently requires elevated permissions to do anything. In order to
# accommodate this, you need to allow your Nagios user some additional
# permissions via sudo. The line you want to add will look something like the
# following in /etc/sudoers (or something equivalent):
#
# Defaults:nagios !requiretty
# nagios ALL=(root) NOPASSWD:/usr/sbin/gluster peer status,/usr/sbin/gluster volume list,/usr/sbin/gluster volume heal [[\:graph\:]]* info
#
# That should give us all the access we need to check the status of any
# currently defined peers and volumes.
# define some variables
ME=$(basename -- "$0")
SUDO="/usr/bin/sudo"
PIDOF="/sbin/pidof"
GLUSTER="/usr/sbin/gluster"
PEERSTATUS="peer status"
VOLLIST="volume list"
VOLHEAL1="volume heal"
VOLHEAL2="info"
peererror=
volerror=
# check for commands
for cmd in $SUDO $PIDOF $GLUSTER; do
    if [ ! -x "$cmd" ]; then
        echo "$ME UNKNOWN - $cmd not found"
        exit 3
    fi
done
# check for glusterd (management daemon)
if ! $PIDOF glusterd &>/dev/null; then
    echo "$ME CRITICAL - glusterd management daemon not running"
    exit 2
fi
# check for glusterfsd (brick daemon)
if ! $PIDOF glusterfsd &>/dev/null; then
    echo "$ME CRITICAL - glusterfsd brick daemon not running"
    exit 2
fi
# get peer status
peerstatus="peers: "
for peer in $(sudo $GLUSTER $PEERSTATUS | grep '^Hostname: ' | awk '{print $2}'); do
state=
state=$(sudo $GLUSTER $PEERSTATUS | grep -A 2 "^Hostname: $peer$" | grep '^State: ' | sed -nre 's/.* \(([[:graph:]]+)\)$/\1/p')
if [ "$state" != "Connected" ]; then
peererror=1
fi
peerstatus+="$peer/$state "
done
# get volume status
volstatus="volumes: "
for vol in $(sudo $GLUSTER $VOLLIST); do
thisvolerror=0
entries=
for entries in $(sudo $GLUSTER $VOLHEAL1 $vol $VOLHEAL2 | grep '^Number of entries: ' | awk '{print $4}'); do
if [ "$entries" -gt 0 ]; then
volerror=1
let $((thisvolerror+=entries))
fi
done
volstatus+="$vol/$thisvolerror unsynchronized entries "
done
# drop extra space
peerstatus=${peerstatus:0:${#peerstatus}-1}
volstatus=${volstatus:0:${#volstatus}-1}
# set status according to whether any errors occurred
if [ "$peererror" ] || [ "$volerror" ]; then
status="CRITICAL"
else
status="OK"
fi
# actual Nagios output
echo "$ME $status $peerstatus $volstatus"
# exit with appropriate value
if [ "$peererror" ] || [ "$volerror" ]; then
exit 2
else
exit 0
fi
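If you use this, it is worth confirming that the sudoers entries from the script's header actually let the monitoring user run the gluster commands. A quick manual test might look like the following; the install path is only an example, not something prescribed by the script:
# Hedged example: run the check once as the nagios user to confirm the sudo rules.
# The plugin path is an assumption; use wherever your Nagios plugins actually live.
sudo -u nagios /usr/lib/nagios/plugins/check_glusterfs.sh
echo "exit status: $?"   # 0 = OK, 2 = CRITICAL, 3 = UNKNOWN, matching the script above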
Answer 4
@Arie Skliarouk, your check_gluster.sh has a typo: in the last line you grep for exitst instead of exist. I went ahead and rewrote it to be a bit more compact and to remove the need for a temporary file.
#!/bin/bash
# Ensure that all peers are connected
gluster peer status | grep -q Disconnected && echo "Peer disconnected." && exit 1
# Ensure that all bricks have a running log file (i.e., are sending/receiving)
for vol in $(gluster volume list); do
    for brick in $(gluster volume info "$vol" | awk '/^Brick[0-9]*:/ {print $2}'); do
        # Check each brick individually so $vol/$brick are still in scope when we
        # report; piping the whole loop into grep would lose them to a subshell.
        if gluster volume log locate "$vol" "$brick" | grep -qE "does not (exist|exitst)"; then
            echo "Log file missing - $vol/$brick ."
            exit 1
        fi
    done
done
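Since the original question asked for something cron-driven, either script can also be run straight from cron. A sketch of an /etc/cron.d entry follows; the script path and mail address are placeholders I made up, and it assumes a mail command is installed:
# Hedged sketch of an /etc/cron.d entry: run the check every 10 minutes and
# send mail only when it exits non-zero. Path and address are placeholders.
*/10 * * * * root /usr/local/sbin/check_gluster.sh || echo "GlusterFS check failed on $(hostname)" | mail -s "GlusterFS alert" admin@example.com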