Automatically reset sshfs when it locks up; remounting fails unless done manually

I have 2 local directories on my server mounted on a remote server. The local side keeps the connection alive with autossh -M 23 -R 24:localhost:22 user@server, while the remote side mounts them with dirs-sshfs.sh:

sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,cache_timeout=3600 [email protected]:/mnt/localdir1/ /home/user/dir1/ -p 24
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,cache_timeout=3600 [email protected]:/mnt/localdir2/ /home/user/dir2/ -p 24

When it locks up, sshfs-restart.sh resets the connection:

pkill -kill -f "sshfs"
fusermount -uz dir1
fusermount -uz dir2
./dirs-sshfs.sh
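One thing worth ruling out when the remount silently fails: fusermount -uz is a lazy unmount and returns immediately, so dirs-sshfs.sh may run while the old mountpoint is still held. Below is a hedged sketch of sshfs-restart.sh that polls with mountpoint before remounting; the kill/unmount/remount steps are from the question, while the exact-name pkill match and the polling loop are assumptions, not the asker's script:

```shell
#!/bin/sh
# Sketch: same steps as sshfs-restart.sh above, but wait for each
# lazy unmount to actually finish before remounting.

# -x matches the exact process name (assumption: safer than -f "sshfs",
# which also matches any command line containing that string)
pkill -KILL -x sshfs

for d in dir1 dir2; do
    fusermount -uz "$d" 2>/dev/null
    # fusermount -uz returns immediately; poll until the kernel has
    # released the mountpoint before mounting over it again
    while mountpoint -q "$d"; do
        sleep 1
    done
done

./dirs-sshfs.sh
```

If the remount works when run by hand but not from the script, timing like this is a common culprit.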

Everything works, but I have to 1) notice that it has locked up and 2) reset it manually.

The tricky part is that when it locks up, even an ls of the home directory hangs indefinitely until it is reset. Because of that, I gave up on managing this from the remote server side. On my local server side, where the autossh connection is maintained, I have the following script that almost works. It catches the failure and tries to reset the connection, but it never remounts after unmounting. Putting the contents of dirs-sshfs.sh into sshfs-restart.sh did not work, and neither did having the remote server side run run-sshfs-restart.sh, which contained ./sshfs-restart.sh &. The test script (testsshfs2.sh) contains ls; echo; ls dir1; echo; ls dir2; echo; date; echo; it was created as a quick, simple check that everything is mounted and working.
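Written out as a file, that one-liner would look something like the sketch below (it assumes dir1 and dir2 sit directly in the remote home directory, as in the mounts above):

```shell
#!/bin/sh
# testsshfs2.sh: list the home dir and both mounted dirs, then print
# the date -- a quick way to see at a glance whether both sshfs
# mounts are responding. Any one of these ls calls will hang if its
# mount is locked up.
cd "$HOME" || exit 1
ls; echo
ls dir1; echo
ls dir2; echo
date; echo
```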

Here is sshfsmanager.sh, run from the local server inside a while loop with a sleep command. It may be moved to a cronjob in the future:

sshout=$(timeout 300 ssh user@host ./testsshfs2.sh)
# The timeout duration probably doesn't need to be that long, but sometimes it does take 90+ sec
# to ls the remote dirs, as they are network shares mounted to a VM that are then mounted across
# an ssh tunnel. Once this is working, that duration would be fine-tuned. Also, 100 is just an
# arbitrary number larger than ls | wc -l would provide. It should be updated so it catches when
# either of the mounts fails instead of only when both do. This timeout method is the only way
# I've found to get around the infinite ls lock.

if [ $(echo "$sshout" | wc -l) -le 100 ]; then
  echo "$sshout" | wc -l
  ssh user@host ./sshfs-restart.sh
  #adding a "&& sleep 60" or using the run-sshfs-restart.sh script did not work

  echo sshfs restarted
else
  echo sshfs is fine
  echo "$sshout" | wc -l
fi
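On the detection side, the comment in the script already notes that the single line-count threshold only fires when both mounts die. A hedged per-directory variant is sketched below; user@host, dir1/dir2, and sshfs-restart.sh are from the question, while the 90-second timeout and the BatchMode/ConnectTimeout ssh options are assumptions, added so a dead connection fails fast instead of hanging on a prompt:

```shell
#!/bin/sh
# Probe each mounted dir separately, so one hung mount is enough to
# trigger a restart. Assumes key-based ssh auth (BatchMode refuses
# to fall back to a password prompt).
needs_restart=0
for d in dir1 dir2; do
    if ! timeout 90 ssh -o BatchMode=yes -o ConnectTimeout=10 \
            user@host "ls '$d' > /dev/null"; then
        echo "$d did not respond"
        needs_restart=1
    fi
done

if [ "$needs_restart" -eq 1 ]; then
    ssh -o BatchMode=yes user@host ./sshfs-restart.sh
    echo sshfs restarted
else
    echo sshfs is fine
fi
```

Per-directory probes also make the log more useful, since it records which of the two mounts went down.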

Most of the scripts' logging was stripped before posting here (along with changing ports and removing user/host). Most of the logging lines were just date >> sshfsmanager.log.

The local VM runs Ubuntu 18.04.5; the remote server is a shared VPS running Gentoo 5.10.27.
