I have a long-running command being executed over SSH. Every so often, the command seems to hang and I stop getting any output. The command is still running on the remote machine (in fact, if I Ctrl+C out of the SSH command, it keeps running, which is fine); it's just my client that hangs.
My command runs an ant target, like this:
ssh -tt user@machine 'cd /thedir; export DISPLAY=:80; ant clean test'
The ant command can take up to an hour.
I imagine this is a fragile strategy, since any network blip could break my connection. Can anyone suggest an alternative approach? My goals are:
- My client waits for the command to finish
- I still receive output from the command
- If a network blip occurs, I can still check on and wait for the command to finish
Any help or suggestions are greatly appreciated.
EDIT: I am running a full bash script... I don't need a one-liner or anything like that.
Answer 1
You could try running the command in the background (detached from your user session) and sending yourself the result via email. Kinda like so:
ssh -tt user@machine 'cd /thedir; nohup ant clean test 2>&1 | mail -s "The cleanup was completed." you@example.com'
I haven't tested this though.
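If you are already driving this from a bash script (as you mention in your edit), a variation on the same idea is to capture the build output to a log file on the remote machine and send the notification from a small wrapper script there, so the job survives a disconnect and the full log is still around if the mail never arrives. This is an untested sketch; the script path, log path, subject, and address are all placeholders:

#!/bin/bash
# /thedir/run-build.sh on the remote machine (placeholder path)
cd /thedir || exit 1
export DISPLAY=:80
ant clean test > /tmp/ant-build.log 2>&1        # capture all build output
status=$?
mail -s "ant clean test finished (exit $status)" you@example.com < /tmp/ant-build.log

# from the client: launch it detached so it keeps running if the connection drops
ssh user@machine 'nohup /thedir/run-build.sh > /dev/null 2>&1 &'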
Answer 2
My recommendation is to use a combination of autossh and screen. autossh automatically reconnects SSH if the session gets disconnected, and screen lets your program run independently of the SSH session, so you can reattach to it while it is still running.
That way, when your network hiccups, autossh reconnects and screen lets you reattach to your script's shell.
localhost>$ autossh -t remotehost 'screen -Rd'
remotehost>$ scriptname
When the SSH connection dies, autossh reconnects and screen -Rd automatically reattaches the existing screen session.
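To tie this back to your ant invocation, you can run the build inside the screen session and tee its output into a log file, so you see it live and can also review anything you missed after reattaching. A rough, untested sketch (ant.log is just an example path):

localhost>$ autossh -t user@machine 'screen -Rd'
remotehost>$ cd /thedir && export DISPLAY=:80 && ant clean test 2>&1 | tee ant.log

If the connection drops mid-build, rerun the autossh line; screen -Rd reattaches the detached session with ant still running, and ant.log holds the output you missed.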
http://noone.org/blog/English/Computer/Shell/Perfect%20Team:%20autossh%20and%20GNU%20Screen.html