Git: How to kill ssh-agent properly on Linux

I am using Git on Linux. When pushing to GitLab, it sometimes gets stuck at either:
debug1: Connecting to gitlab.com [52.167.219.168] port 22.
or
debug1: client_input_channel_req: channel 0 rtype keepalive#openssh.com reply 1
debug3: send packet: type 100
Restarting Linux seems to solve it, but nobody likes to reboot machines.
So I am trying to kill the ssh-agent process and then restart it.
But the process always becomes defunct after the kill, and then I can't use Git over SSH at all. Is there any way to restart ssh-agent, or otherwise solve the issue described above, without restarting the machine?
Update:
The SSH keys I use have passphrases, which I enter the first time each key is used.
The issue usually occurs after I bring the Linux desktop back from sleep, so the network has just reconnected; I'm not sure whether this matters.
Again, does anyone know how to kill or restart ssh-agent without it becoming defunct?

You can kill ssh-agent by running:
eval "$(ssh-agent -k)"

You can try this bash script, which starts an agent at login (in .bash_profile) and terminates it at logout (in .logout):
#!/bin/bash
## in .bash_profile: start an agent if none is available
SSHAGENT=$(which ssh-agent)
SSHAGENTARGS="-s"
if [ -z "$SSH_AUTH_SOCK" ] && [ -x "$SSHAGENT" ]; then
    eval "$($SSHAGENT $SSHAGENTARGS)"
    trap "kill $SSH_AGENT_PID" 0        # kill the agent when the login shell exits
fi

## in .logout: remove loaded identities and shut the agent down
if [ "${SSH_AGENT_PID+1}" = "1" ]; then
    ssh-add -D
    ssh-agent -k > /dev/null 2>&1
    unset SSH_AGENT_PID
    unset SSH_AUTH_SOCK
fi

Many ways:
killall ssh-agent
SSH_AGENT_PID="$(pidof ssh-agent)" ssh-agent -k
kill -9 $(pidof ssh-agent)
pidof is from the procps project; your distribution most likely packages it.
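If several stale agents have piled up, a per-user sweep is another option; a sketch, assuming pkill (also part of procps) is available on your distro:
pkill -u "$USER" ssh-agent            # kill every ssh-agent owned by the current user
eval "$(ssh-agent -s)" && ssh-add     # then start a fresh one and reload the keys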

Yes, ssh-agent might show as defunct: [ssh-agent] <defunct>
Trying to kill the agent could help:
eval "$(ssh-agent -k)"
But also check your keyring process (e.g. gnome-keyring-daemon): restart it, or even remove the SSH socket file:
rm /run/user/$UID/keyring/ssh
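A quick way to see which agent you are actually talking to is to inspect the socket variable; if it points into /run/user/<uid>/keyring/, the agent is provided by gnome-keyring rather than a standalone ssh-agent:
echo "$SSH_AUTH_SOCK"   # e.g. /run/user/1000/keyring/ssh when gnome-keyring acts as the agent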

It shows as defunct probably because its parent process has not yet reaped it, so its entry stays in the process table. That's not a big deal; the process is dead. Just start a new ssh-agent:
eval $(ssh-agent)
ssh-add

Related

ssh connections are spawning hundreds of ssh-agent /bin/bash instances

I have an Arch server running on a VMware VM. I connect to it through a firewall that forwards SSH connections from port X to port 22 on the server. Yesterday I started receiving the error "Bash: Fork: Resource Temporarily Unavailable". I can log in as root and manage things without problem, but when I ssh in as the user I normally use, the session now spawns hundreds of ssh-agent /bin/bash sessions. This, in turn, uses up all the threads and file descriptors (from what I can tell) on the system and makes it unusable. The little bit of info I've been able to find thus far tells me that I must have some sort of loop, but this hadn't happened until yesterday, possibly when I ran updates. At this point, I am open to suggestions.
One of your shell initialization files is probably spawning a shell which, when it reads the shell initialization files, spawns another shell, and so on.
You mentioned ssh-agent /bin/bash. Putting this in .bashrc will definitely cause problems, as it instructs ssh-agent to spawn bash...
Instead, use something like
if [[ -z "$SSH_AUTH_SOCK" ]]; then
    eval "$(ssh-agent)"
fi
in .bashrc (or .xinitrc or .xsession for systems with graphical logins).
Or possibly (untested):
if [[ -z "$SSH_AUTH_SOCK" ]]; then
    ssh-agent /bin/bash
fi
in .bash_profile.
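An alternative guard (a sketch, not from the original answer) is to test whether a reachable agent already exists instead of relying only on SSH_AUTH_SOCK; ssh-add -l exits with status 2 when it cannot contact an agent:
ssh-add -l >/dev/null 2>&1
if [ "$?" -eq 2 ]; then        # 2 = no agent reachable; 0 or 1 = an agent is already running
    eval "$(ssh-agent -s)"
fi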
In my case (Windows) it was because I was not exiting shells when I was done with them, so the agents were never disposed of.
When you are done, use Ctrl+D or type exit to terminate the SSH agent.

autossh exits when using the -f option

When running autossh without the "-f" option, everything works fine.
Adding the "-f" option indeed sends autossh to the background, but after the ssh tunneling established correctly the autossh itself exit, leaving the ssh connection without monitor.
Here is the command I'm running: autossh -f -M 20000 -N -L 0.0.0.0:5601:10.10.0.8:5601 10.10.0.8
Does anyone know what can cause this problem? Alternatively, does anyone know how I can debug autossh when using "-f"? (Is there a log file produced when using AUTOSSH_DEBUG=1?)
(I'm running on Ubuntu 14.04)
Seeing as no one has a better suggestion... Try running autossh under a watchdog like daemontools. With this method autossh runs as a foreground child of a supervise daemon (so, no -f switch). You can start and stop it with the svc command, and you can log all of its output with multilog.
This method has proven sufficiently reliable for me, on a fleet of production boxes.
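For reference, a minimal daemontools run script for this setup might look roughly like the sketch below (the service directory layout is an assumption; the ports and hosts are just the ones from the question):
#!/bin/sh
# /service/autossh/run - supervise keeps this in the foreground and restarts it if it dies
exec 2>&1
exec autossh -M 20000 -N -L 0.0.0.0:5601:10.10.0.8:5601 10.10.0.8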
On macOS I had a problem where autossh -M 0 -R 1234:localhost:22 worked but adding -f to make autossh run in background would log the following and autossh would die instantly:
2018/04/10 12:00:06 autossh[67839]: ssh exited with status 0; autossh exiting
Adding -N ("Do not execute a remote command.") fixed the issue:
autossh -f -M 0 -N -R 1234:localhost:22
Seeing you already had -N in the command this is probably unrelated but possibly helpful to others.

How to emit a "beep" on my computer while running a script on a remote machine?

I run a long script on a remote machine and I would like to hear a beep when the script ends. On my machine I can add at the end of the script:
echo -e '\a' > /dev/console
but this does not work on the remote machine, which complains:
-bash: /dev/console: Permission denied
How can I achieve this?
You could run the script by passing it as a parameter to ssh and then echo the beep locally:
ssh user@host /path/to/script; echo -e '\a' > /dev/console
Perhaps you might use /dev/tty instead of /dev/console. (I don't know how ssh handles beeps, so maybe you should start a terminal emulator, e.g. ssh -X -f remotehost xterm.)
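Putting the two suggestions together, a sketch that runs the remote script and then beeps on the local terminal device (user@host and the script path are placeholders):
ssh user@host '/path/to/script'; printf '\a' > /dev/tty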

SSH: guarding stdout against disconnect

My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while the install.sh is running. (This is easily simulated by closing the terminal window.) I would really like for the install.sh script to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
install.sh 2>&1 | tee install.out
if the only issue is not getting stderr. You didn't say exactly why the tee wasn't acceptable. You may need the other nohup/stdin tweaks.

ssh-agent with passwords without spawning too many processes

I use ssh-agent with password-protected keys on Linux. Every time I log into a certain machine, I do this:
eval `ssh-agent` && ssh-add
This works well enough, but every time I log in and do this, I create another ssh-agent. Once in a while, I will do a killall ssh-agent to reap them. Is there a simple way to reuse the same ssh-agent process across different sessions?
Have a look at Keychain. It was written by people in a similar situation to yourself.
Keychain
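A typical invocation in .bash_profile looks roughly like this (a sketch, assuming the keychain tool is installed and your key is ~/.ssh/id_rsa):
eval "$(keychain --eval id_rsa)"   # reuse an existing agent if one is running, otherwise start one and prompt for the passphrase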
How much control do you have over this machine? One answer would be to run ssh-agent as a daemon process. Other options boil down to testing whether an agent is already around and starting one if it's not.
To reproduce one of the ideas here:
SSH_ENV="$HOME/.ssh/environment"

function start_agent {
    echo "Initialising new SSH agent..."
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
    echo succeeded
    chmod 600 "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add;
}

# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
    . "${SSH_ENV}" > /dev/null
    # ps ${SSH_AGENT_PID} doesn't work under Cygwin
    ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
        start_agent;
    }
else
    start_agent;
fi
You can do:
ssh-agent $SHELL
This will cause ssh-agent to exit when the shell exits. They still won't be shared across sessions, but at least they will go away when you do.
Depending on which shell you use, you can set different profiles for login shells and mere regular new shells. In general you want to start ssh-agent for login shells, but not for every subshell. In bash these files would be .bashrc and .bash_login, for example.
Most desktop Linux distributions these days run ssh-agent for you. You just add your key with ssh-add, and then forward the agent to remote SSH sessions by running
ssh -A
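Instead of typing -A every time, agent forwarding can also be enabled per host in ~/.ssh/config ("myhost" is a placeholder):
Host myhost
    ForwardAgent yes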
