How do I automate two layers of SSH plus a docker exec? - linux

I do this multiple times a day. Any clues on automating it, so that I can run one command to get all the way to the logs? There are two SSH hops and then a docker exec.
➜ ~ ssh host
Last login: Tue Jun 27 15:44:11 2017 from 10.82.34.63
$ ssh another-host
Last login: Tue Jun 27 15:44:18 2017 from host
$ docker exec -it app-container bash
[root@app-container opt]# tail -f tomcat/logs/catalina.out

We can take advantage of ProxyCommand in OpenSSH for the first part (jumping through a proxy host to SSH to others). An example for your ~/.ssh/config would look like:
Host another-host
ProxyCommand ssh -W %h:%p host
HostName another-host
If all the hosts that you are proxying through happened to be in the same domain you could catch a bunch of them with a wildcard:
Host jumphost
Hostname host.mydomain
Host *.mydomain
ProxyCommand ssh -W %h:%p jumphost
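On newer OpenSSH (7.3 and later) the same effect is available more compactly with the ProxyJump option; a sketch assuming the same host names as above:

```
Host *.mydomain
    ProxyJump jumphost
```

For a one-off you can also use the equivalent command-line flag: ssh -J jumphost another-host.mydomain.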
For the second part, there is no need to exec into the container with a shell before using a command. Doing docker exec -it app-container tail -f tomcat/logs/catalina.out is perfectly valid.
Combined with the SSH configuration, you can allocate a pseudo TTY (-t) and then just do one command:
ssh -t another-host docker exec -it app-container tail -f tomcat/logs/catalina.out
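That one-shot command can be wrapped in a tiny script so a single invocation gets you to the logs. A minimal sketch, reusing the host, container, and log path from the question; the `echo` makes it a dry run, so replace it with `exec` to actually connect:

```shell
#!/bin/sh
# Hypothetical wrapper around the one-shot command above.
host=another-host                  # reached via ProxyCommand in ~/.ssh/config
container=app-container
logfile=tomcat/logs/catalina.out

cmd="ssh -t $host docker exec -it $container tail -f $logfile"
echo "$cmd"   # dry run; replace with: exec $cmd
```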

This is at least a partial answer for the ssh part. Look at the ssh usage output:
ssh (.... lots of options ....) [user@]hostname [command]
So, there's an optional command at the end of the argument list, after the required hostname. This works just as you would expect: you can "chain" another ssh command here that is executed remotely:
ssh host ssh another-host
will do.
Note that ssh will not allocate a tty in this case, so you will not get an interactive session. But of course, you can give this second ssh something to execute as well:
ssh host ssh another-host docker exec [...]
For the last part, I just looked up the docker documentation. The option -t requires a tty, so you should leave it out. Then you should be able to execute whatever you like in your container, as long as it's nothing interactive:
ssh host ssh another-host docker exec -i app-container tail -f tomcat/logs/catalina.out
Of course, for full automation, use SSH keys and have an SSH agent running with your key added.
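A sketch of that one-time key setup; the file names are examples, not from the original post, and the ssh-copy-id / ssh-add steps are commented out because they need the real hosts:

```shell
#!/bin/sh
# Generate a passphrase-less key (demo path; normally ~/.ssh/id_ed25519).
set -e
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/id_ed25519" -q

# Install the public key on each hop (run once per host):
#   ssh-copy-id -i "$keydir/id_ed25519.pub" host
#   ssh-copy-id -i "$keydir/id_ed25519.pub" another-host
# Load the key into an agent for the session:
#   eval "$(ssh-agent -s)" && ssh-add "$keydir/id_ed25519"

ls "$keydir"
```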

Related

Bash script SSH command variable interpolation

First: I have searched the forum and also went through documentation, but still cannot get it right.
So, I have a docker command I want to run on a remote server, from my bash script. I want to pass an environment variable – on the local machine running the script – to the remote server. Furthermore, I need a response from the remote command.
Here is what I actually am trying to do and what I need: the script is a tiny wrapper around our Traefik/Docker/Elixir/Phoenix app setup to be able to connect easily to the running Elixir application, inside the Erlang observer. With the script, the steps would be:
ssh into the remote machine
docker ps to see all running containers, since in our blue/green deploy the active one changes name
docker exec into the correct container
execute a command inside the docker container to connect to the running Elixir application
The command I am using now is:
CONTAINER=$(ssh -q $USER@$IP 'sudo docker ps --format "{{.Names}}" | grep ""$APP_NAME"" | head -n 1')
The main problem is the part with the grep and the env var: it is empty and does not get replaced. That makes sense, since the variable does not exist on the remote machine; it only exists on my local machine. I tried single quotes, $(), ... Either it just does not work, or the solutions I find online execute the command but give me no way of getting the container name, which I need for the subsequent command:
ssh -o 'RequestTTY force' $USER@$IP "sudo docker exec -i -t $CONTAINER /bin/bash -c './bin/app remote'"
Thanks for your input!
First, are you sure you need to call sudo docker stop? Stopping the containers did not seem to be part of the workflow you mentioned. [edit: no longer applicable]
Basically, you use a double-double-quote, grep ""$APP_NAME"", but the variable is not substituted (as the whole command 'sudo docker ps …' is single-quoted). According to your question, this variable is available locally but not on the remote machine, so you may try writing:
CONTAINER=$(ssh -q $USER@$IP 'f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "'"$APP_NAME"'"')
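The reason this works: the single-quoted part is sent to the remote shell verbatim, while the `"'"$APP_NAME"'"` sequence closes the single quotes, expands the variable locally inside double quotes, and reopens them, so the value arrives as the remote function's `$1`. A local sketch of the same splicing, with `bash -c` standing in for `ssh` and made-up names:

```shell
#!/bin/bash
APP_NAME=myapp   # exists only on the "local" side

# bash -c plays the role of the remote shell: the single-quoted body is
# passed through untouched, and "$APP_NAME" is spliced in locally as the
# argument to f.
result=$(bash -c 'f() { printf "container-%s\n" "$1"; }; f "'"$APP_NAME"'"')
echo "$result"
```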
You can try this single command:
ssh -t $USER@$IP "docker exec -it \$(docker ps -a -q --filter name=$APP_NAME) bash -c './bin/app remote'"
You can redirect the command into ssh with a here-string (<<<); use double quotes around it so the local environment variable (APP_NAME) is expanded before the command is sent:
CONTAINER=$(ssh -q $USER@$IP <<< "sudo docker ps --format '{{.Names}}' | grep '$APP_NAME' | head -n 1")

Bash script to pull pending Linux security updates from remote servers

I'm trying to pull pending linux updates from remote servers and plug them into Nagios. Here's a stripped down version of the code - the code that's giving me an error:
UPDATES=$(sshpass -p "password" StrictHostKeyChecking=no user@server:/usr/lib/update-notifier/apt-check 2>&1)
echo $UPDATES
Error message:
sshpass: Failed to run command: No such file or directory
The command in the question is wrong in multiple ways. A corrected version:
sshpass -p"password" \
ssh -o StrictHostKeyChecking=no user@server "/usr/lib/update-notifier/apt-check" 2>&1
For the -p option, there shouldn't be any space between the option and the value.
sshpass needs a command as argument, which is ssh in this case.
StrictHostKeyChecking=no should be following the option -o for ssh.
A space, not a :, is needed between user@server and the command you are going to run remotely, i.e., /usr/lib/....
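One more detail worth knowing for the Nagios side: apt-check reports its counts on stderr in the form `pending;security` (e.g. `5;2`), which is why the `2>&1` is there. Once `$UPDATES` holds that string, plain parameter expansion splits it. A sketch with a hard-coded example value standing in for the ssh capture:

```shell
#!/bin/sh
# Parse apt-check's "pending;security" output (example value; normally
# captured over ssh as in the corrected command).
UPDATES="5;2"
pending=${UPDATES%%;*}    # strip everything after the first ';'
security=${UPDATES#*;}    # strip everything up to the first ';'
echo "pending=$pending security=$security"
```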

Issue shell commands on the remote server from local machine

The following command issued on a Mac terminal is failing the docker command on the remote shell.
However, it works if I log in to the server and issue the command there, replacing ";" with "&&".
ssh -i "myKey.pem" user@host 'docker stop $(docker ps -a -q --filter ancestor=name/kind); docker rm $(docker ps -a -q --filter ancestor=name/kind); docker rmi name/kind; docker build -t name/kind .; sudo docker run -it -d -p 80:80 name/kind'
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
I need to run this command from the local terminal because it is part of a bigger command which first builds the project locally and scps it to the server:
$bigger-command && then-the-ssh-as-shown-above
How do I go about it? Thanks
The best way to pass very complex commands to ssh is to create a script on the server side.
If you need to pass some parameters, proceed this way:
create a .sh file on your localhost
scp it to your remote host
run ssh user@remotehost 'bash scriptfile.sh'
This should do the trick without giving you headaches about escaping.
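A sketch of those three steps, reusing the command sequence from the question; the host name is a placeholder, and the scp/ssh steps are commented out so the snippet stays self-contained:

```shell
#!/bin/sh
# Step 1: write the script locally. The quoted heredoc delimiter ('EOF')
# keeps the $(...) substitutions literal so they run on the server.
cat > deploy.sh <<'EOF'
set -e
docker stop $(docker ps -a -q --filter ancestor=name/kind)
docker rm $(docker ps -a -q --filter ancestor=name/kind)
docker rmi name/kind
docker build -t name/kind .
sudo docker run -it -d -p 80:80 name/kind
EOF

# Step 2: copy it over (placeholder host):
#   scp -i myKey.pem deploy.sh user@host:
# Step 3: run it:
#   ssh -i myKey.pem user@host 'bash deploy.sh'

head -n 1 deploy.sh
```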

SSH commands from script to remote server not working

Greetings all,
I have the following query and would appreciate any help on this. Thanks.
Scenario:
My local server (Server-A) is connected to one remote server (Server-B). Server-B is connected to 10 other remote servers (Server-C...Server-L).
Server-A is not directly connected to Server-C...Server-L; it is only connected through Server-B.
I have managed to do SSH key pairing between:
Server-A <----> Server-B
Server-B <----> Server-C....Server-L
So now I can log in to Server-C from Server-A using the commands below.
From Server-A:
ssh user-B@(IP-Server-B) -t ssh user-c@(IP-Server-C)
ssh -t user-B@(IP-Server-B) -t scp -oStrictHostKeyChecking=no test.file user-c@(IP-Server-C):/home/user-C
Here is my actual script: (Running from Server-A)
while read line
do
scp -oStrictHostKeyChecking=no test.file user-B@(IP-Server-B):/home/user-B
ssh -t user-B@(IP-Server-B) -t scp -oStrictHostKeyChecking=no test.file mtc@$line:/home/mtc
ssh -t user-B@(IP-Server-B) -t ssh -t -tqn user-c@$line sh /home/user-c/test.file
ssh -t user-B@(IP-Server-B) -t scp user-c@$line:/home/user-c/junk.txt /home/user-B
ssh -t user-B@(IP-Server-B) -t ssh user-c@$line rm -rf /home/user-c/junk.txt
scp user-B@(IP-Server-B):/home/user-B/junk.txt .
mv junk.txt junk.txt_$line
done < LabIpList
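One more pitfall in this loop: each plain ssh inherits the loop's stdin and can swallow the remaining lines of LabIpList, so only the first IP gets processed. Giving every ssh inside the loop the -n flag (or `< /dev/null`) prevents that. A local sketch of the effect, with `cat` simulating an ssh that reads stdin:

```shell
#!/bin/bash
# Simulate the while-read loop; "cat" stands in for ssh.
printf '1.2.3.4\n2.3.4.5\n3.4.5.6\n' > iplist.txt

count=0
while read -r line; do
  count=$((count + 1))
  cat > /dev/null < /dev/null   # like ssh -n: stdin redirected away
done < iplist.txt
echo "with -n: $count iterations"     # all three lines are seen

count=0
while read -r line; do
  count=$((count + 1))
  cat > /dev/null               # like plain ssh: eats the rest of the list
done < iplist.txt
echo "without -n: $count iterations"  # only the first line is seen
rm -f iplist.txt
```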
Here is the list of IP addresses of servers Server-C...Server-L:
cat LabIpList
1.2.3.4
2.3.4.5
3.4.5.6
4.5.6.7
5.6.7.8
6.7.8.9
7.8.9.10
....
.....
Query:
If I run the above commands on the command line they work flawlessly, but if I put them in a script they fail, for two reasons:
tcgetattr: Inappropriate ioctl for device
pseudo-terminal will not be allocated
As the SSH keys were exchanged only recently, the user has to manually type yes to add the hosts to known_hosts.
I believe you have already created passwordless login using ssh-keygen. Please use the below options for ssh in the script:
ssh -t -t -tq <IP>

How do I configure ssh proxycommand correctly to run docker exec?

I have a host defined in /etc/hosts called web1.
There is a docker container with the name store.
While on my workstation I can ssh into the machine and execute the command to enter the container interactively like this
ssh -t -t web1 docker exec -ti store /bin/bash
It properly drops me right into the container as root as I had hoped.
However, I really want to define a pseudo host named store and set it up in my ~/.ssh/config file like this using ProxyCommand so I can use ssh store
Host store
ProxyCommand ssh -t -t web1 docker exec -ti store /bin/bash
But it fails with the following error:
Bad packet length 218958363.
ssh_dispatch_run_fatal: Connection to UNKNOWN: message authentication code incorrect
Killed by signal 1.
If I add -v for some debugging, the last two lines just before the block above are
debug1: Authenticating to store:22 as 'user1'
debug1: SSH2_MSG_KEXINIT sent
I think it is trying to ssh into the store container instead of just executing the command, which is throwing that error. Is that correct? If not, what is the issue?
Is there a way to do this using ProxyCommand without trying to ssh into the container but instead just use the docker exec?
Is it easy enough to also setup the ssh into the container? We currently aren't doing that as a matter of practice.
Is there another option other than an alias for ssh-store?
The end goal is to have a virtual host defined that I can just say ssh store and have it end up in the store container on web1.
Edited:
Solution:
As Jakuje indicated, ProxyCommand with ssh will not allow a non-SSH command as the next hop. Therefore I am just using an alias, and potentially a bash function, to accomplish this. I've set up both.
Also per Jakuje's recommendation in ~/.ssh/config
Host web1
RequestTTY yes
in ~/.bash_aliases
alias ssh-store="ssh web1 docker exec -ti store /bin/bash"
so I can do ssh-store and end up in the container
or in ~/.bashrc
function ssh-web1 { ssh web1 docker exec -ti $1 /bin/bash; }
so I can do ssh-web1 store and also end up in the container
I think it is trying to ssh into the store container instead of just executing the command, which is throwing that error. Is that correct? If not, what is the issue?
Yes
Is there a way to do this using ProxyCommand without trying to ssh into the container but instead just use the docker exec?
No, it does not work this way. ProxyCommand expects the next hop to also be an SSH session, not a direct bash prompt.
Is it easy enough to also setup the ssh into the container? We currently aren't doing that as a matter of practice.
I think this is unnecessary overhead. But it is possible as described in many other questions around here.
At least you can get rid of -t -t by specifying RequestTTY in your ~/.ssh/config. But the rest has to be a bash alias or function (if you have more hosts, a function is more appropriate).
function ssh-docker {
ssh web1 docker exec -ti $1 /bin/bash
}
and then you can call it, regardless of the container, like this:
ssh-docker store
You just store such a function in your .bashrc, or wherever you keep your aliases.
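As a footnote for newer setups: OpenSSH 7.6 added a RemoteCommand option which, combined with RequestTTY, can make a plain `ssh store` drop into the container without any alias. A hedged sketch for ~/.ssh/config, assuming the same host and container names as the question and an OpenSSH 7.6+ client:

```
Host store
    HostName web1
    RequestTTY yes
    RemoteCommand docker exec -ti store /bin/bash
```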
