SSH commands from script to remote server not working - linux

Greetings all,
I have the following query and would appreciate any help on this. Thanks.
Scenario :
My local server (Server-A) is connected to one remote server (Server-B). Server-B is connected to 10 other remote servers (Server-C...Server-L).
Server-A is not directly connected to Server-C...Server-L; it is only reachable through Server-B.
I have managed to do SSH key pairing between:
Server-A <----> Server-B
Server-B <----> Server-C....Server-L
So now I can log in to Server-C from Server-A using the commands below.
From Server-A:
ssh user-B@(IP-Server-B) -t ssh user-c@(IP-Server-C)
ssh -t user-B@(IP-Server-B) -t scp -oStrictHostKeyChecking=no test.file user-c@(IP-Server-C):/home/user-C
Here is my actual script: (Running from Server-A)
while read line
do
scp -oStrictHostKeyChecking=no test.file user-B@(IP-Server-B):/home/user-B
ssh -t user-B@(IP-Server-B) -t scp -oStrictHostKeyChecking=no test.file mtc@$line:/home/mtc
ssh -t user-B@(IP-Server-B) -t ssh -t -tqn user-c@$line sh /home/user-c/test.file
ssh -t user-B@(IP-Server-B) -t scp user-c@$line:/home/user-c/junk.txt /home/user-B
ssh -t user-B@(IP-Server-B) -t ssh user-c@$line rm -rf /home/user-c/junk.txt
scp user-B@(IP-Server-B):/home/user-B/junk.txt .
mv junk.txt junk.txt_$line
done < LabIpList
Here is the list of IP addresses of servers Server-C...Server-L:
cat LabIpList
1.2.3.4
2.3.4.5
3.4.5.6
4.5.6.7
5.6.7.8
6.7.8.9
7.8.9.10
....
.....
Query:
If I run the above commands on the command line they work flawlessly; however, if I put them in a script they fail, for two reasons:
tcgetattr: Inappropriate ioctl for device
pseudo-terminal will not be allocated
As the SSH keys were only recently exchanged, the user has to manually type yes to add each host to known_hosts.

I believe you have already set up passwordless login using ssh-keygen. Please use the options below for ssh in the script:
ssh -t -t -tq <IP>
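If the OpenSSH client on Server-A is 7.3 or newer, an alternative worth considering is ProxyJump, which removes the nested ssh/-t layering entirely: ssh tunnels through Server-B for you, so the script can address Server-C...Server-L directly. A sketch of the client config (the IPs, users, and the IP-Server-B placeholder are taken from the question, not verified):

```
# ~/.ssh/config on Server-A (sketch)
Host 1.2.3.4 2.3.4.5 3.4.5.6
    User user-c
    ProxyJump user-B@(IP-Server-B)
    StrictHostKeyChecking no
```

With this in place, the loop body reduces to plain `scp test.file $line:/home/user-c` and `ssh $line sh /home/user-c/test.file`, with no intermediate copies on Server-B and no pseudo-terminal required.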

Related

cat command not working as expected using ssh

I am creating a YAML file in GitHub Actions to install software on Linux servers created using Terraform. Using the pipeline I am able to SSH into the Linux servers. On one of the servers I am creating a .ssh directory and an id_rsa file (which contains the private key of the other server), which I intend to use to scp into the other server to copy some files. I have used the echo command to copy the private key into the id_rsa file. I want to view the content of id_rsa to make sure the correct private key was copied. I am using the cat command but it does not work. Here is my code:
ssh chefnode -T
ssh chefnode -t 'sudo apt-get update && sudo apt-get upgrade -y'
ssh chefnode -t 'echo "$INFRA_PRIVATE_KEY" > "/home/'$SSH_NODE_USER'/.ssh/id_rsa"'
ssh chefnode -t 'cat "/home/'$SSH_NODE_USER'/.ssh/id_rsa"'
The commands run, but the cat command does not return any output. It does not fail; the pipeline passes, but this command does not render any output. I have tried the following combinations as well, but nothing works:
ssh chefnode -t 'cat /home/"$SSH_NODE_USER"/.ssh/id_rsa'
ssh chefnode -t 'cat /home/$SSH_NODE_USER/.ssh/id_rsa'
ssh chefnode -t cat /home/'$SSH_NODE_USER'/.ssh/id_rsa
ssh chefnode -t cat /home/$SSH_NODE_USER/.ssh/id_rsa
I tried this too
ssh chefnode -t 'echo "$INFRA_PRIVATE_KEY" > "/home/'$SSH_NODE_USER'/.ssh/id_rsa"'
ssh chefnode -t 'cd "/home/'$SSH_NODE_USER'/.ssh";"cat id_rsa"'
This says cat: command not found. I just want to view the contents of the id_rsa file; I am not sure what I am doing wrong.
ssh chefnode -t 'echo "$INFRA_PRIVATE_KEY" > "/home/'$SSH_NODE_USER'/.ssh/id_rsa"'
Unless $INFRA_PRIVATE_KEY is a variable set by the login environment on chefnode, this is likely to be empty.
I assume you wanted to send a variable set in the local console, but as written this literally sends "$INFRA_PRIVATE_KEY" to the server, where it probably expands to nothing (i.e. the file is actually empty).
you probably instead want something like:
ssh chefnode -t 'echo "'"$INFRA_PRIVATE_KEY"'" > "/home/'$SSH_NODE_USER'/.ssh/id_rsa"'
which expands the variable locally and then sends the result with quoting (assuming there are no quotes embedded in the variable value itself).
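The quoting can be checked without any ssh connection by building the command string locally and printing it. The values below are hypothetical stand-ins for the pipeline secrets:

```shell
# Hypothetical stand-ins for the pipeline's secrets:
INFRA_PRIVATE_KEY="secret-key-value"
SSH_NODE_USER="alice"

# Same quoting as the fixed command: the single-quoted pieces reach the
# remote shell literally, while "$INFRA_PRIVATE_KEY" and $SSH_NODE_USER
# are expanded by the *local* shell before ssh would ever see them.
CMD='echo "'"$INFRA_PRIVATE_KEY"'" > "/home/'$SSH_NODE_USER'/.ssh/id_rsa"'
echo "$CMD"
```

Printing the string first is a cheap way to see exactly what the remote shell will receive before running it for real.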

Execute remote SSH command and evaluate this command on remote side

Let's consider a following code:
CMD=echo $(hostname --fqdn) >> /tmp/hostname_fqdn
ssh some_user@10.9.11.4 -i ~/.ssh/id_rsa $CMD
And now, on remote side created file /tmp/hostname_fqdn contains hostname of client side instead of hostname of remote side. Is it possible to evaluate part of command (hostname --fqdn) on remote side? How to do it?
ssh some_user@10.9.11.4 -i ~/.ssh/id_rsa hostname --fqdn >> /tmp/hostname_fqdn
or, if the CMD may change at runtime:
CMD="hostname --fqdn" && ssh cloud.luminrobotics.com $CMD >> /tmp/xxx
You cannot, however, keep the redirection of the output (>> filename) be part of the command variable, because the command will be executed on the remote host.
PS: If what you want to do with the output changes at runtime as well, then you need to use a pipe and a separate command variable, e.g.,:
CMD_REMOTE="hostname --fqdn"
CMD_LOCAL="tee /tmp/hostname_fqdn"
ssh cloud.luminrobotics.com $CMD | $CMD_LOCAL
First,
CMD=echo $(hostname --fqdn) >> /tmp/hostname_fqdn
will likely do nothing like what you expect.
CMD=echo will be parsed as setting echo as the value of $CMD, and then the output of the hostname subshell will be executed as a command, which will likely fail, while the redirection creates an empty file /tmp/hostname_fqdn.
ssh on the other hand is pretty flexible. You could use
ssh some_user@10.9.11.4 -i ~/.ssh/id_rsa 'hostname --fqdn >> /tmp/hostname_fqdn'
if you want the remote hostname saved to a file on the remote host, or
ssh some_user@10.9.11.4 -i ~/.ssh/id_rsa 'hostname --fqdn' >> /tmp/hostname_fqdn
if you want the remote hostname on the local server, or
hostname --fqdn | ssh some_user@10.9.11.4 -i ~/.ssh/id_rsa 'cat >> /tmp/hostname_fqdn'
if you want the local hostname on the remote server...
You have options. :)
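The difference between the first two options is where the redirection happens, and that can be demonstrated without a real remote host. Below, a local run() function stands in for ssh: like sshd, it hands its argument string to a fresh shell.

```shell
# run() stands in for ssh: it passes its argument string to a new shell,
# the way sshd passes the command string to the remote login shell.
run() { sh -c "$*"; }

tmp=$(mktemp -d)

# Redirection INSIDE the quotes is part of the command string, so it is
# performed on the "remote" side:
run "echo remote-side >> $tmp/a"

# Redirection OUTSIDE the quotes is handled by the local shell; only the
# echo travels "remotely":
run 'echo local-side' >> "$tmp/b"
```

Swap run for ssh and the same quoting rules decide whether the file lands on the remote or the local machine.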

ssh inside ssh with multiple commands

When I try to remotely execute more than one command through ssh nested inside another ssh, I get a weird result.
From man ssh:
-t
Force pseudo-terminal allocation. This can be used to execute
arbitrary screen-based programs on a remote machine, which can be very
useful, e.g. when implementing menu services. Multiple -t options
force tty allocation, even if ssh has no local tty.
If I do
ssh -t root@host1 ssh root@host2 "cat /etc/hostname && cat /etc/hostname"
or
ssh -t root@host1 ssh -t root@host2 "cat /etc/hostname && cat /etc/hostname"
in both cases I get
host1
Connection to host1 closed.
host2
Connection to host2 closed.
I want this result:
host1
host1
Connection to host1 closed.
Connection to host2 closed.
I want to run all commands in same server using ssh inside ssh.
If I use only one ssh, it works:
ssh -t root@host1 "cat /etc/hostname && cat /etc/hostname"
host1
host1
Connection to host1 closed.
I got it to work, but I cannot explain what is happening:
ssh -t root@host1 "ssh -t root@host2 \"cat /etc/hostname ; cat /etc/hostname\""
host1
host1
Connection to host1 closed.
Connection to host2 closed.
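What the escaped inner quotes do is survive one round of quote stripping: each ssh hop removes one layer, so without the extra layer the connective (&& or ;) is consumed by the first host's shell. The same effect can be reproduced locally with nested shells standing in for the two hops:

```shell
# Outer sh -c plays host1's shell, inner sh -c plays host2's shell.

# Without an inner quoting layer, && is consumed by the outer (host1)
# shell, so only the first command runs on "host2":
a=$(sh -c 'sh -c "echo inner" && echo outer')

# With both commands inside the inner quotes, the whole compound command
# reaches the inner (host2) shell:
b=$(sh -c 'sh -c "echo inner && echo inner"')

echo "$a"
echo "$b"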
That's not how -t works.
For your option, try:
ssh root@host1 .....; ssh root@host2 ....
Otherwise, use pssh, which will run the uptime command on both servers at the same time:
pssh -h hostfile -i uptime

Bash script to pull pending Linux security updates from remote servers

I'm trying to pull pending Linux updates from remote servers and plug them into Nagios. Here's a stripped-down version of the code, the part that's giving me an error:
UPDATES=$(sshpass -p "password" StrictHostKeyChecking=no user@server:/usr/lib/update-notifier/apt-check 2>&1)
echo $UPDATES
Error message:
sshpass: Failed to run command: No such file or directory
The command in the question is wrong in multiple ways:
sshpass -p"password" \
ssh -o StrictHostKeyChecking=no user@server "/usr/lib/update-notifier/apt-check" 2>&1
For the -p option, there shouldn't be any space between the option and the value.
sshpass needs a command as argument, which is ssh in this case.
StrictHostKeyChecking=no should be following the option -o for ssh.
A space, not a :, is needed between user@server and the command you are going to run remotely, i.e., /usr/lib/....
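Once the command runs, the output still has to be split for Nagios. On Ubuntu, apt-check reports its counts on stderr as two numbers separated by a semicolon (total;security), e.g. "12;4"; a minimal parsing sketch under that assumption:

```shell
# Sketch: split apt-check's "TOTAL;SECURITY" output (e.g. "12;4").
# parse_apt_check is a hypothetical helper, not part of apt-check itself.
parse_apt_check() {
    total=${1%%;*}      # everything before the first ';'
    security=${1##*;}   # everything after the last ';'
    printf '%s pending, %s security\n' "$total" "$security"
}

parse_apt_check "12;4"
```

The captured $UPDATES string from the corrected sshpass/ssh command can be fed straight into such a helper before building the Nagios status line.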

How do I become root on a remote server until I am disconnected from that server?

So far I have this:
sshpass -p "password" ssh -q username@192.168.167.654 " [ "$(whoami)" != "root" ] && exec sudo -- "$0" "$#" ; whoami ; [run some commands as root]"
It keeps giving me username as the answer from whoami. I want to be root as soon as I am connected to the server (but I can only connect to it as username). How can I be root throughout the connection to the server?
Clarification:
I want to access a remote server. It is mandatory that I connect as "username" and then switch to root to run commands and copy files that only root is able to. So while I am connected to that server via ssh, I want to be root until my commands on the remote server are finished. My problem is that I am not able to do so because I don't have the knowledge, hence I am posting it here.
Restrictions:
-can't use rsync.
-have to connect to the server as "username" and then switch to root
sshpass -p "password" ssh -q username@192.168.167.654 exec sudo -s << "END"
whoami
commands to run
END
You can try something like the above (untested).
I've used the same concept to accomplish a similar task:
scp FileWithCommands.sh UserName#Hostname:/tmp
ssh Username#HostName "su -s -c /tmp/FileWithCommands.sh"
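The heredoc variant works because ssh connects its stdin to the remote command, so "sudo -s" reads the commands from the stream. A local sketch of the same mechanism, with sh -s standing in for the remote sudo shell:

```shell
# sh -s reads commands from stdin, just as "sudo -s" does at the far end
# of the ssh connection above. (Local stand-in; no ssh or sudo involved.)
out=$(sh -s << 'END'
echo step-1
echo step-2
END
)
echo "$out"
```

Every line between the heredoc markers executes in that one shell, which is why the whole block runs under a single elevated session instead of one sudo per command.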
