SSH works in Terminal but not in shell script - linux

I am trying to execute a script I uploaded to an AWS instance. If I run the following command in my MacBook Terminal, it succeeds:
ssh -o StrictHostKeyChecking=no -i ~/.ec2/my.pem ec2-user@ec2-<address>.amazonaws.com "chmod u+x ./myScript.sh"
I ported the same command to a simple shell script on my local machine, where I pass in the information:
#!/bin/sh
# myLocalScript.sh
host=$1
pem=$2
fileName=$3
ssh -o StrictHostKeyChecking=no -i $pemkey ec2-user@$host "chmod u+x ./$fileName"
When I run it using this command:
sh myLocalScript.sh ec2-user@ec2-<address>.amazonaws.com ~/.ec2/my.pem myScript.sh
I get the following error:
Warning: Identity file ec2-user@ec2-<address>.amazonaws.com not accessible: No such file or directory.
ssh: Could not resolve hostname chmod u+x ./myScript.sh: nodename nor servname provided, or not known
What am I doing wrong?

You need $pem, not $pemkey: the script assigns the key path to pem, but the ssh line references the unset variable $pemkey, so -i swallows the next argument (the host) as its identity-file path, and the remote command is then parsed as the hostname.
Additionally, you should get into the habit of double-quoting variables, except in very special situations where you really want an empty variable to "disappear".
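A quick demonstration of why the double-quoting habit matters; pem and count_args are made-up names for illustration, not from the script above:

```shell
# An unquoted variable containing a space is split into two words
# before the command ever sees it; quoting keeps it as one argument.
pem="my key.pem"                 # a hypothetical path containing a space

count_args() { echo "$#"; }

unquoted=$(count_args $pem)      # word splitting: 2 arguments
quoted=$(count_args "$pem")      # quoted: 1 argument
echo "unquoted=$unquoted quoted=$quoted"
```

The same splitting is what makes `ssh -i $pemkey ...` fall apart silently when the variable is empty or contains whitespace.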

Related

SCP and sshpass - Can't copy from remote source to local destination using script on PIs - debian11

I am struggling to copy files from a remote source to my local destination. I am using scp, and I have tried adding sshpass to supply the password.
I have a script that copies from my local source to a remote destination, and it works:
sudo sshpass -p "pi" ssh -o StrictHostKeyChecking=no pi@$VAR_IP ls /some_dir
this just connects once so the host key is accepted and later commands don't prompt on a first-time connection
sudo sshpass -p "pi" scp /path_to_app/$VAR_APP pi@$VAR_IP:/home/pi/$VAR_APP/
this successfully copies from my local source to my remote destination
Now, even though the scp documentation says I can scp from a remote source to a local destination, I can't get it to work. Here is how I am trying to do it in a different script:
sudo sshpass -p "pi" ssh -o StrictHostKeyChecking=no pi@$VAR_IP ls /some_dir
this is just to accept the host key up front, same as in the last script
sudo sshpass -p "pi" scp pi@$VAR_IP:/home/pi/$VAR_APP/logs/file /some_local_dir/
This gives me the error: scp: /home/pi/App_Name/logs/file: No such file or directory
the path doesn't exist locally but does on the remote, so it seems scp is looking for the file locally instead of remotely; any ideas on this?
I looked at all the related posts about this and the man pages but can't find an answer to my specific case.
I cannot set up key-based authentication, as I have too many sites; it would take forever.
I saw in one of the posts that someone tried it without sshpass, so I tried that too, like this:
sudo scp pi:pi@$VAR_IP:/home/pi/$VAR_APP/logs/file /some_local_dir/
This gave me the error: ssh: Could not resolve hostname pi: Name or service not known
I don't think it works like that, so I didn't go further down that path.
I hope I gave enough info with clarity; any help would really be appreciated.
Thank you so much for your time and input.
You mention that this command is not working:
sudo sshpass -p "pi" scp pi@$VAR_IP:/home/pi/$VAR_APP/logs/file /some_local_dir/
Did you check this?
sudo sshpass -p "pi" ssh pi@$VAR_IP "ls -l /home/pi/$VAR_APP/logs/file" to check that the file exists on the remote (double quotes, so $VAR_APP is expanded locally before the command is sent)
If the issue is still there, I recommend you try pssh and pscp, parallel versions of ssh/scp that can do the same job as sshpass.
I managed to fix it; for anyone who comes across this problem, here is how I found the fix:
The file I was looking for was owned by root, but I was sshing in as pi.
Even though I ran the script with sudo, and sudoed sshpass, that does not mean scp runs under sudo: each command on the line needs its own sudo.
e.g.:
sudo sshpass -p "pi" scp pi@IP:/file /local_dir/
This doesn't work, because sshpass has sudo but scp does not; however,
sudo sshpass -p "pi" sudo scp pi@IP:/file /local_dir/
works perfectly, because scp now has sudo rights.

Python Subprocess, using rsync with ssh key file, error with method Call

I've got this command to sync a data file from a remote server, using an ssh key file for authentication. My original command in bash is:
rsync -az -e 'ssh -i my_key.pem' admin@192.168.0.10:/export/home/admin/monitor_dir/monitor_srv1.dat .
That works just fine and transfers the file, but it fails when I try to use the Python subprocess library's call method to run the command from this Python script:
#!/usr/bin/python
import subprocess
server = ['192.168.0.10', '/export/home/admin/monitor_dir/', 'srv1']
subprocess.call(['rsync', '-az', '-e', '\'ssh', '-i', 'my_key.pem\'', 'admin@{0}:{1}monitor_{2}.dat'.format(server[0],server[1],server[2]), '.'])
This is the error shown:
rsync: link_stat "/Users/works/LUIS/scripts_py/my_key.pem'" failed: No such file or directory (2)
rsync: link_stat "/Users/works/LUIS/scripts_py/admin@192.168.0.10:/export/home/admin/monitor_dir/monitor_srv1.dat" failed: No such file or directory (2)
rsync error: some files could not be transferred (code 23) at /BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-51/rsync/main.c(996) [sender=2.6.9]
I tried the run method as well as call, but I don't know what I am doing wrong. Is there another way to execute the rsync command using Python?
I'm using:
-Python 3.5.0
-GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin16)
-rsync version 2.6.9 protocol version 29
rsync -az -e 'ssh -i my_key.pem' ...
When you run rsync from your shell, the single quotes are shell syntax to treat the string ssh -i my_key.pem as a single command-line argument, instead of three arguments separated by spaces. When your shell invokes the rsync program, rsync will have this list of command-line arguments:
rsync
-az
-e
ssh -i my_key.pem
...
In the python version:
subprocess.call(['rsync', '-az', '-e', '\'ssh', '-i', 'my_key.pem\'', ...
You're passing these command-line arguments to rsync:
rsync
-az
-e
'ssh
-i
my_key.pem'
You don't need the escaped quotes here--the quotes in the original are shell syntax, and you're not using a shell to invoke rsync here. You want to invoke rsync like this:
subprocess.call(['rsync', '-az', '-e', 'ssh -i my_key.pem', ...
As for this error:
rsync: link_stat "/Users/works/LUIS/scripts_py/admin@...
This may be a side effect of the other error. Rsync has decided it's trying to do a local->remote copy instead of a remote->local copy, and it's trying to interpret the "admin@..." argument as a local filename.
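The quote-stripping can be demonstrated without rsync at all; printf stands in for the invoked program here and prints each argument it actually receives:

```shell
# The shell consumes the single quotes before the program runs, so
# 'ssh -i my_key.pem' arrives as one argument, not three.
args=$(printf '[%s]\n' rsync -az -e 'ssh -i my_key.pem')
echo "$args"
```

Four bracketed arguments are printed, the last being `[ssh -i my_key.pem]`, which is exactly the argv list the Python version needs to reproduce without the escaped quotes.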

Running linux commands inside bash script throws permission denied error

We have a linux script in our environment which sshes to a remote machine as a common user and copies a script from the base machine to the remote machine via scp.
Script Test_RunFromBaseVM.sh
#!/bin/bash
machines=$1
for machine in $machines
do
ssh -tt -o StrictHostKeyChecking=no ${machine} "mkdir -p -m 700 ~/test"
scp -r bin conf.d ${machine}:~/test
ssh -tt ${machine} "cd ~/test; sudo bash bin/RunFromRemotevm.sh"
done
Script RunFromRemotevm.sh
#!/bin/bash
echo "$(date +"%Y/%m/%d %H:%M:%S")"
Before running the Test_RunFromBaseVM.sh script on the base vm, we run the two commands below.
eval $(ssh-agent)
ssh-add
Executing ./Test_RunFromBaseVM.sh "<list_of_machine_hosts>" gives a permission denied error:
[remote-vm-1] bin/RunFromRemotevm.sh: line 2: /bin/date: Permission denied
Any clue or insights on this error will be of great help. Thanks.
I believe the problem is the presence of the NOEXEC: tag in the sudoers file for the user (or group) that executes the "cd ~/test; sudo bash bin/RunFromRemotevm.sh" command. The tag causes any further execv(), execve() and fexecve() calls to be refused; in this case, the refused call is the one for /bin/date.
The solution is to remove the NOEXEC: tag from the main /etc/sudoers file, or from whichever file under /etc/sudoers.d defines it.
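For reference, a hypothetical sudoers entry showing the tag in question ("deploy" is an assumed user name, not taken from the original post):

```
# /etc/sudoers.d/deploy (hypothetical)
# With NOEXEC:, bash itself runs, but it is refused any further exec
# calls, so child programs like /bin/date fail with permission denied:
deploy ALL=(ALL) NOEXEC: /bin/bash
# Without the tag, child commands run normally:
# deploy ALL=(ALL) /bin/bash
```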

Bash script to pull pending Linux security updates from remote servers

I'm trying to pull pending Linux security updates from remote servers and plug them into Nagios. Here's a stripped-down version of the code - the part that's giving me an error:
UPDATES=$(sshpass -p "password" StrictHostKeyChecking=no user@server:/usr/lib/update-notifier/apt-check 2>&1)
echo $UPDATES
Error message:
sshpass: Failed to run command: No such file or directory
The command in the question is wrong in multiple ways. Corrected:
sshpass -p"password" \
ssh -o StrictHostKeyChecking=no user@server "/usr/lib/update-notifier/apt-check" 2>&1
For the -p option, there shouldn't be any space between the option and the value.
sshpass needs a command as its argument, which is ssh in this case.
StrictHostKeyChecking=no must follow the -o option of ssh.
A space, not a :, is needed between user@server and the command you are going to run remotely, i.e., /usr/lib/....
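One more detail worth knowing for the original goal: apt-check reports its "updates;security-updates" counts on stderr, which is why the 2>&1 inside the command substitution matters. A local stand-in function is used here instead of a real server:

```shell
# Stand-in for /usr/lib/update-notifier/apt-check, which writes its
# counts to stderr rather than stdout.
fake_apt_check() { echo "5;2" >&2; }

NO_REDIR=$(fake_apt_check)         # stderr is not captured: variable is empty
UPDATES=$(fake_apt_check 2>&1)     # 2>&1 routes the counts into the variable
echo "NO_REDIR='$NO_REDIR' UPDATES='$UPDATES'"
```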

How to do ssh in a bash script with a lot of commands?

I tried to run the following command in a bash script, but it only runs up to the ./install.sh csreplica call. The rest of the commands are not called at all.
ssh $node2user@$node2 "cd /tmp; tar -xf $mmCS.tar; cd $mmCS; ./install.sh csreplica; ./install.sh keepalived $vip low;./install.sh haproxy $node1:8080 $node2:8080 $vip:8080; ./install.sh confmongo $dbPath"
You can give ssh a script on standard input:
ssh $node2user@$node2 < my_script.sh
If I have to execute a complex script using SSH, I usually write the script locally, copy it to the target machine with scp and then execute it there:
scp foo.sh $node2user@$node2:
ssh $node2user@$node2 "bash ./foo.sh"
That way, I can debug the script simply by invoking it with bash -x and I can use the full power of BASH.
Alternatively, you can use
ssh $node2user@$node2 "set -x; cd /tmp; ..."
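The stdin approach from the first suggestion can be sketched locally; sh -s stands in for the ssh invocation (over SSH it would be `ssh $node2user@$node2 'bash -s' < my_script.sh`):

```shell
# Feed a multi-command script to a shell over stdin: every command in
# the heredoc runs in order, in a single session, just as it would on
# the remote side of an ssh connection.
result=$(sh -s <<'EOF'
cd /tmp
echo "step one in $PWD"
echo "step two"
EOF
)
echo "$result"
```

Because the script arrives as one stream, there is no per-command quoting to get wrong, which is the usual failure mode of long semicolon-separated ssh command strings.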
