We have a Linux script in our environment which SSHes to a remote machine as a common user and copies a script from the base machine to the remote machine through scp.
Script Test_RunFromBaseVM.sh
#!/bin/bash
machines=$1
for machine in $machines
do
ssh -tt -o StrictHostKeyChecking=no ${machine} "mkdir -p -m 700 ~/test"
scp -r bin conf.d ${machine}:~/test
ssh -tt ${machine} "cd ~/test; sudo bash bin/RunFromRemotevm.sh"
done
Script RunFromRemotevm.sh
#!/bin/bash
echo "$(date +"%Y/%m/%d %H:%M:%S")"
Before running the Test_RunFromBaseVM.sh script on the base VM, we run the two commands below.
eval $(ssh-agent)
ssh-add
Executing ./Test_RunFromBaseVM.sh "<list_of_machine_hosts>" gives a permission denied error:
[remote-vm-1] bin/RunFromRemotevm.sh:line 2: /bin/date: Permission denied
Any clues or insights on this error would be of great help.
Thanks.
I believe the problem is the presence of the NOEXEC: tag in the sudoers file, corresponding to the user (or group) that's executing the "cd ~/test; sudo bash bin/RunFromRemotevm.sh" command. This causes any further execv(), execve() and fexecve() calls to be refused; in this case, the refused call is for /bin/date.
The solution is obviously to remove the NOEXEC: tag from the main /etc/sudoers file or from whichever file under /etc/sudoers.d defines it.
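For illustration, a hypothetical pair of sudoers entries (the user name is a placeholder, not taken from the question) before and after the fix:
# Entry that would cause the error: NOEXEC blocks further exec() calls such as /bin/date
someuser ALL=(ALL) NOEXEC: ALL
# After removing the tag, commands run via sudo can exec other programs again
someuser ALL=(ALL) ALL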
Related
I've been working on a bash script that automatically runs certain scripts on remote machines and saves the logs to certain folders. As of now I have been copying the local script to the remote machine, executing it into a remote log, copying the remote log into a local folder, and then deleting the remote log and remote copy of the script.
This works, but I know it can work better if I can avoid doing all the in-between steps. The one caveat is I need this to be automatic and passwordless (meaning no user input at all). One of the scripts needs to be run as root or it won't display all the necessary information and will userlock the machine temporarily.
The code I am currently using to execute the remote script into a log (which I later retrieve with scp) is below.
sshpass -f password.txt ssh user@1.1.1.1 "echo $password | sudo -S /home/user/remoteScript.sh > remoteLog.txt"
And in my testing, executing the local script on the remote machine into a local log file works like below:
sshpass -f password.txt ssh user@1.1.1.1 "bash -s" < /home/user/localScript.sh >> localLog.txt
How could I combine the elements of the two code examples above in order to make a local script run on a remote machine with root privilege and log the output into a local text file?
Some things I have tried that do not work include:
sshpass -f password.txt ssh user@1.1.1.1 "bash -s" < "echo $password | sudo -S /home/user/script.sh >> log.txt"
sshpass -f password.txt ssh user@1.1.1.1 "echo $password | sudo -S /home/user/script.sh" >> log.txt
and notably
sshpass -f password.txt ssh user@1.1.1.1 echo $password | sudo -S /home/user/script.sh >> log.txt
which just executes the local script with root privilege on the local machine.
I have tried many variations of the above commands and I believe it's some sort of piping or flow issue, but I cannot figure it out. Is there any way to do this?
Machines are Ubuntu 16.04 and you cannot ssh in already as root.
Thanks in advance
A) It might be worth looking into an orchestration/config management solution (e.g. Ansible). It's a steep learning curve at first, but the initial outlay will pay off in spades down the line if you're managing multiple servers.
B) Set up password-less sudo for the scripts you want to execute, so you don't have to pass around the password in plaintext, and can run without any input. In sudoers:
user ALL=(ALL) NOPASSWD:/home/user/script.sh
C) Set up an SSH key, so you don't need to use a password at all.
But in a nutshell, the code you're looking for is something like:
cat /home/user/localScript.sh | ssh user@1.1.1.1 "sudo bash" > log.txt
This executes a non-interactive bash shell as root on the remote machine; it takes the commands to execute on standard input, and the standard output comes back over the ssh channel for you to write to your local log.
Look into &> or 2>&1 if you want standard error too.
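Putting those pieces together (a sketch reusing the answer's command and the question's paths; it assumes key-based login and that sudo needs no password for this user), capturing both stdout and stderr locally could look like:
# Feed the local script to a root shell on the remote host and log everything locally
cat /home/user/localScript.sh | ssh user@1.1.1.1 "sudo bash" > localLog.txt 2>&1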
I'm trying to pull pending Linux updates from remote servers and plug them into Nagios. Here's a stripped-down version of the code - the code that's giving me an error:
UPDATES=$(sshpass -p "password" StrictHostKeyChecking=no user@server:/usr/lib/update-notifier/apt-check 2>&1)
echo $UPDATES
Error message:
sshpass: Failed to run command: No such file or directory
The command in the question is wrong in multiple ways. A corrected version:
sshpass -p"password" \
ssh -o StrictHostKeyChecking=no user@server "/usr/lib/update-notifier/apt-check" 2>&1
For the -p option, there shouldn't be any space between the option and the value.
sshpass needs a command as argument, which is ssh in this case.
StrictHostKeyChecking=no should follow the -o option for ssh.
A space, not a :, is needed between user@server and the command you are going to run remotely, i.e., /usr/lib/....
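Putting that back into the shape the question wanted (a sketch keeping the question's variable name and placeholder credentials):
# apt-check writes its counts to stderr, hence the 2>&1
UPDATES=$(sshpass -p"password" ssh -o StrictHostKeyChecking=no user@server "/usr/lib/update-notifier/apt-check" 2>&1)
echo "$UPDATES"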
I am trying to run a local script with sudo over ssh,
ssh $HOST < script.sh
and I tried
ssh -t $HOST "sudo -s && bash" < script.sh
Actually, I searched a lot on Google and found some similar questions; however, I didn't find a solution that can run a local script with sudo.
Reading the error message of
$ ssh -t $HOST "sudo -s && bash" < script.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
makes it pretty clear what's going wrong here.
You can't use the ssh parameter -t (which sudo needs to ask for a password) whilst redirecting your script to the stdin of bash in your remote session.
If it is acceptable for you, you could transfer the local script via scp to your remote machine and then execute the script without the need of I/O redirection:
scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo -s bash /tmp/script.sh"
Another way to fix your issue is to use sudo in non-interactive mode (-n), but for this you need to set NOPASSWD within the remote machine's sudoers file for the executing user. Then you can use
ssh $HOST "sudo -n -s bash" < script.sh
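A hypothetical sudoers entry for that (the user name is a placeholder; edit the file with visudo):
user ALL=(ALL) NOPASSWD: /bin/bash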
To make Edward Itrich's answer more scalable and geared towards frequent use, you can set up a system where you only run a one line script that can be quickly ported to any host, file or command in the following manner:
Create a script in your Scripts directory, if you have one, giving it whatever name you want the script to have (I use this format frequently: change one word for the script name, then create the file, set permissions, and open it for editing):
newscript="runlocalscriptonremotehost.sh"
touch $newscript && chmod +x $newscript && nano $newscript
In nano, fill out the script as follows, placing the directory and name of the script you want to run remotely in the variable lines of runlocalscriptonremotehost.sh (you only need to edit lines 1-3):
HOSTtoCONTROL="sudoadmin#192.168.0.254"
PATHtoSCRIPT="/home/username/Scripts/"
SCRIPTname="scripttorunremotely.sh"
scp $PATHtoSCRIPT$SCRIPTname $HOSTtoCONTROL:/tmp/ && ssh -t $HOSTtoCONTROL "sudo -s bash /tmp/$SCRIPTname"
Then just run:
sh ./runlocalscriptonremotehost.sh
Keep runlocalscriptonremotehost.sh open in a tabbed text editor for quick updating, create a bash alias for the script (a sketch of such an alias is below), and you have yourself an app-ified version of this frequently used operation.
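For instance, a hypothetical alias (the alias name and path are illustrative) added to ~/.bashrc:
# One word now runs the whole copy-and-execute operation
alias runremote='sh ~/Scripts/runlocalscriptonremotehost.sh'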
First of all, divide your objective into 2 parts: 1) ssh to the host; 2) run the command you want as sudo. After you are certain that you can 1) access the host and 2) have sudo privileges, you can combine the two commands with && (see the sketch below). What x_cmd && y_cmd does is run y_cmd only after x_cmd has exited successfully.
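A minimal sketch of that approach (the host and script path are placeholders):
# Step 1: verify the host is reachable; step 2: run the script as sudo.
# && makes the second command run only if the first exits successfully.
ssh user@host "true" && ssh -t user@host "sudo -s bash /tmp/script.sh"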
When I run a script such as this:
ssh -t root#10.10.10.10 '/tmp/somescript.sh'
where the script is defined as:
#!/bin/sh
mkdir -p /data/workday/cred
chown -R myuser:myuser /data
su myuser - # <------- NOTICE THIS ! ! ! !
rpm -Uvp --force --nodeps --prefix /data/place /data/RPMs/myrpm.rpm
Notice the above su command.
If I comment out the su command, the script runs remotely and then my shell prompt returns to where I came from (the same server where I ran the ssh command above).
But leaving the script as listed above, causes the script to complete successfully but the shell prompt stays on the remote server.
How can I prevent that, while making sure that the issuer of the rpm command is a different user than root, just as listed?
But leaving the script as listed above, causes the script to complete successfully but the shell prompt stays on the remote server.
Not exactly. The script is running up to the su command, which spawns a new subshell, and stopping there until you exit the shell. Until you exit that shell, the rpm command never runs, and when it does, it runs as root.
If you want to run the rpm command as a non-root user, you'd need to do something a little different, like:
sudo -u myuser rpm -Uvp ...
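For example, the remote script could be reworked along these lines (a sketch combining the question's script with the sudo -u suggestion; since the script already runs as root, no extra sudoers rule should be needed):
#!/bin/sh
mkdir -p /data/workday/cred
chown -R myuser:myuser /data
# Run only the rpm step as myuser instead of switching shells with su,
# so the script keeps executing and the prompt returns when it finishes
sudo -u myuser rpm -Uvp --force --nodeps --prefix /data/place /data/RPMs/myrpm.rpm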
Add 'exit' at the end of your script.
I am trying to execute a script I uploaded to an AWS instance. If I run the following command in my MacBook Terminal, it succeeds:
ssh -o StrictHostKeyChecking=no -i ~/.ec2/my.pem ec2-user#ec2-<address>.amazonaws.com "chmod u+x ./myScript.sh"
I ported the same command to a simple shell script on my local machine, where I pass in the information:
#!/bin/sh
# myLocalScript.sh
host=$1
pem=$2
fileName=$3
ssh -o StrictHostKeyChecking=no -i $pemkey ec2-user#$host "chmod u+x ./$fileName"
When I run it using this command:
sh myLocalScript.sh ec2-user#ec2-<address>.amazonaws.com ~/.ec2/my.pem myScript.sh
I get the following error:
Warning: Identity file ec2-user#ec2-<address>.amazonaws.com not accessible: No such file or directory.
ssh: Could not resolve hostname chmod u+x ./myScript.sh: nodename nor servname provided, or not known
What am I doing wrong?
You need $pem, not $pemkey.
Additionally, you should get into the habit of double-quoting variables, except in very special situations where you really want an empty variable to "disappear".
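A minimal corrected sketch of that line (same positional arguments as in the question):
# Use $pem (the variable that was actually assigned) and double-quote the expansions
ssh -o StrictHostKeyChecking=no -i "$pem" ec2-user@"$host" "chmod u+x ./$fileName"
Since the script already hard-codes ec2-user@, the first argument would then be just the bare hostname.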