ssh command to run remote script leaves shell open on remote server when switching user - linux

When I run a script such as this:
ssh -t root@10.10.10.10 '/tmp/somescript.sh'
where the script is defined as:
#!/bin/sh
mkdir -p /data/workday/cred
chown -R myuser:myuser /data
su myuser - # <------- NOTICE THIS ! ! ! !
rpm -Uvp --force --nodeps --prefix /data/place /data/RPMs/myrpm.rpm
Notice the above su command.
If I comment out the su command, the script runs remotely and then my shell prompt returns to where I came from (the same server where I ran the ssh command above).
But leaving the script as listed above causes the script to complete successfully, yet the shell prompt stays on the remote server.
How can I prevent that, while making sure that the user issuing the rpm command is a different user than root, just as listed?

But leaving the script as listed above causes the script to complete successfully, yet the shell prompt stays on the remote server.
Not exactly. The script runs up to the su command, which spawns a new interactive shell, and stops there until you exit that shell. Until you exit it, the rpm command never runs, and when it does run, it runs as root.
If you want to run the rpm command as a non-root user, you'd need to do something a little different, like:
sudo -u myuser rpm -Uvp ...
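For reference, a sketch of /tmp/somescript.sh with that change (untested; it assumes root on the remote host can switch to myuser without a password prompt, which is normally the case):
#!/bin/sh
mkdir -p /data/workday/cred
chown -R myuser:myuser /data
# run only the rpm step as myuser; the script itself keeps running as root
sudo -u myuser rpm -Uvp --force --nodeps --prefix /data/place /data/RPMs/myrpm.rpm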

add 'exit' at the end of your script

Related

Running a script to connect through ssh then run npm install pm2 results in: 'npm: command not found'

I'm running this command from my local machine:
ssh -tt -i "pem.pem" ec2-user@ec2-IPADDRESS.compute-1.amazonaws.com "sudo su -c 'cd /dir/;npm install pm2'"
It connects, operates as a super user, cds to /dir and attempts to run the command, but it reports that npm is not a command recognized by the system.
However, when I connect "manually" i.e.
ssh -i "pem.pem" ec2-user#ec2-IPADDRESS.compute-1.amazonaws.com
sudo su
cd /dir
npm install pm2
it works.
npm is installed under root and the system can see it.
ssh -tt -i "pem.pem" ec2-user@ec2-IPADDRESS.compute-1.amazonaws.com "sudo su -c 'cd /dir/;whoami'"
and
ssh -i "pem.pem" ec2-user#ec2-IPADDRESS.compute-1.amazonaws.com
sudo su
cd /dir
whoami
both return "root"
Why can't the npm command be found when running over ssh?
When you log in, you create an interactive login shell, which typically reads a couple of files, including /etc/profile, $HOME/.profile, and $HOME/.bashrc in the case of bash.
Any of these files can add extra elements (paths) to the PATH variable, which affects which commands can be found.
When you run a command over ssh directly, no such initialisation takes place, and the value of $PATH may be limited to just /bin:/usr/bin.
Next there is sudo, which may or may not import the value of PATH when looking for commands.
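You can see the difference by comparing PATH in the two contexts; the values below are only illustrative:
# in an interactive login session on the instance:
echo $PATH
# e.g. /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# as a non-interactive remote command:
ssh -i "pem.pem" ec2-user@ec2-IPADDRESS.compute-1.amazonaws.com "sudo su -c 'echo \$PATH'"
# e.g. /usr/sbin:/usr/bin:/sbin:/bin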
Solution
The best you can do is find out where npm is installed and use its full path.
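For example, if npm turns out to live in /usr/local/bin on the instance (check with command -v npm in an interactive root session), the original command becomes (the path here is an assumption):
ssh -tt -i "pem.pem" ec2-user@ec2-IPADDRESS.compute-1.amazonaws.com "sudo su -c 'cd /dir/; /usr/local/bin/npm install pm2'"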

execution of remote script containing "sudo su" through ssh [duplicate]

This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 1 year ago.
I need to run a script which needs root privileges remotely. Therefore I add a "sudo su" command at the start of the script. However, the ssh session just logs in to the remote server and gets stuck at the sudo su command; it does not continue with the next line of the script.
server.sh
sudo -s
sudo apt-get update
sudo apt-get upgrade
client.sh
scp -i "$key.pem" server.sh "$dns:/tmp"
ssh -tt -i "$key.pem" $dns "bash /tmp/server.sh"
server.sh and client.sh are in the same local directory. When I run ./client.sh, server.sh, which is run remotely, gets stuck at the first line and does not continue with the "sudo apt-get update" command. What is the reason for this behaviour, and is there a solution?
When you run the command sudo -s you change the user and the rest of the script is lost because it is in a new shell.
Remove the line sudo -s and try running the script again.
Note: it is important to remember that the user running sudo must be in the /etc/sudoers file with an entry like username ALL=(ALL) NOPASSWD:ALL.
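With that line removed, server.sh reduces to something like this (sketch; the -y flag is an addition so apt-get does not wait for confirmation over the non-interactive connection):
sudo apt-get update
sudo apt-get -y upgrade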
sudo -s with no command starts a new, interactive shell. The following commands won't execute until it exits. See man sudo.
If you are already running apt-get via sudo, and sudo does not require a password, why do you need the sudo -s?
You can use
ssh user@ip '[command]'
to run [command] on the remote host. If you have a user with root privileges (i.e. sudo) and you can run commands without a password (NOPASSWD:[command, list or ALL]), this is the safest way I can suggest. However, if you want the script to stay on the remote server and be triggered by the local computer, you can always run
ssh user@ip 'sudo /bin/bash /home/[user]/server.sh'
This would work as well. You can also use the "scp" command to copy the script and then delete it with ssh again, for an automated one-shot approach.
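A one-shot version of that copy, run and clean-up sequence could look like this (sketch; the key and host variables are taken from the question, the rm step is the assumed clean-up):
scp -i "$key.pem" server.sh "$dns:/tmp/server.sh"
ssh -tt -i "$key.pem" "$dns" "sudo /bin/bash /tmp/server.sh && rm -f /tmp/server.sh"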

ssh sudo to a different user execute commands on remote Linux server

We have passwordless authentication between the servers for the root user. I am trying to run the alias on the remote server as below:
#ssh remoteserver runuser -l wasadmin wasstart
But it is not working. Any suggestions, or any other method to achieve this?
Based on your comments, since you need to sudo to wasadmin in order to run wasstart, you can try this:
ssh remoteserver 'echo /path/to/wasadmin wasstart | sudo su - wasadmin'
To add an alias in Linux you must run
alias yourcommandname='command'
Notice:
This will only work until you close or exit the current shell. To fix this, add the alias to your .bash_profile and run source .bash_profile.
Also, the name of the profile file depends on which shell you are using (bash, zsh, ...).
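For example, to make a wasstart alias survive new shells (the target command is just the one quoted in the answer above and may differ on your system):
echo "alias wasstart='/path/to/wasadmin wasstart'" >> ~/.bash_profile
source ~/.bash_profile
# note: bash only expands aliases in interactive shells, so this helps at a prompt, not in 'ssh host command'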

How to sudo run a local script over ssh

I am trying to sudo-run a local script over ssh,
ssh $HOST < script.sh
and I tried
ssh -t $HOST "sudo -s && bash" < script.sh
Actually, I searched a lot on Google and found some similar questions; however, I didn't find a solution that can sudo-run a local script.
Reading the error message of
$ ssh -t $HOST "sudo -s && bash" < script.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
makes it pretty clear what's going wrong here.
You can't use the ssh parameter -t (which sudo needs in order to ask for a password) whilst redirecting your script to the stdin of bash in your remote session.
If it is acceptable for you, you could transfer the local script via scp to your remote machine and then execute the script without the need of I/O redirection:
scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo -s bash /tmp/script.sh"
Another way to fix your issue is to use sudo in non-interactive mode (-n), but for this you need to set NOPASSWD within the remote machine's sudoers file for the executing user. Then you can use
ssh $HOST "sudo -n -s bash" < script.sh
To make Edward Itrich's answer more scalable and geared towards frequent use, you can set up a system where you only run a one line script that can be quickly ported to any host, file or command in the following manner:
Create a script in your Scripts directory, if you have one, named whatever you want the script to be called (I use this format frequently: change one word for the script name, then create the file, set permissions and open it for editing):
newscript="runlocalscriptonremotehost.sh"
touch $newscript && chmod +x $newscript && nano $newscript
In nano, fill out the script as follows, placing the directory and name information of the script you want to run remotely in the variable lines of runlocalscriptonremotehost.sh (you only need to edit lines 1-3):
HOSTtoCONTROL="sudoadmin@192.168.0.254"
PATHtoSCRIPT="/home/username/Scripts/"
SCRIPTname="scripttorunremotely.sh"
scp $PATHtoSCRIPT$SCRIPTname $HOSTtoCONTROL:/tmp/ && ssh -t $HOSTtoCONTROL "sudo -s bash /tmp/$SCRIPTname"
Then just run:
sh ./runlocalscriptonremotehost.sh
Keep runlocalscriptonremotehost.sh open in a tabbed text editor for quick updating, create a bash alias for the script, and you have yourself an app-ified version of this frequently used operation.
First of all, divide your objective into 2 parts: 1) ssh to the host; 2) run the command you want as sudo. After you are certain that you can 1) access the host and 2) use sudo privileges, you can combine the two commands with &&. What x_cmd && y_cmd does is execute y_cmd only after x_cmd has exited successfully.
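Put together, that check-then-run pattern could look like this (sketch; user, host and script name are placeholders):
ssh user@host 'sudo -n true' && ssh user@host 'sudo -n bash -s' < script.sh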

rundeck - switch to root user in job script

Logging in via a terminal I can switch to the root user fine:
ubuntu#ip-10-0-0-70:~$ sudo -s
root#ip-10-0-0-70:~# whoami
root
So I created in rundeck a job script with this:
whoami;
echo "1st step";
sudo -s;
echo "2nd step";
And when I run this, it prints:
ubuntu
1st step
After printing '1st step' it gets stuck forever. It seems to be a problem with the sudo -s command.
I tried sudo -i but the same happens.
I tried sudo su - root but the same happens.
Rundeck is logging in as the ubuntu user, and so am I.
Any idea how to switch to root in a Rundeck job script?
This is the expected behaviour.
You are running a shell via 'sudo -s' and then not leaving/exiting it! So it waits forever for something that won't come.
You can probably add 'sudo' as an Advanced option of your script (where it says "Run script with an interpreter or prefix. E.g.: sudo, time:").
But it will run your whole script as root.
If you just want a specific command to be run as root, just prefix your command with sudo, like so:
sudo "enter_your_command_to_be_run_as_root_here"
Entering the command prefixed by sudo will generate the following error on some Linux distributions (those that enable requiretty by default):
sudo: sorry, you must have a tty to run sudo
You can enable sudo without a tty by running 'visudo' and commenting out the Defaults requiretty line, or removing 'requiretty' from the Defaults line.
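On such systems the relevant sudoers line (opened with visudo) looks like the following; commenting it out removes the tty requirement:
#Defaults    requiretty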
Details can be found here:
http://www.cyberciti.biz/faq/linux-unix-bsd-sudo-sorry-you-must-haveattytorun/
