SSH to Remote Server Bash Programming - linux

I wrote a script that SSHes to a remote server and gets the status of a GPU server by running nvidia-smi; that is the description and purpose of the script. The question is this: I run it as root, which can SSH to the other server without a password, but ordinary users cannot. How can I change the script so that a user can run it and get the status, of course without entering a password for the remote server? What kind of authentication could I use?
Here is the script:
#!/bin/bash
HOSTS="gpuserver01\ngpuserver02"
SCRIPTS="nvidia-smi"
echo -e "which GPU server do you want to check?\n$HOSTS\n"-----------------""
echo "Please Enter Numebr of GPU Server"
read ans
#for HOSTNAME in $ans ; do
if [ "$ans" = "1" ]; then
    HOSTNAME="gpuserver01"
    ssh "${HOSTNAME}" "${SCRIPTS}"
else
    HOSTNAME="gpuserver02"
    ssh "${HOSTNAME}" "${SCRIPTS}"
fi
#done
Thank you.

You can let other users run your script with root privileges via sudo.
Run visudo and add the lines below:
Cmnd_Alias CUSTOM_CMD = /path/to/script/myscript.sh
myUser1 ALL = (root) NOPASSWD: CUSTOM_CMD
If the other users share a group, say otherUsers:
Cmnd_Alias CUSTOM_CMD = /path/to/script/myscript.sh
%otherUsers ALL = (root) NOPASSWD: CUSTOM_CMD
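Users covered by those entries can then invoke the script through sudo without being asked for a password, for example:
# run as myUser1 (or a member of otherUsers); NOPASSWD suppresses the password prompt
sudo /path/to/script/myscript.sh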

Add a normal user on the remote host (for example gpuuser01).
Create SSH keys for that user [1], then check that you can log in to the remote host without a password.
Create a new script on the remote host (e.g. gpuserver01), with the setuid flag set [2], that runs nvidia-smi.
Now you can connect to the remote host and execute that script as root without a password.
Rewrite your script (the one from the question) accordingly; a rough sketch follows the links below.
[1] https://kb.iu.edu/d/aews
[2] http://www.cyberciti.biz/faq/unix-bsd-linux-setuid-file/
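For illustration, a minimal sketch of the rewritten script under those assumptions (the wrapper path /usr/local/bin/gpustatus on the remote hosts is hypothetical):
#!/bin/bash
# Assumes each user has their own SSH key installed on the GPU servers and that a
# setuid wrapper (hypothetical /usr/local/bin/gpustatus) on the remote host runs nvidia-smi.
HOSTS="gpuserver01\ngpuserver02"
REMOTE_CMD="/usr/local/bin/gpustatus"
echo -e "Which GPU server do you want to check?\n$HOSTS\n-----------------"
echo "Please enter the number of the GPU server"
read ans
if [ "$ans" = "1" ]; then
    HOSTNAME="gpuserver01"
else
    HOSTNAME="gpuserver02"
fi
ssh "${HOSTNAME}" "${REMOTE_CMD}"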

Related

ssh-copy-id fails when run from within a remote session

I have a task to copy ssh keys from one node to all others in an array. For this, I wrote a simple bash script which copies itself to other nodes and runs it there. What confuses me is the fact that ssh-copy-id works fine on the node where the script is executed manually but it fails if run remotely in an ssh session. Here’s the script:
1 #!/bin/bash
2 # keys-exchange.sh
4 nodes=( main worker-01 worker-02 worker-03 )
6 for n in $( echo "${nodes[@]}" ); do
7 [ $n != $HOSTNAME ] && ssh-copy-id $n
8 done
10 if [ -z $REMOTE ]; then
11 for n in $( echo ${nodes[@]} ); do
12 if [ $n != $HOSTNAME ]; then
13 scp $0 $USER@$n:$0 > /dev/null
14 ssh $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
15 fi
16 done
17 fi
The code in rows 6-8 works fine copying the ssh key to all nodes other than itself. Then, if the REMOTE variable is not set, code in rows 11-16 copies the script to remote nodes (except the node it’s running on, row 12) and runs it there. In row 14, I set and pass the variable REMOTE to skip the code block in rows 10-17 (so the script copies itself only from the source node to others), and the variable HOSTNAME because I found it’s not set in an ssh session. The user’s name and the script path are completely the same on the source node and all destination nodes.
When running on the source node, it works properly, asking for a confirmation and the remote host's password. But the same script, which has just run successfully on the source node, fails when run in the remote ssh session: ssh-copy-id fails with the following error:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/username/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: ERROR: Host key verification failed.
At that moment, no .ssh/known_hosts file is present on the remote node, so I can't do ssh-keygen -R. What am I missing, and how can I make it work?
ssh $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
Try running ssh with the "-tt" option to request a PTY (pseudo-TTY) for the remote session:
ssh -tt $USER@$n "REMOTE=yes HOSTNAME=$n $0 ; rm -f $0"
    ^^^
In the case that you're describing, you're launching ssh on the remote system to connect to a third system. The ssh instance doesn't have a saved copy of the third host's host key. So you'd normally expect ssh to prompt the user whether to continue connecting to the third host. Except that it's not prompting the user--it's just refusing to connect to the third host.
When ssh is invoked with a command to run on the remote system, by default it runs that command without a TTY. In this case, the remote ssh instance sees that it's running without a TTY and runs non-interactively. When it's non-interactive, it doesn't prompt the user for things like passwords and whether to accept a host key or not.
Running the local ssh instance with "-tt" causes it to request a PTY for the remote session. So the remote ssh instance will have a TTY and it will prompt the user--you--for things like host key confirmations.
ssh-copy-id is not copying your keys to remote hosts; it adds them to ~/.ssh/authorized_keys there. When you then jump to that remote host there are no keys on it (or are there?), so there is nothing to copy further. Also, if ssh-copy-id is run without the -i option it will copy (add to authorized_keys) all .pub keys from your ~/.ssh dir, which may not be desired, so I suggest running it like this: ssh-copy-id -i $key $host
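For example, adapted to the loop from the question (the key path is an assumption):
key=~/.ssh/id_rsa.pub                      # assumed location of the public key
for n in "${nodes[@]}"; do
    [ "$n" != "$HOSTNAME" ] && ssh-copy-id -i "$key" "$n"
done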
Be sure that on the destination side, the /etc/ssh/sshd_config is configured to accept the type of key that was generated.
PubkeyAcceptedKeyTypes ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa
I generated the key using ssh-keygen -t rsa -b 4096; however, the line above did not originally include the ,ssh-rsa at the end, so even though ssh-copy-id updated my destination, sshd did not accept RSA keys. Once I added ,ssh-rsa and did systemctl restart sshd, it worked!

How can a BASH script automatically elevate to root on a remote server, without using sudoers nopasswd option?

Hello!
Maybe you can help me with this. I can't find an answer to my specific questions, because there is an obvious solution which I'm not allowed to use. But first things first, the context:
In my company, which is a service provider, we administer a bunch of Linux servers. Some of my colleagues have for a long time been running a BASH script from a source server that performs some tasks over SSH on a number of remote Linux servers. The tasks it performs have to be executed as root, so the script authenticates from the source server to the remote Linux servers as root via SSH (the remote servers have the source server's public SSH key). Then a new security policy was enforced, and root login over SSH is now denied, so the method described above no longer works.
The solution I keep finding, which we are by policy not allowed to do, is to create an entry in the sudoers file allowing sudo to root without password for the specific user.
Those are the terms and we have to obey them. The only procedure that is allowed is to log on to the target server with your personal user, and then sudo su - to root WITH a password.
Cocky as I apparently was, I said, "It should be possible to have the script do that automatically", and the management was like "Cool, you do it then!" and now I'm here at Stack Overflow,
because I know this is where bright minds are.
So this is exactly what I want to do with a BASH script. I do not know whether it's possible or how it's done, and I really hope you can help me out:
Imagine Bob. He's logged into the source server and wants to execute the script against a target server. Knowing that root over SSH doesn't work, the authorization part of the script has been upgraded. When Bob runs the script, it prompts him for his password. The password is then stored in a variable (encrypted would be amazing), and the script logs on to the target server as his user (which is allowed) and then automatically elevates him to root on the target server using the password he entered on the source server. Now the script is root and runs its tasks as usual.
Can it be done with BASH? And how?
UPDATE:
The Script:
## define code to be run on the remote system
remote_script='sudo -S hostname'
## local system
# on the local machine: prompt the user for the password
read -r -p "Enter password for $host: " password
# ...and write the password, followed by a NUL delimiter, to stdin of ssh
ssh -t 10.0.1.40 "$remote_script" < <(printf '%s\0' "$password")
The error:
[worker@source ~]$ sh elevate.sh
Enter password for : abc123
elevate.sh: line 10: syntax error near unexpected token `<'
elevate.sh: line 10: `ssh -t 10.0.1.40 "$remote_script" < <(printf '%s\0' "$password")'
First: Because it exposes plaintext passwords to the remote system (where they can be read by an attacker using diagnostic tools such as strace or sysdig), this is less secure than correctly using the NOPASSWD: flag in sudoers. If your security team aren't absolute idiots, they'll approve a policy exemption (perhaps with some appropriate controls, such as having a dedicated account with access to a setuid binary specific to the command being run, with authentication to that account being performed via public key authentication w/ the private key stored encrypted) rather than approving use of this hack.
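As an illustration of such a narrowly scoped exemption, a sudoers entry can be restricted to a single fixed command for a dedicated account (the account name and command path below are assumptions, not part of the question):
# hypothetical /etc/sudoers.d entry: passwordless root for one exact command only
maintuser ALL = (root) NOPASSWD: /usr/local/sbin/maintenance_task.sh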
Second: Here's your hack.
## define code to be run on the remote system
remote_script='sudo -S remote_command_here'
## local system
# on the local machine: prompt the user for the password
read -r -p "Enter password for $host: " password
# ...and write the password, followed by a NUL delimiter, to stdin of ssh
ssh "$host" "$remote_script" < <(printf '%s\0' "$password")
Alright, this is not the final answer, but I think I'm getting close, with the great help of CharlesDuffy.
So far I can run the script without errors against a remote server that already has the public key of my source server. However, the command I execute doesn't create the file on the remote system as I tell it to.
The script does seem to run, though, and the password seems to be accepted by the remote system.
Also, I have to change the line "Defaults requiretty" to "Defaults !requiretty" in the sudoers file on the remote host, otherwise it tells me that I need a TTY to run sudo.
#!/bin/bash
## define code to be run on the remote system
remote_script='sudo -S touch /elevatedfile'
## local system
# on the local machine: prompt the user for the password
read -r -p "Enter password for $host: " password
# ...and write the password, followed by a NUL delimiter, to stdin of ssh
ssh -T 10.0.1.40 "$remote_script" < <(printf '%s\0' "$password")
UPDATE: When I tail /var/log/secure on the remote host after executing the script, I get the following, which suggests the password is not being accepted.
May 11 20:15:20 target sudo: pam_unix(sudo:auth): conversation failed
May 11 20:15:20 target sudo: pam_unix(sudo:auth): auth could not identify password for [worker]
May 11 20:15:20 target sshd[3634]: Received disconnect from 10.0.1.39: 11: disconnected by user
May 11 20:15:20 target sshd[3631]: pam_unix(sshd:session): session closed for user worker
What I see on the source server, from where I launch the script:
[worker@source ~]$ bash elevate.sh
Enter password for : abc123
[sudo] password for worker:
[worker#source ~]$
Just make a daemon or cron script running as root that in turn checks for any new scripts in a specified secure location (i.e. a DB that it only has READ access to), and if they exist, downloads and executes them.
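A minimal sketch of that approach, assuming a file-based drop directory rather than a DB (all paths and the schedule are made up for illustration):
#!/bin/bash
# /usr/local/sbin/run_pending_scripts.sh (hypothetical path): run and archive any
# new scripts found in the drop location; install it in root's crontab, e.g.
#   */5 * * * * /usr/local/sbin/run_pending_scripts.sh
for f in /var/spool/pending-scripts/*.sh; do
    [ -e "$f" ] || continue                          # glob matched nothing
    bash "$f" && mv "$f" /var/spool/pending-scripts/done/
done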

Crontab to send dump data to windows machine

I have a MySQL database on a Linux machine which should be dumped using crontab, and the dump data should be stored directly on a remote Windows system. Is this possible? If yes, how?
You will need a script similar to the one below.
It would be best if you tested the script before running it from cron.
The scp command will prompt for the user's password on the destination machine unless public key authentication has been set up for that machine. For this to work from cron, the scp command must be able to copy without the user entering a password.
Once it works, set up the crontab entry, specifying the full path of the script in the entry (an example entry is shown after the script below).
#!/bin/bash
export DB_DUMP_DIR=/home/database_dump
export DB_NAME=database_name_$(date '+%Y_%m_%d').sql
# Note: "-p" with no password makes mysqldump prompt interactively; for cron,
# keep the credentials in ~/.my.cnf (or similar) so the dump runs unattended.
mysqldump -u root -p database_name > "${DB_DUMP_DIR}/${DB_NAME}"
if [ $? -eq 0 ]; then
    scp "${DB_DUMP_DIR}/${DB_NAME}" user@windows_machine:
else
    echo "Error generating database dump"
fi
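For example, assuming the script above is saved as /home/database_dump/dump_db.sh (path, schedule and log file are placeholders), the crontab entry could be:
# run the dump every night at 02:00 and keep a log of the output
0 2 * * * /home/database_dump/dump_db.sh >> /var/log/db_dump.log 2>&1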

Linux script - password step cuts the flow

Let's assume the script I want to write SSHes to 1.2.3.4 and then invokes ls.
The problem is that when the line "ssh 1.2.3.4" is invoked, a password is required, so the flow stops; even after I enter the password, the script won't continue.
How can I make the script continue after the password is given?
Thanks!
You want to do public key authentication. Here are some resources which should get you going.
http://magicmonster.com/kb/net/ssh/auto_login.html
http://www.cs.rpi.edu/research/groups/vision/doc/auto/ssh/ssh_public_key_authentication.html
I would post a couple more links, but I don't have enough reputation points. ;) Just google on "SSH automated login" or "SSH public key authentication" if you need more help.
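The usual steps look roughly like this (user, host and key type are placeholders):
# generate a key pair on the local machine (an empty passphrase allows fully unattended logins)
ssh-keygen -t rsa -b 4096
# install the public key into ~/.ssh/authorized_keys on the remote host
ssh-copy-id user@1.2.3.4
# from now on this should not prompt for a password
ssh user@1.2.3.4 ls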
Actually, you're trying to run ls locally, but you have an ssh session open, so ls won't run until that session is closed. If you want to run ls remotely, you should use
ssh username@host COMMAND
where COMMAND is the command you want to run. The ssh session will finish as soon as the command completes, and you can capture its output normally.
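For example, to run the command remotely and capture its output locally (user and host are placeholders):
# runs ls on the remote host; the session ends when ls finishes
output=$(ssh username@1.2.3.4 ls)
echo "$output"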
I would suggest using RSA (public key) authentication for scripts that need ssh.
I just tried this script:
#!/bin/sh
ssh vps1 ls
mkdir temp
cd temp
echo test > file.txt
And it works: I can connect to my server and list my home directory. Then, locally, it creates the temp dir, cds into it, and creates file.txt with 'test' inside.
Write a simple login bash script named login_to and give it exec permissions (chmod 744 login_to):
#!/bin/bash
if [ "$1" = 'srv1' ]; then
    echo 'srv1-pass' | pbcopy
    ssh root@11.11.11.11
fi
if [ "$1" = 'foo' ]; then
    echo 'barbaz' | pbcopy
    ssh -t dux@22.22.22.22 'cd ~/somedir/someotherdir; bash'
fi
now use it like this
login_to srv1
login_to foo
When asked for the password, just paste it (Ctrl+V or Cmd+V) and you will be logged in.

How to automatically log in to an ssh server on opening a new terminal

I have a vncsession running on a server. Now, whenever I open a new terminal, I have to ssh to another server.
So far I have been able to set up ssh so that it doesn't ask for a password for this particular server, but I have not been able to do this automatically in a new terminal. If I add the ssh command to .tcshrc, it goes into a recursive loop: ssh into the server, execute .tcshrc, ssh into the server, and so on.
I'm using Linux, cshell, Gnome setup.
You should do a hostname check, or some other check that allows you to recognize the difference between the client and the target. I don't know cshell scripting, but in sh you would want to do something like:
# Shell:
if [ "$HOSTNAME" = "vncserver" ]; then
    ssh "$TARGET_BOX"
fi
# Cshell:
if ( $HOSTNAME == 'vncserver' ) then
    ssh $TARGET_BOX
endif
This would ensure that only the vncserver would ssh to the remote system and that the remote system won't ssh to itself.
I use ssh-agent and keychain, which work like a charm while keeping some safety on your servers. Here is another keychain tutorial (sorry, written only in French). Just put your keychain commands in your bashrc or profile that runs at tty start.
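For example, a minimal keychain setup assuming a key named id_rsa (the key name is a placeholder):
# in ~/.bashrc (or the equivalent startup file for your shell): start or reuse an
# ssh-agent and load the key once, so later ssh calls don't prompt for a password
eval "$(keychain --eval --quiet id_rsa)"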
