ssh without key and collect the output using bash script - linux

I want to create a bash script that will log in to all the Linux servers in my network using ssh and collect the output of the 'uptime' command into a local file. There is no keypair installed between the local server and the remote servers, so I need to give the password (the username and password are the same for all the remote servers) in the script itself. I know this is not a secure way to do it, but it is just for learning purposes. I see the 'expect' command can be used for ssh login with a password, but I am confused about how to use it together with the 'uptime' command that provides the server status. So my requirements are:
1. I have a local server test1 which contains a text file 'server_status.txt'
2. I need a script on test1 that will try to log in to all the remote servers (say 192.168.0.1 to 192.168.0.50) using the same username and password. It will execute the command 'uptime' once logged in to each remote server and store the output in the local file 'server_status.txt'.

REVOKED: paste your public key into the server's /path2userthatshouldlogon/.ssh/authorized_keys and run your commands remotely using ssh user@host commandtoexecute. (Revoked because the question asks for the connection to be established without a key.)
UPDATE: have a look at sshpass if you really need to use passwords, which is NOT RECOMMENDED
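For the task in the question, a minimal sketch with sshpass could look like the following (USER and PASS are placeholders to replace; it assumes sshpass is installed on test1, and StrictHostKeyChecking is disabled only to avoid the interactive host-key prompt):
#!/bin/bash
# append each server's uptime to the local report file
USER=youruser
PASS=yourpassword
for i in $(seq 1 50); do
    sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 "$USER@192.168.0.$i" uptime >> server_status.txt
done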

Note: Doing this is poor practice. If you are testing around with this then you are learning a bad habit. Don't do this in production on servers you care about.
You'll want to capture the output of the expect call with command substitution, and be sure to set the $USER, $SERVER and $PASSWORD variables first, or just replace them inline:
# $PASSWORD must hold the remote account's password
uptime=$(expect -c "spawn ssh $USER@$SERVER uptime; expect \"assword:\"; send \"$PASSWORD\r\"; expect eof")
echo "$uptime"
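To cover the whole range from the question, the same call can be wrapped in a loop. This is only a sketch (USER and PASSWORD are placeholders), and note that expect echoes the spawned session, so the prompts end up in the file alongside the uptime output:
#!/bin/bash
USER=youruser
PASSWORD=yourpassword
for i in $(seq 1 50); do
    SERVER="192.168.0.$i"
    expect -c "spawn ssh -o StrictHostKeyChecking=no $USER@$SERVER uptime; expect \"assword:\"; send \"$PASSWORD\r\"; expect eof" >> server_status.txt
done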

Related

Run a command on multiple Linux servers

One of my tasks at work is to check the health/status of multiple Linux servers every day. I'm thinking of a way to automate this task (without having to log in to each server every day). I'm a newbie system admin, by the way. Initially, my idea was to set up a cron job that would run scripts and email the output. Unfortunately, it's not possible to send mail from the servers at the moment.
I was thinking of running the command in parallel, but I don't know how. For example, how can I see the output of df -h without logging in to the servers one by one?
You can run ssh with the -t flag to open an ssh session, run a command and then close the session. But to get this fully automated you should automate the login process to every server so that you don't need to type the password for each one.
So to run df -h on a remote server and then close the session, you would run ssh -t root@server.com "df -h". Then you can process that output however you want.
One way of automating this could be to write a bash script that runs this command for every server and processes the output to check the health of each one.
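A rough sketch of such a script, assuming key-based login is already in place and with placeholder host names (the -t flag is dropped here because df -h does not need a terminal):
#!/bin/bash
# collect disk usage from each server into one local report
servers="web1.example.com web2.example.com db1.example.com"
for host in $servers; do
    echo "=== $host ===" >> health_report.txt
    ssh "root@$host" "df -h" >> health_report.txt
done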
For further information about the -t flag, or about how to automate the login process for ssh, see:
https://www.novell.com/coolsolutions/tip/16747.html
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
You can use ssh tunnels, or just plain ssh, for this purpose. With an ssh tunnel you can redirect the output to your machine; alternatively, you can run ssh with the remote commands from your machine and get the output on your machine too.
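For example, running the remote command from your own machine and keeping the output locally can be as simple as this (the host and file name are placeholders):
ssh user@192.168.0.10 'df -h' > disk_usage_192.168.0.10.txt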
Please check the following pages for further reading:
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
https://www.google.hu/amp/s/www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/amp/
If you want to avoid manual login, use ssh keys.
Create a file /etc/sxx/hosts
populate like so:
[grp_ips]
1.1.1.1
2.2.2.2
3.3.3.3
Share your ssh key with all machines (see the sketch below).
Install sxx from package:
https://github.com/ericcurtin/sxx/releases
Then run command like so:
sxx username@grp_ips "whatever bash command"
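The key-sharing step could be done roughly like this; it is only a sketch, assuming no key exists yet and that password logins are still allowed for the initial copy:
ssh-keygen -t rsa
for ip in 1.1.1.1 2.2.2.2 3.3.3.3; do
    ssh-copy-id "username@$ip"
done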

Run a command on local machine while on ssh in bash [closed]

I want to run a command on the local system while I am ssh'd into a remote system in bash. Is there a way to do this? This is what I want:
#!/bin/bash
ssh mysystem@ip <<'SSH'
#Do something
#Run a command here on local machine and not on machine I have sshed to
#Do Something
exit
SSH
Edit: I want to echo some message, and since the echo command's output won't show from the remote machine, I want to run it on the local one.
When you are using SSH, the key sequence <enter>~ is an escape prefix that allows you to pause SSH and send key sequences to the ssh client on the host side.
The sequence <enter>~<ctrl + z> will pause (stop) the ssh-client job and drop you to a prompt on the calling system. Typing fg (if you are on a Unix shell) will resume your ssh session afterwards.
You can see the other ssh escape sequences available by typing <enter>~?.
The sequence <enter>~. will terminate the connection and is very handy when your session is locked up on the remote machine.
(Users with non-US keyboard layouts that use ~ as a dead key to compose accents and digraphs obviously have to type ~ twice in all of these sequences.)
These sequences are useful when you are operating the SSH session and typing commands yourself, not for scripting.
Since you seem to want a way to do that in scripts, the straightforward solution is to include an ssh command back to the originating host.
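In the simplest case that just means something like the following inside the here document. This is a sketch: youruser and your-local-hostname are placeholders, and it assumes the local machine runs sshd, is reachable from the remote one, and accepts key-based logins:
ssh mysystem@ip <<'SSH'
# this part runs on the remote machine
ssh youruser@your-local-hostname 'echo "this runs back on the local machine"'
SSH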
I have an approach which is pretty hacky, but it works.
Overview and security caveats
In brief, you use reverse SSH tunnelling to SSH back to your local machine and run a single command, and you connect back using your SSH keys so that no password is required.
NB This approach involves agent forwarding, which comes with a risk:
anyone with root access on the remote host can discreetly access your local SSH agent through the socket. They can use your keys to impersonate you on other machines on the network.
The risk is lessened in your case because the SSH session is only open for the duration of the command. But I'm not a security expert so can't comment further.
An alternative would be to generate a specific keypair just for this connection and use that, but I'm not sure how scriptable this would be.
The second security caveat is that this approach involves running an SSH server on your local machine. See my notes at the end of this answer for more on that.
Details
First of all, your SSH command needs some extra parameters:
ssh mysystem@ip -A -R 2900:localhost:22
-A forwards your credentials (detailed article on agent forwarding). You'll use them when connecting back to your local machine.
-R 2900:localhost:22 sets up the reverse tunnel. This means that on the remote machine you can run ssh -p2900 yourlocaluser@localhost and it'll SSH back to your local machine. Replace yourlocaluser with the user from your host machine (not the machine you're SSHing into). I picked 2900 as an arbitrary port. It needs to be higher than 1024, I think.
To avoid typing these every time, you can set them in your SSH config (~/.ssh/config) on your local machine. These are the relevant properties:
ForwardAgent yes
RemoteForward 2900 localhost:22
Also, you need to tell your local machine that SSH connections are allowed to connect to it using its own key pair(!) To do this, add the contents of your public key file (e.g. ~/.ssh/id_rsa.pub) to ~/.ssh/authorized_keys.
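For example (assuming an RSA key; adjust the file name if your key type differs):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys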
You can now connect to your remote machine and run a command like this to connect back to your local one:
ssh -t -p2900 yourlocaluser@localhost <command here>
Note, however, that the first time you connect back from the remote machine to your local one using the key, you'll get a warning that the host you're connecting to is unknown. Once you say that you want to continue connecting, it'll save the relevant details to ~/.ssh/known_hosts on the remote machine and not ask again.
You could log in and manually do an SSH to get the details saved. Alternatively, you can update the SSH command that you run on the remote machine, but it comes with an additional security caveat.
Here's the updated command:
ssh -o StrictHostKeyChecking=accept-new -t -p2900 yourlocaluser@localhost <command here>
The security risk is that you're accepting the key without reviewing it and making sure that it's what you're expecting, so you're vulnerable to man-in-the-middle attacks. Again, I'm no security expert, but given that you're connecting using an SSH tunnel rather than a regular SSH connection, I believe that this reduces the risk. If the known hosts file on the remote machine only contains the entry for your local machine, you could update your SSH config to replace the contents of that file with your local machine's key fingerprint from your local machine on login, and then remove -o StrictHostKeyChecking=accept-new from the above.
Note: If you're prompted for your password when trying to SSH back, that suggests that agent forwarding hasn't worked. You probably need to run ssh-add on your local machine or update your local SSH config for the host in question to include AddKeysToAgent yes.
Note about running sshd on your local machine
The above assumes that you're running sshd on your local machine, and thus accepting SSH connections to that machine. That's a security risk in itself. One way of reducing that risk is to specify that SSH is only allowed from localhost, which will work in this case because you're tunnelling back. You can find instructions on how to configure your local SSH server for this here: https://askubuntu.com/questions/179325/accepting-ssh-connections-only-from-localhost
You could also adapt the answer here and use netcat rather than SSH: https://superuser.com/a/1274810/126533
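Putting the pieces of this answer together, the whole round trip could be scripted roughly like this. It is only a sketch: mysystem@ip and yourlocaluser are placeholders, sshd must be running locally, and the key must already be loaded in your agent:
#!/bin/bash
# connect out with agent forwarding (-A) and a reverse tunnel (-R) back to local port 22,
# then call back through the tunnel from inside the here document
ssh -A -R 2900:localhost:22 mysystem@ip <<'SSH'
echo "this runs on the remote machine"
ssh -o StrictHostKeyChecking=accept-new -p2900 yourlocaluser@localhost 'echo "this runs back on the local machine"'
SSH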
If you can change the script, you can use an expect script for that - expect_example_and_tips
This allows you to start an "ssh process" to which you can send commands for the remote machine, while still running on the local machine.
Much easier in python though in my opinion - example:
#!/usr/bin/env python3
import sys
import pexpect

# shell prompt characters we expect to see after a command finishes
PROMPT = r"\$|\%|\>"
ssh_cmd = "ssh user@192.168.1.1"
try:
    ssh = pexpect.spawn(ssh_cmd)
    ssh.sendline("echo hello on remote")
    ssh.expect(PROMPT)
    print("hello on local machine")
    ssh.close()
except Exception as e:
    print(e)
    sys.exit(2)
If you want to (for argument's sake) run date locally, just don't quote the here document, and any command substitution will be executed locally.
ssh mysystem@ip <<SSH # notice absence of quotes
echo I am logged in from $(uname -n) since $(date)
SSH
Here, the uname and date commands will be executed locally, before the ssh command runs, whereas the echo in the here document will then execute remotely.
(As an aside, there is no need to explicitly exit at the end; the shell will exit when it reaches the end of input. It's hard to imagine a scenario where anything else would make any sense whatsoever.)
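For contrast, quoting the delimiter (as in the question) defers the expansion to the remote side:
ssh mysystem@ip <<'SSH'
echo I am logged in from $(uname -n) since $(date)
SSH
Here both uname and date run on the remote machine.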

SSH and execute any command returns "logname: no login name"

I am trying to SSH from one Unix host to another and execute some commands.
Whenever I run ssh hostname <any command> I get back "logname: no login name".
I can successfully just ssh hostname and then execute the same command without any issues. SSH is set up to use rsa keys for password-less connections.
Everything works fine using a different user account, so I suspect it might be related to the bash profile or something along those lines? I would appreciate any pointers.

Shell script for remote SSH

I'm new to shell scripts, and I have CentOS running. I want to write a shell script that sshes to a remote machine and executes a bunch of commands. The problem I'm facing is how to provide the username, the password, the remote machine address, and the private access key to the command that connects to the remote machine.
I've Googled and found some scripts, but all of them need a utility called expect, and I don't want to install any utility just to run my script. Is there a way to do this?
You can pass all you need in an ssh call by doing the following:
ssh -i private_key_path user_name@remote_machine "command"
If you're going to use this connection many times, and want to keep it configured, add the following lines to your .ssh/config file:
Host host_alias
    User user_name
    HostName remote_machine
    IdentityFile private_key_path
and then access the remote machine, and execute the command you want, by doing:
ssh host_alias "command"
Notice that the command, AFAIK, must be enclosed in quotes, as it must be treated as a single argument by ssh.
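If the "bunch of commands" from the question is more than one, a here document against the same alias keeps things readable; this is a sketch and the commands are placeholders:
ssh host_alias <<'EOF'
uname -a
df -h
uptime
EOF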

Script to automate two consecutive ssh connections

I know it is possible to write a shell script which passes your hard-coded password to an ssh connection for authentication (using expect). However, what I need is slightly more complicated.
At my university I have a desktop computer assigned to me. I can connect remotely to this computer by first making an ssh connection to a server, then making another ssh connection from that server to my assigned desktop computer. This goes like:
localuser@localcomputer:~$ ssh -X username@serveraddress
username@serveraddress password:
server$ ssh -X username@remotecomputeraddress
username@remotecomputeraddress password:
username@remotecomputer:~>
Is there a way to write a script which could automate the above (i.e. performing two consecutive ssh connections)?
Thanks in advance!
ps: Both the local and the remote computers are running on Linux.
You can do this interactively with:
ssh -t -X username@serveraddress ssh -t -X username@remotecomputeraddress
Note that this is not a pipe - the second ssh is the command to run on the connection created by the first ssh. The -t options are necessary to allocate the pseudo-ttys needed for interaction (password gathering as well as the ultimate goal, an interactive session on the remote system). Wrapping it up with expect is left as an exercise for the reader.... ;-)
Bonus points for setting up proper private/public key pairs and ssh-agent so that the passwords aren't necessary (unless, of course, that is disallowed for security reasons).
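That "exercise for the reader" might look roughly like this with expect; it is only a sketch, where the password and host names are placeholders and both prompts are assumed to end in "password:":
#!/usr/bin/expect -f
# drive the chained ssh command and answer both password prompts
set password "yourpassword"
spawn ssh -t -X username@serveraddress ssh -t -X username@remotecomputeraddress
expect "password:"
send "$password\r"
expect "password:"
send "$password\r"
interact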
Yes, you can do this.
Presuming you have your expect script in the file expect_script:
cat expect_script | ssh -X username@serveraddress sh -s
In this expect_script you must run ssh -X username@remotecomputeraddress.
And of course you can install public keys on both hosts and use passwordless authentication.
I wrote something to do this with bang paths a while back:
http://stromberg.dnsalias.org/~strombrg/deep-ssh.html
So you'd set up passwordless, passphraseless authentication (or use an agent for the passphrase), like:
http://stromberg.dnsalias.org/~strombrg/ssh-keys.html
And then:
deep-ssh username@serveraddress!username@remotecomputeraddress command
If bash complains about the !, you can just escape it with a backslash.
The old timers will recognize that this is how UUCP paths were specified.
