Running keychain as different user - linux

In order to improve security a bit, I'd like to run the keychain agent as a different user. This should prevent anyone who hijacks my account from obtaining the actual private key, while still letting me use it to authenticate ssh and scp connections.
What have I tried?
I created a user called agent who should store the private key and run the ssh-agent process. I created a script file to set up the right permissions for the socket:
#!/bin/sh
# Start (or reuse) the agents via keychain and capture the environment it prints.
export EVAL=$(keychain --eval -q)
eval $EVAL
# Relax permissions on the agent directories and sockets so that group members can reach them.
chmod 770 $(dirname $SSH_AUTH_SOCK) $(dirname $GPG_AGENT_INFO)
chmod 660 $SSH_AUTH_SOCK $(echo $GPG_AGENT_INFO | sed 's/:.*//')
# Print the environment settings so the caller can eval them as well.
echo $EVAL
I call that script from my .bashrc, eval'ing its output.
But when I now connect to a server via ssh, I get
$ ssh server
Error reading response length from authentication socket.
Any hints?

keychain seems to use either an already running ssh-agent or gpg-agent, and start one if needed.
ssh-agent checks whether the user id of the running agent process matches the id of the user connecting through the unix domain socket (with the exception of root). If you run the agent in debug mode you'll see the corresponding error message. In that case the socket is immediately closed, so you get the error message you mention above; you're probably using ssh-agent on your system. That means what you're trying to do won't be possible with ssh-agent (unless you modify it).
It should work if you use gpg-agent with the --enable-ssh-support option as a replacement for ssh-agent, but you should be aware that this setup doesn't really increase security. With the permissions you're trying to set, every user with access rights to the socket could authenticate as you using the added key once it has been unlocked, so it's actually less secure.
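A minimal sketch of that setup, assuming GnuPG 2.1 or later (where gpgconf can report the agent's ssh socket):
# start gpg-agent with ssh-agent emulation and point SSH_AUTH_SOCK at its socket
gpg-agent --daemon --enable-ssh-support
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
ssh-add ~/.ssh/id_rsa    # the key is now held (and unlocked) by gpg-agent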

Related

Shell Script: Get an SSH banner from a bunch of systems

I'm trying to get an SSH banner from a bunch of systems. Unfortunately, I need to enter the password before the script can move on to the next system.
user@pc:~$ for i in {1..10}; do ssh 192.168.0.$i; done
WARNING: Unauthorized access to this system is forbidden and will be
prosecuted by law. By accessing this system, you agree that your actions
may be monitored if unauthorized usage is suspected.
user@192.168.0.1's password:
Is there a way to ignore the password prompt and proceed to the next system in order to get the banner alone?
Disable password authentication, that way ssh will not try to get the password from you.
ssh example.com -o PasswordAuthentication=no
You can explicitly disable all authentication methods to make sure ssh doesn't accidentally open a shell (and thus block).
Or you can just run the ssh under timeout 10 to make it exit after a specified amount of time.
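Putting those pieces together, a minimal sketch of the loop (the output file name is an assumption):
for i in {1..10}; do
    # BatchMode=yes suppresses any password prompt; timeout guards against hangs.
    # The pre-authentication banner is printed on stderr, hence 2>&1.
    timeout 10 ssh -o BatchMode=yes -o PasswordAuthentication=no "192.168.0.$i" true 2>&1
done > banners.txt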

Run a command on local machine while on ssh in bash [closed]

I want to run a command on the local system while I'm ssh'd into a remote system in bash. Is there a way to do this? This is what I want:
#!/bin/bash
ssh mysystem@ip <<'SSH'
#Do something
#Run a command here on local machine and not on machine I have sshed to
#Do Something
exit
SSH
Edit: I want to echo some message, and since the echo output from the remote machine won't show, I want to run it locally.
When you are using SSH, the key sequence <enter>~ is an escape prefix that lets you pause SSH and send key sequences to the ssh client itself on the local side.
The sequence <enter>~<ctrl + z> will pause (stop) the ssh-client job and drop you to a prompt on the calling system. Typing fg (if you are in a Unix shell) will resume your ssh session afterwards.
You can see the other available ssh escape sequences by typing <enter>~?.
The sequence <enter>~. will terminate the connection and is very handy when your session is locked up on the remote machine.
(Users with non-US keyboard layouts that use ~ as a dead key to compose accents and digraphs obviously have to type ~ twice in all of these sequences.)
These sequences are useful when you are operating the SSH session and typing commands yourself, not for scripting.
Since you seem to want a way to do that in scripts, the straightforward solution is to include an ssh command back to the originating host.
I have an approach which is pretty hacky, but it works.
Overview and security caveats
In brief, you use reverse SSH tunnelling to SSH back to your local machine and run a single command, and you connect back using your SSH keys so that no password is required.
NB This approach involves agent forwarding, which comes with a risk:
anyone with root access on the remote host can discreetly access your local SSH agent through the socket. They can use your keys to impersonate you on other machines on the network.
The risk is lessened in your case because the SSH session is only open for the duration of the command. But I'm not a security expert so can't comment further.
An alternative would be to generate a specific keypair just for this connection and use that, but I'm not sure how scriptable this would be.
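A rough sketch of generating such a dedicated keypair (the file name is a placeholder, and the private key would then have to live on the remote host):
# on the local machine: create the dedicated key and authorize it for logins back to this machine
ssh-keygen -t ed25519 -f ~/.ssh/id_backchannel -N ''
cat ~/.ssh/id_backchannel.pub >> ~/.ssh/authorized_keys
# after copying the private key to the remote host, use it there with:
#   ssh -i ~/.ssh/id_backchannel -p2900 yourlocaluser@localhost <command here>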
The second security caveat is that this approach involves running an SSH server on your local machine. See my notes at the end of this answer for more on that.
Details
First of all, your SSH command needs some extra parameters:
ssh mysystem@ip -A -R 2900:localhost:22
-A forwards your credentials (detailed article on agent forwarding). You'll use them when connecting back to your local machine.
-R 2900:localhost:22 sets up the reverse tunnel. This means that on the remote machine you can run ssh -p2900 yourlocaluser@localhost and it'll SSH back to your local machine. Replace yourlocaluser with the user from your host machine (not the machine you're SSHing into). I picked 2900 as an arbitrary port. It needs to be higher than 1024, I think.
To avoid typing these every time, you can set them in your SSH config (~/.ssh/config) on your local machine. These are the relevant properties:
ForwardAgent yes
RemoteForward 2900 localhost:22
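Put together, the stanza in ~/.ssh/config might look like this (the Host alias is a placeholder):
Host mysystem
    HostName ip
    User mysystem
    ForwardAgent yes
    RemoteForward 2900 localhost:22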
Also, you need to tell your local machine that SSH connections are allowed to connect to it using its own key pair(!) To do this, add the contents of your public key file (e.g. ~/.ssh/id_rsa.pub) to ~/.ssh/authorized_keys.
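For example:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys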
You can now connect to your remote machine and run a command like this to connect back to your local one:
ssh -t -p2900 yourlocaluser@localhost <command here>
Note, however, that the first time you connect back from the remote machine to your local one using the key, you'll get a warning that the host you're connecting to is unknown. Once you say that you want to continue connecting, it'll save the relevant details to ~/.ssh/known_hosts on the remote machine and not ask again.
You could log in and manually do an SSH to get the details saved. Alternatively, you can update the SSH command that you run on the remote machine, but it comes with an additional security caveat.
Here's the updated command:
ssh -o StrictHostKeyChecking=accept-new -t -p2900 yourlocaluser@localhost <command here>
The security risk is that you're accepting the key without reviewing it and making sure that it's what you're expecting, so you're vulnerable to man-in-the-middle attacks. Again, I'm no security expert, but given that you're connecting using an SSH tunnel rather than a regular SSH connection, I believe that this reduces the risk. If the known hosts file on the remote machine only contains the entry for your local machine, you could update your SSH config to replace the contents of that file with your local machine's key fingerprint from your local machine on login, and then remove -o StrictHostKeyChecking=accept-new from the above.
Note: If you're prompted for your password when trying to SSH back, that suggests that agent forwarding hasn't worked. You probably need to run ssh-add on your local machine or update your local SSH config for the host in question to include AddKeysToAgent yes.
Note about running sshd on your local machine
The above assumes that you're running sshd on your local machine, and thus accepting SSH connections to that machine. That's a security risk in itself. One way of reducing that risk is to specify that SSH is only allowed from localhost, which will work in this case because you're tunnelling back. You can find instructions on how to configure your local SSH server for this here: https://askubuntu.com/questions/179325/accepting-ssh-connections-only-from-localhost
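One way to do that (a sketch; see the linked answer for details) is to make sshd listen only on the loopback interface in /etc/ssh/sshd_config, which still works here because the tunnel connects to localhost:
ListenAddress 127.0.0.1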
You could also adapt the answer here and use netcat rather than SSH: https://superuser.com/a/1274810/126533
If you can change the script, you can use an expect script for that - expect_example_and_tips
This allows you to start an "ssh process" to which you can send commands for the remote machine, while still running on the local machine.
Much easier in python though in my opinion - example:
#!/usr/bin/env python
import sys
import pexpect

# assumes key-based login, i.e. no password prompt from ssh
PROMPT = r"\$|\%|\>"
ssh_cmd = "ssh user@192.168.1.1"
try:
    ssh = pexpect.spawn(ssh_cmd)
    ssh.expect(PROMPT)                    # wait for the remote prompt
    ssh.sendline("echo hello on remote")  # runs on the remote machine
    ssh.expect(PROMPT)
    print("hello on local machine")       # runs on the local machine
    ssh.close()
except Exception as e:
    print(e)
    sys.exit(2)
If you want to (for argument's sake) run date locally, just don't quote the here document, and any command substitution will be executed locally.
ssh mysystem@ip <<SSH # notice absence of quotes
echo I am logged in from $(uname -n) since $(date)
SSH
Here, the uname and date commands will be executed locally, before the ssh command runs, whereas the echo in the here document will then execute remotely.
(As an aside, there is no need to explicitly exit at the end; the shell will exit when it reaches the end of input. It's hard to imagine a scenario where anything else would make any sense whatsoever.)

How can a BASH script automatically elevate to root on a remote server, without using sudoers nopasswd option?

Hello!
Maybe you can help me with this. I can't find an answer to my specific questions, because there is an obvious solution which I'm not allowed to use. But first things first, the context:
In my company, which is a service provider, we administer a bunch of Linux servers. Some of my colleagues have for a long time been running a BASH script from a source server that performs some tasks over SSH on a number of remote Linux servers. The tasks it performs have to be executed as root, so the script authorizes the source server as root on the remote Linux servers via SSH (the remote servers have the source server's public SSH key). Then a new security policy was enforced and root login over SSH is now denied, so the mentioned method no longer works.
The solution I keep finding, which we are by policy not allowed to do, is to create an entry in the sudoers file allowing sudo to root without password for the specific user.
These are the terms and we have to obey them. The only procedure that is allowed is to log on to the target server with your personal user, and then sudo su - to root WITH a password.
Cocky as I apparently was, I said, "It should be possible to have the script do that automatically", and the management was like "Cool, you do it then!" and now I'm here at Stack Overflow,
because I know this is where bright minds are.
So this is exactly what I want to do with a BASH script, and I do not know if it's possible or how it's done, I really hope you can help me out:
Imagine Bob, he's logged into the source server, and he wants to
execute the script against a target server. Knowing that root over SSH
doesn't work, the authorization part of the script has been upgraded.
When Bob runs the script, it prompts him for his password. The
password is then stored in a variable (encrypted would be amazing) and
the script then logs on the target server as his user (which is
allowed) and then automatically elevates him to root on the target
server using the password he entered on the source server. Now the
script is root and it runs its tasks as usual.
Can it be done with BASH? and how?
UPDATE:
The Script:
## define code to be run on the remote system
remote_script='sudo -S hostname'
## local system
# on the local machine: prompt the user for the password
read -r -p "Enter password for $host: " password
# ...and write the password, followed by a NUL delimiter, to stdin of ssh
ssh -t 10.0.1.40 "$remote_script" < <(printf '%s\0' "$password")
The error:
[worker@source ~]$ sh elevate.sh
Enter password for : abc123
elevate.sh: line 10: syntax error near unexpected token `<'
elevate.sh: line 10: `ssh -t 10.0.1.40 "$remote_script" < <(printf '%s\0' "$password")'
First: Because it exposes plaintext passwords to the remote system (where they can be read by an attacker using diagnostic tools such as strace or sysdig), this is less secure than correctly using the NOPASSWD: flag in sudoers. If your security team aren't absolute idiots, they'll approve a policy exemption (perhaps with some appropriate controls, such as having a dedicated account with access to a setuid binary specific to the command being run, with authentication to that account being performed via public key authentication w/ the private key stored encrypted) rather than approving use of this hack.
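For illustration, a correctly scoped NOPASSWD entry of that kind (the user name and command path are hypothetical) would look like this in sudoers, limiting the passwordless rule to a single command:
worker ALL = (root) NOPASSWD: /usr/local/sbin/approved_task.sh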
Second: Here's your hack.
## define code to be run on the remote system
remote_script='sudo -S remote_command_here'
## local system
# on the local machine: prompt the user for the password
read -r -p "Enter password for $host: " password
# ...and write the password, followed by a NUL delimiter, to stdin of ssh
ssh "$host" "$remote_script" < <(printf '%s\0' "$password")
Alright, this is not the final answer, but I think I'm getting close, with the great help of CharlesDuffy.
So far I can run the script without errors on a remote server that already has the public key of my source server. However, the command I execute doesn't create the file on the remote system as I tell it to.
The script does seem to run, and the password seems to be accepted by the remote system.
Also, I have to change the line "Defaults requiretty" to "Defaults !requiretty" in the sudoers file on the remote host, otherwise it tells me that I need a TTY to run sudo.
#!/bin/bash
## define code to be run on the remote system
remote_script='sudo -S touch /elevatedfile'
## local system
# on the local machine: prompt the user for the password
read -r -p "Enter password for $host: " password
# ...and write the password, followed by a NUL delimiter, to stdin of ssh
ssh -T 10.0.1.40 "$remote_script" < <(printf '%s\0' "$password")
UPDATE: When I tail /var/log/secure on the remote host I get the following after executing the script, which seems like the password is not being accepted.
May 11 20:15:20 target sudo: pam_unix(sudo:auth): conversation failed
May 11 20:15:20 target sudo: pam_unix(sudo:auth): auth could not identify password for [worker]
May 11 20:15:20 target sshd[3634]: Received disconnect from 10.0.1.39: 11: disconnected by user
May 11 20:15:20 target sshd[3631]: pam_unix(sshd:session): session closed for user worker
What I see on the source server, from where I launch the script:
[worker@source ~]$ bash elevate.sh
Enter password for : abc123
[sudo] password for worker:
[worker@source ~]$
Just make a daemon or cron job running as root that checks a specified secure location (e.g. a DB it only has READ access to) for new scripts, and, if any exist, downloads and executes them.
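A rough sketch of that idea (the paths and the fetch mechanism are placeholders):
# /etc/cron.d/approved-jobs -- poll the secure store every 5 minutes as root
*/5 * * * * root /usr/local/sbin/run_approved_jobs.sh
# run_approved_jobs.sh would download any newly approved scripts from the read-only store and execute them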

ssh without key and collect the output using bash script

I want to create a bash script that will log in to all the Linux servers in my network using ssh and collect the output of the 'uptime' command into a local file. There is no keypair installed between the local server and the remote servers, so I need to give the password (the username and password are the same for all the remote servers) in the script itself. I know this is not a secure way to do it, but it is just for learning purposes. I see that the 'expect' command can be used for ssh login with a password, but I'm confused about how to use it together with the 'uptime' command that reports the server status. So my requirement is:
1. I have local server test1 which contains a text file 'server_status.txt'
2. I need a script in test1 that will try to login to all the remote servers (say 192.168.0.1 to 192.168.0.50) using the same username and password. It will execute the command 'uptime' once logged in to the remote servers and store the output to the local file 'server_status.txt'
REVOKED: paste your public key into the server's /path2userthatshouldlogon/.ssh/authorized_keys and run your commands remotely using ssh user@host commandtoexecute. Revoked because the connection is supposed to be established without a key.
UPDATE: have a look at sshpass if you really need to use passwords, which is NOT RECOMMENDED.
Note: Doing this is poor practice. If you are testing around with this then you are learning a bad habit. Don't do this in production on servers you care about.
You'll want to capture the output of the expect call with command substitution ($(...)), and be sure to set the $USER and $SERVER variables or just replace them:
# assumes $USER, $SERVER and $PASSWORD are set; double quotes are needed so the shell expands them
uptime=$(expect -c "spawn ssh $USER@$SERVER uptime; expect -re {[Pp]assword:}; send \"$PASSWORD\r\"; expect eof")
echo "$uptime"
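For the full requirement (logging in to 192.168.0.1 through 192.168.0.50 and appending to server_status.txt), here is a sketch using sshpass, which is simpler than expect for this; the user name and the PASSWORD variable are assumptions:
#!/bin/bash
# collect uptime from every host using the shared credentials
out=server_status.txt
: > "$out"
for i in $(seq 1 50); do
    host="192.168.0.$i"
    printf '%s: ' "$host" >> "$out"
    sshpass -p "$PASSWORD" ssh -o StrictHostKeyChecking=no "user@$host" uptime >> "$out" 2>&1
done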

Script to automate two consecutive ssh connections

I know it is possible to write a shell script which passes your hard-coded password to an ssh connection for authentication (using expect). However, what I need is slightly more complicated.
At my university I have a desktop computer assigned to me. I can connect to this computer remotely by first making an ssh connection to a server, then making another ssh connection from that server to my assigned desktop computer. This goes like:
localuser@localcomputer:~$ ssh -X username@serveraddress
username@serveraddress password:
server$ ssh -X username@remotecomputeraddress
username@remotecomputeraddress password:
username@remotecomputer:~>
Is there a way to write a script which could automate the above (i.e. performing two consecutive ssh connections)?
Thanks in advance!
ps: Both the local and the remote computers are running on Linux.
You can do this interactively with:
ssh -t -X username@serveraddress ssh -t -X username@remotecomputeraddress
Note that this is not a pipe - the second ssh is the command to run on the connection created by the first ssh. The -t options are necessary to allocate the pseudo-ttys needed for interaction (password gathering as well as the ultimate goal, an interactive session on the remote system). Wrapping it up with expect is left as an exercise for the reader.... ;-)
Bonus points for setting up proper private/public key pairs and ssh-agent so that the passwords aren't necessary (unless, of course, that is disallowed for security reasons).
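A rough sketch of that expect wrapper (the prompts and variable names are assumptions, and special characters in the passwords would need extra quoting):
#!/bin/bash
read -r -s -p "Server password: " pass1; echo
read -r -s -p "Desktop password: " pass2; echo
expect -c "
    spawn ssh -t -X username@serveraddress ssh -t -X username@remotecomputeraddress
    expect \"password:\" ; send \"$pass1\r\"
    expect \"password:\" ; send \"$pass2\r\"
    interact
"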
Yes, you can do this.
Presuming you have your expect script in the file expect_script:
cat expect_script | ssh -X username@serveraddress sh -s
In this expect_script you must run ssh -X username@remotecomputeraddress.
And of course you can install public keys on both hosts and use passwordless authentication.
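For example, with ssh-copy-id (shipped with OpenSSH):
ssh-copy-id username@serveraddress          # from the local machine to the server
ssh-copy-id username@remotecomputeraddress  # run this on the server, towards the desktop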
I wrote something to do this with bang paths a while back:
http://stromberg.dnsalias.org/~strombrg/deep-ssh.html
So you'd set up passwordless, passphraseless authentication (or use an agent for the passphrase), like:
http://stromberg.dnsalias.org/~strombrg/ssh-keys.html
And then:
deep-ssh username@serveraddress!username@remotecomputeraddress command
If bash complains about the !, you can just escape it with a backslash.
The old timers will recognize that this is how UUCP paths were specified.
