SSH and sudo over a pseudo-tty terminal - Linux

I am trying to work around some limitations in our environment so I can set up an SSH authorized_keys file for passwordless SSH logins.
I need to SSH to a target system as my own user, run "sudo su - service-user", and then update the service account's authorized_keys file with a key.
This eventually has to go into my Ansible playbooks.
I am using ssh -t user@target "sudo su - service-user", which successfully gets me into a shell as service-user, but I have not been able to figure out a way to pass the file-modification commands along with it.
Any tips or alternative options?
Note: I need to use the "ssh -t" option as requiretty is set on the target systems.
Cheers!

Depending on what transport you're using, you can use ssh_args.
OpenSSH is the default connection type for Ansible on OSes that are new enough to support ControlPersist. (This means basically all operating systems except Enterprise Linux 6 or earlier).
Then you can do something like this in your ansible.cfg:
[ssh_connection]
ssh_args = -t -t
This will force Ansible to connect the same way you do manually.
Then, in your playbook or on the specific task where you need it, specify become and become_user:
- name: Some task
  debug: msg="this is a test"
  become: true
  become_user: someuser

su has an option, -c, that allows you to pass along a command to execute instead of launching a new shell.
-c, --command=COMMAND
pass a single COMMAND to the shell with -c
However, you're authenticating with sudo, which already does this by default; you can just cut su out of the command entirely:
ssh -t user@target "sudo -u service-user <your-command>"
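For example, to append a public key in one shot (a sketch: the key string is a placeholder, and it assumes service-user's ~/.ssh directory already exists):
ssh -t user@target "sudo -u service-user sh -c 'echo ssh-ed25519 AAAA...example... me@laptop >> ~service-user/.ssh/authorized_keys'"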
To go one step further, you note that you're planning on putting this into an Ansible playbook. If so, you probably shouldn't be spending too much time trying to do this manually - Ansible will handle running commands remotely (that's one of its primary features, after all), and has a module for modifying the authorized_keys file.
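A minimal sketch of that approach (the local key path is an assumption; become lets the module write another user's file):
- name: Install my key for the service account
  authorized_key:
    user: service-user
    state: present
    key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
  become: true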

Ansible Run Command As Another User

I know my question is what become is designed to solve, and I do use it. However, my command still seems to run as the SSH user. I'm trying to execute a which psql command to get the executable's path. Running which psql as the SSH user gives different output than running the same command as my become user, which is the output I want.
EDIT: The problem is the $PATH variable Ansible is using, as suggested in the comments: it is not using the correct $PATH. How can I direct Ansible to use the postgres user's $PATH variable? Using the environment module, as suggested here, didn't work for me: https://serverfault.com/questions/734560/ansible-become-user-not-picking-up-path-correctly
EDIT 2: A solution is to use the environment module and set the path to one I know has the psql executable, but this seems hacky. Ideally, I'd like to just use the become user's path and not have to set it explicitly. Here's the hacky solution:
- name: Check if new or existing host
  command: which psql
  environment:
    PATH: "/usr/pgsql-13/bin/:{{ ansible_env.PATH }}"
  become: yes
  become_user: postgres
Playbook
---
- name: Playbook Control
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Check if new or existing host
      shell: whoami && which psql
      register: output
Relevant Output (the same as if I were to run the task command as my_user on myhost.net)
"stdout_lines": [
"postgres",
"/usr/bin/psql"
]
Expected Output (the output if I were to run the task command as postgres user on myhost.net)
"stdout_lines": [
"postgres",
"/usr/pgsql-13/bin/psql"
]
Inventory
myhost.net
[all:vars]
ansible_connection=ssh
ansible_user=my_user
Command
ansible-playbook --ask-vault-pass -vvv -i temp_hosts playbook.yml
In the vault I only have the SSH password for my_user.
Running the playbook with the -vvv flag shows me that escalation was successful, and that the output of this task is the output of running the command as the SSH user, not the become user. Any ideas?
Ansible uses sudo as its default become method.
Depending on how your Linux system is configured (check /etc/sudoers), it could be that your $PATH variable is preserved for sudo commands.
You can either change that, or force Ansible to use a different become method, such as su:
https://docs.ansible.com/ansible/latest/user_guide/become.html#become-directives
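A minimal sketch of that, assuming su to postgres is permitted on the target host; the --login flag makes su load the postgres user's own profile, and with it the right $PATH (you would then pass the postgres password via --ask-become-pass):
- name: Check if new or existing host
  command: which psql
  become: yes
  become_method: su
  become_user: postgres
  become_flags: '--login'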

How/where to provide sudo password for Vagrant shell provisioners?

I am trying to build a Vagrant box (CentOS) that will be provisioned by an install.sh shell script. This script will do several things, the first of which involves creating the correct directory structure under /opt so that my service can be installed there and can do other things there as well, such as writing logs.
So my Vagrant project (so far) consists of:
my-app-vagrant/
Vagrantfile
install.sh
Where install.sh looks like:
mkdir /opt/myapp
mkdir /opt/myapp/bin # Where we will install binary to (later in this script)
mkdir /opt/myapp/logs # Where the binary will write logs to
Now the binary does not need elevated privileges in order to run (it is configured via command-line arguments where to write logs to). However I simply want it to live under /opt with the above directory structure, at least for this particular machine.
The problem is that /opt is owned by root, which means I need to run these mkdirs with sudo, provide the script the sudo password, and then tweak directory permissions so that when the app runs it has permission both to run and to write logs to my intended destination (which, again, is /opt/myapp/logs). So I tweaked install.sh to look like this:
mkdir /opt/myapp
mkdir /opt/myapp/bin
mkdir /opt/myapp/logs
chmod -R 777 /opt/myapp # Now when the app runs as a normal non-privileged user, we can run + write logs
And I know that I can provide a password to the script via echo <rootPswd> | sudo -S sh install.sh (where <rootPswd> is the correct root password).
Now I'm trying to figure out how to get this running/working correctly when Vagrant is provisioning the VM.
My Vagrantfile looks like:
Vagrant.configure(2) do |config|
  config.vm.provision "shell", path: "install.sh"
  config.vm.box = "centos7"
  config.vm.box_url = "https://github.com/tommy-muehle/puppet-vagrant-boxes/releases/download/1.1.0/centos-7.0-x86_64.box"
  config.vm.network "private_network", ip: "10.0.1.2"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "2048"
  end
end
But what I'm stuck on is: how do I extend the whole "echo <rootPswd> | sudo -S sh install.sh" concept to Vagrant? According to their docs, there is a privileged option that I might be able to use, but it is set to true by default anyway.
But nowhere in their docs do they explain how/where to provide the sudo password that should be used (at least from what I have been able to find so far).
So I ask:
How do I provide the sudo password for a Vagrant VM's shell provisioner's installation script?; and
Where can I find out what the sudo password even is, given the base Vagrant box that I'm trying to use?
Turns out that (for almost all Vagrant boxes) the vagrant user is listed in /etc/sudoers with ALL=(ALL) NOPASSWD:ALL permissions, which instructs Linux to not ask that user for a "sudo password", ever.
Hence, you don't need to supply your privileged user with a sudo password.
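In other words, /etc/sudoers (or a drop-in file under /etc/sudoers.d/) contains an entry like:
vagrant ALL=(ALL) NOPASSWD:ALL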
While smeeb's answer still holds to this day, it doesn't quite answer the question. There are really different ways to do this depending on the provisioner you are using. For example, with Ansible you can use ask_become_pass to be prompted for the password beforehand.
With the shell provisioner you won't have helpers like the ones Ansible provides. If privileged doesn't do it for you, then you'd probably need to just sudo manually.
To do this you can use the following:
echo "mypassword" | sudo -S command
Take heed, though: doing this on anything other than a throwaway machine will leave traces of that password in your shell history.
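If you do go down that route, here is a sketch that at least keeps the password out of the Vagrantfile and your history, assuming a SUDO_PASS variable exported on the host and the default /vagrant synced folder:
config.vm.provision "shell",
  privileged: false,
  env: { "SUDO_PASS" => ENV["SUDO_PASS"] },
  inline: 'echo "$SUDO_PASS" | sudo -S sh /vagrant/install.sh'
The password still reaches the guest's process environment, but it is never typed into a shell or committed to the Vagrantfile.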

"Sudo" fails with "sudo requires a tty" when executed from PuTTY command line

I'm trying to run some commands on a remote CentOS machine using PuTTY. I'm using the following command:
putty.exe -ssh [IP] -l [user] -pw [password] -m [Script]
Where [Script] is a .txt file containing the commands I want to run. The issue is that one of the commands requires sudo, and when PuTTY tries to run it I get an error:
sudo requires a tty
The thing that's confusing me is that if I start the session without giving a script, then run the commands from the script manually, it works fine. I've tried using -load instead of -ssh, and it made no difference.
I can't change the requiretty setting in my sudoers file for security reasons, which is the only solution I've been able to find. Is there another option?
sudo requires a TTY/interactive session.
By contrast, the PuTTY/Plink -m switch uses a non-interactive session by default.
Use the -t switch to override that.
putty.exe -ssh [IP] -l [user] -pw [password] -t -m [Script]
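The same switches work with Plink, PuTTY's command-line connection tool, which is generally the better fit for scripted sessions:
plink.exe -ssh [IP] -l [user] -pw [password] -t -m [Script]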
Read the error: sudo requires a tty, that is, an interactive shell. You have to find another way of performing those privileged instructions. For example, you could log in as root with key-based authentication.

On Linux, how can I share scripts across an SSH connection for the session only?

For work, I have to connect to dozens of Linux machines via SSH (to perform maintenance, monitor the system, install software, etc).
I have a small arsenal of scripts that help me do some of these tasks, and these are located in a folder on my Mac in /Users/me/bin. I want to be able to run these scripts on the remote Linux machine, but for several reasons I do not want these scripts permanently located on these machines (e.g., other people also connect to these remote machines, and it would be unwise to let them execute these files).
So, is it possible to share scripts across an SSH connection for the lifetime of the session only?
I have a couple of ideas on how to do this, but I don't know if any of them will work. Firstly, if SSH allows file mounting, I could automatically mount me@mymac:/Users/me/bin to me@linux:/remote_bin when I connect to the remote Linux box, and set my PATH variable to "$PATH:/remote_bin". Secondly, I could set up port forwarding in the connection string (e.g., ssh me@linux -R 9999:127.0.0.1:<SMBPORT|ETC>) and, every time I connect, mount the share and set the $PATH variable.
EDIT: I've come up with a semi-solution. On the Linux machine, edit /etc/ssh/sshd_config to add the following subsystem:
Subsystem shareduserbinary sudo su -l -c "/bin/mount -t cifs -o port=9999,username=me,nounix,sec=ntlmssp //127.0.0.1/exported_bin /mnt/remote_bin" && bash -l -i -s
When connecting to the remote machine, set up a reverse port forward and invoke the subsystem, e.g.:
ssh -R 9999:127.0.0.1:445 -s shareduserbinary me@linux
EDIT 2: You can make the solution above cleaner by removing the -l from the su command and changing the path from /mnt/remote_bin to $HOME/rbin.
Interesting question. Perhaps you can add a command to ~/.bash_login (assuming you are using bash) to copy the scripts from a remote host (such as your mac) when you login, then add a command to ~/.bash_logout to delete the scripts when you logout. But, as bmargulies points out, it would be a good idea to go a step further and make sure that nobody else has permissions to read or execute the scripts.
You can use OpenSSH's LocalCommand to upload the files (using e.g. scp or rsync) when initiating an SSH session (see man ssh_config):
Host server1 server2 [...]
    PermitLocalCommand yes
    LocalCommand scp -q /Users/me/bin/* %h:temp_bin/
and use .bash_logout or an EXIT-trap that you specify in your .bashrc to delete the contents of the directory on logout.
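The cleanup half can be as small as an EXIT trap, assuming the files land in ~/temp_bin as above; for example, in the remote ~/.bashrc:
# delete the uploaded scripts when the login shell exits
trap 'rm -rf "$HOME/temp_bin"' EXIT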

Executing db2 command through ssh

I'm trying to SSH to another system and then run DB2 commands. However, using 'su db2admin -c' does not seem to work, although it works for normal system commands:
#!/bin/bash
sshpass -p 'passw0rd' ssh root@server.com "su db2admin -c 'db2text start'"
This is the output:
rob@laptop:~/Desktop$ ./script.sh
bash: db2text: command not found
Any ideas?
The PATH is not getting updated and stays the normal root user's PATH. Either specify the full path to db2text, or add a dash (-) before the username to reload the environment variables.
I'll hazard a guess and say that root doesn't have any of the db2 stuff in his path.
And since you're using su db2admin rather than su - db2admin, db2admin inherits root's environment. Try with that extra - thrown in.
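Applied to the script above, that looks like this (same placeholder password and host):
#!/bin/bash
sshpass -p 'passw0rd' ssh root@server.com "su - db2admin -c 'db2text start'"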
That all said: why on earth aren't you connecting w/ passwordless keys as db2admin?
Another solution that worked:
#!/bin/bash
sshpass -p 'passw0rd' ssh root@server.com "su db2admin -c '~/sqllib/bin/db2text start'"
But the problem is that the DB2 path may change, so it's better to use Eric's answer.
