How is the Ansible machine [Master] authenticated? - linux

I'm learning Ansible and have experimented with it on 4-5 servers, copying the public key to the machines manually.
I want to know what the approach would be if we had to do this for thousands of servers.
Should we do the same thing, providing the other team with the SSH public key and asking them to add it to their machines? Are there any alternatives? How do people in the industry deal with it?
TIA

No, if you are managing a really large number of servers, configuring your SSH key on each and every server is not a good approach, especially if we are talking about servers in the cloud, which are highly dynamic in nature, i.e. they are started and terminated as needed.
You can always configure which "remote user" to use for SSH connections in the Ansible master configuration.
Apart from that, you can configure the user in the playbook, in roles, or pass it as a command-line parameter.
For connecting to the remote server with an SSH key, the same methods can be used.
e.g., from the command line:
ansible-playbook <playbook yml> -u <user name on remote host> --key-file <SSH key file name with path on master host>
ansible-playbook abc.yml -u "user1" --key-file "/u01/ansible_keys/user1_key.pem"
You can set up these keys in the inventory file as well, as below:
myHost ansible_ssh_private_key_file=~/.ssh/mykey1.pem
myOtherHost ansible_ssh_private_key_file=~/.ssh/mykey2.pem
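The same settings can also be made at the play level in a playbook; a minimal sketch, assuming a host group named webservers and the same placeholder key path:
- hosts: webservers
  remote_user: user1
  vars:
    ansible_ssh_private_key_file: /u01/ansible_keys/user1_key.pem
  tasks:
    - name: verify connectivity
      ping: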
Reference: Specifying ssh key in ansible playbook file

1. If all machines' passwords are the same:
ansible all -i <inventory_file> -m copy -a "src=<public_key_filepath> dest=<target_filepath>" -k
Enter the password to copy the public key to all machines, then use Ansible as before.
2. If most machines share the same password, you can set this in the inventory:
[groupA]
machine01
machine02
[groupB]
machine03
machine04 ansible_ssh_pass=<different password from others>
[all:vars]
ansible_ssh_user=<ssh_useranme>
ansible_ssh_pass=<machine password>
Then (no need to input a password):
ansible all -i <inventory_file> -m copy -a "src=<public_key_filepath> dest=<target_filepath>"
Then delete the passwords from the inventory file and use Ansible as before.
For myself, I prefer method 2 because I have the root password for all machines and no one can log in to the machines with the root account, so I keep the plaintext password in the root directory. But I think your public-key method may be more secure.
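As a side note, a common way to push keys at scale without the raw copy module is Ansible's authorized_key module; a rough sketch, assuming password access is available for the first run and that the user name and key path are placeholders:
- hosts: all
  become: yes
  tasks:
    - name: install the control node's public key
      authorized_key:
        user: user1                                    # placeholder remote user
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
Run it once with -k (password authentication); afterwards every run can use the key.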

How to disable ssh-agent forwarding

ssh-agent forwarding can be accomplished with ssh -A ....
Most references I have found state that the local machine must configure ~/.ssh/config to enable AgentForwarding with the following code:
Host <trusted_ip>
ForwardAgent yes
Host *
ForwardAgent no
However, with this configuration, I am still able to see my local machine's keys when tunneling into a remote machine with ssh -A user@remote_not_trusted_ip and running ssh-add -l.
From the configuration presented above, I would expect that the ssh-agent forwarding would fail and the keys of the local machine would not be listed by ssh-add -l.
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in ~/.ssh/config?
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in ~/.ssh/config?
It is the default behavior: if you do not allow it in ~/.ssh/config, it will not be forwarded. But the command-line arguments have higher priority, so they override what is defined in the configuration, as explained in the manual page for ssh_config:
ssh(1) obtains configuration data from the following sources in the following order:
command-line options
user's configuration file (~/.ssh/config)
system-wide configuration file (/etc/ssh/ssh_config)
So as already said, you just need to provide correct arguments to ssh.
So back to the questions:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
Because the command-line argument -A has higher priority than the configuration files.
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
Do not use the -A command-line option if you do not want to forward your ssh-agent. Use the -a command-line option instead.
You are using the -A option to connect. man ssh says:
-A Enables forwarding of the authentication agent connection.
You should connect without -A, just using:
ssh user@remote_not_trusted_ip
CLI args take priority over the ssh config file.
By the way, if you want to connect to your trusted IP without forwarding, you can also use:
ssh -a user@trusted_ip
-a Disables forwarding of the authentication agent connection.
This is over a year old, but I encountered the same issue and landed on a config option that works.
I had a problem when I connected from my home computer to my work computer that Git commands no longer worked. I figured out that it was because the connecting home computer's public key was forwarded, which was not configured for that GitHub account.
The -a command-line option fixed the problem by not forwarding the authentication agent connection. I also thought that the equivalent ~/.ssh/config option would be this:
ForwardAgent no
When that didn't work, I looked for other configuration variables and finally found that this one worked:
IdentityAgent none
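For reference, a minimal sketch of how that option can be scoped to a single host in ~/.ssh/config (the host name is the placeholder from the question):
Host remote_not_trusted_ip
    IdentityAgent none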
This part of the man-page is crucial:
Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end.
Put your Host * block (with ForwardAgent yes) at the end, and the specific Host blocks with ForwardAgent no at the start of the .ssh/config.
Not an answer to the question, and maybe just semantics:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
My understanding is that authentication keys are never "forwarded" to a remote computer. Rather, ssh-agent forwards authentication challenges from a remote server back to the computer that holds the private key, through whatever chain of remote computers the SSH connection is running through.

Adding ssh keys to ssh-agent fails w/ running agent, environment variables set

[SSH] "Could not open a connection to your authentication agent" error
I am trying to add ssh keys into my ssh agent. I start by making sure that the ssh-agent is running.
exec ssh-agent bash
I make sure that ssh-agent is running.
ps axu | grep [s]sh
and get the following
root 1562 ... ssh-agent bash
The env variables are set correctly.
SSH_AGENT_PID=1562
SSH_AUTH_SOCK=/tmp/ssh-699iHAxuK4xX/agent.1561
However when I try to add the private key using
sudo ssh-add ~/.ssh/peter-key
I get the ssh error
Could not open a connection to your authentication agent.
I have tried the suggestions on Stack Overflow and Server Fault, but nothing has worked.
Note: I am running a Linux machine on one of the free-tier AWS machines with Ubuntu. My instance's security group allows (temporarily) all incoming and outgoing SSH connections from any IP address. Does anyone know what the error could be?
Just use
ssh-add ~/.ssh/peter-key
...not...
sudo ssh-add ~/.ssh/peter-key
Using sudo (optionally/configurably, but typically) clears a number of environment variables, including the ones you just verified were set. (Compare the output of sudo env and plain env to see this effect.)
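For example (exact output depends on your sudoers configuration):
env | grep SSH_AUTH_SOCK       # prints the agent socket in your own shell
sudo env | grep SSH_AUTH_SOCK  # typically prints nothing, because sudo resets the environment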
If you must use sudo to read the key, then you can ensure that the necessary environment variable is set on the other side by doing so explicitly yourself:
sudo env "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" ssh-add ~/.ssh/peter-key
However, it's possible for security-sensitive programs working with UNIX domain sockets to check the ownership and permissions of whatever is on the other end of the socket and to refuse to communicate with anything running under a different user account than they expect, so this approach may not be future-proof against security features added to ssh-agent.

Define a set keyfile for Ubuntu to use when SSHing into a server

I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu@54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu@54.987.6.54
it works fine, but with just
ssh ubuntu@54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to run ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it to authorized_keys on the local server, with mdexter@172.12.34.56 (private key), because I thought that might be what created the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that with the first, I created a .ppk key in PuTTy so that I could connect through FileZilla for SFTP. But I think SSH is still utilizing the .pem given by Amazon.
How can I tell the shell to just know to always use my .pem key for that server when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys and I'm wondering if this plays into that.
You could solve this in 3 ways:
1. By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are ssh'ing into the 2nd instance. Make sure you also change the permissions of ~/.ssh/id_rsa to 600.
2. Using ssh-agent (ssh-agent will manage the keys for you):
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
3. Using the ssh config file:
You could use the ssh config file. From the machine where you are trying to ssh, keep the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
HostName 54.987.6.54
Port 22
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Once you do that, you can ssh like this:
ssh host2
After performing any of the above steps you should be able to ssh into your second instance with out specifying the key path.
Note: The second option requires you to add the key using ssh-add every time you log out and log back in; to make that permanent, see this SO question.

What user will Ansible run my commands as?

Background
My question seems simple, but it gets more complex really fast.
Basically, I got really tired of maintaining my servers manually (screams in background) and I decided it was time to find a way to make being a server admin much more liveable. That's when I found Ansible. Great huh? Sure beats making bash scripts (louder scream) for everything I wanted to automate.
What's the problem?
I'm having a lot of trouble figuring out what user my Ansible playbook will run certain things as. I also need the ability to specify what user certain tasks will run as. Here are some specific use cases:
Cloning a repo as another user:
My purpose with this is to run my node.js webapp as another user, who we'll call bill (who can only use sudo to run a script that I made that starts the node server, as opposed to root or my user, which can use sudo for all commands). To do this, I need the ability to have Ansible's git module clone my git repo as bill. How would I do that?
Knowing how Ansible will gain root:
As far as I understand, you can set what user Ansible will connect as to the server you're maintaining by defining 'user' at the beginning of the playbook file. Here's what I don't understand: if I tell it to connect via my username, joe, and ask it to update a package via the apt module, how will it gain root? Sudo usually prompts me for my password, and I'd prefer keeping it that way (for security).
Final request
I've scoured the Ansible docs, done some (what I thought was thorough) Googling, and generally just tried to figure it out on my own, but this information continues to elude me.
I am very new to Ansible, and while it's mostly straightforward, I would benefit greatly if I could understand exactly how Ansible runs, which users it runs as, and how/where I can specify what user to use at different times.
Thank you tons in advance
You may find it useful to read the Hosts and Users section on Ansible's documentation site:
http://docs.ansible.com/playbooks_intro.html#hosts-and-users
In summary, Ansible will run all commands in a playbook as the user specified in the remote_user variable (assuming you're using Ansible >= 1.4; it was user before that). You can specify this variable on a per-task basis as well, in case a task needs to run as a certain user.
Use sudo: true in any playbook/task to run it with sudo. Use the sudo_user variable to specify a user to sudo to if you don't want to use root.
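For the clone-as-bill use case from the question, a rough sketch of a task using the git module (later Ansible releases renamed sudo/sudo_user to become/become_user; the repository URL and destination path are placeholders):
- name: clone the webapp repo as bill
  git:
    repo: https://example.com/me/webapp.git   # placeholder repository
    dest: /home/bill/webapp
  become: yes
  become_user: bill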
In practice, I've found it easiest to run my playbook as a deploy user that has sudo privileges. I set up my SSH keys so I can SSH into any host as deploy without using a password. This means that I can run my playbook without using a password and even use sudo if I need to.
I use this same user to do things like cloning git repos and starting/stopping services. If a service needs to run as a lower-privileged user, I let the init script take care of that. A quick Google search for a node.js init.d script revealed this one for CentOS:
https://gist.github.com/nariyu/1211413
Doing things this way helps to keep it simple, which I like.
Hope that helps.
My 2 cents:
1. Ansible uses your local user (e.g. Mike) to SSH to the remote machine. (That requires Mike to be able to SSH to the machine.)
2. From there it can change to a remote user if needed.
3. It can also sudo if needed and if Mike is allowed. If no user is specified, then root will be selected via your ~/.ansible.cfg on your local machine.
4. If you supply a remote_user with the sudo param then, as in point 3, it will not use root but that user.
5. You can specify different situations and different users or sudo via the playbooks.
Playbooks define which roles will be run on each machine in the selected inventory.
I suggest you read Ansible best practices for some explanation on how to set up your infrastructure.
Oh, and by the way, since you are not referring to a specific module that Ansible uses and your question is not related to Python, I don't see any use in your question having the python tag.
Just a note that Ansible >= 1.9 uses privilege escalation commands, so you can execute tasks and create resources as a secondary user if need be:
- name: Install software
  shell: "curl -s get.dangerous_software.install | sudo bash"
  become_user: root
https://ansible-docs.readthedocs.io/zh/stable-2.0/rst/become.html
I notice current answers are a bit old and suffering from link rot.
Ansible will SSH as your current user, by default:
https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html#connecting-to-remote-nodes
Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
This can be overridden using:
passing the -u parameter at the command line
setting user information in your inventory file
setting user information in your configuration file
setting environment variables
But then you must ensure a route exists to SSH as that user. An approach to maintaining user-level ownership that I see more often is to become (root) and then chown -R jdoe:jdoe /the/file/path.
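As an illustration of the first two override mechanisms (the deploy user name is a placeholder):
ansible-playbook site.yml -u deploy
# or, per host/group in the inventory:
[webservers]
web1.example.com ansible_user=deploy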
In my 2.12 release of Ansible, I found the only way I could change the user was by specifying become: yes as an option at the play level. That way I am SSHing as the unprivileged default user. This user must have passwordless sudo enabled on the remote, and this is about the safest I could make my VPS. From this I could then switch to another user, with become_user, from an arbitrary command task.
Like this:
- name: Getting Started
  gather_facts: false
  hosts: all
  become: yes # All tasks that follow will become root.
  tasks:
    - name: get the username running the deploy
      command: echo $USER
      become_user: trubuntu # From root we can switch to trubuntu.
If the user permitted SSH access to your remote is, say, victor, and not your current user, then remote_user: victor belongs at the play level, adjacent to become: yes.
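Applied to the play above, that would look something like this (victor being the hypothetical login user from this example):
- name: Getting Started
  hosts: all
  remote_user: victor # the account SSH actually logs in as
  become: yes         # escalate to root after login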

connecting to amazon aws linux server by ssh on mac

I created a new keypair and downloaded it to my Mac, then set up a new Amazon Linux AMI server with that keypair and my security group. Now I need to put the keypair .pem file that I downloaded into a .ssh folder in my user folder? I am unable to create a folder called ".ssh", however, because of the name.
Where do I put the keypair on my Mac? And what chmods or other commands are then needed to connect to the server from my Linux bash? I know "ssh my public DNS", but what other permissions or anything else should I be aware of? It's a newbie question. Thanks.
You'll want to put the keypair in {your home directory}/.ssh. If that folder doesn't exist, create it. Once you put the keypair in there, you have to change the permissions on the file so only your user can read it.
Launch the terminal and type
chmod 600 $HOME/.ssh/<your keypair file>
That limits access to the file; then, to limit access to the folder, type
chmod 700 $HOME/.ssh
You have to limit the access because the OpenSSH protocol won't let you use a key that others can view.
Then to log into your instance, from the terminal you would enter
ssh -i <your home directory>/.ssh/<your keypair file> ec2-user@<ec2 hostname>
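If Finder refuses to create a folder whose name starts with a dot, it can be created from the terminal instead, for example:
mkdir -p ~/.ssh
chmod 700 ~/.ssh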
You can also create a file ~/.ssh/config,
chmod it to 644,
then inside you can add something like this:
host mybox-root
Hostname [the IP or dns name]
User root
IdentityFile ~/.ssh/[your keypair here]
then you can just do
$ ssh mybox-root
and you'll log in more easily.
You can use Java MindTerm to connect to your EC2 server on a MacBook Pro. It works for me. Here are more details and step-by-step instructions:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
http://www.openssh.com/ is the suggested one on http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-connect-to-instance-linux.html#using-ssh-client (option 3)
Someone was asking about this on Macs: an easy way to create the ~/.ssh folder is by running the ssh-keygen command; then use the following setup ...
A.
macbook-air$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/sam/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/sam/.ssh/id_rsa.
Your public key has been saved in /Users/sam/.ssh/id_rsa.pub.
B. Then create:
touch ~/.ssh/authorized_keys
C. Fix the permissions:
chmod 600 ~/.ssh/authorized_keys
D. Copy AWS Key to that file:
cp AWS_key.text ~sam/.ssh/authorized_keys
#You would have saved this SSH key earlier when creating the EC2 instance
E. Then test SSH to the AWS Linux server; you will see this error:
ssh -i ./authorized_keys root@ec2-54-76-176-29.ap-southeast-2.compute.amazonaws.com
Please login as the user "ec2-user" rather than the user "root".
F. Re-try that and it should work with allowed AWS user "ec2-user":
ssh -i ./authorized_keys ec2-user@ec2-54-76-176-29.ap-southeast-2.compute.amazonaws.com
__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|
https://aws.amazon.com/amazon-linux-ami/2014.09-release-notes/
9 package(s) needed for security, out of 12 available
Run "sudo yum update" to apply all updates.
Hope this helps, all the best.
