Ansible cannot make dir /$HOME/.ansible/cp - linux

I'm getting a very strange error when I run ansible:
GATHERING FACTS ***************************************************************
fatal: [i-0f55b6a4] => Could not make dir /$HOME/.ansible/cp: [Errno 13] Permission denied: '/$HOME'
TASK: [Task #1] ***************************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/ubuntu/install.retry
i-0f55b6a4 : ok=0 changed=0 unreachable=1 failed=0
Normally, this playbook runs without problems, but I've recently made some changes so that the program that calls ansible is launched from start-stop-daemon and runs as a service. The ultimate goal is to have a service that can run the playbook automatically, whenever it deems it necessary.
The beginning of the playbook looks like this:
---
- hosts: w_vm:main
  sudo: True
  tasks:
    - name: 'Task #1'
      ...
sudo is set to True so I'm somewhat certain that the error is not on the target machine.
The generated invocation of ansible-playbook looks like this:
ansible-playbook -i /tmp/ansible3397486563152037600.inventory \
/home/ubuntu/playbooks/main_playbook.yml \
-e @/home/ubuntu/extra_params.json
I'm not sure whether that Could not make dir /$HOME/.ansible/cp error is occurring on the control machine or on the remote machine, or why ansible is trying to make a directory literally named $HOME in /. This only happens when the program that calls ansible is launched from the linux service, not when it's called explicitly from the command line.
I've asked a more specific question here:
https://unix.stackexchange.com/questions/220841/start-stop-daemon-services-environment-variables-and-ansible

Try sudo chown -R YOUR_USERNAME /home/YOUR_USERNAME/.ansible

Late to answer, but it might be useful to someone. Check the ownership of ~/.ansible. The ownership of .ansible on the local machine (the Ansible controller node) might be causing the problem. Run "chown -R username:groupname ~/.ansible" (where username:groupname belong to the user running the playbook) and try to run the playbook again.
As an alternative, remove the .ansible directory from the controller node and rerun the playbook.

Ansible creates temporary files in ~/.ansible on your local machine and on the remote machine, so this could in theory be triggered from either side.
My guess is that it happens on the local machine where Ansible runs, since how Ansible was started should not have any effect on the target boxes. A quick search showed that programs started with start-stop-daemon do not have $HOME (or any environment at all) available, but it has a -e option to set environment variables according to your needs.
If -e is unavailable, see this answer, which suggests additionally exec'ing /usr/bin/env to set the environment variables.
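For illustration, a rough sketch of both approaches in an init script; the daemon path and HOME value are placeholders, and the -e/--env option is only present in some start-stop-daemon variants (e.g. OpenRC's):
# Variant 1: pass the environment directly, if your start-stop-daemon supports -e/--env
start-stop-daemon --start --background \
    -e HOME=/home/ubuntu \
    --exec /usr/local/bin/ansible-runner
# Variant 2: wrap the daemon in /usr/bin/env to inject the variable
start-stop-daemon --start --background \
    --exec /usr/bin/env -- HOME=/home/ubuntu /usr/local/bin/ansible-runner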

I ran into a similar issue using Jenkins. It had the default $HOME env var set to /root/. The solution was to inject the environment variable at runtime:
HOME=/path/to/your/users/home
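For example, a one-liner in the job's shell step before invoking Ansible could do it (the home path, inventory and playbook names below are placeholders):
export HOME=/home/jenkins        # placeholder: the real home of the user running the job
ansible-playbook -i inventory playbook.yml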

Related

What permissions settings does push-to-deploy require?

The title is general, but I have more specific questions. I am deep in a permissions nightmare trying to set up a "push-to-deploy" system using Git.
From my local machine, I push by SSH to the server (Ubuntu 14.04). I have the server set up as the remote
git remote add development devuser@development.server:/home/dummyuser/bare/repo.git
This bare repository is within the home folder of a dummy user dummyuser that we use to handle deployment tasks. devuser is my own account on the development server.
I have a post-receive hook set up within the remote repository (development.server:/home/dummyuser/bare/repo.git/hooks/post-receive) that's intended to deploy files via git checkout to a web server directory on the same server, call it webfolder/. That folder currently has permissions
drwxr-xr-x dummyuser www-data webfolder/
where www-data is the group associated with the Apache user.
If I have the post-receive hook script use the command
git --work-tree=/var/www/webfolder --git-dir=/home/dummyuser/bare/repo.git checkout -f
I get errors that it can't write to webfolder/, which is predictable: I assume the script runs as me (devuser), since I did the instigating push over SSH, and devuser doesn't have write permission on webfolder/.
However, if I change the script to act as dummyuser,
sudo -u dummyuser git --work-tree=/var/www/webfolder --git-dir=/home/dummyuser/bare/repo.git checkout -f
just to see what happens, I have the error
warning: unable to access '/home/devuser/.config/git/attributes': Permission denied
There's a couple of things I don't understand about this:
1) Neither /home/devuser/.config/ nor /home/dummyuser/.config/ exist. That's fine, but if Git needs to access a .config/ folder, why wasn't it complaining before when I was setting up bare repos and executing hooks as devuser?
2) Now that I'm trying to act as dummyuser, why is Git looking in ~devuser/ for a .config/ folder? Why isn't it looking in ~dummyuser/?
I've been working on this tiny slice of one single problem in the maddening shitshow that is "using Git" for coming up on four hours now, and my brain is fuzzy, so please use small words.
The problem is something involving sudo -u dummyuser not setting the environment variables that Git expects. If I add HOME=/home/dummyuser to the post-receive hook, the deployment works as expected.
If anyone can provide more details about what's happening or a better solution, write it as an answer and I'll accept it. Couple of notes:
dummyuser doesn't have a login, so using sudo -iu dummyuser in the post-receive script won't work
After setting HOME=/home/dummyuser manually and successfully executing the script, I find that echo $HOME from the terminal returns /home/devuser, so there's no permanent change to $HOME
After successfully executing the hook script, neither ~devuser/ nor ~dummyuser/ nor /root/ have a .config/ folder. So... I still have no idea why Git was hung up on it.
Git expects a .config folder in the user's home directory. If $HOME isn't set correctly, e.g. if it points to a different user's home, Git will try to access $HOME/.config, not knowing that it actually doesn't even exist. However, since the user, and thus Git, doesn't have access to that $HOME, you will receive an error saying Permission denied.
To test that, try to run as dummyuser:
[ -d /home/devuser/.config ] && echo '.config exists!'
You're trying to test if the directory /home/devuser/.config exists. However, since you don't have the needed permissions, you get Permission denied, and you still don't know whether the directory exists or not.
Instead of setting $HOME manually, you could possibly use -H or --set-home:
sudo -Hu dummyuser git --work-tree=/var/www/webfolder --git-dir=/home/dummyuser/bare/repo.git checkout -f
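Putting it together, a minimal post-receive hook along those lines could look like this (paths are the ones from the question; -H makes sudo point $HOME at dummyuser's home so Git stops probing ~devuser/.config):
#!/bin/sh
# /home/dummyuser/bare/repo.git/hooks/post-receive
sudo -Hu dummyuser git --work-tree=/var/www/webfolder \
    --git-dir=/home/dummyuser/bare/repo.git checkout -f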

Run a command in remote server

What would be the best way to run commands on remote servers? I am thinking of using SSH, but is there a better way than that?
I use Red Hat Linux, and I want to run a command from one of the servers, specify which other servers the command should run on, and have it do exactly the same thing on each of the specified servers. Puppet alone couldn't do this, but I might be able to combine some other tool with Puppet to do the job for me.
It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
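If that passwordless login isn't set up yet, it's a one-time step per server (the username and hostname below are placeholders, assuming the standard OpenSSH tools):
ssh-keygen -t rsa                          # generate a key pair; accept the defaults
ssh-copy-id username@server1.example.com   # install the public key on each target server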
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in `./server-list-script`; do
    echo $server:
    ssh username@$server mkdir /etc/dir/test123
done >logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.
Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: running
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
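For completeness, a minimal sketch of the pieces around it (the hostnames and file names are placeholders, not part of the original answer):
# inventory.ini -- the list of servers to manage
[all]
server1.example.com
server2.example.com
# run the playbook above against that inventory
ansible-playbook -i inventory.ini libvirt.yml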
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'

Git push/pull fails on GitLab in Google Compute Engine

I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to set up an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to create an SSH tunnel to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps, and create a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile. (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privilege to the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right;
5. Now back to your development server
Navigate to the directory where you'd like to work, and run the following:
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your SSH key passphrase, not your server password) and asked to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine at DigitalOcean, but they shouldn't differ when using Google Compute Engine.
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Label the nonstandard authorized_keys location as ssh_home_t
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
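Note that semanage fcontext -a only records the labelling rule; if the authorized_keys file already exists, you would typically also apply the new label, for example:
sudo restorecon -Rv /var/opt/gitlab/.ssh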
In my situation the git user wasn't set up completely. If you see messages like "User git not allowed because account is locked" in your log files (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via "passwd -d git".

What user will Ansible run my commands as?

Background
My question seems simple, but it gets more complex really fast.
Basically, I got really tired of maintaining my servers manually (screams in background) and I decided it was time to find a way to make being a server admin much more liveable. That's when I found Ansible. Great huh? Sure beats making bash scripts (louder scream) for everything I wanted to automate.
What's the problem?
I'm having a lot of trouble figuring out what user my Ansible playbook will run certain things as. I also need the ability to specify what user certain tasks will run as. Here are some specific use cases:
Cloning a repo as another user:
My purpose with this is to run my node.js webapp as another user, who we'll call bill (bill can only use sudo to run a script I made that starts the node server, as opposed to root or my user, which can use sudo for all commands). To do this, I need the ability to have Ansible's git module clone my git repo as bill. How would I do that?
Knowing how Ansible will gain root:
As far as I understand, you can set which user Ansible will connect to the managed server as by defining 'user' at the beginning of the playbook file. Here's what I don't understand: if I tell it to connect via my username, joe, and ask it to update a package via the apt module, how will it gain root? Sudo usually prompts me for my password, and I'd prefer to keep it that way (for security).
Final request
I've scoured the Ansible docs, done some (what I thought was thorough) Googling, and generally just tried to figure it out on my own, but this information continues to elude me.
I am very new to Ansible, and while it's mostly straightforward, I would benefit greatly if I could understand exactly how Ansible runs, which users it runs things as, and how/where I can specify which user to use at different times.
Thank you tons in advance
You may find it useful to read the Hosts and Users section on Ansible's documentation site:
http://docs.ansible.com/playbooks_intro.html#hosts-and-users
In summary, Ansible will run all commands in a playbook as the user specified in the remote_user variable (assuming you're using Ansible >= 1.4; it was user before that). You can specify this variable on a per-task basis as well, in case a task needs to run as a certain user.
Use sudo: true in any playbook/task to use sudo to run it. Use the sudo_user variable to specify a user to sudo to if you don't want to use root.
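For the repo-cloning use case in the question, a sketch with that pre-2.0 syntax might look like this (the group name, repository URL and destination path are made up):
- hosts: webservers
  remote_user: joe              # the account Ansible connects as
  tasks:
    - name: Clone the webapp as bill
      git: repo=https://example.com/me/webapp.git dest=/home/bill/webapp
      sudo: true
      sudo_user: bill           # run just this task as bill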
In practice, I've found it easiest to run my playbook as a deploy user that has sudo privileges. I set up my SSH keys so I can SSH into any host as deploy without using a password. This means that I can run my playbook without using a password and even use sudo if I need to.
I use this same user to do things like cloning git repos and starting/stopping services. If a service needs to run as a lower-privileged user, I let the init script take care of that. A quick Google search for a node.js init.d script revealed this one for CentOS:
https://gist.github.com/nariyu/1211413
Doing things this way helps to keep it simple, which I like.
Hope that helps.
My 2 cents:
1. Ansible uses your local user (e.g. Mike) to SSH to the remote machine. (That requires Mike to be able to SSH to the machine.)
2. From there it can change to a remote user if needed.
3. It can also sudo if needed and if Mike is allowed. If no user is specified, then root will be selected via your ~/.ansible.cfg on your local machine.
4. If you supply a remote_user with the sudo param, then, as in no. 3, it will not use root but that user.
5. You can specify different situations and different users or sudo via the playbooks.
Playbooks define which roles will be run on each machine that belongs to the selected inventory.
I suggest you read Ansible best practices for some explanation of how to set up your infrastructure.
Oh, and by the way: since you are not referring to a specific Ansible module and your question is not related to Python, I don't see any point in your question having the python tag.
Just a note that Ansible >= 1.9 uses privilege escalation commands, so you can execute tasks and create resources as a secondary user if need be:
- name: Install software
  shell: "curl -s get.dangerous_software.install | sudo bash"
  become_user: root
https://ansible-docs.readthedocs.io/zh/stable-2.0/rst/become.html
I notice current answers are a bit old and suffering from link rot.
Ansible will SSH as your current user, by default:
https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html#connecting-to-remote-nodes
Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
This can be overridden using:
passing the -u parameter at the command line
setting user information in your inventory file
setting user information in your configuration file
setting environment variables
But then you must ensure a route exists to SSH in as that user. An approach to maintaining user-level ownership that I see more often is to become (root) and then chown -R jdoe:jdoe /the/file/path.
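A sketch of that pattern, using the file module instead of a raw chown (the path and user are placeholders):
- hosts: all
  become: true
  tasks:
    - name: Give jdoe ownership of the deployed tree
      file:
        path: /the/file/path
        state: directory
        owner: jdoe
        group: jdoe
        recurse: yes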
In my 2.12 release of Ansible I found the only way I could change the user was by specifying become: yes as an option at the play level. That way I am SSHing in as the unprivileged, default user. This user must have passwordless sudo enabled on the remote, and this is about the safest I could make my VPS. From there I could then switch to another user, with become_user, in an arbitrary command task.
Like this:
- name: Getting Started
  gather_facts: false
  hosts: all
  become: yes # All tasks that follow will become root.
  tasks:
    - name: get the username running the deploy
      command: echo $USER
      become_user: trubuntu # From root we can switch to trubuntu.
If the user permitted SSH access to your remote is, say, victor, and not your current user, then remote_user: victor has a place at the play level, adjacent to become: yes.
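In other words, something like this at the top of the play (victor being the example user from above):
- hosts: all
  remote_user: victor   # the account allowed to SSH in
  become: yes           # escalate with passwordless sudo once connected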

Ansible Permissions Issue

I'm trying to add the current user to a group in the system, then execute a command that requires permission for that group. My playbook is like so:
- name: Add this user to RVM group
  sudo: true
  user: state=present name=vagrant append=yes groups=rvm group=rvm
- name: Install Ruby 1.9.3
  command: rvm install ruby-1.9.3-p448 creates=/usr/local/rvm/bin/ruby-1.9.3-p448
The problem is that all of this is happening in the same shell. vagrant's shell hasn't been updated with the new groups yet. Is there a clean way to refresh the user's current groups in Ansible? I figure I need to get it to re-connect or open a new shell.
However I tried opening a new shell and it simply hangs:
- name: Open a new shell for the new groups
  shell: bash
Of course it hangs: the process never exits!
Same thing with newgrp
- name: Refresh the groups
  shell: newgrp
Because it basically does the same thing.
Any ideas?
Read the manual.
A solution here is to use the 'executable' parameter for either the 'command' or 'shell' modules.
So I tried using the command module like so:
- name: install ruby 1.9.3
  command: rvm install ruby-1.9.3-p448 executable=/bin/bash creates=/usr/local/rvm/bin/ruby-1.9.3-p448
  ignore_errors: true
But the playbook hung indefinitely. The manual states:
If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell module instead. The command module is much more secure as it's not affected by the user's environment.
So I tried using the shell module:
- name: install ruby 1.9.3
  shell: rvm install ruby-1.9.3-p448 executable=/bin/bash creates=/usr/local/rvm/bin/ruby-1.9.3-p448
  ignore_errors: true
And it works!
As others already stated, this is because of an active ssh connection to the remote host. The user needs to log out and log in again to activate the new group.
A separate shell action might be a solution for a single task. But if you want to run multiple other tasks and don't want to be forced to write all commands yourself and use the Ansible modules instead, kill the ssh connection.
- name: Killing all ssh connections of current user
  delegate_to: localhost
  shell: ssh {{ inventory_hostname }} "sudo ps -ef | grep sshd | grep `whoami` | awk '{print \"sudo kill -9\", \$2}' | sh"
  failed_when: false
Instead of using Ansible's open SSH connection, we start our own through a shell action, then kill all open SSH connections of the current user. This forces Ansible to log in again for the next task.
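As a side note, Ansible 2.3 and later has a built-in way to get the same effect without killing sshd by hand, the meta module's reset_connection action; a minimal sketch:
- name: Reset the SSH connection so the new group membership takes effect
  meta: reset_connection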
I have seen this problem with Capistrano and Chef. It happens because you already have a session as the user, which does not yet have the group; you need to close the session and open a new one for the user to see the group that was added.
I am on RHEL 7.0 using Ansible 1.8 and the accepted answer did not work for me. The only way I could force Ansible to load the newly added rvm group was to use sg.
- name: add user to rvm group
  user: name=ec2-user groups=rvm append=yes
  sudo: yes
- name: install ruby
  command: sg rvm -c "/usr/local/rvm/bin/rvm install ruby-2.0.0"
