Is there a way to reliably check that systemd supports --user? - linux

I'm trying to set up some user services using Ansible and systemd.
On Ubuntu and RHEL 7 I'm getting
# systemctl --user status
Failed to get D-Bus connection: Connection refused
For Ubuntu I tracked down the cause; it's this, from the Ansible documentation:
https://docs.ansible.com/ansible/latest/modules/systemd_module.html
run systemctl within a given service manager scope, either as the default system scope (system), the current user's scope (user), or the scope of all users (global).
For systemd to work with 'user', the executing user must have its own instance of dbus started (systemd requirement). The user dbus process is normally started during normal login, but not during the run of Ansible tasks. Otherwise you will probably get a 'Failed to connect to bus: no such file or directory' error.
Basically DBus needs to be started before systemd --user can work. I'm not sure how to do that either, but I can work around it in other ways, I think.
However, the main blocker right now is: how do I check, generically, for the availability of the functionality?
I tried systemctl show and there's no explicit "user" feature. Is the flag the "+PAM" in the Features line? I know that systemd uses PAM at least partially to implement user instances, but I don't know whether PAM is also required for other features.
How can I check that "my" systemd supports --user in a reliable manner? Is there a file I could check? A command? Something else? DBus voodoo?

It's not a matter of whether systemd supports --user (all reasonably recent versions do), but rather whether (a) a user session is currently running, and (b) your Ansible process can connect to it.
A solution for both problems is become_method: machinectl (see Ansible documentation), but it has issues on some systemd versions.
If that method doesn't work for you, there are workarounds. Typically an Ansible session does not create a user systemd instance; you need to log in locally for that to happen. However, you can enable lingering so that a systemd user instance is always running for that user.
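A minimal sketch of enabling lingering (it reuses the the_user variable from the tasks further down; the creates path is where logind records lingering on typical systems, which keeps the task idempotent):

- name: "Enable lingering so the user always has a systemd instance"
  become: true
  command: "loginctl enable-linger {{ the_user }}"
  args:
    creates: "/var/lib/systemd/linger/{{ the_user }}"  # skip if lingering is already enabled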
The second problem is connecting to that instance. This needs the XDG_RUNTIME_DIR environment variable to be set; typically to /run/user/<UID>. It's not set by the usual become_method: sudo, but you can use something along these lines to figure it out and pass it to the systemd task:
- name: "Find uid of user"
command: "id -u {{ the_user }}"
register: the_user_uid
check_mode: no # Run even in check mode, otherwise the playbook fails with --check.
changed_when: false
- name: "Determine XDG_RUNTIME_DIR"
set_fact:
xdg_runtime_dir: "/run/user/{{ the_user_uid.stdout }}"
changed_when: false
- name: "Enable some service"
become: true
become_user: "{{ the_user }}"
environment:
XDG_RUNTIME_DIR: "{{ xdg_runtime_dir }}"
systemd:
scope: user
daemon_reload: yes
name: the_service.service
enabled: yes
state: started
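To answer the "how do I check" part directly: once XDG_RUNTIME_DIR is known you can probe the user manager before relying on it. This is only a sketch built on the tasks above; is-system-running is a standard systemctl verb, but its exact output varies between systemd versions, so inspect the registered result rather than letting the task fail:

- name: "Check that the user's systemd instance is reachable"
  become: true
  become_user: "{{ the_user }}"
  environment:
    XDG_RUNTIME_DIR: "{{ xdg_runtime_dir }}"
  command: systemctl --user is-system-running
  register: user_systemd_check
  changed_when: false
  failed_when: false  # inspect user_systemd_check.rc / stdout in a later task instead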

Related

PostgreSQL 10: Postgres user /sbin/nologin preventing initdb setup

I'm attempting to install postgresql 10 for the first time and need to run the initdb setup. Unfortunately, this fails and returns an error from the nologin shell.
server# /usr/pgsql-10/bin/postgresql-10-setup initdb
Initializing database ...
failed, see /var/lib/pgsql/10/initdb.log
server# cat /var/lib/pgsql/10/initdb.log
This account is currently not available.
I strace'd the command and verified that the su calls are probably what's failing; the default login shell for the postgres user is /sbin/nologin. In the various examples I've seen, there is no mention of this being a possible issue. How would this work on any other system by default? I feel that temporarily modifying the login shell would work, but I want to understand this issue better, specifically from the application's end.
CentOS 7.8
SELinux mode: permissive
PostgreSQL 10

Run a command on remote servers

What would be the best way to run commands on remote servers? I am thinking of using SSH, but is there a better way than that?
I use Red Hat Linux and I want to run a command on one of the servers, specify which other servers it should run on, and have it do exactly the same thing on the servers specified. Puppet alone couldn't do it, but I might be able to combine some other tool with Puppet to do the job for me.
It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in `./server-list-script`; do
    echo "$server:"
    ssh username@$server mkdir /etc/dir/test123
done >logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.
Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: started
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'

Ansible cannot make dir /$HOME/.ansible/cp

I'm getting a very strange error when I run ansible:
GATHERING FACTS ***************************************************************
fatal: [i-0f55b6a4] => Could not make dir /$HOME/.ansible/cp: [Errno 13] Permission denied: '/$HOME'
TASK: [Task #1] ***************************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/ubuntu/install.retry
i-0f55b6a4 : ok=0 changed=0 unreachable=1 failed=0
Normally, this playbook runs without problems, but I've recently made some changes so that the program that calls Ansible is started from start-stop-daemon, so it will run as a service. The ultimate goal is to have a service that can run the playbook automatically whenever it deems it necessary.
The beginning of the playbook looks like this:
---
- hosts: w_vm:main
  sudo: True
  tasks:
    - name: Task #1
      ...
sudo is set to True so I'm somewhat certain that the error is not on the target machine.
The generated invocation of ansible-playbook looks like this:
ansible-playbook -i /tmp/ansible3397486563152037600.inventory \
    /home/ubuntu/playbooks/main_playbook.yml \
    -e @/home/ubuntu/extra_params.json
I'm not sure whether that Could not make dir /$HOME/.ansible/cp error is occurring on the machine running the service or on the remote machine, or why Ansible is trying to make a directory named $HOME in /. This only happens when the program that calls Ansible is started from the Linux service, not when it's invoked explicitly from the command line.
I've asked a more specific question here:
https://unix.stackexchange.com/questions/220841/start-stop-daemon-services-environment-variables-and-ansible
Try sudo chown -R YOUR_USERNAME /home/YOUR_USERNAME/.ansible
Late to answer, but it might be useful to someone. Check the ownership of ~/.ansible: the ownership of .ansible on the local machine (the one running Ansible, i.e. the controller node) might be causing the problem. Run "chown -R username:groupname .ansible" (username:groupname should be that of the user running the playbook) and try to run the playbook again.
Alternatively, remove the .ansible directory from the controller node and rerun the playbook.
Ansible creates temporary files in ~/.ansible both on your local machine and on the remote machine, so theoretically this could be triggered from either side.
My guess is that it is on the local machine where Ansible runs, since how Ansible was started should not have an effect on the target boxes. A quick search showed that programs started with start-stop-daemon do not have $HOME (or any environment at all) available, but it has a -e option to set variables according to your needs.
If -e is unavailable, see this answer, which suggests additionally exec'ing /usr/bin/env to set environment variables.
I ran into a similar issue using Jenkins. It had a default $HOME env var set to /root/. The solution was to inject the environment variable at runtime.
HOME=/path/to/your/users/home

What user will Ansible run my commands as?

Background
My question seems simple, but it gets more complex really fast.
Basically, I got really tired of maintaining my servers manually (screams in background) and I decided it was time to find a way to make being a server admin much more liveable. That's when I found Ansible. Great huh? Sure beats making bash scripts (louder scream) for everything I wanted to automate.
What's the problem?
I'm having a lot of trouble figuring out what user my Ansible playbook will run certain things as. I also need the ability to specify what user certain tasks will run as. Here are some specific use cases:
Cloning a repo as another user:
My purpose with this is to run my node.js webapp as another user, whom we'll call bill (who can only use sudo to run a script I made that starts the node server, as opposed to root or my user, which can use sudo for all commands). To do this, I need the ability to have Ansible's git module clone my git repo as bill. How would I do that?
Knowing how Ansible will gain root:
As far as I understand, you can set what user Ansible will use to connect to the server you're maintaining by defining 'user' at the beginning of the playbook file. Here's what I don't understand: if I tell it to connect via my username, joe, and ask it to update a package via the apt module, how will it gain root? Sudo usually prompts me for my password, and I'd prefer keeping it that way (for security).
Final request
I've scoured the Ansible docs, done some (what I thought was thorough) Googling, and generally just tried to figure it out on my own, but this information continues to elude me.
I am very new to Ansible, and while it's mostly straightforward, I would benefit greatly from understanding exactly how Ansible runs, as which users it runs things, and how/where I can specify which user to use at different times.
Thank you tons in advance
You may find it useful to read the Hosts and Users section on Ansible's documentation site:
http://docs.ansible.com/playbooks_intro.html#hosts-and-users
In summary, Ansible will run all commands in a playbook as the user specified in the remote_user variable (assuming you're using Ansible >= 1.4; it was user before that). You can specify this variable on a per-task basis as well, in case a task needs to run as a certain user.
Use sudo: true in any playbook/task to use sudo to run it. Use the sudo_user variable to specify a user to sudo to if you don't want to use root.
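As an illustrative sketch only (joe and bill are the users from the question; the repository URL and paths are made up, and the syntax follows the sudo/sudo_user style of that Ansible era), the two use cases could look roughly like this:

- hosts: webservers
  remote_user: joe      # Ansible connects over SSH as joe
  tasks:
    - name: Clone the webapp repo as bill
      sudo: true
      sudo_user: bill   # this task runs as bill, not root
      git: repo=https://example.com/me/webapp.git dest=/home/bill/webapp
    - name: Update a package as root
      sudo: true        # run ansible-playbook with -K to be prompted for the sudo password
      apt: name=nodejs state=latest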
In practice, I've found it easiest to run my playbook as a deploy user that has sudo privileges. I set up my SSH keys so I can SSH into any host as deploy without using a password. This means that I can run my playbook without using a password and even use sudo if I need to.
I use this same user to do things like cloning git repos and starting/stopping services. If a service needs to run as a lower-privileged user, I let the init script take care of that. A quick Google search for a node.js init.d script revealed this one for CentOS:
https://gist.github.com/nariyu/1211413
Doing things this way helps to keep it simple, which I like.
Hope that helps.
My 2 cents:
1. Ansible uses your local user (e.g. Mike) to SSH to the remote machine. (That requires Mike to be able to SSH to the machine.)
2. From there it can change to a remote user if needed.
3. It can also sudo if needed and if Mike is allowed to. If no user is specified, then root will be selected via your ~/.ansible.cfg on your local machine.
4. If you supply a remote_user along with the sudo param then, as in point 3, it will not use root but that user.
5. You can specify different situations and different users or sudo via the playbooks.
Playbooks define which roles will be run on each machine that belongs to the selected inventory.
I suggest you read Ansible best practices for some explanation on how to set up your infrastructure.
Oh, and by the way: since you are not referring to a specific Ansible module and your question is not related to Python, I don't see much point in your question having the python tag.
Just a note that Ansible >= 1.9 supports privilege escalation (become), so you can execute tasks and create resources as a secondary user if need be:
- name: Install software
  shell: "curl -s get.dangerous_software.install | sudo bash"
  become_user: root
https://ansible-docs.readthedocs.io/zh/stable-2.0/rst/become.html
I notice current answers are a bit old and suffering from link rot.
Ansible will SSH as your current user, by default:
https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html#connecting-to-remote-nodes
Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
This can be overridden using:
passing the -u parameter at the command line
setting user information in your inventory file
setting user information in your configuration file
setting environment variables
But then you must ensure a route exists to SSH as that user. An approach to maintaining user-level ownership that I see more often is to become root and then chown -R jdoe:jdoe /the/file/path.
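A hedged sketch of that pattern (jdoe and /the/file/path are just the placeholders from the sentence above):

- name: Fix ownership after creating files as root
  become: true
  file:
    path: /the/file/path
    state: directory
    owner: jdoe
    group: jdoe
    recurse: yes  # apply ownership to everything underneath as well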
In my 2.12 release of Ansible I found the only way I could change the user was by specifying become: yes at the play level. That way I am SSHing as the unprivileged default user. This user must have passwordless sudo enabled on the remote, which is about the safest I could make my VPS. From there I could then switch to another user, with become_user, in an arbitrary command task.
Like this:
- name: Getting Started
  gather_facts: false
  hosts: all
  become: yes  # All tasks that follow will become root.
  tasks:
    - name: get the username running the deploy
      command: echo $USER
      become_user: trubuntu  # From root we can switch to trubuntu.
If the user permitted SSH access to your remote is, say, victor, and not your current user, then remote_user: victor has a place at the play level, adjacent to become: yes.
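For example, a minimal variation of the play above with that change (victor being the hypothetical SSH user from the previous sentence):

- name: Getting Started
  gather_facts: false
  hosts: all
  remote_user: victor  # SSH to the remote as victor instead of the current local user
  become: yes          # then escalate with sudo as before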

Ansible Permissions Issue

I'm trying to add the current user to a group in the system, then execute a command that requires permission for that group. My playbook is like so:
- name: Add this user to RVM group
  sudo: true
  user: state=present name=vagrant append=yes groups=rvm group=rvm

- name: Install Ruby 1.9.3
  command: rvm install ruby-1.9.3-p448 creates=/usr/local/rvm/bin/ruby-1.9.3-p448
The problem is that all of this is happening in the same shell. vagrant's shell hasn't been updated with the new groups yet. Is there a clean way to refresh the user's current groups in Ansible? I figure I need to get it to re-connect or open a new shell.
However I tried opening a new shell and it simply hangs:
- name: Open a new shell for the new groups
  shell: bash
Of course it hangs: the process never exits!
Same thing with newgrp
- name: Refresh the groups
  shell: newgrp
Because it basically does the same thing.
Any ideas?
Read the manual.
A solution here is to use the 'executable' parameter for either the 'command' or 'shell' modules.
So I tried using the command module like so:
- name: install ruby 1.9.3
  command: rvm install ruby-1.9.3-p448 executable=/bin/bash creates=/usr/local/rvm/bin/ruby-1.9.3-p448
  ignore_errors: true
But the playbook hung indefinitely. The manual states:
If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell module instead. The command module is much more secure as it's not affected by the user's environment.
So I tried using the shell module:
- name: install ruby 1.9.3
  shell: rvm install ruby-1.9.3-p448 executable=/bin/bash creates=/usr/local/rvm/bin/ruby-1.9.3-p448
  ignore_errors: true
And it works!
As others already stated, this is because of an active ssh connection to the remote host. The user needs to log out and log in again to activate the new group.
A separate shell action might be a solution for a single task. But if you want to run multiple other tasks, and don't want to be forced to write all the commands yourself instead of using the Ansible modules, kill the SSH connection.
- name: Killing all ssh connections of current user
  delegate_to: localhost
  shell: ssh {{ inventory_hostname }} "sudo ps -ef | grep sshd | grep `whoami` | awk '{print \"sudo kill -9\", \$2}' | sh"
  failed_when: false
Instead of using Ansible's open SSH connection, we start our own through a shell action. Then we kill all open SSH connections of the current user. This will force Ansible to log in again for the next task.
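On newer Ansible releases (2.3 and later), meta: reset_connection achieves the same effect without shelling out to ssh. This is only a sketch reusing the vagrant/rvm example from the question; on older versions the kill approach above may still be required:

- name: Add vagrant to the rvm group
  become: true
  user:
    name: vagrant
    groups: rvm
    append: yes

# Drop the persistent SSH connection; the next task logs in again
# and picks up the new group membership.
- meta: reset_connection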
I have seen this problem with Capistrano and Chef as well. It happens because you already have a session as the user, and that session does not have the new group yet; you need to close the session and open a new one for the user to see the group that was added.
I am on RHEL 7.0 using Ansible 1.8 and the accepted answer did not work for me. The only way I could force Ansible to load the newly added rvm group was to use sg.
- name: add user to rvm group
  user: name=ec2-user groups=rvm append=yes
  sudo: yes

- name: install ruby
  command: sg rvm -c "/usr/local/rvm/bin/rvm install ruby-2.0.0"
