How to set default user for mounted folder?

When I put this in top.sls:
/var/www:
  file.directory:
    - user: {{ pillar['user'] }}
    - group: www-data
    - mode: 755
    - makedirs: True
It creates the /var/www directory with the defined permissions, which is fine.
So basically the ownership is user:www-data.
But when I try to mount that folder from my Mac, the problem shows up:
the owner and group become 501:dialout.
Here is the code I use:
/var/www:
{% if pillar['sshfs_www'] %}
  file.directory:
    - mode: 755
    - follow_symlinks: False
    - group: www-data
    - makedirs: True
  mount:
    - user: {{ pillar['user'] }}
    - mounted
    - device: sshfs#{{ pillar['sshfs_www'] }}
    - fstype: fuse
    - opts: nonempty,allow_other,auto
{% else %}
  file.directory:
    - mode: 755
    - group: www-data
    - makedirs: True
{% endif %}
Not only are the user and group not set as I specified, I also get the error: Failed to change user to myuser
How can I mount with my user and group?
Thank you

I hope this will help other users solve their permission problems when mounting with Salt.
Here is how I solved it.
First I manually set the uid and gid for the user and group:
{{ pillar['user'] }}:
  user.present:
    - shell: /bin/bash
    - home: /home/{{ pillar['user'] }}
    - uid: 4000
    - gid: 4000
    - require_in:
      - file: /home/{{ pillar['user'] }}/.ssh/id_rsa
      - file: /home/{{ pillar['user'] }}/.ssh/authorized_keys

www-data:
  group.present:
    - gid: 4000
    - system: True
    - members:
      - {{ pillar['user'] }}
After that, in the mount part, I set the uid and gid via the mount options: uid=4000,gid=4000
/var/www:
{% if pillar['sshfs_www'] %}
  mount:
    - user: {{ pillar['user'] }}
    - mounted
    - device: sshfs#{{ pillar['sshfs_www'] }}
    - fstype: fuse
    - opts: nonempty,allow_other,auto,uid=4000,gid=4000
{% else %}
  file.directory:
    - mode: 755
    - group: www-data
    - makedirs: True
{% endif %}

Citing Sven from a Server Fault answer:
You can't. That's a limitation of SSHFS/Fuse: Everything is mapped to the permission of the user you use to connect with SSH by default.
However, it appears you can work around this a bit with idmap files, see the options -o idmap, -o uidfile, -o gidfile and -o nomap in the man page.
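For reference, here is a sketch of what the idmap-file workaround could look like in the same Salt mount state; the uidfile/gidfile paths are placeholders I made up, not part of the original answer:
/var/www:
  mount.mounted:
    - device: sshfs#{{ pillar['sshfs_www'] }}
    - fstype: fuse
    # idmap=file maps remote uids/gids through the listed map files instead of
    # mapping everything to the connecting SSH user; nomap=ignore keeps unmapped ids as-is
    - opts: nonempty,allow_other,auto,idmap=file,uidfile=/etc/sshfs.uidmap,gidfile=/etc/sshfs.gidmap,nomap=ignore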

Related

Ansible jinja dictionary create multiple sudo files for each user

I'm trying to create a sudo file for each user.
Playbook:
- name:
  hosts: all
  gather_facts: false
  tasks:
    - name:
      template:
        src: sudo.j2
        dest: "/etc/sudoers.d/{{item.name}}"
      loop: "{{userinfo}}"
      when: "'admins' in item.groupname"
Var file:
userinfo:
  - groupname: admins
    name: bill
  - groupname: admins
    name: bob
  - groupname: devs
    name: bea
Jinja file:
{% for item in userinfo %}
{% if item.groupname=="admins" %}
{{item.name}} ALL=ALL NOPASSWD:ALL
{% endif %}
{% endfor %}
What I am getting is two files, but each contains the information of both users:
bill ALL=ALL NOPASSWD:ALL
bob ALL=ALL NOPASSWD:ALL
How do I make it work so that each file contains the information of that user only?
The issue is that you have two loops: one in the playbook and one in the Jinja template file. Leave only the templated information in the template file:
{{ item.name }} ALL=ALL NOPASSWD:ALL
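Alternatively, a minimal sketch (not from the original answer): skip the template file entirely and render the single line with the copy module's content parameter; the loop and when stay the same:
- name: Create one sudoers file per admin user
  copy:
    content: "{{ item.name }} ALL=ALL NOPASSWD:ALL\n"   # one line per generated file
    dest: "/etc/sudoers.d/{{ item.name }}"
    mode: "0440"
    validate: 'visudo -cf %s'
  loop: "{{ userinfo }}"
  when: "'admins' in item.groupname"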

If condition does not work in state triggered by reactor

I am using an if condition based on a grain item inside a state that is triggered by the reactor,
and I get the error message: Jinja variable 'dict object' has no attribute 'environment'
=================================================
REACTOR config:
cat /etc/salt/master.d/reactor.conf
reactor:
  - 'my/custom/event':
      - salt://reactor/test.sls
==============================
test.sls
cat /srv/salt/reactor/test.sls
sync_grains:
  local.saltutil.sync_grains:
    - tgt: {{ data['id'] }}
{% if grains['environment'] in ["prod", "dev", "migr"] %}
test_if_this_works:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - dummy_state
{% endif %}
===================================
dummy_state/init.sls
cat /srv/salt/dummy_state/init.sls
create_a_directory:
  file.directory:
    - name: /tmp/my_test_dir
    - user: root
    - group: root
    - makedirs: True
=================================================
salt 'salt-redhat-23.test.local' grains.item environment
salt-redhat-23.test.local:
    ----------
    environment:
        prod
=================================================
salt-redhat-23 ~]# cat /etc/salt/grains
role: MyServer
environment: prod
================================================
If I change test.sls to use a grain that the salt-master provides by default instead of the custom grain, it works. It also works without the if condition in the state.
Do you know why this is happening?
Thank you all in advance.
Issue resolved.
You cannot use custom minion grains in a reactor SLS directly: the reactor file is rendered on the salt-master, so grains there refers to the master's grains, which do not include the minion's custom environment grain. You need to call another state from the reactor and put the condition there.
For instance:
cat /etc/salt/master.d/reactor.conf
reactor:
  - 'my/custom/event':
      - salt://reactor/test.sls

test.sls
# run a state using reactor
test_if_this_works:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - reactor.execute
execute.sls
{% set tst = grains['environment'] %}
{% if tst in ['prod', 'dev', 'test', 'migr'] %}
create_a_directory:
  file.directory:
    - name: /tmp/my_test_dir
    - user: root
    - group: root
    - makedirs: True
{% endif %}
This works with the if condition; if you try to add the if statement to test.sls itself, it will not work.
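As a further option (a sketch, not part of the original resolution): since targeting is evaluated against minion grains, you can let a compound target do the filtering instead of a Jinja condition, combining the event's minion id with a grain match:
test_if_this_works:
  local.state.apply:
    - tgt: 'L@{{ data["id"] }} and G@environment:prod'
    - tgt_type: compound
    - arg:
      - dummy_state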

How to exclude filesystems with ansible

I'm writing a playbook to change file and folder permissions on a Linux server.
So far it is working and looks like this:
- name: Playbook to change file and directory permissions
  hosts: all
  become: yes
  vars:
    DIR: '{{ target_dir }}'
    FILE: '{{ target_file }}'
    PERMISSIONS: '{{ number }}'
    OWNER: '{{ target_owner }}'
    GROUP: '{{ target_group }}'
  tasks:
    - name: Checking if the directory exists
      stat:
        path: '{{ DIR }}'
      register: dir_status
    - name: Checking if the file exists
      stat:
        path: '{{ FILE }}'
      register: file_status
    - name: Report if directory exists
      debug:
        msg: "Directory {{ DIR }} is present on the server"
      when: dir_status.stat.exists and dir_status.stat.isdir
    - name: Report if file exists
      debug:
        msg: "File {{ FILE }} is present on the server"
      when: file_status.stat.exists
    - name: Applying new permissions
      file:
        path: '{{ DIR }}/{{ FILE }}'
        state: file
        mode: '0{{ PERMISSIONS }}'
        owner: '{{ OWNER }}'
        group: '{{ GROUP }}'
But what I need is this: if the user who runs the playbook in Rundeck wants to change permissions on one of the /boot, /var, /etc, /tmp or /usr directories, Ansible should not attempt it and should throw an error message instead.
How can I do that?
I understand your question as: you would like to fail with a custom message when the variable DIR contains one of the values /boot, /var, /etc, /tmp or /usr.
To do so you may use
- name: You can't work on {{ DIR }}
  fail:
    msg: The system may not work on {{ DIR }} according ...
  when: DIR in ["/boot", "/var", "/etc", "/tmp", "/usr"]
There is also the meta module, which can end_play when a condition is met.
tasks:
  - meta: end_play
    when: DIR in ["/boot", "/var", "/etc", "/tmp", "/usr"]
Both fail and end_play can be combined with different variables for other use cases, e.g.
when: "'download' in ansible_run_tags or 'unpack' in ansible_run_tags"
when: ( "DMZ" not in group_names )
Thanks to
Run an Ansible task only when the variable contains a specific string
Ansible - Execute task when variable contains specific string
Please take note that you are constructing the full path by concatenating {{ DIR }}/{{ FILE }} at the end. The simple approach mentioned above will not handle an empty DIR together with a FILE name that already includes the path. Test cases could be
DIR: ""
FILE: "/tmp/test"
DIR: "/"
FILE: "tmp/test"
Maybe you would like to perform the test on the full file path instead, or test what a variable begins with.
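A minimal sketch of that idea, assuming the same {{ DIR }}/{{ FILE }} concatenation as in the file task above (the regex and the slash normalization are illustrative assumptions, not from the original answer):
- name: Refuse to touch protected paths
  fail:
    msg: "The system may not work on {{ DIR }}/{{ FILE }}"
  # collapse duplicate slashes, then test what the resulting path begins with
  when: ((DIR ~ '/' ~ FILE) | regex_replace('/+', '/')) is match('^/(boot|var|etc|tmp|usr)(/|$)')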
With respect to the comments from Zeitounator and seshadri-c, you may also try the approach of the assert module:
- name: Check for allowed directories
  assert:
    that:
      - DIR not in ["/boot", "/etc", "/var", "/tmp", "/usr"]
    quiet: true
    fail_msg: "The system may not work on {{ DIR }} according ..."
    success_msg: "Path is OK."

Ansible playbook loop with with_items

I have to update multiple user files under sudoers.d with a few lines/commands using an Ansible playbook.
users.yml
user1:
  - Line1111
  - Line2222
  - Line3333
user2:
  - Line4444
  - Line5555
  - Line6666
main.yml
- hosts: "{{ host_group }}"
  vars_files:
    - ../users.yml
  tasks:
    - name: Add user "user1" to sudoers.d
      lineinfile:
        path: /etc/sudoers.d/user1
        line: '{{ item }}'
        state: present
        mode: 0440
        create: yes
        validate: 'visudo -cf %s'
      with_items:
        - "{{ user1 }}"
The above works only for user1.
If I also want to include user2, how do I change the file name in path: /etc/sudoers.d/user1?
I tried the below and it's not working:
Passing the users below as a variable to main.yml at run time:
users:
  - "user1"
  - "user2"
- name: Add user "{{users}}" to sudoers.d
  lineinfile:
    path: /etc/sudoers.d/{{users}}
    line: '{{ item }}'
    state: present
    mode: 0440
    create: yes
    validate: 'visudo -cf %s'
  with_items:
    - "{{ users }}"
So, basically, I want to pass user1 and user2 in the {{ users }} variable, take each user's lines from users.yml, and add them to the respective user files (/etc/sudoers.d/user1 and /etc/sudoers.d/user2).
So /etc/sudoers.d/user1 should look like
Line1111
Line2222
Line3333
and /etc/sudoers.d/user2 should look like
Line4444
Line5555
Line6666
Try to add quotes:
users:
  - "user1"
  - "user2"

- name: "Add user {{users}} to sudoers.d"
  lineinfile:
    path: "/etc/sudoers.d/{{users}}"
    line: "{{ item }}"
    state: present
    mode: 0440
    create: yes
    validate: 'visudo -cf %s'
  with_items:
    - "{{ users }}"
As per the Ansible documentation on Using Variables:
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax documentation.
This won’t work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
Do it like this and you’ll be fine:
- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"
cat users.yml
---
users_list:
  - user1:
    filename: user1sudoers
    args:
      - Line1111
      - Line2222
      - Line3333
  - user2:
    filename: user2sudoers
    args:
      - Line4444
      - Line5555
      - Line6666
I use template here instead of lineinfile.
cat sudoers.j2
{% if item.args is defined and item.args %}
{% for arg in item.args %}
{{ arg }}
{% endfor %}
{% endif %}
The task content:
---
- hosts: localhost
  vars_files: ./users.yml
  tasks:
    - name: sync sudoers.j2 to localhost
      template:
        src: sudoers.j2
        dest: "/tmp/{{ item.filename }}"
      loop: "{{ users_list }}"
      when: "users_list is defined and users_list"
After running the playbook, two files are generated under the /tmp directory:
cat /tmp/user1sudoers
Line1111
Line2222
Line3333
cat /tmp/user2sudoers
Line4444
Line5555
Line6666
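For completeness, a sketch of an alternative that keeps the original flat users.yml idea and still writes one file per user with lineinfile. It assumes the per-user lines are grouped under a single dict (here called sudoers_lines, an invented name) so they can be looped with dict2items and subelements:
sudoers_lines:
  user1:
    - Line1111
    - Line2222
    - Line3333
  user2:
    - Line4444
    - Line5555
    - Line6666

- name: Add each user's lines to their own sudoers.d file
  lineinfile:
    path: "/etc/sudoers.d/{{ item.0.key }}"   # item.0 is the {key, value} pair from dict2items
    line: "{{ item.1 }}"                      # item.1 is one line for that user
    state: present
    mode: 0440
    create: yes
    validate: 'visudo -cf %s'
  loop: "{{ sudoers_lines | dict2items | subelements('value') }}"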

How do I create a ZFS filesystem/zpool with Ansible using zfs-linux

I want the equivalent of the following using the zfs module in Ansible. The following works on the command line, but fails on the second run because the filesystem already exists.
{{ part_postgres }} is set to /dev/sdb in this instance.
zpool create -O compression=gzip postgres {{ part_postgres }} -O secondarycache=all
Currently in Ansible I have:
- name: Create postgres zpool
  zfs: name=postgres{{ part_postgres }}
       compression=gzip
       state=present
       secondarycache=all
       mountpoint=/postgres
       atime=off
OK, the zfs module won't do it; a new module would be needed for zpool. That said, it's easy enough to check whether the zpool exists using the creates argument of the command module in Ansible:
- name: Create postgres zpool
  command: zpool create -O compression=gzip postgres /dev/sdb -o ashift=12 -O secondarycache=all
  args:
    creates: /postgres
This will check if /postgres exists, and only run the command if it doesn't.
Here is another example:
- hosts: all
  vars:
    zfs_pool_name: data
    zfs_pool_mountpoint: /mnt/data
    zfs_pool_mode: mirror
    zfs_pool_devices:
      - sda
      - sdb
    zfs_pool_state: present
    zfs_pool_options:
      - "ashift=12"
  tasks:
    - name: check ZFS pool existence
      command: zpool list -Ho name {{ zfs_pool_name }}
      register: result_pool_list
      ignore_errors: yes
      changed_when: false
    - name: create ZFS pool
      command: >-
        zpool create
        {{ '-o' if zfs_pool_options else '' }} {{ zfs_pool_options | join(' -o ') }}
        {{ '-m ' + zfs_pool_mountpoint if zfs_pool_mountpoint else '' }}
        {{ zfs_pool_name }}
        {{ zfs_pool_mode if zfs_pool_mode else '' }}
        {{ zfs_pool_devices | join(' ') }}
      when:
        - zfs_pool_state | default('present') == 'present'
        - result_pool_list.rc == 1
    - name: destroy ZFS pool
      command: zpool destroy {{ zfs_pool_name }}
      when:
        - zfs_pool_state | default('present') == 'absent'
        - result_pool_list.rc == 0
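As a follow-up sketch (assuming the postgres pool already exists, e.g. created by one of the command tasks above, and that the community.general collection is installed), the zfs module can then manage the dataset properties from the original question idempotently:
- name: Manage properties on the postgres pool's root dataset
  community.general.zfs:
    name: postgres                # a dataset name, not a device
    state: present
    extra_zfs_properties:
      compression: gzip
      secondarycache: all
      atime: "off"                # quoted so YAML keeps it as a string
      mountpoint: /postgres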

Resources