Move/copy all files in a directory with Ansible [duplicate] - linux

This question already has answers here:
Ansible: copy a directory content to another directory
(12 answers)
Closed 5 years ago.
How can I move or copy all the files in a directory to my localhost or another remote host with Ansible?
This question applies to both Linux and Windows systems.
What I've got so far:
- hosts: all
  tasks:
    - name: list the files in the folder
      command: ls /dir/
      register: dir_out

    - name: do the action
      fetch: src=/dir/{{item}} dest=/second_dir/ flat=yes
      with_items: ('{{dir_out.stdout_lines}}')
The output is as follows:
TASK [setup] *******************************************************************
ok: [remote_host]
TASK [list the files in the folder] ********************************************
changed: [remote_host]
TASK [move those files] ********************************************************
ok: [remote_host] => (item=('[u'file10003', u'file10158', u'file1032', u'file10325', u'file10630', u'file10738', u'file10818', u'file10841', u'file10980', u'file11349', u'file11589', u'file11744', u'file12003', u'file12008', u'file12234', u'file12734', u'file12768', u'file12774', u'file12816', u'file13188', u'file13584', u'file14560', u'file15512', u'file16020', u'file16051', u'file1610', u'file16610', u'file16642', u'file16997', u'file17233', u'file17522', u'file17592', u'file17908', u'file18149', u'file18311', u'file18313', u'file18438', u'file185', u'file18539', u'file18777', u'file18808', u'file18878', u'file18885', u'file19313', u'file19755', u'file19863', u'file20158', u'file20347', u'file2064', u'file20840', u'file21123', u'file21422', u'file21425', u'file21711', u'file21770', u'file21790', u'file21808', u'file22054', u'file22359', u'file22601', u'file23609', u'file23763', u'file24208', u'file24430', u'file24452', u'file25028', u'file25131', u'file25863', u'file26197', u'file26384', u'file26398', u'file26815', u'file27025', u'file27127', u'file27373', u'file2815', u'file28175', u'file28780', u'file28886', u'file29058', u'file29096', u'file29456', u'file29513', u'file29677', u'file29836', u'file30034', u'file30216', u'file30464', u'file30601', u'file30687', u'file30795', u'file31299', u'file31478', u'file31883', u'file31908', u'file32251', u'file3229', u'file32724', u'file32736', u'file3498', u'file4173', u'file4235', u'file4748', u'file4883', u'file5812', u'file6126', u'file6130', u'file6327', u'file6462', u'file6624', u'file6832', u'file7576', u'file8355', u'file8693', u'file8726', u'file8838', u'file8897', u'file9112', u'file9331', u'file993']'))
PLAY RECAP *********************************************************************
remote_host : ok=3 changed=1 unreachable=0 failed=0
It got all the files from the directory, I guess because of the with_items (though I don't know what the "u" stands for), but the second directory on my localhost remains empty.
Any suggestions?

You may have more luck with the synchronize module.
The example below pulls a directory from the inventory host to localhost:
- synchronize:
    mode: pull
    src: "/dir/"
    dest: "/second_dir/"
Additional info based on comment: Here is how you would delete the source files after transferring them:
- synchronize:
    mode: pull
    src: "/dir/"
    dest: "/second_dir/"
    rsync_opts:
      - "--remove-source-files"

--- # copying yaml file
- hosts: localhost
  user: "{{ user }}"
  connection: ssh
  become: yes
  gather_facts: no
  tasks:
    - name: Creation of directory on remote server
      file:
        path: /var/lib/jenkins/.aws
        state: directory
        mode: "0755"
      register: result

    - debug:
        var: result

    - name: get file names to copy
      command: "find conf/.aws -type f"
      register: files_to_copy

    - name: copy files
      copy:
        src: "{{ item }}"
        dest: "/var/lib/jenkins/.aws"
        owner: "{{ user }}"
        group: "{{ group }}"
        remote_src: True
        mode: "0644"
      with_items:
        - "{{ files_to_copy.stdout_lines }}"

Related

Fetch files and remove them from source if successful

I've been using Ansible to fetch files from Windows nodes to a Linux node for some time with good results.
I would now like the nodes to remove fetched files once they have uploaded successfully.
However, since I'm fetching from lots of endpoints in various states, some files occasionally fail to transfer - and I'm having trouble using Ansible to skip those files, and those files only.
Here's what I have so far:
- name: Find .something files
  ansible.windows.win_find:
    paths: 'C:\Some\Path'
    patterns: [ '*.something' ]
    recurse: yes
    age_stamp: ctime
    age: -1w
  register: found_files

- name: Create destination directory
  file:
    path: "/some/path/{{inventory_hostname}}/"
    state: directory
  delegate_to: localhost

- name: Fetch .something files
  fetch:
    src: "{{ item.path }}"
    dest: "/some/path/{{inventory_hostname}}/"
    flat: yes
    validate_checksum: no
  with_items: "{{ found_files.files }}"
  register: item.sync_result
  ignore_errors: yes

- debug:
    msg: "Would remove {{ item.path }}"
  when: sync_result is succeeded
  with_items: "{{ found_files.files }}"
The problem is, the sync_result variable seems to apply to each node instead of each file - that is, if one file has failed to transfer, no files will be deleted.
I've tried various loops and lists and could not get it to work.
Any pointers would be greatly appreciated.
In a nutshell:
- name: Find .something files
  ansible.windows.win_find:
    paths: 'C:\Some\Path'
    patterns: [ '*.something' ]
    recurse: yes
    age_stamp: ctime
    age: -1w
  register: find_something

- name: Create destination directory
  file:
    path: "/some/path/{{ inventory_hostname }}/"
    state: directory
  delegate_to: localhost

- name: Fetch .something files
  fetch:
    src: "{{ item.path }}"
    dest: "/some/path/{{ inventory_hostname }}/"
    flat: yes
    validate_checksum: no
  loop: "{{ find_something.files }}"
  register: fetch_sync
  ignore_errors: yes

- name: Delete successfully fetched files
  file:
    path: "{{ item.file }}"
    state: absent
  loop: "{{ fetch_sync.results | select('succeeded') }}"
  # If you are using ansible < 2.10 you need to cast to list i.e.
  # loop: "{{ fetch_sync.results | select('succeeded') | list }}"
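Since the source files in this question live on Windows nodes (they were found with win_find), deleting them there would go through ansible.windows.win_file rather than file; a hedged variation of the last task:

- name: Delete successfully fetched files on the Windows node (sketch)
  ansible.windows.win_file:
    path: "{{ item.file }}"
    state: absent
  loop: "{{ fetch_sync.results | select('succeeded') }}"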

How to exclude filesystems with ansible

I'm writing a playbook to change file and folder permissions on a Linux server.
Until now it has been working and looks like this:
- name: Playbook to change file and directory permissions
  hosts: all
  become: yes
  vars:
    DIR: '{{ target_dir }}'
    FILE: '{{ target_file }}'
    PERMISSIONS: '{{ number }}'
    OWNER: '{{ target_owner }}'
    GROUP: '{{ target_group }}'
  tasks:
    - name: Checking if the directory exists
      stat:
        path: '{{ DIR }}'
      register: dir_status

    - name: Checking if the file exists
      stat:
        path: '{{ FILE }}'
      register: file_status

    - name: Report if directory exists
      debug:
        msg: "Directory {{ DIR }} is present on the server"
      when: dir_status.stat.exists and dir_status.stat.isdir

    - name: Report if file exists
      debug:
        msg: "File {{ FILE }} is present on the server"
      when: file_status.stat.exists

    - name: Applying new permissions
      file:
        path: '{{ DIR }}/{{ FILE }}'
        state: file
        mode: '0{{ PERMISSIONS }}'
        owner: '{{ OWNER }}'
        group: '{{ GROUP }}'
But what I need is this: if the user who runs the playbook from Rundeck tries to change permissions on one of the /boot, /var, /etc, /tmp or /usr directories, Ansible should refuse to do it and throw an error message instead.
How can I do that?
I understand your question to mean that you would like to fail with a custom message when the variable DIR contains one of the values /boot, /var, /etc, /tmp or /usr.
To do so you may use:
- name: You can't work on {{ DIR }}
  fail:
    msg: The system may not work on {{ DIR }} according ...
  when: '"/boot" in DIR or "/var" in DIR or "/etc" in DIR or "/tmp" in DIR or "/usr" in DIR'
There is also the meta module, which can end_play when conditions are met.
tasks:
  - meta: end_play
    when: '"/boot" in DIR or "/var" in DIR or "/etc" in DIR or "/tmp" in DIR or "/usr" in DIR'
You can combine both fail and end_play with different variables for certain use cases:
when: "'download' in ansible_run_tags or 'unpack' in ansible_run_tags"
when: ( "DMZ" not in group_names )
Thanks to
Run an Ansible task only when the variable contains a specific string
Ansible - Execute task when variable contains specific string
Please take note that you are constructing the full path by concatenating {{ DIR }}/{{ FILE }} at the end. The simple approach mentioned above will not handle an empty DIR, or a FILE name that already includes the path. Test cases could be:
DIR: ""
FILE: "/tmp/test"
DIR: "/"
FILE: "tmp/test"
Maybe you would like to perform the test on the full file path instead, or test what a variable begins with.
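A minimal sketch of such a test on the concatenated path (full_path is a helper variable introduced here for illustration; an empty DIR would still need extra normalization):

- name: Refuse to work on protected filesystems (sketch)
  fail:
    msg: "The system may not work on {{ full_path }}"
  vars:
    full_path: "{{ DIR }}/{{ FILE }}"
  when: full_path is match('^(/boot|/var|/etc|/tmp|/usr)(/|$)')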
In respect to the comments from Zeitounator and seshadri-c, you may also try the approach of the assert module:
- name: Check for allowed directories
  assert:
    that:
      - DIR not in ["/boot", "/etc", "/var", "/tmp", "/usr"]
    quiet: true
    fail_msg: "The system may not work on {{ DIR }} according ..."
    success_msg: "Path is OK."

How to rename a file in ansible

I need to rename hosts_example or hosts_Example to hosts_real if either of the files exists.
- name: Playbook to Standardize Hosts
  hosts: test
  vars:
    destpath: /etc/hosts_real
    filename: [ /etc/hosts_example, /etc/hosts_Example ]
  tasks:
    - name: Check if file exists
      stat:
        path: "{{ item }}"
      with_items:
        - "{{ filename }}"
      register: check_file_name

    - debug:
        msg: "{{check_file_name}}"

    - name: Rename file
      command: mv "{{ item }}" "{{destpath}}"
      with_items:
        - "{{ check_file_name.results }}"
      when: item.stat.exists == true
I tried this but I am getting errors and am not able to achieve the desired result.
- name: replace file
  shell: mv /local/oracle/12.2/oldora.ora /local/oracle/12.2/tnsnames.ora
  become: yes
  become_user: appuser
This works for us; update the path in the shell command to match yours. I don't think the file module has a rename option. If you have a variable in your shell line, put the whole line in quotes, like:
shell: "mv {{ path_1 }} {{ path_2 }}"

How ansible can call multiple files under path option as a variable

I am just learning Ansible and trying to understand how I can include multiple files in the path option of the Ansible replace module.
I have three files where I need to replace an old hostname with a new hostname.
The files are:
- /etc/hosts
- /etc/hosts.custom
- /etc/hosts-backup
The simple play below works fine:
- name: Replace string in hosts file
  hosts: all
  gather_facts: false
  tasks:
    - name: Replace string in host file
      replace:
        path: /etc/hosts
        regexp: "171.20.20.16 fostrain.example.com"
        replace: "171.20.20.16 dbfoxtrain.example.com"
        backup: yes
However, after a lot of googling I see this can be done as follows. But when I have multiple files that need to be referenced as a variable in different modules, how can I define them in such a way that they can be called by variable name?
Below is just what I am trying to understand:
- name: Replace string in hosts file
  hosts: all
  gather_facts: false
  tasks:
    - name: Checking file contents
      slurp:
        path: "{{ ?? }}"   <-- How to check these three files here
      register: fileCheck.out

    - debug:
        msg: "{{ (fileCheck.out.content | b64decode).split('\n') }}"

    - name: Replace string in host file
      replace:
        path: "{{ item.path }}"
        regexp: "{{ item:from }}"
        replace: "{{ item:to }}"
        backup: yes
      loop:
        - { path: "/etc/hosts", From: "171.20.20.16 fostrain.example.com", To: "171.20.20.16 dbfoxtrain.example.com" }
        - { Path: "/etc/hosts.custom", From: "171.20.20.16 fostrain.example.com", To: "171.20.20.16 dbfoxtrain.example.com" }
        - { Path: "/etc/hosts-backup", From: "171.20.20.16 fostrain.example.com", To: "171.20.20.16 dbfoxtrain.example.com" }
I will appreciate any help.
Create a couple of variables: a list with all the files, plus from and to replacement strings (or split them into IP and domain parts). Then loop over all the files using the file list variable and use the from and to replacement variables for each file. If multiple IP-to-domain mappings are required, you need to adjust the structure further, so I recommend going through the Ansible documentation on using variables and loops for more details.
The playbook may look like below. I have used a minor regex which you can adjust as required.
- name: Replace string in hosts file
  hosts: all
  gather_facts: false
  vars:
    files:
      - /etc/hosts
      - /etc/hosts.custom
      - /etc/hosts-backup
    from_ip: "171.20.20.16"
    from_dn: "fostrain.example.com"
    to_ip: "171.20.20.16"
    to_dn: "dbfoxtrain.example.com"
  tasks:
    - name: Replace string in host file
      replace:
        path: "{{ item }}"
        regexp: "{{ from_ip }}\\s+{{ from_dn }}"
        replace: "{{ to_ip }} {{ to_dn }}"
      loop: "{{ files }}"
If you want to see the contents of each file, the slurp and debug modules can be used like below:
- slurp:
    path: "{{ item }}"
  loop: "{{ files }}"
  register: contents

- debug:
    msg: "{{ (item.content | b64decode).split('\n') }}"
  loop: "{{ contents.results }}"

Ansible playbook loop with with_items

I have to update multiple per-user files in sudoers.d with a few lines/commands using an Ansible playbook.
users.yml
user1:
  - Line1111
  - Line2222
  - Line3333
user2:
  - Line4444
  - Line5555
  - Line6666
main.yml
- hosts: "{{ host_group }}"
vars_files:
- ../users.yml
tasks:
- name: Add user "user1" to sudoers.d
lineinfile:
path: /etc/sudoers.d/user1
line: '{{ item }}'
state: present
mode: 0440
create: yes
validate: 'visudo -cf %s'
with_items:
- "{{ user1 }}"
The above works only for user1.
If I want to also include user2, how do I change the file name path: /etc/sudoers.d/user1?
I tried the below and it is not working: passing the users below as a variable to main.yml while running it.
users:
  - "user1"
  - "user2"
- name: Add user "{{users}}" to sudoers.d
lineinfile:
path: /etc/sudoers.d/{{users}}
line: '{{ item }}'
state: present
mode: 0440
create: yes
validate: 'visudo -cf %s'
with_items:
- "{{ users }}"
So, basically I want to pass the users as a variable {{users}} (user1 and user2), use the lines for each user from users.yml, and add them to the respective user files (/etc/sudoers.d/user1 and /etc/sudoers.d/user2).
So /etc/sudoers.d/user1 should look like
Line1111
Line2222
Line3333
and /etc/sudoers.d/user2 should look like
Line4444
Line5555
Line6666
Try to add quotes:
users:
  - "user1"
  - "user2"

- name: "Add user {{users}} to sudoers.d"
  lineinfile:
    path: "/etc/sudoers.d/{{users}}"
    line: "{{ item }}"
    state: present
    mode: 0440
    create: yes
    validate: 'visudo -cf %s'
  with_items:
    - "{{ users }}"
As per Ansible Documentation on Using Variables:
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax documentation.
This won’t work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
Do it like this and you’ll be fine:
- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"
cat users.yml
---
users_list:
  - name: user1
    filename: user1sudoers
    args:
      - Line1111
      - Line2222
      - Line3333
  - name: user2
    filename: user2sudoers
    args:
      - Line4444
      - Line5555
      - Line6666
I use template here, instead of lineinfile
---
cat sudoers.j2
{% if item.args is defined and item.args %}
{% for arg in item.args %}
{{ arg }}
{% endfor %}
{% endif %}
the task content
---
- hosts: localhost
  vars_files: ./users.yml
  tasks:
    - name: sync sudoers.j2 to localhost
      template:
        src: sudoers.j2
        dest: "/tmp/{{ item.filename }}"
      loop: "{{ users_list }}"
      when: "users_list is defined and users_list"
After running the task, two files are generated under the /tmp directory.
cat /tmp/user1sudoers
Line1111
Line2222
Line3333
cat /tmp/user2sudoers
Line4444
Line5555
Line6666
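If the generated files should end up in /etc/sudoers.d rather than /tmp, a hedged variation of the same task (adding root ownership, a restrictive mode and visudo validation) might look like this:

- name: render sudoers.j2 into /etc/sudoers.d (sketch)
  become: yes
  template:
    src: sudoers.j2
    dest: "/etc/sudoers.d/{{ item.filename }}"
    owner: root
    group: root
    mode: "0440"
    validate: 'visudo -cf %s'
  loop: "{{ users_list }}"
  when: "users_list is defined and users_list"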
