I am trying to find files with the .ign extension in a directory and copy them to another directory. I tried using the 'find' and 'copy' modules as follows:
- name: Find files
  find:
    paths: "{{ item }}"
    recurse: yes
  register: find_result
  with_items:
    - "{{ workdir }}/*.ign"

- name: Copy files
  copy:
    src: "{{ item.path }}"
    dest: "/var/www/html/ignition/"
    mode: o+r
    remote_src: yes
  with_items: "{{ find_result.files }}"
workdir is set to /root/openstack-upi, and I am running this as a non-root (cloud-user) user with the command:
ansible-playbook -i inventory -e @install_vars.yaml playbooks/install.yaml --become
However, after running this I get an error as below:
TASK [ocp-config : Find files] ***************************************************
ok: [ash-test-faf0-bastion-0] => (item=/root/openstack-upi/*.ign)
TASK [ocp-config : Copy files] ***********************************************
fatal: [ash-test-faf0-bastion-0]: FAILED! => {"msg": "'dict object' has no attribute 'files'"}
PLAY RECAP **********************************************************************************
ash-test-faf0-bastion-0 : ok=3 changed=0 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
Running a debug on the variable find_result gives the following:
"msg": "/root/openstack-upi/*.ign was skipped as it does not seem to be a valid directory or it cannot be accessed\n"
Am I missing anything here? Can anybody tell me the exact command for the ansible-playbook for the above scenario?
The following solution does not use the with_items option. It will recursively find all the files that end with the extension/suffix .ign.
- name: Find files
  find:
    paths: "/path/to/directory"
    patterns: "*.ign"
    recurse: yes
  register: result

- name: Print find result
  debug:
    msg: "{{ item.path }}"
  with_items:
    - "{{ result.files }}"
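Applied to the scenario in the question, the find result can feed the copy task directly. This is a sketch reusing the question's workdir variable and destination path:

```yaml
# Sketch: pass the directory (not a glob) to find, match on patterns,
# then loop the copy over the registered file list.
- name: Find files
  find:
    paths: "{{ workdir }}"
    patterns: "*.ign"
    recurse: yes
  register: find_result

- name: Copy files
  copy:
    src: "{{ item.path }}"
    dest: "/var/www/html/ignition/"
    mode: o+r
    remote_src: yes
  with_items: "{{ find_result.files }}"
```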
An explanation regarding the loop in your find task
  with_items:
    - "{{ workdir }}/*.ign"
  register: find_result
and the given error message
"msg": "/root/openstack-upi/*.ign was skipped as it does not seem to be a valid directory or it cannot be accessed"
It will be necessary to look up the files matching the pattern first. This can be done with the with_fileglob lookup plugin.
  with_fileglob:
    - "{{ workdir }}/*.ign"
  register: find_result
or, better, as explained in the accepted answer in this thread.
The error message just said that the path and filename were skipped because .../*.ign wasn't a valid path. It means the module hasn't looked up any files and hasn't resolved the fileglob.
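Note that with_fileglob resolves the pattern on the control node, so it pairs naturally with a copy that pushes local files to the target. A sketch using the question's variables:

```yaml
# Sketch: with_fileglob yields full local paths, so item is used directly
# as src (no remote_src here, since the files are looked up locally).
- name: Copy ignition files
  copy:
    src: "{{ item }}"
    dest: "/var/www/html/ignition/"
    mode: o+r
  with_fileglob:
    - "{{ workdir }}/*.ign"
```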
Related
I have for example this playbook:
- name: System Configuration
  hosts: host1, host2
  become: yes
  tasks:
    - name: Set current app version
      shell: export APP_VERSION={{ app_version|quote }}
I wish to set these variables for multiple hosts, for example host1 and host2. I read somewhere that I can create a yml file with the name of each host and store the variables for each host there, but I can't find how to load this file for each host.
I thought it might be loaded automatically if the name matches the host name, but I don't think this happens; I get an error FAILED! => {"msg": "The task includes an option with an undefined variable
So how can I define a set of variables for each host separately?
There are many options.
For example, create a dictionary in 'group_vars/all'
shell> cat group_vars/all
app_versions:
  host1: '1.1'
  host2: '1.2'
  default: '1.0'
app_version: "{{ app_versions[inventory_hostname]|
                 default(app_versions.default) }}"
The playbook
- hosts: host1,host2,host3
  tasks:
    - debug:
        var: app_version
gives
TASK [debug] **********************************************************
ok: [host3] =>
app_version: '1.0'
ok: [host1] =>
app_version: '1.1'
ok: [host2] =>
app_version: '1.2'
The next option is 'vars_files'. Create a YAML file and use it in the playbook. For example, the file and the playbook give the same result
shell> cat app_versions.yml
app_versions:
  host1: '1.1'
  host2: '1.2'
  default: '1.0'
app_version: "{{ app_versions[inventory_hostname]|
                 default(app_versions.default) }}"
- hosts: host1,host2,host3
  vars_files:
    - app_versions.yml
  tasks:
    - debug:
        var: app_version
The next option is to create files in host_vars. For example, the 'host_vars' files and the playbook below also give the same result
shell> cat host_vars/host1.yml
app_version: '1.1'
shell> cat host_vars/host2.yml
app_version: '1.2'
shell> cat host_vars/host3.yml
app_version: '1.0'
- hosts: host1,host2,host3
  tasks:
    - debug:
        var: app_version
The options override each other. See Understanding variable precedence. Among the options above, group_vars/all has the lowest precedence (4,5), followed by host_vars (9,10). The play's vars_files has the highest precedence (14).
Putting it all together, you might want to put the defaults to the group_vars/all
shell> cat group_vars/all
app_versions:
  default: '1.0'
and override the defaults in host_vars
shell> tree host_vars
host_vars
├── host1.yml
└── host2.yml
shell> cat host_vars/host1.yml
app_version: '1.1'
shell> cat host_vars/host2.yml
app_version: '1.2'
Then, the playbook below gives again the same result
- hosts: host1,host2,host3
  tasks:
    - set_fact:
        app_version: "{{ app_version|
                         default(app_versions[inventory_hostname])|
                         default(app_versions.default) }}"
    - debug:
        var: app_version
OK, there are probably many solutions to this, but the one I liked and used is to create a vars sub-directory in my project folder. Ansible looks there by default when trying to find variable files for a host with the include_vars module.
So in my tasks I added this task:
tasks:
  - name: Load variables for host
    include_vars:
      file: "{{ inventory_hostname }}.yml"
and in my project, in the vars directory, I added a host_name_as_in_inventory.yml file for each host I want to specify variables for.
Edit: as suggested in the other comment, if the yml files with the hostnames are added in a directory called host_vars, they are loaded automatically, so the include_vars task is redundant.
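For illustration, the auto-loading layout could look like this (the host names here are hypothetical and must match the inventory):

```
project/
├── playbook.yml
└── host_vars/
    ├── host1.yml
    └── host2.yml
```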
Hello to all the Stack Overflow community.
I'm seeking your help because I've been trying to accomplish the task of getting a file from a remote Windows host to local Linux using Ansible-AWX, and I can't get it to work. Below I share the playbook and most of the tests I've done, but none of them worked.
I'm getting the latest file in a Windows directory and trying to transfer that file to the local AWX, either inside the docker container or on the Linux server where AWX is running.
Test_1: Said file was copied, but when I go inside the docker container nothing is there. I can't find an answer and couldn't find any on Google.
Test_2: Didn't work. It says it can't authenticate to the Linux server.
Test_3: The task became idle and I had to restart the docker container to be able to stop it. It gets crazy. No idea why.
Test_4: It says the connection unexpectedly closed.
I didn't want to provide output to reduce noise and because I can't share the information. I removed names and IPs from the playbook as well.
I'm connecting to the Windows server using AD.
Please, I don't know what else to do. Thanks for your help in advance.
---
- name: Get file from Windows to Linux
  hosts: all # remote windows server ip
  gather_facts: true
  become: true
  vars:
    local_dest_path_test1: \var\lib\awx\public\ # Inside AWX docker
    local_dest_path_test2: \\<linux_ip>\home\user_name\temp\ # Outside AWX docker in the linux server
    local_dest_path_test3: /var/lib/awx/public/ # Inside AWX docker
    # Source file in remote windows server
    src_file: C:\temp\

  tasks:
    # Getting file information to be copied
    - name: Get files in a folder
      win_find:
        paths: "{{ src_file }}"
      register: found_files

    - name: Get latest file
      set_fact:
        latest_file: "{{ found_files.files | sort(attribute='creationtime',reverse=true) | first }}"

    # Test 1
    - name: copy files from Windows to Linux
      win_copy:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test1 }}"
        remote_src: yes

    # Test 2
    - name: copy files from Windows to Linux
      win_copy:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test2 }}"
        remote_src: yes
      become: yes
      become_method: su
      become_flags: logon_type=new_credentials logon_flags=netcredentials_only
      vars:
        ansible_become_user: <linux_user_name>
        ansible_become_pass: <linux_user_password>
        ansible_remote_tmp: <linux_remote_path>

    # Test 3
    - name: Fetch latest file to linux
      fetch:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test3 }}"
        flat: yes
        fail_on_missing: yes
      delegate_to: 127.0.0.1

    # Test 4
    - name: Transfer file from Windows to Linux
      synchronize:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test3 }}"
        mode: pull
      delegate_to: 127.0.0.1
I am trying to use the ansible-pull method for running playbooks with extra vars at run time.
Here is how I need to run my playbook with vars:
ansible-playbook decode.yml --extra-vars "host_name=xxxxxxx bind_password=xxxxxxxxx swap_disk=xxxxx"
The bind_password will have the encoded value of the admin password,
and I have tried writing the playbook below for it.
I am able to debug every value and get it correctly, but after decoding the password I am not getting the exact value, or I am not sure whether I am doing it correctly or not.
---
- name: Install and configure AD authentication
  hosts: test
  become: yes
  become_user: root
  vars:
    hostname: "{{ host_name }}"
    diskname: "{{ swap_disk }}"
    password: "{{ bind_password }}"

  tasks:
    - name: Ansible prompt example.
      debug:
        msg: "{{ bind_password }}"

    - name: Ansible prompt example.
      debug:
        msg: "{{ host_name }}"

    - name: Ansible prompt example.
      debug:
        msg: "{{ swap_disk }}"

    - name: Setup the hostname
      command: hostnamectl set-hostname --static "{{ host_name }}"

    - name: decode passwd
      command: export passwd=$(echo "{{ bind_password }}" | base64 --decode)

    - name: print decoded password
      shell: echo "$passwd"
      register: mypasswd

    - name: debug decode value
      debug:
        msg: "{{ mypasswd }}"
We can decode a base64 value on the command line with:
echo "encodedvalue" | base64 --decode
How can I run this playbook with ansible-pull as well?
Later I want to convert this playbook into roles (role1) and then need to run it as below:
How can we run a role-based playbook using ansible-pull?
The problem is not b64decoding your value. Your command should not cause any problems and probably gives the expected result if you type it manually in your terminal.
But Ansible creates an ssh connection for each task; therefore each shell/command task starts in a new session. So exporting an env var in one command task and using that env var in the next shell task will never work.
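If you did want to keep the shell approach, both the export and the echo would have to run in the same task, and therefore in the same session. A sketch:

```yaml
# Sketch: a multi-line shell task keeps the variable alive because
# both commands run in one session on the target.
- name: decode and print password in one session
  shell: |
    passwd=$(echo "{{ bind_password }}" | base64 --decode)
    echo "$passwd"
  register: mypasswd
```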
Moreover, why do you want to handle all this with so many command/shell tasks when you have all the needed tools directly in Ansible? Here is a possible rewrite of your last 3 tasks that fits into a single one.
- name: debug decoded value of bind_password
  debug:
    msg: "{{ bind_password | b64decode }}"
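If later tasks need the decoded value, a set_fact sketch would work as well (no_log is an extra precaution I am adding here so the secret is not printed in the task output):

```yaml
# Sketch: store the decoded password in a fact for use by later tasks.
- name: store decoded bind_password for later tasks
  set_fact:
    decoded_bind_password: "{{ bind_password | b64decode }}"
  no_log: true
```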
I have written a playbook task in Ansible. I am able to run the playbook on the Linux end.
- name: Set paths for go
  blockinfile:
    path: $HOME/.profile
    backup: yes
    state: present
    block: |
      export PATH=$PATH:/usr/local/go/bin
      export GOPATH=$HOME/go
      export FABRIC_CFG_PATH=$HOME/.fabdep/config

- name: Load Env variables
  shell: source $HOME/.profile
  args:
    executable: /bin/bash
  register: source_result
  become: yes
On Linux we have .profile in the home directory, but macOS has .bash_profile instead of .profile.
So I want to check the OS: if it is macOS, the path should be $HOME/.bash_profile, and if it is Linux-based, it should look for $HOME/.profile.
I have tried adding
when: ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'precise'
But firstly it does not work, and secondly it is a lengthy process. I want to get the path based on the OS into a variable and use it.
Thanks
I found a solution this way: I added gather_facts: true at the top of the yaml file and it started working. I then used the ansible_distribution variable.
Thanks
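A sketch of using the gathered facts to pick the profile file directly (assuming gather_facts: true; ansible_os_family reports 'Darwin' on macOS):

```yaml
# Sketch: choose the profile file from the gathered OS facts,
# then reuse the variable in the blockinfile task.
- name: Select profile file per OS
  set_fact:
    my_profile_file: "{{ '.bash_profile' if ansible_os_family == 'Darwin' else '.profile' }}"

- name: Set paths for go
  blockinfile:
    path: "$HOME/{{ my_profile_file }}"
    state: present
    block: |
      export PATH=$PATH:/usr/local/go/bin
```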
An option would be to include_vars from files. See example below
- name: "OS specific vars (will overwrite /vars/main.yml)"
  include_vars: "{{ item }}"
  with_first_found:
    - files:
        - "{{ ansible_distribution }}-{{ ansible_distribution_release }}.yml"
        - "{{ ansible_distribution }}.yml"
        - "{{ ansible_os_family }}.yml"
        - "default.yml"
      paths: "{{ playbook_dir }}/vars"
      skip: true

- name: Set paths for go
  blockinfile:
    path: "$HOME/{{ my_profile_file }}"
    [...]
In the playbook's directory create a directory vars and create the files
# cat vars/Ubuntu.yml
my_profile_file: ".profile"
# cat vars/macOS.yml
my_profile_file: ".bash_profile"
If you have managed hosts with different OS, group them by OS in your inventory:
[Ubuntu]
ubu1
ubu2
[RHEL6]
RH6_1
[RHEL7]
RH7_1
RH7_2
I am trying to update permissions on all the shell scripts in a particular directory on remote servers using Ansible, but it gives me an error:
- name: update permissions
  file: dest=/home/goldy/scripts/*.sh mode=a+x
This is the error I am getting:
fatal: [machineA]: FAILED! => {"changed": false, "msg": "file (/home/goldy/scripts/*.sh) is absent, cannot continue", "path": "/home/goldy/scripts/*.sh", "state": "absent"}
to retry, use: --limit @/var/lib/jenkins/workspace/copy/copy.retry
What am I doing wrong here?
You should run a task with the find module to collect all .sh files in that directory, and register the results in a variable.
Then run a 2nd task with the file module to update the permissions on every file whose name ends in .sh.
Check the sample playbook:
- hosts: localhost
  gather_facts: false
  vars:
  tasks:
    - name: parse /tmp directory
      find:
        paths: /tmp
        patterns: '*.sh'
      register: list_of_files

    - debug:
        var: item.path
      with_items: "{{ list_of_files.files }}"

    - name: change permissions
      file:
        path: "{{ item.path }}"
        mode: a+x
      with_items: "{{ list_of_files.files }}"
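Applied to the question's directory, and with the newer loop keyword in place of with_items, the same approach could look like this sketch:

```yaml
# Sketch: find the scripts under the question's path, then loop the
# file module over the registered results with 'loop'.
- name: find shell scripts
  find:
    paths: /home/goldy/scripts
    patterns: '*.sh'
  register: list_of_files

- name: make scripts executable
  file:
    path: "{{ item.path }}"
    mode: a+x
  loop: "{{ list_of_files.files }}"
```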