Ansible-AWX: get file from remote Windows to local Linux

Hello to all the Stack Overflow community.
I'm seeking your help because I've been trying to get a file from a remote Windows server to local Linux using Ansible-AWX and I can't get it to work. Below I've shared the playbook and most of the tests I've done, but none of them worked.
I'm getting the latest file in a Windows directory and trying to transfer it to the local AWX side, either inside the Docker container or on the Linux server where AWX is running.
Test_1: The task reports the file was copied, but when I go inside the Docker container there is nothing there. I couldn't find an answer anywhere, including Google.
Test_2: Didn't work. It says it can't authenticate to the Linux server.
Test_3: The task hangs and I have to restart the Docker container to be able to stop it. No idea why.
Test_4: It fails with "connection unexpectedly closed".
I haven't included the output, both to reduce noise and because I can't share that information. I removed names and IPs from the playbook as well.
I'm connecting to the Windows server using AD.
I don't know what else to do. Thanks in advance for your help.
---
- name: Get file from Windows to Linux
  hosts: all # remote windows server ip
  gather_facts: true
  become: true
  vars:
    local_dest_path_test1: \var\lib\awx\public\ # Inside AWX docker
    local_dest_path_test2: \\<linux_ip>\home\user_name\temp\ # Outside AWX docker in the linux server
    local_dest_path_test3: /var/lib/awx/public/ # Inside AWX docker
    # Source file in remote windows server
    src_file: C:\temp\

  tasks:
    # Getting file information to be copied
    - name: Get files in a folder
      win_find:
        paths: "{{ src_file }}"
      register: found_files

    - name: Get latest file
      set_fact:
        latest_file: "{{ found_files.files | sort(attribute='creationtime', reverse=true) | first }}"

    # Test 1
    - name: copy files from Windows to Linux
      win_copy:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test1 }}"
        remote_src: yes

    # Test 2
    - name: copy files from Windows to Linux
      win_copy:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test2 }}"
        remote_src: yes
      become: yes
      become_method: su
      become_flags: logon_type=new_credentials logon_flags=netcredentials_only
      vars:
        ansible_become_user: <linux_user_name>
        ansible_become_pass: <linux_user_password>
        ansible_remote_tmp: <linux_remote_path>

    # Test 3
    - name: Fetch latest file to linux
      fetch:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test3 }}"
        flat: yes
        fail_on_missing: yes
      delegate_to: 127.0.0.1

    # Test 4
    - name: Transfer file from Windows to Linux
      synchronize:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test3 }}"
        mode: pull
      delegate_to: 127.0.0.1
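For what it's worth, fetch already copies from the managed Windows host to the machine where the playbook runs (with AWX, that is the task container), so delegating it to 127.0.0.1 makes the module look for the source file on localhost instead of on the Windows host. A minimal, untested variation of Test 3 without the delegation, reusing the same variables as above:

    - name: Fetch latest file to the AWX side
      fetch:
        src: "{{ latest_file.path }}"
        dest: "{{ local_dest_path_test3 }}"
        flat: yes
        fail_on_missing: yes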

Related

The best way to authorize ssh key of each node to all nodes in the cluster

I want to create a cluster infrastructure in which each node communicates with the others over SSH. I want to use Ansible to create an idempotent playbook/role that can be executed when the cluster is initialized or when new nodes are added to it. I came up with two scenarios to achieve this.
First Scenario
Task 1 fetches the SSH key from a node (probably assigns it to a variable or writes it to a file).
Then task 2, executed locally, loops over the other nodes and authorizes the first node with the fetched key.
This scenario supports the free strategy: tasks can be executed without waiting for all hosts. But it also requires all nodes to already have the related user and public key, because if you are creating the users within the same playbook (due to the free strategy), there may be users that have not yet been created on other nodes when task 2 starts running.
Although I am a big fan of the free strategy, I didn't implement this scenario for efficiency reasons: it makes a lot of connections across the cluster. A rough sketch of what it could look like follows.
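One way to express that first scenario, as an untested illustration reusing the node_user variable and cluster_node group from the snippet further down, is to slurp the key on each node and push it out with delegate_to:

- name: read this node's public key
  slurp:
    src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
  register: my_pub_key

- name: authorize this node's key on every other node
  authorized_key:
    user: "{{ node_user }}"
    key: "{{ my_pub_key.content | b64decode }}"
    state: present
  delegate_to: "{{ item }}"
  with_items: "{{ groups['cluster_node'] }}"
  when: item != inventory_hostname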
Second Scenario
Task 1 fetches the SSH key from all nodes in order, then writes each one to a file whose name is set according to ansible_hostname.
Then task 2, executed locally, loops over the other nodes and authorizes all keys.
This scenario only supports the linear strategy. You can create users within the same playbook thanks to the linear strategy; all users will be created before task 1 starts running.
I think it is an efficient scenario: it makes only a limited number of connections across the cluster. I did implement it, and the snippet I wrote is below.
---
- name: create node user
  user:
    name: "{{ node_user }}"
    password: "{{ node_user_pass | password_hash('sha512') }}"
    shell: /bin/bash
    create_home: yes
    generate_ssh_key: yes

- name: fetch all public keys from managed nodes to manager
  fetch:
    src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
    dest: "tmp/{{ ansible_hostname }}-id_rsa.pub"
    flat: yes

- name: authorize public key for all nodes
  authorized_key:
    user: "{{ node_user }}"
    key: "{{ lookup('file', 'tmp/' + item + '-id_rsa.pub') }}"
    state: present
  with_items:
    - "{{ groups['cluster_node'] }}"

- name: remove local public key copies
  become: false
  local_action: file dest='tmp/' state=absent
  changed_when: false
  run_once: true
Maybe I can use lineinfile instead of fetch, but other than that I don't know if this is the right way. It takes very long as the cluster size grows (because of the linear strategy). Is there a more efficient way I can use?
When Ansible loops through authorized_key, it will (roughly) perform the following tasks:
Create a temporary authorized_key python script on the control node
Copy the new authorized_key python script to the managed node
Run the authorized_key python script on the managed node with the appropriate parameters
This increases as n² as the number of managed nodes increases; with 1000 boxes, this task is performed 1000 times per box.
I'm having trouble finding specific docs which properly explain exactly what's going on under the hood, so I'd recommend running an example script to get a feel for it:
- hosts: all
  tasks:
    - name: do thing
      shell: "echo \"hello this is {{ item }}\""
      with_items:
        - alice
        - brian
        - charlie
This should be run with the triple verbose flag (-vvv) and with the output redirected to a log file (e.g. ansible-playbook example-loop.yml -i hosts.yml -vvv > example-loop-output.log). Searching through those logs for command.py and sftp will help you get a feel for how your script scales as the list retrieved by "{{ groups['cluster_node'] }}" grows.
For small clusters, this inefficiency is perfectly acceptable. However, it may become problematic on large clusters.
Now, the authorized_key module is essentially just generating an authorized_keys file with a) the keys which already exist within authorized_keys and b) the public keys of each node on the cluster. Instead of repeatedly generating an authorized_keys file on each box individually, we can construct the authorized_keys file on the control node and deploy it to each box.
The authorized_keys file itself can be generated with assemble; this will take all of the gathered keys and concatenate them into a single file. However, if we just synchronize or copy this file over, we'll wipe out any non-cluster keys added to authorized_keys. To avoid this, we can use blockinfile. blockinfile can manage the cluster keys added by Ansible. We'll be able to add new keys while removing those which are outdated.
- hosts: cluster
  name: create node user and generate keys
  tasks:
    - name: create node user
      user:
        name: "{{ node_user }}"
        password: "{{ node_user_pass | password_hash('sha512') }}"
        shell: /bin/bash
        create_home: yes
        generate_ssh_key: yes

    - name: fetch all public keys from managed nodes to manager
      fetch:
        src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
        dest: "/tmp/keys/{{ ansible_host }}-id_rsa.pub"
        flat: yes
      become: yes

- hosts: localhost
  name: generate authorized_keys file
  tasks:
    - name: Assemble authorized_keys from a directory
      assemble:
        src: "/tmp/keys"
        dest: "/tmp/authorized_keys"

- hosts: cluster
  name: update authorized_keys file
  tasks:
    - name: insert/update configuration using a local file
      blockinfile:
        block: "{{ lookup('file', '/tmp/authorized_keys') }}"
        dest: "/home/{{ node_user }}/.ssh/authorized_keys"
        backup: yes
        create: yes
        owner: "{{ node_user }}"
        group: "{{ node_group }}"
        mode: 0600
      become: yes
As-is, this solution isn't easily compatible with roles; roles are designed to only handle a single value for hosts (a host, group, set of groups, etc), and the above solution requires switching between a group and localhost.
We can remedy this with delegate_to, although it may be somewhat inefficient with large clusters, as each node in the cluster will try assembling authorized_keys. Depending on the overall structure of the ansible project (and the size of the team working on it), this may or may not be ideal; when skimming a large script with delegate_to, it can be easy to miss that something's being performed locally.
- hosts: cluster
  name: create node user and generate keys
  tasks:
    - name: create node user
      user:
        name: "{{ node_user }}"
        password: "{{ node_user_pass | password_hash('sha512') }}"
        shell: /bin/bash
        create_home: yes
        generate_ssh_key: yes

    - name: fetch all public keys from managed nodes to manager
      fetch:
        src: "/home/{{ node_user }}/.ssh/id_rsa.pub"
        dest: "/tmp/keys/{{ ansible_host }}-id_rsa.pub"
        flat: yes

    - name: Assemble authorized_keys from a directory
      assemble:
        src: "/tmp/keys"
        dest: "/tmp/authorized_keys"
      delegate_to: localhost

    - name: insert/update configuration using a local file
      blockinfile:
        block: "{{ lookup('file', '/tmp/authorized_keys') }}"
        dest: "/home/{{ node_user }}/.ssh/authorized_keys"
        backup: yes
        create: yes
        owner: "{{ node_user }}"
        group: "{{ node_group }}"
        mode: 0600
      become: yes

Ansible Azure Dynamic Inventory and Sharing variables between hosts in a single playbook

Problem: referencing a fact about a host (in this case, the private IP) from another host in a playbook using a wildcard only seems to work in the hosts part of a play, not inside a task; vm_ubuntu* cannot be used in a task.
In a single playbook, I have a couple of hosts, and because the inventory is dynamic, I don't have the hostname ahead of time, as Azure appends an identifier to it after it has been created.
I am using TF (Terraform) to create the VMs, and the Azure dynamic inventory method.
I am calling my playbook like this, where myazure_rm.yml is a bog-standard Azure dynamic inventory config, as of the time of this writing:
ansible-playbook -i ./myazure_rm.yml ./bwaf-playbook.yaml --key-file ~/.ssh/id_rsa -u azureuser
My playbook looks like this (abbreviated):
- hosts: vm_ubuntu*
  tasks:
    - name: housekeeping
      set_fact:
        vm_ubuntu_private_ip: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"

    - debug: var=vm_ubuntu_private_ip

- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['vm_ubuntu*']['ip'] }}"
    api_url: "http://{{ vm_bwaf_public_ip }}:8000/restapi/{{ api_version }}"
I am answering my own question to get rep, and of course to help others.
I also want to thank the person (https://stackoverflow.com/users/4281353/mon) who came up with this first, in: How do I set register a variable to persist between plays in ansible?
- name: "Save private ip to dummy host"
add_host:
name: "dummy_host"
ip: "{{ vm_ubuntu_private_ip }}"
And then it can be referenced in the other host's play like this:
- hosts: vm_bwaf*
  connection: local
  vars:
    vm_bwaf_private_ip: "{{ private_ipv4_addresses | join }}"
    vm_bwaf_public_ip: "{{ public_ipv4_addresses | join }}"
    vm_ubuntu_private_ip: "{{ hostvars['dummy_host']['ip'] }}"

How do we use an encoded value in a playbook and decode it whenever needed?

I am trying to use the ansible-pull method for running playbooks with extra vars at runtime.
Here is what running my playbook with vars looks like:
ansible-playbook decode.yml --extra-vars "host_name=xxxxxxx bind_password=xxxxxxxxx swap_disk=xxxxx"
The bind_password holds the base64-encoded value of the admin password.
I have tried writing the playbook below for it.
I am able to debug every value and get it correctly, but after decoding the password I am not getting the exact value, or I am not sure whether I am doing it correctly.
---
- name: Install and configure AD authentication
  hosts: test
  become: yes
  become_user: root
  vars:
    hostname: "{{ host_name }}"
    diskname: "{{ swap_disk }}"
    password: "{{ bind_password }}"
  tasks:
    - name: Ansible prompt example.
      debug:
        msg: "{{ bind_password }}"
    - name: Ansible prompt example.
      debug:
        msg: "{{ host_name }}"
    - name: Ansible prompt example.
      debug:
        msg: "{{ swap_disk }}"
    - name: Setup the hostname
      command: hostnamectl set-hostname --static "{{ host_name }}"
    - name: decode passwd
      command: export passwd=$(echo "{{ bind_password }}" | base64 --decode)
    - name: print decoded password
      shell: echo "$passwd"
      register: mypasswd
    - name: debug decode value
      debug:
        msg: "{{ mypasswd }}"
But while we can decode a base64 value with the command:
echo "encodedvalue" | base64 --decode
how can I run this playbook with ansible-pull as well?
Later I want to convert this playbook into roles (role1) and then need to run it as below:
How can we run a role-based playbook using ansible-pull?
The problem is not b64decoding your value. Your command should not cause any problems and probably gives the expected result if you type it manually in your terminal.
But Ansible creates an SSH connection for each task, so each shell/command task starts in a new session. Exporting an env var in one command task and using that env var in the next shell task will therefore never work.
Moreover, why do you want to handle all this with so many command/shell tasks when you have all the needed tools directly in Ansible? Here is a possible rewrite of your last 3 tasks that fits into a single one.
- name: debug decoded value of bind_password
  debug:
    msg: "{{ bind_password | b64decode }}"

How to copy a file to remote in `ansible_connection` local?

I am creating an Azure VM with Ansible using the azure_rm_virtualmachine module. In this case the host is localhost (ansible_connection=local). I need to copy an SSH private key which is ansible-vault encrypted. How can I do this?
Here's what I have already tried:
Use command and run scp: the problem is that the file is still encrypted.
Decrypt the file, scp, then re-encrypt: the problem is that if the scp command fails after decryption, the file is left decrypted.
Does anyone have an idea of how to approach this problem?
FYI: while creating the VM I added my public key there, so I can access the machine.
As far as I understand your use-case, you first create a new VM in Azure, and then you want to send a new private key to that fresh VM. I have two options for you.
Split into 2 plays
In the same playbook, you can have 2 different plays:
---
- name: Provisioning of my pretty little VM in Azure
  hosts: localhost
  vars:
    my_vm_name: myprettyvm
    my_resource_group: myprettygroup
    …
  tasks:
    - name: Create the VM
      azure_rm_virtualmachine:
        resource_group: "{{ my_resource_group }}"
        name: "{{ my_vm_name }}"
        …

- name: Configure my pretty little VM
  hosts: myprettyvm
  vars:
    my_priv_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  tasks:
    - name: Copy my private key
      copy:
        content: "{{ my_priv_key }}"
        dest: /root/.ssh/id_rsa
Delegate to localhost
Only one play in your playbook, but you delegate the provisioning task to localhost.
---
- name: Creation of my pretty little VM in Azure
  hosts: myprettyvm
  gather_facts: no
  vars:
    my_vm_name: myprettyvm
    my_resource_group: myprettygroup
    …
    my_priv_key: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  tasks:
    - name: Create the VM
      azure_rm_virtualmachine:
        resource_group: "{{ my_resource_group }}"
        name: "{{ my_vm_name }}"
        …
      delegate_to: localhost

    - name: Copy my private key
      copy:
        content: "{{ my_priv_key }}"
        dest: /root/.ssh/id_rsa
Don't forget to set gather_facts to no, as the host is a VM that does not exist yet, so no facts are available.

How to run a playbook task based on OS type in ansible?

I have written a playbook task in Ansible. I am able to run the playbook on the Linux side.
- name: Set paths for go
  blockinfile:
    path: $HOME/.profile
    backup: yes
    state: present
    block: |
      export PATH=$PATH:/usr/local/go/bin
      export GOPATH=$HOME/go
      export FABRIC_CFG_PATH=$HOME/.fabdep/config

- name: Load Env variables
  shell: source $HOME/.profile
  args:
    executable: /bin/bash
  register: source_result
  become: yes
On Linux we have .profile in the home directory, but on macOS there is no .profile; macOS uses .bash_profile instead.
So I want to check the OS: if it is macOS, the path should be $HOME/.bash_profile, and if it is Linux-based, it should look for $HOME/.profile.
I have tried adding
when: ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'precise'
But firstly it does not work, and secondly it is a lengthy process. I want to get the OS-dependent path into a variable and use it.
Thanks
I found a solution this way: I added gather_facts: true at the top of the YAML file and it started working. I then started using the ansible_distribution variable.
Thanks
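A minimal sketch of that idea, assuming facts are gathered (ansible_os_family reports Darwin on macOS):

- name: pick the profile file for this OS
  set_fact:
    profile_file: "{{ '.bash_profile' if ansible_os_family == 'Darwin' else '.profile' }}"

- name: Set paths for go
  blockinfile:
    path: "$HOME/{{ profile_file }}"
    backup: yes
    state: present
    block: |
      export PATH=$PATH:/usr/local/go/bin
      export GOPATH=$HOME/go
      export FABRIC_CFG_PATH=$HOME/.fabdep/config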
An option would be to include_vars from files. See the example below.
- name: "OS specific vars (will overwrite /vars/main.yml)"
include_vars: "{{ item }}"
with_first_found:
- files:
- "{{ ansible_distribution }}-{{ ansible_distribution_release }}.yml"
- "{{ ansible_distribution }}.yml"
- "{{ ansible_os_family }}.yml"
- "default.yml"
paths: "{{ playbook_dir }}/vars"
skip: true
- name: Set paths for go
blockinfile:
path: "$HOME/{{ my_profile_file }}"
[...]
In the playbook's directory, create a vars directory and create these files:
# cat vars/Ubuntu.yml
my_profile_file: ".profile"

# cat vars/macOS.yml
my_profile_file: ".bash_profile"
If you have managed hosts with different OSes, group them by OS in your inventory:
[Ubuntu]
ubu1
ubu2
[RHEL6]
RH6_1
[RHEL7]
RH7_1
RH7_2
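With the hosts grouped by OS, the same variable can also come from group_vars files named after those groups (a sketch; the macOS group is hypothetical and only the groups above exist in this inventory):

# group_vars/Ubuntu.yml
my_profile_file: ".profile"

# group_vars/macOS.yml (only if such a group exists)
my_profile_file: ".bash_profile"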
