Sharing WireGuard public keys with ansible.posix.synchronize (Linux)

I've just started getting into Ansible, so could you please help me or give me some advice?
The point is that I'm trying to install and configure WireGuard with an Ansible playbook (just in case: I know how to configure WireGuard without Ansible).
I want to share the public keys through Ansible
(and then read them in wg0.conf with PublicKey = {{ lookup('file', '/etc/wireguard/publickey_client') }}).
I'm trying to use ansible.posix.synchronize in my playbook, but when it gets to the "share pubkey" task it just sits there and does nothing (for a long time) until I stop the process.
Running the playbook with -vv doesn't show anything either.
Playbook wireguard_configuration.yml:
---
- hosts: client
  name: make wg keys on client
  become: true
  tasks:
    - name: wg0.conf client file
      ansible.builtin.copy:
        src: /etc/ansible/conf/wg0_client.conf
        dest: /etc/wireguard/wg0.conf
        mode: 0755
        owner: owner
    - name: creating wg keys on client
      ansible.builtin.shell:
        cmd: wg genkey | tee privatekey_client | wg pubkey > publickey_client
        chdir: /etc/wireguard
    - name: share pubkey from client to server
      ansible.posix.synchronize:
        src: /etc/wireguard/publickey_client
        dest: /etc/wireguard/publickey_client
      delegate_to: server

- hosts: server
  name: make wg keys on server
  become: true
  tasks:
    - name: wg0.conf server file
      ansible.builtin.copy:
        src: /etc/ansible/conf/wg0_server.conf
        dest: /etc/wireguard/wg0.conf
        mode: 0755
        owner: owner
    - name: creating wg keys on server
      ansible.builtin.shell:
        cmd: wg genkey | tee privatekey_server | wg pubkey > publickey_server
        chdir: /etc/wireguard
    - name: share pubkey from server to client
      ansible.posix.synchronize:
        src: /etc/wireguard/publickey_server
        dest: /etc/wireguard/publickey_server
      delegate_to: client

You don't need the synchronize module here: you're not trying to copy a large hierarchy of files; you're only trying to bring a single value from the client to the server. I think a better option is just to stick that value in a variable on the client and then access it via hostvars on the server.
The following playbook is one way of doing that. A few things to note:
I've tried to document the tasks, but let me know if something isn't clear.
This playbook is written to be idempotent: you can run it multiple times and it will only generate the private key once.
- hosts: client
  gather_facts: false
  become: true
  tasks:
    # Read an existing private key if it is available. We set
    # failed_when to false because an "error" simply means that
    # the key doesn't exist and we need to generate it.
    - name: read private key
      command: cat /etc/wireguard/privatekey_client
      failed_when: false
      changed_when: wg_private_read.rc != 0
      register: wg_private_read

    # Generate a new key if necessary. We use the "is changed" test
    # here so that we only generate a new key if we failed to read an
    # existing key in the previous task.
    - name: generate private key
      when: wg_private_read is changed
      command: wg genkey
      register: wg_private_create

    # This will either create the privatekey_client file or leave it
    # unmodified (because the content matches what we read from it
    # earlier in the "read private key" task).
    - name: write private key
      when: wg_private_read is changed
      copy:
        content: "{{ wg_private_create.stdout }}"
        dest: /etc/wireguard/privatekey_client

    # We generate a public key but we don't bother writing it to disk.
    # The client doesn't need it and we can always generate it from
    # the private key.
    - name: generate public key
      shell:
        cmd: wg pubkey
        stdin: "{{ (wg_private_read is changed)|ternary(wg_private_create.stdout, wg_private_read.stdout) }}"
      changed_when: false
      register: wg_public
- hosts: server
  gather_facts: false
  become: true
  tasks:
    - name: write client public key
      copy:
        content: "{{ hostvars.client.wg_public.stdout }}"
        dest: "/etc/wireguard/publickey_client"
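If you still want the server's wg0.conf to reference the key the way the original question does, the server play could render the config from a template instead of doing a file lookup. A minimal sketch, assuming a hypothetical templates/wg0_server.conf.j2 next to the playbook:

```yaml
# templates/wg0_server.conf.j2 (hypothetical) would contain, among its
# other lines:
#   [Peer]
#   PublicKey = {{ hostvars.client.wg_public.stdout }}

- name: render server wg0.conf with the client public key baked in
  template:
    src: wg0_server.conf.j2
    dest: /etc/wireguard/wg0.conf
    mode: "0600"
```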
Some useful documentation links:
About failed_when and changed_when
The ternary filter
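For readers unfamiliar with it, ternary picks between two values based on a boolean; a standalone sketch (use_new_key is a hypothetical variable):

```yaml
# Prints "new-key" when use_new_key is truthy, "old-key" otherwise.
- name: pick a value based on a condition
  debug:
    msg: "{{ (use_new_key | bool) | ternary('new-key', 'old-key') }}"
```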

Related

Fetch files and remove them from source if successful

I've been using Ansible to fetch files from Windows nodes to a Linux node for some time with good results.
I would now like the nodes to remove fetched files once they have uploaded successfully.
However, since I'm fetching from lots of endpoints in various states, some files occasionally fail to transfer - and I'm having trouble using Ansible to skip those files, and those files only.
Here's what I have so far:
- name: Find .something files
  ansible.windows.win_find:
    paths: 'C:\Some\Path'
    patterns: [ '*.something' ]
    recurse: yes
    age_stamp: ctime
    age: -1w
  register: found_files

- name: Create destination directory
  file:
    path: "/some/path/{{ inventory_hostname }}/"
    state: directory
  delegate_to: localhost

- name: Fetch .something files
  fetch:
    src: "{{ item.path }}"
    dest: "/some/path/{{ inventory_hostname }}/"
    flat: yes
    validate_checksum: no
  with_items: "{{ found_files.files }}"
  register: sync_result
  ignore_errors: yes

- name: Remove fetched files (dry run)
  debug:
    msg: "Would remove {{ item.path }}"
  when: sync_result is succeeded
  with_items: "{{ found_files.files }}"
The problem is, the sync_result variable seems to apply to each node instead of each file - that is, if one file has failed to transfer, no files will be deleted.
I've tried various loops and lists and could not get it to work.
Any pointers would be greatly appreciated.
In a nutshell:
- name: Find .something files
  ansible.windows.win_find:
    paths: 'C:\Some\Path'
    patterns: [ '*.something' ]
    recurse: yes
    age_stamp: ctime
    age: -1w
  register: find_something

- name: Create destination directory
  file:
    path: "/some/path/{{ inventory_hostname }}/"
    state: directory
  delegate_to: localhost

- name: Fetch .something files
  fetch:
    src: "{{ item.path }}"
    dest: "/some/path/{{ inventory_hostname }}/"
    flat: yes
    validate_checksum: no
  loop: "{{ find_something.files }}"
  register: fetch_sync
  ignore_errors: yes

- name: Delete successfully fetched files
  file:
    path: "{{ item.file }}"
    state: absent
  loop: "{{ fetch_sync.results | select('succeeded') }}"
  # If you are using ansible < 2.10 you need to cast to list i.e.
  # loop: "{{ fetch_sync.results | select('succeeded') | list }}"
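If it isn't clear what select('succeeded') is filtering, a debug task over the registered results shows the per-file status; a sketch, assuming the fetch_sync register from the task above (each result keeps the original loop item under item.item):

```yaml
- name: show per-file fetch status
  debug:
    msg: "{{ item.item.path }} -> {{ 'fetched' if item is succeeded else 'failed' }}"
  loop: "{{ fetch_sync.results }}"
```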

How to convert an iterated loop list into a string to use in Linux

I have an Ansible role that I want to iterate over.
The goal is to create new user accounts from a list. The playbook calls the role and sends the list to iterate over.
The OS (Linux Debian 8.8) sees the whole var as unicode: "[u'user']"
Some other tests performed show new users named:
['test']
[u'test']
All I really want is for the var to be a string so I can make a new user and add the needed keys and other files. I can also join the var into paths for keys and other files.
I have searched for an easy way to "| to_string" (there isn't one in Ansible).
The "to_yaml" filter gets rid of the unicode but not the "[]", and adds "\n" at the end.
The item for the ssh key copy is for the various id_(type).pub files.
I have read:
Convert Ansible variable from Unicode to ASCII
Code
Playbook:
  vars_files:
    - /home/admin/common/vars/UserList
  gather_facts: False
  roles:
    - { role: common, "{{ UserList }}" }
UserList file
---
UserList:
  - 'test'
...
role/common/main.yml
---
- name: Add user to server
  user:
    name: "{{ UserList }}"
    shell: /bin/bash

- name: make directory
  file:
    path: "/home/{{ UserList }}/.ssh"
    state: directory

- name: Copy ssh public key to user/.ssh/_key_.pub
  copy:
    src: "/home/{{ UserList }}/.ssh/{{ item }}"
    dest: "/home/{{ UserList }}/.ssh/{{ item }}"
    mode: 600
    force: no
  with_items:
    - id_rsa.pub
    - id_dsa.pub
    - id_ecdsa.pub
...
A different form, but it still errored as below.
  roles:
    - role: common
      with_items:
        - "{{ UserList }}"
Error
(item=id_rsa.pub) => {"failed": true, "invocation": {"module_args": {"dest": "/home/[u'test']/.ssh/id_rsa.pub", "force": false, "mode": 600, "src": "/home/[u'test']/.ssh/id_rsa.pub"}, "module_name": "copy"}, "item": "id_rsa.pub", "msg": "could not find src=/home/[u'test']/.ssh/id_rsa.pub"}
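The [u'test'] in those paths is just Python's representation of a one-element list being interpolated into a string: the module received the whole UserList, not one element of it. The effect is easy to reproduce outside Ansible (Python 3 drops the u prefix):

```python
users = ['test']  # what UserList looks like after loading the vars file

# Interpolating the list itself stringifies it, brackets and all:
bad_path = "/home/{}/.ssh".format(users)
print(bad_path)   # /home/['test']/.ssh

# Interpolating an element of the list gives the intended path:
good_path = "/home/{}/.ssh".format(users[0])
print(good_path)  # /home/test/.ssh
```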
Workaround
I found a workaround for my issue. I would be cautious to call it a solution, but for this case it will suffice. I need to loop over my var with the builtin {{ item }}. Then {{ item }} is used as a string and I can create the PATHs I need. I can also iterate through a series of items with "with_nested".
- name: create empty file
  file:
    path: "{{ '/home/' + item + '/.ssh/authorized_keys' }}"
    state: touch
  with_items:
    - "{{ UserList }}"

- name: Copy ssh public key to user/.ssh/_key_.pub
  copy:
    src: "{{ '/home/' + item[1] + '/.ssh/' + item[0] }}"
    dest: "{{ '/home/' + item[1] + '/.ssh/' + item[0] }}"
    mode: 600
    force: no
  with_nested:
    - [ 'id_rsa.pub' , 'id_dsa.pub' , 'id_ecdsa.pub' ]
    - "{{ UserList }}"
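On newer Ansible versions the with_nested task can also be written with loop and the product filter; a sketch equivalent to the copy task above (same item ordering: item.0 is the key file, item.1 the user):

```yaml
- name: Copy ssh public key to user/.ssh/_key_.pub
  copy:
    src: "/home/{{ item.1 }}/.ssh/{{ item.0 }}"
    dest: "/home/{{ item.1 }}/.ssh/{{ item.0 }}"
    mode: 600
    force: no
  loop: "{{ ['id_rsa.pub', 'id_dsa.pub', 'id_ecdsa.pub'] | product(UserList) | list }}"
```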

Is it possible to overwrite or create a new version of a secret through ansible in azure?

I need to deploy my secrets in Azure's keyvault through ansible.
If the secret is a new one (i.e. it didn't exist before) it works perfectly: the secret is created properly.
The problem comes when I need to update the secret; it is never overwritten.
I tried to delete it and create it again, but that is not working either, since Azure performs a soft delete and the secret cannot be re-created right away with the same name.
Here what I tried so far:
Secret creation (working fine the first time but not overwriting it)
- name: "Create endpoint secret."
  azure_rm_keyvaultsecret:
    secret_name: mysecret
    secret_value: "desiredvalue"
    keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
    tags:
      environment: "{{ ENV }}"
      role: "endpointsecret"
Here is how I try to delete it first and then creating it again
- name: "Delete endpoint secret."
  azure_rm_keyvaultsecret:
    secret_name: mysecret
    keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
    state: "absent"

- name: "Create endpoint secret."
  azure_rm_keyvaultsecret:
    secret_name: mysecret
    secret_value: "desiredvalue"
    keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
    tags:
      environment: "{{ ENV }}"
      role: "endpointsecret"
When trying this, the error is:
Secret mysecret is currently being deleted and cannot be re-created; retry later
Secret creation with state: present (it's not creating a new version either)
- name: "Create endpoint secret."
  azure_rm_keyvaultsecret:
    secret_name: mysecret
    secret_value: "desiredvalue"
    keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
    state: "present"
    tags:
      environment: "{{ ENV }}"
      role: "endpointsecret"
Any idea how to overwrite ( create a new version )a secret or at least perform a hard delete?
I find no way other than deploy it through ARM
- name: "Create ingestion keyvault secrets."
  azure_rm_deployment:
    state: present
    resource_group_name: "{{ AZURE_RG_NAME }}"
    location: "{{ AZURE_RG_LOCATION }}"
    template:
      $schema: "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#"
      contentVersion: "1.0.0.0"
      parameters:
      variables:
      resources:
        - apiVersion: "2018-02-14"
          type: "Microsoft.KeyVault/vaults/secrets"
          name: "{{ AZURE_KV_NAME }}/{{ item.name }}"
          properties:
            value: "{{ item.secret }}"
            contentType: "string"
  loop: "{{ SECRETLIST }}"
  register: publish_secrets
  async: 300 # Maximum runtime in seconds.
  poll: 0    # Fire and continue (never poll)

- name: Wait for the secret deployment task to finish
  async_status:
    jid: "{{ publish_secrets_item.ansible_job_id }}"
  loop: "{{ publish_secrets.results }}"
  loop_control:
    loop_var: "publish_secrets_item"
  register: jobs_publish_secrets
  until: jobs_publish_secrets.finished
  retries: 5
  delay: 2
And then in another file, SECRETLIST is declared as a variable:
SECRETLIST:
  - name: mysecret
    secret: "secretvalue"
  - name: othersecret
    secret: "secretvalue2"
Hope this helps to anyone with a similar problem
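As an alternative to the ARM workaround, the soft-deleted secret can be purged with the Azure CLI so the name becomes reusable; a sketch, assuming the az CLI is logged in on the control node and purge protection is not enabled on the vault:

```yaml
- name: "Purge the soft-deleted secret so the name can be reused"
  command: az keyvault secret purge --vault-name "{{ AZURE_KV_NAME }}" --name mysecret
  delegate_to: localhost
```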

Move/copy all files in a directory with Ansible [duplicate]

This question already has answers here:
Ansible: copy a directory content to another directory (12 answers)
Closed 5 years ago.
How can I move or copy all the files in a directory to my localhost or another remote host with Ansible?
This question goes both to linux systems and windows.
What I've got so far:
- hosts: all
  tasks:
    - name: list the files in the folder
      command: ls /dir/
      register: dir_out
    - name: do the action
      fetch: src=/dir/{{item}} dest=/second_dir/ flat=yes
      with_items: ('{{dir_out.stdout_lines}}')
The output is as follows:
TASK [setup] *******************************************************************
ok: [remote_host]
TASK [list the files in the folder] ********************************************
changed: [remote_host]
TASK [move those files] ********************************************************
ok: [remote_host] => (item=('[u'file10003', u'file10158', u'file1032', u'file10325', u'file10630', u'file10738', u'file10818', u'file10841', u'file10980', u'file11349', u'file11589', u'file11744', u'file12003', u'file12008', u'file12234', u'file12734', u'file12768', u'file12774', u'file12816', u'file13188', u'file13584', u'file14560', u'file15512', u'file16020', u'file16051', u'file1610', u'file16610', u'file16642', u'file16997', u'file17233', u'file17522', u'file17592', u'file17908', u'file18149', u'file18311', u'file18313', u'file18438', u'file185', u'file18539', u'file18777', u'file18808', u'file18878', u'file18885', u'file19313', u'file19755', u'file19863', u'file20158', u'file20347', u'file2064', u'file20840', u'file21123', u'file21422', u'file21425', u'file21711', u'file21770', u'file21790', u'file21808', u'file22054', u'file22359', u'file22601', u'file23609', u'file23763', u'file24208', u'file24430', u'file24452', u'file25028', u'file25131', u'file25863', u'file26197', u'file26384', u'file26398', u'file26815', u'file27025', u'file27127', u'file27373', u'file2815', u'file28175', u'file28780', u'file28886', u'file29058', u'file29096', u'file29456', u'file29513', u'file29677', u'file29836', u'file30034', u'file30216', u'file30464', u'file30601', u'file30687', u'file30795', u'file31299', u'file31478', u'file31883', u'file31908', u'file32251', u'file3229', u'file32724', u'file32736', u'file3498', u'file4173', u'file4235', u'file4748', u'file4883', u'file5812', u'file6126', u'file6130', u'file6327', u'file6462', u'file6624', u'file6832', u'file7576', u'file8355', u'file8693', u'file8726', u'file8838', u'file8897', u'file9112', u'file9331', u'file993']'))
PLAY RECAP *********************************************************************
remote_host : ok=3 changed=1 unreachable=0 failed=0
It got all the files from the directory, I guess because of the with_items (though I don't know what the "u" stands for), but the second directory on my localhost remains empty.
Any suggestions?
You may have more luck with the synchronize module
Example below pulls a dir from inventory host to localhost:
- synchronize:
    mode: pull
    src: "/dir/"
    dest: "/second_dir/"
Additional info based on comment: Here is how you would delete the source files after transferring them:
- synchronize:
    mode: pull
    src: "/dir/"
    dest: "/second_dir/"
    rsync_opts:
      - "--remove-source-files"
--- # copying yaml file
- hosts: localhost
  user: "{{ user }}"
  connection: ssh
  become: yes
  gather_facts: no
  tasks:
    - name: Creation of directory on remote server
      file:
        path: /var/lib/jenkins/.aws
        state: directory
        mode: 0755
      register: result
    - debug:
        var: result
    - name: get file names to copy
      command: "find conf/.aws -type f"
      register: files_to_copy
    - name: copy files
      copy:
        src: "{{ item }}"
        dest: "/var/lib/jenkins/.aws"
        owner: "{{ user }}"
        group: "{{ group }}"
        remote_src: True
        mode: 0644
      with_items:
        - "{{ files_to_copy.stdout_lines }}"

Why Ansible didn't see attribute in variable?

I have Ansible role "db" with simple task:
- name: Check repos
  apt_repository: repo="{{ item.repo }}" state={{ item.state }}
  with_items:
    - "{{ apt_repos }}"
In /defaults/main.yml:
apt_repos:
  # Percona
  - { state: present, repo: 'deb http://repo.percona.com/apt wheezy main', keyserver: 'keyserver.ubuntu.com', key: '1C4CBDCDCD2EFD2A', needkey: True }
  - { state: present, repo: 'deb-src http://repo.percona.com/apt wheezy main', needkey: False }
When I try to run this ansible-playbook:
---
- hosts: test
  roles:
    - db
I see this error:
fatal: [10.10.10.10] => One or more undefined variables: 'unicode object' has no attribute 'repo'
FATAL: all hosts have already failed -- aborting
But I have another role with the same task and variable, and it works perfectly. What's wrong?
You want to be doing this:
with_items: apt_repos
apt_repos is a list. By referencing it as - "{{ apt_repos }}" the extra - is turning it into a list of lists. You also don't need the quotes or braces in this case - those are pretty much just redundant in this type of situation.
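On current Ansible versions the bare with_items: apt_repos form is deprecated; the equivalent with loop (a sketch using the variables from the question) would be:

```yaml
- name: Check repos
  apt_repository:
    repo: "{{ item.repo }}"
    state: "{{ item.state }}"
  loop: "{{ apt_repos }}"
```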
