I am trying to add labels to each node, where the labels are defined as a map:
set_node_labels:
  topology.kubernetes.io/region: "syd"
  topology.kubernetes.io/zone: "syd01"
I have written the Ansible tasks as follows; however, they do not work as expected:
- name: Get all Nodes
  shell: "oc get nodes | awk '(NR>1) { print $1 }'"
  register: node_names

- name: Print phone records
  k8s:
    state: present
    kind: Node
    name: "{{ item }}"
    definition:
      metadata:
        labels: "{{ item.key }} {{ item.value }}"
  loop: "{{ lookup('dict', set_node_labels) }}"
  with_items: "{{ node_names.stdout_lines }}"
You can also use your existing code with a little tweak:
- name: Get all Nodes
  shell: "oc get nodes | awk '(NR>1) { print $1 }'"
  register: node_names

- name: Print phone records
  k8s:
    state: present
    kind: Node
    name: "{{ item }}"
    definition:
      metadata:
        labels: "{{ set_node_labels }}"
  with_items: "{{ node_names.stdout_lines }}"
First things first: you should use existing modules when they are available, instead of the shell module.
In your case, you can get the information about your nodes with the k8s_info module.
So, your first task should be:
- name: Get all Nodes
  k8s_info:
    kind: Node
  register: node_names
Then, in order to pass your labels: those should actually be in a dictionary, so you should be able to pass the whole set_node_labels as labels:
- name: Print phone records
  k8s:
    state: present
    kind: Node
    name: "{{ item.metadata.name }}"
    definition:
      metadata:
        labels: "{{ set_node_labels }}"
  loop: "{{ node_names.resources }}"
I want to create multiple logical volumes with a variable file, but it returns a syntax error: "found character that cannot start any token". I have tried different ways, but it still doesn't work.
main.yml
---
- name: playbook for create volume groups
  hosts: localhost
  become: true
  tasks:
    - include_vars: vars.yml

    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item.var1 }}"
        size: "{{ item.var2 }}"
      with_items:
        - { var1: "{{ var_lv_name }}", var2: "{{ var_lv_size }}" }
vars.yml
var_lv_name:
  - lv05
  - lv06
var_lv_size:
  - 1g
  - 1g
Use with_together. Test it first. For example,
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"
gives (abridged)
msg: 'Create lv: lv05 size: 1g'
msg: 'Create lv: lv06 size: 1g'
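Once the debug output looks right, the same pattern carries over to the real task; a sketch, assuming the volume group vg03 already exists:
- name: Create a logical volume
  lvol:
    vg: vg03
    lv: "{{ item.0 }}"   # element from var_lv_name
    size: "{{ item.1 }}" # matching element from var_lv_size
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"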
Optionally, put the declaration below into the file vars.yml
var_lv: "{{ var_lv_name|zip(var_lv_size) }}"
This creates the list
var_lv:
  - [lv05, 1g]
  - [lv06, 1g]
Use it in the code. The simplified task below gives the same results
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  loop: "{{ var_lv }}"
The previous answer is totally correct, but in my humble opinion we should be moving to the new way of doing things, with loop and filters.
Here's my answer:
---
- name: playbook for create volume groups
  hosts: localhost
  gather_facts: no
  become: true
  vars_files:
    - vars.yml
  tasks:
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item[0] }}"
        size: "{{ item[1] }}"
      loop: "{{ var_lv_name | zip(var_lv_size) | list }}"
This answer uses the new loop keyword together with the zip filter, turning the result into a list for iteration in the loop.
My host_vars file has about 5k lines of YAML, so I would like to split it into separate YAML files - one file per service.
Simplified example:
user#test $ cat production/split_configs/a.example.net.yml
my_array:
- a.example.net
user#test $ cat production/split_configs/b.example.net.yml
my_array:
- b.example.net
user#test $ cat webhosts.yml
- hosts: myservers
  pre_tasks:
    - name: merge ansible arrays
      tags: always
      delegate_to: localhost
      block:
        - name: find config files
          find:
            paths: production/configs/
            patterns: '*.yml'
          register: find_results

        - name: aaa
          debug:
            msg: "{{ find_results.files }}"

        - name: bbb
          debug:
            msg: "{{ item.path }}"
          with_items: "{{ find_results.files }}"

        - name: ccc
          debug:
            msg: "{{ lookup('file', 'production/configs/a.example.net.yml') }}"

        - name: ddd
          debug:
            msg: "{{ lookup('file', item.path) }}"
          loop: "{{ find_results.files }}"
  tasks:
    - name: eee
      debug:
        msg: "{{ my_array }}"
The goal is to merge the content of both arrays and print the merged content in task eee:
my_array:
  - a.example.net
  - b.example.net
Task aaa prints information about the files (path, mode, uid, ...) - it works.
Tasks bbb and ddd print nothing. I do not understand why.
Task ccc prints the content of a file, but the path is hard-coded in the playbook :-(
After loading the files I need to merge them. My idea is to use something like set_fact: my_array="{{ my_array + my_array }}" in a task with with_items: "{{ find_results.files }}". Is that a good idea? Or how can I do it better?
For example, the tasks below do the job
- include_vars:
    file: "{{ item }}"
    name: "my_prefix_{{ item | regex_replace('\\W', '_') }}"
  with_fileglob: production/split_configs/*

- set_fact:
    my_vars: "{{ my_vars | d({}) |
                 combine(lookup('vars', item),
                         recursive=True,
                         list_merge='append') }}"
  loop: "{{ q('varnames', 'my_prefix_.*') }}"
gives
my_vars:
  my_array:
    - a.example.net
    - b.example.net
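If you then want the merged list back under its original name, one more set_fact does it; a small sketch:
- set_fact:
    my_array: "{{ my_vars.my_array }}"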
You can use the simple cat command to merge the files into one file and later include that var file, for example -
- raw: "cat production/split_configs/* > my_vars.yml"
- include_vars: file=my_vars.yml name=my_vars
will give you the result -
my_vars:
  my_array:
    - a.example.net
    - b.example.net
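Note that raw runs on the managed host; if the split configs live on the control machine (as the find task with delegate_to: localhost in the question suggests), you would delegate the cat as well - a sketch:
- raw: "cat production/split_configs/* > my_vars.yml"
  delegate_to: localhost
- include_vars: file=my_vars.yml name=my_vars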
I have been trying to extend the VG via Ansible, passing the pvname in a variable; however, I really don't understand why it is not working.
Below you can see my code.
Variable file:
new_disk:
  - diskname: /dev/sdc
pvname: /dev/sdb1, dev/sdc1
vgname: datavg
lvm_settings:
  - lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
tasks file:
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item.diskname }}"
        number: 1
        state: present
      with_items: "{{ new_disk }}"
      register: partition_status
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ vgname }}"
    pvs: "{{ pvname }}"
    pvresize: yes
Below, you can see the error message:
TASK [resize_fs_linux : Extending the Volume Group] ***************************
fatal: [10.1.33.225]: FAILED! => {"changed": false, "msg": "Device /home/icc-admin/ dev/sdc1 not found."}
Do you have any idea why it is not working?
I really appreciate your help and time
Best Regards,
For me, it works this way:
Variable file
diskname:
  - /dev/sdb
  - /dev/sdc
disks_settings:
  - vgname: datavg
    pvname:
      - /dev/sdb1
      - /dev/sdc1
lvm_settings:
  - vgname: datavg
    lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
Tasks file:
---
# tasks file for resize_fs_linux
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item }}"
        number: 1
        state: present
      with_items: "{{ diskname }}"
      register: partition_status
      run_once: true
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ item.vgname }}"
    pvs: "{{ item.pvname }}"
    pvresize: yes
  with_items: "{{ disks_settings }}"

- name: Increasing the filesystems
  community.general.lvol:
    vg: "{{ item.vgname }}"  # vgname lives inside each lvm_settings item
    lv: "{{ item.lv_name }}"
    size: "{{ item.lv_size }}"
    resizefs: true
  with_items: "{{ lvm_settings }}"
I want to execute a task only when multiple files exist. If only a single file exists, I need to skip the task. How can I achieve this?
I am unable to achieve this with the playbook below:
---
- name: Standardize
  hosts: test
  gather_facts: false
  vars:
    file_vars:
      - { id: 1, name: /etc/h_cm }
      - { id: 2, name: /etc/H_CM }
  tasks:
    - block:
        - name: Check if both exists
          stat:
            path: "{{ item.name }}"
          with_items: "{{ file_vars }}"
          register: cm_result

        - name: Move both files
          shell: mv "{{ item.item }}" /tmp/merged
          with_items: "{{ cm_result.results }}"
          when: item.stat.exists
After the "Check if both exists" task, you can add a set_fact task like this one:
- name: set facts
  set_fact:
    files_exist: "{{ (files_exist | default([])) + [item.stat.exists] }}"
  with_items: "{{ cm_result.results }}"
And you change your move files task to:
- name: Move both files
  debug:
    msg: "{{ item.stat.exists }}"
  with_items: "{{ cm_result.results }}"
  when: false not in files_exist
You have to specify shell: mv "{{ item.item.name }}" /tmp/merged instead of shell: mv "{{ item.item }}" /tmp/merged.
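As an aside, the helper list can be skipped entirely by counting the existing files directly in the condition; a sketch using standard Jinja filters:
- name: Move both files
  shell: mv "{{ item.item.name }}" /tmp/merged
  with_items: "{{ cm_result.results }}"
  # selectattr with no test keeps only items whose stat.exists is truthy
  when: cm_result.results | selectattr('stat.exists') | list | length > 1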
Check if the below works:
- name: Standardize
  hosts: test
  gather_facts: false
  become: yes  ## If needed
  vars:
    file_vars:
      - { id: 1, name: /etc/h_cm }
      - { id: 2, name: /etc/H_CM }
  tasks:
    - block:
        - name: Check if both file exists
          stat:
            path: "{{ item.name }}"
          with_items: "{{ file_vars }}"
          register: cm_result

        - debug:
            var: item.stat.exists
          loop: "{{ cm_result.results }}"

        - name: Create a dummy list
          set_fact:
            file_state: []

        - name: Add true to list if file exists
          set_fact:
            file_state: "{{ file_state + [item.stat.exists] }}"
          loop: "{{ cm_result.results }}"
          when: item.stat.exists == true

        - name: Move both files
          shell: mv "{{ item.item.name }}" /tmp/merged
          loop: "{{ cm_result.results }}"
          when: file_state | length > 1
I want to accomplish the following in AWS EC2:
Create security groups using the Ansible module ec2_group.
Create a launch configuration using the Ansible module ec2_lc and attach a security group created earlier.
Now, I want to use the security group names instead of IDs, because I want to be able to recreate the whole infrastructure with Ansible if needed.
Recreating security groups will cause the ID of each group to be different.
But the ec2_lc module only accepts security group IDs.
Is there any way I can map a security group name to an ID?
I am defining security groups like this:
- name: create ec2 group
  ec2_group:
    name: "{{ item.name }}"
    description: "{{ item.description }}"
    vpc_id: "{{ item.vpc_id }}"
    region: "{{ item.region }}"
    state: present
    rules: "{{ item.rules }}"
    rules_egress: "{{ item.rules_egress }}"
  register: sg
The launch configuration code looks like this:
- name: Create Launch Configuration
  ec2_lc:
    region: "{{ item.region }}"
    name: "{{ item.name }}"
    image_id: "{{ item.image_id }}"
    key_name: "{{ item.key_name }}"
    security_groups: "{{ item.security_groups }}"  # how can I refer to a specific group_id based on a group name?
    instance_type: "{{ item.instance_type }}"
    user_data: "{{ item.ec2_user_data }}"
    instance_profile_name: "{{ item.instance_profile_name }}"
    assign_public_ip: "{{ item.assign_public_ip }}"
Use the ec2_group_facts module to query the security groups by name:
- ec2_group_facts:
    filters:
      group-name:
        - "{{ sg.name }}"
  register: ec2sgs

- debug:
    msg: "{{ ec2sgs.security_groups | map(attribute='group_id') | list }}"
With credit to this question, you can try this:
- name: Create Launch Configuration
  ec2_lc:
    ...
    security_groups: "{{ sg.results | selectattr('item.name', 'equalto', item) | join('', attribute='group_id') }}"
    ...
You can write a filter that makes an AWS API call for you dynamically.
For instance, I have something like this in my vars/main.yml:
public_sg_id: "{{ 'Public' | get_sg(public_vpc_id, aws_region) }}"
Here is the code for the get_sg filter:
import boto.ec2
from ansible import errors


def get_sg(name, vpc_id, region):
    # Look up a security group ID by its Name tag within the given VPC.
    connect = boto.ec2.connect_to_region(region)
    filter_by = {
        "tag-key": "Name",
        "tag-value": name,
        "vpc-id": vpc_id
    }
    sg_groups = connect.get_all_security_groups(filters=filter_by)
    if len(sg_groups) == 1:
        return sg_groups[0].id
    elif len(sg_groups) > 1:
        raise errors.AnsibleFilterError(
            "Too many results for {0}: {1}".format(
                name, ",".join(sg.id for sg in sg_groups)
            )
        )
    else:
        raise errors.AnsibleFilterError(
            "Security Group {0} was not found".format(name)
        )