Using the Ansible launch configuration module ec2_lc and security group names versus IDs - Linux

I want to accomplish the following in AWS EC2:
Create security groups using the Ansible module ec2_group.
Create a launch configuration using the Ansible module ec2_lc and attach a security group created earlier.
Now, I want to use the security group names instead of IDs, because I want to be able to recreate the whole infrastructure with Ansible if needed.
Recreating security groups will cause the ID of the group to be different.
But the ec2_lc module only accepts security group IDs.
Is there any way I can map a security group name to an ID?
I am defining security groups like this:
- name: create ec2 group
  ec2_group:
    name: "{{ item.name }}"
    description: "{{ item.description }}"
    vpc_id: "{{ item.vpc_id }}"
    region: "{{ item.region }}"
    state: present
    rules: "{{ item.rules }}"
    rules_egress: "{{ item.rules_egress }}"
  register: sg
The launch configuration code looks like this:
- name: Create Launch Configuration
  ec2_lc:
    region: "{{ item.region }}"
    name: "{{ item.name }}"
    image_id: "{{ item.image_id }}"
    key_name: "{{ item.key_name }}"
    security_groups: "{{ item.security_groups }}" # how can I refer to a specific group_id based on a group name?
    instance_type: "{{ item.instance_type }}"
    user_data: "{{ item.ec2_user_data }}"
    instance_profile_name: "{{ item.instance_profile_name }}"
    assign_public_ip: "{{ item.assign_public_ip }}"

Use the ec2_group_facts module to query the security groups by name:
- ec2_group_facts:
    filters:
      group-name:
        - "{{ sg.name }}"
  register: ec2sgs

- debug:
    msg: "{{ ec2sgs.security_groups | map(attribute='group_id') | list }}"

With some credit to this question, you can try this:
- name: Create Launch Configuration
  ec2_lc:
    ...
    security_groups: "{{ sg.results | selectattr('item.name', 'equalto', item) | join('', attribute='group_id') }}"
    ...
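To check what that expression resolves to before wiring it into ec2_lc, a quick debug helps (a sketch; 'my-sg-name' is a placeholder for whatever group name you are looking up):
- debug:
    msg: "{{ sg.results | selectattr('item.name', 'equalto', 'my-sg-name') | join('', attribute='group_id') }}"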

You can write a filter that makes an AWS API call for you dynamically.
For instance, I have something like this in my vars/main.yml:
public_sg_id: "{{ 'Public' | get_sg(public_vpc_id, aws_region) }}"
Here is the code for the get_sg filter (save it as a filter plugin, e.g. filter_plugins/get_sg.py):
import boto.ec2
from ansible import errors


def get_sg(name, vpc_id, region):
    connect = boto.ec2.connect_to_region(region)
    filter_by = {
        "tag-key": "Name",
        "tag-value": name,
        "vpc-id": vpc_id
    }
    sg_groups = connect.get_all_security_groups(filters=filter_by)
    if len(sg_groups) == 1:
        return sg_groups[0].id
    elif len(sg_groups) > 1:
        # Join the group IDs, not the group objects, so the message formats cleanly.
        raise errors.AnsibleFilterError(
            "Too many results for {0}: {1}".format(
                name, ",".join([sg.id for sg in sg_groups])
            )
        )
    else:
        raise errors.AnsibleFilterError(
            "Security Group {0} was not found".format(name)
        )


class FilterModule(object):
    # Required so Ansible registers get_sg as a usable filter.
    def filters(self):
        return {'get_sg': get_sg}
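A quick way to exercise the filter before wiring it into your vars (a sketch; it assumes the file above lives in filter_plugins/ next to your playbook, and that public_vpc_id and aws_region are defined):
- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ 'Public' | get_sg(public_vpc_id, aws_region) }}"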

Related

How to use "maps" in Ansible along with "with_items"

I am trying to add labels to each node, where the labels are a map:
set_node_labels:
  topology.kubernetes.io/region: "syd"
  topology.kubernetes.io/zone: "syd01"
I have written the Ansible task as follows, however it does not work as expected:
- name: Get all Nodes
  shell: "oc get nodes | awk '(NR>1) { print $1 }'"
  register: node_names

- name: Print phone records
  k8s:
    state: present
    kind: Node
    name: "{{ item }}"
    definition:
      metadata:
        labels: "{{ item.key }} {{ item.value }}"
  loop: "{{ lookup('dict', set_node_labels) }}"
  with_items: "{{ node_names.stdout_lines }}"
You can also use your existing code with a little tweak:
- name: Get all Nodes
  shell: "oc get nodes | awk '(NR>1) { print $1 }'"
  register: node_names

- name: Print phone records
  k8s:
    state: present
    kind: Node
    name: "{{ item }}"
    definition:
      metadata:
        labels: "{{ set_node_labels }}"
  with_items: "{{ node_names.stdout_lines }}"
First things first: you should use existing modules when they are available, instead of the shell module.
In your case, you can get the information about your nodes via the k8s_info module.
So, your first task should be:
- name: Get all Nodes
  k8s_info:
    kind: Node
  register: node_names
Then, in order to pass your labels: those should actually be in a dictionary, so you can pass the whole set_node_labels as labels:
- name: Print phone records
  k8s:
    state: present
    kind: Node
    name: "{{ item.metadata.name }}"
    definition:
      metadata:
        labels: "{{ set_node_labels }}"
  loop: "{{ node_names.resources }}"

Multiple with_items in an Ansible module block

I want to create multiple logical volumes with a variable file, but it returns a syntax error: found character that cannot start any token. I have tried different ways, but it still doesn't work.
main.yml
---
- name: playbook for create volume groups
  hosts: localhost
  become: true
  tasks:
    - include_vars: vars.yml

    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item.var1 }}"
        size: "{{ item.var2 }}"
      with_items:
        - { var1: "{{ var_lv_name }}", var2: "{{ var_lv_size }}" }
vars.yml
var_lv_name:
  - lv05
  - lv06
var_lv_size:
  - 1g
  - 1g
Use with_together. Test it first. For example,
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"
gives (abridged)
msg: 'Create lv: lv05 size: 1g'
msg: 'Create lv: lv06 size: 1g'
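Once the debug output looks right, the real task is the same pattern (a sketch based on the variables above):
- name: Create a logical volume
  lvol:
    vg: vg03
    lv: "{{ item.0 }}"
    size: "{{ item.1 }}"
  with_together:
    - "{{ var_lv_name }}"
    - "{{ var_lv_size }}"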
Optionally, put the declaration below into the file vars.yml
var_lv: "{{ var_lv_name|zip(var_lv_size) }}"
This creates the list
var_lv:
  - [lv05, 1g]
  - [lv06, 1g]
Use it in the code. The simplified task below gives the same results
- debug:
    msg: "Create lv: {{ item.0 }} size: {{ item.1 }}"
  loop: "{{ var_lv }}"
The previous answer is totally correct, but in my humble opinion we should be moving to the new way of doing things, with loop and filters.
Here's my answer:
---
- name: playbook for create volume groups
  hosts: localhost
  gather_facts: no
  become: true
  vars_files:
    - vars.yml
  tasks:
    - name: Create a logical volume
      lvol:
        vg: vg03
        lv: "{{ item[0] }}"
        size: "{{ item[1] }}"
      loop: "{{ var_lv_name | zip(var_lv_size) | list }}"
In this answer you're using the new loop keyword together with filters like zip, turning the result into a list for iteration in the loop.

Extend Volume Group using Ansible

I have been trying to extend the VG via Ansible, passing the pvname by variable, but I really don't understand why it is not working.
Below you can see my code.
Variable file:
new_disk:
  - diskname: /dev/sdc
    pvname: /dev/sdb1, dev/sdc1
    vgname: datavg
lvm_settings:
  - lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
Tasks file:
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item.diskname }}"
        number: 1
        state: present
      with_items: "{{ new_disk }}"
      register: partition_status
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ vgname }}"
    pvs: "{{ pvname }}"
    pvresize: yes
Below, you can see the error message:
TASK [resize_fs_linux : Extending the Volume Group] *****************************
fatal: [10.1.33.225]: FAILED! => {"changed": false, "msg": "Device /home/icc-admin/ dev/sdc1 not found."}
Do you have any idea why it is not working?
I really appreciate your help and time.
Best regards,
The problem is the pvname value: it is a single string, and its second entry, dev/sdc1, is missing the leading slash, so it gets resolved relative to the working directory; that is why the error complains about /home/icc-admin/ dev/sdc1. Declare the physical volumes as a proper list. I got it working this way:
Variable file
diskname:
  - /dev/sdb
  - /dev/sdc
disks_settings:
  - vgname: datavg
    pvname:
      - /dev/sdb1
      - /dev/sdc1
lvm_settings:
  - vgname: datavg
    lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
Tasks file:
---
# tasks file for resize_fs_linux
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item }}"
        number: 1
        state: present
      with_items: "{{ diskname }}"
      register: partition_status
      run_once: true
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ item.vgname }}"
    pvs: "{{ item.pvname }}"
    pvresize: yes
  with_items: "{{ disks_settings }}"

- name: Increasing the filesystems
  community.general.lvol:
    vg: "{{ item.vgname }}"
    lv: "{{ item.lv_name }}"
    size: "{{ item.lv_size }}"
    resizefs: true
  with_items: "{{ lvm_settings }}"

Ansible - Change user password on Linux

I'm writing a playbook to change a user's password on Linux. I want to use the same playbook for all users.
What I am doing is:
- name: change users password
  hosts: localhost
  vars_files: ['credentials.yml']
  tasks:
    - user:
        name: "{{ user_name }}"
        password: "{{ dynamic_password | password_hash('sha512') }}"
And my credentials.yml:
dynamic_password: "$6$mysecretsalt$QF9IdmmJLZWuEO8PKQ0w7c81Rre0hv.udU83ypIO3cG5DbAo90IXwHX6wcuhDJaLAkdE5KSSl9lKvdMFh810b."
generic_password: "$6$IxMDgSamMRSMAEY1$rfGAWC8xBgGMMGOFJXAMxnUuiKVKrH3SDOuNIrJpx4rMZy/FG5spqp1f9oSAcDBpTJ2vOK2rAboWHZ6Zn5qZm."
What I am executing:
ansible-playbook prueba81.yml --extra-vars "user_name=pepito type_password=dynamic_password"
What I want to do is indicate on the command line the user and which password (inside the YAML file) it should use. But it seems that the variable type_password is not recognized.
Can you help me?
Thanks!
Your extra-vars on the command line isn't setting type_password to the value of the variable dynamic_password. You're literally setting the variable type_password to be the string "dynamic_password".
If you want to tell ansible what variable to use from the command line, you can do it several ways. Here's one example:
ansible-playbook prueba81.yml --extra-vars "user_name=pepito type_password=dynamic"
tasks:
  - user:
      name: "{{ user_name }}"
      password: "{{ dynamic_password | password_hash('sha512') }}"
    when: type_password == "dynamic"
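And the same pattern repeated for the generic entry (a sketch, reusing the generic_password hash already defined in credentials.yml):
- user:
    name: "{{ user_name }}"
    password: "{{ generic_password | password_hash('sha512') }}"
  when: type_password == "generic"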
Use the vars lookup to reference the password indirectly. For example:
- user:
    name: "{{ user_name }}"
    password: "{{ my_password | password_hash('sha512') }}"
  vars:
    my_password: "{{ lookup('vars', type_password) }}"
You might want to set a default type. For example
- user:
    name: "{{ user_name }}"
    password: "{{ my_password | password_hash('sha512') }}"
  vars:
    my_password: "{{ lookup('vars', type_password | default('dynamic_password')) }}"

Ansible vmware_guest optional disk with Ansible Tower survey

I have a playbook for the creation of a VM from a template in VMware ESXi 6.7. My playbook is below. I want to configure the second (and possibly subsequent) disks only if the DISK1_SIZE_GB variable is > 0. This is not working. I've also tried using 'when: DISK1_SIZE_GB is defined' with no luck. I'm using a survey in Ansible Tower, with the 2nd disk configuration being an optional answer. In this case I get an error about 0 being an invalid disk size, or, when I check for variable definition, an error about DISK1_SIZE_GB being undefined. Either way, the 'when' conditional doesn't seem to be working.
If I hardcode the size, as in the first 'disk' entry, it works fine; same if I enter a valid size from Ansible Tower. I need to NOT configure additional disks unless the size is defined in the Tower survey.
Thanks!
---
- name: Create a VM from a template
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Clone a template to a VM
      vmware_guest:
        hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
        username: "{{ lookup('env', 'VMWARE_USER') }}"
        password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
        validate_certs: 'false'
        name: "{{ HOSTNAME }}"
        template: RHEL-Server-7.7
        datacenter: Production
        folder: Templates
        state: poweredon
        hardware:
          num_cpus: "{{ CPU_NUM }}"
          memory_mb: "{{ MEM_MB }}"
        disk:
          - size_gb: 20
            autoselect_datastore: true
          - size_gb: "{{ DISK1_SIZE_GB }}"
            autoselect_datastore: true
            when: DISK1_SIZE_GB > 0
        networks:
          - name: "{{ NETWORK }}"
            type: static
            ip: "{{ IP_ADDR }}"
            netmask: "{{ NETMASK }}"
            gateway: "{{ GATEWAY }}"
            dns_servers: "{{ DNS_SERVERS }}"
            start_connected: true
        wait_for_ip_address: yes
AFAIK this can't be accomplished in a single task. You were on the right track with when: DISK1_SIZE_GB is defined if disk: was a task and not a parameter though. Below is how I would approach this.
Create two survey questions:
- DISK1_SIZE_GB - integer - required answer - enforce a non-zero minimum value such as 20 (since you're deploying RHEL)
- DISK2_SIZE_GB - integer - optional answer - no minimum or maximum value
Create disk 1 in your existing vmware_guest task:
disk:
  - size_gb: "{{ DISK1_SIZE_GB }}"
    autoselect_datastore: true
Create a new vmware_guest_disk task which runs immediately afterwards and conditionally adds the second disk:
- name: Add second hard disk if necessary
  vmware_guest_disk:
    hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
    username: "{{ lookup('env', 'VMWARE_USER') }}"
    password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
    validate_certs: 'false'
    name: "{{ HOSTNAME }}"
    datacenter: Production
    folder: Templates
    state: poweredon
    disk:
      - size_gb: "{{ DISK2_SIZE_GB }}"
        autoselect_datastore: true
  when: DISK2_SIZE_GB is defined
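If the Tower survey can hand back an empty string or 0 for the optional answer (an assumption about how the survey is configured), a slightly stricter condition avoids passing an invalid size:
  when: DISK2_SIZE_GB is defined and DISK2_SIZE_GB | int > 0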
