Ansible Azure - Error CanceledAndSupersededDueToAnotherOperation - azure

One of my playbooks is executed by multiple instances at the same time, and Ansible should simply ignore the security rule or group if it already exists. Instead I'm getting the error below. Any idea why I get this error and how I can prevent it?
Error creating/updating security group testing_cloudshell_sg_201 - Azure Error: Canceled
Message: Operation was canceled.
Exception Details:
    Error Code: CanceledAndSupersededDueToAnotherOperation
    Operation PutNetworkSecurityGroupOperation (ce212e6d-2196-4e0f-9d52-9433535be288) was canceled and superseded by operation PutNetworkSecurityGroupOperation (2c9b8db1-30f3-4486-965e-3449b4572858)
Here is a sample of the playbook which gets executed by multiple instances.
- name: create Azure security group
  # create a security group for the vpc
  azure_rm_securitygroup:
    resource_group: "{{ resource_group }}"
    location: "{{ azure_vm_region }}"
    purge_rules: no
    name: "{{ sg_name }}"
    rules: >-
      {{
        sg_rules.splitlines()
        | map('split', ',')
        | json_query("[*].{
            name: [0],
            protocol: [1],
            source_port_range: [2],
            destination_port_range: [3],
            source_address_prefix: [4],
            destination_address_prefix: [5],
            priority: [6],
            access: 'Allow',
            direction: 'Inbound'
          }")
      }}
    tags:
      Name: "{{ sg_name }}"
  register: azure_security_group_results

- name: Update Azure security group with static Rule
  azure_rm_securitygroup:
    resource_group: "{{ resource_group }}"
    location: "{{ azure_vm_region }}"
    purge_rules: no
    name: "{{ sg_name }}"
    rules:
      - name: AWX-SSH
        protocol: Tcp
        source_port_range: "*"
        destination_port_range: 22
        source_address_prefix: "{{ sg_ssh_cidr }}"
        destination_address_prefix: "*"
        priority: 299
        access: "Allow"
        direction: "Inbound"
  register: azure_security_group_results_2
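One possible mitigation, since the error comes from two runs writing to the same NSG at the same time, is to let the losing task retry until Azure stops cancelling the superseded PUT. This is only a sketch under that assumption; the retries/delay values are arbitrary, and serializing the concurrent runs (for example through a single dedicated job) would work just as well:

- name: Update Azure security group with static Rule
  azure_rm_securitygroup:
    resource_group: "{{ resource_group }}"
    location: "{{ azure_vm_region }}"
    purge_rules: no
    name: "{{ sg_name }}"
    rules:
      - name: AWX-SSH
        protocol: Tcp
        source_port_range: "*"
        destination_port_range: 22
        source_address_prefix: "{{ sg_ssh_cidr }}"
        destination_address_prefix: "*"
        priority: 299
        access: "Allow"
        direction: "Inbound"
  register: azure_security_group_results_2
  # keep retrying while a concurrent PutNetworkSecurityGroupOperation supersedes ours
  until: azure_security_group_results_2 is succeeded
  retries: 5
  delay: 10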

Related

Failed to get credentials when running ansible playbook from ansible tower to stop/start azure vm

I am running an Ansible playbook from Ansible Tower to stop/start a VM. Below is the code.
---
- hosts: localhost
  gather_facts: yes
  vars:
    state: "{{ state }}"
    env:
      ARM_SUBSCRIPTION_ID: "{{ subscription_id }}"
      ARM_TENANT_ID: "{{ tenant_id }}"
      ARM_CLIENT_ID: "{{ client_id }}"
      ARM_CLIENT_SECRET: "{{ secret_value }}"
  collections:
    - ansible.tower
  tasks:
    - name: Power Off
      azure_rm_virtualmachine:
        resource_group: "{{ resource_group_name }}"
        name: "{{ virtual_machine_name }}"
        started: no
      when: state == "stop"
    - name: Deallocate
      azure_rm_virtualmachine:
        resource_group: "{{ resource_group_name }}"
        name: "{{ virtual_machine_name }}"
        allocated: no
      when: state == "delete"
    - name: Power On
      azure_rm_virtualmachine:
        resource_group: "{{ resource_group_name }}"
        name: "{{ virtual_machine_name }}"
      when: state == "start"
  environment: "{{ env }}"
This is giving the below error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to get credentials. Either pass as parameters, set environment variables, define a profile in ~/.azure/credentials, or log in with Azure CLI (az login)."}
Syntax-wise everything looks good. Please help.
You can pass the credentials directly to the module as parameters, like below.
- name: Restart
  azure_rm_virtualmachine:
    resource_group: "{{ resource_group_name }}"
    name: "{{ virtual_machine_name }}"
    restarted: yes
    subscription_id: "{{ subscription_id }}"
    tenant: "{{ tenant_id }}"
    client_id: "{{ client_id }}"
    secret: "{{ secret_value }}"
  when: state == "restart"
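Alternatively, if you prefer the play-level environment block, note that (as far as I recall) the azure_rm_* modules look for AZURE_* variable names rather than the ARM_* names used in the original play, so a sketch of the env map would be:

env:
  AZURE_SUBSCRIPTION_ID: "{{ subscription_id }}"
  AZURE_TENANT: "{{ tenant_id }}"
  AZURE_CLIENT_ID: "{{ client_id }}"
  AZURE_SECRET: "{{ secret_value }}"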

Extend Volume Group using Ansible

I have been trying to extend the VG via Ansible, passing the pvname as a variable, but I really don't understand why it is not working.
Below you can see my code.
Variable file:
new_disk:
  - diskname: /dev/sdc
pvname: /dev/sdb1, dev/sdc1
vgname: datavg
lvm_settings:
  - lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
Tasks file:
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item.diskname }}"
        number: 1
        state: present
      with_items: "{{ new_disk }}"
      register: partition_status
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ vgname }}"
    pvs: "{{ pvname }}"
    pvresize: yes
Below, you can see the error message:
TASK [resize_fs_linux : Extending the Volume Group] *********************************************************
fatal: [10.1.33.225]: FAILED! => {"changed": false, "msg": "Device /home/icc-admin/ dev/sdc1 not found."}
Do you have any idea why it is not working?
I really appreciate your help and time
Best Regards,
I got it to work this way:
Variable file:
diskname:
  - /dev/sdb
  - /dev/sdc
disks_settings:
  - vgname: datavg
    pvname:
      - /dev/sdb1
      - /dev/sdc1
lvm_settings:
  - vgname: datavg
    lv_name: datalv
    lv_size: +100%FREE
    fs_name: ansible_fs_test
    lvpath: /dev/mapper/datavg-datalv
    filesystem_type: ext4
Tasks file:
---
# tasks file for resize_fs_linux
- include_vars: "{{ vm_name }}.yml"

- name: First disk partition settings
  block:
    - name: Create a new primary partition
      community.general.parted:
        device: "{{ item }}"
        number: 1
        state: present
      with_items: "{{ diskname }}"
      register: partition_status
      run_once: true
  rescue:
    - name: Debug messages to check the error
      debug:
        msg: "{{ partition_status }}"

- name: Extending the Volume Group
  community.general.lvg:
    vg: "{{ item.vgname }}"
    pvs: "{{ item.pvname }}"
    pvresize: yes
  with_items: "{{ disks_settings }}"

- name: Increasing the filesystems
  community.general.lvol:
    vg: "{{ item.vgname }}"
    lv: "{{ item.lv_name }}"
    size: "{{ item.lv_size }}"
    resizefs: true
  with_items: "{{ lvm_settings }}"
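The variable file defines filesystem_type and lvpath, but the tasks above never use them. If the logical volume also needs a filesystem created on first run (the lvol task already grows an existing one via resizefs), a hedged follow-up task could look like this; it is only a sketch assuming the ext4 filesystem should live on the lvpath defined in lvm_settings:

- name: Ensure the filesystem exists on the logical volume
  community.general.filesystem:
    fstype: "{{ item.filesystem_type }}"
    dev: "{{ item.lvpath }}"
    resizefs: true
  with_items: "{{ lvm_settings }}"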

Ansible vmware_guest optional disk with Ansible Tower survey

I have a playbook for the creation of a VM from a template in VMware ESXi 6.7. My playbook is below. I want to configure the second (and possibly subsequent) disks only if the DISK1_SIZE_GB variable is > 0. This is not working. I've also tried using 'when: DISK1_SIZE_GB is defined' with no luck. I'm using a survey in Ansible Tower, with the 2nd disk configuration being an optional answer. In this case I get an error about 0 being an invalid disk size, or, when I check for variable definition, an error about DISK1_SIZE_GB being undefined. Either way, the 'when' conditional doesn't seem to be working.
If I hardcode the size, as in the first 'disk' entry, it works fine; same if I enter a valid size from Ansible Tower. I need to NOT configure additional disks unless the size is defined in the Tower survey.
Thanks!
---
- name: Create a VM from a template
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Clone a template to a VM
      vmware_guest:
        hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
        username: "{{ lookup('env', 'VMWARE_USER') }}"
        password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
        validate_certs: 'false'
        name: "{{ HOSTNAME }}"
        template: RHEL-Server-7.7
        datacenter: Production
        folder: Templates
        state: poweredon
        hardware:
          num_cpus: "{{ CPU_NUM }}"
          memory_mb: "{{ MEM_MB }}"
        disk:
          - size_gb: 20
            autoselect_datastore: true
          - size_gb: "{{ DISK1_SIZE_GB }}"
            autoselect_datastore: true
            when: DISK1_SIZE_GB > 0
        networks:
          - name: "{{ NETWORK }}"
            type: static
            ip: "{{ IP_ADDR }}"
            netmask: "{{ NETMASK }}"
            gateway: "{{ GATEWAY }}"
            dns_servers: "{{ DNS_SERVERS }}"
            start_connected: true
        wait_for_ip_address: yes
AFAIK this can't be accomplished in a single task. You were on the right track with when: DISK1_SIZE_GB is defined; that would work if disk: were a task rather than a module parameter. Below is how I would approach this.
Create two survey questions:
DISK1_SIZE_GB - integer - required answer - enforce a non-zero minimum value such as 20 (since you're deploying RHEL)
DISK2_SIZE_GB - integer - optional answer - no minimum or maximum value
Create disk 1 in your existing vmware_guest task:
disk:
  - size_gb: "{{ DISK1_SIZE_GB }}"
    autoselect_datastore: true
Create a new vmware_guest_disk task which runs immediately afterwards and conditionally adds the second disk:
- name: Add second hard disk if necessary
  vmware_guest_disk:
    hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
    username: "{{ lookup('env', 'VMWARE_USER') }}"
    password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
    validate_certs: 'false'
    name: "{{ HOSTNAME }}"
    datacenter: Production
    folder: Templates
    state: poweredon
    disk:
      - size_gb: "{{ DISK2_SIZE_GB }}"
        autoselect_datastore: true
  when: DISK2_SIZE_GB is defined
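If you would rather keep a single vmware_guest task, one hedged alternative (a sketch only; base_disk and extra_disk are made-up helper variables, not part of the original playbook) is to build the disk list before the task runs, so an empty survey answer simply yields an empty list:

vars:
  base_disk:
    - size_gb: "{{ DISK1_SIZE_GB }}"
      autoselect_datastore: true
  # evaluates to a one-element list only when DISK2_SIZE_GB is defined and greater than 0
  extra_disk: "{{ [{'size_gb': DISK2_SIZE_GB | int, 'autoselect_datastore': True}] if (DISK2_SIZE_GB | default(0) | int) > 0 else [] }}"

and then point the module at the combined list:

disk: "{{ base_disk + extra_disk }}"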

Ansible loops and Azure resource dependencies

I'm using Ansible to provision resources to Azure. I'd like to have one task for each type of resource I want to deploy to Azure which loops through a list of dictionaries, so I can just add more dicts in case I want more resources provisioned. I'd like to define each resource variable only once.
The problem that arises with this is dependencies to other resources. Resource groups need to be provisioned before virtual networks, virtual networks before subnets and so on. Yet the information of the top level resources is still needed when provisioning the bottom level ones.
Here's the first attempt, with all of the required top level resource vars defined in the bottom level resource vars as well:
- hosts: localhost
  connection: local
  vars:
    resourcegroups:
      - name: "eh_test_rg01"
        location: "westeurope"
      - name: "eh_test_rg02"
        location: "eastus"
    virtualnetworks:
      - name: "eh_test_vn01"
        cidr: 10.15.0.0/22
        resource_group: "eh_test_rg01"
      - name: "eh_test_vn02"
        cidr: 10.15.4.0/22
        resource_group: "eh_test_rg02"
    DMZ_subnets:
      - name: "eh_test_dmzsn01"
        cidr: 10.15.1.0/24
        vnet: "eh_test_vn01"
        location: "westeurope"
        resource_group: "eh_test_rg01"
      - name: "eh_test_dmzsn02"
        cidr: 10.15.5.0/24
        vnet: "eh_test_vn02"
        location: "eastus"
        resource_group: "eh_test_rg02"
    app_subnets:
      - name: "eh_test_appsn01"
        cidr: 10.15.2.0/24
        vnet: "eh_test_vn01"
        location: "westeurope"
        resource_group: "eh_test_rg01"
      - name: "eh_test_appsn02"
        cidr: 10.15.6.0/24
        vnet: "eh_test_vn02"
        location: "eastus"
        resource_group: "eh_test_rg02"
    gateway_subnets:
      - name: "GatewaySubnet"
        cidr: 10.15.0.0/24
        vnet: "eh_test_vn01"
        resource_group: "eh_test_rg01"
        location: "westeurope"
      - name: "GatewaySubnet"
        cidr: 10.15.4.0/24
        vnet: "eh_test_vn02"
        resource_group: "eh_test_rg02"
        location: "eastus"
  tasks:
    - name: Create resource Group
      azure_rm_resourcegroup:
        name: "{{ item.name }}"
        location: "{{ item.location }}"
      with_items:
        - "{{ resourcegroups }}"
      tags: resourcegroups
    - name: Create vnet
      azure_rm_virtualnetwork:
        name: "{{ item.name }}"
        resource_group: "{{ item.resource_group }}"
        address_prefixes_cidr: "{{ item.cidr }}"
      with_items:
        - "{{ virtualnetworks }}"
      tags: vnets
    - name: Create subnets
      azure_rm_subnet:
        name: "{{ item.name }}"
        resource_group: "{{ item.resource_group }}"
        address_prefix: "{{ item.cidr }}"
        virtual_network: "{{ item.vnet }}"
      with_items:
        - "{{ DMZ_subnets }}"
        - "{{ app_subnets }}"
        - "{{ gateway_subnets }}"
      tags: subnets
As can be seen from the above example, by the time we get to the subnet dicts there are already 2 vars I have defined before. The deeper we go into the hierarchy, the more excess dict entries come into play.
I tried to build the relationships into the variable structure, but ran into issues looping through the new variable structure. with_subelements worked fine for looping over two lists of dictionaries, but it can't handle 3 or more.
- hosts: localhost
  connection: local
  vars:
    resourcegroups:
      - name: "eh_test_rg01"
        location: westeurope
        virtualnetworks:
          - name: "eh_test_vn01"
            cidr: 10.15.0.0/22
            subnets:
              - name: GatewaySubnet
                cidr: 10.15.0.0/24
              - name: eh_test_dmzsn01
                cidr: 10.15.1.0/24
              - name: eh_test_appsn01
                cidr: 10.15.2.0/24
      - name: "eh_test_rg02"
        location: westeurope
        virtualnetworks:
          - name: "eh_test_vn02"
            cidr: 10.15.4.0/22
            subnets:
              - name: GatewaySubnet
                cidr: 10.15.4.0/24
              - name: eh_test_dmzsn02
                cidr: 10.15.5.0/24
              - name: eh_test_appsn02
                cidr: 10.15.6.0/24
  tasks:
    - name: Create resource Group
      azure_rm_resourcegroup:
        name: "{{ item.name }}"
        location: "{{ item.location }}"
      with_items:
        - "{{ resourcegroups }}"
      tags: resourcegroups
    - name: Create vnet
      azure_rm_virtualnetwork:
        name: "{{ item.1.name }}"
        resource_group: "{{ item.0.name }}"
        address_prefixes_cidr: "{{ item.1.cidr }}"
      with_subelements:
        - "{{ resourcegroups }}"
        - virtualnetworks
      tags: vnets
    # Blows up at this point, with_subelements does not support more lists than 2
    - name: Create subnets
      azure_rm_subnet:
        name: "{{ item.2.name }}"
        resource_group: "{{ item.0.name }}"
        address_prefix: "{{ item.2.cidr }}"
        virtual_network: "{{ item.1.vnet }}"
      with_subelements:
        - "{{ resourcegroups }}"
        - virtualnetworks
        - subnets
      tags: subnets
What would be the best way to approach this problem? Do I need to define the vars differently, make some kind of helper tasks to create variable structures before running the task itself, use different loops, or something else?
As far as I know, I can't make references to other dict values which are contained in a list of dictionaries using YAML.
I would go a totally different route and use ARM templates; that is a much better way of provisioning things on Azure, and you can use a native Ansible module for that as well:
http://docs.ansible.com/ansible/latest/azure_rm_deployment_module.html
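A minimal sketch of that approach (assuming the parameter names from the azure_rm_deployment module documentation; the resource group, template file, and parameter names here are placeholders, not values from the question) would be:

- name: Deploy networking via an ARM template
  azure_rm_deployment:
    resource_group: eh_test_rg01
    name: network-deployment
    location: westeurope
    # the ARM template itself declares the vnet/subnet resources and their dependencies
    template: "{{ lookup('file', 'network.json') | from_json }}"
    parameters:
      vnetName:
        value: eh_test_vn01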

Using ansible launch configuration module ec2_lc and securitygroup names versus id

I want to accomplish the following in AWS EC2:
Create security groups using the Ansible module ec2_group.
Create a launch configuration using the Ansible module ec2_lc and attach a security group created earlier.
Now, I want to use the security group names instead of IDs, because I want to be able to recreate the whole infrastructure with Ansible if needed.
Recreating security groups will cause the ID of the group to be different.
But the ec2_lc module only accepts security group IDs.
Is there any way I can map a security group name to its ID?
I am defining security groups like this:
- name: create ec2 group
  ec2_group:
    name: "{{ item.name }}"
    description: "{{ item.description }}"
    vpc_id: "{{ item.vpc_id }}"
    region: "{{ item.region }}"
    state: present
    rules: "{{ item.rules }}"
    rules_egress: "{{ item.rules_egress }}"
  register: sg
The launch configuration code looks like this:
- name: Create Launch Configuration
  ec2_lc:
    region: "{{ item.region }}"
    name: "{{ item.name }}"
    image_id: "{{ item.image_id }}"
    key_name: "{{ item.key_name }}"
    security_groups: "{{ item.security_groups }}" # how can I refer to a specific group_id based on a group name?
    instance_type: "{{ item.instance_type }}"
    user_data: "{{ item.ec2_user_data }}"
    instance_profile_name: "{{ item.instance_profile_name }}"
    assign_public_ip: "{{ item.assign_public_ip }}"
Use the ec2_group_facts module to query the security groups by name:
- ec2_group_facts:
    filters:
      group-name:
        - "{{ sg.name }}"
  register: ec2sgs

- debug:
    msg: "{{ ec2sgs.security_groups | map(attribute='group_id') | list }}"
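The registered result can then be fed into ec2_lc in place of a hard-coded ID list, for example (a sketch reusing the ec2sgs register from above):

security_groups: "{{ ec2sgs.security_groups | map(attribute='group_id') | list }}"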
With some credit to this question, you can try this:
- name: Create Launch Configuration
  ec2_lc:
    ...
    security_groups: "{{ sg.results | selectattr('item.name', 'equalto', item) | join('', attribute='group_id') }}"
    ...
You can write a filter that makes an AWS API call for you dynamically.
For instance, I have something like this in my vars/main.yml:
public_sg_id: "{{ 'Public' |get_sg(public_vpc_id, aws_region) }}"
Here is the code for the get_sg filter.
import boto.ec2
from ansible import errors


def get_sg(name, vpc_id, region):
    connect = boto.ec2.connect_to_region(region)
    filter_by = {
        "tag-key": "Name",
        "tag-value": name,
        "vpc-id": vpc_id
    }
    sg_groups = connect.get_all_security_groups(filters=filter_by)
    if len(sg_groups) == 1:
        return sg_groups[0].id
    elif len(sg_groups) > 1:
        raise errors.AnsibleFilterError(
            "Too many results for {0}: {1}".format(
                name, ",".join(sg_groups)
            )
        )
    else:
        raise errors.AnsibleFilterError(
            "Security Group {0} was not found".format(name)
        )
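For Ansible to load this as a filter, the file also needs a FilterModule class and should live in a filter_plugins/ directory next to the playbook or role (the file name filter_plugins/get_sg.py is just an example):

class FilterModule(object):
    def filters(self):
        # expose the function above under the name used in vars/main.yml
        return {"get_sg": get_sg}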
