Playbook failing when using Azure dynamic inventory - azure

As the title suggests, I'm using an Azure dynamic inventory file and am having an issue running a playbook against the collected inventory.
I'm using Ansible 2.9.1 and used the instructions found here to set up the inventory file.
$ ansible --version
ansible 2.9.1
  config file = None
  configured module search path = ['/home/myuser/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/myuser/.local/lib/python3.6/site-packages/ansible
  executable location = /home/myuser/.local/bin/ansible
  python version = 3.6.9 (default, Sep 11 2019, 16:40:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
My inventory file:
plugin: azure_rm
include_vm_resource_groups:
  - mytestrg
auth_source: cli
cloud_environment: AzureUSGovernment
hostvar_expressions:
  ansible_connection: "'winrm'"
  ansible_user: "'azureuser'"
  ansible_password: "'Password1'"
  ansible_winrm_server_cert_validation: "'ignore'"
keyed_groups:
  - prefix: some_tag
    key: tags.sometag | default('none')
exclude_host_filters:
  - powerstate != 'running'
Simple ad-hoc commands, like ping, succeed when using the inventory file. What I'm not able to get working, though, is running a playbook against it.
My playbook:
- hosts: all
  name: Run whoami
  tasks:
    - win_command: whoami
      register: whoami_out
    - debug:
        var: whoami_out
Command I'm using to run the playbook:
ansible-playbook -i ./inventory_azure_rm.yaml whoami.yaml
Regardless of the hosts I target the playbook against, it fails with:
[WARNING]: Could not match supplied host pattern, ignoring:
playbooks/whoami.yaml
[WARNING]: No hosts matched, nothing to do
Any advice on how I can get past this? I appreciate any assistance!


Error trying to use the Ansible dynamic inventory plugin for Azure

I'm trying to use the azure_rm plugin for Ansible to generate a dynamic inventory for VMs in Azure, but am getting a "batched request" error of 403 when I try to run the sanity-check command:
$ ansible all -m ping
[WARNING]: * Failed to parse /project/ansible/inventory.azure_rm.yml with
ansible_collections.azure.azcollection.plugins.inventory.azure_rm plugin: a batched request failed with status code 403, url
/subscriptions/<redacted>/resourceGroups/<redacted>/providers/Microsoft.Compute/virtualMachines
...
Here are the specifics of my macOS setup:
$ ansible --version
ansible 2.10.3
  config file = /project/ansible/ansible.cfg
  configured module search path = ['/Users/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible
  executable location = /usr/local/Cellar/ansible/2.10.3_1/libexec/bin/ansible
  python version = 3.9.0 (default, Dec 6 2020, 18:02:34) [Clang 12.0.0 (clang-1200.0.32.27)]
This is the inventory.azure_rm.yml file:
plugin: azure_rm
include_vm_resource_groups:
  - <redacted>
auth_source: auto
keyed_groups:
  - prefix: tag
    key: tags
And I've also added this to the local ansible.cfg file:
inventory = ./inventory.azure_rm.yml
I've also defined the particulars for authenticating to Azure as environment variables:
$ env | grep AZURE
AZURE_TENANT=<redacted>
AZURE_CLIENT_ID=<redacted>
AZURE_USE_PRIVATE_IP=yes
AZURE_SECRET=<redacted>
AZURE_SUBSCRIPTION_ID=<redacted>
These are the same "credentials" that I used with Terraform to create the VMs I'm now trying to dynamically inventory, so they should be good. So I'm at a bit of a loss as to what's behind the 403 error.
I then added a -vvvv option to the command and got some additional info:
$ ansible all -m ping -vvvv
ansible 2.10.3
  config file = /Users/me/project/ansible/ansible.cfg
  configured module search path = ['/Users/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible
  executable location = /usr/local/Cellar/ansible/2.10.3_1/libexec/bin/ansible
  python version = 3.9.0 (default, Dec 6 2020, 18:02:34) [Clang 12.0.0 (clang-1200.0.32.27)]
Using /Users/me/project/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /Users/me/project/ansible/inventory.azure_rm.yml as it did not pass its verify_file() method
script declined parsing /Users/me/project/ansible/inventory.azure_rm.yml as it did not pass its verify_file() method
redirecting (type: inventory) ansible.builtin.azure_rm to azure.azcollection.azure_rm
Loading collection azure.azcollection from /Users/me/.ansible/collections/ansible_collections/azure/azcollection
toml declined parsing /Users/me/project/ansible/inventory.azure_rm.yml as it did not pass its verify_file() method
[WARNING]: * Failed to parse /Users/me/project/ansible/inventory.azure_rm.yml with
ansible_collections.azure.azcollection.plugins.inventory.azure_rm plugin: a batched request failed with status code 403, url
/subscriptions/<redacted>/resourceGroups/<redacted>/providers/Microsoft.Compute/virtualMachines
  File "/usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible/inventory/manager.py", line 289, in parse_source
    plugin.parse(self._inventory, self._loader, source, cache=cache)
  File "/usr/local/Cellar/ansible/2.10.3_1/libexec/lib/python3.9/site-packages/ansible/plugins/inventory/auto.py", line 59, in parse
    plugin.parse(inventory, loader, path, cache=cache)
  File "/Users/me/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 206, in parse
    self._get_hosts()
  File "/Users/me/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 263, in _get_hosts
    self._process_queue_batch()
  File "/Users/me/.ansible/collections/ansible_collections/azure/azcollection/plugins/inventory/azure_rm.py", line 405, in _process_queue_batch
    raise AnsibleError("a batched request failed with status code {0}, url {1}".format(status_code, result.url))
Has anyone come across this before and figured out a fix? I'm assuming the Service Principal I'm using is missing some role or permission, but I have no idea which, given the same SP was used to provision the VMs in the first place.
Install the collection to get the latest version of the plugin, and then try this:
plugin: azure.azcollection.azure_rm
This will ensure you're using the collection's plugin and not the built-in version, which won't contain bug fixes or support newer API versions.
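For example (a sketch based on the question's inventory file; it assumes the collection has been installed, e.g. with ansible-galaxy collection install azure.azcollection), only the plugin line changes:

```yaml
# inventory.azure_rm.yml — the fully-qualified plugin name forces
# Ansible to load the collection's plugin rather than redirecting
# through the legacy built-in one
plugin: azure.azcollection.azure_rm
include_vm_resource_groups:
  - <redacted>
auth_source: auto
keyed_groups:
  - prefix: tag
    key: tags
```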

Handle YUM package installation deployment for target environments (dev/prod/systest) using an Ansible playbook

Need to handle the YUM package installation deployment process, with different versions/packages, for target environments (dev/prod/systest) using an Ansible playbook.
NOTE: I have gone through the group_vars and host_vars concepts but did not understand whether multiple packages with different versions can be handled for deployment to multiple environments based on input.
As you found out, this separation can be achieved by using group_vars and host_vars. These are loaded relative to the path of the inventory file.
Simple example tasks like the ones below will install different versions in the dev and prod environments.
Example playbook1.yml:
- hosts: appservers
  tasks:
    - name: install app-a
      yum:
        name: 'app-a-{{ app_a_version }}'
    - name: install app-b
      yum:
        name: 'app-b-{{ app_b_version }}'
Consider the example directory structure separating each environment's inventory:
dev/hosts
prod/hosts
systest/hosts
Each inventory file will contain hosts/groups for that environment.
Dev environment:
Example dev/hosts:
[appservers]
appserver1.dev
appserver2.dev
Then we can have variables specific to this environment in dev/group_vars/appservers.yml:
---
app_a_version: 1.1
app_b_version: 5.5
This will install app-a-1.1 and app-b-5.5 when run as:
ansible-playbook playbook1.yml -i dev/hosts
Prod environment:
Example prod/hosts:
[appservers]
appserver1.prod
appserver2.prod
And variables defined in prod/group_vars/appservers.yml:
app_a_version: 1.0
app_b_version: 5.0
In prod, however, it will install app-a-1.0 and app-b-5.0 when run as:
ansible-playbook playbook1.yml -i prod/hosts
host_vars work in a similar way, and can be used to provide variables specific to each host of the inventory rather than to groups in the inventory.
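For example, a per-host override might look like this (a sketch; the filename matches the inventory hostname, and host_vars take precedence over group_vars for that host):

```yaml
# dev/host_vars/appserver1.dev.yml — applies only to appserver1.dev,
# overriding the value from dev/group_vars/appservers.yml
app_a_version: 1.2
```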

How to execute ansible-playbook with Azure Dynamic Inventories along with keyed groups conditional

I am trying to use the azure_rm plugin in Ansible to generate dynamic hosts on the Azure platform. With a keyed-group conditional, I am able to make it work with an Ansible ad-hoc command. However, it does not work when I try to pass the same to ansible-playbook. Can anyone please help with how I could run an ansible-playbook the same way?
Below is my dynamic inventory generation file:
---
plugin: azure_rm
auth_source: msi
keyed_groups:
  - prefix: tag
    key: tags
When I use the file to ping the target VM, below is a success response.
Command used:
ansible -m ping tag_my_devops_ansible_slave -i dynamic_inventory_azure_rm.yml
Response:
devops-eastus2-dev-ansibleslave-vm_2f44 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
However, when I use the same with ansible-playbook, I get the below error.
Command used:
ansible-playbook tag_cdo_devops_ansible_slave -i dynamic_inventory_azure_rm.yml test-playbook.yml
Error:
ansible-playbook: error: unrecognized arguments: test-playbook.yml
Can anyone please help with how to execute an ansible-playbook for the above use case?
The ansible-playbook command does not accept a list of targets on the command line; rather, the playbook file has hosts: as a top-level key indicating the hosts to which the playbook will apply.
So, if the playbook is always going to be used with that tag, you can just indicate that in the playbook:
- hosts: tag_cdo_devops_ansible_slave
  tasks:
    - debug: var=ansible_host
It also appears that hosts: honors Jinja2 templating, so you can achieve what you're trying to do via:
- hosts: '{{ azure_playbook_hosts }}'
  tasks:
    - debug: var=ansible_host
and then:
ansible-playbook -e azure_playbook_hosts=tag_cdo_devops_ansible_slave -i dynamic_inventory_azure_rm.yml test-playbook.yml
Or you can create a dedicated inventory file that only returns hosts matching your desired tag, and then use -i for that inventory along with hosts: all in the playbook file.
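A sketch of such a dedicated inventory, assuming the plugin's exclude_host_filters option and its tags hostvar (adjust the tag key/value to whatever produced the tag_cdo_devops_ansible_slave group in your setup):

```yaml
# dedicated_inventory_azure_rm.yml — only hosts carrying the desired
# tag survive the filter, so the playbook can use hosts: all
plugin: azure_rm
auth_source: msi
exclude_host_filters:
  # drop any VM that lacks the tag (key/value here are illustrative)
  - "tags['cdo_devops'] is not defined or tags['cdo_devops'] != 'ansible_slave'"
```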

How to set ansible mitogen strategy on playbook?

I'm trying to get the mitogen strategy working in my Ansible playbook, following the Mitogen tutorial. My Python version is 3.6 and my Ansible version is 2.7.10.
Mitogen is installed in /usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy
When I try to add the keys to my playbook as:
- hosts: "{{ host_group | default('host-list') }}"
  ...
  strategy: mitogen_linear
  strategy_plugins: /usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy
I get the following error:
ERROR! 'strategy_plugins' is not a valid attribute for a Play
Also, I've tried configuring it via environment variables in the playbook execution:
command = ['ANSIBLE_STRATEGY_PLUGINS=/usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy', 'ANSIBLE_STRATEGY=mitogen_linear', 'ansible-playbook', '-ihosts', 'ansible_scripts/inventory.yml']
process = subprocess.Popen(command, stdout=subprocess.PIPE)
But here it fails with a file-not-found error:
FileNotFoundError: [Errno 2] No such file or directory: 'ANSIBLE_STRATEGY_PLUGINS=/usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy': 'ANSIBLE_STRATEGY_PLUGINS=/usr/local/lib/python3.6/site-packages/ansible_mitogen/plugins/strategy'
How can I correctly configure the mitogen strategy for my playbook? How can I make it work?
The "No such file or directory" error points to a misconfiguration. Try the configuration file instead; for example, put this into the [defaults] section:
$ grep strategy /etc/ansible/ansible.cfg
strategy_plugins = /usr/local/ansible/plugins/ansible_mitogen/plugins/strategy
strategy = mitogen_linear
Adjust the configuration file and path to your needs. (This works for me with Ansible 2.7.9 and mitogen-0.2.6.)
FWIW, if you want to automate the installation and configuration see plugins.yml and example of vars.
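Separately, the FileNotFoundError from the Python snippet happens because Popen (without a shell) treats 'ANSIBLE_STRATEGY_PLUGINS=...' as the program to execute. If you do want to launch from Python, a minimal sketch that passes the variables through the env= mapping instead (paths taken from the question):

```python
import os
import subprocess

def build_env():
    """Return the current environment plus the Mitogen strategy settings.

    VAR=value strings placed in the Popen argument list are not parsed
    (there is no shell involved), which is what caused the
    FileNotFoundError; they belong in the env= mapping instead.
    """
    env = os.environ.copy()
    env["ANSIBLE_STRATEGY_PLUGINS"] = (
        "/usr/local/lib/python3.6/site-packages/"
        "ansible_mitogen/plugins/strategy"
    )
    env["ANSIBLE_STRATEGY"] = "mitogen_linear"
    return env

def run_playbook(args):
    # Only the program name and its arguments go in the list.
    return subprocess.Popen(
        ["ansible-playbook"] + args,
        stdout=subprocess.PIPE,
        env=build_env(),
    )
```

Calling run_playbook(["-i", "hosts", "ansible_scripts/inventory.yml"]) then behaves like the shell invocation with both variables exported.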

gcloud clone does not use the correct account

I'm trying to clone a repository via gcloud.
Here is my configuration:
$ gcloud info
Google Cloud SDK [183.0.0]
Platform: [Linux, x86_64] ('Linux', 'debian', '4.9.0-4-amd64', '#1 SMP Debian 4.9.65-3 (2017-12-03)', 'x86_64', '')
Python Version: [2.7.13 (default, Nov 24 2017, 17:33:09) [GCC 6.3.0 20170516]]
Python Location: [/usr/bin/python2]
Site Packages: [Disabled]
Installation Root: [/usr/lib/google-cloud-sdk]
Installed Components:
  core: [2017.12.08]
  app-engine-python: [1.9.64]
  beta: [2017.12.08]
  gsutil: [4.28]
  bq: [2.0.27]
  alpha: [2017.12.08]
System PATH: [/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games]
Python PATH: [/usr/bin/../lib/google-cloud-sdk/lib/third_party:/usr/lib/google-cloud-sdk/lib:/usr/lib/python2.7:/usr/lib/python2.7/plat-x86_64-linux-gnu:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/bin/kubectl]
Installation Properties: [/usr/lib/google-cloud-sdk/properties]
User Config Directory: [/home/me/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/home/me/.config/gcloud/configurations/config_default]
Account: [account#client.com]
Project: [ipcloud-viewer]
Current Properties:
  [core]
    project: [ipcloud-viewer]
    account: [account#client.com]
    disable_usage_reporting: [True]
  [compute]
    region: [europe-west1]
    zone: [europe-west1-d]
Logs Directory: [/home/me/.config/gcloud/logs]
Last Log File: [/home/me/.config/gcloud/logs/2017.12.21/11.39.49.435511.log]
git: [git version 2.11.0]
ssh: [OpenSSH_7.4p1 Debian-10+deb9u2, OpenSSL 1.0.2l 25 May 2017]
But when I want to clone, I get this:
$ gcloud source repos clone repo --project=client_project --account=account#client.com
Cloning into '/home/me/project/temp/repo'...
fatal: remote error: Access denied to me#other.fr
ERROR: (gcloud.source.repos.clone) Command '['git', 'clone', u'https://source.developers.google.com/p/client_project/r/repo', '/home/me/project/temp/repo', '--config', 'credential.helper=!gcloud auth git-helper --account=account#client.com --ignore-unknown $#']' returned non-zero exit status 128
As you can see, I'm logged in with account#client.com, yet during the process the account me#other.fr is used... and I do not know why!
Any idea what the problem is?
BTW, I deleted my ~/.config/gcloud and redid gcloud init before doing all this...
EDIT: solution found.
I had a ~/.netrc file with information about my me#other.fr account... I removed it and it worked!
The error suggests that the original repository (the one you're trying to clone) has me#other.fr as owner, but you're trying to access it (via git, under the hood) using the account#client.com account - the gcloud command executes under a single account.
You could try adding the account#client.com account as a project member of the original project, allowing it to access that private repository for cloning; see Adding project members and setting permissions.
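Given the asker's edit, a stale ~/.netrc entry can shadow the credentials gcloud's git-helper supplies. A quick, hedged check (the backup filename is illustrative):

```shell
# If ~/.netrc contains an entry for Google's git endpoint, git may use
# those credentials instead of the account gcloud passes via its
# credential helper. Back the file up before touching it.
NETRC="${HOME}/.netrc"
if grep -q 'source\.developers\.google\.com' "$NETRC" 2>/dev/null; then
    cp "$NETRC" "${NETRC}.bak"   # keep a backup before editing
    echo "Found a source.developers.google.com entry in $NETRC;"
    echo "review ${NETRC}.bak and remove the stale credentials."
fi
```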
