Ansible - How to run YAML code in Azure Cloud Shell

Question: How do I run the following YAML in Azure Cloud Shell?
In step 1 of this Ansible tutorial, the author asks you to run the following YAML to create a resource group. I'm using PowerShell in Azure Cloud Shell (where Ansible is pre-installed).
- name: Create resource group
  azure_rm_resourcegroup:
    name: rg-cs-ansible
    location: eastus

Save it to a playbook.yaml text file and run it with ansible-playbook playbook.yaml. You also need a proper structure for the playbook file, something like this:
---
- hosts: localhost
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: rg-cs-ansible
        location: eastus
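If you also want the playbook to confirm the result, a minimal sketch could look like the one below. It assumes the azure_rm_resourcegroup_info module is available alongside azure_rm_resourcegroup (it ships in the same Azure collection); the extra tasks simply read the resource group back and print what Azure returned.

# Sketch only: the verification tasks are optional
---
- hosts: localhost
  connection: local
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: rg-cs-ansible
        location: eastus

    - name: Read the resource group back
      azure_rm_resourcegroup_info:
        name: rg-cs-ansible
      register: rg_info

    - name: Show what Azure returned
      debug:
        var: rg_info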

Related

How to refer to AWX credentials in my ansible playbook

I'm new to Ansible. I'm trying to use an existing playbook but deploy it to a different Azure account with separate credentials, and I'm running into some issues. I created a new credential via the AWX portal with my client_id, tenant_id, subscription_id and secret, but I can't figure out how to get my playbook to pull this credential instead of the one it's currently using.
My playbook's authentication role authenticates like so:
- name: 'Authenticating against Azure'
  command: >
    az login --service-principal
    -u '{{ vault_azure_client_id }}'
    -p '{{ vault_azure_client_secret }}'
    -t '{{ vault_azure_tenant_id }}'
There is then a secrets folder with a vault file containing what looks like an encrypted string, starting with the below:
$ANSIBLE_VAULT;1.1
My main file declares the variables as below:
# Environment Variables
environment:
  AZURE_CLIENT_ID: '{{ vault_azure_client_id }}'
  AZURE_SECRET: '{{ vault_azure_client_secret }}'
  AZURE_TENANT: '{{ vault_azure_tenant_id }}'
How do I edit the main file and role to point at the credentials created through the console instead of the ones stored in Ansible Vault?
This happens because, by default, your playbook takes the credentials from the vault file. You need to point your main file at the new credentials rather than at the default (vault) file.
Variables can come from different sources, such as the playbook file itself or external variable files that are imported in the playbook. Special precedence rules will apply when working with multiple variable sources that define a variable with the same name.
Suggestion 1: If you define the variables in the playbook file itself, you can use them like this:
vars:
  - AZURE_CLIENT_ID: Client ID
  - AZURE_SECRET: Client Secret Value
  - AZURE_TENANT: Tenant ID
tasks:
  - name: 'Authenticating against Azure'
    command: >
      az login --service-principal
      -u '{{ AZURE_CLIENT_ID }}'
      -p '{{ AZURE_SECRET }}'
      -t '{{ AZURE_TENANT }}'
Reference: https://www.digitalocean.com/community/tutorials/how-to-use-variables-in-ansible-playbooks
Suggestion 2: You can also pass extra variables to an Ansible playbook with the --extra-vars or -e option when running it, as seen below.
ansible-playbook myplaybook.yaml --extra-vars "nodes=webgroup"
You can refer to this document on how to pass variables from outside.
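Applied to this question, a sketch of Suggestion 2 could look like the task below, with the values supplied at run time, e.g. ansible-playbook myplaybook.yaml -e "azure_client_id=... azure_secret=... azure_tenant=...". The names azure_client_id, azure_secret and azure_tenant are only illustrative.

# Sketch: az login using variables passed in with -e / --extra-vars
# (variable names are illustrative, not taken from the original playbook)
- name: 'Authenticating against Azure'
  command: >
    az login --service-principal
    -u '{{ azure_client_id }}'
    -p '{{ azure_secret }}'
    -t '{{ azure_tenant }}'
  no_log: true   # keep the secret out of the task output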
Assuming your unencrypted "vault file" in your "secrets folder" looks like this:
vault_azure_client_id: foo
vault_azure_client_secret: bar
vault_azure_tenant_id: baz
You have two options:
Stop using this file and configure these variables in AWX. You don't define these variables as credentials in AWX; you define them in the job template that calls the playbook.
Rewrite your "vault file", putting your secret variables inline, e.g.:
vault_azure_client_id: !vault |
          $ANSIBLE_VAULT;1.2;AES256;dev
          30613...
vault_azure_client_secret: !vault |
          $ANSIBLE_VAULT;1.2;AES256;dev
          30613...
vault_azure_tenant_id: !vault |
          $ANSIBLE_VAULT;1.2;AES256;dev
          30613...
AWX has the limitation that it cannot decrypt variables stored in a fully encrypted file, but it can decrypt variables encrypted inline.
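If you go with the second option, each inline block can be generated with ansible-vault encrypt_string and pasted into the vars file. A sketch follows; the vault id "dev" and the variable name are just examples.

# Generated with something like:
#   ansible-vault encrypt_string --vault-id dev@prompt 'your-client-id' --name 'vault_azure_client_id'
vault_azure_client_id: !vault |
          $ANSIBLE_VAULT;1.2;AES256;dev
          30613...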

How to set KUBECONFIG from terraform cloud generate file in github actions

I am trying to set up GitHub Actions to run CI with Terraform and Kubernetes. I am connecting to Terraform Cloud to run the terraform commands, and it appears to generate the kubeconfig during the apply process. I get this in the output:
local_file.kubeconfig: Creation complete after 0s
In the next step, I try to run kubectl to see the resources that were built, but the command fails because it can't find the configuration file. Specifically:
error: Missing or incomplete configuration info.
So my question is, how do I use the newly generated local_file.kubeconfig in my kubectl commands?
My first attempt was to expose KUBECONFIG as an environment variable in the GitHub Actions step, but I didn't know how to get the value from Terraform Cloud into GitHub Actions. So instead I tried to set the variable in my Terraform file with a provisioner definition, but that doesn't seem to work.
Is there an easier way to load that value?
GitHub Actions steps:
steps:
  - name: Checkout code
    uses: actions/checkout@v2
    with:
      ref: 'privatebeta-kubes'
  - name: Setup Terraform
    uses: hashicorp/setup-terraform@v1
    with:
      cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
  - name: Terraform Init
    run: terraform init
  - name: Terraform Format Check
    run: terraform fmt -check -v
  - name: Terraform Plan
    run: terraform plan
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  - name: Terraform Apply
    run: terraform apply -auto-approve
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  # this step fails because kubectl can't find the token
  - name: List kube nodes
    run: kubectl get nodes
and my main.tf file has this definition:
provider "kubernetes" {
  kubeconfig = "${local_file.kubeconfig.content}"
}
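One possible approach, sketched below under a few assumptions: with Terraform Cloud remote execution, the local_file is written on Terraform Cloud's workers rather than on the GitHub runner, so you can instead expose the kubeconfig as a sensitive Terraform output (the output name kubeconfig below is an assumption) and write it to a file in a later step. Note that with hashicorp/setup-terraform you may need terraform_wrapper: false for terraform output -raw to produce clean output.

# Sketch, assuming main.tf also defines:
#   output "kubeconfig" {
#     value     = local_file.kubeconfig.content
#     sensitive = true
#   }
- name: Write kubeconfig for kubectl
  run: |
    terraform output -raw kubeconfig > "$RUNNER_TEMP/kubeconfig"
    echo "KUBECONFIG=$RUNNER_TEMP/kubeconfig" >> "$GITHUB_ENV"
- name: List kube nodes
  run: kubectl get nodes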

How to pass Azure service principal details to Ansible via command-line?

I am able to connect to Azure using Ansible by putting my service principal details into the credentials file stored in ~/.azure/credentials.
That was OK for development; now (in production) I want to move away from the plain-text credentials file and pass the credentials to Ansible via command-line parameters.
How should this be done?
Any help is appreciated - thanks
I have tried:
ansible-playbook -i ./dev-env/epazure_rm.yml ./dev-env/site.yml -vvvv -u adminuser --extra-vars "AZURE_SUBSCRIPTION_ID=XXX AZURE_CLIENT_ID=XXX AZURE_SECRET=XXX AZURE_TENANT=XXX"
My Azure Dynamic Inventory plugin file looks like this:
---
plugin: azure_rm
include_vm_resource_groups:
  - rg-devdonal-eastus01
auth_source: auto
subscription_id: "{{ AZURE_SUBSCRIPTION_ID }}"
client_id: "{{ AZURE_CLIENT_ID }}"
secret: "{{ AZURE_SECRET }}"
tenant: "{{ AZURE_TENANT }}"
keyed_groups:
  - prefix: tag
    key: tags
You can use environment variables for the credentials and then read them from the environment. Here is an example:
- debug: msg="{{ lookup('env','HOME') }} is an environment variable"
There is also another issue that shows an example.
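Applied to the inventory file from the question, a sketch could look like this; it assumes the four AZURE_* variables have been exported in the shell before ansible-playbook is run.

# epazure_rm.yml - sketch: read the service principal from environment variables,
# e.g. after: export AZURE_SUBSCRIPTION_ID=... AZURE_CLIENT_ID=... AZURE_SECRET=... AZURE_TENANT=...
plugin: azure_rm
include_vm_resource_groups:
  - rg-devdonal-eastus01
auth_source: auto
subscription_id: "{{ lookup('env', 'AZURE_SUBSCRIPTION_ID') }}"
client_id: "{{ lookup('env', 'AZURE_CLIENT_ID') }}"
secret: "{{ lookup('env', 'AZURE_SECRET') }}"
tenant: "{{ lookup('env', 'AZURE_TENANT') }}"
keyed_groups:
  - prefix: tag
    key: tags

With auth_source: auto and those variables exported, the explicit lookups are arguably redundant, since the Azure modules and the azure_rm plugin can read the AZURE_* environment variables themselves.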

Calling Terraform from Ansible

When I call Terraform modules directly from shell scripts, it works fine.
But when I wrap the same shell script in an Ansible task, it fails. I validated all the environment variables for the ARM credentials that are being passed; they are all fine, but I have had no success running Terraform as an Ansible task.
Below is the error I get
Error refreshing state: 1 error(s) occurred:\n\n* module.oracle_server.provider.azurerm: Unable to list provider registration status, it is possible that this is due to invalid credentials or the service principal does not have permission to use the Resource Manager API, Azure error: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/****/providers?api-version=2016-02-01: StatusCode=0 -- Original Error: adal:
UPDATE (added by the editor)
Please post your Ansible code here rather than in a comment, where all the formatting is lost. For reference:
- name: Terraform Module
  terraform:
    project_path: "{{ terraform_module_path }}"
    state: "{{ item.infra_state }}"
    variables:
      platform: "{{ platform }}"
      application_name: "{{ application_name }}"
      environment: "{{ env }}"
From the error message, Terraform cannot set the Azure credentials properly, so please check whether you have included the provider block:
# Configure the Azure Provider
provider "azurerm" {
  # whilst the `version` attribute is optional, we recommend pinning to a given version of the Provider
  version = "=1.21.0"
}
Reference: https://www.terraform.io/docs/providers/azurerm/
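If the provider block is already in place, it is also worth checking that the ARM_* environment variables actually reach the Terraform process started by Ansible. A sketch of passing them explicitly on the task follows; the arm_* variables on the right-hand side are illustrative placeholders.

- name: Terraform Module
  terraform:
    project_path: "{{ terraform_module_path }}"
    state: "{{ item.infra_state }}"
  # Sketch: hand the service principal to the azurerm provider explicitly
  environment:
    ARM_CLIENT_ID: "{{ arm_client_id }}"
    ARM_CLIENT_SECRET: "{{ arm_client_secret }}"
    ARM_SUBSCRIPTION_ID: "{{ arm_subscription_id }}"
    ARM_TENANT_ID: "{{ arm_tenant_id }}"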

Using cloud-init as custom_data with ansible

I deploy a VM using an ansible playbook, similar to this demo.
- name: Create VM
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: myVM
    ...
    custom_data: cloud-init.yml
Now I also want to install some packages and do some minor preparations, so I made a cloud-config.yml:
#cloud-config
package_upgrade: true
packages:
  - npm
  - nodejs-legacy
runcmd:
  - sudo mkdir -p /data/projects/
It seems that cloud-init.yml is not executed, so I guess this is not the correct syntax. How should you pass cloud-init files in an ansible playbook? Or is there another method to reach this goal?
The custom_data parameter in azure_rm_virtualmachine requires the actual data, not a filename.
You can use the file lookup plugin to fetch the data from a file on the Ansible controller:
- name: Create VM
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: myVM
    ...
    custom_data: "{{ lookup('file', 'cloud-init.yml') }}"
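Note that the file lookup runs on the Ansible controller, so cloud-init.yml has to be somewhere the lookup can find it, typically next to the playbook (or in the role's files/ directory if the task lives inside a role).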
