I'm new to Ansible. I'm trying to use an existing playbook but deploy it to a different Azure account with separate credentials, and I'm running into some issues. I created a new credential via the AWX portal with my client_id, tenant_id, subscription_id and secret, but I can't figure out how to get my playbook to pull this credential instead of the one it's currently using.
My playbook's authentication role authenticates like so:
- name: 'Authenticating against Azure'
  command: >
    az login --service-principal
    -u '{{ vault_azure_client_id }}'
    -p '{{ vault_azure_client_secret }}'
    -t '{{ vault_azure_tenant_id }}'
There is then a secrets folder with a vault file containing what looks like an encrypted string, starting with the line below:
$ANSIBLE_VAULT;1.1
My main file declares the variables as below:
# Environment Variables
environment:
  AZURE_CLIENT_ID: '{{ vault_azure_client_id }}'
  AZURE_SECRET: '{{ vault_azure_client_secret }}'
  AZURE_TENANT: '{{ vault_azure_tenant_id }}'
How do I edit the main file and role to point at my credentials created through the console instead of the ones stored in Ansible Vault?
This is because, by default, your playbook takes credentials from the vault file. Point your main file at another variable source instead of the default (vault) file.
Variables can come from different sources, such as the playbook file itself or external variable files that are imported in the playbook. Special precedence rules apply when multiple variable sources define a variable with the same name.
Suggestion 1: If you define the variables in the playbook file itself, you can use them like this:
vars:
  - AZURE_CLIENT_ID: Client ID
  - AZURE_SECRET: Client Secret Value
  - AZURE_TENANT: Tenant ID

tasks:
  - name: 'Authenticating against Azure'
    command: >
      az login --service-principal
      -u '{{ AZURE_CLIENT_ID }}'
      -p '{{ AZURE_SECRET }}'
      -t '{{ AZURE_TENANT }}'
Reference: https://www.digitalocean.com/community/tutorials/how-to-use-variables-in-ansible-playbooks
Suggestion 2: You can also pass extra variables to an Ansible playbook using the --extra-vars or -e option when running the playbook, as seen below.
ansible-playbook myplaybook.yaml --extra-vars "nodes=webgroup"
You can refer to this document for passing variables from outside.
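Applied to this question, the same approach would look something like the sketch below (the values are placeholders, and it assumes the playbook references these variable names directly; extra vars have the highest precedence, so they override the vault file):
ansible-playbook myplaybook.yaml --extra-vars "vault_azure_client_id=<client-id> vault_azure_client_secret=<client-secret> vault_azure_tenant_id=<tenant-id>"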
Assuming your unencrypted "vault file" in your "secrets folder" looks like this:
vault_azure_client_id: foo
vault_azure_client_secret: bar
vault_azure_tenant_id: baz
You have two options:
Stop using this file and configure these variables in AWX. You don't define these variables as credentials in AWX; you need to define them in the job template that calls the playbook.
Rewrite your "vault file" putting your secret variables inline. E.g:
vault_azure_client_id: !vault |
  $ANSIBLE_VAULT;1.2;AES256;dev
  30613...
vault_azure_client_secret: !vault |
  $ANSIBLE_VAULT;1.2;AES256;dev
  30613...
vault_azure_tenant_id: !vault |
  $ANSIBLE_VAULT;1.2;AES256;dev
  30613...
AWX has the limitation of not being able to decrypt variables in an encrypted file, but it can decrypt variables that are encrypted inline.
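To generate those inline-encrypted values you can use ansible-vault encrypt_string (a sketch; the secret value is a placeholder and the dev vault id matches the label in the example above):
ansible-vault encrypt_string --vault-id dev@prompt '<secret value>' --name 'vault_azure_client_id'
The command prints a ready-made vault_azure_client_id: !vault | block that you can paste into the variables file.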
What do I want to achieve?
Store a Terraform output value (an IP address) as an env variable in GitHub Actions and use it when updating a network security group.
What have I done?
Based on Github Actions, how to share a calculated value between job steps?:
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v1
- name: Extract gateway ip
  run: |
    terraform init
    echo "IP_GAT=$(terraform output -json gatewayStaticIp | jq -r .)" >> $GITHUB_ENV
  working-directory: ${{ env.my_dir }}
- name: Update security group
  run: |
    ip=${{ env.IP_GAT }}
    az network nsg rule update -g myGroup --nsg-name myName -n myRuleName --source-address-prefix $ip
Apparently there is some problem with jq, even though it seems to be exactly like the example (https://www.terraform.io/docs/commands/output.html):
Error: write EPIPE
Any ideas?
Thanks in advance
The hashicorp/setup-terraform@v1 action uses a wrapper to execute terraform, and that wrapper messes up the output when using redirection (like you do with the shell pipe). There's an issue describing the problem in their repo.
Disabling the wrapper will make it work, but you'll lose some functionality for reusing stdout, stderr and the exit code from the GitHub integration.
- name: Terraform setup
  uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 0.13.5
    terraform_wrapper: false
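With the wrapper disabled, the extract step from the question should then work unchanged, because terraform's stdout goes straight into the shell pipe:
- name: Extract gateway ip
  run: |
    terraform init
    echo "IP_GAT=$(terraform output -json gatewayStaticIp | jq -r .)" >> $GITHUB_ENV
  working-directory: ${{ env.my_dir }}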
Question: How do I run the following YAML in Azure Cloud Shell?
In step 1 of this Ansible tutorial, the author asks you to run the following YAML to create a resource group. I'm using PowerShell in Azure Cloud Shell (where Ansible is pre-installed).
- name: Create resource group
  azure_rm_resourcegroup:
    name: rg-cs-ansible
    location: eastus
Save it to a playbook.yaml text file and run it with ansible-playbook playbook.yaml, but you also need to give the playbook file a proper structure. Something like this:
---
- hosts: localhost
  tasks:
    - name: Create resource group
      azure_rm_resourcegroup:
        name: rg-cs-ansible
        location: eastus
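In Cloud Shell the run and a quick verification could then look like this (a sketch; Cloud Shell is already authenticated, so no az login is needed first):
ansible-playbook playbook.yaml
az group show --name rg-cs-ansible --output table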
I am setting up an Azure pipeline for a Node app, with Jest being used to test APIs and integration. The source code lives on Azure DevOps and the code is deployed in the Azure Portal.
When I run the tests, they fail in the pipeline because the .env file is never checked into the remote repository. The environment variables live in the Azure Portal runtime configuration, so the pipeline cannot really access them.
What are some ways to access or recreate the environment variables so that my tests can run in the virtual machine?
My current solution (which I don't know if its right) is to create a variable group and redefine all my environment variables so the pipeline can read the variables also described here: https://damienaicheh.github.io/azure/devops/2019/09/04/how-to-use-variables-inside-your-azure-devops-builds-en.html
My questions are:
Is this correct? The stored variables here have nothing to do with the build, nor are they inputs to run commands; rather, all my environment variables are required inside the source code so I can test in a virtual machine (e.g. base_url, apiKeys, etc.).
If this is right, how can I possibly avoid re-writing and re-assigning all the values in the pipeline? Can I source the entire variable group so the source code can interpret it? I want to avoid something like this:
env:
  API_KEY: $(apiKey)
  MAPS_KEY: $(mapsKey)
  CLIENT_KEY: $(clientKey)
  CLIENT_SECRET: $(clientSecret)
  # ... and so on

# looking for something like this
env: myVariableGroup
Any leads to a post or article with a better solution? I was thinking of using Key Vault, but I think it will be essentially the same, in that I would have to import each value one-by-one.
Pipeline variables are mapped to env variables automatically, so no extra work is needed. There is only one exception: secrets. You must map them explicitly:
steps:
- script: echo $MYSECRET
  env:
    MYSECRET: $(Foo)
So all values from a declaration, group or template are mapped to env vars:
vars.yaml:

variables:
  variableFromTemplate: 'valueFromTemplate'

build.yaml:

variables:
- group: PROD
- name: variableFromDeclaration
  value: 'valueFromDeclaration'
- template: vars.yaml

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: env | sort
- script: |
    echo $VARIABLEFROMDECLARATION
    echo $VARIABLEFROMGROUP
    echo $VARIABLEFROMTEMPLATE
- pwsh: |
    $url = "https://dev.azure.com/thecodemanual/$(System.TeamProject)/_apis/build/builds/$(Build.BuildId)?api-version=5.1"
    $build = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Bearer $env:MY_SECRET"}
    Write-Host "Pipeline = $($build | ConvertTo-Json -Depth 100)"
    $status = $build.status
    Write-Host $status
  name: initial
  env:
    MY_SECRET: $(System.AccessToken)
So for each step you need to define the secrets in the env section. As a workaround, you may try container jobs and define the env mapping at the container level (see the sketch after the schema below):
resources:
  containers:
  - container: string  # identifier (A-Z, a-z, 0-9, and underscore)
    image: string  # container image name
    options: string  # arguments to pass to container at startup
    endpoint: string  # reference to a service connection for the private registry
    env: { string: string }  # list of environment variables to add
    ports: [ string ]  # ports to expose on the container
    volumes: [ string ]  # volumes to mount on the container
    mapDockerSocket: bool  # whether to map in the Docker daemon socket; defaults to true
    mountReadOnly:  # volumes to mount read-only - all default to false
      externals: boolean  # components required to talk to the agent
      tasks: boolean  # tasks required by the job
      tools: boolean  # installable tools like Python and Ruby
      work: boolean  # the work directory
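A minimal sketch of that workaround (the container name, image and variable names are hypothetical; the secrets still have to be mapped once, but at the container level rather than in every step):
resources:
  containers:
  - container: test_runner
    image: node:16
    env:
      API_KEY: $(apiKey)
      CLIENT_SECRET: $(clientSecret)

jobs:
- job: run_tests
  container: test_runner
  steps:
  - script: npm test  # the env values above are visible to every step in this job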
I am able to connect to Azure using Ansible by putting my service principal details into the credentials file stored in ~/.azure/credentials.
That was OK for development; now (in production) I want to move away from the plain-text credentials file and pass the credentials to Ansible via command-line parameters.
How should this be done?
Any help is appreciated - thanks
I have tried:
ansible-playbook -i ./dev-env/epazure_rm.yml ./dev-env/site.yml -vvvv -u adminuser --extra-vars "AZURE_SUBSCRIPTION_ID=XXX AZURE_CLIENT_ID=XXX AZURE_SECRET=XXX AZURE_TENANT=XXX"
My Azure Dynamic Inventory plugin file looks like this
---
plugin: azure_rm
include_vm_resource_groups:
  - rg-devdonal-eastus01
auth_source: auto
subscription_id: "{{ AZURE_SUBSCRIPTION_ID }}"
client_id: "{{ AZURE_CLIENT_ID }}"
secret: "{{ AZURE_SECRET }}"
tenant: "{{ AZURE_TENANT }}"
keyed_groups:
  - prefix: tag
    key: tags
You can use environment variables for the credentials and then read the variables from the environment; here is an example:
- debug: msg="{{ lookup('env','HOME') }} is an environment variable"
And there is also another issue that shows an example.
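A sketch of that approach for this inventory: with auth_source: auto, the azure_rm plugin picks up the standard AZURE_* environment variables by itself, so you can drop the Jinja2 references from the inventory file and just export the credentials before the run (the values are placeholders):
export AZURE_SUBSCRIPTION_ID=<subscription-id>
export AZURE_CLIENT_ID=<client-id>
export AZURE_SECRET=<client-secret>
export AZURE_TENANT=<tenant-id>
ansible-playbook -i ./dev-env/epazure_rm.yml ./dev-env/site.yml -u adminuser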
When I call Terraform modules directly from shell scripts, it works fine.
But when I wrap the same shell script in an Ansible task, it fails. I validated all the environment variables for the ARM credentials that are being passed; all are fine, but somehow I am not getting any success running Terraform as an Ansible task.
Below is the error I get:
Error refreshing state: 1 error(s) occurred:\n\n* module.oracle_server.provider.azurerm: Unable to list provider registration status, it is possible that this is due to invalid credentials or the service principal does not have permission to use the Resource Manager API, Azure error: azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/****/providers?api-version=2016-02-01: StatusCode=0 -- Original Error: adal:
UPDATE (by the editor):
Please put your Ansible code here in the question rather than in a comment, which loses all the formatting.
- name: Terraform Module
  terraform:
    project_path: "{{ terraform_module_path }}"
    state: "{{ item.infra_state }}"
    variables:
      platform: "{{ platform }}"
      application_name: "{{ application_name }}"
      environment: "{{ env }}"
From the error message, Terraform can't properly get the Azure credentials, so please check whether you have included the provider block:
# Configure the Azure Provider
provider "azurerm" {
  # whilst the `version` attribute is optional, we recommend pinning to a given version of the Provider
  version = "=1.21.0"
}
Reference: https://www.terraform.io/docs/providers/azurerm/
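If the credentials are supposed to come from environment variables, it is also worth making sure they are actually visible to the Ansible task itself, since a task does not automatically inherit variables exported in your interactive shell. A sketch of passing the ARM_* variables that the azurerm provider reads, via the task-level environment keyword (the vault_arm_* variable names are hypothetical):
- name: Terraform Module
  terraform:
    project_path: "{{ terraform_module_path }}"
    state: present
  environment:
    ARM_SUBSCRIPTION_ID: "{{ vault_arm_subscription_id }}"
    ARM_CLIENT_ID: "{{ vault_arm_client_id }}"
    ARM_CLIENT_SECRET: "{{ vault_arm_client_secret }}"
    ARM_TENANT_ID: "{{ vault_arm_tenant_id }}"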