Is it possible to overwrite or create a new version of a secret through Ansible in Azure?

I need to deploy my secrets to Azure Key Vault through Ansible.
If the secret is new (i.e. it didn't exist before), it works perfectly: the secret is created properly.
The problem comes when I need to update the secret: it is never overwritten.
I tried deleting it and creating it again, but that doesn't work either, since the module performs a soft delete, so a secret with the same name can't be created again right away.
Here is what I have tried so far.
Secret creation (works fine the first time, but does not overwrite the secret):
- name: "Create endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
secret_value: "desiredvalue"
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
tags:
environment: "{{ ENV }}"
role: "endpointsecret"
Here is how I try to delete it first and then create it again:
- name: "Delete endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
state: "absent"
- name: "Create endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
secret_value: "desiredvalue"
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
tags:
environment: "{{ ENV }}"
role: "endpointsecret"
When trying this, the error is:
Secret mysecret is currently being deleted and cannot be re-created; retry later
Secret creation with state: present (it's not creating a new version either):
- name: "Create endpoint secret."
azure_rm_keyvaultsecret:
secret_name: mysecret
secret_value: "desiredvalue"
keyvault_uri: "https://{{ AZURE_KV_NAME }}.vault.azure.net/"
state: "present"
tags:
environment: "{{ ENV }}"
role: "endpointsecret"
Any idea how to overwrite (create a new version of) a secret, or at least perform a hard delete?

I found no way other than deploying it through an ARM template:
- name: "Create ingestion keyvault secrets."
azure_rm_deployment:
state: present
resource_group_name: "{{ AZURE_RG_NAME }}"
location: "{{ AZURE_RG_LOCATION }}"
template:
$schema: "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#"
contentVersion: "1.0.0.0"
parameters:
variables:
resources:
- apiVersion: "2018-02-14"
type: "Microsoft.KeyVault/vaults/secrets"
name: "{{AZURE_KV_NAME}}/{{item.name}}"
properties:
value: "{{item.secret}}"
contentType: "string"
loop: "{{ SECRETLIST }}"
register: publish_secrets
async: 300 # Maximum runtime in seconds.
poll: 0 # Fire and continue (never poll)
- name: Wait for the secret deployment task to finish
async_status:
jid: "{{ publish_secrets_item.ansible_job_id }}"
loop: "{{publish_secrets.results}}"
loop_control:
loop_var: "publish_secrets_item"
register: jobs_publish_secrets
until: jobs_publish_secrets.finished
retries: 5
delay: 2
And then, in another file, SECRETLIST is declared as a variable:
SECRETLIST:
  - name: mysecret
    secret: "secretvalue"
  - name: othersecret
    secret: "secretvalue2"
Hope this helps anyone with a similar problem.
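As a side note, when it is only the soft-deleted secret that blocks re-creation, a hard delete (purge) can be done with the Azure CLI before creating the secret again. A rough sketch, assuming the az CLI is installed and logged in on the control node and the account has the purge permission on the vault; this task is not part of the original playbook:
- name: "Purge soft-deleted endpoint secret so it can be re-created."
  command: >
    az keyvault secret purge
    --vault-name {{ AZURE_KV_NAME }}
    --name mysecret
  ignore_errors: yes  # the secret may not be in a soft-deleted state at all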

Related

Ansible loop to add member in Azure AD group using graph api

I am stuck on looping over each entry in "members" in the variable file. The reason for taking each member is to get its user_id; by passing those user IDs to the Graph API I will be able to POST those members into a specific AD group.
main.yml
- name: Check if Group Exists
  include_tasks: tasks/groups/check_group.yml

- name: Create Group
  include_tasks: tasks/groups/create_group.yml
  when: item.state == "present" and az_group_id == ""

- name: Populate the group with members
  include_tasks: tasks/groups/add_member.yml
  when: az_group_id != "" and item.state == "present"
  loop: "{{ az_group_config_data }}"
add_member.yml
- name: Get Azure AD User ID
  include_tasks: tasks/users/gets/get_user_id.yml

- name: Add Member to a group
  uri:
    url: "{{ graph_api_base_url }}{{ group_service_context }}/{{ az_group_id }}/members/$ref"
    method: POST
    body: "{{ lookup('template', './add-group-member-body.j2') }}"
    status_code: 204
    return_content: yes
    use_proxy: no
    headers:
      Authorization: Bearer {{ token }}
    body_format: json
  register: group_create_result
get_userid.yml
- name: Register list of Azure Users as Ansible Fact
  uri:
    url: "{{ graph_api_base_url }}{{ user_service_context }}?$top=999&$select=mail,id"
    method: GET
    status_code: 200
    use_proxy: no
    headers:
      Authorization: Bearer {{ token }}
  register: user_list_result

- name: Register User {{ az_user_mail }} ID as Ansible Fact
  set_fact:
    az_user_id: "{{ user_list_result.json.value | json_query(az_user_id_query) }}"
defaults/main.yml
az_group_config_data:
  - az_group_name: ghe-test-users
    aws_account: sbx
    environment: poc
    state: present
    members:
      - name#name.com
      - name#example.com
      - test#test.com
vars/main.yml
az_group_name: "{{ item.az_group_name }}"
az_user_mail: "{{ item.members }}"
# api helpers
graph_api_base_url: "https://graph.microsoft.com/v1.0/"
group_service_context: "groups"
user_service_context: "users"
# query helpers
az_group_id_query: "[?displayName == '{{ az_group_name }}'].id | [0]"
az_user_id_query: "[?mail == '{{ az_user_mail }}'].id | [0]"
add-group-member-body.j2
{
  "@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/{{ az_user_id }}"
}
The code below works to fetch the email accounts:
- name: latest-debug
  set_fact:
    az_user_mail: "{{ populate.1 }}"
  loop: "{{ az_group_config_data | subelements('members') }}"
  loop_control:
    loop_var: populate
My issue is that only the last email account gets added to the group.
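One way to address this (a sketch, not tested against this exact role layout): a set_fact inside a loop leaves only the last member in az_user_mail, so instead loop the add_member include itself over the subelements, setting the mail per iteration. The az_user_mail and az_group_name definitions in vars/main.yml would likely need to be dropped or adjusted for the task vars below to take effect:
- name: Populate the group with members
  include_tasks: tasks/groups/add_member.yml
  vars:
    az_user_mail: "{{ populate.1 }}"                 # one member e-mail per iteration
    az_group_name: "{{ populate.0.az_group_name }}"
  loop: "{{ az_group_config_data | subelements('members') }}"
  loop_control:
    loop_var: populate
  when: az_group_id != "" and populate.0.state == "present"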

Access environment variables set by configMapRef in kubernetes pod

I have a set of environment variables in my deployment using envFrom and configMapRef. The environment variables held in these ConfigMaps were originally set by kustomize from JSON files.
spec.template.spec.containers[0].
envFrom:
  - secretRef:
      name: eventstore-login
  - configMapRef:
      name: environment
  - configMapRef:
      name: eventstore-connection
  - configMapRef:
      name: graylog-connection
  - configMapRef:
      name: keycloak
  - configMapRef:
      name: database
The issue is that it's not possible for me to access the specific environment variables directly.
Here is the result of running printenv in the pod:
...
eventstore-login={
  "EVENT_STORE_LOGIN": "admin",
  "EVENT_STORE_PASS": "changeit"
}
evironment={
  "LOTUS_ENV":"dev",
  "DEV_ENV":"dev"
}
eventstore={
  "EVENT_STORE_HOST": "eventstore-cluster",
  "EVENT_STORE_PORT": "1113"
}
graylog={
  "GRAYLOG_HOST":"",
  "GRAYLOG_SERVICE_PORT_GELF_TCP":""
}
...
This means that from my Node.js app I need to do something like this:
> process.env.graylog
'{\n "GRAYLOG_HOST":"",\n "GRAYLOG_SERVICE_PORT_GELF_TCP":""\n}\n'
This only returns the JSON string that corresponds to my original JSON file. But I want to be able to do something like this:
process.env.GRAYLOG_HOST
To retrieve my environment variables. But I don't want to have to modify my deployment to look something like this:
env:
  - name: NODE_ENV
    value: dev
  - name: EVENT_STORE_HOST
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_HOST
  - name: EVENT_STORE_PORT
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_PORT
  - name: KEYCLOAK_REALM_PUBLIC_KEY
    valueFrom:
      configMapKeyRef:
        name: keycloak-local
        key: KEYCLOAK_REALM_PUBLIC_KEY
where every variable is explicitly declared. I could do this, but it would be more of a pain to maintain.
Short answer:
You will need to define the variables explicitly, or change the ConfigMaps so they have a "1 environment variable = 1 value" structure; this way you will be able to refer to them using envFrom. E.g.:
"apiVersion": "v1",
"data": {
"EVENT_STORE_LOGIN": "admin",
"EVENT_STORE_PASS": "changeit"
},
"kind": "ConfigMap",
More details
ConfigMaps are key-value pairs, which means that for one key there is only one value. A ConfigMap can take a string as data, but it can't work with a map.
I tried manually editing the ConfigMap to confirm the above and got the following:
invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"
This is the reason why environment comes up as one string instead of a structure.
For example, this is how a ConfigMap created from a JSON file looks:
$ kubectl describe cm test2
Name: test2
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
test.json:
----
environment={
"LOTUS_ENV":"dev",
"DEV_ENV":"dev"
}
And this is how it's stored in kubernetes:
$ kubectl get cm test2 -o json
{
    "apiVersion": "v1",
    "data": {
        "test.json": "evironment={\n \"LOTUS_ENV\":\"dev\",\n \"DEV_ENV\":\"dev\"\n}\n"
    },
In other words, the observed behaviour is expected.
Useful links:
ConfigMaps
Configure a Pod to Use a ConfigMap

Environmental variables returning undefined for Kubernetes deployment

I posted a question similar to this and tried to implement what the answer for this question said: How to access Kubernetes container environment variables from Next.js application?
However, when I call my environment variables with process.env.USERNAME, I'm still getting undefined back... Am I doing something wrong in my deployment file? Here is a copy of my deployment.yaml:
metadata:
  namespace: <namespace>
  releaseName: <release name>
  releaseVersion: 1.0.0
  target: <target>
auth:
  replicaCount: 1
image:
  repository: '<name of repository is here>'
  pullPolicy: <always>
container:
  multiPorts:
    - containerPort: 443
      name: HTTPS
      protocol: TCP
    - containerPort: 80
      name: HTTP
      protocol: TCP
env:
  - name: USERNAME
    valueFrom:
      secretKeyRef:
        name: my-username
        key: username
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-password
        key: password
  - name: HOST
    valueFrom:
      secretKeyRef:
        name: my-host
        key: host
volumeMounts:
  - name: config
    mountPath: "/configMap"
    readOnly: true
volume:
  - name: config
    configMap:
      name: environmental-variables
resources:
  requests:
    cpu: 0.25
    memory: 256Mi
  limits:
    cpu: 1
    memory: 1024Mi
variables:
  - name: NODE_ENV
    value: <node env value here>
ingress:
  enabled: true
  ingressType: <ingressType>
  applicationType: <application type>
  serviceEndpoint: <endpoint>
  multiPaths:
    - path: /
    - HTTPS
  tls:
    enabled: true
    secretName: <name>
autoscale:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  cpuAverageUtilization: 50
  memoryUtilizationValue: 50
annotations:
  ingress:
    nginx.ingress.kubernetes.io/affinity: <affinity>
    nginx.ingress.kubernetes.io/session-cookie-name: <cookie-name>
    nginx.ingress.kubernetes.io/session-cookie-expires: <number>
    nginx.ingress.kubernetes.io/session-cookie-max-age: <number>
I also created a configMap.yaml file, although I'm not sure if that's the right way to do this. Here is my configMap.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: environmental-variables
data:
  .env: |
    USERNAME: <username>
    PASSWORD: <password>
    HOST: <host>
Any help will be greatly appreciated! Also, I want to keep my environment variables as Secrets, since I don't want to expose any of them because they contain sensitive information. I am doing this for a Node.js application using Express. Thank you!
EDIT: Here is how the Secrets part looks in my YAML file:
secrets:
  - name: environmental-variables
    key: USERNAME
  - name: environmental-variables
    key: PASSWORD
And here is how my Secrets YAML file looks:
kind: Secret
apiVersion: v1
metadata:
  name: environmental-variables
  namespace: tda-dev-duck-dev
data:
  USERNAME: <username>
  PASSWORD: <password>
After days of figuring out how to use Secrets as environment variables, I figured out how to reference them in my Node.js application!
Before, I was calling environment variables the normal way, process.env.VARIABLE_NAME, but that did not work for me when Secrets were used as environment variables. In order to get the value of the variable, I had to use process.env.ENVIRONMENTAL_VARIABLES_USERNAME in my JavaScript file, and that worked for me, where ENVIRONMENTAL_VARIABLES is the name and USERNAME is the key!
Not sure if this will help anyone else, but this is how I managed to access my Secrets in my Node.js application!
You created a ConfigMap but are trying to get the value from a Secret. If you want to set the value from the ConfigMap, then update env like the following:
env:
  - name: USERNAME
    valueFrom:
      configMapKeyRef:
        name: environmental-variables   # this is the ConfigMap name
        key: USERNAME                   # this is the key in the ConfigMap
  - name: PASSWORD
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: PASSWORD
  - name: HOST
    valueFrom:
      configMapKeyRef:
        name: environmental-variables
        key: HOST
and update the ConfigMap like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: environmental-variables
data:
  USERNAME: <username>
  PASSWORD: <password>
  HOST: <host>
To learn how to define container environment variables using ConfigMap data click here
If you want to use secrets as environment variables check here
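If you do want to keep the values in the Secret from the question instead, a minimal sketch of the corresponding env section would be the following (note that values under a Secret's data field must be base64-encoded; the stringData field accepts plain strings):
env:
  - name: USERNAME
    valueFrom:
      secretKeyRef:
        name: environmental-variables   # the Secret shown in the question
        key: USERNAME
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: environmental-variables
        key: PASSWORD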

Ansible vmware_guest optional disk with Ansible Tower survey

I have a playbook for creating a VM from a template in VMware ESXi 6.7. My playbook is below. I want to configure the second (and possibly subsequent) disks only if the DISK1_SIZE_GB variable is > 0. This is not working. I've also tried using 'when: DISK1_SIZE_GB is defined' with no luck. I'm using a survey in Ansible Tower, with the second disk configuration being an optional answer. In this case I get an error about 0 being an invalid disk size, or, when I check whether the variable is defined, I get an error about DISK1_SIZE_GB being undefined. Either way, the 'when' conditional doesn't seem to be working.
If I hardcode the size, as in the first 'disk' entry, it works fine; same if I enter a valid size from Ansible Tower. I need to NOT configure additional disks unless a size is defined in the Tower survey.
Thanks!
---
- name: Create a VM from a template
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Clone a template to a VM
      vmware_guest:
        hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
        username: "{{ lookup('env', 'VMWARE_USER') }}"
        password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
        validate_certs: 'false'
        name: "{{ HOSTNAME }}"
        template: RHEL-Server-7.7
        datacenter: Production
        folder: Templates
        state: poweredon
        hardware:
          num_cpus: "{{ CPU_NUM }}"
          memory_mb: "{{ MEM_MB }}"
        disk:
          - size_gb: 20
            autoselect_datastore: true
          - size_gb: "{{ DISK1_SIZE_GB }}"
            autoselect_datastore: true
            when: DISK1_SIZE_GB > 0
        networks:
          - name: "{{ NETWORK }}"
            type: static
            ip: "{{ IP_ADDR }}"
            netmask: "{{ NETMASK }}"
            gateway: "{{ GATEWAY }}"
            dns_servers: "{{ DNS_SERVERS }}"
            start_connected: true
        wait_for_ip_address: yes
AFAIK this can't be accomplished in a single task. You would be on the right track with when: DISK1_SIZE_GB is defined if disk: were a task rather than a parameter, though. Below is how I would approach this.
Create two survey questions:
DISK1_SIZE_GB - integer - required answer - enforce a non-zero minimum value such as 20 (since you're deploying RHEL)
DISK2_SIZE_GB - integer - optional answer - no minimum or maximum value
Create disk 1 in your existing vmware_guest task:
disk:
  - size_gb: "{{ DISK1_SIZE_GB }}"
    autoselect_datastore: true
Create a new vmware_guest_disk task which runs immediately afterwards and conditionally adds the second disk:
- name: Add second hard disk if necessary
  vmware_guest_disk:
    hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
    username: "{{ lookup('env', 'VMWARE_USER') }}"
    password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
    validate_certs: 'false'
    name: "{{ HOSTNAME }}"
    datacenter: Production
    folder: Templates
    state: poweredon
    disk:
      - size_gb: "{{ DISK2_SIZE_GB }}"
        autoselect_datastore: true
  when: DISK2_SIZE_GB is defined

Ansible loops and Azure resource dependencies

I'm using Ansible to provision resources to Azure. I'd like to have one task for each type of resource I want to deploy to Azure which loops through a list of dictionaries, so I can just add more dicts in case I want more resources provisioned. I'd like to define each resource variable only once.
The problem that arises with this is dependencies to other resources. Resource groups need to be provisioned before virtual networks, virtual networks before subnets and so on. Yet the information of the top level resources is still needed when provisioning the bottom level ones.
Here's the first attempt, with all of the required top level resource vars defined in the bottom level resource vars as well:
- hosts: localhost
  connection: local
  vars:
    resourcegroups:
      - name: "eh_test_rg01"
        location: "westeurope"
      - name: "eh_test_rg02"
        location: "eastus"
    virtualnetworks:
      - name: "eh_test_vn01"
        cidr: 10.15.0.0/22
        resource_group: "eh_test_rg01"
      - name: "eh_test_vn02"
        cidr: 10.15.4.0/22
        resource_group: "eh_test_rg02"
    DMZ_subnets:
      - name: "eh_test_dmzsn01"
        cidr: 10.15.1.0/24
        vnet: "eh_test_vn01"
        location: "westeurope"
        resource_group: "eh_test_rg01"
      - name: "eh_test_dmzsn02"
        cidr: 10.15.5.0/24
        vnet: "eh_test_vn02"
        location: "eastus"
        resource_group: "eh_test_rg02"
    app_subnets:
      - name: "eh_test_appsn01"
        cidr: 10.15.2.0/24
        vnet: "eh_test_vn01"
        location: "westeurope"
        resource_group: "eh_test_rg01"
      - name: "eh_test_appsn02"
        cidr: 10.15.6.0/24
        vnet: "eh_test_vn02"
        location: "eastus"
        resource_group: "eh_test_rg02"
    gateway_subnets:
      - name: "GatewaySubnet"
        cidr: 10.15.0.0/24
        vnet: "eh_test_vn01"
        resource_group: "eh_test_rg01"
        location: "westeurope"
      - name: "GatewaySubnet"
        cidr: 10.15.4.0/24
        vnet: "eh_test_vn02"
        resource_group: "eh_test_rg02"
        location: "eastus"
  tasks:
    - name: Create resource Group
      azure_rm_resourcegroup:
        name: "{{ item.name }}"
        location: "{{ item.location }}"
      with_items:
        - "{{ resourcegroups }}"
      tags: resourcegroups

    - name: Create vnet
      azure_rm_virtualnetwork:
        name: "{{ item.name }}"
        resource_group: "{{ item.resource_group }}"
        address_prefixes_cidr: "{{ item.cidr }}"
      with_items:
        - "{{ virtualnetworks }}"
      tags: vnets

    - name: Create subnets
      azure_rm_subnet:
        name: "{{ item.name }}"
        resource_group: "{{ item.resource_group }}"
        address_prefix: "{{ item.cidr }}"
        virtual_network: "{{ item.vnet }}"
      with_items:
        - "{{ DMZ_subnets }}"
        - "{{ app_subnets }}"
        - "{{ gateway_subnets }}"
      tags: subnets
As can be seen from the above example, by the time we get to the subnet dicts there are already two vars I have defined before. The deeper we go into the hierarchy, the more excess dict entries come into play.
I tried to build the relationships into the variable structure, but ran into issues looping through the new variable structure. with_subelements worked fine for looping over two levels of nested dictionaries, but it can't handle three or more.
- hosts: localhost
  connection: local
  vars:
    resourcegroups:
      - name: "eh_test_rg01"
        location: westeurope
        virtualnetworks:
          - name: "eh_test_vn01"
            cidr: 10.15.0.0/22
            subnets:
              - name: GatewaySubnet
                cidr: 10.15.0.0/24
              - name: eh_test_dmzsn01
                cidr: 10.15.1.0/24
              - name: eh_test_appsn01
                cidr: 10.15.2.0/24
      - name: "eh_test_rg02"
        location: westeurope
        virtualnetworks:
          - name: "eh_test_vn02"
            cidr: 10.15.4.0/22
            subnets:
              - name: GatewaySubnet
                cidr: 10.15.4.0/24
              - name: eh_test_dmzsn02
                cidr: 10.15.5.0/24
              - name: eh_test_appsn02
                cidr: 10.15.6.0/24
  tasks:
    - name: Create resource Group
      azure_rm_resourcegroup:
        name: "{{ item.name }}"
        location: "{{ item.location }}"
      with_items:
        - "{{ resourcegroups }}"
      tags: resourcegroups

    - name: Create vnet
      azure_rm_virtualnetwork:
        name: "{{ item.1.name }}"
        resource_group: "{{ item.0.name }}"
        address_prefixes_cidr: "{{ item.1.cidr }}"
      with_subelements:
        - "{{ resourcegroups }}"
        - virtualnetworks
      tags: vnets

    # Blows up at this point, with_subelements does not support more lists than 2
    - name: Create subnets
      azure_rm_subnet:
        name: "{{ item.2.name }}"
        resource_group: "{{ item.0.name }}"
        address_prefix: "{{ item.2.cidr }}"
        virtual_network: "{{ item.1.vnet }}"
      with_subelements:
        - "{{ resourcegroups }}"
        - virtualnetworks
        - subnets
      tags: subnets
What would be the best way to approach this problem? Do I need to define the vars differently, make some kind of helper tasks to create variable structures before running the task itself, use different loops or..?
As far as I know, I can't make references to other dict values which are contained in a list of dictionaries using YAML.
I would go a totally different route and use ARM templates; that's a much better way of provisioning things on Azure, and you can use native Ansible tasks for that as well:
http://docs.ansible.com/ansible/latest/azure_rm_deployment_module.html
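Alternatively, if you prefer to stay with the native azure_rm_* modules and keep the nested variable structure from the question, one option (a sketch under those assumptions, with a hypothetical helper file name) is to nest the loops with include_tasks: loop over resource group / vnet pairs with subelements, and let the included file loop over that vnet's subnets:
- name: Create subnets for each vnet
  include_tasks: create_subnets.yml   # hypothetical helper file
  loop: "{{ resourcegroups | subelements('virtualnetworks') }}"
  loop_control:
    loop_var: rg_vnet                 # rg_vnet.0 is the resource group, rg_vnet.1 the vnet

# create_subnets.yml
- name: Create subnet
  azure_rm_subnet:
    name: "{{ item.name }}"
    resource_group: "{{ rg_vnet.0.name }}"
    address_prefix: "{{ item.cidr }}"
    virtual_network: "{{ rg_vnet.1.name }}"
  loop: "{{ rg_vnet.1.subnets }}"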
