I'm attempting to retrieve a secret stored in Azure Key Vault with Ansible. I found and installed the azure.azure_preview_modules role using ansible-galaxy, and I've also updated ansible.cfg to point to the role's lookup_plugins directory. When running the following playbook, I get the error shown below it:
- hosts: localhost
  connection: local
  roles:
    - { role: azure.azure_preview_modules }
  tasks:
    - name: Look up secret when ansible host is general VM
      vars:
        url: 'https://myVault.vault.azure.net/'
        secretname: 'SecretPassword'
        client_id: 'ServicePrincipalIDHere'
        secret: 'ServicePrincipalPassHere'
        tenant: 'TenantIDHere'
      debug: msg="the value of this secret is {{lookup('azure_keyvault_secret',secretname,vault_url=url, cliend_id=client_id, secret=secret, tenant_id=tenant)}}"
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'azure_keyvault_secret'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Invalid credentials provided."}
Using the same information, I can connect to Azure with Azure PowerShell and the Azure CLI and retrieve the Key Vault secrets at the command line. However, those same credentials do not work within this task for the playbook using the lookup plugin.
I had a similar error when using the Python SDK (which the Ansible module is built on top of). Try changing the url to this:
url: 'https://myVault.vault.azure.net' # so remove the trailing slash
The error text is 101% misleading.
After much toil, I figured out the issue! The argument client_id is misspelled as cliend_id in the example (cliend_id=client_id), and I didn't catch it, which resulted in the error.
https://github.com/Azure/azure_preview_modules/blob/master/lookup_plugins/azure_keyvault_secret.py#L49
Corrected example below.
- name: Look up secret when ansible host is general VM
  vars:
    url: 'https://valueName.vault.azure.net'
    secretname: 'secretName/version'
    client_id: 'ServicePrincipalID'
    secret: 'P#ssw0rd'
    tenant: 'tenantID'
  debug: msg="the value of this secret is {{lookup('azure_keyvault_secret',secretname,vault_url=url, client_id=client_id, secret=secret, tenant_id=tenant)}}"
I'm having some issues when trying to use the HashiCorp Vault template with Terraform (via to-be-continuous).
When I use it with the terraform-vault template variant, I get an error message.
This is a summary of my .gitlab-ci.yml:
include:
  - project: "to-be-continuous/terraform"
    ref: "2.4.0"
    file: "templates/gitlab-ci-terraform.yml"
  # Vault variant
  - project: 'to-be-continuous/terraform'
    ref: '2.4.0'
    file: '/templates/gitlab-ci-terraform-vault.yml'

variables:
  VAULT_BASE_URL: "https://vault.secrets.tech.orange/v1"
  VAULT_ROLE_ID: $VAULT_ROLE_ID
  VAULT_SECRET_ID: $VAULT_SECRET_ID
  GCP_MYSECRET: "#url#http://vault-secrets-provider/api/secrets/XXX/gcp/credentials?field=mygcpsecret"
Error Message:
[ERROR] Failed getting secret GCP_MYSECRET:
... Connecting to vault-secrets-provider (127.0.0.1:80)
... wget: server returned error: HTTP/1.1 404 Not Found
I tried without the Vault template and it works.
Would you please help me with this, or point me to where I can ask for help?
It turns out you were facing this issue due to a Kubernetes runner limitation.
As stated in the GitLab documentation,
Kubernetes runners cannot use several services using the same port
As a result, using the tracking service in addition to another service that uses the same port (80) fails.
It has now been fixed.
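As a generic illustration of the clash (nginx and httpd here are arbitrary stand-in images, not the actual to-be-continuous services):

# On the Kubernetes executor, two CI services bound to the same port collide
services:
  - name: nginx:latest   # listens on port 80
  - name: httpd:latest   # also listens on port 80

With both bound to port 80 in the shared pod network, a request to 127.0.0.1:80 can land on the wrong service, which would explain the 404 seen above.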
I have integrated my self-hosted GitLab with HashiCorp Vault. I have followed the steps at https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/ and tried to run the pipeline.
I am receiving a certificate error while running the pipeline:
Error writing data to auth/jwt/login: Put "https://vault.systems:8200/v1/auth/jwt/login": x509: certificate signed by unknown authority
My .gitlab-ci.yml file:
Vault Client:
  image:
    name: vault:latest
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
  script:
    - export VAULT_ADDR=https://vault.systems:8200/
    - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=staging jwt=$CI_JOB_JWT)"
    - export PASSWORD="$(vault kv get -field=password kv/project/staging/db)"
    - echo $PASSWORD
If I use the -tls-skip-verify flag, it works fine.
Do I need to place the self-signed server certificate somewhere on the Vault server or the GitLab server?
Please let me know if anyone has any ideas on this one.
The containers that are managed by the Docker/Kubernetes executor must be configured to trust the self-signed certificate(s). You can edit the config.toml for your runner to mount the trusted certs/CA roots into GitLab CI job containers.
For example, on Linux-based docker executors:
[[runners]]
  name = "docker"
  url = "https://example.com/"
  token = "TOKEN"
  executor = "docker"
  [runners.docker]
    image = "ubuntu:latest"
    # Add the path to your ca.crt file in the volumes list
    volumes = ["/cache", "/path/to-ca-cert-dir/ca.crt:/etc/gitlab-runner/certs/ca.crt:ro"]
See the docs for more info.
I was able to solve this by setting the VAULT_CACERT variable in my .gitlab-ci.yml file:
- export VAULT_CACERT=/etc/gitlab-runner/certs/ca.crt
The certificate path here is the path inside the container where the certificate is mounted, which we specify when starting the container.
Posting this so that anyone looking for it can find the solution. :)
Error writing data to auth/jwt/login: Put "https://vault.systems:8200/v1/auth/jwt/login": x509: certificate signed by unknown authority
The error you're receiving is returned from the Vault client, so it's Vault that you need to get to accept that certificate. There's a decent note on how to do it in the Deployment Guide. (I used to work for HashiCorp Vault, so I knew where to dig it up.)
You can use -tls-skip-verify in your vault command, e.g. vault kv get -tls-skip-verify -field=password kv/project/staging/db, or, if you have Vault's CA certificate, export its path by setting VAULT_CACERT accordingly.
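For example, a minimal sketch (the CA path is illustrative and must match wherever the certificate is actually mounted):

export VAULT_CACERT=/etc/gitlab-runner/certs/ca.crt   # path to your CA certificate inside the job container
vault kv get -field=password kv/project/staging/db    # TLS is now verified against that CA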
When I execute the following PowerShell command:
.\kubectl get nodes
I get no nodes in response. I noticed that the kubectl config file is empty too:
apiVersion: v1
clusters:
- cluster:
    server: ""
  name: cl-kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
When I enter the server address in the config file, I get a message that the connection was refused. I suspect this is due to missing certificates. During another installation, the following information was apparently created automatically, and it is now missing:
certificate-authority-data
contexts - cluster
contexts - user
current-context
users - name
client-certificate-data
client-key-data
token
Could that be it? If so, where do I get this information?
Many thanks for the help.
You need to use the Azure CLI first to get the credentials. Run
az aks get-credentials
https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-get-credentials
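For example, a minimal sketch (the resource group and cluster names are placeholders):

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

This merges the cluster's credentials into your kubeconfig, populating the server address, certificate-authority-data, contexts, and user entries that are currently missing.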
I'm trying to manage my Azure cloud with the Ansible Azure module, with no luck so far, following the official guide.
I've set up a Service Principal and got credentials, then put them in the file $HOME/.azure/credentials as advised:
[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxx
tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
When I start a test playbook I get:
"No subscription_id provided. Please set 'AZURE_SUBSCRIPTION_ID' or use the 'subscription_id' parameter"
Then I set the environment variables:
export AZURE_CLIENT_ID=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export AZURE_SECRET=xxxxxxxxxxxxxxxxx
export AZURE_SUBSCRIPTION_ID=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export AZURE_TENANT=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Now I get the error:
"No management_cert_path provided. Please set 'AZURE_CERT_PATH' or use the 'management_cert_path' parameter"
I can successfully log in to my application with the Azure CLI:
azure account show
info: Executing command account show
data: Name : Visual Studio Enterprise: BizSpark
data: ID : xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
data: State : Enabled
data: Tenant ID : xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
data: Is Default : true
data: Environment : AzureCloud
data: Has Certificate : No
data: Has Access Token : Yes
data: User name : xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
data:
info: account show command OK
My test playbook:
---
- hosts: localhost
connection: local
tasks:
- name: Azure VM creation
azure:
name: Test_machine
role_size: Basic_A0
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
location: 'West Europe'
user: admin
password: Password!
storage_account: my-storage-account
wait: yes
P.S. The recipe listed in this question does not fit my case.
The azure module is the legacy Service Management module and will likely be deprecated in Ansible 2.2. It sounds like you're already using ARM, and the guide you're referring to is about ARM, so you should use the azure_rm_virtualmachine module instead.
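For reference, a rough ARM equivalent of the playbook above, as a sketch (resource_group is a placeholder, and it assumes the same Service Principal credentials from $HOME/.azure/credentials):

- hosts: localhost
  connection: local
  tasks:
    - name: Azure VM creation (ARM)
      azure_rm_virtualmachine:
        resource_group: myResourceGroup   # placeholder resource group
        name: TestMachine
        vm_size: Standard_A0              # pick a size available in your region
        admin_username: azureuser
        admin_password: 'Password!'
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.1'
          version: latest

Unlike the legacy module, this goes through the Resource Manager API, so no management certificate is involved.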
I am using azure==0.11.1 (and have also tried version 1.0.1). When I execute the playbook below, I get the error shown after it.
azurevm.yml
---
- local_action:
    module: "azure"
  name: 'vm_ubuntu1'
  role_size: Small
  image: '5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-67-20150815'
  password: "admin12345#"
  location: 'East US 2'
  user: admin
  wait: yes
  subscription_id: 'xxxxxxxxxxxxxx'
  management_cert_path: '/ansible-pbook/xxxx.pem'
  storage_account: 'storageacc01'
  endpoints: '22,8080,80'
  register: azure_vm
Error:
root#xxxxx:/ansible-pbook# ansible-playbook azure_vm.yml
ERROR: password is not a legal parameter of an Ansible Play
Please advise.
The correct format for a task is something like this:
- local_action: azure
    name='vm_ubuntu1'
    role_size=Small
    image='5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-67-20150815'
    password="admin12345#"
    location='East US 2'
    user=admin
    wait=yes
    subscription_id='xxxxxxxxxxxxxx'
    management_cert_path='/ansible-pbook/xxxx.pem'
    storage_account='storageacc01'
    endpoints='22,8080,80'
  register: azure_vm
All the parameters passed to the module should be in key=value format, while attributes of the task/action itself (like register, tags, ignore_errors, etc.) are in attribute: value format.
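As a generic illustration of that distinction, a minimal sketch (the command and variable names are arbitrary):

- local_action: command echo hello   # free-form argument passed to the command module
  register: echo_result              # task attribute, consumed by Ansible itself
  ignore_errors: yes                 # another task attribute

register and ignore_errors never reach the module; only the text after the module name does.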