Hiera-eyaml-kms - decrypting issue when used in Puppet

I'm using a KMS key in Puppet to decrypt some secrets, and I'm getting the below error while decrypting them:
Error: Evaluation Error: Error while evaluating a Function Call, missing region; use :region option or export region name to ENV['AWS_REGION'] at /init.pp:3:18 on node
This is my Hiera config file, in which I have configured the KMS key ID and region:
---
:backends:
  - yaml
  - eyaml
:hierarchy:
  - "%{::osfamily}/%{::environ}/%{::appname}"
:yaml:
  :datadir: 'C:\temp\var'
:eyaml:
  :datadir: 'C:\temp\var\eyaml'
  :key_id: '<key_id>'
  :aws_region: 'eu-west-1'
  :extension: 'yaml'
I'm using the hiera-eyaml-kms 0.0.1 gem.
I have even tried setting the region in the AWS_REGION environment variable.

Change the following in your hiera.yaml, under :eyaml: (a corrected sketch of the section is shown below):
:aws_region: 'eu-west-1' -> :kms_aws_region: 'eu-west-1'
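For reference, a minimal sketch of the corrected :eyaml: section, assuming the other settings from the question stay the same (the key ID is a placeholder):
:eyaml:
  :datadir: 'C:\temp\var\eyaml'
  :key_id: '<key_id>'
  :kms_aws_region: 'eu-west-1'   # the hiera-eyaml-kms backend expects the region under this key
  :extension: 'yaml'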

Related

Gitlab integration with Hashicorp Vault

I have integrated my self-hosted GitLab with HashiCorp Vault. I followed the steps here https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/ and tried to run the pipeline.
I am receiving a certificate error while running the pipeline:
Error writing data to auth/jwt/login: Put "https://vault.systems:8200/v1/auth/jwt/login": x509: certificate signed by unknown authority
My .gitlab-ci.yml file:
Vault Client:
  image:
    name: vault:latest
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  before_script:
  script:
    - export VAULT_ADDR=https://vault.systems:8200/
    - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=staging jwt=$CI_JOB_JWT)"
    - export PASSWORD="$(vault kv get -field=password kv/project/staging/db)"
    - echo $PASSWORD
If I use the -tls-skip-verify flag, it works fine.
Do I need to place the self-signed server certificate somewhere on the Vault server or the GitLab server?
Please let me know if anyone has any ideas on this one.
The containers that are managed by the Docker/Kubernetes executor must be configured to trust the self-signed cert(s). You can edit the config.toml for your runner to mount the trusted certs/CA roots into GitLab CI job containers.
For example, on Linux-based docker executors:
[[runners]]
  name = "docker"
  url = "https://example.com/"
  token = "TOKEN"
  executor = "docker"
  [runners.docker]
    image = "ubuntu:latest"
    # Add path to your ca.crt file in the volumes list
    volumes = ["/cache", "/path/to-ca-cert-dir/ca.crt:/etc/gitlab-runner/certs/ca.crt:ro"]
See the docs for more info.
I was able to solve this by setting the VAULT_CACERT variable in my .gitlab-ci.yml file:
- export VAULT_CACERT=/etc/gitlab-runner/certs/ca.crt
The certificate path here is the path inside the container where the certificate is mounted, which we specify when starting the container.
Posting this so anyone looking for it can find the solution. :)
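Put together, a minimal sketch of the job's script section, assuming the runner mounts the CA certificate at /etc/gitlab-runner/certs/ca.crt as in the config.toml example above:
script:
  # Point the vault CLI at the server and at the mounted CA certificate
  - export VAULT_ADDR=https://vault.systems:8200
  - export VAULT_CACERT=/etc/gitlab-runner/certs/ca.crt
  - export VAULT_TOKEN="$(vault write -field=token auth/jwt/login role=staging jwt=$CI_JOB_JWT)"
  - export PASSWORD="$(vault kv get -field=password kv/project/staging/db)"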
Error writing data to auth/jwt/login: Put "https://vault.systems:8200/v1/auth/jwt/login": x509: certificate signed by unknown authority
The error you're receiving is being returned from Vault, so it's Vault that you need to get to accept that certificate. There's a decent note on how to do it in the Deployment Guide. (I used to work for HashiCorp Vault so I knew where to dig it up.)
You can use -tls-skip-verify in your vault command, e.g. vault kv get -tls-skip-verify -field=password kv/project/staging/db, or, if you have Vault's CA certificate, you can point to it by setting VAULT_CACERT to the right path.

How to create an elasticbeanstalk environment through boto3 setting a keypair name

If you use the AWS console or even the command line, you won't have any issue setting a default keypair for your Elastic Beanstalk environment.
But you do if you use boto3.
Surprisingly, there isn't a single mention of setting a keypair in the official boto3 documentation for Elastic Beanstalk: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html.
I also tried creating a zip file containing the most basic files to make a simple website work. Supposedly, I can set a keypair name in .elasticbeanstalk/config.yml. I did it this way:
branch-defaults:
  default:
    environment: app10-env
    group_suffix: null
global:
  application_name: app10
  branch: null
  default_ec2_keyname: main4
  default_platform: PHP 7.4 running on 64bit Amazon Linux 2
  default_region: us-east-1
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  repository: null
  sc: null
  workspace_type: null
Yes, the "main4" exists in my AWS account. But creating an environment to my application with a zip containing it, it seems that it have no effect at all. After my environment has sucessfully deployed, I can check afterwards through console and see that have no keypair setted to environment. I need to go to a further step on console to set the keypair and await a new environment deployiment to perform the update.
Is there a real issue with the boto3 elasticbeanstalk when dealing with environment keypairs or I am doing something wrong?
I would set the OptionSettings when calling create_environment, or include the key name in .ebextensions; I guess boto3 is not reading the EB CLI default config you are using. A sketch of the create_environment approach follows the option details below.
Refs
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html#ElasticBeanstalk.Client.create_environment
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-methods-before.html
Option to set
Namespace: aws:autoscaling:launchconfiguration
Option Names: IamInstanceProfile, EC2KeyName, InstanceType
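For illustration, a minimal sketch of this OptionSettings approach with boto3 (the application, environment, solution stack, and keypair names are placeholders and must match what exists in your account):

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

response = eb.create_environment(
    ApplicationName="app10",        # placeholder application name
    EnvironmentName="app10-env",    # placeholder environment name
    SolutionStackName="64bit Amazon Linux 2 v3.3.6 running PHP 7.4",  # replace with a stack available to you
    OptionSettings=[
        {
            # Launch configuration namespace documented for Elastic Beanstalk
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "EC2KeyName",
            "Value": "main4",       # placeholder keypair name
        },
    ],
)
print(response["EnvironmentName"], response["Status"])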
The response from #f7o is not accurate, but it helped to solve the problem.
There's no option for setting a keypair directly on the "create_environment" command of the boto3 client. I tried to use "EC2KeyName", but it returned an invalid value exception.
Using .ebextensions does the job, though. If someone else is interested in doing the same task, all that is needed is to create a folder called ".ebextensions" containing a file called "customkey.config" (the file name can be anything, but it must have the .config suffix) with the following content:
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: EC2KeyName
    value: <your_keypair_name>

Ansible Lookup with azure_keyvault_secret Invalid Credentials

I'm attempting to retrieve a secret stored in Azure Key Vault with Ansible. I found and installed azure.azure_preview_modules using ansible-galaxy. I've also updated ansible.cfg to point to the lookup_plugins directory from the role. When running the following playbook, I get the error shown after it:
- hosts: localhost
  connection: local
  roles:
    - { role: azure.azure_preview_modules }
  tasks:
    - name: Look up secret when ansible host is general VM
      vars:
        url: 'https://myVault.vault.azure.net/'
        secretname: 'SecretPassword'
        client_id: 'ServicePrincipalIDHere'
        secret: 'ServicePrinipcalPassHere'
        tenant: 'TenantIDHere'
      debug: msg="the value of this secret is {{lookup('azure_keyvault_secret',secretname,vault_url=url, cliend_id=client_id, secret=secret, tenant_id=tenant)}}"
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'azure_keyvault_secret'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Invalid credentials provided."}
Using the same information I can connect to Azure with Azure PowerShell and the Azure CLI and retrieve the Key Vault secrets at the command line. However, those same credentials do not work within this task for the playbook using the lookup plugin.
I had a similar error when using the Python SDK (which Ansible is built on top of). Try changing the url to this:
url: 'https://myVault.vault.azure.net' # i.e. remove the trailing slash
The error text is 101% misleading.
After much toil I figured out the issue! The argument client_id is misspelled in the example (cliend_id=client_id) and I didn't catch it, which resulted in the error.
https://github.com/Azure/azure_preview_modules/blob/master/lookup_plugins/azure_keyvault_secret.py#L49
Corrected example below.
- name: Look up secret when ansible host is general VM
  vars:
    url: 'https://valueName.vault.azure.net'
    secretname: 'secretName/version'
    client_id: 'ServicePrincipalID'
    secret: 'P#ssw0rd'
    tenant: 'tenantID'
  debug: msg="the value of this secret is {{lookup('azure_keyvault_secret',secretname,vault_url=url, client_id=client_id, secret=secret, tenant_id=tenant)}}"

How can I run terraform init with azure on my local machine

I'm trying to run Terraform locally, but it should connect to an Azure machine. We have Azure agents that do exactly this; being able to run it locally would help me move faster.
Here is my command
terraform init -reconfigure -backend-config ~/common.tfvars
Here is the error
Initializing modules...
- module.kubernetes
- module.database
- module.trafficmanager
- module.appInsights

Initializing the backend...

Error configuring the backend "azurerm": resource_group_name and credentials must be provided when access_key is absent

Please update the configuration in your Terraform files to fix this error
then run this command again.
cat ~/common.tfvars
resource_group_name = "myproject-nst-config-RG"
storage_account_name = "myprojectnstterraform"
container_name = "tfstatemyprojectact"
key = "nstproject"
What am I missing? Is what I want even possible?
Thank you!
You need to provide credentials for Terraform to connect to Azure, usually a service principal; this will include a username/application ID, password, and tenant. Have a read of the MS documentation on using Terraform with Azure and you will see they set environment variables with these details, as sketched below.
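As a minimal sketch, these are the environment variables the azurerm provider and backend read; the values are placeholders for your own service principal and storage account:
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"       # service principal application ID
export ARM_CLIENT_SECRET="<service-principal-password>"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
# Alternatively, for the backend only, the storage account access key can be supplied directly:
export ARM_ACCESS_KEY="<storage-account-access-key>"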
If you are trying to use the az cli login, you'll want to make sure you are running Terraform 0.12.

Can't use S3 backend with Terraform - missing credentials

I have the most pedestrian of Terraform samples:
# Configure AWS provider
provider "aws" {
  region     = "us-east-1"
  access_key = "xxxxxxxxx"
  secret_key = "yyyyyyyyyyy"
}

# Terraform configuration
terraform {
  backend "s3" {
    bucket = "terraform.example.com"
    key    = "85/182/terraform.tfstate"
    region = "us-east-1"
  }
}
When I run terraform init I receive the following (traced) response:
2018/08/14 14:19:13 [INFO] Terraform version: 0.11.7 41e50bd32a8825a84535e353c3674af8ce799161
2018/08/14 14:19:13 [INFO] Go runtime version: go1.10.1
2018/08/14 14:19:13 [INFO] CLI args: []string{"C:\\cygwin64\\usr\\local\\bin\\terraform.exe", "init"}
2018/08/14 14:19:13 [DEBUG] Attempting to open CLI config file: C:\Users\judall\AppData\Roaming\terraform.rc
2018/08/14 14:19:13 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2018/08/14 14:19:13 [INFO] CLI command args: []string{"init"}
2018/08/14 14:19:13 [DEBUG] command: loading backend config file: C:\cygwin64\home\judall\t2
2018/08/14 14:19:13 [DEBUG] command: no data state file found for backend config
Initializing the backend...
2018/08/14 14:19:13 [DEBUG] New state was assigned lineage "5113646b-318f-9612-5057-bc4803292c3a"
2018/08/14 14:19:13 [INFO] Building AWS region structure
2018/08/14 14:19:13 [INFO] Building AWS auth structure
2018/08/14 14:19:13 [INFO] Setting AWS metadata API timeout to 100ms
2018/08/14 14:19:13 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2018/08/14 14:19:13 [DEBUG] plugin: waiting for all plugin processes to complete...
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again.
I've been googling for hours on this. I've tried to use the 'profile' property - which yields slightly different trace logs, but the same end result. I've tried setting the AWS_ environment variables - with the same result.
I'm running terraform version 0.11.7. Any suggestions?
The provider configuration is independent of your backend configuration.
The credentials you have configured in the provider block are used to create your AWS-related resources. For accessing the S3 bucket as storage for your remote state, you also need to provide credentials. These can be the same as in your provider config or completely different (with permissions only on this specific bucket, for security reasons).
You can fix it by adding the credentials in the backend block:
# Terraform configuration
terraform {
  backend "s3" {
    bucket     = "terraform.example.com"
    key        = "85/182/terraform.tfstate"
    region     = "us-east-1"
    access_key = "xxxxxxxxx"
    secret_key = "yyyyyyyyyyy"
  }
}
Or you can create an AWS (default) profile in your home directory (Docs) and remove the credentials from your Terraform code (the preferred option when you store your config in a version control system); a sketch of the profile file follows.
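For example, a minimal sketch of the shared credentials file at ~/.aws/credentials (the key values are placeholders):
[default]
aws_access_key_id = xxxxxxxxx
aws_secret_access_key = yyyyyyyyyyy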
As pointed out by #JimUdall in the comments, if you are re-running init on an updated backend configuration, you need to use -reconfigure for the changed configuration to apply:
terraform init -reconfigure
