Using bitbucket-pipelines to deploy the same branch to multiple environments

I have three environments in AWS (dev/uat/prod), and I want to deploy the same branch (develop) to all three environments using bitbucket-pipelines. As I understand it, we need an AWS_ACCESS_KEY_ID to do so.
My question is: how do I provide the AWS_ACCESS_KEY_ID for all three environments dynamically?
At the moment I am only able to deploy to one environment at a time.
Thanks for the help in advance.

There are a number of client libraries that allow you to parametrize AWS credentials without having to store them in environment-specific config files. You didn't specify which AWS service you want to use, but here's an example for S3: s3_website
Their config file looks like this; you can configure multiple sets of variables.
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
If this doesn't work for you, write a shell/Python script around the AWS CLI and pull the environment-specific variables into the AWS config file yourself. Manage that script as part of your source code or a Docker image.
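Alternatively, Bitbucket Pipelines itself supports deployment environments, where each environment carries its own set of deployment variables (including AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY). A minimal sketch, assuming the environment names and a deploy.sh script are set up on your side (deploy.sh is hypothetical and would read the standard AWS environment variables):

```yaml
# bitbucket-pipelines.yml -- sketch; environment names and deploy.sh are
# illustrative. Each "deployment" picks up the variables defined for that
# environment under Repository settings > Deployments.
pipelines:
  branches:
    develop:
      - step:
          name: Deploy to dev
          deployment: dev
          script:
            - ./deploy.sh
      - step:
          name: Deploy to uat
          deployment: uat
          trigger: manual
          script:
            - ./deploy.sh
      - step:
          name: Deploy to prod
          deployment: prod
          trigger: manual
          script:
            - ./deploy.sh
```

With this layout the same develop branch flows through all three environments, and each step sees only the credentials configured for its own deployment environment.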

Related

Move configuration from env.yml to SSM in serverless.yml

My API keys are hard-coded in env.yml and published in our git repo, so I need to move all secrets from my serverless.yml config (which uses ${file(env.yml)}) to SSM for every environment except the local one.
The idea is to fall back to the local env.yml in case the configuration for an environment (i.e. localhost) is not available on the remote server.
So, for instance, to find the value for PRIVATE_API_KEY_<stage>, look up /SHARED/<stage>/PRIVATE_API_KEY in SSM; if not found, look up CEFLA_KEY_VALUE_<stage> in .env.local.
Any clue?
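One way to sketch this is with the Serverless Framework's variable fallback syntax, where a comma-separated list of sources is tried in order (assuming Serverless Framework v3 variable resolution; the parameter path and key names below follow the question and may need adjusting):

```yaml
# serverless.yml -- sketch: try SSM first, fall back to the local env.yml
# when the parameter for the current stage does not exist.
provider:
  environment:
    PRIVATE_API_KEY: ${ssm:/SHARED/${sls:stage}/PRIVATE_API_KEY, file(env.yml):PRIVATE_API_KEY_${sls:stage}}
```

Locally (where the SSM parameter is absent or unreachable without credentials) the second source resolves; on deployed stages the SSM value wins.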

How to emulate AWS Parameter Store on local computer for lambda function development?

I'm using the Serverless framework and Node.js to develop my AWS Lambda function. So far, I have used a .env file to store my secrets, so I can access them in serverless.yml like this:
provider:
  ...
  environment:
    DB_HOST: ${env:DB_HOST}
    DB_PORT: ${env:DB_PORT}
But now I need to use AWS Parameter Store instead of the .env file. I have tried to find information about how to emulate it on my local machine, but couldn't.
I think I have to use one serverless config file for both local and staging. I need a way to select env values either from the .env file (on my local machine) or from Parameter Store (on AWS Lambda). Is there any way to do this? Thanks!
It should work like this: within your serverless.yml you can reference .env parameters with ${env:keyname} and AWS parameters using the ${param:keyname} syntax.
If you need to support both, you can write ${env:keyname, param:keyname}; the sources are tried in order and the first one that resolves wins.
Here's an example:
provider:
  ...
  environment:
    ALLOWED_ORIGINS: ${env:ALLOWED_ORIGINS, param:ALLOWED_ORIGINS}
    AUTHORIZER_ARN: ${env:AUTHORIZER_ARN, param:AUTHORIZER_ARN}
    MONGODB_URL: ${env:MONGODB_URL, param:MONGODB_URL}

Replace passwords in config files using Jenkins

I have a variety of Node.js packages with config files containing a combination of environment-specific config variables and sensitive information/secrets (passwords, API keys, etc.).
Is there some way to put placeholders in the config files and have a Jenkins plugin swap them out for valid values? In particular, I'd like to be able to use the Jenkins Credentials plugin for passwords.
If not, what is the best way to customize config files for each environment securely?
Depending on your specific use case, you can use environment variables, or you can save the information as Jenkins secrets and reference them like so in your pipeline:
environment {
    AWS_ACCESS_KEY_ID     = credentials('jenkins-aws-secret-key-id')
    AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
}
See more here: https://www.jenkins.io/doc/book/pipeline/jenkinsfile/#secret-text
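For the placeholder approach itself, the substitution step is simple enough to script without a dedicated plugin. A minimal sketch (the template text and variable names are illustrative) of what a pipeline step could run after credentials() has populated the environment:

```python
from string import Template

def render_config(template_text: str, values: dict) -> str:
    """Replace $NAME placeholders with values (e.g. os.environ),
    leaving unknown placeholders untouched."""
    return Template(template_text).safe_substitute(values)

template = '{"apiKey": "$API_KEY", "host": "$DB_HOST"}'
print(render_config(template, {"API_KEY": "s3cret", "DB_HOST": "db.example.com"}))
# -> {"apiKey": "s3cret", "host": "db.example.com"}
```

Checking the rendered file into the workspace (never into source control) keeps the secrets out of git while the template stays versioned alongside the code.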

terraform interpolation with variables returning error [duplicate]

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For a Terraform remote backend, is there a way to use a variable to specify the organization/workspace name instead of the hardcoded values there?
The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable then you can use partial configuration for the static parts (e.g. the type of backend, such as S3) and then provide the rest of the config at run time: interactively, via environment variables, or via command-line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command-line flags that use an appropriate S3 bucket (e.g. a different one for each project and AWS account) and makes sure the state file location matches the path of the directory I am working in.
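A minimal sketch of such a wrapper, assuming an S3 backend with an empty backend "s3" {} block in the Terraform code (bucket naming and region are hypothetical):

```shell
#!/bin/sh
# init.sh -- sketch: supply backend settings at init time instead of
# hardcoding them in the terraform { backend "s3" {} } block.
set -eu

ENVIRONMENT="$1"   # e.g. dev, uat, prod

terraform init \
  -backend-config="bucket=my-tf-state-${ENVIRONMENT}" \
  -backend-config="key=$(basename "$PWD")/terraform.tfstate" \
  -backend-config="region=eu-west-1"
```

Running ./init.sh dev and ./init.sh prod then points the same code at different state buckets without touching the backend block.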
I had the same problems and was very disappointed with the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt because it fills some of Terraform's gaps, e.g. the inability to use variables in the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry
