How to pass in variables via the CLI when running cdktf deploy - Terraform

I currently have a cdktf (terraform cdk for typescript) project where I have a variable defined as follows:
const resourceName = new TerraformVariable(this, "resourceName", {
  type: "string",
  default: "defaultResourceName",
  description: "resource name",
});
However, when I run cdktf deploy -var="resourceName=foo", the resourceName variable is still defaultResourceName rather than the foo I intended to pass in via the CLI. According to the Terraform documentation at https://www.terraform.io/language/values/variables#variables-on-the-command-line this is the right way to pass variables on the command line, but it's clearly not working here. Would anyone know the correct way? I know variables can be changed dynamically via environment variables, but I'd ideally like to pass them through the CLI directly.

First, you need to set EXCLUDE_STACK_ID_FROM_LOGICAL_IDS to true in the cdktf.json file; otherwise, the variables get a random suffix.
Also, there is no -var flag for the deploy command, so you have to set the variables as environment variables.
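Since cdktf synthesizes plain Terraform, the standard TF_VAR_&lt;name&gt; environment-variable mechanism applies; a minimal sketch (the variable name is taken from the question, and the deploy command is shown as a comment since it needs a real project):

```shell
# Terraform reads any environment variable named TF_VAR_<name> as the
# value of the input variable <name>; a synthesized cdktf stack picks
# this up the same way.
export TF_VAR_resourceName=foo

# cdktf deploy   # would now see resourceName = "foo"
echo "$TF_VAR_resourceName"
```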

I used cdktf deploy -p resourceName=foo

Related

How to find the deployment environment of an azure function app in code

The scenario is:
We have Azure DevOps, and we can run a pipeline into one of x number of named environments.
We make use of Azure App Configuration, with labels for the values for each environment. So each setting might have a different value depending on the label.
It occurs to me that if I match the labels to the names of the environments, then in code, when I get a config value, if I can somehow determine which environment I've been deployed to (from the code's point of view), I can pass this value when getting the app config and I will have the correct config settings for my environment.
var environment = // HERE find my deployed-to environment as in pipeline (1.)
var credentials = new DefaultAzureCredential();
configurationBuild.AddAzureAppConfiguration(options =>
{
    options.Connect(settings.GetValue<string>("ConnectionStrings:AppConfig"))
        .Select(KeyFilter.Any, LabelFilter.Null)
        .Select(KeyFilter.Any, labelFilter: environment);
});
I was thinking the solution would be something of the form of setting the environment in azure-pipelines.yaml, where the pipeline somehow knows the choice of environment, and then reading it back out of an environment variable in code. But I don't know how to do that, or whether there is a better way. Thanks in advance for any help offered.
You can use pipeline variables to pass the environment value to your code. The pipeline variables you define in azure-pipelines.yaml get injected as environment variables on the build agent, which allows you to read their values in your code using Environment.GetEnvironmentVariable().
So you can define a pipeline variable in azure-pipelines.yaml like the example below (i.e. DeployEnv):
parameters:
- name: Environment
  displayName: Deploy to environment
  type: string
  values:
  - none
  - test
  - dev

variables:
  DeployEnv: ${{parameters.Environment}}

trigger: none

pool:
  vmImage: 'windows-latest'
Then you can get the pipeline variable (i.e. DeployEnv) in your code like below:
using System;

var environment = Environment.GetEnvironmentVariable("DeployEnv");
var credentials = new DefaultAzureCredential();
// ...
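To sanity-check this flow without running a pipeline, the injection can be simulated in a shell. Note that Azure DevOps upper-cases variable names when exporting them to the environment (DEPLOYENV here), which is why the lookup above still works on a Windows agent, where environment variable names are case-insensitive:

```shell
# Simulate what the agent does: the pipeline variable DeployEnv is
# exported to the job environment with an upper-cased name.
export DEPLOYENV=test

# The application then reads it back, e.g. via
# Environment.GetEnvironmentVariable("DeployEnv") on Windows.
echo "$DEPLOYENV"
```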
Another workaround is to define an environment property in a config file (e.g. web.config) and read that property in your code. In the pipeline, you then add tasks that replace the value of the environment property in the config file. See this thread for more information.

terraform interpolation with variables returning error [duplicate]

# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
For the Terraform remote backend, is there a way to use a variable to specify the organization / workspace name instead of hardcoding the values there?
The Terraform documentation didn't seem to mention anything related either.
The backend configuration documentation goes into this in some detail. The main point to note is this:
Only one backend may be specified and the configuration may not contain interpolations. Terraform will validate this.
If you want to make this easily configurable, you can use partial configuration for the static parts (e.g. the backend type, such as S3) and then provide the rest of the config at run time: interactively, via environment variables, or via command-line flags.
I personally wrap Terraform actions in a small shell script that runs terraform init with command-line flags that select an appropriate S3 bucket (e.g. a different one for each project and AWS account) and make sure the state file location matches the path of the directory I am working in.
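A sketch of that partial-configuration approach (the file name and all values are illustrative; the backend block in the .tf file would shrink to just backend "s3" {} with no arguments):

```shell
# Keep only the static backend declaration in Terraform code and put
# the per-environment settings in a separate config file:
cat > prod.s3.tfbackend <<'EOF'
bucket = "my-company-prod-tfstate"
key    = "my-app/terraform.tfstate"
region = "eu-west-1"
EOF

# A wrapper script would then run:
#   terraform init -backend-config=prod.s3.tfbackend
cat prod.s3.tfbackend
```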
I had the same problems and was disappointed by the need for additional init/wrapper scripts. Some time ago I started to use Terragrunt.
It's worth taking a look at Terragrunt, because it closes the gap between Terraform and the lack of variable support in certain places, e.g. the remote backend configuration:
https://terragrunt.gruntwork.io/docs/getting-started/quick-start/#keep-your-backend-configuration-dry

Azure Pipeline Unit Tests & Environment Variables

I'm struggling to find another question with an answer for this. I have the following code in a unit test (variable names changed); this information is used in my integration tests:
var configuration = new ConfigurationBuilder()
    .SetBasePath(Environment.CurrentDirectory)
    .AddEnvironmentVariables()
    .AddUserSecrets<MyTestTests>()
    .Build();

var option = new Option();
option.x1 = configuration.GetValue<string>("Option:x1");
option.x2 = configuration.GetValue<string>("Option:x2");
option.x3 = configuration.GetValue<string>("Option:x3");
option.x4 = configuration.GetValue<string>("Option:x4");
return option;
This works fine when my unit tests run locally. However, when my integration tests run in an Azure Pipeline, the environment variables are not picked up.
I have created them in the format Option__x1 (with a double underscore).
If the environment variables are plain (non-secret) it works; however, if they are set as secret it does not.
Does anyone have any idea?
This behavior is by design, to protect secret variables from being exposed in the task.
This documentation states that secret variables are:
Not decrypted into environment variables. So scripts and programs run by your build steps are not given access by default.
Decrypted for access by your build steps. So you can use them in password arguments and also pass them explicitly into a script or a
program from your build step (for example as $(password)).
That is the reason why you could not use the secret variables in your task.
To resolve this issue, we need to explicitly map secret variables:
variables:
  GLOBAL_MYSECRET: $(mySecret)
  GLOBAL_MY_MAPPED_ENV_VAR: foo

steps:
- powershell: |
    Write-Host "My mapped env var: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the right way to map to an env variable
You could check this thread and the documentation for some more details.
Hope this helps.


How to get environment variables defined in serverless.yml in tests

I am using the serverless framework for running lambda functions on AWS.
In my serverless.yml there are environment variables that are fetched from SSM.
When I write integration tests for the code, I need the environment variables to be set, and I can't find a good way to do this.
I don't want to duplicate all the variable definitions just for the tests; they are already defined in serverless.yml. Also, some are secrets and can't be committed to source control, so I would have to repeat them in the CI environment as well.
I tried using serverless-jest-plugin, but it is not working and not well maintained.
Ideas I had for solutions:
Make the tests exec sls invoke - this will work but would mean that the code cannot be debugged, I won't know the test coverage, and it will be slow.
Parse the serverless.yml myself and export the env variables - possible but rewriting the logic of pulling the SSM variables just for tests seems wrong.
Any ideas?
The solution we ended up using is a serverless plugin called serverless-export-env.
After adding this plugin you can run serverless export-env to export all the resolved environment variables to an .env file. This resolves ssm parameters correctly and made integration testing much simpler for us.
BTW, to get the environment variables set from the .env file, use the dotenv npm package.
Credit to grishezz for finding the solution
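Once serverless export-env has written the .env file, it can also be loaded straight into a shell session before running tests. A sketch with illustrative values (a real file would contain the resolved SSM parameters):

```shell
# Stand-in for the .env produced by `serverless export-env`:
cat > .env <<'EOF'
DB_HOST=localhost
API_KEY=dummy-value
EOF

set -a      # auto-export every variable assigned from here on
. ./.env    # source the .env file into the current shell
set +a

echo "$DB_HOST"
```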
You can run node with the --require (-r) option to inject a .env file into a serverless command.
Create .env at the project root (next to package.json) and list the variables in it.
Install serverless and dotenv in the project with yarn add -D serverless dotenv.
Run a command like node -r dotenv/config ./node_modules/.bin/sls invoke.
Then, you can get environment variables in the handler process.env.XXX.
Are you looking to do mocked unit tests, or something more like integration tests?
In the first case, you don't need real values for the environment variables. Mock your database, or whatever requires environment variables set. This is actually the preferable way because the tests will run super quickly with proper mocks.
If you are actually looking to go with end-to-end/integration kind of approach, then you would do something like sls invoke, but from jest using javascript. So, like regular network calls to your deployed api.
Also, I would recommend not storing keys in serverless.yml. Use the secret: ${env:MY_SECRET} syntax (https://serverless.com/framework/docs/providers/aws/guide/variables#referencing-environment-variables) and supply the values as environment variables. If you have a CI/CD build server, you can store your secrets there.
After searching, I came up with a custom solution:
import * as data from './secrets.[stage].json'

if (process.env.NODE_ENV === 'test') {
  process.env = Object.assign(data, process.env);
}
//'data' is the object that has the Serverless environment variables
In my case the SLS environment variables live in the file secrets.[stage].json.
serverless.yml has:
custom:
  secrets: ${file(secrets.[stage].json)}
