Terraform Cloud env vars not visible in registry module - terraform

I've written some custom modules which use the method described here to get the value of workspace environment variables. Testing the module code "locally" (the test code accessing the module in the same GitHub repo) works fine in TF Cloud. But once the module is published to our private registry and I try to reference it with other code, the module doesn't seem to be able to see the same env variables defined in that workspace. For example, I'm trying to get the value for this variable using the script env.sh:
#!/bin/sh
cat <<EOF
{
  "env": "$env"
}
EOF
I have this code in the shared module:
data "external" "env_vars" {
program = ["${path.module}/env.sh"]
}
And then I use this to get the env variable value: data.external.env_vars.result["env"]. The variable "env" is defined in the TF Cloud workspace.
However, when the new code runs in TF Cloud, the module is not getting anything for "env". Is there a limitation on shared modules from a registry having visibility of workspace variables?
The module code should see the workspace-defined value for "env" but it's just empty.
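For reference, the consuming workspace pulls the module from the private registry roughly like this; a minimal sketch where the organization, module, and output names are placeholders, not taken from the question:

module "shared" {
  source  = "app.terraform.io/my-org/my-module/aws"
  version = "~> 1.0"
}

# assumes the shared module exposes the looked-up value via an output named "env"
output "env_from_module" {
  value = module.shared.env
}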

Related

How can I get rid of the "Warning: Invalid configuration encountered" in serverless.yml?

Currently I'm trying to break down my serverless service into multiple services to get over the CloudFormation resource limit.
My current project structure is as follows:
aws-backend
  functions
    workers
      serverless.yml   // workers service
  .env.local
  .env.dev
  serverless.yml       // rest of the functions in here
In my workers service, I'm trying to reference the .env.* files in the root folder using variables.
My issue is when I use the following syntax
${env:SLS_AWS_REGION}
I get the following error:
Error:Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "provider.region": Value not found at "env" source
but when I use the following syntax:
${../../env:SLS_AWS_REGION}
It works but I get a warning:
Warning: Invalid configuration encountered
at 'package.individually': must be boolean
at 'provider.region': must be equal to one of the allowed values [us-east-1, etc...]
How can I get rid of this warning? Am I even using the correct syntax?
Thanks
As for this error:
Error:Cannot resolve serverless.yml: Variables resolution errored with:
- Cannot resolve variable at "provider.region": Value not found at "env" source
You get this error because the Framework cannot find the SLS_AWS_REGION environment variable. The env variable source doesn't read from .env files; it reads from the environment variables of the process.
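For instance, exporting the variable in the shell before invoking the Framework makes ${env:SLS_AWS_REGION} resolvable (the region value here is just an example):

export SLS_AWS_REGION=us-east-1
sls print   # prints the fully resolved configuration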
As for this syntax:
${../../env:SLS_AWS_REGION}
This does not work because env is a valid variable source, but ../../env is not.
You have two options here:
Ensure that the contents of the .env file(s) are exported, so the variables defined in these files are actually set as environment variables before running serverless commands.
Set useDotenv: true in your serverless.yml file so the .env files will be loaded automatically with dotenv (see the sketch below). Please see the docs for reference on how it works: https://www.serverless.com/framework/docs/environment-variables
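A minimal sketch of option 2 (only the relevant serverless.yml keys shown; the rest of the provider block is omitted):

useDotenv: true

provider:
  name: aws
  region: ${env:SLS_AWS_REGION}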
According to the plugin documentation, you should run sls deploy with --stage or --env (deprecated in Serverless >=3.0.0) or with NODE_ENV corresponding to the .env file name. If you run it without any of those, it will default to development, and the plugin will look for files named .env, .env.development, and .env.development.local.
Note that the Serverless Framework's native .env file resolution works differently from the plugin's.
The framework looks for .env and .env.{stage} files in the service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If the stage is not explicitly defined, it defaults to dev.
I believe the plugin takes precedence here.
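For example, with the framework-native resolution described above:

sls deploy --stage dev   # loads .env.dev if present, otherwise falls back to .env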

Terraform Modules not working as expected

We are using a private GitHub and Terraform Cloud for our projects. Everything is able to talk to each other, so there is no issue there. However, I'm trying to create modules for a project I started. I was able to make it work as regular Terraform files, but when I try to convert to the module system I am having issues with getting the state imported.
We have a separate repository called tf-modules. In this repository, my directory setup:
> root
>> mymodule
>>> lambda.tf
>>> eventbridge.tf
>>> bucket.tf
These files manage the software being deployed in our AWS environment. They are being used across multiple environments for each of our customers (each separated out by environment [qa, dev, prod]).
In my terraform files, I have:
> root
>> CUSTNAME
>>> mymodule
>>>> main.tf
Inside main.tf I have:
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git"
}
In my dev environment, everything is set up, so I need to import the state. However, it's not detecting the resources at all. In the .terraform directory, it is downloading the entire repository (the root with the readme.md and all).
I'm fairly new to Terraform. Am I approaching this wrong or misunderstanding?
I am using the latest version of Terraform.
Since there is a sub-directory "mymodule", you should specify the whole path.
module "mymodule" {
source = "git::https://github.com/myprivaterepo/tf-modules.git//mymodule"
}
Refer to module sources - sub-directories.
Example: git::https://example.com/network.git//modules/vpc
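Once the source points at the subdirectory, existing resources can be imported using the full module resource address. A hypothetical sketch (the resource type and names must match what is actually declared in lambda.tf, bucket.tf, etc.):

terraform init
terraform import module.mymodule.aws_lambda_function.worker my-function-name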

Property env is not allowed in launch.json [VSCode]

All I've done is initialize a template Azure Functions project in VS Code, and when I try to set run configuration environment variables via launch.json, VS Code directly warns me that it's not "allowed".
Furthermore, even when I try to run my .ps1 with env anyway, it doesn't work because I have something like
$variable = $env:AWS_REGION
Write-Host $variable
and the terminal output is blank, so clearly it's not working.
It's not possible directly at the moment; see Issue 1472.
I can see, however, that you are trying to start a local version of Azure Functions, so you could declare your environment variables in local.settings.json or in profile.ps1.
Edit: This just means it's available while running the local instance of Azure Functions, and not available in the integrated PowerShell console. local.settings.json is also the local version of the app configuration in Azure, and you should make sure to include this file in your .gitignore if you are using Git.
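A minimal sketch of what that local.settings.json could look like (the AWS_REGION entry matches the variable read in the question; the value is a placeholder):

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "powershell",
    "AWS_REGION": "us-east-1"
  }
}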

Cloud9 does not expose bash_profile exports in nodejs lambda

I have a Cloud9 environment spun up and have modified my ~/.bash_profile to export a value at the end of the file.
export foo="hello world"
I run . ~/.bash_profile and then echo $foo and I see hello world output in the terminal.
I then created a NodeJS Lambda with API Gateway. I run the API Gateway locally in Cloud9 and attempt to read the environment variables:
console.log(process.env)
I see a list of variables available to me that AWS has defined. My export is not listed there however. Since I will be using environment variables when my Lambda is deployed, I want to test it with environment variables defined in the Cloud9 environment.
Is there something specific I have to do in order to get the Lambda to read my .bash_profile exports?
AWS Cloud9's Lambda plugin is backed by SAM Local, which uses Docker: https://github.com/awslabs/aws-sam-cli. By default, this means that the ~/.bash_profile file is not used by Lambda; you'll want to load this in manually.
Please see the Using the AWS Serverless Application Model (AWS SAM) article, which describes how to work with environment variables in SAM (and therefore also in Cloud9).
In summary, put environment variables into the template.yaml file (present in the root folder of your app) like below:
Properties:
  # ... tons of other properties here, add yours at the end
  Environment:
    Variables:
      MY_ENV_VARIABLE: 'This is my awesome env variable value'
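With that in place, the variable shows up in the same process.env the question already logs; a quick check inside the handler (variable name taken from the snippet above):

// somewhere in the NodeJS Lambda handler
console.log(process.env.MY_ENV_VARIABLE) // 'This is my awesome env variable value'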

terraform sub-module changes not being recognized in plan or apply

I have a Terraform repo that looks something like this:
infrastructure
  global
    main.tf
The main.tf file references a module in a remote repository:
module "global" {
source = "git#github.com/company/repo//domain/global"
}
and the above module's main.tf references another module within the same remote repo:
module "global" {
source = "git#github.com/company/repo//infrastructure/global"
}
If I make a change in this module that's 3 levels deep, and then run terraform get and terraform init in the top-level Terraform project followed by terraform plan, those changes aren't picked up.
Is there any reason for this?
I needed to do one of the following:
1) when running terraform init, pass the -upgrade flag
2) or, when running terraform get, pass the -update flag
Either one downloads the latest versions of the requested modules.
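For example, from the top-level project, either command refreshes the cached module code before planning:

terraform get -update
# or
terraform init -upgrade
terraform plan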
