How Node.js process.env works

I have an ENVIRONMENT variable which resolves the current stage inside the container in Kubernetes.
When I reference the variable in code, it always prints "dev", even though the actual value inside the container is "stage".
My Helm variables:
profiles:
  - node
owner:
  group: gcp-admin # change to your own group
notify:
  slack:
    channelName: XXXXXXXX-ingestion # change to your own slack channel
build:
  docker:
    app:
      runtime: node
      buildArgs:
        nodeVersion: 14.17.1
      buildDir: '.'
deploy:
  helm:
    values:
      env:
        ENVIRONMENT: stage
My JavaScript code goes like this:
env: process.env.ENVIRONMENT
When I write console.log(env), it always prints dev.
(Screenshot: the output of kubectl describe pod for the container.)

Your configuration seems outdated (verify the version). You can refer to the docs below.
env:
  - name: ENVIRONMENT
    value: "stage"
Read more here:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
https://phoenixnap.com/kb/helm-environment-variables
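Independent of the manifest, it helps to confirm what the Node.js process actually receives at startup. A minimal diagnostic (hypothetical file name, not from the question) that can be run inside the pod:

// check-env.js -- hypothetical diagnostic; run inside the container, e.g.
// kubectl exec <pod> -- node check-env.js
console.log('ENVIRONMENT =', process.env.ENVIRONMENT ?? '(not set)');

If this prints "(not set)" even though kubectl describe pod shows the variable, the problem is in how the Helm values reach the container; if it prints "stage", the problem is in how the application code captures the value.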

I did something similar, but the env name was "APP_ENV" instead, and it works:
helm:
  values:
    env:
      APP_ENV: "staging" # or "development" or "production"
and in code:
if (process.env.APP_ENV == "staging") {
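One more thing worth checking in the original question: if the code applies a fallback anywhere, an unset variable is silently masked. This is only an assumption about code not shown, but the pattern is common:

// Sketch (assumed pattern, not from the question): this prints "dev"
// whenever ENVIRONMENT is missing from the process environment at runtime.
const env = process.env.ENVIRONMENT || 'dev';
console.log(env);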

Related

How to pass global variable value to next stage in GitLab CI/CD

Based on the GitLab documentation, you can use the variables keyword to pass CI/CD variables to a downstream pipeline.
I have a global variable DATABASE_URL.
The init stage retrieves the connection string from AWS Secrets Manager and sets it to DATABASE_URL.
Then, in the deploy stage, I want to use that variable to deploy the database. However, in the deploy stage the variable's value is empty.
variables:
  DATABASE_URL: ""

default:
  tags:
    - myrunner

stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "$DATABASE_URL"

deploy-dev-database:
  image: node:14
  stage: deploy
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
The init job echoes the DATABASE_URL. However, DATABASE_URL is empty in the deploy stage.
Questions:
1. How do I pass the global variable across stages?
2. The Node.js database deployment process will use this variable as process.env.DATABASE_URL; will it be available to the Node.js environment?
Variables are resolved by precedence: when you print a variable inside a job, GitLab looks for it inside the job itself first, then moves up to what's defined in the CI YAML file (the variables: section), then the project, group, and instance. A job will never look at other jobs.
If you want to pass a variable from one job to another, you would want to make sure you don't set the variable at all and instead pass the variable from one job to another following the documentation on passing environment variables to another job.
Basically,
Make sure to remove DATABASE_URL: "" from the variables section.
Make the last line of your init-job script - echo "DATABASE_URL=$DATABASE_URL" >> init.env (the dotenv report expects KEY=value lines). You can call your .env file whatever you want, of course.
Add an artifacts: section to your init-job.
Add a dependencies: or needs: section to your deploy-dev-database job to pull the variable.
You should end up with something like this:
stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "DATABASE_URL=$DATABASE_URL" >> init.env
  artifacts:
    reports:
      dotenv: init.env

deploy-dev-database:
  image: node:14
  stage: deploy
  dependencies:
    - init-job
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
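As for question 2: variables exported through a dotenv report become ordinary job environment variables, so the Node.js process in the deploy job sees them via process.env.DATABASE_URL. A minimal sketch of a Sequelize CLI config that uses it (hypothetical file, assuming a Postgres dialect):

// config/config.js -- sketch; sequelize-cli reads the connection URL from
// the environment variable named in use_env_variable.
module.exports = {
  development: {
    use_env_variable: 'DATABASE_URL',
    dialect: 'postgres', // assumption: adjust to your actual database
  },
};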

Pass environment variable on the fly to Serverless invoke local function

I have the following Serverless configuration file:
service: aws-node-scheduled-cron-project
frameworkVersion: '2 || 3'

plugins:
  - serverless-plugin-typescript

provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
  region: us-west-2
  # Imports all the environment variables from the env.yml file
  environment: ${file(./env.yml)}

functions:
  ...
env.yml
DATABASE_HOST: 127.0.0.1
DATABASE_USER: root
DATABASE_PASSWORD: your_mysql_root_password
DATABASE_TABLE: laravel
DATABASE_PORT: 3306
DATABASE_USE_SSL: false
NOTIFICATION_SERVICE_URL: http://localhost:4000
Now I would like to change DATABASE_TABLE on the fly when invoking the function locally. I have tried:
export DATABASE_TABLE=1-30-task-template-schedule && npx serverless invoke local --function notifyTodoScheduleFullDay
but the variable DATABASE_TABLE gets overwritten by the one in env.yml. Is it possible to do it via command line?
In your yml file you can declare the table name as ${opt:tablename, 'DEFAULT'}.
This means you pass the name as a parameter in the terminal command, like serverless deploy ... --tablename NAME_OF_THE_TABLE. If you do not pass it as a parameter, the default value is used. This lets you set the name on the fly.
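Applied to the question's setup, a minimal sketch (assuming Framework v2-style custom CLI options; v3 is stricter about unknown flags) could be:

# env.yml -- fall back to the original table name when --tablename is not passed
DATABASE_TABLE: ${opt:tablename, 'laravel'}

and then:

npx serverless invoke local --function notifyTodoScheduleFullDay --tablename 1-30-task-template-schedule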

Accessing environment configs defined in serverless.yaml in a standalone Node.js script

I have recently started working on a project in which we are using the Serverless Framework. We are using Docker to make the dev environment easier to set up.
As part of this Docker setup we have created a script that creates S3 buckets and tables, among other things. We were previously defining environment variables in the docker-compose file and accessing them in our Node.js app. For deployment to other environments, our DevOps team defined a few environment variables in the serverless.yaml file, resulting in environment configs being present in two places. We are now planning to move all the environment configs defined in our docker-compose file to serverless.yaml. This works well for our lambda functions, as they are able to read these configs, but it doesn't work for the standalone setup script we have written.
I tried using a plugin (serverless-scriptable-plugin) in an attempt to read these env variables, but I am still unable to do so.
Here is my serverless.yaml file
service:
  name: my-service

frameworkVersion: '2'
configValidationMode: error

provider:
  name: aws
  runtime: nodejs14.x
  region: 'us-east-1'
  profile: ${env:PROFILE, 'dev'}
  stackName: stack-${self:provider.profile}
  apiName: ${self:custom.environment_prefix}-${self:service.name}-my-api
  environment: ${self:custom.environment_variables.${self:provider.profile}}

plugins:
  - serverless-webpack
  - serverless-scriptable-plugin
  - serverless-offline-sqs
  - serverless-offline

functions:
  myMethod:
    handler: handler.myHandler
    name: ${self:custom.environment_prefix}-${self:service.name}-myHandler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MyQueue
              - Arn

resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:custom.queuename}
        Tags:
          - Key: product
            Value: common
          - Key: service
            Value: common
          - Key: function
            Value: ${self:service.name}
          - Key: region
            Value: ${env:REGION}

package:
  individually: true

custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverless-offline:
    host: 0.0.0.0
    port: 3000
  serverless-offline-sqs:
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324
    region: ${self:provider.region}
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
  localstack:
    stages:
      - local
    lambda:
      mountCode: true
    debug: true
  environment_prefixes:
    staging: staging
    production: production
    dev: dev
  environment_prefix: ${self:custom.environment_prefixes.${self:provider.profile}}
  queuename: 'myQueue'
  environment_variables:
    dev:
      AWS_ACCESS_KEY_ID: test
      AWS_SECRET_ACCESS_KEY: test
      BUCKET_NAME: my-bucket
      S3_URL: http://localstack:4566
      SLS_DEBUG: '*'
  scriptable:
    commands:
      setup: node /app/dockerEntrypoint.js
In my Dockerfile I try executing the script using sls setup as the CMD. I initially thought that running it through the sls command might expose the environment variables defined in the serverless.yaml file, but that doesn't seem to happen.
Is there any other way this can be achieved? I am trying to access these variables using process.env, which works for the lambdas but not for my standalone script. Thanks!
There's not a good way to get access to these environment variables if you're running the lambda code as a script.
The Serverless Framework injects these variables into the Lambda function runtime configuration via CloudFormation.
It does not insert/update the raw serverless.yml file, nor does it somehow intercept calls to process.env via the node process.
You could use the scriptable plugin to run after packaging and then export each variable into your local Docker environment, but that seems pretty heavy for the variables in your env.
Instead, you might consider something like dotenv, which will load variables from a .env file into your environment.
There is a serverless-dotenv plugin you could use, and then your script could also call dotenv before running.
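A minimal sketch of that approach, assuming the shared values are moved out of serverless.yaml into a .env file that both the framework (via a dotenv plugin) and the standalone script read:

// dockerEntrypoint.js -- sketch; requires `npm install dotenv`
require('dotenv').config(); // loads .env into process.env

console.log('BUCKET_NAME =', process.env.BUCKET_NAME);
console.log('S3_URL =', process.env.S3_URL);
// ...create the S3 buckets and tables using these values...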

How to create an env file in GCP?

I am trying to create a .env file in GCP Cloud Build.
The cloudbuild.yaml file
steps:
  - name: 'python:3.8'
    entrypoint: python3
    args: ['-m', 'pip', 'install', '-t', '.', '-r', 'requirements.txt']
  - name: 'python:3.8'
    entrypoint: python3
    args: ['touch', '.env']
    env:
      - 'DATABASE_HOST=$$DB_HOST'
      - 'DATABASE_USER=$$DB_USER'
      - 'DATABASE_PASSWORD=$$DB_PASS'
      - 'DATABASE_NAME=$$DB_NAME'
      - 'FIREBASE_CREDENTIALS=$$FIRE_CRED'
  - name: 'python:3.8'
    entrypoint: python3
    args: ['./manage.py', 'collectstatic', '--noinput']
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]

timeout: "1600s"
I've tried several ways to do this, but it isn't solved yet. I created substitution variables in the GCP trigger and used them in env. The problem is this portion of the code:
- name: 'python:3.8'
  entrypoint: python3
  args: ['touch', '.env']
  env:
    - 'DATABASE_HOST=$$DB_HOST'
    - 'DATABASE_USER=$$DB_USER'
    - 'DATABASE_PASSWORD=$$DB_PASS'
    - 'DATABASE_NAME=$$DB_NAME'
    - 'FIREBASE_CREDENTIALS=$$FIRE_CRED'
Thank you in advance.
Update
I used args: ['./create-env.py'] instead of args: ['touch', '.env'] and wrote the environment variables to .env.
OK, let's start from a correct basis. In Cloud Build, each step runs a container. The runtime is based on an image (the name) and several parameters (entrypoint, args, env, ...).
env allows you to define environment variables in the runtime environment. For example:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "echo"
  args: ["$$ENV_VAR"]
  env:
    - 'ENV_VAR=WORKS'
This will display WORKS. The entrypoint echo takes $$ENV_VAR as its argument, and the value of that environment variable in the runtime environment equals WORKS.
Note the double $: it tells Cloud Build not to look in the substitution variables (single $) but in the runtime environment variables.
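For contrast, a single $ is resolved from the trigger's substitution variables before the step runs; a sketch (assuming a user-defined substitution, which must start with an underscore):

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "echo"
  # ${_MY_SUBSTITUTION} is replaced by Cloud Build at build time
  args: ["${_MY_SUBSTITUTION}"]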
A final word: from one step to the next, the runtime is destroyed and recreated; only the /workspace directory is kept. All other files and env vars are discarded.
In the end, I'm not sure what you want to achieve:
Create a .env file from your env vars? I don't understand the entrypoint and the args of your step.
Load env vars from a .env file? If so, it's useless, because the context is discarded between steps.
So, if you need more guidance, explain the ultimate goal of your code and I will update this answer accordingly.
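For reference, if the goal is the first option (creating a .env file from env vars), one hedged sketch: the python3 entrypoint cannot run touch, so use a shell entrypoint and write the file into /workspace, which survives across steps (substitution names assumed from the question):

- name: 'python:3.8'
  entrypoint: bash
  args:
    - -c
    - |
      # $$ reads the step's runtime env vars, not substitutions
      echo "DATABASE_HOST=$$DATABASE_HOST" >> .env
      echo "DATABASE_USER=$$DATABASE_USER" >> .env
  env:
    - 'DATABASE_HOST=${_DB_HOST}'
    - 'DATABASE_USER=${_DB_USER}'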

Secret environment variables in Cloud Build (without files), how?

I am creating a CI/CD pipeline in Cloud Build for a very basic Node.js app with deployment to GCP App Engine standard.
Non-secret environment variables are stored in the app.yaml file. But of course I don't want to put my secrets there. In fact I don't want to put them in any file anywhere (encrypted or not), since this file will end up on the App Engine instance and can be "viewed" by a "bad admin". There are many samples out there that suggest encrypting/decrypting complete files (and sometimes even code), but I don't want to go down that path.
I am looking for a way to set secret environment variables "in memory" as part of the CI/CD pipeline. Anyone?
I added non-secrets in the app.yaml file (env_variables) - works fine
Added encrypted secrets into my cloudbuild.yaml file (secrets) - no error
Added secretEnv: to a build step, but the values don't end up as process.env.[KEY] in App Engine
cloudbuild.yaml
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
    dir: "appengine/hello-world/standard"
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "test-app.yaml"]
    dir: "appengine/hello-world/standard"
    secretEnv: ['API_KEY', 'API_URL']

secrets:
  - kmsKeyName: projects/XXXXXXXX/locations/global/keyRings/customintegrations-secrets/cryptoKeys/integration-secrets
    secretEnv:
      API_KEY: XXQAoHgKKoHBKOURrUU2RqU+ki8XyqmTjz+ns+MEWp5Kx3hQBpgSQgATFQ5yRdW4m1TLNqNRIdHIqVJi8tn8jFrtlHIEouOzNDe/ASlOT0ZQBfl9Rf7xlvOHAa667poBq2hEoMNvOclxUQ==
      API_URL: YYQAoHgKKklo08ZsQF+/8M2bmi9nhWEtb6klyY4rNthUhSIhQ8oSQQATFQ5ywKOxaM/TLwGDmvMtCpl/1stXOOK0kgy42yipYbw/J/QZL68bMat1u4H3Hvp/GMbUVIKEb9jwUtN2xvbL
I was hoping that secretEnv: ['API_KEY', 'API_URL'] would make the decrypted values accessible in code (process.env.API_KEY) in App Engine.
Here is a full tutorial on how to securely store env vars in your cloud build (triggers) settings and import them into your app.
Basically there are three steps:
Add your env vars to the 'variables' section in one of your build trigger settings
(Screenshot: where to add variables in build trigger settings)
By convention variables set in the build trigger must begin with an underscore (_)
Configure cloudbuild.yaml (on the second step in the code example) to read in variables from your build trigger, set them as env vars, and write all env vars in a local .env file
Add cloudbuild.yaml (below) to your project root directory
steps:
  - name: node:10.15.1
    entrypoint: npm
    args: ["install"]
  - name: node:10.15.1
    entrypoint: npm
    args: ["run", "create-env"]
    env:
      - 'MY_SECRET_KEY=${_MY_SECRET_KEY}'
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]

timeout: "1600s"
Add create-env script to package.json
"scripts": {
"create-env": "printenv > .env"
},
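Note that printenv writes every variable visible to the step, including Cloud Build's own. If you only want specific keys, a narrower script (a sketch, assuming the single key from this example) also works, since npm scripts run through a shell:

"scripts": {
  "create-env": "echo \"MY_SECRET_KEY=$MY_SECRET_KEY\" > .env"
},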
Read env vars from .env to your app (config.js)
Install dotenv package
npm i dotenv -S
Add a config.js to your app
// Import all env vars from the .env file
require('dotenv').config()

const MY_SECRET_KEY = process.env.MY_SECRET_KEY
module.exports = { MY_SECRET_KEY }
console.log(MY_SECRET_KEY) // => Hello
Done! Now you can deploy your app by triggering the Cloud Build, and your app will have access to the env vars.
Using secrets from Secret Manager
Your sample would become:
steps:
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
    dir: "appengine/hello-world/standard"
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "test-app.yaml"]
    dir: "appengine/hello-world/standard"
    secretEnv: ['API_KEY', 'API_URL']

availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/<secret name>/versions/latest
      env: API_KEY
    - versionName: projects/$PROJECT_ID/secrets/<secret name 2>/versions/latest
      env: API_URL
Add a build step in your cloudbuild.yaml that appends the secret values to your app.yaml file:
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    secretEnv: ['API_KEY', 'API_URL']
    entrypoint: 'bash'
    args:
      - -c
      - |
        echo $'\n  API_KEY: '$$API_KEY >> app.yaml
        echo $'\n  API_URL: '$$API_URL >> app.yaml
        gcloud app deploy

availableSecrets:
  secretManager:
    - versionName: projects/012345678901/secrets/API_KEY
      env: 'API_KEY'
    - versionName: projects/012345678901/secrets/API_URL
      env: 'API_URL'
See the following reference app.yaml:
runtime: nodejs
service: serviceone
env_variables:
  PROJECT_ID: demo
  PORT: 8080
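Assuming env_variables is the last block in app.yaml, the appended file would end up looking roughly like this (decrypted values shown as placeholders):

runtime: nodejs
service: serviceone
env_variables:
  PROJECT_ID: demo
  PORT: 8080
  API_KEY: <decrypted API_KEY value>
  API_URL: <decrypted API_URL value>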
Reference by: https://stackoverflow.com/users/13763858/cadet
