How to use GitHub environment variables in Actions? - node.js

I added environment variables (NODE_ENV) in my 'dev' GitHub Environment.
How can I use it in my Action for my self-hosted runner on AWS?
So far, I have tried it this way:
- name: start pm2 service
  env:
    NODE_ENV: ${{ secrets.NODE_ENV }}
  run: NODE_ENV=$NODE_ENV pm2 start ./bin/www --name 'backend'
But the variable never reaches the runner on AWS; my app shows nothing.

Try and use the official example:
steps:
  - shell: bash
    env:
      NODE_ENV: ${{ secrets.NODE_ENV }}
    run: |
      echo "NODE_ENV='$NODE_ENV'"
      pm2 start ./bin/www --name 'backend'
Since NODE_ENV is exported, you might not need to prefix pm2 with NODE_ENV=$NODE_ENV.
However, since the value is still empty, that would suggest that "External Configuration/Secret Sources" is not fully supported yet for AWS App Runner, assuming that is what hosts the GitHub self-hosted runner.
That is surprising, given that App Runner has supported GitHub Actions since Nov. 2021.
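Another thing worth double-checking: secrets and variables defined on a GitHub Environment (here 'dev') only resolve when the job itself declares that environment. A minimal sketch of that, where the job name and the self-hosted runner label are assumptions, not taken from the question:
jobs:
  deploy:
    runs-on: [self-hosted]      # hypothetical label for the runner on AWS
    environment: dev            # without this, secrets from the 'dev' environment stay empty
    steps:
      - name: start pm2 service
        env:
          NODE_ENV: ${{ secrets.NODE_ENV }}
        run: pm2 start ./bin/www --name 'backend'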

Related

Cloud Run partially sets environment variables via Github Actions

I am playing with Github Actions and Cloud Run to automate my tasks.
I have set up a repository on GitHub and prepared two workflows.
One for DEV and the other for... let's call it the PROD environment.
The workflow deploys and runs a container with variables that are hardcoded:
- name: Deploy to Cloud Run
  run: |-
    gcloud run deploy ${{ env.SERVICE }} \
      --region ${{ env.REGION }} \
      --image gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }} \
      [...]
      --set-env-vars "X=testvar1" \
      --set-env-vars "Y=testvar2" \
      --set-env-vars "Z=testvar3" \
The problem I am facing is that the dev environment works fine.
Whenever I trigger the action for DEV it finishes successfully, the Cloud Run service is green, and I can request my app on GCP.
When I deploy literally the same thing to the prod environment, the step above fails.
When I dig into it, I open the Variables & Secrets tab of the Cloud Run instance for the failed service, and the env variables are missing there. When I add them manually via the GCP console and redeploy, the service works fine.
This should happen automatically, as it does for the dev environment.
What is more, when I trigger the GitHub action again for prod it replaces the Docker image, the env variables I set manually are gone, and I have to set them via the console again.
No additional security configs are made. This is just a simple Express app made in NodeJS.
Everything is literally the same when it comes to the GitHub workflows (YAML files); the Dockerfile is also the same, and so is the GCP project.
What might be the cause for that?
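For what it's worth, one way to check the same thing from the workflow instead of the console is to dump the deployed service's env vars right after the deploy step. A rough sketch, reusing the same env.SERVICE and env.REGION values as above (the --format path follows the Knative-style service spec that gcloud run services describe returns; treat it as an assumption, not a verified recipe):
- name: Show deployed env vars (debug)
  run: |-
    gcloud run services describe ${{ env.SERVICE }} \
      --region ${{ env.REGION }} \
      --format="value(spec.template.spec.containers[0].env)"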

How to set an environment variable for a Node.js build job in an Azure DevOps pipeline

I am importing some secrets from Azure Key Vault into a Variable Group for my CI/CD pipeline.
I am able to map the required secrets from Key Vault into the Variable Group using the Azure DevOps UI.
In my pipeline YAML I am able to read and print those Variable Group variables, which are Azure Key Vault secrets.
trigger:
  - dev
# define the VM image
pool:
  vmImage: "Ubuntu 16.04"
# define variables to use during the build
variables:
  - group: SecretVarGroup # it has keyvault variable 'KV_API_KEY'
  - group: PublicVarGroup # it has a variable 'API_CLIENTID'
# define the step to export key to env variable
steps:
  - script: echo $MYSECRETAPIKEY
    env:
      MYSECRETAPIKEY: $(KV_API_KEY)
  ## Run the npm build
  - script: |
      npm run build
    displayName: "npm build"
I can see the value of the 'KV_API_KEY' secret printed as *** in the build output log, which I assume means it is being consumed. I also see the value of API_CLIENTID printed in the build log as well as in the Node.js process.env object.
I was assuming the variable "MYSECRETAPIKEY" would be available in my Node.js process.env object, but it's not available.
The way I tested it: in my Node.js project's build config I have a statement that prints the process.env object. It printed all the environment variables of the pipeline build agent, including my PublicVarGroup variable 'API_CLIENTID', but I don't see my secret variable 'MYSECRETAPIKEY' in the process.env object.
env:
  MYSECRETAPIKEY: $(KV_API_KEY)
I thought the lines above would export the variable into the language-specific process environment (Node.js process.env), but they do not. How can I fix this?
# define the step to export key to env variable
steps:
  ## Run the npm build
  - script: |
      npm run build
    displayName: "npm build"
    env:
      MYSECRETAPIKEY: $(KV_API_KEY)
It looks like secrets are scoped on the agent to the individual tasks and scripts that use them. The issue was that I had the env: declaration on a separate ad-hoc task; moving it onto the same script step, as in the code above, fixed the issue.
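In other words, the env: mapping only applies to the step it is attached to, so every script that needs the secret has to map it itself. A small sketch of that, reusing the variable names from the question (the echo uses a shell check rather than printing the value):
steps:
  # the secret has to be mapped on each step that uses it
  - script: echo "key is set: ${MYSECRETAPIKEY:+yes}"
    env:
      MYSECRETAPIKEY: $(KV_API_KEY)
  - script: |
      npm run build
    displayName: "npm build"
    env:
      MYSECRETAPIKEY: $(KV_API_KEY)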

Azure App service slot and swap deployment using circleci config.yml

Azure App Service slot deployment using CircleCI config.yml.
I need to add a step to deploy to the production slot or the staging slot, then modify the config to swap the deployment.
Description: When I run this config file it deploys to the production slot of the Azure App Service by default, but I want to deploy to the stage slot first and then do a swap.
The file below is working fine, but it needs some configuration changes so that I can deploy to the stage slot and then swap that slot with the production slot.
I am using a CircleCI config.yml; below is my config.yml:
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:10.16.3
    steps:
      ## Fetch all release tags
      - checkout
      - run:
          name: Install Node.js dependencies with Npm
          command: npm install
      - run:
          name: Test
          command: CI=true npm run coverage
  dev-deploy:
    machine: true
    steps:
      - checkout
      - run:
          name: create / update infrastructure
          command: |
            docker login -u $REGISTRY_UN -p $REGISTRY_PW $REGISTRY_SERVER
            docker run --rm -it -e TF_VAR_repo_branch=$CIRCLE_BRANCH -e vaultkey=$VAULT_KEY -v `pwd`:/dp/config dockerimage/dpdeployer:beta-1.0 .dp.yaml
workflows:
  version: 2
  build_and_test_publish:
    jobs:
      - build
      # - hold: # <<< A job that will require manual approval in the CircleCI web application.
      #     type: approval # <<< This key-value pair will set your workflow to a status of "On Hold"
      #     requires: # We only run the "hold" job when test2 has succeeded
      #       - build
      - dev-deploy:
          requires:
            - build
          filters:
            branches:
              only: feature/appservice
Hmmm, this may be a good link to review: Deploy to Azure from CircleCI
But, I think it comes down to how you want to deploy your code to Azure App Service. There are a lot of different ways to do so. Checking your config, you are using Docker already. This link, https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image , talks about the steps for deploying your container as an Azure App Service.
The gist of it seems to be that you need to configure your WebApp to pull from a Docker registry per Azure app slot.
Then, after a successful build, have CircleCI push/tag the Docker image to that registry, and Azure App Service will start up the new version of the app.
For jumping between Azure App Service slots, you could have your CircleCI config push to different Docker registry image tags. This would require setting up each Azure App Service slot with a slightly different config. For example...
# Dev
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_DEV ...
# Staging
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_STAGE ...
In your CircleCI config, set up your pipeline with dev, stage, and production jobs. The dev and stage jobs would do the Docker pushes or tagging for you, and the production job does the swap as the final step. Something like this...
prod-deploy:
  machine: true   # the job needs an executor; this reuses the one from dev-deploy above
  steps:
    - run:
        name: swap staging and production slots
        command: az webapp deployment slot swap -g MyResourceGroup -n MyUniqueApp --slot staging --target-slot production
Also see: https://learn.microsoft.com/en-us/cli/azure/webapp/deployment/slot?view=azure-cli-latest#az-webapp-deployment-slot-swap
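Putting the pieces together, the commented-out hold job from the question's config can gate the swap, so the workflows section might end up looking roughly like this (the job wiring is a sketch under those assumptions, not a tested config):
workflows:
  version: 2
  build_and_test_publish:
    jobs:
      - build
      - dev-deploy:          # builds/pushes the image for the staging slot
          requires:
            - build
      - hold:                # manual approval in the CircleCI UI before swapping
          type: approval
          requires:
            - dev-deploy
      - prod-deploy:         # runs the az webapp deployment slot swap command above
          requires:
            - hold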
Hopefully this helps, and I did not misunderstand your question. 🤞
Yes, it worked!!! Thanks
Although, as per our current deployment structure, we are using a deploy script, handling the swapping from there, and then deploying the application through CircleCI.

NodeJS: How to include environment variable from CircleCI into the application

In my front-end application, I'm storing sensitive information in the environment and using it as follows:
const client_secret = process.env.CLIENT_SECRET;
For local development, I use the dotenv package to pass in the values from a .env file:
CLIENT_SECRET=XXXXX
The .env file is not committed.
I use CircleCI for my deployment process and saved the CLIENT_SECRET value in the CircleCI environment variables, but how can I pass it into the application?
This is my CircleCI config.yml:
- deploy:
    name: Deploy
    command: |
      ENVIRONMENT=${ENVIRONMENT:=test}
      VERSION=`date "+%Y-%m-%dt%H%M"`
      if [ "${ENVIRONMENT}" = "production" ]; then
        APP_FILE=app-prod.yaml
      else
        APP_FILE=app.yaml
      fi
      gcloud app deploy ${APP_FILE} --quiet --version ${VERSION}
I can do this in app.yaml:
env_variables:
  NODE_ENV: 'production'
  CLIENT_SECRET: XXXXX
But I don't want to include the sensitive information in the .yaml file and commit it. Does anyone know a way I can pass environment values into the application?
I'm using Google Cloud Platform, and the gcloud app deploy command doesn't seem to have a flag to include environment variables.
Use a bash script to generate the app.yaml with the environment variables filled in manually:
app.yaml.sh
#!/bin/bash
echo """
env: flex
runtime: nodejs
resources:
  memory_gb: 4.0
  disk_size_gb: 10
manual_scaling:
  instances: 1
env_variables:
  NODE_ENV: 'test'
  CLIENT_SECRET: \"$CLIENT_SECRET\"
"""
config.yml
steps:
  - checkout
  - run:
      name: chmod permissions
      command: chmod -R 755 ./
  - run:
      name: Copy across app.yaml config
      command: ./app.yaml.sh > ./app.yaml
  - deploy:
      name: Deploy
      command: |
        VERSION=`date "+%Y-%m-%dt%H%M"`
        gcloud app deploy app.yaml --quiet --version ${VERSION}
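One extra guard that may be worth adding: if CLIENT_SECRET is not set on the CircleCI project, the generated app.yaml silently gets an empty value. A hypothetical check step, placed before the "Copy across app.yaml config" step, can fail the build early:
- run:
    name: Check CLIENT_SECRET is set
    command: |
      if [ -z "$CLIENT_SECRET" ]; then
        echo "CLIENT_SECRET is missing from the CircleCI project environment" >&2
        exit 1
      fi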
Reading about it, it is indeed the case, as you mentioned, that the only "official" way to set environment variables is by setting them in the app.yaml - this article provides more information on it. Considering that, I searched further and found a good question from the community - accessible here - where some workarounds are provided.
For example, the one that you mentioned, about creating a second file with the values and calling it from the app.yaml, is a good one. You can then use .gitignore so the file does not exist in the repository - in case you are using one. Another option would be to use Cloud Datastore to store the information and read it in your application. This way, Datastore would keep this information secured and accessible to your application, without it becoming public within your App Engine configuration.
I just thought it a good idea to add this information here, with the article and question included, in case you want more information! :)
Let me know if the information helped you!

.ebextensions with CodePipeline and Elastic Beanstalk

I started working on my first CodePipeline with a Node.js app which is hosted on GitHub. I would like to create a simple pipeline as follows:
The GitHub repo triggers the pipeline
The test env (Elastic Beanstalk app) is built from an S3 .zip file
The test env runs npm test and npm lint
If everything is OK, the QA env (another EB app) is built
For the above pipeline I've created .config files under the .ebextensions directory:
I would like to use npm install --production for the QA and PROD envs, but it seems that EC2 can't find node or npm. I checked the logs: EC2 triggers npm install by default in a temporary folder, then it fails on my first script, and the app directory is always empty.
container_commands:
  install-dev:
    command: "npm install"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
  install-prod:
    command: "npm install --production"
    test: "[ \"$NODE_ENV\" != \"TEST\" ]"
    ignoreErrors: false
Is it possible to run unit tests and linting without Jenkins?
container_commands:
  lint:
    command: "npm run lint"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
  test:
    command: "npm run test"
    test: "[ \"$NODE_ENV\" = \"TEST\" ]"
    ignoreErrors: false
I set NODE_ENV for each Elastic Beanstalk instance. No matter what I do, the pipeline fails every time because npm is not recognized; how is that possible if I'm running 64bit Amazon Linux with Node.js? What's more, I cannot find any examples about CodePipeline with Node.js in the AWS docs. Thanks in advance!
If you're using AWS for CI/CD, you can use CodeBuild. However, Github provides a great feature called Actions for running Unit Tests, which I find much simpler than AWS. Anyway, I will walk you through both examples:
Using AWS for running Unit Tests
Essentially, you could create a new stage in your CodePipeline and configure CodeBuild to run the unit tests, e.g.
First, add a buildspec.yml file in the root folder of your app; you can use the following example:
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
    commands:
      - echo Installing Mocha globally...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing dependencies...
      - npm install
      - npm install unit.js
  build:
    commands:
      - echo Build started on `date`
      - echo Run Unit Tests and so on
      - npm run test
      - npm run lint
  post_build:
    commands:
      - echo Build completed on `date`
# THIS IS OPTIONAL
artifacts:
  files:
    - app.js
    - package.json
    - src/app/*
    - node_modules/**/*
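As for wiring that buildspec into the pipeline: if the pipeline itself happens to be defined in CloudFormation, the extra test stage might look roughly like the following sketch (the stage, action, CodeBuild project, and artifact names are hypothetical, not taken from the question):
- Name: Test
  Actions:
    - Name: UnitTests
      ActionTypeId:
        Category: Test
        Owner: AWS
        Provider: CodeBuild
        Version: "1"
      Configuration:
        ProjectName: my-node-tests      # hypothetical CodeBuild project using the buildspec above
      InputArtifacts:
        - Name: SourceOutput            # hypothetical name of the source stage's output artifact
      RunOrder: 1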
You can find everything you need at BackSpace Academy; this course is free:
AWS DevOps CI/CD - CodePipeline, Elastic Beanstalk and Mocha
Using Github for running Unit Tests
You could create your custom workflow using GitHub Actions; it will automatically set up everything you need in your root folder, e.g.
After choosing the appropriate workflow, it will automatically generate the file ".github/workflows/nodejs.yml".
So it will look like this:
name: Node CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [8.x, 10.x, 12.x]
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm test
        env:
          CI: true
I hope you could find everything you need in this answer. Cheers
Have you incorporated CodeBuild into your pipeline?
You should:
1) Create a pipeline whose source is your GitHub account. Go through the setup procedure so that commits on a particular branch trigger the CodePipeline.
2) Create a test stage in your CodePipeline which leverages the CodeBuild service. To run your Node tests, you might need to provide a configured build environment, and you probably also need to provide a buildspec file that specifies the tests to run, etc.
3) Assuming that the test stage passes, the pipeline can continue to another stage which is linked to an Elastic Beanstalk app environment that supports the Node platform. These environments are purely for artifacts that have passed testing, so I see no need for the .ebextensions commands written above.
Have a read of what CodeBuild can do to help you run tests for Node,
https://aws.amazon.com/codebuild/
Good luck!
