Cloud Run partially sets environment variables via GitHub Actions - Node.js

I am playing with GitHub Actions and Cloud Run to automate my tasks.
I have set up a repository on GitHub and prepared two workflows.
One for DEV and the other for, let's call it, the PROD environment.
The workflow deploys and runs the container with hardcoded variables:
- name: Deploy to Cloud Run
  run: |-
    gcloud run deploy ${{ env.SERVICE }} \
      --region ${{ env.REGION }} \
      --image gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }} \
      [...]
      --set-env-vars "X=testvar1" \
      --set-env-vars "Y=testvar2" \
      --set-env-vars "Z=testvar3" \
The problem I am facing is that the dev environment works fine.
Whenever I trigger the action for DEV it ends successfully, the Cloud Run service is green and I can request my app on GCP.
When I deploy literally the same thing to the prod environment, the step above fails.
When I debug further and open the Variables & Secrets tab on the Cloud Run instance for the failed service, these env variables are missing. When I add them manually via the GCP console and redeploy, the service works fine.
This should happen automatically, as it does on the dev environment.
What is more, when I trigger the GitHub action again for prod, it replaces the Docker image and the env variables I set manually are gone, so I have to set them via the console again.
No additional security configs are in place. This is just a simple Express app written in Node.js.
Everything is literally the same when it comes to the GitHub workflows (YAML files); the Dockerfile is the same, and so is the GCP project.
What might be the cause of that?

Related

How to use a GitHub environment variable in Actions?

I added an environment variable (NODE_ENV) in my 'dev' GitHub Environment.
How can I use it in my Action on my self-hosted runner on AWS?
For now, I have tried it this way:
- name: start pm2 service
  env:
    NODE_ENV: ${{ secrets.NODE_ENV }}
  run: NODE_ENV=$NODE_ENV pm2 start ./bin/www --name 'backend'
But I can't get the env var on AWS; my app shows nothing.
Try to use the official example:
steps:
  - shell: bash
    env:
      NODE_ENV: ${{ secrets.NODE_ENV }}
    run: |
      echo "NODE_ENV='$NODE_ENV'"
      pm2 start ./bin/www --name 'backend'
Since NODE_ENV is exported to the step's environment, you might not need to prefix pm2 with NODE_ENV=$NODE_ENV.
However, since the value is still empty, that would confirm that "External Configuration/Secret Sources" is not yet fully supported for AWS App Runner, assuming that is what is used for the GitHub self-hosted runner execution.
That is despite the fact that App Runner has supported GitHub Actions since Nov. 2021.
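For completeness, here is how the example above might sit inside a full job. Since the variable was added to the 'dev' GitHub Environment, the job also has to reference that environment for the value to resolve; the job name and runner label below are illustrative, not taken from your workflow:
jobs:
  deploy:
    runs-on: self-hosted
    environment: dev   # makes the 'dev' Environment secrets/variables available to this job
    steps:
      - shell: bash
        env:
          NODE_ENV: ${{ secrets.NODE_ENV }}
        run: |
          echo "NODE_ENV='$NODE_ENV'"
          pm2 start ./bin/www --name 'backend'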

Set up CI/CD using Google Cloud Run and GitLab

I am very new to CI/CD.
I have to set up a pipeline that connects the GitLab repo to Cloud Run.
I currently host my website on Cloud Run and the code in GitLab, deploying with manual commands.
I have tried to find documents and videos, but they are either not very clear or I am not able to understand them. If anyone can provide good documents or guide me, I'll really appreciate it.
Here's my solution for your problem.
You have to configure your Google Cloud project:
Enable the Cloud Run API and Cloud Build API services.
Create a Google Service Account with the correct permissions (Cloud Build Service Agent, Service Account User, Cloud Run Admin and Viewer).
Generate a credential file from your Service Account; it will output a JSON file (see the sketch below).
Set up the GitLab CI/CD variables GCP_PROJECT_ID (your project ID) and GCP_SERVICE_ACCOUNT (the content of the previously generated JSON).
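For reference, creating such an account and its key with the gcloud CLI might look roughly like this; the project ID and account name are placeholders, and the role IDs are only an approximate mapping of the roles listed above:
# create the deployer service account (names are hypothetical)
gcloud iam service-accounts create gitlab-deployer --project my-project-id
# grant the roles mentioned above (role IDs are approximate mappings)
for role in roles/run.admin roles/iam.serviceAccountUser roles/viewer roles/cloudbuild.builds.editor; do
  gcloud projects add-iam-policy-binding my-project-id \
    --member "serviceAccount:gitlab-deployer@my-project-id.iam.gserviceaccount.com" \
    --role "$role"
done
# export the JSON key whose content goes into the GCP_SERVICE_ACCOUNT variable
gcloud iam service-accounts keys create gcloud-service-key.json \
  --iam-account gitlab-deployer@my-project-id.iam.gserviceaccount.com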
Set up your .gitlab-ci.yml like this:
variables:
  SERVICE_NAME: 'your-service-id'
image: google/cloud-sdk:latest
before_script:
  - apt-get --assume-yes install npm
  - npm install
  - npm run build
deploy:
  stage: deploy
  only:
    - master
  script:
    - echo $GCP_SERVICE_ACCOUNT > gcloud-service-key.json
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud auth configure-docker
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set run/region europe-west3
    - gcloud run deploy $SERVICE_NAME --source . --allow-unauthenticated
If you have worked with GitLab CI/CD (.yml files) and Cloud Run (locally) before, you will understand the steps easily.
This example assumes you have a Node.js project.

Azure App Service slot and swap deployment using CircleCI config.yml

Azure App Service slot deployment using a CircleCI config.yml.
I need to add a step to deploy to the production slot or the staging slot, and then modify the config to swap the deployment.
Description: when I run this config file it deploys to the production slot of the Azure App Service by default, but I want to deploy to the stage slot first and then do a swap.
The file below works fine, but it needs some configuration changes so that I can deploy to the stage slot and then swap it into the production slot.
I am using a CircleCI config.yml; here it is:
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:10.16.3
    steps:
      ## Fetch all release tags
      - checkout
      - run:
          name: Install Node.js dependencies with Npm
          command: npm install
      - run:
          name: Test
          command: CI=true npm run coverage
  dev-deploy:
    machine: true
    steps:
      - checkout
      - run:
          name: create / update infrastructure
          command: |
            docker login -u $REGISTRY_UN -p $REGISTRY_PW $REGISTRY_SERVER
            docker run --rm -it -e TF_VAR_repo_branch=$CIRCLE_BRANCH -e vaultkey=$VAULT_KEY -v `pwd`:/dp/config dockerimage/dpdeployer:beta-1.0 .dp.yaml
workflows:
  version: 2
  build_and_test_publish:
    jobs:
      - build
      # - hold: # <<< A job that will require manual approval in the CircleCI web application.
      #     type: approval # <<< This key-value pair will set your workflow to a status of "On Hold"
      #     requires: # We only run the "hold" job when test2 has succeeded
      #       - build
      - dev-deploy:
          requires:
            - build
          filters:
            branches:
              only: feature/appservice
Hmmm, this may be a good link to review: Deploy to Azure from CircleCI
But I think it comes down to how you want to deploy your code to Azure App Service. There are a lot of different ways to do so. Checking your config, you are already using Docker. This link, https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image, walks through the steps for deploying your container as an Azure App Service.
The gist of it seems to be that you need to configure your web app to pull from a Docker registry, per Azure App Service slot.
Then, after a successful build, have CircleCI push/tag the Docker image to that registry, and Azure App Service will start up the new version of the app.
For jumping between Azure App Service slots, you could have your CircleCI config push to different Docker registry image tags. This would require setting up each Azure App Service slot with a slightly different configuration. For example...
# Dev
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_DEV ...
# Staging
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_STAGE ...
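If the staging image should land on the staging slot rather than the default production slot, note that the same command also accepts a --slot flag; the slot name here is illustrative:
# Staging, explicitly targeting the 'staging' slot
az webapp config container set --name <app-name> --resource-group <rg> --slot staging --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_STAGE ...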
In your CircleCI config, set up your pipeline with dev, stage and production jobs. The dev and stage jobs would do the Docker pushes or tagging for you, and the production job does the swap as the final step. Something like this...
prod-deploy:
  steps:
    - run:
        name: swap staging and production slots
        command: az webapp deployment slot swap -g MyResourceGroup -n MyUniqueApp --slot staging --target-slot production
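Keep in mind that the job running these az commands has to be authenticated first; one common approach is to log in with a service principal whose credentials are stored as CircleCI environment variables (the variable names below are illustrative):
# authenticate the CircleCI job before any az webapp commands
az login --service-principal -u $AZURE_SP_APP_ID -p $AZURE_SP_PASSWORD --tenant $AZURE_SP_TENANT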
Also see: https://learn.microsoft.com/en-us/cli/azure/webapp/deployment/slot?view=azure-cli-latest#az-webapp-deployment-slot-swap
Hopefully this helps, and I did not misunderstand your question. 🤞
Yes, it worked!!! Thanks.
Although, as per our current deployment structure, we use a deploy script that handles the swapping, and then deploy the application through CircleCI.

GCP Cloud Build ignores timeout settings

I use Cloud Build to copy the configuration file from Storage and deploy the app to App Engine flex.
The problem is that the build fails every time it lasts more than 10 minutes. I've specified a timeout in my cloudbuild.yaml, but it looks like it's ignored. I also configured app/cloud_build_timeout and set it to 1000. Could somebody explain to me what is wrong here?
My cloudbuild.yaml looks like this:
steps:
  - name: gcr.io/cloud-builders/gsutil
    args: ["cp", "gs://myproj-dev-247118.appspot.com/.env.cloud", ".env"]
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
    timeout: 1000s
timeout: 1600s
My app.yaml uses a custom env that is built from a Dockerfile and looks like this:
runtime: custom
env: flex
manual_scaling:
  instances: 1
env_variables:
  NODE_ENV: dev
The Dockerfile also contains nothing special, just installing dependencies and building the app:
# Stage 1: build the front-end
FROM node:10 as front-builder
WORKDIR /app
COPY front-end .
RUN npm install
RUN npm run build:web
# Stage 2: build the API and bundle the front-end output
FROM node:12
WORKDIR /app
COPY api .
RUN npm install
RUN npm run build
COPY .env .env
EXPOSE 8080
COPY --from=front-builder /app/web-build web-build
CMD npm start
When running gcloud app deploy directly for an App Engine Flex app, from your local machine for example, under the hood it spawns a Cloud Build job to build the image that is then deployed to GAE (you can see that build in Cloud Console > Cloud Build). This build has a 10min timeout that can be customized via:
gcloud config set app/cloud_build_timeout 1000
Now, the issue here is that you're issuing the gcloud app deploy command from within Cloud Build itself. Since each individual Cloud Build step is running in its own Docker container, you can't just add a previous step to customize the timeout since the next one will use the default gcloud setting.
You've got several options to solve this:
Add a build step to first build the image with docker build and upload it to Google Container Registry. You can set a custom timeout on these steps to fit your needs. Finally, deploy your app with gcloud app deploy --image-url=IMAGE-URL (see the sketch after this list).
Create your own custom gcloud builder where app/cloud_build_timeout is set to your custom value. You can derive it from the default gcloud builder Dockerfile and add /builder/google-cloud-sdk/bin/gcloud config set app/cloud_build_timeout 1000
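A rough sketch of the first option; the image path and timeout values are only placeholders:
steps:
  # build the image with a generous per-step timeout
  - name: gcr.io/cloud-builders/docker
    args: ["build", "-t", "gcr.io/$PROJECT_ID/my-app", "."]
    timeout: 1600s
  # push it to Container Registry
  - name: gcr.io/cloud-builders/docker
    args: ["push", "gcr.io/$PROJECT_ID/my-app"]
  # deploy the prebuilt image, so no separate image build is spawned by gcloud
  - name: gcr.io/cloud-builders/gcloud
    args: ["app", "deploy", "--image-url=gcr.io/$PROJECT_ID/my-app"]
timeout: 1800s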
Just in case you are using Google Cloud Build with Skaffold, remember to check in skaffold.yaml whether you have set the timeout option inside the googleCloudBuild section under build. For example:
build:
  googleCloudBuild:
    timeout: 3600s
Skaffold will ignore the gcloud config of the machine you run the deploy from. For example, it will ignore this CLI command: gcloud config set app/cloud_build_timeout 3600

Unable to connect to a container in GitLab CI in my free account

I have a free account on GitLab.
I also have a company account (not sure which plan).
I have the exact same project, a wrapper around EventStore.
In the CI pipeline I want to spin up a container with EventStore so that I can run some integration tests against it.
This is my .gitlab-ci.yml, which restores, compiles, runs the tests and publishes NuGet packages:
#Stages
stages:
  - ci
  - pack
#Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs
#Docker image
image: $GITLAB_RUNNER_DOTNET_CORE
#Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll
pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master
pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the EventStore container.
From my integration tests I try to connect to the container within the CI using the following connection string:
"ConnectTo=tcp://admin:changeit@127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account it works fine: there is a container listening on 127.0.0.1 on port 1113 and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with the way Docker is made available on the two GitLab CI runners, but why is it different?
And more importantly, how can I configure EventStore on the CI pipeline of my free personal account so that I can connect to it, if localhost is not a valid host URI for whatever reason?
Well, you have not provided many details, but it seems you're using the Docker executor. In that case, services are not available on localhost; they are only accessible via their service aliases.
This is an extract from the working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using the appsettings file with a configuration that points to localhost, you can override the setting using an environment variable in the CI file. Just remember to add environment variables as a configuration source. The code snippet above has such an override, matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land in 12.9 or 13, pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
P.S. Just noticed that your setup works with one account and doesn't work with another. My guess would be that you either use different executors (Docker doesn't work and Kubernetes works) or different GitLab versions (if the service alias issue has already been fixed).
