I am trying to get a Google Cloud Build pipeline running for a Node.js application that uses Cloud SQL (PostgreSQL) with Prisma as the ORM. I started with the default YAML that GCP Cloud Build provides when you click the Set Up Continuous Integration button in the Cloud Run UI for an existing application. The part that is missing is the Prisma migrations for the Cloud SQL instance.
steps:
- name: gcr.io/cloud-builders/docker
args:
- build
- '--no-cache'
- '-t'
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
- .
- '-f'
- api/Dockerfile
id: Build
- name: gcr.io/cloud-builders/docker
args:
- push
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
id: Push
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
args:
- run
- services
- update
- $_SERVICE_NAME
- '--platform=managed'
- '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
- >-
--labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
- '--region=$_DEPLOY_REGION'
- '--quiet'
id: Deploy
entrypoint: gcloud
images:
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
substitutionOption: ALLOW_LOOSE
tags:
- gcp-cloud-build-deploy-cloud-run
- gcp-cloud-build-deploy-cloud-run-managed
- api
I solved the issue with the following Cloud Build YAML. It is hosted in my git repo, so any changes are tracked; in the Cloud Build trigger I selected Repository as the source and pointed it at the cloudbuild.yaml file in my repo rather than using the inline option. This solution works as long as there are no breaking schema changes between the previous API version and the new one (if you, for example, rename a database column that the old application code relies on, requests will fail during the window before the new code takes all the traffic). The way to manage this is to avoid breaking changes: migrate the data from the old column to the new column before removing the old one. Another option is to schedule downtime for DB maintenance.
Keep in mind that there is a race condition while the database migrations run: the previous version of the code is still accepting traffic before the cutover, so people using the application may receive 500 errors.
This is the updated cloudbuild.yaml with the Prisma migration step (note: this also uses Google Cloud Secret Manager for the DB connection details):
steps:
- name: 'node:$_NODE_VERSION'
entrypoint: 'yarn'
id: yarn-install
args: ['install']
waitFor: ["-"]
- id: migrate
name: gcr.io/cloud-builders/yarn
env:
- NODE_ENV=$_NODE_ENV
entrypoint: sh
args:
- "-c"
- |
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
./cloud_sql_proxy -instances=$$_DB_HOST=tcp:$$_DB_PORT & sleep 3
export DATABASE_URL=postgresql://$$_DB_USER:$$_DB_PASS@localhost:$$_DB_PORT/$$_DB_NAME?schema=public
yarn workspace api run migrate
secretEnv: ['_DB_USER', '_DB_PASS', '_DB_HOST', '_DB_NAME', '_DB_PORT']
timeout: "1200s"
waitFor: ["yarn-install"]
- name: gcr.io/cloud-builders/docker
args:
- build
- '--no-cache'
- '-t'
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
- .
- '-f'
- api/Dockerfile
id: Build
- name: gcr.io/cloud-builders/docker
args:
- push
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
id: Push
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
args:
- run
- services
- update
- $_SERVICE_NAME
- '--platform=managed'
- '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
- >-
--labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
- '--region=$_DEPLOY_REGION'
- '--quiet'
id: Deploy
entrypoint: gcloud
images:
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
substitutionOption: ALLOW_LOOSE
availableSecrets:
secretManager:
- versionName: projects/$PROJECT_ID/secrets/DB_NAME/versions/latest
env: '_DB_NAME'
- versionName: projects/$PROJECT_ID/secrets/DB_PASS/versions/latest
env: '_DB_PASS'
- versionName: projects/$PROJECT_ID/secrets/DB_PORT/versions/latest
env: '_DB_PORT'
- versionName: projects/$PROJECT_ID/secrets/DB_USER/versions/latest
env: '_DB_USER'
- versionName: projects/$PROJECT_ID/secrets/DB_HOST/versions/latest
env: '_DB_HOST'
tags:
- gcp-cloud-build-deploy-cloud-run
- gcp-cloud-build-deploy-cloud-run-managed
- api
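The yarn workspace api run migrate command above is assumed to resolve to Prisma's deploy command inside the api workspace. As a sketch, the final line of the migrate step could equivalently call Prisma directly (the api/prisma/schema.prisma path is a hypothetical example, not from the original pipeline):

yarn prisma migrate deploy --schema=api/prisma/schema.prisma

Note that the Cloud SQL proxy is started with & inside the same step on purpose: each Cloud Build step runs in its own container, so a background process like the proxy does not survive into later steps.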
Related
I add a connection string to my app service (Configuration > Connection strings > + New connection string > Save), and this works. But when I redeploy through my CI/CD GitHub workflow, the connection string is gone.
Before a new deployment: (screenshot shows the connection string present)
After a new deployment: (screenshot shows it gone)
My workflow file:
on: [push]
name: workflow
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2 # checks out your repository under $GITHUB_WORKSPACE, so your workflow can access it.
- run: dotnet --version
- run: dotnet tool restore
- run: dotnet run --project tests/Server/Server.Tests.fsproj
build-and-deploy:
if: github.ref == 'refs/heads/deploy'
needs: test
runs-on: ubuntu-latest
steps:
- name: 'Checkout Github Action'
uses: actions/checkout@v2
- name: 'Login via Azure CLI'
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
- name: 'Restore'
run: dotnet tool restore
- name: 'Deploy'
run: dotnet run azure
I have deployed a .NET Core app to Azure App Service using GitHub Actions and added a new connection string in Configuration => Connection strings, as you mentioned.
When I tried to Sync (redeploy) from the Deployment Center, I got an alert saying that the old deployment changes will be overwritten by the new one. Even so, in my case the connection string was not missing.
While creating the App Service, initially disable the GitHub Actions settings; later, connect to GitHub from the Deployment Center.
If you don't want to lose any configuration done after deployment, then instead of redeploying the app with the Sync option, click Disconnect, then reconnect and build the workflow again.
You will then see the available connection strings.
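Alternatively, if the workflow's deploy step recreates the App Service (and so wipes manually added settings), the connection string can be reapplied from the pipeline itself after the Azure CLI login; a sketch, where the resource group, app name, setting name, and secret name are placeholders:

- name: 'Set connection string'
  run: >
    az webapp config connection-string set
    --resource-group my-rg
    --name my-app
    --connection-string-type SQLAzure
    --settings DefaultConnection="${{ secrets.DB_CONNECTION_STRING }}"

This way the setting is reapplied on every deployment instead of living only in the portal.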
So I have this small Angular project of mine, and every time I try to deploy it to Azure, it uploads ~43k files as an artifact. I'm not any good at deployment to Azure, so this may well be a really stupid question, but still.
So, here is my GitHub Actions workflow file
name: Build and deploy Node.js app to Azure Web App - minesweeper
on:
release:
branches:
- main
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Node.js version
uses: actions/setup-node@v1
with:
node-version: '16.x'
- name: npm install, build, and test
run: |
npm install
npm run build --prod
working-directory: .
- name: Upload artifact for deployment job
uses: actions/upload-artifact@v2
with:
name: node-app
path: .
deploy:
runs-on: ubuntu-latest
needs: build
environment:
name: 'Production'
url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v2
with:
name: node-app
- name: 'Deploy to Azure Web App'
id: deploy-to-webapp
uses: azure/webapps-deploy@v2
with:
app-name: 'minesweeper'
publish-profile: $
package: ./dist/minesweeper_
So here I have a path that matches my project's name, minesweeper_, and the app name is from Azure.
What am I doing wrong here?
https://github.com/yan14171/Minesweeper - here is the repo itself.
There are over 10,000 files in this artifact, consider creating an archive before upload to improve the upload performance.
As per documentation:
During upload, each file is uploaded concurrently in 4MB chunks using a separate HTTPS connection per file. Chunked uploads are used so that in the event of a failure, the upload can be retried. If there is an error, a retry will be attempted after a certain period of time.
Alternatively, you can try zip and unzip steps as mentioned by Steve.
You can refer to React Deployment on App Service Linux, and Deploying Node.js to Azure App Service with GitHub Actions
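In this particular case, the simplest fix is likely to upload only the Angular build output rather than the entire workspace (node_modules included); a sketch, assuming the build really does emit to ./dist/minesweeper_ as the deploy step expects:

- name: Upload artifact for deployment job
  uses: actions/upload-artifact@v2
  with:
    name: node-app
    path: ./dist/minesweeper_

Since download-artifact extracts the artifact's contents into the working directory, the deploy step's package: input would then become . instead of ./dist/minesweeper_.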
I'm trying to find out if there is a way to exclude certain files from being sent over GitHub Actions. For example, I have a server and a client in the same repository. Right now, both the server (Node.js) and the client (a React.js application) are hosted together on Azure App Services; once / is hit, it serves up the index.html file from the build folder.
However, I am finding that hosting these two things together is taking its toll on the overall application; for example, it sometimes takes up to 10 seconds for the server to respond and return the index file to the client. I remember in my training some of my more senior devs didn't like to host the server and client together, and I'm starting to see why.
So I likely will need to split these up to improve performance, but before I go through the daunting task of splitting the repositories up: is there a way to specify in a GitHub Actions workflow that certain files/folders should be ignored?
The only modification I've made so far is adding an action to zip the application for faster upload to Azure.
here is my workflow:
# Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions
name: Build and deploy Node.js app to Azure Web App
on:
push:
branches:
- main
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Node.js version
uses: actions/setup-node@v1
with:
node-version: '14.x'
- name: npm install, build, and test
run: |
npm install
npm run build --if-present
npm run test --if-present
- name: Zip artifact for deployment
run: zip release.zip ./* -r
- name: Upload artifact for deployment job
uses: actions/upload-artifact@v2
with:
name: node-app
path: release.zip
deploy:
runs-on: ubuntu-latest
needs: build
environment:
name: 'Production'
url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
steps:
- name: Download artifact from build job
uses: actions/download-artifact@v2
with:
name: node-app
- name: unzip artifact for deployment
run: unzip release.zip
- name: 'Deploy to Azure Web App'
id: deploy-to-webapp
uses: azure/webapps-deploy@v2
with:
app-name: 'Omitted'
slot-name: 'Production'
publish-profile: ${{SECRET}}
package: .
You could create a shell script that excludes the files you don't want.
In .github, create a new folder scripts. Inside the scripts folder, create a new file named exclude.sh.
In the exclude.sh, add the following:
zip -r [file_name.zip] [files/folder to zip] -x [file path/name to exclude]
In your workflow:
- name: unzip artifact for deployment
run: unzip file_name.zip
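Putting it together, and assuming the React client lives in a top-level client/ folder (a hypothetical path for illustration), exclude.sh could contain:

zip -r release.zip . -x "client/*"

and the build job would invoke it in place of the plain zip step:

- name: Zip artifact for deployment, excluding the client
  run: bash .github/scripts/exclude.sh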
I'm starting to use the GitLab CI/CD pipeline but have some doubts regarding the output of the build process when a single project (repo) contains the frontend and backend separated by the project structure, e.g.:
CarProject
.gitlab-ci.yml
|__FrontEndCarProject
|__BackendCarProject
Let's say that every time I change something in the frontend I need to build it and deploy it to S3, but there is no need to build the backend (a Java application) and deploy it to Elastic Beanstalk (and vice versa when I change the backend). Is there a way to check where the changes have been made (FrontEndCarProject/BackendCarProject) using GitLab and have .gitlab-ci.yml run different scripts depending on whether I have to deploy to S3 or Elastic Beanstalk?
Note: another way is just to manually change the YAML file depending on where I want to deploy, but is there a way to autodetect this and automate it?
.gitlab-ci.yml
Just to get the idea, here's an example that would run in a linear way. But how can I conditionally build/deploy depending on whether my frontend or backend changed? Should I keep them in different repos for simplicity? Is that good practice?
variables:
ARTIFACT_NAME: cars-api-v$CI_PIPELINE_IID.jar
APP_NAME: cars-api
stages:
- build
- deploy
# ONLY build when the frontend (FrontEndCarProject) is changed
build_front:
stage: build
image: node:latest
script:
- npm install
- npm run build
artifacts:
paths:
- ./dist
# ONLY build when backend(BackendCarProject) is changed
build_back:
stage: build
image: openjdk:12-alpine
script:
- ./gradlew build
artifacts:
paths:
- ./build/libs/
# ONLY deploy when the frontend (FrontEndCarProject) is changed
deploy_s3:
stage: deploy
image:
name: python:latest
script:
- aws configure set region us-east-1
- aws s3 sync ./dist s3://$S3_BUCKET # upload the built frontend, not the backend jar
# ONLY deploy when backend(BackendCarProject) is changed
deploy_back_end:
stage: deploy
image:
name: banst/awscli
script:
- aws configure set region us-east-1
- aws s3 cp ./build/libs/$ARTIFACT_NAME s3://$S3_BUCKET/$ARTIFACT_NAME
- aws elasticbeanstalk create-application-version --application-name $APP_NAME --version-label $CI_PIPELINE_IID --source-bundle S3Bucket=$S3_BUCKET,S3Key=$ARTIFACT_NAME
- aws elasticbeanstalk update-environment --application-name $APP_NAME --environment-name "production" --version-label=$CI_PIPELINE_IID
If your frontend and backend can be built and deployed separately, then you can use rules:changes to check whether a change happened and needs with optional: true to only deploy the respective built artifacts.
variables:
ARTIFACT_NAME: cars-api-v$CI_PIPELINE_IID.jar
APP_NAME: cars-api
stages:
- build
- deploy
# ONLY build when the frontend (FrontEndCarProject) is changed
build_front:
stage: build
image: node:latest
script:
- npm install
- npm run build
rules:
- changes:
- FrontEndCarProject/**/*
artifacts:
paths:
- ./dist
# ONLY build when backend(BackendCarProject) is changed
build_back:
stage: build
image: openjdk:12-alpine
script:
- ./gradlew build
rules:
- changes:
- BackendCarProject/**/*
artifacts:
paths:
- ./build/libs/
# ONLY deploy when the frontend (FrontEndCarProject) is changed
deploy_s3:
stage: deploy
image:
name: python:latest
script:
- aws configure set region us-east-1
- aws s3 sync ./dist s3://$S3_BUCKET # upload the built frontend from the build_front artifacts
needs:
- job: build_front
artifacts: true
optional: true
# ONLY deploy when backend(BackendCarProject) is changed
deploy_back_end:
stage: deploy
image:
name: banst/awscli
script:
- aws configure set region us-east-1
- aws s3 cp ./build/libs/$ARTIFACT_NAME s3://$S3_BUCKET/$ARTIFACT_NAME
- aws elasticbeanstalk create-application-version --application-name $APP_NAME --version-label $CI_PIPELINE_IID --source-bundle S3Bucket=$S3_BUCKET,S3Key=$ARTIFACT_NAME
- aws elasticbeanstalk update-environment --application-name $APP_NAME --environment-name "production" --version-label=$CI_PIPELINE_IID
needs:
- job: build_back
artifacts: true
optional: true
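One caveat: with optional needs, a deploy job still runs even when its build job was skipped; it just runs without the artifacts. To skip the deploy side as well, the same rules can be attached to the deploy jobs, for example:

deploy_s3:
  stage: deploy
  rules:
    - changes:
        - FrontEndCarProject/**/*

(The /**/* pattern also matches files in nested subdirectories, which a bare /* does not.)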
How should I edit my cloudbuild.yaml file so that I can pass multiple environment variables as secrets?
I have stored two authentication tokens in two separate files, SECRET1.txt and SECRET2.txt on my local machine's current working directory.
I want to pass both these authentication tokens as secrets to Google Cloud Build using KMS.
How should the cloudbuild.yaml file look like so that my tokens are safely accessed by Cloud Build?
I tried to use encrypted secrets found here https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials
Here is what I tried for cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/gcloud"
secretEnv: ['SECRET1', 'SECRET2']
timeout: "1600s"
secrets:
- kmsKeyName: projects/<Project-Name>/locations/global/keyRings/<Key-Ring-Name>/cryptoKeys/<Key-Name>
secretEnv:
SECRET1: <encrypted-key-base64 here>
SECRET2: <encrypted-key-base64 here>
I am getting an error message (screenshot omitted; I have struck the token out in red ink). Cloud Build is able to read the token, yet it outputs an error message saying 'Error: ENOENT: no such file or directory'.
Can anyone tell me what went wrong in my approach and why Cloud Build is not able to access these authentication tokens(secrets)?
If you are decrypting a value to use as an env var for a build step, you could use the following setup as you described.
steps:
- name: "gcr.io/cloud-builders/gcloud"
secretEnv: ['SECRET1', 'SECRET2', ...]
timeout: "1600s"
secrets:
- kmsKeyName: projects/[Project-Name]/locations/global/keyRings/[Key-Ring-Name]/cryptoKeys/[Key-Name]
secretEnv:
SECRET1: [encrypted-base64-encoded-secret]
SECRET2: [encrypted-base64-encoded-secret]
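To produce each [encrypted-base64-encoded-secret] value, the documented pattern is to encrypt the plaintext with the same KMS key and base64-encode the ciphertext, for example:

echo -n "my-token" | gcloud kms encrypt \
  --plaintext-file=- \
  --ciphertext-file=- \
  --location=global \
  --keyring=[Key-Ring-Name] \
  --key=[Key-Name] | base64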
However, if you are decrypting files, you would need to decrypt them in build steps prior to where they are used, like so:
steps:
- name: "gcr.io/cloud-builders/gcloud"
args:
- kms
- decrypt
- --ciphertext-file=SECRET1.txt.enc
- --plaintext-file=SECRET1.txt
- --project=$PROJECT_ID
- --location=global
- --keyring=[KEYRING-NAME]
- --key=[KEY-NAME]
- name: "gcr.io/cloud-builders/gcloud"
args:
- kms
- decrypt
- --ciphertext-file=SECRET2.txt.enc
- --plaintext-file=SECRET2.txt
- --project=$PROJECT_ID
- --location=global
- --keyring=[KEYRING-NAME]
- --key=[KEY-NAME]
- name: "gcr.io/cloud-builders/gcloud"
args:
- [something that uses SECRET1.txt and SECRET2.txt]
timeout: "1600s"