I am very new to CI/CD.
I have to set up a pipeline to connect a GitLab repo to Cloud Run.
I have currently hosted my website on Cloud Run, with the code in GitLab, deploying with manual commands.
I have tried to find many documents and videos, but they are not very clear or I am not able to understand them. If anyone can provide good documents or guide me, I'll really appreciate it.
Here's my solution for your problem:
You have to configure your Google Cloud project (a CLI sketch follows this list):
Enable the Cloud Run API and Cloud Build API services.
Create a Google service account with the correct permissions (Cloud Build Service Agent, Service Account User, Cloud Run Admin, and Viewer).
Generate a credential file from your service account; it will output a JSON key.
Set up the GitLab CI/CD variables: GCP_PROJECT_ID (your project ID) and GCP_SERVICE_ACCOUNT (the content of the JSON key you just generated).
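If you prefer doing that setup from the CLI instead of the console, here is a rough sketch (the service-account name gitlab-deployer is a placeholder; repeat the role binding for each role listed above):

# Enable the required APIs
gcloud services enable run.googleapis.com cloudbuild.googleapis.com

# Create the deploy service account (the name is a placeholder)
gcloud iam service-accounts create gitlab-deployer

# Grant one of the roles listed above; repeat for the others
gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \
  --member="serviceAccount:gitlab-deployer@$GCP_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/run.admin"

# Generate the JSON key whose content goes into GCP_SERVICE_ACCOUNT
gcloud iam service-accounts keys create gcloud-service-key.json \
  --iam-account="gitlab-deployer@$GCP_PROJECT_ID.iam.gserviceaccount.com"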
Set up your .gitlab-ci.yml like this:
variables:
  SERVICE_NAME: 'your-service-id'

image: google/cloud-sdk:latest

before_script:
  - apt-get --assume-yes install npm
  - npm install
  - npm run build

deploy:
  stage: deploy
  only:
    - master
  script:
    - echo $GCP_SERVICE_ACCOUNT > gcloud-service-key.json
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud auth configure-docker
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud config set run/region europe-west3
    - gcloud run deploy $SERVICE_NAME --source . --allow-unauthenticated
If you have worked with GitLab CI/CD (.yml files) and Cloud Run (locally) before, you will understand the steps easily.
This example assumes you have a Node.js project.
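Since the deploy uses gcloud run deploy --source ., Cloud Build will build the container for you, either from a Dockerfile at the repo root or via buildpacks. A minimal Dockerfile sketch for a Node.js app, assuming an entry point of server.js (a placeholder; adjust to your project):

FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Cloud Run sends traffic to the port in $PORT (8080 by default),
# so the app must listen on that port
CMD ["node", "server.js"]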
A few months ago I did a proof of concept for a CI/CD pipeline on GitLab for my .NET Core API deployed on AWS Elastic Beanstalk. At the time I got it to work.
I'm now attempting to actually implement the solution properly, but it's no longer working. The relevant part of my yml file is as follows:
stages:
  - deploy_testing

deploy-testing-job:
  stage: deploy_testing
  image: mcr.microsoft.com/dotnet/sdk:6.0
  before_script:
    - dotnet tool install -g Amazon.ElasticBeanstalk.Tools
    - export PATH="$PATH:/root/.dotnet/tools"
    - apt update
    - apt-get --assume-yes install zip
  script:
    - dotnet restore
    - dotnet eb deploy-environment --profile my_profile --configuration Release --framework net6.0-windows --project-location "MyProject.Api/" --self-contained true --region $AWS_REGION --application $APP_NAME --environment $APP_ENV_NAME --solution-stack "64bit Windows Server Core 2019 v2.11.0 running IIS 10.0"
Now I started getting an error:
Error creating Elastic Beanstalk application: User: arn:aws:iam::1234:user/gitlab is not authorized to perform: elasticbeanstalk:CreateApplication on resource: arn:aws:elasticbeanstalk:my-region:1234:application/my-application
I found this strange, since I had confirmed that I correctly identified both an existing application and environment.
Just to see what would happen, I temporarily attached the CreateApplication permission to the appropriate gitlab-elastic-beanstalk policy, and now I get the error:
Error creating Elastic Beanstalk application: Application my-application already exists.
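For reference, the permission statement I temporarily attached looked roughly like this (a sketch; the ARN is copied from the first error message):

{
  "Effect": "Allow",
  "Action": "elasticbeanstalk:CreateApplication",
  "Resource": "arn:aws:elasticbeanstalk:my-region:1234:application/my-application"
}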
Why is eb deploy-environment attempting to recreate the whole application / environment?
I am playing with GitHub Actions and Cloud Run to automate my tasks.
I have set up a repository on GitHub and prepared two workflows.
One for DEV and the other for... let's call it the PROD environment.
The workflow deploys and runs the container with variables that are hardcoded:
- name: Deploy to Cloud Run
  run: |-
    gcloud run deploy ${{ env.SERVICE }} \
      --region ${{ env.REGION }} \
      --image gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }} \
      [...]
      --set-env-vars "X=testvar1" \
      --set-env-vars "Y=testvar2" \
      --set-env-vars "Z=testvar3" \
The problem I am facing is that the dev environment works fine.
Whenever I trigger the action for DEV, it ends successfully, the Cloud Run service is green, and I can request my app on GCP.
When I deploy literally the same thing to the prod environment, the step above fails.
When I debug further, I go to the Variables & Secrets tab on the Cloud Run instance for the failed service, and these env variables are missing there. When I add them manually via the GCP console and redeploy, the service works fine.
This should happen automatically, as it does on the dev environment.
What's more, when I trigger the GitHub action again for prod, it replaces the Docker image, the env variables I set manually are gone, and I have to set them again via the console.
No additional security configs are made. This is just a simple Express app made in Node.js.
Everything is literally the same when it comes to the GitHub workflows (YAML files); the Dockerfile is also the same, and the GCP project too.
What might be the cause of that?
image: node:9.2.0

stages:
  - build

build:
  stage: build
  script:
    - set NODE_ENV=production
    - npm install
    - npm run transpile
    - ls
    - cd dist-server
    - ls
    - node /bin/www
    # - npm run prod
  artifacts:
    expire_in: 1 day
    paths:
      - dist/
Above is my YAML file for CI. Can anyone share how to deploy this to a Linux Azure Web App?
There is no out-of-the-box solution for deploying to Azure from GitLab.
What you can do in your GitLab pipeline is the following process:
Build the Docker container.
Push the Docker container to the GitLab Container Registry (included in your GitLab repository).
Run a curl command to trigger the Azure App Service webhook to update.
You can host this Docker container in Azure (after creating the App Service, you can find the webhook URL in the Deployment settings).
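That curl step is a single POST to the webhook URL you copy from the Deployment settings; the exact URL comes from your App Service, but the shape is roughly this (placeholders throughout, and note the publishing credentials are embedded in the URL):

# The real URL, with embedded credentials, comes from the App Service;
# the backslash stops the shell from expanding the leading $ in the username
curl -X POST "https://\$myapp:<publish-password>@myapp.scm.azurewebsites.net/api/registry/webhook"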
Azure App Service slot deployment using CircleCI config.yml
I need to add a step to deploy to the production slot or staging slot, then modify the config to swap the deployment.
Description: When I run this config file, it deploys to the production slot of the Azure App Service by default, but I want to deploy to the stage slot first and then do a swap.
The file below is working fine, but it needs some configuration changes so that I am able to deploy to the stage slot and then swap that slot to the production slot.
I am using CircleCI; below is my config.yml:
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:10.16.3
    steps:
      ## Fetch all release tags
      - checkout
      - run:
          name: Install Node.js dependencies with Npm
          command: npm install
      - run:
          name: Test
          command: CI=true npm run coverage
  dev-deploy:
    machine: true
    steps:
      - checkout
      - run:
          name: create / update infrastructure
          command: |
            docker login -u $REGISTRY_UN -p $REGISTRY_PW $REGISTRY_SERVER
            docker run --rm -it -e TF_VAR_repo_branch=$CIRCLE_BRANCH -e vaultkey=$VAULT_KEY -v `pwd`:/dp/config dockerimage/dpdeployer:beta-1.0 .dp.yaml

workflows:
  version: 2
  build_and_test_publish:
    jobs:
      - build
      # - hold: # <<< A job that will require manual approval in the CircleCI web application.
      #     type: approval # <<< This key-value pair will set your workflow to a status of "On Hold"
      #     requires: # We only run the "hold" job when build has succeeded
      #       - build
      - dev-deploy:
          requires:
            - build
          filters:
            branches:
              only: feature/appservice
Hmmm, this may be a good link to review: Deploy to Azure from CircleCI.
But I think it comes down to how you want to deploy your code to Azure App Service; there are a lot of different ways to do so. Checking your config, you are using Docker already. This link, https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image , talks about the steps for deploying your container as an Azure App Service.
The gist of it seems to be that you need to configure your Web App to pull from a Docker registry, per Azure App Service slot.
Then, after a successful build, have CircleCI push/tag the Docker image to that registry. Azure App Service will then start up the new version of the app.
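That push/tag step in CircleCI could be as simple as this (registry and image names are placeholders, reusing the credential variables from your config):

# Build, tag and push the image the App Service slot is configured to pull
docker login -u $REGISTRY_UN -p $REGISTRY_PW $REGISTRY_SERVER
docker build -t <registry-name>/mydockerimage:$VERSION_FOR_STAGE .
docker push <registry-name>/mydockerimage:$VERSION_FOR_STAGE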
For jumping between Azure App Service slots, you could have your CircleCI config push to different Docker registry image tags. This would require setting up each App Service slot with a slightly different config. For example ...
# Dev
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_DEV ...
# Staging
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_STAGE ...
In your CircleCI config, when you set up your pipeline across dev, stage, and production jobs, the dev and stage jobs would do the Docker pushes or tagging for you, and the production job does the swap as the final step. Something like this...
prod-deploy:
  steps:
    - run:
        name: swap staging and production slots
        command: az webapp deployment slot swap -g MyResourceGroup -n MyUniqueApp --slot staging --target-slot production
Also see: https://learn.microsoft.com/en-us/cli/azure/webapp/deployment/slot?view=azure-cli-latest#az-webapp-deployment-slot-swap
Hopefully this helps, and I did not misunderstand your question. 🤞
Yes, it worked!!! Thanks.
Although, as per our current deployment structure, we are using a deploy script, handling the swapping from there, and then deploying the application through CircleCI.
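For anyone curious, the script boils down to roughly this (a sketch only; resource names are placeholders and our real script also handles login and error checks):

#!/usr/bin/env bash
set -euo pipefail
# Point the staging slot at the new image, then swap it into production
az webapp config container set -g "$RESOURCE_GROUP" -n "$APP_NAME" --slot staging \
  --docker-custom-image-name "$REGISTRY/myimage:$VERSION"
az webapp deployment slot swap -g "$RESOURCE_GROUP" -n "$APP_NAME" \
  --slot staging --target-slot production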
I have a free account on GitLab.
I also have a company account (not sure which plan).
I have the exact same project in both: a wrapper around EventStore.
In the CI pipeline I want to spin up a container with EventStore so that I can run some integration tests against it.
This is my .gitlab-ci.yml, which restores, compiles, runs tests, and publishes NuGet packages:
#Stages
stages:
  - ci
  - pack

#Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs

#Docker image
image: $GITLAB_RUNNER_DOTNET_CORE

#Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params, testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll

pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master

pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the EventStore container.
From my integration tests I try to connect to the container within CI using the following connection string:
"ConnectTo=tcp://admin:changeit@127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account it works fine: there is a container listening on 127.0.0.1 on port 1113, and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with the way Docker is made available on the two GitLab CI runners, but why is it different?
And, more importantly, how can I configure EventStore in the CI pipeline of my free personal account so that I can connect to it, if localhost is not a valid host URI for whatever reason?
Well, you have not provided many details, but it seems you're using the Docker executor. In that case, services are not available on localhost; they are only accessible via their service aliases.
This is an extract from the working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using the appsettings file with the configuration that points to localhost, you can override the setting with an environment variable in the CI file. Just remember to add environment variables as a configuration source. The snippet above has such an override, matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
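The override works because environment variables are registered after the JSON file, and the double underscore in EventStore__ConnectionString maps to the ":" separator. Host.CreateDefaultBuilder already wires this up for you; if you build the configuration manually, a minimal sketch (assuming the Microsoft.Extensions.Configuration.Json and .EnvironmentVariables packages):

using Microsoft.Extensions.Configuration;

// Env vars are added after the JSON file, so EventStore__ConnectionString
// overrides the EventStore:ConnectionString value from the file
var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.Testing.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

var connectionString = configuration["EventStore:ConnectionString"];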
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land in 12.9 or 13, so pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
P.S. I just noticed that your setup works with one account and doesn't work with the other. My guess would be that you are either using different executors (Docker where it doesn't work and Kubernetes where it works) or different GitLab versions (if the service alias issue has already been fixed).