Convert YAML pipeline from GitLab to Azure DevOps

I want to convert a YAML pipeline from GitLab to Azure DevOps. The problem is that I have no prior experience with GitLab. This is the YAML.
Is .package_deploy a template for a job? And is image a pool, or do I need to use a Docker task for it?
And does before_script: mean I need to create a task before the task with Docker?
variables:
  myVar: "Var"

stages:
  - deploy

.package_deploy:
  image: registry.gitlab.com/docker-images/$myVar:latest
  stage: build
  script:
    - cd src
    - echo "Output file name is set to $OUTPUT_FILE_NAME"
    - echo $OUTPUT_FILE_NAME > version.txt
    - az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID

dev_package_deploy:
  extends: .package_deploy
  stage: deploy
  before_script:
    - export FOLDER=$FOLDER_DEV
    - timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
    - export OUTPUT_FILE_NAME=${myVar}-${timestampSuffix}-${CI_COMMIT_REF_SLUG}.tar.gz
  when: manual

demo_package_deploy:
  extends: .package_deploy
  stage: deploy
  before_script:
    - export FOLDER=$FOLDER_DEMO
    - timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
    - export OUTPUT_FILE_NAME=${myVar}-${timestampSuffix}.tar.gz
  when: manual
  only:
    refs:
      - master

.package_deploy: is a 'hidden job' that you can use with the extends keyword. By itself, it does not create any job; it's a way to avoid repeating yourself in other job definitions.
before_script really is no different from script, except that they are two different keys. The effect is that the job runs the before_script steps followed by the script steps.
before_script:
  - one
  - two
script:
  - three
  - four
Is the same as:
script:
  - one
  - two
  - three
  - four
image: defines the Docker container in which the job runs. In this way, it is very similar to a pool you would define in ADO. But if you want things to run close to the way they do in GitLab, you probably want to define it as container: in ADO.
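For orientation, here is a rough, non-authoritative sketch of how dev_package_deploy might translate to Azure DevOps. The container resource, its service connection name (gitlab-registry), the hardcoded image tag, and the assumption that FOLDER_DEV exists as a pipeline variable are all placeholders, and GitLab's when: manual has no direct per-job equivalent, so you would run this pipeline manually or gate it with an environment approval:

pool:
  vmImage: ubuntu-latest

resources:
  containers:
    - container: build_image
      image: registry.gitlab.com/docker-images/myvar:latest   # image name hardcoded for the sketch
      endpoint: gitlab-registry                                # hypothetical Docker registry service connection

stages:
  - stage: deploy
    jobs:
      - job: dev_package_deploy
        container: build_image        # rough equivalent of GitLab's image:
        steps:
          - script: |
              export FOLDER="$FOLDER_DEV"                      # assumes FOLDER_DEV is defined as a pipeline variable
              timestampSuffix=`date -u "+%Y%m%dT%H%M%S"`
              export OUTPUT_FILE_NAME="myVar-${timestampSuffix}.tar.gz"
              cd src
              echo "Output file name is set to $OUTPUT_FILE_NAME"
              echo "$OUTPUT_FILE_NAME" > version.txt
            displayName: Package and deploy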

Related

Azure DevOps Pipelines - Add multiple/batch variables

I'm using the below azure-pipeline.yml file to build a Docker image, push it to the Azure Docker registry and restart the Azure Docker app service.
This YAML file uses variables set in the Azure pipeline (screenshot attached).
My issue is that I need to create 2-3 pipelines every week for different projects, and I have to add every variable manually for each project, copy-pasting from my config. Is there a way I can import a .env file or add multiple variables all at once while creating the pipeline?
Objectively, I need to cut down the time spent copy-pasting single variables and avoid errors that might occur.
1. You could use a variable group to reuse variables.
trigger:
  - none

pool:
  vmImage: ubuntu-latest

variables:
  - group: forTest

steps:
  - script: |
      echo $(test1)
      echo $(test2)
    displayName: 'Run a multi-line script'
2. You could use a variable template.
trigger:
  - none

pool:
  vmImage: ubuntu-latest

variables:
  - template: vars.yml

steps:
  - script: |
      echo $(test1)
      echo $(test2)
    displayName: 'Run a multi-line script'
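The vars.yml referenced above is a variable template file; a minimal sketch with hypothetical values might look like this:

# vars.yml (hypothetical values for illustration)
variables:
  test1: value1
  test2: value2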

How to pass global variable value to next stage in GitLab CI/CD

Based on the GitLab documentation, you can use the variables keyword to pass CI/CD variables to a downstream pipeline.
I have a global variable DATABASE_URL.
The init stage retrieves the connection string from AWS Secrets Manager and sets it to DATABASE_URL.
Then in the deploy stage I want to use that variable to deploy the database. However, in the deploy stage the variable's value is empty.
variables:
  DATABASE_URL: ""

default:
  tags:
    - myrunner

stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "$DATABASE_URL"

deploy-dev-database:
  image: node:14
  stage: deploy
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"
The init job echoes the DATABASE_URL.
However, DATABASE_URL is empty in the deploy stage.
Questions
1. How do I pass the global variable across the stages?
2. The Node.js database deployment process will use this variable as process.env.DATABASE_URL. Will it be available to the Node.js environment?
Variables are set by precedence. When you print a variable inside a job, GitLab looks for the variable in the job itself first, then moves up to what's defined in the CI YAML file (the variables: section), then the project, group, and instance. A job never looks at other jobs.
If you want to pass a variable from one job to another, make sure you don't set the variable at all and instead pass it from one job to the next, following the documentation on passing environment variables to another job.
Basically,
Make sure to remove DATABASE_URL: "" from the variables section.
Make the last line of your init-job script - echo "DATABASE_URL=$DATABASE_URL" >> init.env (a dotenv file needs KEY=VALUE lines). You can call your .env file whatever you want, of course.
Add an artifacts: section to your init-job.
Add a dependencies: or needs: section to your deploy-dev-database job to pull the variable.
You should end up with something like this:
stages:
  - init
  - deploy

init-job:
  image: docker.xxxx/awscli
  stage: init
  script:
    - SECRET_VALUE="$(aws secretsmanager get-secret-value --secret-id my_secret --region us-west-2 --output text --query SecretString)"
    - DATABASE_URL="$(jq -r .DATABASE_URL <<< $SECRET_VALUE)"
    - echo "DATABASE_URL=$DATABASE_URL" >> init.env
  artifacts:
    reports:
      dotenv: init.env

deploy-dev-database:
  image: node:14
  stage: deploy
  dependencies:
    - init-job
  environment:
    name: development
  script:
    - echo "$DATABASE_URL"
    - npm install
    - npx sequelize-cli db:migrate
  rules:
    - if: $CI_COMMIT_REF_NAME == "dev"

Is there any way to dynamically edit a variable in one job and then pass it to a trigger/bridge job in GitLab CI?

I need to pass a file path to a trigger job where the file path is found within a specified json file in a separate job. Something along the lines of this...
stages:
  - run_downstream_pipeline

variables:
  FILE_NAME: default_file.json

.get_path:
  stage: run_downstream_pipeline
  needs: []
  only:
    - schedules
    - triggers
    - web
  script:
    - apt-get install jq
    - FILE_PATH=$(jq '.file_path' $FILE_NAME)

run_pipeline:
  extends: .get_path
  variables:
    PATH: $FILE_PATH
  trigger:
    project: my/project
    branch: staging
    strategy: depend
I can't seem to find any workaround for this, as using extends won't work since GitLab won't allow a script section in a trigger job.
I thought about using the GitLab API trigger method, but I want the status of the downstream pipeline to actually show up in the pipeline UI, and I want the upstream pipeline to depend on the status of the downstream pipeline, which from my understanding is not possible when triggering via the API.
Any advice would be appreciated. Thanks!
You can use artifacts:reports:dotenv for setting variables dynamically for subsequent jobs.
stages:
  - one
  - two

my_job:
  stage: "one"
  script:
    - FILE_PATH=$(jq '.file_path' $FILE_NAME) # In the script, look up the file path.
    - echo "FILE_PATH=${FILE_PATH}" >> variables.env # Add the value to a dotenv file.
  artifacts:
    reports:
      dotenv: "variables.env"

example:
  stage: two
  script: "echo $FILE_PATH"

another_job:
  stage: two
  trigger:
    project: my/project
    branch: staging
    strategy: depend
Variables in the dotenv file will automatically be present for jobs in subsequent stages (or for jobs that declare needs: on the job that produced the file).
You can also pull artifacts into child pipelines, in general.
But be warned: you probably don't want to override the PATH variable, since that's a special variable the shell uses to find built-in binaries.
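If you prefer an explicit dependency over stage ordering, the consuming job from the example above could declare it with needs:. This is a small sketch reusing the same job and variable names:

example:
  stage: two
  needs:
    - job: my_job
      artifacts: true   # downloads my_job's artifacts and picks up the dotenv variables
  script: "echo $FILE_PATH"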

How to reuse job in .gitlab-ci.yml

I currently have two jobs in my CI file which are nearly identical.
The first is for manually compiling a release build from any git branch.
deploy_internal:
  stage: deploy
  script: ....<deploy code>
  when: manual
The second is to be used by the scheduler to release a daily build from the develop branch.
scheduled_deploy_internal:
  stage: deploy
  script: ....<deploy code from deploy_internal copy/pasted>
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
It feels wrong to have all that deploy code repeated in two places. It gets worse: there are also deploy_external, deploy_release, and their scheduled variants.
My question:
Is there a way I can combine deploy_internal and scheduled_deploy_internal such that the manual/scheduled behaviour is retained (DRY, basically)?
Alternatively, is there a better way I should structure my jobs?
Edit:
Original title: Deploy job. Execute manually except when scheduled
You can use YAML anchors and aliases to reuse the script.
deploy_internal:
  stage: deploy
  script:
    - &deployment_scripts |
      echo "Deployment Started"
      bash command 1
      bash command 2
  when: manual

scheduled_deploy_internal:
  stage: deploy
  script:
    - *deployment_scripts
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Or you can use the extends keyword.
.deployment_script:
  script:
    - echo "Deployment started"
    - bash command 1
    - bash command 2

deploy_internal:
  extends: .deployment_script
  stage: deploy
  when: manual

scheduled_deploy_internal:
  extends: .deployment_script
  stage: deploy
  only:
    variables:
      - $MY_DEPLOY_INTERNAL != null
Use GitLab's default section containing a before_script:
default:
  before_script:
    - ....<deploy code>

job1:
  stage: deploy
  script: ....<code to run after the deploy code>

job2:
  stage: deploy
  script: ....<code to run after the deploy code>
Note: the default section fails to function as such if you try to execute a job locally with the gitlab-runner exec command - use YAML anchors instead.

How to share environment variables in parallel Bitbucket pipeline steps?

So, I am using Bitbucket pipelines to deploy my application. The app consists of two components: 1 and 2. They are deployed in two parallel steps in the Bitbucket pipeline:
pipelines:
  custom:
    1-deploy-to-test:
      - parallel:
          - step:
              name: Deploying 1
              image: google/cloud-sdk:latest
              script:
                - SERVICE_ENV=test
                - GCLOUD_PROJECT="some-project"
                - MEMORY_LIMIT="256Mi"
                - ./deploy.sh
          - step:
              name: Deploying 2
              image: google/cloud-sdk:latest
              script:
                - SERVICE_ENV=test
                - GCLOUD_PROJECT="some-project"
                - MEMORY_LIMIT="256Mi"
                - ./deploy2.sh
The environment variables SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT are always the same for deployments 1 and 2.
Is there any way to define these variables once for both parallel steps?
You can use User-defined variables in Pipelines. For example, you can configure your SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT as repository variables and they will be available to all steps in your pipeline.
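With those three configured as repository variables, the pipeline from the question no longer needs to assign them in each step. A sketch, keeping the original step names and scripts:

pipelines:
  custom:
    1-deploy-to-test:
      - parallel:
          - step:
              name: Deploying 1
              image: google/cloud-sdk:latest
              script:
                - ./deploy.sh    # SERVICE_ENV, GCLOUD_PROJECT and MEMORY_LIMIT come from repository variables
          - step:
              name: Deploying 2
              image: google/cloud-sdk:latest
              script:
                - ./deploy2.sh   # the same repository variables are available here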
To add to the other answers, here's a glimpse of how we currently handle this, inspired by all the other solutions found in Bitbucket's forums.
To let parallel tasks reuse deployment variables (which currently cannot be passed between steps), we first use the bash Docker image to write the environment variables into an artifact. The pure bash image is very fast (it usually runs in under 8 seconds).
Then all the other tasks can run in parallel, benefiting from the deployment and repository variables that we usually set, bypassing the current Bitbucket Pipelines limitations.
definitions:
  steps:
    - step: &set_env
        name: Set multi-steps env variables
        image: bash:5.2.12
        artifacts:
          - set_env.sh
        script:
          ## Pass env
          - echo "Passing all env variables to next steps"
          - >-
            echo "
            export USERNAME=$USERNAME;
            export HOST=$HOST;
            export PORT_LIVE=$PORT_LIVE;
            export THEME=$THEME;
            " >> set_env.sh
    - step: &git_ftp
        name: Git-Ftp
        image: davidwebca/docker-php-deployer:8.0.25
        script:
          # check if the env file exists and load it
          - if [ -e set_env.sh ]; then cat set_env.sh; source set_env.sh; fi
          # ...

pipelines:
  branches:
    staging:
      - parallel:
          - step:
              <<: *set_env
              deployment: staging
      - parallel:
          - step:
              <<: *git_ftp
          - step:
              <<: *root_composer
          - step:
              <<: *theme_assets
According to the forums, Bitbucket staff are working on a solution to allow more flexibility here as we speak (2022-12-01), but we shouldn't expect an immediate release, as it seems complicated to implement safely on their end.
As explained in this link, you can define the environment variables and copy them to a file.
After that, you can share that file between steps as an artifact.
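A minimal sketch of that approach, with made-up variable names and a shared_env.sh file chosen for illustration:

pipelines:
  default:
    - step:
        name: Export shared variables
        script:
          - echo "export SERVICE_ENV=test" >> shared_env.sh
          - echo "export GCLOUD_PROJECT=some-project" >> shared_env.sh
        artifacts:
          - shared_env.sh
    - step:
        name: Consume shared variables
        script:
          - source shared_env.sh
          - echo "$SERVICE_ENV"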
