My CI has two main steps: build and deploy. The build step uploads an artifact to a Maven Nexus repository, and currently the manual deploy step just takes the latest artifact from Nexus and deploys it.
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "Build and upload to Nexus here"  # placeholder

deploy:
  stage: deploy
  script:
    - echo "Take the latest artifact from Nexus and deploy it"  # placeholder
  when: manual
But it doesn't seem to make much sense to me to always deploy the latest build from every pipeline. Ideally, the deploy step of each pipeline should deploy the artifact that was built by that same pipeline's build job. Otherwise, the deploy step of every pipeline does exactly the same thing regardless of when it is started.
So I have two questions.
1) How can I make my deploy step deploy the version that was built by this run?
2) If I still want to keep the "deploy latest" functionality, does GitLab support adding a task separate from each pipeline? As I explained, this step doesn't make a lot of sense inside the pipeline; I imagine it living in a separate, dedicated place.
I'm not too familiar with Maven and Nexus, but assuming you can name the artifact before you push it, you can include one of the built-in environment variables that identifies which pipeline it came from.
For example:
...
Build:
  stage: build
  script:
    - ./buildAsNormal.sh > build$CI_PIPELINE_ID.extension
    - ./pushAsNormal.sh

Deploy:
  stage: deploy
  script:
    - ./deployAsNormal.sh # (specifying the build$CI_PIPELINE_ID.extension file)
There are a lot of CI environment variables you can use, and they are extremely useful; the full list is in the GitLab documentation. The difference between $CI_PIPELINE_ID and $CI_JOB_ID is that the pipeline id is constant for all jobs in the pipeline, no matter when they execute, so it stays the same even if you run a manual step a week after the automated steps. The job id is specific to each job.
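Because $CI_PIPELINE_ID has the same value in both jobs, the manual deploy job can reconstruct the artifact name without any other handoff. A minimal sketch, where nexus.example.com, the repository path, and deploy.sh are placeholders for your real setup:

deploy:
  stage: deploy
  when: manual
  script:
    # Hypothetical Nexus URL and layout, shown for illustration only
    - curl -fSL -o app.jar "https://nexus.example.com/repository/releases/com/example/app/build${CI_PIPELINE_ID}/app.jar"
    - ./deploy.sh app.jar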
Regarding your comment, artifacts: can solve your problem. You can put the version number in a file and read that file in the next stage:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "1.0.0" > version
    - echo "Build and upload to Nexus here"  # placeholder
  artifacts:
    paths:
      - version
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - VERSION=$(cat version)  # note: no spaces around '=' in shell assignments
    - echo "Take the $VERSION artifact from Nexus and deploy it"  # placeholder
  when: manual
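For the placeholder download step, one hedged option is Maven's dependency:get goal, which resolves an artifact from the repositories configured in your settings.xml (the com.example:app coordinates below are made up):

deploy:
  stage: deploy
  script:
    - VERSION=$(cat version)
    # Assumes your Nexus repository is configured in settings.xml
    - mvn dependency:get -Dartifact=com.example:app:${VERSION}
    - echo "Deploy the fetched artifact here"  # placeholder
  when: manual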
An alternative is to build, push to Nexus, and use artifacts: to pass the result of the build to the deploy job:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "Build and put the result in the out/ directory"  # placeholder
    - echo "Upload the result from out/ to Nexus"  # placeholder
  artifacts:
    paths:
      - out/
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - echo "Take the artifact from the out/ directory and deploy it"  # placeholder
  when: manual
Related
I can run a script to build my project in my gitlab-ci.yaml config. Is there a way to let the pipeline make the file output by the build script downloadable from the browser, so I can get it onto my computer?
You can create a job artifact from the output of commands in your pipeline (like your build script), assuming you have redirected that output to a file.
You can then download job artifacts by using the GitLab UI or the API.
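For the API route, the endpoint for the latest successful artifacts of a ref is GET /projects/:id/jobs/artifacts/:ref_name/download. A sketch, where the host, project id, token, and job name are placeholders:

curl --location --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/jobs/artifacts/main/download?job=build" \
  --output artifacts.zip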
Using the GitLab CLI glab:
glab ci artifact <refName> <jobName> [flags]
# example
glab ci artifact main build
This is my CI config for downloading build files from the GitLab UI via the artifacts field.
stages:
  - build

.build-base:
  stage: build
  script:
    - cd dev/${APP_PATH}
    - yarn ${BUILD_SCRIPT}
  when: manual
  after_script:
    - mv dev/${APP_PATH}/dist ./${APP_OUTPUT_NAME}
  artifacts:
    paths:
      - ./${APP_OUTPUT_NAME}
    expire_in: 2 weeks
    name: ${APP_OUTPUT_NAME}_${CI_JOB_ID}
  tags:
    - kube-runner

order_workbench_sit_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build:sit'
    APP_OUTPUT_NAME: 'order_workbench_sit'

order_workbench_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build'
    APP_OUTPUT_NAME: 'order_workbench'
I am trying to build a GitLab CI/CD pipeline for the first time. I have two stages, build and deploy. The job in the build stage produces artifacts, and the job in the deploy stage wants to upload those artifacts to AWS S3. Both jobs use the same runner but different Docker images.
default:
  tags:
    - dev-runner

stages:
  - build
  - deploy

build-job:
  image: node:14
  stage: build
  script:
    - npm install
    - npm run build:prod
  artifacts:
    paths:
      - deploy/build.zip

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  script:
    - aws s3 cp deploy/build.zip s3://mys3bucket
The build-job successfully creates the artifacts. The GitLab documentation says artifacts will be automatically downloaded and available in the next stage, but it does not specify where and how those artifacts can be consumed in the next stage.
Question
In the deploy-job, will the artifacts be available at the same location, i.e. deploy/build.zip?
The artifacts should be available to the second job at the same location where the first job saved them using the artifacts directive.
I think this question already has an answer on the GitLab forum:
https://forum.gitlab.com/t/access-artifact-in-next-task-to-deploy/9295
You may also need to make sure the jobs run in the correct order using the dependencies directive, which is also mentioned in the forum discussion accessible via the link above.
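For illustration, a minimal sketch of that, reusing the job names from the question (artifacts already flow to later stages by default, so dependencies here mainly makes the relationship explicit):

deploy-job:
  image: docker.xx/xx/gitlab-templates/awscli
  stage: deploy
  dependencies:
    - build-job        # restore artifacts only from build-job
  script:
    - ls -l deploy/    # build.zip is restored at the same relative path
    - aws s3 cp deploy/build.zip s3://mys3bucket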
I have a pipeline that does two main things: 1) builds a static site using content from an external provider, and 2) builds a Docker container from that static site.
At the moment I have these steps in two stages, and the build stage produces an artifact:
stages:
  - build
  - package

build:
  stage: build
  image: node:12
  script:
    - npm ci
    - npm run build
  artifacts:
    untracked: true
    paths:
      - folder/for/project
      - folder/that/was/not/there/before/build/time

package:
  stage: package
  image: docker:stable
  services:
    - docker:dind
  needs:
    - build
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t my-project .
I can't get the package stage to see the built files, though: the docker build -t my-project command builds a version where folder/for/project is present but folder/that/was/not/there/before/build/time is not. Downloading the artifact after the build step completes does give me both folders, so the build step is clearly exporting the right files; they are just not being imported into the next step.
The CI log for the package stage does say that it's downloading something, but I can't tell where it goes or how to access it (someID and tokenHere match values seen in the upload-artifacts part of the build step):
Downloading artifacts for build (someID)...
Downloading artifacts from coordinator... ok id=someID responseStatus=200 OK token=tokenHere
How do I pass these files from one stage in my pipeline to the next?
I've now managed to get this working; hopefully this will help someone else in the future. The issue was that I had named my jobs the same as my stages (build, test, etc.), which seems to confuse GitLab. Using the example from the question, the working solution is:
stages:
  - build
  - package

build-job:
  stage: build
  image: node:12
  script:
    - npm ci
    - npm run build
  artifacts:
    untracked: true
    paths:
      - folder/for/project
      - folder/that/was/not/there/before/build/time

package:
  stage: package
  image: docker:stable
  services:
    - docker:dind
  dependencies:
    - build-job
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t my-project .
Notice I've changed the build job to build-job and updated the dependency name; the stage names have remained the same.
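For what it's worth, needs: can also keep working here: jobs listed under needs: download those jobs' artifacts by default. A sketch of the equivalent with needs: instead of dependencies: would be:

package:
  stage: package
  image: docker:stable
  services:
    - docker:dind
  needs:
    - job: build-job
      artifacts: true   # this is the default, shown explicitly
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t my-project .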
In my repo, only source files are checked in; the code is tested and the dist files are generated in a pipeline. I then want to be able to tag a specific version and attach the artifacts generated by that pipeline to it. Ideally this should all happen with as little manual intervention as possible.
What is the best way to reference pipeline artifacts from a release?
You can use release-cli in a stage after the job that built your app. To attach a file from a previous job to the release, you need that build job's id, which you can store in a file and pass along as an artifact:
build:
  stage: build
  script:
    - echo "Build your app"
    - echo "${CI_JOB_ID}" > CI_JOB_ID.txt  # this way the next stage knows the build job id
  artifacts:
    paths:
      - your_app.exe
      - CI_JOB_ID.txt
    expire_in: never
  rules:
    - if: $CI_COMMIT_TAG

release:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - |
      release-cli create --name "Release $CI_COMMIT_TAG" --tag-name $CI_COMMIT_TAG \
        --assets-link "{\"name\":\"Executable file\",\"url\":\"https://gitlab.com/some/repo/-/jobs/`cat CI_JOB_ID.txt`/artifacts/file/your_app.exe\"}"
  rules:
    - if: $CI_COMMIT_TAG
This way, every time you tag your repo, a release is created if the pipeline succeeds.
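If you prefer the built-in release: keyword to calling release-cli by hand, a hedged alternative is to pass the build job id forward as a dotenv artifact, so it becomes a variable in the release job (the repo URL is the same placeholder as above):

build:
  stage: build
  script:
    - echo "Build your app"
    - echo "BUILD_JOB_ID=${CI_JOB_ID}" > build.env
  artifacts:
    paths:
      - your_app.exe
    reports:
      dotenv: build.env   # exposes BUILD_JOB_ID to later jobs
  rules:
    - if: $CI_COMMIT_TAG

release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    name: "Release $CI_COMMIT_TAG"
    tag_name: $CI_COMMIT_TAG
    assets:
      links:
        - name: "Executable file"
          url: "https://gitlab.com/some/repo/-/jobs/${BUILD_JOB_ID}/artifacts/file/your_app.exe"
  rules:
    - if: $CI_COMMIT_TAG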
We have a project hosted on an internal GitLab installation.
The project's pipeline has three stages:
Build
Tests
Deploy
The objective is to hide or disable the Deploy stage when the Tests stage fails.
The problem is that we can't use artifacts because they are lost each time our machines reboot.
My question: is there an alternative to artifacts for achieving this?
The .gitlab-ci.yml in use looks like this:
stages:
  - build
  - tests
  - deploy

build_job:
  stage: build
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

all_tests:
  stage: tests
  tags:
    # - ....
  before_script:
    # - ....
  script:
    # - ....
  when: manual
  only:
    - develop
    - master

prod:
  stage: deploy
  tags:
    # - ....
  script:
    # - ....
  when: manual
  environment: prod
I think you might have misunderstood the purpose of the built-in CI. The goal is to have building and testing automated on each commit, or at least on every push; having all tasks set to manual execution gives you almost no advantage over external CI tools like Jenkins or Bamboo. Right now, your only advantage over running the targets locally is visibility in a central place.
That said, there is no way to conditionally show or hide CI jobs, because that goes against the basic idea. If you insist on your approach, you could look up the artifacts of the previous stages and abort the manual execution if something is wrong, as in the sketch below.
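A minimal sketch of that idea: the tests job leaves a marker file behind as an artifact, and the deploy job refuses to run without it (the marker name, run_tests.sh, and deploy.sh are made up for illustration):

all_tests:
  stage: tests
  script:
    - ./run_tests.sh
    - echo "ok" > tests_passed   # marker consumed by the deploy job
  artifacts:
    paths:
      - tests_passed
  when: manual

prod:
  stage: deploy
  script:
    - test -f tests_passed || { echo "Tests did not pass; aborting."; exit 1; }
    - ./deploy.sh
  when: manual
  environment: prod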
The problem is that we can't use artifacts because they are lost each time our machines reboot
AFAIK artifacts are uploaded to the GitLab server (the "coordinator" in the runner logs) and not saved on the runners, so you should be fine having your artifacts passed from stage to stage.
By the way, the default for when is on_success, which means a job executes only when all jobs from prior stages have succeeded.
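So if you drop when: manual from the deploy job, it is skipped automatically whenever the tests fail, which is the behaviour you asked for. A minimal sketch (deploy.sh is a placeholder):

prod:
  stage: deploy
  script:
    - ./deploy.sh
  when: on_success   # the default: run only if all prior stages succeeded
  environment: prod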