CI Build not uploading artifacts - gitlab

I am new to this and am trying to set up a CI/CD pipeline. The CI build stage is not uploading artifacts, as shown below. I have provided my current YAML script. What steps do I need to take to correct the errors so that an artifact is generated for deployment?
GitLab YAML script:
stages:
  - build
  - deploy

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME

build:
  tags: [docker]
  stage: build
  image: node:latest
  script:
    - cd client
    - npm install
    - CI='' npm run build-prod
  artifacts:
    expose_as: 'Arti-Reports'
    paths:
      - build.
    expire_in: 24 hour
    when: on_success
The build job in the CI pipeline shows:
Uploading artifacts for successful job
Uploading artifacts...
WARNING: build.: no matching files
ERROR: No files to upload
Cleaning up file based variables
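The warning points at the artifacts path: artifacts:paths entries are resolved relative to the project root ($CI_PROJECT_DIR), not to the directory the script cd'd into, and build. also has a stray trailing dot. Assuming npm run build-prod writes its output to client/build/ (adjust the path if your build emits dist/ or something else), a corrected job could look like this:

build:
  tags: [docker]
  stage: build
  image: node:latest
  script:
    - cd client
    - npm install
    - CI='' npm run build-prod
  artifacts:
    expose_as: 'Arti-Reports'
    paths:
      - client/build/    # path is relative to the project root, not to the client/ directory
    expire_in: 24 hours
    when: on_success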

Related

Is there a way to download a file (output by a CI job) from the browser in GitLab CI?

I can run a script to build my project in the gitlab-ci.yaml config. Is there a way to let the pipeline expose the file output by the build script so I can download it from the browser and find it on my computer?
You could create a job artifact from the output of commands in your pipeline (like your build script), assuming you have redirected that output to a file.
You can then download job artifacts by using the GitLab UI or the API.
Using the GitLab CLI glab:
glab ci artifact <refName> <jobName> [flags]
# example
glab ci artifact main build
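As a minimal sketch of the redirect-to-file approach (the job name, build script, and output filename here are placeholders):

build:
  stage: build
  script:
    # redirect the build command's output into a file so it can be kept as an artifact
    - ./build.sh > build-output.log 2>&1
  artifacts:
    paths:
      - build-output.log
    expire_in: 1 week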
This is my CI config for downloading build files from the GitLab UI via the artifacts field.
stages:
  - build

.build-base:
  stage: build
  script:
    - cd dev/${APP_PATH}
    - yarn ${BUILD_SCRIPT}
  when: manual
  after_script:
    - mv dev/${APP_PATH}/dist ./${APP_OUTPUT_NAME}
  artifacts:
    paths:
      - ./${APP_OUTPUT_NAME}
    expire_in: 2 week
    name: ${APP_OUTPUT_NAME}_${CI_JOB_ID}
  tags:
    - kube-runner

order_workbench_sit_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build:sit'
    APP_OUTPUT_NAME: 'order_workbench_sit'

order_workbench_build:
  extends: .build-base
  variables:
    APP_PATH: 'order-management'
    BUILD_SCRIPT: 'build'
    APP_OUTPUT_NAME: 'order_workbench'

How to use the output/artifact of one GitLab CI pipeline stage in a subsequent stage?

I have a pipeline that does 2 main things: 1) builds a static site using content from an external provider and 2) builds a docker container from that static site.
At the moment, I have these steps in two stages, and the build stage produces an artifact:
stages:
  - build
  - package

build:
  stage: build
  image: node:12
  script:
    - npm ci
    - npm run build
  artifacts:
    untracked: true
    paths:
      - folder/for/project
      - folder/that/was/not/there/before/build/time

package:
  stage: package
  image: docker:stable
  services:
    - docker:dind
  needs:
    - build
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t my-project .
I can't get the package stage to see the built files though - the docker build -t my-project command will build a version where folder/for/project is present but folder/that/was/not/there/before/build/time is not. Downloading the artifact after the build step is completed does give me both folders, so clearly it is exporting the right stuff from that step and not importing it into the next.
The CI log for the package stage does say that it's downloading something but I can't tell where it goes or how to access it (someID and tokenHere match values seen in the upload artifacts bit of the build step)
Downloading artifacts for build (someID)...
Downloading artifacts from coordinator... ok id=someID responseStatus=200 OK token=tokenHere
How do I pass these files from one stage in my pipeline to the next?
I've now managed to get this working, hopefully this will help someone else in the future. The issue was that I had named my jobs the same names as my stages (build, test, etc) so GitLab must have been getting confused. Using the example in the question, the working solution would be:
stages:
  - build
  - package

build-job:
  stage: build
  image: node:12
  script:
    - npm ci
    - npm run build
  artifacts:
    untracked: true
    paths:
      - folder/for/project
      - folder/that/was/not/there/before/build/time

package:
  stage: package
  image: docker:stable
  services:
    - docker:dind
  dependencies:
    - build-job
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t my-project .
Notice I've changed the build job to build-job and updated the dependency name, and the stage names have remained the same.
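For what it's worth, on GitLab versions that support needs:, the same artifact transfer can be expressed there too; a sketch using the renamed job (the rest of the package job stays as above):

package:
  stage: package
  image: docker:stable
  services:
    - docker:dind
  needs:
    - job: build-job
      artifacts: true   # artifacts from build-job are downloaded before this job runs (true is the default)
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t my-project .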

Deploying a certain build with GitLab

My CI has two main steps: build and deploy. The result of the build is that an artifact is uploaded to Maven Nexus, and the manual deploy step currently just takes the latest artifact from Nexus and deploys it.
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and upload to nexus here

deploy:
  stage: deploy
  script:
    - // Take latest artifact from nexus and deploy
  when: manual
But to me it doesn't make much sense to always deploy the latest build from every pipeline. Ideally, the deploy step of each pipeline should deploy the artifact that was built by that same pipeline's build task; otherwise the deploy step of every pipeline does exactly the same thing regardless of when it is started.
So I have two questions.
1) How can I make my deploy step deploy the version that was built by this run?
2) If I still want to keep the "deploy latest" functionality, does GitLab support adding a task separate from each pipeline? As I explained, this step doesn't make a lot of sense inside the pipeline; I imagine it living in a separate, specific place.
I'm not too familiar with Maven and Nexus, but assuming you can name the artifact before you push it, you can add one of the built-in environment variables that indicates which pipeline it's from.
ie:
...
Build:
  stage: build
  script:
    - ./buildAsNormal.sh > build$CI_PIPELINE_ID.extension
    - ./pushAsNormal.sh

Deploy:
  stage: deploy
  script:
    - ./deployAsNormal  # (but specify the build$CI_PIPELINE_ID.extension file)
There are a lot of CI environment variables you can use that are extremely useful; the full list is in GitLab's predefined variables documentation. The difference between $CI_PIPELINE_ID and $CI_JOB_ID is that the pipeline id is constant for all jobs in the pipeline, no matter when they execute. That means the pipeline id will be the same even if you run a manual step a week after the automated steps. The job id is specific to each job.
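As a small illustration of that difference (hypothetical jobs that only echo the two variables; both print the same $CI_PIPELINE_ID but different $CI_JOB_ID values):

first-job:
  stage: build
  script:
    - echo "pipeline=$CI_PIPELINE_ID job=$CI_JOB_ID"

second-job:
  stage: deploy
  when: manual   # even if triggered days later, $CI_PIPELINE_ID is unchanged
  script:
    - echo "pipeline=$CI_PIPELINE_ID job=$CI_JOB_ID"   # same pipeline id as first-job, different job id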
Regarding your comment, the use of artifacts: can solve your problem. You can put the version number in a file and read the file in the next stage:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - echo "1.0.0" > version
    - // Build and upload to nexus here
  artifacts:
    paths:
      - version
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - VERSION=$(cat version)
    - // Take the artifact from nexus using VERSION variable and deploy
  when: manual
An alternative is to build, push to Nexus, and use artifacts: to pass the result of the build to the deploy job:
stages:
  - build
  - deploy

full:
  stage: build
  image: ubuntu
  script:
    - // Build and put the result in the out/ directory
    - // Upload the result from out/ to nexus
  artifacts:
    paths:
      - out/
    expire_in: 1 week

deploy:
  stage: deploy
  script:
    - // Take the artifact from the out/ directory and deploy it
  when: manual

How to delete artifacts directory on gitlab runner after uploading them to gitlab?

I'm trying to create a gitlab job that shows a metric for test code coverage. To do that, I'm creating a .coverage file and placing it in a directory that uploads artifacts. In a subsequent stage the artifacts are downloaded and consumed by a coverage tool to produce a coverage report. I noticed that the artifacts are not deleted when the gitlab runner finishes the job and are bloating my filesystem. How can I remove the artifacts directory after the artifacts are uploaded?
Here's what we currently have:
stages:
  - test
  - build

before_script:
  - export GITLAB_ARTIFACT_DIR="$(pwd)"/artifacts

[...]

some-test:
  stage: test
  script:
    - [some script that puts something in ${GITLAB_ARTIFACTS_DIR}]
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/

some-other-test:
  stage: test
  script:
    - [some script that puts something in ${GITLAB_ARTIFACTS_DIR}]
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/

[...]

coverage:
  stage: build
  before_script:
  script:
    - [our coverage script]
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    expire_in: 4 days
    paths:
      - artifacts/
    when: always

[...]

after_script:
  - sudo rm -rf "${GITLAB_ARTIFACT_DIR}"
According to https://gitlab.com/gitlab-org/gitlab-runner/issues/4146, after_script does not have access to environment variables exported in before_script or script.
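As a small aside illustrating that limitation (hypothetical job; it only demonstrates variable visibility, not the cleanup itself): a value exported in before_script is not visible in after_script, while a variable declared with the variables: keyword is.

show-after-script-vars:
  variables:
    ARTIFACT_DIR: "$CI_PROJECT_DIR/artifacts"   # visible to script and to after_script
  before_script:
    - export EXPORTED_DIR="$(pwd)/artifacts"    # visible to script only
  script:
    - echo "script sees '$EXPORTED_DIR' and '$ARTIFACT_DIR'"
  after_script:
    - echo "after_script sees '$EXPORTED_DIR' (empty) and '$ARTIFACT_DIR'"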
A solution could be to use cache and artifact simultaneously.
This config will create a new directory, named after the job id ($CI_JOB_ID), for each job execution:
stages:
  - test

remote:
  stage: test
  script:
    - mkdir cache-$CI_JOB_ID
    - echo hello > cache-$CI_JOB_ID/foo.txt
  cache:
    key: build-cache
    paths:
      - cache-$CI_JOB_ID/
  artifacts:
    paths:
      - cache-$CI_JOB_ID/foo.txt
    expire_in: 1 week
At the next run, the previous cache-$CI_JOB_ID will be removed and replaced by a new directory (as the $CI_JOB_ID will be different). This will keep only one instance of your cached file until the next job execution.
Note: you need to prefix the directory name with cache-, otherwise the .gitlab-ci.yml is invalid.

CI artifacts are not uploaded if the stage failed even with `when: always` in the config

I use PowerShell and MSBuild for both the build and test stages.
The configuration for the test stage is like this:
test:
  stage: test
  artifacts:
    when: always
    name: "${CI_BUILD_STAGE}_${CI_BUILD_REF_NAME}"
    expire_in: 1 week
    paths:
      - TestResults/
  dependencies:
    - build
  script:
    - ./.gitlab-ci/Test.ps1
  tags:
    - powershell
    - msbuild
On a successful test run (Test.ps1 returns 0), it uploads the artifacts as it should:
Uploading artifacts...
TestResults/: found 15 matching files
Uploading artifacts to coordinator... ok
Build succeeded
However, if the test run failed, it just fails and doesn't upload anything:
ERROR: Build failed: exit status 1
SOLVED
As cascaval suggested, I had to update the runner.
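For anyone hitting the same thing, a quick way to check the runner version and, assuming the runner was installed from GitLab's package repository on a Debian/Ubuntu host, upgrade it:

gitlab-runner --version              # check the installed runner version
sudo apt-get update                  # assuming installation from GitLab's apt repository
sudo apt-get install gitlab-runner   # pulls the latest released runner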
