How to move files from one directory to another on a branch during a GitLab pipeline run?

I have created a Docs folder that should contain the report files.
I'm trying to move the content of the temporarily created allure-report folder into the Docs folder and then copy everything from Docs into the public folder, so that the Allure report is accessible on Pages. I'm doing this instead of simply copying files from the allure-report folder to public because I want to keep a history of previous runs. Maybe there is a better way to do it? I'd like to store old reports for some time X (for example two days, so I can check Allure to see what went wrong if there are problems) and then delete the old ones, keeping the latest reports that haven't reached the "deletion point" yet. So, here is my yml file:
stages:
  - testing
  - deploy

docker_job:
  stage: testing
  tags:
    - docker
  image: atools/chrome-headless:java11-node14-latest
  before_script:
    - npm ci
    - npx playwright install
    - npm install allure-commandline --save-dev
  script: #||true
    - npx playwright test
  after_script:
    - npx allure generate allure-results
  rules:
    - when: always
  allow_failure: true
  artifacts:
    when: always
    paths:
      - ./allure-report
    expire_in: 1 day

pages:
  stage: deploy
  script:
    - mkdir public
    - mv ./allure-report/* Docs
    - cp -R ./Docs/* public
  artifacts:
    paths:
      - public
  rules:
    - when: always
Everything runs, but it doesn't work: mv ./allure-report/* Docs and cp -R ./Docs/* public do nothing, or at least I can't see any effect. Please help me solve this correctly.
Maybe there are obvious holes in the logic, I don't know; I've tried a lot of variants but none of them work.
Can it be done my way at all?
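For reference, a minimal sketch of the deploy job with the target directories created up front (mkdir -p, since empty folders aren't tracked by git, so a Docs folder with no files in it won't exist in the job's checkout). Note that whatever is moved into Docs this way only exists in that job's workspace and is not committed back to the branch, which is why preserving history still needs the git push used in the working version below:

pages:
  stage: deploy
  script:
    - mkdir -p Docs public        # create both folders; empty directories are not kept in git
    - mv ./allure-report/* Docs/
    - cp -R ./Docs/* public/
  artifacts:
    paths:
      - public
  rules:
    - when: always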

Okay, so I got it working by "logging in" with my git config email/name and then, using an auth token (the ${token} CI/CD variable), pushing these artifacts to the Docs folder. It looks like this:
stages:
  - testing
  - deploy

docker_job:
  stage: testing
  tags:
    - docker
  image: atools/chrome-headless:java11-node14-latest
  before_script:
    - npm ci
    - npx playwright install
    - npm install allure-commandline --save-dev
  script: #||true
    - npx playwright test
  after_script:
    - npx allure generate allure-results
  rules:
    - when: always
  allow_failure: true
  artifacts:
    when: always
    paths:
      - ./allure-report
    expire_in: 15 mins

pages:
  stage: deploy
  script:
    - cp -r -u ./allure-report/* Docs
    - cp -R ./Docs/* public
    - git config --global user.email "mail"
    - git config --global user.name "name"
    - git remote set-url origin https://gitlab-ci-token:${token}@gitlab.com/proj_link
    - git checkout main
    - git add Docs
    - git commit -m "assets"
    - git push
  artifacts:
    paths:
      - public
  rules:
    - when: always

Related

How to continue running scripts if one of them fails? GitLab CI

here is my yml file:
stages:
  - testing
  - deploy

docker_job:
  stage: testing
  tags:
    - docker
  image: atools/chrome-headless:java11-node14-latest
  before_script:
    - npm ci
    - npx playwright install
    - npm install allure-commandline --save-dev
  script:
    - npm run BookingTestDEV --project=BookingTesting
    - npx playwright test --project=BookEngineTests
    - npm run BookingTestNEO --project=BookingTesting
  after_script:
    - npx allure generate allure-results --clean
  rules:
    - when: always
  allow_failure: true
  artifacts:
    when: always
    paths:
      - ./allure-report
    expire_in: 7 day

pages:
  stage: deploy
  script:
    - mkdir public
    - mv ./allure-report/* public
  artifacts:
    paths:
      - public
  rules:
    - when: always
If the first script, npm run BookingTestDEV --project=BookingTesting, fails, the others will be skipped. How do I run them anyway? Is there any analog of if: always() like on GitHub?
In most cases the pipeline should fail if one of the commands returns a non-zero exit code. However, in some cases the rest of the commands should run anyway. A possible solution is to add || true at the end of the command; note that this masks the command's exit status, so the job is reported as successful even if that command fails.
For example:
script:
  - npm run BookingTestDEV --project=BookingTesting || true
  - npx playwright test --project=BookEngineTests
  - npm run BookingTestNEO --project=BookingTesting

How to build a specific dist folder for a specific branch in GitLab CI/CD

How to build a specific dist folder for a specific branch in GitLab CI/CD?
I have a main project that contains two folders, and I want to build only the folder I changed. For example, when I change code in the Web1 folder, the CI/CD should be triggered and build only the Web1 folder, not all folders at the same time.
stages:
  - build

build:
  stage: build
  tags:
    - test
  only:
    - test
  script:
    # Web1
    - cd Web1
    - npm install
    - npm run build
    - rm -rf /home/user/test/web1
    - cp -r ../dist/web1 /home/user/test/web1
    - cd ..
    # Web2
    - cd Web2
    - npm install
    - npm run build
    - rm -rf /home/user/test/web2
    - cp -r ../dist/web2 /home/user/test/web2
    - cd ..
You should create two different jobs for this, each triggered with the help of the changes keyword.
One job is triggered when changes are made in the "Web1" folder:
build_web1:
  stage: build
  only:
    changes:
      - Web1/**/*
  script:
    - ...
and the other when changes are made in the "Web2" folder
build_web2:
  stage: build
  only:
    changes:
      - Web2/**/*
  script:
    - ...
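Putting this together with the commands from the original job (paths and folder names copied from the question, so adjust them to your layout; the refs: test entry keeps the original branch restriction), build_web1 would look roughly like this, and build_web2 is the mirror image:

build_web1:
  stage: build
  tags:
    - test
  only:
    refs:
      - test
    changes:
      - Web1/**/*
  script:
    - cd Web1
    - npm install
    - npm run build
    - rm -rf /home/user/test/web1
    - cp -r ../dist/web1 /home/user/test/web1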

How to write .gitlab-ci.yml to build/deploy with conditions

I am new to CI/CD and GitLab. I have a CI/CD script to test, build and deploy, and I use two branches and two EC2 instances. My goal is to have a light, non-redundant script that builds and deploys my changes depending on the branch.
Currently my script looks like this, but after looking at the GitLab docs I saw many conditional keywords like rules, and I'm really lost about how to use conditions in my script to optimise it.
Is there a way to use a condition to run certain scripts depending on which branch is merged? Thanks in advance!
#image: alpine
image: "python:3.7"

before_script:
  - python --version

stages:
  - test
  - build_staging
  - build_prod
  - deploy_staging
  - deploy_prod

test:
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest Flask_server/test_app.py
  only:
    refs:
      - develop

build_staging:
  stage: build_staging
  image: node
  before_script:
    - npm install -g npm
    - hash -d npm
    - nodejs -v
    - npm -v
  script:
    - cd client
    - npm install
    - npm update
    - npm run build:staging
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes
  only:
    refs:
      - develop

build_prod:
  stage: build_prod
  image: node
  before_script:
    - npm install -g npm
    - hash -d npm
    - nodejs -v
    - npm -v
  script:
    - cd client
    - npm install
    - npm update
    - npm run build
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes
  only:
    refs:
      - master

deploy_staging:
  stage: deploy_staging
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # gitlab image for aws cli commands
  before_script:
    - apt-get update
    # - apt-get -y install python3-pip
    # - apt-get --assume-yes install awscli
    - apt-get --assume-yes install -y shellcheck
  script:
    - shellcheck .ci/deploy_aws_STAGING.sh
    - chmod +x .ci/deploy_aws_STAGING.sh
    - .ci/deploy_aws_STAGING.sh
    - aws s3 cp client/dist/ s3://......./ --recursive
  only:
    refs:
      - develop

deploy_prod:
  stage: deploy_prod
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # gitlab image for aws cli commands
  before_script:
    - apt-get update
    # - apt-get -y install python3-pip
    # - apt-get --assume-yes install awscli
    - apt-get --assume-yes install -y shellcheck
  script:
    - shellcheck .ci/deploy_aws_PROD.sh
    - chmod +x .ci/deploy_aws_PROD.sh
    - .ci/deploy_aws_PROD.sh
    - aws s3 cp client/dist/ s3://........../ --recursive
  only:
    refs:
      - master
GitLab introduced rules for include with version 14.2:
include:
  - local: builds.yml
    rules:
      - if: '$INCLUDE_BUILDS == "true"'
A good pattern as your CI/CD grows in complexity is to use the include and extends keywords. For example, you could implement the following in your root-level .gitlab-ci.yml file:
# best practice is to pin to a specific version of node or build your own image to avoid surprises
image: node:12

# stages don't need an environment appended to them; you'll see why in the included file
stages:
  - build
  - test
  - deploy

# cache node modules in between jobs on a per branch basis like this
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm/

# include other definitions
include:
  - './ci-templates/.foo-app-ci.yml'
Then in another folder (or even another repository) you can include other templates. I didn't fully refactor this for you, but I hope it gives you an idea of not only how to use a rule to trigger your job, but also how to start making reusable snippets and building on them to reduce the overall complexity. See the yaml comments for guidance on why I did things a certain way. Example .foo-app-ci.yml file:
# this script was repeated so define it once and reference it via anchor
.npm:install: &npm:install
  - npm ci --cache .npm --prefer-offline # to use the cache you'll need to do this before installing dependencies
  - cd client
  - npm install
  - npm update

# you probably want the same rules for each stage. define once and reuse them via anchor
.staging:rules: &staging:rules
  - if: $CI_COMMIT_TAG
    when: never # Do not run this job when a tag is created manually
  - if: $CI_COMMIT_BRANCH == 'develop' # Run this job when commits are pushed or merged to the develop branch

.prod:rules: &prod:rules
  - if: $CI_COMMIT_TAG
    when: never # Do not run this job when a tag is created manually
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch

# many parts of the build stage were repeated; define it once and let's extend from it
.build:template:
  stage: build
  before_script:
    - *npm:install
  artifacts:
    paths:
      - client/dist/
    expire_in: 30 minutes

# many parts of the deploy stage were repeated; define it once and let's extend from it
.deploy:template:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest # gitlab image for aws cli commands
  before_script:
    - apt-get update
    - apt-get --assume-yes install -y shellcheck

# here we extend from the build template to run the staging specific build
build:staging:
  extends: .build:template
  environment: staging
  script:
    - npm run build:staging
  rules: *staging:rules

# this is kind of an oddball... not used to seeing python to test a node app. we're not able to reuse as much here
test:staging:
  image: "python:3.7"
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest Flask_server/test_app.py
  rules: *staging:rules # apply staging rules to trigger test stage
  needs:
    - job: build:staging # normally we want to build before test; this will trigger test after the build

# here we extend from the build template to run the prod specific build
build:prod:
  extends: .build:template
  environment: prod
  script:
    - npm run build
  rules: *prod:rules

# same thing for the deploy phases... extend from the deploy template for env specific requirements
deploy:staging:
  extends: .deploy:template
  script:
    - shellcheck .ci/deploy_aws_STAGING.sh
    - chmod +x .ci/deploy_aws_STAGING.sh
    - .ci/deploy_aws_STAGING.sh
    - aws s3 cp client/dist/ s3://......./ --recursive
  rules: *staging:rules
  needs:
    - job: build:staging
      artifacts: true

deploy:prod:
  extends: .deploy:template
  script:
    - shellcheck .ci/deploy_aws_PROD.sh
    - chmod +x .ci/deploy_aws_PROD.sh
    - .ci/deploy_aws_PROD.sh
    - aws s3 cp client/dist/ s3://........../ --recursive
  rules: *prod:rules
  needs:
    - job: build:prod
      artifacts: true
I would start basic and as you start to get comfortable with a working pipeline you can experiment with further enhancements and breaking out into more fragments. Hope this helps!

How do I deploy a sapper/svelte site to Gitlab Pages?

I am trying to use gitlab pages to host my static site generated by Sapper and Svelte.
I used the sapper starter app from the getting started docs:
npx degit "sveltejs/sapper-template#rollup" my-app
I added the .gitlab-ci.yml file as the GitLab docs instructed:
# This file is a template, and might need editing before it works on your project.
image: node:latest

# This folder is cached between builds
# http://docs.gitlab.com/ce/ci/yaml/README.html#cache
cache:
  paths:
    - node_modules/

pages:
  stage: deploy
  script:
    - npm run export
    - mkdir public
    - mv __sapper__/export public
  artifacts:
    paths:
      - public
  only:
    - master
When the pipeline runs, it says it passes, but I still get a 404 error even after a day of waiting.
Has anyone successfully done this with Sapper?
You're moving the export folder, rather than its contents. Change your move command to
mv __sapper__/export/* public/
so that your config would be
# This file is a template, and might need editing before it works on your project.
image: node:latest

# This folder is cached between builds
# http://docs.gitlab.com/ce/ci/yaml/README.html#cache
cache:
  paths:
    - node_modules/

pages:
  stage: deploy
  script:
    - npm run export
    - mkdir public
    - mv __sapper__/export/* public/
  artifacts:
    paths:
      - public
  only:
    - master

GitLab CI: configure yaml file on NodeJS

I have a problem with the scss-lint test in my Node.js project.
When the tests reach scss-lint, it gives an error.
How do I make sure the tests don't fail when the lint itself is actually successful?
My gitlab-ci.yml:
image: node:wheezy

cache:
  paths:
    - node_modules/

stages:
  - build
  - test

gem_lint:
  image: ruby:latest
  stage: build
  script:
    - gem install scss_lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master

install_dependencies:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master

scss-lint:
  stage: test
  script:
    - npm run lint:scss-lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master
You are doing it wrong.
Each job you define (gem_lint, install_dependencies, and scss-lint) is run with its own context.
So your problem here is that during the last step, it doesn't find the scss-lint gem you installed because it switched its context.
You should execute all the scripts at the same time, in the same context:
script:
  - gem install scss_lint
  - npm install
  - npm run lint:scss-lint
Of course, for this you need a docker image that has both npm and gem installed (maybe you can find one on Docker Hub), or you can choose one (for example ruby:latest) and add, as the first script line, one that installs npm:
- curl -sL https://deb.nodesource.com/setup_6.x | bash -
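Put together, a rough sketch of the combined job could look like the following; ruby:latest plus the NodeSource setup script is just one way to get gem and npm into the same image, and the Node version used here is only an example:

scss-lint:
  stage: test
  image: ruby:latest
  before_script:
    - gem install scss_lint
    - curl -sL https://deb.nodesource.com/setup_6.x | bash -
    - apt-get install -y nodejs   # the setup script only adds the repository; this installs node and npm
  script:
    - npm install
    - npm run lint:scss-lint
  only:
    - dev
  except:
    - master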
