I have a circle.yml file that looks something like this:
general:
  branches:
    only:
      - master
      - develop
      - /release-[0-9]+(\.[0-9]+)*/

test:
  pre:
    - docker-compose run $SERVICE npm install
  override:
    - docker-compose run $SERVICE npm test
  post:
    - docker-compose run SPECIFIC_COMMAND # this should only trigger for branches that fall under /release-[0-9]+(\.[0-9]+)*/
    - docker-compose stop
Unit tests run when merging to master, develop, or /release-[0-9]+(\.[0-9]+)*/.
However, there's a certain command in the post hook under the test block that I would only like to trigger when merging into /release-[0-9]+(\.[0-9]+)*/. This command must run before the final one, docker-compose stop, which is why I haven't used a deployment block.
It turns out this isn't possible in the test block (unlike the branches or deployment blocks).
The best way around this was to move the conditional logic into a shell script that checks the $CIRCLE_BRANCH environment variable. The script itself is invoked unconditionally from the post step.
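A minimal sketch of that approach (the script name run-release-post.sh is an assumption; the regex and SPECIFIC_COMMAND come from the config above):

#!/bin/bash
# run-release-post.sh (hypothetical name): called unconditionally from the post step,
# but only runs the release-specific command when the branch matches the release pattern.
if [[ "$CIRCLE_BRANCH" =~ ^release-[0-9]+(\.[0-9]+)*$ ]]; then
  docker-compose run SPECIFIC_COMMAND
fi

The post section then calls bash run-release-post.sh right before docker-compose stop.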
Related
I've created around 180 end-to-end tests for a web application. I can't afford to run those sequentially. I've tried running them in parallel via the Cypress Dashboard, but it provides only 500 test runs per month, and after that it no longer works in parallel. In my GitLab runner, I am seeing this error:
Can anyone suggest how I can run tests in parallel with Cypress and GitLab only?
You will need to do some hacks/workarounds, hahaha, but it will work.
First you need to have the file .gitlab-ci.yml in your root path. In your .gitlab-ci.yml, define how many parallel jobs you want; in my example I will use two parallel jobs to run the same tests in different browsers (Chrome and Firefox):
stages:
  - triggers

smoke-test-chrome:
  stage: triggers
  trigger:
    include: gitlab-ci/smoke-test-chrome/.gitlab-ci.yml

smoke-test-firefox:
  stage: triggers
  trigger:
    include: gitlab-ci/smoke-test-firefox/.gitlab-ci.yml
Now you need to create a .gitlab-ci.yml for each parallel job, each in its own directory, all inside a main directory called gitlab-ci/.
In my example I will create two files with the following path:
gitlab-ci/smoke-test-chrome/.gitlab-ci.yml
gitlab-ci/smoke-test-firefox/.gitlab-ci.yml
In the gitlab-ci/smoke-test-chrome/.gitlab-ci.yml file I will have:
stages:
  - triggers

smoke-test-chrome:
  image: cypress/browsers:node16.14.0-slim-chrome99-ff97
  stage: triggers
  script:
    - npm ci
    - npm run smoke:test-chrome
And in the gitlab-ci/smoke-test-firefox/.gitlab-ci.yml file I will have:
stages:
  - triggers

smoke-test-firefox:
  image: cypress/browsers:node16.14.0-slim-chrome99-ff97
  stage: triggers
  script:
    - npm ci
    - npm run smoke:test-firefox
Last, create your specific scripts in your package.json; in my example I created:
"smoke:test-chrome": "cypress run --browser chrome --spec 'cypress/integration/Signup/SmokeTests.test.ts'",
"smoke:test-firefox": "cypress run --browser firefox --spec 'cypress/integration/Signup/SmokeTests.test.ts'",
In your case, you can just change which spec files each job runs by pointing each package.json script at a different set of specs and calling those scripts from the .gitlab-ci.yml files you created before (see the sketch below for splitting by spec folder instead of by browser).
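For splitting the ~180 specs themselves rather than the browsers, a rough sketch (the Signup and Checkout folders are assumptions, not from the original project) could look like this in package.json:

"e2e:part1": "cypress run --spec 'cypress/integration/Signup/**/*'",
"e2e:part2": "cypress run --spec 'cypress/integration/Checkout/**/*'",

Each child .gitlab-ci.yml then calls one of these scripts, so the downstream pipelines run different parts of the suite in parallel.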
Actually, I'm facing a problem in this code:
stages:
  - Changes
  - Lint
  - Build
  - Tests
  - E2E
  - SAST
  - DAST
  - Publish
  - Deployment

# Get Runner Image
image: node:latest

# Set Variables for mysql
variables:
  MYSQL_ROOT_PASSWORD: secret
  MYSQL_PASSWORD:
  ..
  ..

script:
  - ./addons/scripts/ci/lintphp.sh
Why do we use image? I asked, and someone said that we build on it, like the Dockerfile command FROM ubuntu:latest.
Someone else told me it's because it executes the code. I also don't really understand the script tag above: what does it even mean? Does it execute inside the image or on the runner?
GitLab Runner is an open-source application that collects a pipeline job payload and executes it. To do that, it implements a number of executors that can be used to run your builds in different scenarios; if you are using the Docker executor, you need to specify which image will be used to run your builds.
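In other words, with the Docker executor, image selects the container the job runs in, and each line under script is a shell command executed inside that container. A minimal sketch reusing the script path from the question (the job name lint-job is illustrative):

lint-job:
  image: node:latest   # the runner starts a container from this image
  stage: Lint
  script:
    - ./addons/scripts/ci/lintphp.sh   # runs as a shell command inside that container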
I'm using gitlab-runner to run CI/CD locally.
It works properly when I specify all jobs in .gitlab-ci.yml, like this:
stages:
  - test

test1:
  stage: test
  script:
    - echo "ok"
and run gitlab-runner exec shell test1
In general, I'd like to store different jobs in different files. For example, I create test-pipeline.yml, containing the jobs that relate to the test stage, in a folder named .gitlab.
The .gitlab-ci.yml then contains only two rows:
include:
  local: .gitlab/test-pipeline.yml
I commit and push the changes to the remote repo and it works there, but the command gitlab-runner exec shell job_name fails because it can't find such a job.
Perhaps I have to edit some of gitlab-runner's config, but it's not obvious what.
Has anybody faced the same problem?
Thanks in advance!
gitlab-runner exec has many limitations. It does not have all the same features as the regular gitlab-runner. One such limitation is that it does not support the include: statement.
So, you won't be able to use gitlab-runner exec against this kind of config file that uses include:.
I have a test command in my repo that only works when my server is up, because the tests interact with the server once it is running.
Locally I use two commands: in the first terminal, npm run dev gets the server running, and in a second terminal I run npm run test, which runs tests that only pass while the first command is running. How do I achieve this in my GitLab CI/CD test stage job?
Currently I am doing this:
test_job:
  stage: test
  script:
    - npm run dev
    - npm run test
So the pipeline executes npm run dev, which doesn't self-terminate, and my pipeline gets stuck; I can't seem to find the solution. Help and suggestions are appreciated. The stack is TypeScript, Express, GraphQL.
You have two options that you can use for this:
If you're building your server into a container prior to this test job, you can run the container as a service, which will allow you to access it via its service alias. That would look something like this:
test_job:
  stage: test
  services:
    - name: my_test_container:latest
      alias: test_container
  script:
    - npm run test  # should hit http://test_container:<port> to access the service
You can use the nohup Linux command to run your service in the background. This runs the command in the background once it's started up, and it will die when the CI job ends (as part of shutting down the job). That would look like this:
test_job:
  stage: test
  script:
    - nohup npm run dev &
    - npm run test
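One caveat with this option: npm run test can start before the dev server is actually listening. A rough sketch of a readiness wait (the port 4000 is an assumption, not from the question):

test_job:
  stage: test
  script:
    - nohup npm run dev &
    # hypothetical readiness check: poll for up to ~30 seconds until the server responds
    - for i in $(seq 1 30); do curl -s -o /dev/null http://localhost:4000 && break; sleep 1; done
    - npm run test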
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted. This effectively forces you to put all the teardown logic into the .gitlab-ci.yml instead of a script that .gitlab-ci.yml just calls.
Does anyone know a workaround for this? I have a shell script that removes the deployment. This script is part of the repository and can also be called locally (i.e. not only in a CI environment). I want GitLab-CI to call this script when removing a dynamic environment, but it's obviously not there anymore once the branch has been deleted. I also cannot put this script into the artifacts, as it is generated before the build by a configure script and contains secrets. It would be great if one could execute the teardown script before the branch is deleted.
Here's a relevant excerpt from the .gitlab-ci.yml:
deploy_dynamic_staging:
  stage: deploy
  variables:
    SERVICE_NAME: foo-service-$CI_BUILD_REF_SLUG
  script:
    - ./configure
    - make deploy.staging
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    on_stop: stop_dynamic_staging
  except:
    - master

stop_dynamic_staging:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - make teardown # <- this fails
  when: manual
  environment:
    name: staging/$CI_BUILD_REF_SLUG
    action: stop
Probably not ideal, but you can curl the script using the GitLab API before running it:
curl \
  -X GET "https://gitlab.example.com/raw/master/script.sh" \
  -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" > script.sh
GitLab-CI executes the stop-environment script in dynamic environments after the branch has been deleted.
That includes:
An on_stop action, if defined, is executed.
With GitLab 15.1 (June 2022), you can skip that on_stop action:
Force stop an environment
In 15.1, we added a force option to the stop environment API call.
This allows you to delete an active environment without running the specified on_stop jobs in cases where running these defined actions is not desired.
See Documentation and Issue.
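As a rough sketch of that API call (the project and environment IDs are placeholders; see the linked documentation for the exact parameters):

# force=true skips the defined on_stop jobs (GitLab 15.1+)
curl \
  -X POST "https://gitlab.example.com/api/v4/projects/<project_id>/environments/<environment_id>/stop" \
  -H "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --data "force=true"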