Can someone please tell me how to run a job in detached mode? A job in my pipeline takes 30 minutes to complete, and I want the pipeline to proceed without waiting for this job to finish.
For example:
stages:
  - build
  - build2
  - test

newservice:
  stage: build
  script:
    - echo "build is done"

newservice1:
  stage: build2
  script:
    - echo "build1 is done"
    - sleep 60

mygotservice:
  stage: test
  needs: ["newservice"]
  script:
    - echo "test is done"
I want the pipeline to proceed ahead without waiting for newservice1.
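For reference, a minimal sketch of how needs decouples jobs from stage ordering (assuming a GitLab version with DAG support, 12.2 or later; the job names are illustrative): a job that lists its needs starts as soon as those jobs finish, and a job with an empty needs list starts immediately.

slow_job:
  stage: build2
  script:
    - sleep 1800        # stand-in for the 30-minute job

fast_job:
  stage: test
  needs: []             # empty needs list: starts right away, without waiting for build2
  script:
    - echo "runs without waiting for slow_job"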
I have the following jobs and stages. With this YAML configuration, the test stage runs in scheduled and regular pipelines when I set the $RUN_JOB variable to true in my schedule and in my project's CI/CD variables. But this also schedules scheduled-test-1 and scheduled-test-2 in my scheduled pipelines.
What I want is for the test stage to keep running in scheduled and regular pipelines, while scheduled-test-1 and scheduled-test-2 are not scheduled together with the test stage.
stages:
  - build
  - test
  - deploy
  - scheduled-test-1
  - scheduled-test-2

build:
  script:
    - echo $Service_Version
  only:
    - develop
  except:
    - schedules

test:
  script:
    - echo $Service_Version
  only:
    variables:
      - $RUN_JOB

deploy:
  script:
    - echo $Service_Version
  only:
    - develop
  except:
    - schedules

scheduled-test-1:
  script:
    - echo $Service_Version
  only:
    - schedules

scheduled-test-2:
  script:
    - echo $Service_Version
  only:
    - schedules
There might be a simpler option, but using artifacts:reports, as in "Exporting environment variables from one stage to the next in GitLab CI", might help.
The idea would be to export a RUN_TEST environment variable (a flag signaling that the test job has executed) and use it in the scheduled-test-x jobs in a rules section:
rules:
  - if: $RUN_TEST != ""
While those scheduled-test-x jobs might still appear in scheduled pipelines, at least they would not run, since their rule would evaluate to false.
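For reference, a minimal sketch of what that could look like, assuming the artifacts:reports:dotenv mechanism from the linked answer (the job.env file name is an illustrative choice; the rules check follows the suggestion above):

test:
  script:
    - echo $Service_Version
    - echo "RUN_TEST=true" >> job.env   # flag that the test job ran
  artifacts:
    reports:
      dotenv: job.env                   # exposes RUN_TEST to jobs in later stages
  only:
    variables:
      - $RUN_JOB

scheduled-test-1:
  script:
    - echo $Service_Version
  rules:
    - if: $RUN_TEST != ""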
I have a YAML file as below. Say a *.md file is committed: the build job does not run, but the test job does. How can I make test depend on build, so that if build does not run, test does not run either?
Thanks in advance.
build:
  stage: build
  script:
    - echo "Build is running"
  only:
    changes:
      - Dockerfile
      - requirements.txt
      - ./configs/*

test:
  stage: test
  script:
    - echo "Test is running"
    - echo "$CI_JOB_STAGE"
  dependencies:
    - build
That should be what stages defines:
Use stages to define stages that contain groups of jobs.
stages is defined globally for the pipeline.
Use stage in a job to define which stage the job is part of.
The order of the stages items defines the execution order for jobs:
Jobs in the same stage run in parallel.
Jobs in the next stage run after the jobs from the previous stage complete successfully.
For example:
stages:
  - build
  - test
  - deploy
All jobs in build execute in parallel.
If all jobs in build succeed, the test jobs execute in parallel.
If all jobs in test succeed, the deploy jobs execute in parallel.
If all jobs in deploy succeed, the pipeline is marked as passed.
If any job fails, the pipeline is marked as failed and jobs in later stages do not start.
Jobs in the current stage are not stopped and continue to run.
So, in your case:
stages:
  - build
  - test
test won't run if build fails.
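Applied to the YAML from the question, a sketch would be (keeping the only: changes filter as-is):

stages:
  - build
  - test

build:
  stage: build
  script:
    - echo "Build is running"
  only:
    changes:
      - Dockerfile
      - requirements.txt
      - ./configs/*

test:
  stage: test
  script:
    - echo "Test is running"
    - echo "$CI_JOB_STAGE"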
I would like to know how to manually trigger specific jobs in a project's CI pipeline.
Since there is only one gitlab-ci.yml file, I can define many jobs to be executed one after the other sequentially. But what if I want to start a manual CI pipeline that only carries out one job?
As I understand it, every time the pipeline runs, it will run all jobs, unless I use only and similar parameters. For instance, with this simple pipeline config:
stages:
  - build

build:
  stage: build
  script:
    - npm i
    - npm run build
    - echo "successful build"
What do I do if I want an echo job that runs a simple echo "hello" script, but does only that, and only when I manually trigger it? There are no 'triggers' for a job like that, afaik.
Is this even a possibility?
Thanks for the clarification!
Apparently, the solution is pretty simple; I just needed to add a when: manual parameter to the job:
stages:
  - build
  - echo

echo:
  stage: echo            # the echo stage must also be listed under stages
  script:
    - echo 'this is a manual job'
  when: manual
Once that's done, the job appears with a play button in the pipeline view and can be triggered independently.
Here is my .gitlab-ci.yml file:
script1:
  only:
    refs:
      - merge_requests
      - master
    changes:
      - script1/**/*
  script: echo 'script1 done'

script2:
  only:
    refs:
      - merge_requests
      - master
    changes:
      - script2/**/*
  script: echo 'script2 done'
I want script1 to run whenever there is a change in the script1 directory, and likewise for script2.
I tested these with a change in script1 only, a change in script2 only, changes in both directories, and no change in either directory.
The first three cases pass as expected, but the fourth case, the one with no change in either directory, fails.
In the overview, Gitlab gives the message
Could not retrieve the pipeline status. For troubleshooting steps, read the documentation.
In the Pipelines tab, I have an option to Run pipeline. Clicking on that gives the error
An error occurred while trying to run a new pipeline for this Merge Request.
If there is no job, I want the pipeline to succeed.
Gitlab pipelines have no independent existence outside of jobs; a pipeline, by definition, consists of one or more jobs. In your fourth case above, no jobs are created. The simplest hack you can add to your pipeline is a job that always runs:
dummyjob:
  script: exit 0
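Because dummyjob has no only/changes filter, it is created in every pipeline, so the merge request always gets a pipeline status even when neither directory changed.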
I'm using Gitlab CI, and so have been working on a fairly complex .gitlab-ci.yml file. The file has an after_script section which runs when the main task is complete, or the main task has failed somehow. Problem: I need to do different cleanup based on whether the main task succeeded or failed, but I can't find any Gitlab CI variable that indicates the result of the main task.
How can I tell, inside the after_script section, whether the main task has succeeded or failed?
Since gitlab-runner 13.5, you can use the CI_JOB_STATUS variable.
test_job:
  # ...
  after_script:
    - >
      if [ $CI_JOB_STATUS == 'success' ]; then
        echo 'This will only run on success'
      else
        echo 'This will only run when job failed or is cancelled'
      fi
See GitLab's documentation on predefined_variables: https://docs.gitlab.com/ee/ci/variables/predefined_variables.html
The accepted answer may apply to most situations, but it doesn't answer the original question and only works if you have one job per stage.
Note: there is currently an open feature request (issues/3116) to handle on_failure and on_success in after_script.
It might be possible to use variables to pass the job status to an after_script script, but sharing variables between before_script, script, and after_script also has an open feature request (issues/1926).
One workaround is to write to a temporary file that is read in the after_script block.
test_job:
  stage: test
  before_script:
    - echo "FAIL" > .job_status       # assume failure until proven otherwise
  script:
    - exit 1                          # stand-in for a command that may fail
    - echo "SUCCESS" > .job_status    # only reached if everything above succeeded
  after_script:
    - echo "$(cat .job_status)"       # prints FAIL or SUCCESS
Instead of determining in the after_script whether the task succeeded or failed, I would suggest defining another stage and using the when keyword, with when: on_failure or when: on_success.
Example from the documentation:
stages:
  - build
  - cleanup_build
  - test
  - deploy
  - cleanup

build_job:
  stage: build
  script:
    - make build

cleanup_build_job:
  stage: cleanup_build
  script:
    - cleanup build when failed
  when: on_failure

test_job:
  stage: test
  script:
    - make test

deploy_job:
  stage: deploy
  script:
    - make deploy
  when: manual

cleanup_job:
  stage: cleanup
  script:
    - cleanup after jobs
  when: always
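In this example, cleanup_build_job runs only when build_job fails, deploy_job waits for a manual trigger, and cleanup_job runs at the end regardless of how the earlier stages finished.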
Just another way to handle this if you want to set up on-failure behavior within the script itself:

script:
  - ./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
  - if [ $FAILED ]; then ./do_something.sh; fi

Note: the other examples also worked for me, but I find this implementation faster and more suitable for my case.