I am running a GitLab runner in Kubernetes. The pipeline job itself works fine and takes about 10 seconds to complete, but I only receive the pipeline status (success/fail) after 5 minutes.
I tried setting the job timeout to 1 minute, but it made no difference: I still get the status after 5 minutes.
The same task works fine on gitlab.com; I only see this issue with the self-hosted GitLab runner. Please let me know if any setting needs to be changed in the GitLab Helm chart.
(Screenshots attached: the GitLab .yml configuration and the job duration.)
We are currently implementing test execution of test suites using GitLab CI.
One of the issues we encountered is that GitLab keeps assigning jobs to a runner even though that runner is already busy with another job.
Problem 1:
There are 3 available runners (on different machines), each already configured with limit = 1 and request_concurrency = 1.
A pipeline with 3 jobs running in parallel is triggered.
Result: 2 of the jobs use the same runner.
Expectation: the 3 jobs use the 3 runners (1 job per runner).
Problem 2:
Given the above conditions, a pipeline with 3 jobs running in parallel is triggered twice.
Result: almost all jobs run, with multiple jobs executing on a single runner.
Expectation: the first pipeline runs (1 job per runner) and the second pipeline stays pending.
The second pipeline's jobs should only be executed once the first pipeline's jobs are done.
You must set concurrent to 1 in the global section of each runner's configuration.
Also double-check your configuration file to ensure you have not registered multiple runners on the same machine.
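For reference, a minimal sketch of the relevant settings in each runner's config.toml (the runner name and executor below are placeholders):

```toml
# Global section: this runner process runs at most 1 job at a time.
concurrent = 1

[[runners]]
  name = "runner-1"          # placeholder name
  executor = "shell"         # placeholder executor
  limit = 1                  # at most 1 concurrent job for this runner entry
  request_concurrency = 1    # request new jobs for at most 1 concurrent slot
```

Note that concurrent caps the whole runner process, while limit and request_concurrency apply per [[runners]] entry, which is why all three need to be 1 here.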
I have created an HTTP Cloud Scheduler task. I expect it to have a maximum run time of 5 minutes, but the task reports DEADLINE_EXCEEDED after exactly 1 minute.
When I run gcloud scheduler jobs describe MySyncTask to view the task, it reports attemptDeadline: 300s. The service I am calling is Cloud Run, where I have also set a 300s limit.
I am running the task manually by clicking "Force a job run" in the GUI.
After exactly 1 minute, the logs report DEADLINE_EXCEEDED.
When you execute a job from the GUI, it is executed using the default attemptDeadline value, which is 60 seconds according to this question.
If you want to run it manually, I suggest running the job from Cloud Shell and passing the --attempt-deadline flag with the desired value, as shown in this answer:
gcloud beta scheduler jobs update http <job> --attempt-deadline=1800s --project <project>
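After updating the deadline, the job can also be triggered manually from the CLI rather than the GUI (same placeholders as above):

```shell
# Trigger the Cloud Scheduler job from the command line
gcloud scheduler jobs run <job> --project <project>
```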
In a pipeline I have 2 jobs; the second job parses the logs of the first job, and for that I am using the API below to get the job ID:
https://source.golabs.io/api/v4/projects/<id>/jobs?scope[]=success
Now the issue is: what will happen if I execute multiple parallel runs of this pipeline? How can I differentiate the job logs of the respective pipelines?
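One way to keep parallel runs apart is to list the jobs of a single pipeline instead of all project jobs, using the pipeline-scoped endpoint GET /projects/:id/pipelines/:pipeline_id/jobs together with the predefined CI_PIPELINE_ID variable that GitLab sets inside every job. A minimal sketch (the helper function name is hypothetical):

```python
import os

def pipeline_jobs_url(base_url: str, project_id: str, pipeline_id: str) -> str:
    # Scope the lookup to one pipeline so parallel runs never mix:
    # GET /projects/:id/pipelines/:pipeline_id/jobs
    return (f"{base_url}/api/v4/projects/{project_id}"
            f"/pipelines/{pipeline_id}/jobs?scope[]=success")

# Inside a CI job, GitLab provides CI_PROJECT_ID and CI_PIPELINE_ID:
url = pipeline_jobs_url("https://source.golabs.io",
                        os.environ.get("CI_PROJECT_ID", "<id>"),
                        os.environ.get("CI_PIPELINE_ID", "<pipeline_id>"))
```

Because each pipeline run has a unique CI_PIPELINE_ID, the second job only ever sees the first job of its own run.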
How can I delay a job in Azure DevOps pipelines? I have multiple pipelines that run simultaneously, and the problem is that in the checkout phase I get an error saying the files are used by another process.
I found delayForMinutes and tried running a PowerShell script, but those only work for tasks, not for jobs.
My goal is to delay the job's checkout phase, not the tasks in it.
You can add an agentless job before the job that does the checkout and include a Delay task in it. The remaining tasks can then run in a separate agent job that depends on it.
We have a pipeline with many jobs and the last job failed. I'm trying to debug the issue, but the job requires artifacts from previous jobs.
How can I run this job locally with gitlab-runner so it has access to these artifacts?
That's not possible (yet).
See the limitations of exec compared to regular CI here (artifacts -> not available).
Consider upvoting the issue to get this fixed.
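Until that limitation is lifted, a possible workaround (a sketch with placeholder IDs and a hypothetical token variable) is to download the dependency artifacts through the Jobs API and unpack them into the local working tree before running gitlab-runner exec, since exec operates on the local checkout:

```shell
# Download the artifacts archive of the job that produced the dependencies
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" --output artifacts.zip \
  "https://gitlab.example.com/api/v4/projects/<id>/jobs/<job_id>/artifacts"

# Unpack into the working tree, then run the failing job locally
unzip artifacts.zip
gitlab-runner exec shell <job-name>
```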