Run triggered GitLab jobs only on specific days

I have the following issue:
I have a scheduled GitLab pipeline which runs every workday in the morning. This pipeline triggers pipelines of other projects. The jobs are defined in multiple gitlab.yml files in their corresponding projects. For better understanding, here is a minimal example with only one triggered job:
main job:
  trigger: child job

child job:
  # Do something here
Now the thing is that this triggered child job is only allowed to run on specific days. In our case, the child job is not allowed to run on Mondays.
I already had the idea of determining in the main job on which days the child job should be executed, passing the child job a variable, and checking that variable with the only or except keywords. But it seems it is not that easy to get the current weekday inside the gitlab.yml. Or am I wrong there?
Is there a way to achieve this?
UPDATE
@KamilCuk made me realize that I was missing an important aspect in the question. When the child job is executed on its own, it should run without any hindrance (also on Mondays) if possible. Only when it is triggered by the main job should the check apply.

The simplest approach is to just check the day in the job itself:
child job:
  script:
    - weekday=$(LC_ALL=C date +%a)
    - case "$weekday" in
      Mon) echo "Not running on Monday"; exit 0;;
      esac
    - rest of the job
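If the check should apply only when the pipeline was started by the upstream main job (per the update), one option is to also test the predefined CI_PIPELINE_SOURCE variable, which GitLab sets to "pipeline" for multi-project trigger pipelines. A minimal sketch, assuming the real work lives in a placeholder script:
child job:
  script:
    # Skip only when it is Monday AND this pipeline was triggered by another pipeline
    # (CI_PIPELINE_SOURCE is "pipeline" for multi-project trigger pipelines).
    - |
      if [ "$(LC_ALL=C date +%a)" = "Mon" ] && [ "$CI_PIPELINE_SOURCE" = "pipeline" ]; then
        echo "Triggered on Monday - skipping"
        exit 0
      fi
    - ./do_the_actual_work.sh   # placeholder for the real job steps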
Alternatively, you can trigger the child pipeline via the API (see https://docs.gitlab.com/ee/api/pipeline_triggers.html and https://docs.gitlab.com/ee/ci/triggers/index.html) and perform the check in the main job instead:
main job trigger child job:
  script:
    - weekday=$(LC_ALL=C date +%a)
    - case "$weekday" in
      Mon) echo "Not running on Monday"; exit 0;;
      esac
    # trigger the child job via the API
    - curl
      -H "Authorization: Bearer somethingosmething"
      "$CI_GITLAB_URL/.../api/4/...."
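For reference, the trigger-token endpoint documented at those links looks roughly like this (the child project ID, ref, and token are placeholders to substitute; this is a sketch, not the exact call from the answer):
curl --request POST \
     --form "token=$TRIGGER_TOKEN" \
     --form "ref=main" \
     "$CI_API_V4_URL/projects/<child-project-id>/trigger/pipeline"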

Related

Non-constant Variables in Gitlab Pipelines

Surely many of you have encountered this, and I would like to share my hacky solution. Essentially, during the CI/CD process of a GitLab pipeline, most parameters are passed through "variables". There are two issues that I've encountered with that.
Those parameters cannot be altered at runtime. Say I want to execute jobs based on information from previous jobs; that information would always need to be saved in the cache, as opposed to being written to CI/CD variables.
Whether a job is executed is evaluated before any script runs, so the "rules" will only ever apply to the original parameters. Trouble arises when those are only available at runtime.
For complex pipelines one would want to pick and choose the tests automatically without having to respecify parameters every time. In my case I delivered a data product, and depending on the content, different steps had to be taken. How do we deal with those issues?
Changing parameters in real time:
The project-level variables API (https://docs.gitlab.com/ee/api/project_level_variables.html) provides a way of interacting with CI/CD variables. This will not work for variables defined at the head of the YML file under the variables keyword. Rather, it is a way to access the "custom CI/CD variables" described at https://docs.gitlab.com/ee/ci/variables/#custom-cicd-variables. This way, custom variables can be created, altered, and deleted while a pipeline is running. The only thing needed is a PRIVATE-TOKEN that has access rights to the API (I believe my token has all the rights).
job:
  stage: example
  script:
    - 'curl --request PUT --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VARIABLE_NAME" --form "value=abc"'
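The PUT call assumes that VARIABLE_NAME already exists. If it might not, the same API also offers a POST endpoint to create it first; a hedged sketch (the job name and default value are assumptions):
create_variable:
  stage: example
  script:
    # Create the custom CI/CD variable so that later PUT updates have something to update.
    - 'curl --request POST --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables" --form "key=VARIABLE_NAME" --form "value=default"'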
On to the next problem. Altering the variables won't let us actually control downstream jobs like this, because the "rules" block is evaluated before the pipeline is actually run. Hence it will use the variable's value from before the curl request is sent.
job2:
  stage: after_example
  rules:
    - if: $VARIABLE_NAME == "abc"
  script:
    - env
The way to avoid that is child pipelines. Child pipelines are initialized inside the parent pipeline and check the environment variables anew. A full example should illustrate my point.
variables:
  PARAMETER: "Can't be changed"

stages:
  - example
  - after_example
  - finally

job_1:
  # Changing "VARIABLE_NAME" during runtime to "abc"; VARIABLE_NAME has to exist already
  stage: example
  script:
    - 'curl --request PUT --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VARIABLE_NAME" --form "value=abc"'

job_2.1:
  # This won't get triggered, as we assume "abc" was not the value of VARIABLE_NAME before job_1
  stage: after_example
  rules:
    - if: $VARIABLE_NAME == "abc"
  script:
    - env

job_3:
  stage: after_example
  trigger:
    include:
      - local: downstream.yml
    strategy: depend

job_4:
  stage: finally
  script:
    - echo "Make sure to properly clean up your variables to a default value"

# inside downstream.yml
stages:
  - downstream

job_2.2:
  # This will run because the child pipeline is initialized after job_1
  stage: downstream
  rules:
    - if: $VARIABLE_NAME == "abc"
  script:
    - env
This snippet probably won't run as-is; however, it exemplifies my point rather nicely. Job 2 should be executed based on an action that happens in job 1. While the variable will have been updated by the time we reach job_2.1, the rules check happens beforehand, so it will never be executed. Child pipelines do the rules check during the runtime of the parent pipeline, which is why job_2.2 does run.
This variant is quite hacky and probably really inefficient, but for all intents and purposes it gets the job done.
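Regarding job_4's reminder to clean up: a hedged sketch of how the reset could look, reusing the same PUT endpoint (the "default" value is an assumption):
job_4:
  stage: finally
  script:
    # Reset the custom variable so the next pipeline starts from a known value.
    - 'curl --request PUT --header "PRIVATE-TOKEN: $ACCESS_TOKEN" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/variables/VARIABLE_NAME" --form "value=default"'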

Apply Resource_Groups to specific schedules?

We're having an issue within our GitLab instance that runs test jobs. We have approximately 30 scheduled jobs, which are either triggered manually or by the API. Out of those jobs, about 10 are specific to a CI/CD pipeline, and they get triggered all the time by merges/commits. What we'd like to do is use resource_group, but only apply that setting to those specific jobs.
When I add "resource_group: runtest" to our YML file, it applies to ALL our scheduled pipelines. Is there a way to apply it to just a specific set of schedules? Maybe by using a tag or a specific naming convention?
Dan
I think you would need to create extra jobs in your GitLab config, so that you have some with resource_group and some without, and then choose which to include in the pipeline using an environment variable set when the pipeline is triggered.
job with resource group:
  script: echo "Hello!"
  rules:
    - if: $MY_VARIABLE == "ThisIsASchedule"
      when: always
  resource_group: production

job without resource group:
  script: echo "Hello!"
  rules:
    - if: $MY_VARIABLE != "ThisIsASchedule"
      when: always
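To avoid duplicating the test script between the two variants, a hidden template job with extends could be used; a sketch under the same variable assumption:
.test_template:
  script: echo "Hello!"   # shared test steps go here

job with resource group:
  extends: .test_template
  resource_group: production
  rules:
    - if: $MY_VARIABLE == "ThisIsASchedule"

job without resource group:
  extends: .test_template
  rules:
    - if: $MY_VARIABLE != "ThisIsASchedule"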
Thanks for the direction, Glen Thomas. I was able to get it to work.
I had to refactor our gitlab-ci.yml file a bit to organize it better; we now have two jobs that run the tests. One checks whether the variable is false and, if so, runs the jobs as normal. The second job checks whether the variable is true and, if so, runs the jobs with the added resource_group setting, so those jobs wait for the previous job to finish before kicking off.
so:
job1:
  rules:
    - if: $Variable != "True"
  script:
    - echo "regular schedules"
    # do all the test things here

job2:
  rules:
    - if: $Variable == "True"
  resource_group: group
  script:
    - echo "group schedules"
    # do all the test things here

Using conditionals in GitLab CI with user intervention

Is it possible to add conditionals that execute an if or else block based on a user input in a GitLab CI YML file?
In the event the user input is YES, it must trigger a template (an Ansible Tower template) to restart a specific service (tower-cli job launch --job-template).
Should I use rules:if with when for such a conditional? Could someone give some insight into such a format? I am a first-time user of GitLab and have some experience with Jenkins.
The rules section (and also only and except) is evaluated when the pipeline is first created, before the first job starts running, because it controls which jobs will be in the pipeline. So you can't use variables/conditionals that come from another job or from manual task input.
An alternative is to use a normal bash conditional in your script section and simply exit 0; when the job shouldn't run.
...
Job:
  stage: my_stage
  when: manual
  script:
    - if [ "$MY_USER_INPUT_VARIABLE" == "don't run the job" ]; then exit 0; fi
    - ./run_job.sh
These are just simple examples, but you can check the variable for whatever it is you need (or use multiple conditionals if using multiple variables), and if the job shouldn't run, execute exit 0;, which stops the job without marking it as failed. Otherwise, we run whatever we need this job to do.

Run section of puppet manifest once a day but with hourly poll

I have nodes checking into a puppet server every hour.
We have some tasks which we want to run on check-in but only once a day.
Would it be possible to make a function inside a Puppet manifest that saves the last run time and only runs if the last run was over 24 hours ago?
Update:
I did try one thing which semi-works: moving the chunk of Puppet code into a separate file and having my main Puppet manifest ensure a cron job exists for it.
The complaint I got back from another department is that they can no longer see install errors on Puppetboard. The screenshot showed 2 nodes on the old Puppet branch and 1 on the new branch.
With cron running puppet apply myFile.pp, we no longer got feedback from failures on Puppetboard, as the main manifest simply ensures that the cron job exists.
You have at least two options.
Assuming your unspecified task is handled by an exec resource, you could design this in such a way that Puppet only ever regards the exec as out of sync once per day. That could be achieved by having your exec write the calendar day into a file. Then you could add an unless attribute:
unless => "test $(</var/tmp/last_run) == $(date +%d)"
Obviously your exec would need to also keep track of updating that file.
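A minimal shell sketch (the wrapper script and task names are assumptions, not from the answer) of how the exec's command could both do the work and refresh the marker file that the unless test reads:
#!/bin/sh
# daily_task.sh - hypothetical wrapper invoked by the exec resource
set -e
/usr/local/bin/do_the_actual_task      # placeholder for the real task
date +%d > /var/tmp/last_run           # record today's calendar day for the unless check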
A second option would be to use the schedule metaparameter:
schedule { 'everyday':
  period => daily,
  range  => '1:00 - 1:59',
}

exec { 'do your thing':
  schedule => 'everyday',
}
That assumes that Puppet really will run only once per hour. The risk, of course, is that Puppet runs more than once in that hour; e.g. a sysadmin might run it manually.

How can I prevent a GitLab job from running on a push event

I have a simple gitlab-ci.yml file that I thought would run a job only when scheduled. However, it is getting fired on a push event as well.
Can anyone please tell me the correct way to specify that a job only runs when scheduled?
This is my gitlab-ci.yml file:
job:on-schedule:
  only:
    - schedules
    - branches
  script:
    - /usr/local/bin/phpunit -c phpunit_config.xml
Thanks
According to the GitLab documentation, branches means "When a branch is pushed".
https://docs.gitlab.com/ce/ci/yaml/README.html#only-and-except-simplified
So including branches in your only: section causes the pipeline job to also run on pushes to any branch.
You can either remove the branches entry, or if you want to restrict pushes to a specific repository you can scope the entry with the repository path (branches@<group>/<project>).
My suggestion is to reduce your YML to:
job:on-schedule:
  only:
    - schedules
  script:
    - /usr/local/bin/phpunit -c phpunit_config.xml
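On newer GitLab versions the same restriction can also be expressed with rules: and the predefined CI_PIPELINE_SOURCE variable; a minimal sketch (not part of the original answer):
job:on-schedule:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    - /usr/local/bin/phpunit -c phpunit_config.xml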
