Jenkins jobs running on slaves - security

I would like to restrict where a job can run based on its labels, but also based on the job name.
Is that possible?
Or even better, check whether the job name matches (in some way) the job's label.
Thanks in advance.
P.S.: I need to do this outside the job options, as I do not trust the job configuration, just the job name.

Check out the NodeLabel Parameter Plugin.
You can set labels on the fly when the build itself starts, so you should be able to use the job name there.
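For example, a minimal Pipeline sketch of triggering a downstream job through a node parameter (the job, parameter and label names are hypothetical):

// 'NODE' must be defined as a node parameter on the downstream job
build job: 'downstream-job', parameters: [
    [$class: 'NodeParameterValue', name: 'NODE', labels: ['linux-agent'],
     nodeEligibility: [$class: 'AllNodeEligibility']]
]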

You can achieve this with a combination of a few plugins: the NodeLabel Parameter Plugin, the Parameterized Trigger Plugin and the EnvInject Plugin.
The idea is the following: use another job ("Builder", for example) to trigger other parameterized jobs with an appropriate node parameter. The "Builder" job dynamically sets the node parameter based on the name of the job to trigger; a sketch follows below.
Here is a good example of such an approach, described in pictures.
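A minimal sketch of the "Builder" job's shell step, assuming a hypothetical TARGET_JOB build parameter that holds the name of the job to trigger:

# write the label to a properties file that an EnvInject build step reads back
echo "TARGET_LABEL=${TARGET_JOB}" > env.properties

A Parameterized Trigger build step can then pass a NodeLabel parameter whose value is $TARGET_LABEL to the triggered job.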

I have found this plugin, which does exactly what I wanted.
It restricts job execution on a node based on the job name:
https://wiki.jenkins-ci.org/display/JENKINS/Job+Restrictions+Plugin
Thanks for your time in any case.

Related

Set a global variable's value within a job in a GitLab CI yml file

I have two jobs: the first one is "test" and the second one is "push". The test job is allowed to fail (allow_failure: true). I only want to run the push job if the test job succeeds.
One option is to save the variable to a file and pass it through an artifact. But what I'm interested in is whether there's a way to achieve this without the file, like having a global variable and updating its value in the test job on success; but apparently modifying global variables from the job scope is not possible. Any suggestions?
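For reference, a minimal sketch of the file/artifact option mentioned above, using a dotenv artifact (the test and push commands are hypothetical):

test:
  stage: test
  allow_failure: true
  script:
    - echo "TESTS_PASSED=false" > result.env    # default, overwritten on success
    - ./run_tests.sh                            # hypothetical test command
    - echo "TESTS_PASSED=true" > result.env
  artifacts:
    when: always          # upload the dotenv even when the job fails
    reports:
      dotenv: result.env

push:
  stage: push
  script:
    - '[ "$TESTS_PASSED" = "true" ] || { echo "tests failed, skipping push"; exit 0; }'
    - ./push.sh                                 # hypothetical push command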

How to exclude a GitLab CI job from the pipeline based on a value in a project file?

I need to exclude a job from the pipeline in case my project version is a pre-release.
How do I know it's a pre-release?
I set the following in the version info file that all project files and tools use:
version = "1.2.3-pre"
From the CI script, I parse the file, extract the version value, know whether it's a pre-release or not, and can set the result in an environment variable.
The only way I know to exclude a job from a pipeline is to use rules, but I also know from the GitLab docs that:
rules are evaluated before any jobs run
before_script is also claimed to run together with the script, i.e. after the rules are applied.
I can stop the job from the script itself, based on the version value, but only after it starts; what I need is to remove the job from the pipeline in the first place, so it's not displayed in the pipeline history. Any ideas?
Thanks.
How do you run (start) your pipeline, and is the information about whether it's a pre-release already known at that point?
If yes, then you could add a flag like IS_PRERELEASE as a variable to the pipeline and use that in the rules: section of your job. The drawback is that this will not work with automatic pipelines (triggered by a commit or MR), but you can use this approach with manually triggered pipelines (https://docs.gitlab.com/ee/ci/variables/#override-a-variable-when-running-a-pipeline-manually) or via the API (https://docs.gitlab.com/ee/ci/triggers/#pass-cicd-variables-in-the-api-call).
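For example, a minimal sketch of such a rules: section (the job name and script are hypothetical):

deploy:
  script: ./deploy.sh
  rules:
    # drop this job from the pipeline when the trigger-time variable marks a pre-release
    - if: '$IS_PRERELEASE == "true"'
      when: never
    - when: on_success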

How can I pass the value of a variable from one job to the next job in a pipeline in GitLab CI?

Am I able to pass a value from a variable that I created in one job to the next job, so I can do some checks on that value in the next job of the same stage?
I would like to have a first job that creates some variable and assigns a value to it, and a next job in the same stage that checks that value. I need this for a specific use case in my pipeline.
I was going through the documentation on GitLab and couldn't find any resource that would help me solve this case.
Any help with this would be really appreciated. Thanks a lot! :)
Yes, you do this by using the dotenv file artifact. You'll create a file in one job that has a set of values in it like this:
FIRST_VAR=1234
SECOND_VAR=hello_world
Then set that as a dotenv-type artifact according to the documentation, and downstream jobs will have those variables set.
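A minimal .gitlab-ci.yml sketch of this approach (job names are hypothetical; the needs: keyword lets a job in the same stage consume the artifact):

make_vars:
  stage: test
  script:
    - echo "FIRST_VAR=1234" >> build.env
    - echo "SECOND_VAR=hello_world" >> build.env
  artifacts:
    reports:
      dotenv: build.env    # exposes FIRST_VAR and SECOND_VAR to downstream jobs

check_vars:
  stage: test              # same stage works once the dependency is explicit
  needs: ["make_vars"]
  script:
    - echo "$FIRST_VAR $SECOND_VAR"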

Airflow Branch Operator and S3KeySensor do not respect trigger_rule='none_failed'

I have 3 S3KeySensors on different files in different folders. Two of them have to succeed and the third one can be skipped. I am trying to trigger the downstream task with trigger_rule='none_failed', but S3KeySensor does not seem to respect that. This is what my DAG looks like.
This is how it behaves.
This is how I want it to behave.
You have to set trigger_rule="none_failed_or_skipped" on the test_step task, as explained in this documentation.
From the documentation:
none_failed_or_skipped: all parents have not failed (failed or upstream_failed) and at least one parent has succeeded.
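A minimal sketch under these assumptions (the DAG id, bucket keys and task names are hypothetical; soft_fail=True is what lets the optional sensor be skipped instead of failed on timeout):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
# older Airflow versions import this from airflow.sensors.s3_key_sensor instead
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

with DAG("s3_sensor_example", start_date=datetime(2023, 1, 1), schedule_interval=None) as dag:
    required_a = S3KeySensor(task_id="required_a", bucket_key="s3://my-bucket/a/file.csv")
    required_b = S3KeySensor(task_id="required_b", bucket_key="s3://my-bucket/b/file.csv")
    optional_c = S3KeySensor(
        task_id="optional_c",
        bucket_key="s3://my-bucket/c/file.csv",
        soft_fail=True,  # skip (rather than fail) this sensor when the key never appears
    )
    test_step = PythonOperator(
        task_id="test_step",
        python_callable=lambda: print("required files present"),
        trigger_rule="none_failed_or_skipped",  # no parent failed, at least one succeeded
    )
    [required_a, required_b, optional_c] >> test_step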

How can I get the seed job's name inside a Jenkins Job DSL script?

I'm using a Freestyle project/job with a "Process Job DSLs" build step as provided by the Jenkins Job DSL plugin, i.e. that is the "seed" job. How can I, from within the code provided via "Use the provided DSL script", get the seed job's name?
I've tried to apply the answers to this question, but none of them worked.
All build variables are injected into the DSL scripts; see "Access the Jenkins Environment Variables". The JOB_NAME variable contains the name of the seed job.
println JOB_NAME
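For example, a minimal DSL sketch that uses the seed job's name in a generated job (the generated job's name is hypothetical):

// JOB_NAME is injected into the DSL script and holds the seed job's name
job("${JOB_NAME}-generated") {
    description("Generated by seed job ${JOB_NAME}")
}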
