I'm pretty new to Jenkins and I'm not sure if it can be used for the following:
We have multiple branches (let's say 5 branches)
For each branch we have to run a test suite that requires 2 servers, 1 linux client and 1 windows client
We want to share these resources between the jobs, say that we have a pool of 6 servers, 3 linux clients and 3 windows clients
Is it possible to manage this through Jenkins? The job can be kicked off through a simple shell script, but it has to "reserve" the resources and pass these as parameters to the shell script. It should also queue the test suite jobs if no resources are currently available.
I looked into the Jenkins basics but so far only found the "build slave" model, where you run jobs on managed clients. I haven't found any solution for managing multiple resources yet. Is that possible through Jenkins?
Thanks in advance!
Yes, this is possible through Jenkins. You need to assign labels to your slave nodes so that jobs can be designed to use pairs of Linux and Windows machines. While configuring a job, enter the labels of your slaves in the "Restrict where this project can be run" -> "Label Expression" textbox.
There is also a help icon next to that field; it explains the various operators you can use to build conditional label expressions.
Also take a look at the Jenkins getting-started documentation if you are new to Jenkins.
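For example (a sketch only; the label names below are assumptions and must match the labels you actually assigned to your nodes), the "Label Expression" field accepts boolean operators, so you could write:

    linux && testpool
    windows || linux

The first expression matches any node carrying both the linux and testpool labels; the second matches any node carrying either label.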
Situation:
I have a pipeline job that executes tests in parallel. I use Azure VMs that I start/stop on each build of the job through PowerShell. Before the job runs, it checks whether there are available VMs on Azure (offline VMs) and then uses those VMs for that build. If there are no available VMs, the job fails. Now, one of my requirements is that instead of failing the build, I need to queue the job until one of the nodes is offline/available and then use those nodes.
Problem:
Is there any way for me to do this? Any existing plugin or build wrapper that will allow me to queue the job based on the status of the nodes? I was forced to do this because we need to stop the Azure VMs to reduce cost.
At the moment, I am still researching whether this is possible or whether there is any other way to achieve it. I am thinking of a Groovy script that will check the nodes and, if none are available, manually add the job to the build queue until at least one is available. The closest plugin that I have found is the Run Condition plugin, but I think it will not work.
I am open to any approach that will help me achieve this. Thanks
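For what it's worth, the rough direction I am considering is a System Groovy step along these lines (a sketch only; the "azure-" node-name prefix is a made-up convention for my Azure VM nodes):

    import jenkins.model.Jenkins

    // Sketch: collect nodes whose names suggest Azure VMs (the 'azure-' prefix is a
    // hypothetical naming convention) and that are currently offline, i.e. stopped
    // VMs that could be started for this build.
    def candidates = Jenkins.instance.nodes.findAll { node ->
        node.nodeName.startsWith('azure-') && node.toComputer()?.isOffline()
    }

    if (candidates.isEmpty()) {
        // This is where the build currently fails; what I actually want is for the
        // job to wait in the queue until one of these nodes frees up.
        throw new hudson.AbortException('No Azure VM available')
    } else {
        println "Will start and use: ${candidates*.nodeName}"
    }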
In high-performance computing it is crucial to have code tested against many different architectures/compilers: from a laptop to a supercomputer.
Assuming that we have
N testing machines/workers (each one running gitlab-ci-runner);
M tests,
what shall be the correct layout of .gitlab-ci.yml to ensure that each of the N machines runs all the tests?
It looks to me like just adding more workers ends up in a round-robin-like assignment of the jobs.
Thanks for your help.
You could use tags in your .gitlab-ci.yml and on the runners to distribute your tests to all the machines you want. GitLab CI is very open to these use cases. I assume that you don't use Docker for testing.
To accomplish your goal, do following steps:
Install your gitlab-ci-multi-runners on your N machines, BUT when the runner asks you for tags, tag the machine with a specific name, e.g. "MachineWithManyCPUsWhatever". Please use the gitlab-ci-multi-runner register command to do so. You can alternatively change the tags in GitLab in the administration view after registration.
The type of the runners should be "shared" not "specific".
Once you have installed and tagged every runner on your machines, tag your jobs.
Example:
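A minimal sketch of what this could look like in .gitlab-ci.yml (the tag names and the test command are illustrative assumptions):

    # One job per machine type, each pinned to a runner tag, so every tagged
    # machine runs the full test suite. Tag names and commands are assumptions.
    test-on-many-cpus:
      stage: test
      tags:
        - MachineWithManyCPUsWhatever   # only runners carrying this tag pick up the job
      script:
        - ./run_all_tests.sh            # hypothetical driver that runs all M tests

    test-on-laptop:
      stage: test
      tags:
        - LaptopWhatever                # another hypothetical runner tag
      script:
        - ./run_all_tests.sh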
Every job will now be executed on the specific machine with the specific tag.
For more information, see the GitLab CI documentation.
I have Jenkins installed on a Linux server. It can run builds on itself. I want to create either a Freestyle Project or an External Job that transfers a bash script and runs it on two separate linux servers. Where in the GUI do I configure the destination server when I create a build? I have added "nodes" in the GUI. I can see the free space of the servers in the Jenkins GUI, so I know the credentials work. But when I create a build, I see no field that would tell Jenkins to push the bash scripts and run them on certain servers.
Are Jenkins nodes just servers that lend computing power to the master server? Or are they the targets of Jenkins builds? I believe that Jenkins "slaves" provide computing power to the Jenkins master server.
Normally Jenkins is used to integrate code. What do you call the servers that Jenkins pushes code into? They would be called Chef clients or Puppet agents if I was using Chef or Puppet for integrating code. I've been doing my own research, but I don't seem to know the specific vocabulary.
I've been working with such tools for several years, and as far as I know there isn't a ubiquitous language for this.
The nodes you can configure in Jenkins itself to add 'computing power' are indeed called build slaves.
Usually, external machines you will copy to, deploy to or otherwise use in jobs are called "target machines", as they will be the target of an action in your job.
Nodes can be used in several forms. You can use agents, which require a small installation on the node machine; this creates a running agent service with which Jenkins can communicate.
Another way is to simply allow Jenkins to connect to a machine via SSH and let it execute commands there. Both are called nodes and could be called build slaves, but the first are usually dedicated nodes, while the second can be any kind of machine as long as the SSH user can execute the build.
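As a concrete sketch of the second form (assuming you have added your two Linux servers as SSH nodes and given them the hypothetical labels server-a and server-b), a pipeline job could run the same script on both of them:

    // Sketch: run the same shell script on two nodes selected by hypothetical labels.
    pipeline {
        agent none
        stages {
            stage('Run on both servers') {
                parallel {
                    stage('server-a') {
                        agent { label 'server-a' }    // hypothetical node label
                        steps { sh './myscript.sh' }  // script checked out into the node's workspace
                    }
                    stage('server-b') {
                        agent { label 'server-b' }
                        steps { sh './myscript.sh' }
                    }
                }
            }
        }
    }

In a freestyle job the equivalent is the "Restrict where this project can be run" field, but each build runs on a single node, so for two servers you would typically use one job per target or a multi-configuration (matrix) job.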
I also have not found any different terms for these two types.
It's probably not a real answer to your questions, but I do hope it helped.
Please note: I am not interested in any enterprise/for-pay (Tower?) solutions here, only solutions available via Ansible's OSS offering.
OK so I've got my Ansible project configured and working perfectly, woo hoo! Looks something like this:
myansible01.example.com:/opt/ansible/
    site.yml
    fizz.yml
    buzz.yml
    group_vars/
    roles/
        common/
            tasks/
                main.yml
            handlers/
                main.yml
        foos/
            tasks/
                main.yml
            handlers/
                main.yml
There are several things I need to accomplish to get this working in a production environment:
I need to be able to automate the deployment of changes to this project
I need to schedule playbooks to be run, say, every 30 seconds (to ensure all managed nodes are always in compliance)
So my concerns:
How are changes usually deployed to live Ansible projects? Say the project is located at myansible01.example.com:/opt/ansible (my Ansible server). Is it sufficient to simply delete the Ansible project root (rm -rf /opt/ansible) and then copy the latest (containing changes) Ansible project back to the same location? What happens if Ansible is currently running any plays while I perform this "drop-n-swap"?
It looks like the commercial offering (Ansible Tower) has a scheduling feature built into it, but not the OSS offering. How can I schedule Ansible OSS to run plays at certain times? For instance, I might want certain plays to be run every 30 seconds, so as to ensure nodes are always in compliance. Is cron sufficient to do this, or is there a more standard approach?
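For context, the cron approach I have in mind would be something like the following entry on the Ansible server (paths and playbook are just my example; note that cron cannot schedule anything more frequent than once per minute, so a true 30-second interval would already need a workaround):

    # Hypothetical crontab entry on myansible01: apply the site playbook every minute.
    * * * * * cd /opt/ansible && ansible-playbook site.yml >> /var/log/ansible-cron.log 2>&1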
For this kind of task you typically want an orchestration engine such as Jenkins to do all your, well, orchestration.
You can set Jenkins to run playbooks on timers or other events such as a push to an SCM such as git.
Typically a job starts by checking out a tag/branch of our Ansible code base and then applying it to all of our specified servers, so you always know what is being run. If you want, this can simply be the head of master (in git terms) so it's always applying the most recent changes. If you also hook this into your SCM repo, then a simple push will force those changes to be applied to all of your servers.
Because of that immediacy you might want to consider only doing this on some test servers that then have some form of testing done against them (such as Serverspec) to verify that your changes are good before rolling them out to a production environment.
Jenkins, by default, will not run a job while the same job is already running (or if you are maxed out on executor slots), so you can always be sure that it will only pull the repo (including any changes) after your Ansible run is complete. If you have multiple jobs running, you can use blocking to prevent jobs from running at the same time (both trying to apply potentially different configurations to the servers), but you don't have to worry about a new job starting and pulling the repo into the already running job, as Jenkins separates these into separate workspaces.
We use Jenkins for manual runs of Ansible against our environment, but we also have a "self healing" Jenkins job that simply runs a tagged commit of our Ansible code base against our environment, forcing it to an idempotent state to prevent natural drift of configurations. When we need to do something different to the environment, or are running a slightly newer commit of our code base against it, we can easily disable the self-healing job until we're happy with things, and then either re-enable the job to put things back or advance the tag that Jenkins is using to the more recent commit.
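As a rough sketch, such a self-healing job can be expressed as a pipeline like the one below (the repository URL, branch, node label and playbook name are assumptions; the same thing works as a freestyle job with a "Build periodically" trigger, an SCM checkout and an "Execute shell" step). Note that Jenkins timer triggers, like cron, are minute-granularity, so a literal 30-second cadence would need a loop inside the job itself.

    // Sketch: periodically check out a pinned branch/tag of the Ansible repo and apply it.
    // Repo URL, branch, node label and playbook name are illustrative assumptions.
    pipeline {
        agent { label 'ansible' }              // hypothetical node with Ansible installed
        triggers { cron('H/30 * * * *') }      // roughly every 30 minutes
        stages {
            stage('Checkout') {
                steps {
                    git url: 'git@example.com:ops/ansible.git', branch: 'production'
                }
            }
            stage('Apply') {
                steps {
                    sh 'ansible-playbook site.yml'
                }
            }
        }
    }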
Is it possible to use Jenkins server to run custom tasks one by one?
By task I mean executing an external Groovy program which is designed as an independent performance and integration test for a specific deployment.
If it is possible, then how do I:
Define tasks in Jenkins and group them so they can be started by starting the group.
See the output of each task (output log).
Stop execution of the whole group if a task has a specific outcome, such as "-1".
And all this should start automatically after software has been built and deployed.
I feel there has to be a way to do it with Jenkins utilising its out-of-the-box functionality, I'm just not sure how. Or am I wrong, and are we looking at a custom plugin as a solution?
Thanks a lot!
P.S. I am not asking for a detailed answer, just a general direction would be OK. Also, Jenkins is not a requirement; it can be another similar CI server.
It sounds like this could work as a simple Jenkins job with "Execute shell" build steps.
The Console Output for the job will contain the output from the processes that you run externally, and the exit status of the script can cause the job to fail (any non-zero exit code will do this by default).
On Unix systems, a #! at the beginning of the first line denotes the script interpreter to use.
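For instance, a minimal "Execute shell" body along these lines (the groovy invocation and script path are hypothetical) would fail the Jenkins job whenever the test exits non-zero:

    #!/bin/bash
    # Run the external Groovy test; the path is a made-up example.
    groovy /opt/tests/perf_and_integration.groovy
    status=$?
    # Any non-zero exit status marks this build as failed, which in turn stops
    # downstream jobs that are only triggered on success.
    exit $status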
To chain this together with the other Jenkins steps, you can use Build Triggers with "Build after other projects are built" and use your deployment job as the starting point.
It is possible, but be careful. Normally Jenkins is used to run build jobs and to deploy software to a QA or staging server. It does not touch Production. But when you start doing this in Jenkins you increase the risk that someone will accidentally run a production job that should not have been run. So if you do decide to use Jenkins for this, set up an entirely separate instance of Jenkins that does nothing other than run these jobs.

Then go to Manage Jenkins -> Configure Global Security and set up login users. At the least, use "logged in users can do anything", but it would be better to set up "matrix-based security".

Then run any jobs that you need by using an Execute Shell step. You can schedule jobs by using a Build Trigger, and you can connect jobs sequentially by setting up Build Other Projects in the post-build section. If you want to do more complex job chaining, look into the Join Plugin.
Just keep this Jenkins entirely separate from the Jenkins which you use for CI.