Configuring Jenkins to programmatically determine slave at build time from build parameter? - linux

This is perhaps a slightly unusual Jenkins query, but we've got a product that spans many projects. All of them are Linux based, but they span multiple architectures (MIPS, SPARC, ARMv6, ARMv7).
For a specific component, let's call it 'video-encoder', we'll therefore have 4 projects: mips-video-encoder, sparc-video-encoder, etc.
Each project is built on one of 4 separate slaves, each carrying labels that correspond to its architecture, e.g. the MIPS slave has the labels 'mips' and 'linux'.
My objectives are to:
Consolidate all of our separate jobs. This should make it easier for us to modify job properties, and easier to add new jobs without the duplicated effort of creating so many architecture-specific jobs.
Allow us to build only one architecture at a time if we so wish. If the MIPS build fails, we'd like to rebuild just MIPS and not the others.
I have looked at the 'Multi-configuration' job type; at the moment we are just using simple single-configuration jobs. I am not sure whether the multi-configuration type allows us to build individual architectures on their own. I had a play with the configuration matrix, but wasn't sure whether it could be adapted to build just a single platform. It looks like I may be able to use a Groovy expression in the combination filter to do this, something like:
(label=="mips".implies("slave"=="mips")
Maybe that could be simplified to something like slave == label where label is the former name of the job when it was in its single-configuration state and is now a build parameter?
I am thinking that we don't need a multi-configuration job for this, if we can programmatically choose the slave instead.
I would greatly appreciate some advice on how we can consolidate the number of jobs we have and programmatically change the target slave based on the architecture of the project, which is a build parameter.
Many thanks in advance,

You can make a wrapper job with a system Groovy script; you need the Groovy plugin for this. Let's call the wrapper job video-encoder-wrapper. Here is how to configure it:
Define the parameter ARCH
Assign the label to the video-encoder job based on the ARCH parameter, using an 'Execute system Groovy script' build step:
import hudson.model.*

// look up the downstream job and re-label it so that its next build
// runs on the slave matching the requested architecture
def encoder = Hudson.instance.getItem('video-encoder')
def arch = build.buildVariableResolver.resolve('ARCH')
def label = Hudson.instance.getLabel(arch)
encoder.setAssignedLabel(label)
Invoke the downstream project video-encoder (non-blocking), and don't forget to pass the ARCH parameter.
Check the Set Build Name option in the video-encoder job's configuration and set it to something like ${ENV,var="ARCH"} - #${BUILD_NUMBER}. This makes the build history easy to follow.
Disable concurrent builds of the video-encoder-wrapper job. This prevents two different labels being assigned to the video-encoder job at the same time.
Hope it helps

Related

How to defer "file" function execution in puppet 3.7

This seems like a trivial question, but in fact it isn't. I'm using Puppet 3.7 to deploy and configure artifacts from my project onto a variety of environments. A Puppet 5.5 upgrade is on the roadmap, but without an ETA so far.
One of the things I'm trying to automate is the incremental changes to the underlying db. It's not SQL, so standard tools are out of the question. These changes will come in the form of shell scripts contained in a special module that will also be deployed as an artifact. For each release we want to have a file whose content lists the shell scripts to execute in scope of this release. For instance, if in version 1.2 we had implemented JIRA-123, JIRA-124 and JIRA-125, I'd like to execute scripts JIRA-123.sh, JIRA-124.sh and JIRA-125.sh, but no other ones that will still be in that module from previous releases.
So my release "control" file would be called something like jiras.1.2.csv and have one line looking like this:
JIRA-123,JIRA-124,JIRA-125
The task for Puppet here seems trivial: read the content of this file, split on the "," character, and go on to build exec tasks for each of the JIRAs. The problem is that the Puppet function that should help me do it,
file("/somewhere/in/the/filesystem/jiras.1.2.csv")
gets executed at the time of building the puppet catalog, not at the time when the catalog is applied. However, since this file is a part of the payload of the release, it's not there yet. It will be downloaded from nexus in a tar.gz package of the release and extracted later. I have an anchor I can hold on to, which I use to synchronize the exec tasks, but can I attach the reading of the file content to that anchor?
Maybe I'm approaching the problem incorrectly? I was thinking the module with the pre-implementation and post-implementation tasks that constitute the incremental db upgrades could be structured so that for each release there's a subdirectory matching the version name, but then I need to list the contents of that subdirectory to build my exec tasks, and that too - at least to my limited puppet knowledge - can't be deferred until a particular anchor.
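For illustration, the closest workaround I can see is to push the read into an exec's own shell command, so the control file is only opened at apply time; in this sketch the paths and the anchor name are placeholders, not my real ones:
exec { 'run-release-jiras':
  provider => shell,
  # the file is read here, when the exec runs, not when the
  # catalog is compiled
  command  => 'for j in $(tr "," " " < /somewhere/in/the/filesystem/jiras.1.2.csv); do /somewhere/scripts/$j.sh || exit 1; done',
  require  => Anchor['release-payload-extracted'],
}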
--- EDITED after one of the comments ---
The problem is that the upgrade to puppet 5.x is beyond my control - it's another team handling this stuff in a huge organisation, so I have no influence over that and I'm stuck on 3.7 for the foreseeable future.
As for what I'm trying to do: for a bunch of different software packages that we develop and release, I want to create three new ones: pre-implementation, post-implementation and checks. The first will hold any tasks that are performed prior to releasing new code in our actual packages, typically things like backing up the db. Post-implementation will deal with issues that need to be addressed after we've deployed the new code; an example operation would be to go and modify older data because, for instance, we've changed the type of a column in a table. Checks are just validations performed to make sure the release is 100% correctly implemented, for instance running a select query and asserting on the type of data in the column whose type we've just changed. Today all of these are daunting manual operations performed by whoever is unlucky enough to be doing a release. Above all else, being manual, they are by definition error-prone.
The approach taken is that for every JIRA ticket being part of the release the responsible developer will have to decide what steps (if any) are needed to release their work, and script that. Puppet is supposed to orchestrate the execution of all of this.

Best way to program flow through a job loop

I see that Origen supports passing jobs to the program command in this video. What would be the preferred method to run the program command in a job loop (i.e. job == 'ws', then job == 'ft', etc.)?
thx
The job is a runtime concept, not a compile/generate time concept, so it doesn't really make sense to run the program command (i.e. generate the program) against different settings of job.
Origen doesn't currently provide any mechanism to pass define-type arguments through to the program generator from the command line, though you could implement that in your app easily enough by overriding the program command - i.e. capture and store them somewhere in your app and then continue with the regular command.
The 'Origen way' of doing things like this is to set up different target files with different variables set within them, then execute the program command for each of the targets.
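A minimal sketch of that approach; the DUT class and the $job variable are assumptions about your application, not Origen API:
# target/ws.rb -- selects the wafer-sort variant
$dut = MyApp::MyDUT.new
$job = :ws

# target/ft.rb -- selects the final-test variant
$dut = MyApp::MyDUT.new
$job = :ft
Then generate once per target (assuming a flow file at program/my_flow.rb):
origen p program/my_flow.rb -t ws
origen p program/my_flow.rb -t ft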

Ant - How to define a multi processed build

I am trying to find a way to define a multi processed (not a multi threaded) build on multicores with Ant on Ubuntu.
I guess <parallel> is not suitable.
In the documentation: https://ant.apache.org/manual/Tasks/parallel.html
I found the following "disclaimer":
"The primary use case for <parallel> is to run external programs ...
Accordingly, while this task has uses, it should be considered an advanced task which should be used in certain batch-processing or testing situations, rather than an easy trick to speed up build times on a multiway CPU."
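That said, when the heavy lifting happens in external programs, <parallel> will still fork them as separate OS processes. A minimal sketch, where the module directories and the use of make are assumptions:
<target name="build-all">
  <parallel threadCount="4">
    <!-- each exec spawns its own process, so the work is
         multi-processed even though Ant itself stays in one JVM -->
    <exec executable="make" dir="module-a" failonerror="true"/>
    <exec executable="make" dir="module-b" failonerror="true"/>
    <exec executable="make" dir="module-c" failonerror="true"/>
    <exec executable="make" dir="module-d" failonerror="true"/>
  </parallel>
</target>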

Generating a file in one Gradle task to be operated on by another Gradle task

I am working with Gradle for the first time on a project, having come from Ant on my last project. So far I like what I have been seeing, though I'm trying to figure out how to do some things that have kind of been throwing me for a loop. I am wondering what is the right pattern to use in the following situation.
I have a series of files on which I need to perform a number of operations. Each task operates on the newly generated output files of the task that came before it. I will try to contrive an example to demonstrate my question, since my project is somewhat more complicated and internal.
Stage One
First, let's say I have a task that must write out 100 separate text files with a random number in each. The name of the file doesn't matter, and let's say they all will live under parentFolder.
Example:
parentFolder
|
|-file1
|-file2
...
|-file100
I would think my initial take would be to do this in a loop inside the doLast (shortcutted with <<) closure of a custom task -- something like this:
task generateNumberFiles << {
    File parentFolder = mkdir(buildDir.toString() + "/parentFolder")
    // write file1..file100, each containing a single random number
    for (int x = 1; x <= 100; x++)
    {
        File currentFile = file(parentFolder.toString() + "/file" + String.valueOf(x))
        currentFile.write(String.valueOf(Math.random()))
    }
}
Stage Two
Next, let's say I need to read each file generated in the generateNumberFiles task and zip each file into a separate archive in a second folder, zipFolder. I'll put it under the parentFolder for simplicity, but the location isn't important.
Desired output:
parentFolder
|
|-file1
|-file2
...
|-file100
|-zipFolder
|
|-file1.zip
|-file2.zip
...
|-file100.zip
This seems problematic because, in theory, it's like I need to create a Zip task for each file (to generate a separate archive per file). So I suppose this is the first part of the question: how do I create a separate task to act on a bunch of files generated during a prior task and have that task run as part of the build process?
Adding the task at execution time is definitely possible, but getting the tasks to run seems more problematic. I have read that using .execute() is inadvisable and technically it is an internal method.
I also had the thought to add dependsOn to a subsequent task with a .matching { Task task -> task.name.startsWith("blah")} block. This seems like it won't work though, because task dependency is resolved [during the Gradle configuration phase][1]. So how can I create tasks to operate on these files since they didn't exist at configuration time?
Stage Three
Finally, let's complicate it a little more and say I need to perform some other custom action on the ZIP archives generated in Stage Two, something not built in to Gradle. I can't think of a realistic example, so let's just say I have to read the first byte of each ZIP and upload it to some server -- something that involves operating on each ZIP independently.
Stage Three is somewhat just a continuation of my question in Stage Two. I feel like the Gradle-y way to do this kind of thing would be to create tasks that perform a unit of work on each file and use some dependency to cause those tasks to execute. However, if the tasks don't exist when the dependency graph is built, how do I accomplish this sort of thing? On the other hand, am I totally off and is there some other way to do this sort of thing?
[1]: "Gradle builds the complete dependency graph before any task is executed." http://www.gradle.org/docs/current/userguide/build_lifecycle.html
You cannot create tasks during the execution phase. As you have probably figured out, Gradle constructs the task execution graph during the configuration phase, so you cannot add tasks later.
If you are simply trying to consume the output of one task as the input of another, then that becomes a simple dependsOn relationship, just like in Ant. I believe where you may be going down the wrong path is in thinking you need to dynamically create a Gradle Zip task for every archive you intend to create. In this case, since the number of archives is dynamic, based on the output of another task (i.e. determined during execution), you could simply create a single task which creates all those zip files. The easiest way of accomplishing this would simply be to use Ant's zip task via Gradle's Ant support.
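A minimal sketch of that single-task approach, reusing the generateNumberFiles example from the question (the paths are assumptions):
task zipNumberFiles(dependsOn: generateNumberFiles) << {
    def parent = file(buildDir.toString() + "/parentFolder")
    def zipDir = mkdir(parent.toString() + "/zipFolder")
    // one archive per input file, all produced by this single task
    // via Gradle's built-in AntBuilder
    parent.listFiles().findAll { it.isFile() }.each { f ->
        ant.zip(destfile: new File(zipDir, f.name + ".zip")) {
            fileset(dir: parent) {
                include(name: f.name)
            }
        }
    }
}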
We do something similar. While Mark Vieira's answer is correct, there may be a way to adjust things on both ends a bit. Specifically:
You could discover, as we do, all the zip files you need to create during the configuration phase. This will allow you to create any number of zip tasks, name them appropriately and relate them correctly. It will also allow you to build them individually as needed, and to take advantage of incremental build support with up-to-date checks (see the sketch after this answer).
If you have something you need to do before you can discover what you need for (1) and if that is relatively simple, you could code that specifically not as a task but as a configuration step.
Note that "Gradle-y" way is flexible but don't do this just because you may feel this is "Gradle-y". Do what is right. You need individual tasks if you want to be able to invoke and relate them individually, perhaps optimize build performance by skipping up-to-date ones. If this is not what you care about, don't worry about making each file into its own task.

Can I call a Gradle task from a Groovy method in a .gradle file?

I have an unconventional build script in Gradle that does cyclic compiling of projects.
It's going to take weeks to change it to a standard Gradle build, so that's not going to happen now.
The issue is that I want to stop using Ant in my script and move to Groovy + Gradle only.
The question is how to replace Ant tasks like copy, replaceregexp, or unzip.
I assume the standard Gradle plugins (the base and Java plugins) give me most of what I need, but they are all tasks. If I have a method in my main task that needs a copy operation, how do I go about that?
Is there a way to call a task's code? Is there a way to call the task itself from a Groovy script?
First of all, you should never call a task from another task - bad things will happen if you do. Instead, you should declare a relationship between the two tasks (dependsOn, mustRunAfter, finalizedBy).
In some cases (fewer than people tend to believe), chaining tasks may not be flexible enough; hence, for some tasks (e.g. Copy) an equivalent method (e.g. project.copy) is provided. However, these methods should be used with care. In many cases, tasks are the better choice, as they are the basic building blocks of Gradle and offer many advantages (e.g. automatic up-to-date checks).
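For example, a minimal sketch of calling the copy method from inside a task action (the paths are assumptions):
task stageArtifacts << {
    // project.copy runs right here, as part of this task's action,
    // unlike a Copy task, which is a separate node in the task graph
    copy {
        from 'build/libs'
        into 'dist'
        include '*.jar'
    }
}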
Occasionally it also makes sense to use a GradleBuild task, which allows you to execute one build as part of another.
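A minimal sketch, where the build file path and the task names are assumptions:
task buildTools(type: GradleBuild) {
    buildFile = 'tools/build.gradle'
    tasks = ['clean', 'assemble']
}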
