CCNet - Automatically force prerequisite build

If I have two builds, A and B, how can I set up CCNet such that when B is forced, it will automatically force a build for A before proceeding with the build steps for B?
I've seen the ForceBuildPublisher; however, that looks like it's for dependent builds instead of prerequisite builds.
I've also seen the page for the "launchccnetbuild" nant task, but the code file appears to be missing.

No "nice" way, I am afraid. ForceBuildPublisher will work if you can make project B wait for project A to start and finish. For the latter multiple possiblities exist, one would be to poll a file.

As Lothar_K mentioned, there is no "nice" way. Usually when this situation comes up, the build scripts (in your case, the NAnt tasks) from build A are copied or merged into the NAnt script for build B.
Another route is to use the sequential task, depending on your CCNet version; a sketch follows.
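A rough sketch of the sequential route (assuming a CCNet version that ships the sequential task; the build file names are placeholders), where project B simply runs A's script before its own:

<project name="ProjectB">
    <tasks>
        <sequential>
            <tasks>
                <nant>
                    <buildFile>projectA.build</buildFile>
                </nant>
                <nant>
                    <buildFile>projectB.build</buildFile>
                </nant>
            </tasks>
        </sequential>
    </tasks>
</project>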

Related

GitLab CI - Keep last pipeline status

In GitLab CI, is it possible to keep the last pipeline status when no jobs are queued upon a push? I have a changes rule set up like this in my .gitlab-ci.yml:
changes:
- Assets/*
- Packages/*
- ProjectSettings/*
- .gitlab-ci.yml
which applies to all jobs in the pipeline (these are build jobs for Unity, though that's irrelevant). NOTE: I only want to run a build job if there are actual file changes that would require a rebuild. Changes to README.md and CONTRIBUTING.md are not changes that require a rebuild, which is why I have such a rule.
The problem is that I require a successful pipeline to merge branches, and when I try to merge a branch that only modified README.md, there obviously is no pipeline.
Is there a way to just "reuse" the result of a previous pipeline or to have a "dummy" job that succeeds instantly upon any push, so as to be able to merge this branch without requiring an expensive rebuild of the whole project?
As you mentioned in your last paragraph, the only way to work around this would be to inject a dummy job that always succeeds; something like echo "hello world" in the script.
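A minimal sketch of such a job (the job and stage names are arbitrary); since it carries no changes rule, it runs on every push, so every branch gets a pipeline that can pass:

placeholder-job:
  stage: build
  script:
    - echo "hello world"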
However, depending on how long your tests run, your best bet may be to just have your tests run every time regardless of changes. Any sort of partial pipeline run using the changes keyword leaves you open to merging changes that break your pipeline. It essentially tightly couples your pipeline logic to the component structure of your code, which isn't always a good thing, since one of the points of your pipeline is to catch those kinds of cross-component breakages.

How to defer "file" function execution in puppet 3.7

This seems like a trivial question, but in fact it isn't. I'm using Puppet 3.7 to deploy and configure artifacts from my project onto a variety of environments. A Puppet 5.5 upgrade is on the roadmap, but without an ETA so far.
One of the things I'm trying to automate is the incremental changes to the underlying db. It's not SQL, so standard tools are out of the question. These changes will come in the form of shell scripts contained in a special module that will also be deployed as an artifact. For each release we want to have a file whose content lists the shell scripts to execute in scope of this release. For instance, if version 1.2 implemented JIRA-123, JIRA-124 and JIRA-125, I'd like to execute scripts JIRA-123.sh, JIRA-124.sh and JIRA-125.sh, but no other ones that will still be in that module from previous releases.
So my release "control" file would be called something like jiras.1.2.csv and have one line looking like this:
JIRA-123,JIRA-124,JIRA-125
The task for Puppet here seems trivial: read the content of this file, split on the "," character, and go on to build exec tasks for each of the JIRAs. The problem is that the Puppet function that should help me do it,
file("/somewhere/in/the/filesystem/jiras.1.2.csv")
gets executed at the time the Puppet catalog is built, not at the time the catalog is applied. However, since this file is part of the payload of the release, it's not there yet. It will be downloaded from Nexus in a tar.gz package of the release and extracted later. I have an anchor I can hold on to, which I use to synchronize the exec tasks, but can I attach the reading of the file content to that anchor?
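To make the timing issue concrete, a minimal sketch of one way around it (all paths and the anchor name are hypothetical): let the shell, rather than the file() function, read and split the control file inside an exec, so the read happens at apply time instead of at catalog compilation:

exec { 'run-release-scripts':
  # the shell, not file(), reads the control file, so this happens at
  # apply time, after the release payload has been extracted
  command  => 'for j in $(tr "," " " < /opt/release/jiras.1.2.csv); do /opt/release/scripts/$j.sh; done',
  provider => shell,
  require  => Anchor['release_payload_extracted'],
}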
Maybe I'm approaching the problem incorrectly? I was thinking the module with the pre-implementation and post-implementation tasks that constitute the incremental db upgrades could be structured so that for each release there's a subdirectory matching the version name, but then I need to list the contents of that subdirectory to build my exec tasks, and that too - at least to my limited Puppet knowledge - can't be deferred until a particular anchor.
--- EDITED after one of the comments ---
The problem is that the upgrade to Puppet 5.x is beyond my control - it's handled by another team in a huge organisation, so I have no influence over it and I'm stuck on 3.7 for the foreseeable future.
As for what I'm trying to do: for a bunch of different software packages that we develop and release, I want to create three new ones: pre-implementation, post-implementation and checks. The first will hold any tasks that are performed prior to releasing new code in our actual packages - typically things like backing up the db. Post-implementation will deal with issues that need to be addressed after we've deployed the new code - an example operation would be to go and modify older data because, for instance, we've changed the type of a column in a table. Checks are just validations performed to make sure the release is 100% correctly implemented - for instance, run a select query and assert on the type of data in the column whose type we've just changed. Today all of these are daunting manual operations performed by whoever is unlucky enough to be doing a release. Above all else, being manual, they are by definition error-prone.
The approach taken is that for every JIRA ticket being part of the release the responsible developer will have to decide what steps (if any) are needed to release their work, and script that. Puppet is supposed to orchestrate the execution of all of this.

Generating a file in one Gradle task to be operated on by another Gradle task

I am working with Gradle for the first time on a project, having come from Ant on my last project. So far I like what I have been seeing, though I'm trying to figure out how to do some things that have kind of been throwing me for a loop. I am wondering what is the right pattern to use in the following situation.
I have a series of files on which I need to perform a number of operations. Each task operates on the newly generated output files of the task that came before it. I will try to contrive an example to demonstrate my question, since my project is somewhat more complicated and internal.
Stage One
First, let's say I have a task that must write out 100 separate text files with a random number in each. The name of the file doesn't matter, and let's say they all will live under parentFolder.
Example:
parentFolder
|
|-file1
|-file2
...
|-file100
My initial take would be to do this in a loop inside the doLast closure (shortcutted with <<) of a custom task -- something like this:
task generateNumberFiles << {
    File parentFolder = mkdir(buildDir.toString() + "/parentFolder")
    // file1 through file100, each containing a random number
    for (int x = 1; x <= 100; x++)
    {
        File currentFile = file(parentFolder.toString() + "/file" + String.valueOf(x))
        currentFile.write(String.valueOf(Math.random()))
    }
}
Stage Two
Next, let's say I need to read each file generated in the generateNumberFiles task and zip each file into a separate archive in a second folder, zipFolder. I'll put it under the parentFolder for simplicity, but the location isn't important.
Desired output:
parentFolder
|
|-file1
|-file2
...
|-file100
|-zipFolder
|
|-file1.zip
|-file2.zip
...
|-file100.zip
This seems problematic because, in theory, it's like I need to create a Zip task for each file (to generate a separate archive per file). So I suppose this is the first part of the question: how do I create a separate task to act on a bunch of files generated during a prior task and have that task run as part of the build process?
Adding the task at execution time is definitely possible, but getting the tasks to run seems more problematic. I have read that using .execute() is inadvisable and technically it is an internal method.
I also had the thought to add dependsOn to a subsequent task with a .matching { Task task -> task.name.startsWith("blah")} block. This seems like it won't work though, because task dependency is resolved [during the Gradle configuration phase][1]. So how can I create tasks to operate on these files since they didn't exist at configuration time?
Stage Three
Finally, let's complicate it a little more and say I need to perform some other custom action on the ZIP archives generated in Stage Two, something not built in to Gradle. I can't think of a realistic example, so let's just say I have to read the first byte of each ZIP and upload it to some server -- something that involves operating on each ZIP independently.
Stage Three is somewhat just a continuation of my question in Stage Two. I feel like the Gradle-y way to do this kind of thing would be to create tasks that perform a unit of work on each file and use some dependency to cause those tasks to execute. However, if the tasks don't exist when the dependency graph is built, how do I accomplish this sort of thing? On the other hand, am I totally off and is there some other way to do this sort of thing?
[1]: "Gradle builds the complete dependency graph before any task is executed." http://www.gradle.org/docs/current/userguide/build_lifecycle.html
You cannot create tasks during the execution phase. As you have probably figured out, Gradle constructs the task execution graph during the configuration phase, so tasks cannot be added later.
If you are simply trying to consume the output of one task as the input of another, then that becomes a simple dependsOn relationship, just like in Ant. I believe where you may be going down the wrong path is in thinking you need to dynamically create a Gradle Zip task for every archive you intend to create. In this case, since the number of archives is dynamic, based on the output of another task (i.e. determined during execution), you could simply create a single task which creates all those zip files. The easiest way of accomplishing this would be to use Ant's zip task via Gradle's Ant support.
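For example, a rough sketch along those lines, reusing the task and folder names from the contrived example above (an illustration of the idea, not necessarily how the answerer would write it):

task zipNumberFiles(dependsOn: generateNumberFiles) << {
    File parentFolder = file(buildDir.toString() + "/parentFolder")
    File zipFolder = mkdir(parentFolder.toString() + "/zipFolder")
    // a single task creates one archive per generated file,
    // however many files the previous task happened to produce
    parentFolder.listFiles().findAll { it.isFile() }.each { f ->
        ant.zip(destfile: new File(zipFolder, f.name + ".zip")) {
            fileset(dir: parentFolder, includes: f.name)
        }
    }
}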
We do something similar. While Mark Vieira's answer is correct, there may be a way to adjust things on both ends a bit. Specifically:
1. You could discover, as we do, all the zip files you need to create during the configuration phase. This will allow you to create any number of zip tasks, name them appropriately and relate them correctly. It will also allow you to build them individually as needed and take advantage of the incremental build support with up-to-date checks (see the sketch after this list).
2. If you have something you need to do before you can discover what you need for (1), and if that is relatively simple, you could code it not as a task but as a configuration step.
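A minimal sketch of point (1), assuming the input files are already discoverable when the build script is evaluated (the src/payload location is hypothetical):

fileTree("src/payload").files.each { f ->
    task("zip-${f.name}", type: Zip) {
        from f
        archiveName = "${f.name}.zip"
        destinationDir = file("$buildDir/zips")
    }
}
// aggregate task so all the generated zip tasks can be invoked together
task zipAll(dependsOn: tasks.matching { it.name.startsWith("zip-") })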
Note that "Gradle-y" way is flexible but don't do this just because you may feel this is "Gradle-y". Do what is right. You need individual tasks if you want to be able to invoke and relate them individually, perhaps optimize build performance by skipping up-to-date ones. If this is not what you care about, don't worry about making each file into its own task.

CruiseControl.Net Nant copy/move failure error

I've recently taken over responsibility for a CruiseControl continuous integration server, although I know very little about either CruiseControl or NAnt. But, such is life.
One of the regular build jobs this server is supposed to do is to execute a NAnt script that backs up files and data from one of the live servers to a backup server. I've discovered that this has been failing pretty much as far back as the user interface will let me see.
The error message is always the same:
Build Error: NAnt.Core.BuildException
Cannot copy '[filename].bak' to '[server]'.
But it doesn't always fail at exactly the same spot.
The NAnt script that's executing is pretty much several iterations of this copy code:
<copy todir="${backup.root}\{dirname}">
    <fileset basedir="s:">
        <include name="**/*" />
    </fileset>
</copy>
Although some of the commands are 'move' rather than 'copy'.
The fact that this happens at different points in the script suggests to me that this is either down to a timeout, or to the script being unable to access files that are in use by the system when the script runs. But I've never been able to get a successful execution through, no matter what time of day I set it to run.
The error messages are not particularly helpful in identifying what the problem actually is. And googling them is not particularly enlightening. I'm not expecting a solution to this (though one would be nice) - but it'd be enormously helpful if I could just get some pointers on where to look next in terms of identifying the problem.
As this is a backup you could set the copy task to proceed on failure until you identify the problem. Obviously you don't want to leave it that way permanently.
See http://nant.sourceforge.net/release/0.85/help/tasks/copy.html
I would add the verbose='true' and failonerror='false' attributes to the copy task and see if that helps.
Setting overwrite based on your scenario may also help.
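Concretely, a hedged tweak of the copy block from the question (attributes as documented for NAnt 0.85 at the link above):

<copy todir="${backup.root}\{dirname}" verbose="true" failonerror="false" overwrite="true">
    <fileset basedir="s:">
        <include name="**/*" />
    </fileset>
</copy>

With verbose on, the log should at least show which file the task was handling when it failed.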
First of all, extract the NAnt section to a sandbox NAnt file and run NAnt yourself from the command line, so you don't have to wait until the scheduled backup time each day to test. Basically, set up a testbed for ease of testing.
Then run it and see where it fails. Then shorten the NAnt task until it works, no matter how little work it's doing. Now you should know what first causes it to fail. Focus on that.
I've used NAnt a fair amount and:
Cannot copy '[filename].bak' to '[server]'
makes me think some properties aren't being resolved. Why would someone name a file [filename].bak? 'filename' looks like the name of a property that doesn't exist in NAnt. Just going by gut feel here.

Can I call a Gradle task from a Groovy method in a .gradle file?

I have a non-conventional build script in Gradle that does cyclic compiling of projects.
It's going to take weeks to change it to a standard Gradle build, so that's not going to happen now.
The issue is that I want to stop using Ant in my script and move to using Groovy + Gradle only.
The question is how to replace Ant tasks like copy, replaceregexp or unzip.
I assume the standard Gradle plugin and the Java plugin have most of what I need, but they are all tasks. If a method in my main task needs a copy operation, how do I go about that?
Is there a way to call a task's code? Is there a way to call the task itself from a Groovy script?
First of all, you should never call a task from another task - bad things will happen if you do. Instead, you should declare a relationship between the two tasks (dependsOn, mustRunAfter, finalizedBy).
In some cases (fewer than people tend to believe), chaining tasks may not be flexible enough; hence, for some tasks (e.g. Copy) an equivalent method (e.g. project.copy) is provided. However, these methods should be used with care. In many cases, tasks are the better choice, as they are the basic building blocks of Gradle and offer many advantages (e.g. automatic up-to-date checks).
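For example, a minimal sketch of the method flavour (the task name and paths are placeholders):

task stageArtifacts {
    doLast {
        // project.copy runs immediately as part of this task's action,
        // unlike the Copy task, which is a node in the task graph
        copy {
            from "build/output"
            into "build/staging"
            include "**/*.jar"
        }
    }
}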
Occasionally it also makes sense to use a GradleBuild task, which allows you to execute one build as part of another.