How to defer "file" function execution in puppet 3.7 - puppet

This seems like a trivial question, but in fact it isn't. I'm using Puppet 3.7 to deploy and configure artifacts from my project onto a variety of environments. A Puppet 5.5 upgrade is on the roadmap, but with no ETA so far.
One of the things I'm trying to automate is the incremental changes to the underlying database. It's not SQL, so standard tools are out of the question. These changes will come in the form of shell scripts contained in a special module that will also be deployed as an artifact. For each release we want to have a file whose content lists the shell scripts to execute in scope of this release. For instance, if in version 1.2 we had implemented JIRA-123, JIRA-124 and JIRA-125, I'd like to execute the scripts JIRA-123.sh, JIRA-124.sh and JIRA-125.sh, but none of the other scripts still sitting in that module from previous releases.
So my release "control" file would be called something like jiras.1.2.csv and have one line looking like this:
JIRA-123,JIRA-124,JIRA-125
The task for Puppet here seems trivial - read the content of this file, split on the "," character, and go on to build exec tasks for each of the JIRAs. The problem is that the Puppet function that should help me do it,
file("/somewhere/in/the/filesystem/jiras.1.2.csv")
gets executed at the time of building the Puppet catalog, not at the time when the catalog is applied. However, since this file is part of the payload of the release, it isn't there yet at compile time. It will be downloaded from Nexus in a tar.gz package of the release and extracted later. I have an anchor I can hold on to, which I use to synchronize the exec tasks, but can I attach the reading of the file content to that anchor?
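One apply-time workaround I'm considering (just a sketch, untested; the wrapper script and anchor title below are placeholder names) would be to push the reading of the control file into an exec resource, so it only happens when the catalog is applied:

exec { 'run_release_db_scripts':
  command   => '/somewhere/in/the/filesystem/run_release_scripts.sh /somewhere/in/the/filesystem/jiras.1.2.csv',
  path      => ['/bin', '/usr/bin'],
  logoutput => true,
  # hypothetical anchor marking the point where the release tar.gz has been extracted
  require   => Anchor['release_payload_extracted'],
}

The wrapper script would read the CSV, split on "," and run each listed JIRA-*.sh, so the list is only consulted at apply time. I'm not sure this is the cleanest option, hence the question.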
Maybe I'm approaching the problem incorrectly? I was thinking the module with the pre-implementation and post-implementation tasks that constitute the incremental db upgrades could be structured so that for each release there's a subdirectory matching the version name, but then I need to list the contents of that subdirectory to build my exec tasks, and that too - at least to my limited Puppet knowledge - can't be deferred until a particular anchor is reached.
--- EDITED after one of the comments ---
The problem is that the upgrade to Puppet 5.x is beyond my control - it's another team handling this stuff in a huge organisation, so I have no influence over that and I'm stuck on 3.7 for the foreseeable future.
As for what I'm trying to do - for a bunch of different software packages that we develop and release, I want to create three new ones: pre-implementation, post-implementation and checks. The first will hold any tasks that are performed prior to releasing new code in our actual packages; this is typically things like backing up the database. Post-implementation will deal with issues that need to be addressed after we've deployed the new code - an example operation would be to go and modify older data because, for instance, we've changed the type of a column in a table. Checks are just validations performed to make sure the release is 100% correctly implemented - for instance, run a select query and assert on the type of data in the column whose type we've just changed. Today all of these are daunting manual operations performed by whoever is unlucky enough to be doing a release. Above all else, being manual, they are by definition error prone.
The approach taken is that for every JIRA ticket being part of the release the responsible developer will have to decide what steps (if any) are needed to release their work, and script that. Puppet is supposed to orchestrate the execution of all of this.

Related

Configuring Jenkins to programmatically determine slave at build time from build parameter?

This is perhaps a slightly unusual Jenkins query, but we've got a project that spans many projects. All of them are Linux based, but they span multiple architectures (MIPS, SPARC, ARMv6, ARMv7).
For a specific component, let's call it 'video-encoder', we'll therefore have 4 projects: mips-video-encoder, sparc-video-encoder, etc.
Each project is built on 4 separate slaves with a label that correlates to their architecture, e.g. the MIPS slave has the labels 'mips' and 'linux'.
My objectives are to:
Consolidate all of our separate jobs. This should make it easier for us to modify job properties, as well as easier to add more jobs without the duplicated effort of adding so many architecture-specific jobs.
To allow us to build only one architecture at a time if we so wish. If the MIPS job fails, we'd like to build just for MIPS and not for others.
I have looked at the 'Multi-configuration' job type -- at the moment we are just using single-configuration jobs, which are simple. I am not sure if the multi-configuration type allows us to build only individual architectures at once. I had a play with the configuration matrix, but wasn't sure if this could be changed / adapted to build for just a single platform. It looks like I may be able to use a Groovy statement to do this? Something like:
(label=="mips".implies("slave"=="mips")
Maybe that could be simplified to something like slave == label where label is the former name of the job when it was in its single-configuration state and is now a build parameter?
I am thinking that we don't need a multi-config job for this, if we can programmatically choose the slave.
I would greatly appreciate some advice on how we can consolidate the number of jobs we have and programmatically change the target slave based on the architecture of the project, which is a build parameter.
Many thanks in advance,
You can make a wrapper job with a system Groovy script. You need the Groovy plugin for this. Let's call the wrapper job video-encoder-wrapper; here is how to configure it:
Define the parameter ARCH
Assign the label to the video-encoder job based on the ARCH parameter by the step Execute system Groovy script
import hudson.model.*

// Look up the downstream job and assign it the label that matches the ARCH build parameter
encoder = Hudson.instance.getItem('video-encoder')
def arch = build.buildVariableResolver.resolve("ARCH")
label = Hudson.instance.getLabel(arch)
encoder.setAssignedLabel(label)
Invoke video-encoder as a non-blocking downstream project, and don't forget to pass the ARCH parameter.
Check the option Set Build Name in the video-encoder job's configuration and set it to something like ${ENV,var="ARCH"} - #${BUILD_NUMBER}. It will allow you to easily track the build history.
Disable concurrent builds of the video-encoder-wrapper job. This prevents two different labels from being assigned to the video-encoder job at the same time.
Hope it helps

In TFS or MTM, is there a way to lock a test case workitem from being edited once it has a test execution record associated to it?

In MTM (or TFS) 2010, I want to be able to lock a Test Case work item from being modified (description or steps) once it has been executed (has 1 or more execution history records).
I am having problems where testers are linking test cases executed against previous releases to a new release and then modifying the test case, in some cases substantially, as the product has changed.
When looking at history, it now inappropriately appears that this new test was executed and passed successfully on prior releases. What should happen is that a new test case is created (copy from), but I can't seem to enforce that rule within the system. I need to be able to lock the contents of a test case once it's been executed so I always have an accurate historical record of how the product was tested when released.
If we ever had to do a patch to that old version, this old test case would still be accurate, but instead what we have is the test case was modified for new functionality and is not applicable to the original version any more.
Any ideas?
The testers should put the test cases in the Closed state after they are executed against a previous release. For new releases they should run only test cases that are in the Ready state.

SSIS Sequence Container will not fail

I am using Visual Studio 2012. I have created an SSIS solution that makes use of Sequence Containers.
The point of the SSIS package is this. Every 3rd and 5th business day I need to create folders and copy files into them. On the 3rd business day files are copied to the respective 3rd business day folders. Likewise on the 5th business day. These folders and files are housed in a parent folder that is named yyyymm i.e. 201411. The folder structure for both is a little odd but that is beyond my control and not the issue.
I could not figure out how to count business days, so I was hoping to design the SSIS package so that a single package could accomplish both days by toggling Success/Failure precedence constraints.
I designed the flow so that the first time the package is run it creates the Day3 folders and then copies the files. The second time it is run, the Day3 task will note that the Day3 folders have already been created and fail. That failure is intended. I want the failure to affect the parent sequence container and for the sequence container to fail entirely. After this failure I want the package to continue along the failure constraint, creating the Day5 folders and copying those files.
The issue is that the task fails as desired; however, this stops the package entirely. The flow does not continue along the failure constraint and perform the Day5 function. I have tried adjusting properties in the packages and sequence containers, and also adjusting the Propagate system variable as I saw suggested in internet searches, but I cannot get the intended flow to work.
There is probably a better way to do this entire process and I am open to suggestions, but the real purpose of this query is to figure out how to get the Sequence container to fail as a whole and for the package to continue performing the Day5 functions.
(Screenshots: the 3rd business day folder structure, the 5th business day folder structure, the SSIS package, and the desired result - the sequence container fails if a child package fails within it.)
In response to a comment below: I tried to set up a Boolean constraint but this did not work either. Not sure if I did this correctly.
An alternative way of doing this would be to set a Boolean variable at the package level and, as soon as Day3 fails, set it to False. On the failure precedence constraint, you would check the value of this variable. If the value of the variable is FALSE (or TRUE, depending on how you set it), then you can proceed with the Day5 container; otherwise ignore it.
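For example, you could set that constraint's evaluation operation to Expression and Constraint and use an expression along these lines (the variable name Day3Succeeded is only an illustration, not from your package):

@[User::Day3Succeeded] == FALSE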
The other thing that comes to my mind, can you toggle the AND/OR on the precedence constraints after Day 3 container. Just wanted to check and see if that might change the behavior.

Generating a file in one Gradle task to be operated on by another Gradle task

I am working with Gradle for the first time on a project, having come from Ant on my last project. So far I like what I have been seeing, though I'm trying to figure out how to do some things that have kind of been throwing me for a loop. I am wondering what is the right pattern to use in the following situation.
I have a series of files on which I need to perform a number of operations. Each task operates on the newly generated output files of the task that came before it. I will try to contrive an example to demonstrate my question, since my project is somewhat more complicated and internal.
Stage One
First, let's say I have a task that must write out 100 separate text files with a random number in each. The name of the file doesn't matter, and let's say they all will live under parentFolder.
Example:
parentFolder
|
|-file1
|-file2
...
|-file100
I would think my initial take would be to do this in a loop inside the doLast (shortcutted with <<) closure of a custom task -- something like this:
task generateNumberFiles << {
    // create build/parentFolder and write out 100 files, each containing a random number
    File parentFolder = mkdir(buildDir.toString() + "/parentFolder")
    for (int x = 1; x <= 100; x++)
    {
        File currentFile = file(parentFolder.toString() + "/file" + String.valueOf(x))
        currentFile.write(String.valueOf(Math.random()))
    }
}
Stage Two
Next, let's say I need to read each file generated in the generateNumberFiles task and zip each file into a separate archive in a second folder, zipFolder. I'll put it under the parentFolder for simplicity, but the location isn't important.
Desired output:
parentFolder
|
|-file1
|-file2
...
|-file100
|-zipFolder
|
|-file1.zip
|-file2.zip
...
|-file100.zip
This seems problematic because, in theory, it's like I need to create a Zip task for each file (to generate a separate archive per file). So I suppose this is the first part of the question: how do I create a separate task to act on a bunch of files generated during a prior task and have that task run as part of the build process?
Adding the task at execution time is definitely possible, but getting the tasks to run seems more problematic. I have read that using .execute() is inadvisable and technically it is an internal method.
I also had the thought to add dependsOn to a subsequent task with a .matching { Task task -> task.name.startsWith("blah")} block. This seems like it won't work though, because task dependency is resolved during the Gradle configuration phase [1]. So how can I create tasks to operate on these files since they didn't exist at configuration time?
Stage Three
Finally, let's complicate it a little more and say I need to perform some other custom action on the ZIP archives generated in Stage Two, something not built in to Gradle. I can't think of a realistic example, so let's just say I have to read the first byte of each ZIP and upload it to some server -- something that involves operating on each ZIP independently.
Stage Three is somewhat just a continuation of my question in Stage Two. I feel like the Gradle-y way to do this kind of thing would be to create tasks that perform a unit of work on each file and use some dependency to cause those tasks to execute. However, if the tasks don't exist when the dependency graph is built, how do I accomplish this sort of thing? On the other hand, am I totally off and is there some other way to do this sort of thing?
[1]: "Gradle builds the complete dependency graph before any task is executed." http://www.gradle.org/docs/current/userguide/build_lifecycle.html
You cannot create tasks during the execution phase. As you have probably figured out, since Gradle constructs the task execution graph during the configuration phase, you cannot add tasks later.
If you are simply trying to consume the output of one task as the input of another, then that becomes a simple dependsOn relationship, just like Ant. I believe where you may be going down the wrong path is in thinking you need to dynamically create a Gradle Zip task for every archive you intend to create. In this case, since the number of archives you will be creating is dynamic based on the output of another task (i.e. determined during execution), you could simply create a single task which creates all those zip files. The easiest way of accomplishing this would simply be to use Ant's zip task via Gradle's Ant support.
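A minimal sketch of that single-task approach might look like this (untested; the task name is made up and the folder layout is borrowed from the question):

task zipNumberFiles(dependsOn: generateNumberFiles) << {
    def parentFolder = file(buildDir.toString() + "/parentFolder")
    def zipFolder = mkdir(parentFolder.toString() + "/zipFolder")
    // at execution time the generated files already exist, so we can simply walk the folder
    parentFolder.listFiles().findAll { it.isFile() }.each { f ->
        ant.zip(destfile: new File(zipFolder, f.name + ".zip")) {
            fileset(dir: parentFolder) {
                include(name: f.name)
            }
        }
    }
}

Because everything happens inside the doLast closure of one task, nothing about the number of files needs to be known at configuration time.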
We do something similar. While Mark Vieira's answer is correct, there may be a way to adjust things on both ends a bit. Specifically:
You could discover, as we do, all the zip files you need to create during the configuration phase. This will allow you to create any number of zip tasks, name them appropriately and relate them correctly. It will also allow you to build them individually as needed and take advantage of the incremental build support with up-to-date checks. (A sketch of this approach follows below.)
If you have something you need to do before you can discover what you need for (1) and if that is relatively simple, you could code that specifically not as a task but as a configuration step.
Note that "Gradle-y" way is flexible but don't do this just because you may feel this is "Gradle-y". Do what is right. You need individual tasks if you want to be able to invoke and relate them individually, perhaps optimize build performance by skipping up-to-date ones. If this is not what you care about, don't worry about making each file into its own task.
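As a rough sketch of point (1), assuming the files to zip can already be listed when the build is configured (the directory and task names here are invented for illustration):

// runs during the configuration phase, so the source directory must exist by then
def numberFilesDir = file("src/numberFiles")
numberFilesDir.listFiles().findAll { it.isFile() }.each { f ->
    task("zip_${f.name}", type: Zip) {
        from f
        archiveName = f.name + ".zip"
        destinationDir = file("$buildDir/zipFolder")
    }
}
// aggregate task so all the per-file zips can be run with a single invocation
task zipAllNumberFiles(dependsOn: tasks.withType(Zip))

Each generated Zip task then gets its own up-to-date check, and any one of them can be invoked on its own.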

Perforce Streams - Isolating imported libraries

I need some guidance on a use case I've run into when using Perforce Streams. Say I have the following structure:
//ProductA/Dev
    share ...
//ProductA/Main
    share ...
    import Component1/... //Component1/Release-1_0/...
//ProductA/Release-1_0
    share ...
//Component1/Dev
    share ...
//Component1/Main
    share ...
//Component1/Release-1_0
    share ...
ProductA_Main imports code from Component1_Release-1_0. Whenever Component1_Release-1_0 gets updated, it will automatically be available to ProductA (but read-only).
Now, the problem I'm running into is that since ProductA_Release-1_0 inherits from Main, and thus also imports Component1_Release-1_0, any code or changes made to the component will immediately affect the ProductA release. This sort of side effect seems very risky.
Is there any way to isolate the code such that in the release stream ALL code changes are tracked (even code that was imported) and there are zero side effects from other stream depots, while for the main and dev streams the code is still imported? This way the release will have zero side effects, while main and dev conveniently import any changes made in the depot.
I know one option would be to create some sort of product specific release stream in the Component1 depot, but that seems a bit of a kludge since Component1 shouldn't need any references to ProductA.
If you are just looking to be able to rebuild previous versions, you can use labels to sync the stream back to the exact situation it was in at the time, by giving a changelist number (or label) to p4 sync.
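For instance (the changelist number is only a placeholder):

p4 sync //ProductA/Release-1_0/...@12345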
If you are looking for explicit change tracking, you may want to branch the component into your release line. This will make the release copy of the library completely immune to changes in the other stream, unless you choose to branch and reconcile the data from there. If you think you might make independent changes to the libraries in order to patch bugs, this might be something to consider. Of course, Perforce won't copy the files in your database on the server, just pointers to them in the metadata, and since you're already importing them into the stream, you're already putting copies of the files on your build machines, so there shouldn't be any "waste" except on the metadata front.
In the end, this looks like a policy question. Rebuilding can be done by syncing back to a previous version, but if you want to float the library fixes into the main code line, leave it as is; if you want to lock down the libraries and make the changes explicit, I'd just branch the libraries in.
Integrating into your release branch
In answer to the question in the comments: if you choose to integrate directly into your release branch, you'll need to remove the import line from the stream specification and replace it with an isolate line, which would then place the code only in the release branch. Then you would use the standard p4 integrate command (or p4v) to integrate from //Component1/Release-1_0/... to //ProductA/Main/Component1/....
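Roughly, using the paths above, the steps would look like this (the submit description is only an example):

# In the affected stream spec, replace the import line with an isolate line:
#   isolate Component1/...
# Then branch the component in with a standard integrate:
p4 integrate //Component1/Release-1_0/... //ProductA/Main/Component1/...
p4 submit -d "Branch Component1 Release-1_0 into the product stream"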
Firming up a separate component stream
One final thought is that you could create another stream on the //Component1 line to act as a placeholder for the release.
