Can Gradle do substitutions as it copies resources?

For a group of developers, all the differences are stored in a normal property file:
    token1=some value
    token2=9000
etc.
The 'tokens' are used in a series of XML files that reside in the normal src/main/resources directory. When Gradle copies these files into the build directory (and I don't know for sure which task does that), is there any opportunity to execute custom code? Specifically, I would like to have the token values from the property file substituted into the copy. Thus, the original file remains untouched, but the version at runtime has the desired values for the given developer.
Finally, I know this can be done brute force with two or three steps that change the file after it is copied. I really want to know if there is an elegant way to do this in a single step.

As part of the build, Gradle runs the processResources task, which copies the resources into the build directory. While copying, processResources can be configured to filter the files (or to execute custom code by adding a doLast block):
    processResources {
        filter org.apache.tools.ant.filters.ReplaceTokens, tokens: [
            ...
        ]
    }
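A fuller sketch that wires the per-developer property file into that filter; the file name dev.properties and its project-root location are assumptions, and note that ReplaceTokens expects the tokens in the XML files to be written with @ delimiters, e.g. @token1@:

    // Load the developer's overrides from dev.properties (hypothetical name).
    def tokens = [:]
    file('dev.properties').withInputStream { stream ->
        def props = new Properties()
        props.load(stream)
        // ReplaceTokens wants String keys and values.
        props.each { k, v -> tokens[k.toString()] = v.toString() }
    }

    processResources {
        // Substitutes @token1@, @token2@, ... as the resources are copied.
        filter org.apache.tools.ant.filters.ReplaceTokens, tokens: tokens
    }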
These two links can provide more help:
http://java.dzone.com/articles/resource-filtering-gradle
http://mrhaki.blogspot.in/2010/11/gradle-goodness-add-filtering-to.html

Related

How can I copy an existing overthere.SshHost file in XL Deploy UI using Puppet?

The Infra team in my company has provided us with a sample overthere.SshHost under 'Infrastructure' in the XL Deploy UI that has a predefined private key file and passphrase, which are not shared with us.
We are asked to duplicate this file manually in the UI, rename it and create infra entries for our application.
How can I achieve this with Puppet?
Let's say the sample file is placed under: Infrastructure/Project1/COMMONS/Template_SshHost
and I need to create an overthere.SshHost under Infrastructure/Project1/UAT/Uat_SshHost and Infrastructure/Project1/PREPROD/Preprod_SshHost by copying the sample file.
Thanks in advance!
You can sync a target file with another file accessible via the local file system by using a File resource whose source attribute specifies the path to the original. You can produce a modified copy in a variety of ways, such as by applying one or more File_line resources (from stdlib) or by applying an appropriate script via an Exec resource.
But if you go that route then you have to either
accept that the target file will be re-synced on every Puppet run, OR
set the File resource's replace attribute to false, in which case changes to the original file will not be propagated into the customized copy.
The latter is probably the more acceptable choice for most people. Its file-copying part might look something like this:
    $project_dir = '/path/to/Infrastructure/Project1'

    file { "${project_dir}/UAT/Uat_SshHost/overthere.SshHost":
      ensure  => 'file',
      source  => "${project_dir}/COMMONS/Template_SshHost/overthere.SshHost",
      replace => false,
    }
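Since the question needs the same copy for both UAT and PREPROD, a small defined type avoids repeating the resource. A sketch, where the name project1::ssh_host_copy is hypothetical:

    # Hypothetical defined type wrapping the copy shown above.
    define project1::ssh_host_copy () {
      $project_dir = '/path/to/Infrastructure/Project1'
      file { "${project_dir}/${title}/overthere.SshHost":
        ensure  => 'file',
        source  => "${project_dir}/COMMONS/Template_SshHost/overthere.SshHost",
        replace => false,
      }
    }

    project1::ssh_host_copy { ['UAT/Uat_SshHost', 'PREPROD/Preprod_SshHost']: }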
But you might want to consider instead writing a custom type and provider for the target file. That would allow you to incorporate changes from the original template without re-syncing the file on every run, and it would give you a lot more flexibility with respect to the customizations you need to apply. It would also present a simpler interface for you to use in your manifests, which could make these files easier to manage. But, of course, that's offset by the cost of writing and maintaining a custom type and provider. Only you can determine whether that would be a worthwhile trade-off.

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a cucumber feature file 'A' that serves to set up the environment (data clean-up and initialization). I want it to be executed before all other feature files can run.
It's kind of like the @Before hook described at http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work here because my feature file 'A' contains hundreds of cucumber steps and it is not as simple as:
    @Before
    public void beforeScenario() {
        tomcat.start();
        tomcat.deploy("munger");
        browser = new FirefoxDriver();
    }
Instead, it's better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest thing I found is 'background'. But that would mean having one huge feature file with the content of 'A' as the 'background' at the top and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single thread in order by:
Alphabetically by feature file directory
Alphabetically by feature file name within directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically) with a file name that sorts first (alphabetically) in that directory.
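For example, a layout like this (directory and file names are hypothetical) makes the initialization feature sort, and therefore run, first:

    features/
        00-init/
            000-setup.feature    <-- runs before everything else
        orders/
            checkout.feature
            payment.feature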
That being said it is generally a bad practice to require an execution order in your feature files. We run our feature files in parallel so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file followed by a second build step that executes the rest of your feature files.
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, with a lot of very long data tables, like this:
    Given the system knows about the following employees
      | uuid | user-key   | name | nickname |
      | 1    | 0101140000 | Anna | annie    |
... hundreds of lines like this follow ...
We see this long 'system knows' scenario as quite valuable, so that our testers, Product Owner and developers have a baseline of what data are in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we were relying on the alphabetic naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a Background with the single step: Given I set test data for all feature files
In the step definition, have a factory create the test data, and make sure inside the factory method that it is only created once, e.g. testFactory.createTestData()
In this way, you have both the convenience of expressing reference setup as a scenario, that enhances team communication, but you also have a stable test setup.
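A minimal sketch of such an idempotent step definition in cucumber-jvm (class, method and factory names are hypothetical; older versions import cucumber.api.java.en.Given instead):

    import io.cucumber.java.en.Given;

    public class TestDataSteps {

        // Guard so the expensive setup runs only once per JVM,
        // no matter how many scenarios share this Background step.
        private static boolean initialized = false;

        @Given("I set test data for all feature files")
        public void iSetTestDataForAllFeatureFiles() {
            if (!initialized) {
                TestDataFactory.createTestData(); // hypothetical factory
                initialized = true;
            }
        }
    }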
Hope this is helpful!
Agata

Access test resources within Haskell tests

This is probably a basic question but I've been Googling for a while on it... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each of the input files and compare them to a corresponding output file to make sure they match. (I've read about "golden" but it doesn't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock, the source code is at https://github.com/haskell/haddock. They have the test files under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and they are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). Then the test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) uses some CPP macro (__FILE__) to get the current directory, and can then resolve the files relative to that folder.
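Another option is Cabal's data-files mechanism: list the resources in the .cabal file and resolve them at run time through the generated Paths_<pkg> module. A sketch, assuming the package is named myproject and the resource paths are hypothetical:

    -- In myproject.cabal:
    --   data-files: test/resources/input/case1.txt
    --               test/resources/golden/case1.txt

    import Paths_myproject (getDataFileName)

    main :: IO ()
    main = do
      inputPath  <- getDataFileName "test/resources/input/case1.txt"
      goldenPath <- getDataFileName "test/resources/golden/case1.txt"
      actual     <- readFile inputPath   -- stand-in for real processing
      expected   <- readFile goldenPath
      putStrLn (if actual == expected then "match" else "mismatch")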

save MATLAB code file along with results in one folder?

I'm processing a data set and running into a problem: although I xlswrite all the relevant output variables to a big Excel file that is timestamped, I don't save the code that actually generated that result. So if I try to recreate a certain set of results, I can't do it without relying on memory (which is obviously not a good plan). I'd like to know if there are commands that will help me save the m-files used to generate the output Excel file, as well as the Excel file itself, in a folder I can name and timestamp so I don't have to do this manually.
In my perfect world I would run the master code file that calls 4 or 5 other function m-files, then all those m-files would be saved along with the Excel output to a folder named results_YYYYMMDDTIME. Does this functionality exist? I can't seem to find it.
There's no such functionality built in.
You could build a dependency tree of your main function by using depfun with mfilename.
depfun(mfilename()) will return a list of all functions/m-files that are called by the currently executing m-file.
This will include all files that come as MATLAB builtins; you might want to remove those (and only record the MATLAB version in your Excel sheet).
As pseudocode:
% get all files:
dependencies = depfun(mfilename());
for all dependencies:
if not a matlab-builtin:
copyfile(dependency, your_folder)
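The surrounding bookkeeping the question asks about might look like this; the results_ naming, the output file name, and the output_data variable are assumptions:

    % Create a timestamped results folder, e.g. results_20141231T093045:
    your_folder = ['results_' datestr(now, 'yyyymmddTHHMMSS')];
    mkdir(your_folder);

    % Write the Excel output into the same folder as the copied m-files:
    xlswrite(fullfile(your_folder, 'output.xls'), output_data);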
As a "long term" solution you might want to check whether a version control system like Subversion or Mercurial (or one of many others) would be applicable in your case.
In larger projects this is the preferred way to record the version of the source code used to produce a certain result.

Artefact folder structure does not contain empty directories

I'm trying to store the whole output of my build, and this includes some empty folders. These aren't included by the artefact mechanism in TeamCity.
What doesn't work:
    OAR\=> OAR.zip
    OAR->OAR.zip
    OAR
Inside OAR I have a folder structure that needs to be stored. I know I could put a placeholder file in each folder, but that is not the answer I'm after. Otherwise I'll have to zip it myself?
Unfortunately TeamCity, by design, searches for files and uploads them as artifacts which means that empty folders are never included. Given the open and very old issue in the TeamCity tracker I doubt they are going to fix it any time soon.
I would recommend zipping the folder yourself, that is the approach we have taken. How you implement that depends on the build technology you are using. For example, if you are building using Nant you could add the zip task to your build, there are similar options for MSBuild and Ant.
If you don't want to rely on the build performing the zip I would recommend installing 7zip on your build agents and using the command line to perform the zip. Just remember, if you want 7zip to include empty directories, use * as the wildcard rather than *.*, like so:
    7z a -r OAR.zip *
Technically you could use powershell to do the zipping, which would be better than having to install something on your agents. I haven't tried this option myself.
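If you do explore the PowerShell route, one option is the .NET ZipFile class, which does preserve empty directories; a sketch (untested, as noted above, and the paths are assumptions):

    # Zip the OAR build output, empty folders included.
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::CreateFromDirectory("$pwd\OAR", "$pwd\OAR.zip")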
Apologies for not linking all my references above. Apparently, and understandably so, I need at least 10 reputation to post more than 2 links.
