Jenkins - Dynamic Choice Parameter - Removing File extension from list - groovy

I am having a bit of trouble getting my Groovy code to work properly in Jenkins using the Dynamic Choice Parameter. We currently have a folder that contains a lot of properties files for various environments. The following Groovy code returns a list of all the file names correctly, but it includes the file extension, which is unneeded.
Arrays.asList(new File("path").list())
How would I change that to list only .xml files and to drop the file extension from the list? I've found some examples of this while searching, but for some reason when I try them the list isn't populated.

You mean like:
new File( 'path' ).list()
    .findAll { it.endsWith( '.xml' ) }
    .collect { it[ 0..-5 ] }
That gets the list of file names (as Strings), keeps those that end with .xml, then removes the .xml from the end.

Related

One .po file for each .py script

In my package, I would like to use one .po file for each .py script it contains.
Here is my file tree:
foo/
    mainscript.py
    commands/
        commandOne.py
    locales/fr/LC_MESSAGES/
        mainscript_fr.po
        commandOne_fr.po
In mainscript.py, I have the following lines to apply gettext to the strings:
if "fr" in os.environ['LANG']:
traduction = gettext.translation('mainscript_fr', localedir='./locales', languages=['fr'])
traduction.install()
else:
gettext.install('')
Until now, it has worked as expected. But now I would like to add another .po file to translate the strings in commandOne.py.
I tried the following code:
if "fr" in os.environ['LANG']:
traduction = gettext.translation('commandOne_fr', localedir='../locales', languages=['fr'])
traduction.install()
else:
gettext.install('')
But I get a "FileNotFoundError: [Errno 2] No translation file found for domain: 'commandOne_fr' "
How can I use multiple file like that ? The package being a cli, there is many strings in a single file because of the help man and verbose mode...etc and this is not acceptable to have a single .po file with hundreds of strings.
Note : The mainscript.py calls a function from commandOne.py, which is itself inherited from an abstract class that contains other strings to translate... so I hope if any solution exists that it will also be applicable to the abstract class file.
Thank you
Translations are retrieved from .mo files, not .po files; see https://docs.python.org/3/library/gettext.html#gettext.translation. Most probably you have to compile commandOne_fr.po into commandOne_fr.mo with the msgfmt program.
Two more hints:
What you are doing looks like a premature optimization. You won't have any performance problem until the number of translations gets really big. Rather wait for that to happen.
Why the _fr in the name of the translation files? The language code fr is already a path component.
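For illustration, here is a minimal sketch of how both catalogs could be loaded from a single script once the .po files have been compiled to .mo files; the msgfmt commands and the add_fallback chaining are assumptions based on the layout above, and it assumes the script is run from the directory that contains locales/:
# Compile the catalogs first, e.g.:
#   msgfmt locales/fr/LC_MESSAGES/mainscript_fr.po -o locales/fr/LC_MESSAGES/mainscript_fr.mo
#   msgfmt locales/fr/LC_MESSAGES/commandOne_fr.po -o locales/fr/LC_MESSAGES/commandOne_fr.mo
import gettext
import os

if "fr" in os.environ.get('LANG', ''):
    # One translation object per domain (i.e. per .mo file)
    main_trans = gettext.translation('mainscript_fr', localedir='./locales', languages=['fr'])
    cmd_trans = gettext.translation('commandOne_fr', localedir='./locales', languages=['fr'])
    # Chain the catalogs so the installed _() can resolve strings from both
    main_trans.add_fallback(cmd_trans)
    main_trans.install()
else:
    gettext.install('')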

How to read multiple CSV (leaving out specific ones) from a nested directory in PySpark?

Let's say I have a directory called 'all_data', and inside it I have several other directories based on the date of the data they contain. These directories are named date_2020_11_01 to date_2020_11_30, and each one contains CSV files which I intend to read into a single dataframe.
But I don't want to read the data for date_2020_11_15 and date_2020_11_16. How do I do it?
I'm not sure how to exclude specific directories, but you can specify a range of names using brackets. The pattern below would select every dated directory except 11_15 and 11_16:
spark.read.csv("all_data/date_2020_11_{1[0-4],1[7-9],[0,2-3][0-9]}/*.csv")
df = spark.read.format("csv").option("header", "true").load(paths)
where paths is a list of all the paths where data is present; this worked for me.
A simple method is to read the whole data directory as it is and apply a filter condition:
df.filter("dataColumn NOT IN ('date_2020_11_15', 'date_2020_11_16')")
Alternatively, you can use the os module to read the directory, iterate over the resulting list, and skip those date directories with a condition, as sketched below.
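For illustration, a minimal sketch of that os-module approach, assuming an existing SparkSession named spark and that the dated directories sit directly under all_data (the variable names are made up):
import os

base_dir = "all_data"
excluded = {"date_2020_11_15", "date_2020_11_16"}

# Collect every dated sub-directory except the two excluded dates
paths = [
    os.path.join(base_dir, d)
    for d in sorted(os.listdir(base_dir))
    if d.startswith("date_2020_11_") and d not in excluded
]

# Read all remaining directories into a single DataFrame
df = spark.read.option("header", "true").csv(paths)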

Why can't Groovy find my file?

I wrote a script in Groovy to find Java test files with certain names recursively in a given directory; the relevant part of the code is:
def projectRootDirectory = args.length ? new File(args[0]) : new File(System.getProperty("user.dir"))
def srcFilesCount = 0, testFilesCount = 0, srcLinesCount=0, testLinesCount=0
def srcFileSubstringPattern = '.java'
def testFileSubstringPattern = 'Test.java'
projectRootDirectory.eachDirRecurse() { dir ->
    dir.eachFile {
        if (it.name.endsWith(testFileSubstringPattern) || it.name ==~ /Test.*java/ ||
                it.name.endsWith('Tests.java') || it.name.endsWith('TestCase.java')) {
            //println "Test file found: " + it.name
            testFilesCount++
            it.eachLine { testLinesCount++ }
        } else if (it.name.contains(srcFileSubstringPattern)) {
            srcFilesCount++
            it.eachLine { srcLinesCount++ }
        }
    }
}
It finds already existing files in the repo (which was cloned using SVN) that match, for example, someTestCase.java, but when I create new ones, either with touch dummyTestCase.java via Cygwin on Windows 7 or via the Windows 7 Explorer right click -> New -> Text Document option followed by renaming the file to something like TestDummy.java, it doesn't find them. The script also treats copies of the respective files the same way, i.e. it finds copies of old files that already existed but not the new ones I create. I even opened up file permissions to the fullest on the newly created files, but no change, whereas the BASH find command via Cygwin always finds all the files without any issue. I have confirmed using diagnostic print statements that the script is looking in the correct directory. I even confirmed this by having the script create some files there and verified they were created in the correct place.
Wow, the answer turned out to be amazingly simple. I replaced eachDirRecurse with eachFileRecurse, thus also eliminating the nested loop. Thanks a ton to all the comment authors whose help led me to this discovery.

How do I append array items to a string over a loop in puppet

Let's say I have an array of directory names:
$dirs = ['opt', 'apps', 'apache']
I want to iterate over it and generate the following list of paths:
/opt
/opt/apps
/opt/apps/apache
from which I can create file resources.
Is there a reason you want to iterate through those files like that?
Because the simplest way to turn those into file resources would be this:
$dirs = ['/opt', '/opt/apps', '/opt/apps/apache']
file { $dirs:
  ensure => directory,
}
If you just want to make sure that all the preceding directories are created, there is also the dirtree module, which will do all of this for you:
https://forge.puppet.com/pltraining/dirtree
$apache_dir = dirtree('/opt/apps/apache')
# Will return: ['/opt', '/opt/apps', '/opt/apps/apache']
You can then use that variable to create the directories.
As Matt mentions, you can also use map or an iterator to create the resources.
Basic example here:
$dirs = ['/opt', '/opt/apps', '/opt/apps/apache']
$dirs.each |String $path| {
  file { $path:
    ensure => directory,
  }
}
Documented here: https://docs.puppet.com/puppet/latest/lang_iteration.html
There are a few different ways to do what you want in the code; it depends on how much management you want to do of those resources after creation.

Relative path for JMeter XML Schema?

I'm using JMeter 2.6, and have the following setup for my test:
-
|-test.jmx
|-myschema.xsd
I've set up an XML Schema Assertion, and typed "myschema.xsd" in the File Name field. Unfortunately, this doesn't work:
HTTP Request
Output schema : error: line=1 col=114 schema_reference.4:
Failed to read schema document 'myschema.xsd', because
1) could not find the document;
2) the document could not be read;
3) the root element of the document is not <xsd:schema>.
I've tried adding several things to the path, including ${__P(user.dir)} (points to the home dir of the user) and ${__BeanShell(pwd())} (doesn't return anything). I got it working by giving the absolute path, but the script is supposed to be used by others, so that's no good.
I could make it use a property value defined in the command line, but I'd like to avoid it as well, for the same reason.
How can I correctly point the Assertion to the schema under these circumstances?
Looks like in this situation you have to do one of the following:
validate your XML against the XSD manually: simply use the corresponding Java code in e.g. a BeanShell Assertion or BeanShell PostProcessor;
here is a pretty nice solution: https://stackoverflow.com/a/16054/993246 (you can use any other one you like for this);
dig into JMeter's sources and amend the XML Schema file handling to support variables in the path (File Name field), like CSV Data Set Config does;
but the previous way seems to be much easier;
or run your JMeter test scenario from a shell script or Ant task which first copies your XSD to JMeter's /bin dir before script execution; that way the XML Schema Assertion can be used "as is".
Perhaps if you find any other/better way, please share it.
Hope this helps.
Summary: in the end I've used http://path.to.schema/myschema.xsd as the File Name parameter in the Assertion.
Explanation: following Alies Belik's advice, I've found that the code for setting up the schema looks something like this:
DocumentBuilderFactory parserFactory = DocumentBuilderFactory.newInstance();
...
parserFactory.setAttribute("http://java.sun.com/xml/jaxp/properties/schemaSource", xsdFileName);
where xsdFileName is a string (the attribute string is actually a constant, I inlined it for readability).
According to e.g. this page, the attribute, when in the form of a String, is interpreted as a URI, which includes HTTP URLs. Since I already have the schema accessible through HTTP, I've opted for this solution.
Add 'myschema.xsd' to the \bin directory of your apache-jmeter installation, next to 'ApacheJMeter.jar', or set the 'File Name' in the 'XML Schema Assertion' to the path of your 'myschema.xsd' relative to that starting point.
E.g.
JMeter: C:\Users\username\programs\apache-jmeter-2.13\bin\ApacheJMeter.jar
Schema: C:\Users\username\workspace\yourTest\schema\myschema.xsd
File Name: ..\..\..\workspace\yourTest\schema\myschema.xsd
