I have a very simple Groovy script that works through a JSON file and performs some actions. Since there are no dependencies between the JSON records and actions, I was hoping I could speed up the execution a bit. Given this code:
def recordJSON = new JsonSlurper().parseText(myFile.text)
recordJSON.each {
    // do stuff here
}
Is there a way to thread the execution or perform the actions in parallel? I've done a bit of reading on the topic, but I'm a casual coder and the examples seem to be a bit over my head.
The easiest way is to use GPars, which is bundled with the Groovy distribution:
import static groovyx.gpars.GParsPool.withPool
withPool {
    recordJSON.eachParallel {
        // do stuff here
    }
}
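If you also need the result of the per-record work back, GParsPool offers collectParallel. A minimal sketch, reusing the myFile variable from the question and assuming each record maps to some value:
import static groovyx.gpars.GParsPool.withPool
import groovy.json.JsonSlurper

def recordJSON = new JsonSlurper().parseText(myFile.text)
def results
withPool {
    results = recordJSON.collectParallel { record ->
        // do stuff here and return a value for this record
        record.toString()
    }
}
println results.size()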
If you are more comfortable with Java threading constructs, this might be of interest (Groovy offers all of the usual Java goodies, with less boilerplate):
import java.util.concurrent.*
import groovy.json.*

class MyTask implements Runnable {
    File file

    void run() {
        def recordJSON = new JsonSlurper().parseText(file.text)
        println "do stuff here ${file.name}"
    }
}

// ------ main
def pool = Executors.newCachedThreadPool()
(1..6).each {
    def f = new File("f${it}.json")
    pool.submit(new MyTask(file: f))   // file: f uses Groovy's map constructor
}
pool.shutdown()                        // let the pool wind down once all tasks finish
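If you want to wait for results instead of firing and forgetting, you can submit Callables and collect the Futures. A minimal sketch under the same assumptions (six files named f1.json to f6.json):
import java.util.concurrent.*
import groovy.json.*

def pool = Executors.newFixedThreadPool(4)
def futures = (1..6).collect { n ->
    pool.submit({ new JsonSlurper().parseText(new File("f${n}.json").text) } as Callable)
}
def results = futures*.get()   // blocks until every task has finished
pool.shutdown()
println results.size()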
I'm going through this example but something about it is very confusing to me: https://www.testcookbook.com/book/groovy/jenkins/intro-testing-job-dsl.html
In this test, how/what is executing getJobFiles()? I don't see it being called anywhere. Is there some magic with jobFiles? Is specifying jobFiles somehow calling getJobFiles?
import javaposse.jobdsl.dsl.DslScriptLoader
import javaposse.jobdsl.plugin.JenkinsJobManagement
import org.junit.ClassRule
import org.jvnet.hudson.test.JenkinsRule
import spock.lang.Shared
import spock.lang.Specification
import spock.lang.Unroll
class JobScriptsSpec extends Specification {

    @Shared
    @ClassRule
    JenkinsRule jenkinsRule = new JenkinsRule()

    @Unroll
    def 'test script #file.name'(File file) {
        given:
        def jobManagement = new JenkinsJobManagement(System.out, [:], new File('.'))

        when:
        new DslScriptLoader(jobManagement).runScript(file.text)

        then:
        noExceptionThrown()

        where:
        file << jobFiles
    }

    static List<File> getJobFiles() {
        List<File> files = []
        new File('jobs').eachFileRecurse {
            if (it.name.endsWith('.groovy')) {
                files << it
            }
        }
        files
    }
}
Edit
It seems like jobFiles does call getJobFiles(), but I don't understand how. Is this a Groovy or Spock feature? I've been trying to research this but can't find anything explaining it in detail.
This is standard Groovy functionality. You can abbreviate any getter call like this:
def file = new File("sarek-test-parent/sarek-test-common/src/main/java")
println file.name // getName()
println file.parent // getParent()
println file.absolutePath // getAbsolutePath()
println file.directory // isDirectory()
The output is:
java
sarek-test-parent\sarek-test-common\src\main
C:\Users\alexa\Documents\java-src\Sarek\sarek-test-parent\sarek-test-common\src\main\java
true
The same works for setters:
new Person().name = "John" // setName("John")
new Person().zipCode = "12345" // setZipCode("12345")
Actually, the second link provided by jaco0646 explains it; it is just that mixing this simple fact up with data providers clouds the explanation.
Edit
When Groovy determines that jobFiles does not refer to any existing variable, it treats the name as a Groovy property, which allows it to take advantage of the shortcut notation for accessing properties.
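To make the connection back to the Spock example explicit, here is a minimal sketch (the class and getter are made up for illustration) of the same shortcut applied to a static getter, which is exactly what file << jobFiles relies on:
class JobSource {                                // hypothetical class, just for illustration
    static List<String> getJobNames() { ['seed', 'deploy'] }
}

assert JobSource.jobNames == ['seed', 'deploy']  // property access calls getJobNames()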
Inside a Spock test we want to create a resource and make sure it is disposed of correctly regardless of the test outcome.
We tried the approach below, but Spock does not execute the test when the test code is wrapped inside a closure.
import spock.lang.Specification

class ExampleSpec extends Specification {

    def wrapperFunction(Closure cl) {
        try {
            cl()
        } finally {
            // do custom stuff
        }
    }

    def "test wrapped in closure"() {
        wrapperFunction {
            expect:
            1 == 1
            println "will not execute!"
        }
    }
}
What is the best approach to creating and disposing of a resource inside a Spock test?
setup() and cleanup() are not viable solutions, since creating and disposing should be possible at arbitrary points inside the feature method.
You can use setup: and cleanup: blocks inside the test case (feature method) like this:
import spock.lang.Specification

class ReleaseResourcesSpec extends Specification {

    void 'Resources are released'() {
        setup:
        def stream = new FileInputStream('/etc/hosts')

        when:
        throw new IllegalStateException('test')

        then:
        true

        cleanup:
        stream.close()
        println 'stream was closed'
    }
}
Code in the cleanup: block is always executed, even if the test fails or an exception is thrown. In the example above, the feature fails with the IllegalStateException, but 'stream was closed' is still printed.
So it is similar to the setup() and cleanup() methods, but this way you can have different setup and cleanup code for each feature method.
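If the resource really does have to be acquired and released at an arbitrary point inside the feature method, a plain try/finally within a block still works. A minimal sketch (the file path is only an example):
import spock.lang.Specification

class ArbitraryPointSpec extends Specification {

    def 'resource is released mid-test'() {
        when:
        def available
        def stream = new FileInputStream('/etc/hosts')
        try {
            available = stream.available()   // use the resource at an arbitrary point
        } finally {
            stream.close()                   // released no matter what happened above
        }

        then:
        available >= 0
    }
}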
I am trying to create multiple threads in a Jenkins pipeline script, so I took a simple example as below, but it is not working. Could you please let me know what is wrong?
In the example below, jobMap has String keys and List<String> values. When I just display the list, the values are printed properly, but when I use three different ways to create threads to do the same, it does not work.
for (item in jobMap) {
    def jobList = jobMap.get(item.key)

    // The following loop prints the values
    for (jobb in jobList) {
        echo "${jobb}"
    }

    // Thread implementation 1:
    Thread.start {
        for (jobb in jobList) {
            echo "${jobb}"
        }
    }

    // Thread implementation 2:
    def t = new Thread({ echo 'hello' } as Runnable)
    t.start()
    t.join()

    // Thread implementation 3:
    t1 = new Thread(new TestMultiThreadSleep(jobList))
    t1.start()
}
class TestMultiThreadSleep implements Runnable {
    List jobs

    public TestMultiThreadSleep(List jobs) {
        this.jobs = jobs
    }

    @Override
    public void run() {
        echo "coming here"
        for (jobb in jobs) {
            echo "${jobb}"
        }
    }
}
Jenkins has a special step for this: parallel(). In this step you can build other jobs or call Pipeline code.
It's best to think of Pipeline code as a dialect or subset of Groovy.
The CPS transformer ("continuation-passing style") in the workflow script engine turns the Groovy code into something that can be interpreted in a serialized form, passed between different JVMs, etc.
You can probably imagine that this will not work at all well with threads.
If you need threads, you'll have to work within a @NonCPS annotated class or function. This class or function must not call any CPS-transformed Groovy code, so it can't invoke closures, access the workflow script context, etc.
That's why using the parallel step is preferable.
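For reference, a minimal sketch of the parallel step in a scripted pipeline (the branch names and echo bodies are only illustrative):
node {
    def jobList = ['job-a', 'job-b', 'job-c']
    def branches = [:]
    for (name in jobList) {
        def jobName = name                 // capture the loop variable for the closure
        branches[jobName] = {
            echo "processing ${jobName}"   // ordinary pipeline steps, no Thread needed
        }
    }
    parallel branches                      // runs all branches concurrently
}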
I am learning Scala multi-threaded programming and wrote a simple program by following a tutorial:
object ThreadSleep extends App {
  def thread(body: => Unit): Thread = {
    val t = new Thread {
      override def run() = body
    }
    t.start()
    t
  }

  val t = thread { println("New Thread") }
  t.join
}
I can't understand why {} is used in the new Thread {} statement. I think it should be new Thread or new Thread(). How can I understand this syntax?
This question is not a complete duplicate of the other one, because the point of my question is the syntax of "new {}".
This is a shortcut for
new Thread() { ... }
This is called an anonymous class, and it works just like in Java:
Here you are creating a new thread with an overridden run method. This is useful because you don't have to create a dedicated class if you only use it once.
As far as I know, you can override, add, or redefine any method or attribute you want.
See here for more details: https://docs.oracle.com/javase/tutorial/java/javaOO/anonymousclasses.html
By writing new Thread {} you're creating an anonymous subclass of Thread in which you override the run method. Normally, you'd prefer to implement Runnable and create a thread with it instead of subclassing Thread:
val r = new Runnable { override def run(): Unit = body }
new Thread(r).start()
This is usually semantically more correct, since you'd want to subclass Thread only if you were specializing the Thread class itself, for example with an AbortableThread. If you just want to run a task on a thread, the Runnable approach is more appropriate.
There is an application where users can provide custom Groovy scripts. They can write their own functions in those scripts. I want to restrict people from using the 'synchronized' keyword, as well as some other keywords, in these scripts. For example, it should not be possible to write a function like the one below.
public synchronized void test() {
}
I am creating a CompilerConfiguration and using the SecureASTCustomizer. However, adding org.codehaus.groovy.syntax.Types.KEYWORD_SYNCHRONIZED to the list of blacklisted tokens doesn't seem to do the job. (If I add org.codehaus.groovy.syntax.Types.PLUS, it prevents the usage of '+' within scripts, but the same approach doesn't work for synchronized.)
Any ideas on how to achieve this?
You can do something like this:
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.syntax.SyntaxException
import org.codehaus.groovy.ast.ClassNode
import org.codehaus.groovy.control.SourceUnit
import org.codehaus.groovy.classgen.GeneratorContext
class SynchronizedRemover extends org.codehaus.groovy.control.customizers.CompilationCustomizer {

    SynchronizedRemover() {
        super(org.codehaus.groovy.control.CompilePhase.CONVERSION)
    }

    void call(final SourceUnit source, final GeneratorContext context, final ClassNode classNode) {
        classNode.methods.each { mn ->
            if (mn.modifiers & 0x0020) { // 0x0020 is ACC_SYNCHRONIZED
                source.addError(new SyntaxException("Synchronized is not allowed", mn.lineNumber, mn.columnNumber))
            }
        }
    }
}
def config = new CompilerConfiguration()
config.addCompilationCustomizers(new SynchronizedRemover())
def shell = new GroovyShell(config)
shell.evaluate '''
class Foo { public synchronized void foo() { println 'bar' } }
'''
The idea is to create a compilation customizer that checks the generated classes and, for each method, adds an error if the synchronized modifier is present. For synchronized blocks inside methods, you can probably use the SecureASTCustomizer with a custom statement checker.
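As a sketch of that second suggestion (assuming the SecureASTCustomizer statement-checker API, and combining it with the SynchronizedRemover above), synchronized blocks inside method bodies can be vetoed like this:
import org.codehaus.groovy.ast.stmt.SynchronizedStatement
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.SecureASTCustomizer

def secure = new SecureASTCustomizer()
secure.addStatementCheckers({ statement ->
    !(statement instanceof SynchronizedStatement)   // reject synchronized(...) { ... } blocks
} as SecureASTCustomizer.StatementChecker)

def secureConfig = new CompilerConfiguration()
secureConfig.addCompilationCustomizers(secure, new SynchronizedRemover())
new GroovyShell(secureConfig).evaluate '''
    class Foo { void foo() { synchronized(this) { println 'bar' } } }
'''   // compilation fails because of the statement checker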