GPars, name the threads - multithreading

List<byte[]> outputBytes = []
GParsPool.withPool {
    outputBytes = events.collectParallel {
        depositNoticeService.getDepositNotice( it.id )
    }
}
I want to name each thread so I can identify it in my logging, but I don't see any documentation on how to do this.

First get the current thread; then you can read or change its name:
Thread.currentThread().getName()
Thread.currentThread().setName("NEW NAME")
I'm using Executors, and here's an example that works for me:
import java.util.concurrent.*

def threadExecutor = Executors.newSingleThreadExecutor()
threadExecutor.execute {
    log.info(Thread.currentThread().getName())
    // setName() returns void, so set the name first, then log it
    Thread.currentThread().setName("NEW NAME")
    log.info(Thread.currentThread().getName())
}
threadExecutor.shutdown()
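If you manage the pool yourself, another option is to control thread names up front with a custom ThreadFactory rather than renaming threads after the fact; every thread the pool creates then carries a predictable, loggable name. A minimal Java sketch of that idea (the pool size and the `deposit-worker` prefix are arbitrary choices, not anything from the question):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreads {
    public static void main(String[] args) throws Exception {
        AtomicInteger counter = new AtomicInteger();
        // Name each thread as it is created, so log lines are identifiable
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable);
            t.setName("deposit-worker-" + counter.incrementAndGet());
            return t;
        };
        ExecutorService pool = Executors.newFixedThreadPool(2, factory);
        pool.submit(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

GParsPool.withPool can also be handed a pre-built pool, so a factory like this is one route to named GPars worker threads as well.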


Jira - Epic validator for certain link types

I wrote a Groovy script for a Jira Epic workflow that allows the Epic to be closed only if all of its child issues are closed.
The script works great, and now I want to make it apply only to a specific linked issue type: "Issues in epic".
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.issue.link.IssueLinkManager
import com.opensymphony.workflow.InvalidInputException
// Allow logging for debug and tracking purposes
import org.apache.log4j.Level
import org.apache.log4j.Logger

// Script code for easy log identification
String scriptCode = "Check all issues in Epics are Done -"

// Set up the log and leave a message to show what we're doing
Logger logger = log
logger.setLevel( Level.ERROR )
logger.debug( "$scriptCode Triggered by $issue.key" )

def passesCondition = true
if (issue.issueType.name == 'Epic') {
    IssueLinkManager issueLinkManager = ComponentAccessor.issueLinkManager
    def found = issueLinkManager.getOutwardLinks(issue.id).any {
        it?.destinationObject?.getStatus().getName() != 'Done' &&
        it?.destinationObject?.getIssueType().getName() != 'Epic'
    }
    logger.debug( "$scriptCode Found = $found" )
    if (found) {
        logger.debug( "$scriptCode return false" )
        passesCondition = false
        invalidInputException = new InvalidInputException("Please make sure all linked issues are in 'Done' status")
    } else {
        logger.debug( "$scriptCode return true" )
        passesCondition = true
    }
} else {
    // Always allow all other issue types to execute this transition
    logger.debug( "$scriptCode Not Epic return true" )
    passesCondition = true
}
The code above works for all kinds of linked issues.
Does anyone know how to make it work only for a specific link type?
Thanks.
You can use
it?.issueLinkType
inside the Closure.
Then you can use
it?.issueLinkType.inward
and
it?.issueLinkType.outward
to get the inward/outward name of the link type.

Scala Future concurrency Issue

Following is my class where I run tasks concurrently. My problem is that my application never ends, even after getting results for all the futures. I suspect the thread pool is not shutting down, which keeps my application alive after my tasks finish. Believe me, I googled a lot to figure this out, but no luck. What am I missing here?
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.collection.mutable.ListBuffer
import scala.util.Failure
import scala.util.Success

object AppLauncher {
  def launchAll(): ListBuffer[Future[String]] = {
    // My code logic where I launch all my threads, say 50
    null
  }

  def main(args: Array[String]): Unit = {
    register(launchAll())
  }

  def register(futureList: ListBuffer[Future[String]]): Unit = {
    futureList.foreach { future =>
      future.onComplete {
        case Success(successResult) => println(successResult)
        case Failure(failureResult) => println(failureResult)
      }
    }
  }
}
Usually, when you operate on an iterable of Futures you should use Future.sequence which changes say, a Seq[Future[T]] to a Future[Seq[T]].
So, use something like:
def register(futureList: Seq[Future[String]]) = Future.sequence(futureList) foreach { results =>
println("received result")
}
If you'd like to map each future and print its output as it completes, you can also do something along the lines of:
def register(futureList: Seq[Future[String]]) = Future.sequence(
  futureList.map(f => f.map { v =>
    println(s"$v is complete")
    v
  })
) map { vs =>
  println(s"all values done $vs")
  vs
}
Finally, I was able to figure out the issue. It was indeed caused by the thread pool not terminating even after my futures completed successfully. I isolated the issue by changing my implementation slightly, as below.
//import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future}
import scala.collection.mutable.ListBuffer
import scala.util.Failure
import scala.util.Success
// Added ExecutionContext explicitly
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

object AppLauncher {
  // Implemented EC explicitly
  private val pool = Executors.newFixedThreadPool(1000)
  private implicit val executionContext = ExecutionContext.fromExecutorService(pool)

  def launchAll(): ListBuffer[Future[String]] = {
    // My code logic where I launch all my threads, say 50
    null
  }

  def main(args: Array[String]): Unit = {
    register(launchAll())
  }

  def register(futureList: ListBuffer[Future[String]]): Unit = {
    futureList.foreach { future =>
      println("Waiting...")
      val result = Await.result(future, scala.concurrent.duration.Duration.Inf)
      println(result)
    }
    pool.shutdownNow()
    executionContext.shutdownNow()
    println(pool.isTerminated() + " Pool terminated")
    println(pool.isShutdown() + " Pool shutdown")
    println(executionContext.isTerminated() + " executionContext terminated")
    println(executionContext.isShutdown() + " executionContext shutdown")
  }
}
Result before adding highlighted code to shutdown pools
false Pool terminated
true Pool shutdown
false executionContext terminated
true executionContext shutdown
Adding the highlighted code solved my issue, and I ensured there was no resource leak in my code. My scenario permits me to kill the pool once all the futures are done. I'm aware that I changed an elegant callback implementation into a blocking one, but it still solved my problem.
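The underlying cause generalizes beyond Scala: an ExecutorService backed by non-daemon worker threads keeps the JVM alive until the pool is shut down. A small Java sketch of the fix (the pool size is arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        pool.submit(() -> System.out.println("task done"));

        // Without these two calls, the pool's non-daemon worker threads
        // keep the JVM running even though main() has finished
        pool.shutdown();                             // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS);  // wait for queued tasks to finish

        System.out.println(pool.isShutdown() + " / " + pool.isTerminated());
    }
}
```

An alternative, when the tasks are safe to abandon, is to build the pool from a ThreadFactory that marks its threads as daemons, so the JVM can exit without an explicit shutdown.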

How to run a Data Import Script in a Separate Thread in Grails/Groovy?

I import data from Excel, using BootStrap.groovy, so my import script method is called when the application starts.
The scenario is that I have 8000 related records to import at once, if they are not already in my database. Also, when I deploy to Tomcat 6, the import blocks other apps from deploying until it finishes. So I want to run the script in a separate thread, without affecting performance and without blocking other deployments.
code excerpt ...
class BootStrap {
    def grailsApplication
    def sessionFactory
    def excelService

    def importStateLgaArea() {
        String fileName = grailsApplication.mainContext.servletContext.getRealPath('filename.xlsx')
        ExcelImporter importer = new ExcelImporter(fileName)
        def listState = importer.getStateLgaTerritoryList() // get the map from Excel
        log.info "List from excel: ${listState}"
        def checkPreviousImport = Area.findByName('Osusu')
        if (!checkPreviousImport) {
            int i = 0
            int j = 0 // update cases
            def beforeTime = System.currentTimeMillis()
            for (row in listState) {
                def state = State.findByName(row['state'])
                if (!state) {
                    // log.info "Saving State: ${row['state']}"
                    row['state'] = row['state'].toString().toLowerCase().capitalize()
                    // log.info "after capitalized " + row['state']
                    state = new State(name: row['state'])
                    if (!state.save(flush: true)) {
                        log.info "${state.errors}"
                        break
                    }
                }
            }
        }
    }
}
For importing large amounts of data, I suggest taking Spring Batch into consideration. It is easy to integrate into Grails: you can try this plugin or integrate it manually.
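Short of adopting Spring Batch, the simpler idea of "kick the import off in the background so startup isn't blocked" can be sketched as follows. This is a generic Java illustration, not Grails code: `runImport` is a hypothetical stand-in for the Excel import loop, and the thread name is made up:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundImport {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService importer = Executors.newSingleThreadExecutor(r -> {
            Thread t = new Thread(r, "data-import");
            t.setDaemon(true); // don't keep the server alive just for the import
            return t;
        });

        // Startup (the BootStrap.init equivalent) returns immediately;
        // the import runs alongside it on its own thread
        importer.submit(BackgroundImport::runImport);
        System.out.println("startup finished, import running in background");

        importer.shutdown();
        importer.awaitTermination(5, TimeUnit.SECONDS); // demo only: let the import finish before the JVM exits
    }

    static void runImport() {
        // stand-in for the Excel import loop
        System.out.println("importing on " + Thread.currentThread().getName());
    }
}
```

In a Grails BootStrap the same shape works with a plain `Thread.start { ... }`, though any GORM work on that thread needs its own session.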

SoapUI shared datasource in Groovy script

I have prepared a test case in SoapUI Open Source which loops over values in a csv file and sends a request for each set of values (handled by a Groovy script). I want to modify it so that each thread, on each new iteration, uses the value from the next row of the csv file.
import com.eviware.soapui.impl.wsdl.teststeps.*

def testDataSet = []
def fileName = "C:\\sSHhrTqA5OH55qy.csv"
new File(fileName).eachLine { line -> testDataSet.add(line.split(",")) }

def myProps = testRunner.testCase.getTestStepByName("Properties")
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def testCase = testRunner.testCase
def testStep = testCase.getTestStepByName("TestRequest")
testRunner = new com.eviware.soapui.impl.wsdl.testcase.WsdlTestCaseRunner(testCase, null)
testStepContext = new com.eviware.soapui.impl.wsdl.testcase.WsdlTestRunContext(testStep)

while (true) {
    for (i in testDataSet) {
        myProps.setPropertyValue("parameter0", i[0])
        myProps.setPropertyValue("username", i[1])
        myProps.setPropertyValue("parameter1", i[2])
        myProps.setPropertyValue("password", i[3])
        testStep.getTestRequest().setUsername(myProps.getPropertyValue("username"))
        testStep.getTestRequest().setPassword(myProps.getPropertyValue("password"))
        testStep.run(testRunner, testStepContext)
    }
}
I want to modify this script so that each thread from the pool gets a unique (the next) unused value from the data source.
I tried to use newFixedThreadPool from java.util.concurrent as suggested here (Concurrency with Groovy), but I can't get it to work - either requests are duplicated or SoapUI crashes (I am new to concurrency).
Can you please help me get it right?
I think this would work for you:
while (true) {
    for (i in testDataSet) {
        def th = Thread.start {
            myProps.setPropertyValue("parameter0", i[0])
            myProps.setPropertyValue("username", i[1])
            myProps.setPropertyValue("parameter1", i[2])
            myProps.setPropertyValue("password", i[3])
            testStep.getTestRequest().setUsername(myProps.getPropertyValue("username"))
            testStep.getTestRequest().setPassword(myProps.getPropertyValue("password"))
            testStep.run(testRunner, testStepContext)
        }
        th.join()
    }
}
So a new thread is created on each loop iteration.
If you want to verify it's working, you could place log statements in the code...
log.info("Thread Id: " + Thread.currentThread().getId() as String)
I don't see your point. SoapUI already gives you a DataSource test step that accepts a csv file as input.
Once you have all those values, you can transfer the properties and run the test.
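Setting the SoapUI specifics aside, the "each thread gets the next unused row" requirement is the classic work-queue pattern: put the rows in a thread-safe queue and let pool workers poll it, so no row is handed out twice. A generic Java sketch (the row values are made up):

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RowQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Each CSV row goes into the queue exactly once
        Queue<String> rows = new ConcurrentLinkedQueue<>(
                List.of("row1", "row2", "row3", "row4"));

        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int w = 0; w < 2; w++) {
            pool.submit(() -> {
                String row;
                // poll() is atomic: two workers can never receive the same row
                while ((row = rows.poll()) != null) {
                    System.out.println(Thread.currentThread().getName() + " -> " + row);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

In the script above, the equivalent move would be to build one test step context per worker and have each worker pull its own row, rather than sharing one mutable Properties step across threads.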

Why am I getting StackOverflowError?

In Groovy Console I have this:
import groovy.util.*
import org.codehaus.groovy.runtime.*
def gse = new GroovyScriptEngine("c:\\temp")
def script = gse.loadScriptByName("say.groovy")
this.metaClass.mixin script
say("bye")
say.groovy contains
def say(String msg) {
    println(msg)
}
Edit: I filed a bug report: https://svn.dentaku.codehaus.org/browse/GROOVY-4214
It's when it hits the line:
this.metaClass.mixin script
The loaded script probably has a reference to the class that loaded it (this class), so when you try to mix it in, you get an endless loop.
A workaround is to do:
def gse = new groovy.util.GroovyScriptEngine( '/tmp' )
def script = gse.loadScriptByName( 'say.groovy' )
script.newInstance().with {
    say("bye")
}
[edit]
It seems to work if you use your original script, but change say.groovy to
class Say {
    def say( msg ) {
        println msg
    }
}
