Is there a way to reopen an input stream with withReader?

I know that the input stream is automatically closed at the end of this kind of block in Groovy:
def exec = ""
System.in.withReader {
println "input: "
exec = it.readLine()
}
but is there a way to reopen the stream if I want to do something like this:
def exec = ""
while(!exec.equals("q")) {
System.in.withReader {
println "input: "
exec = it.readLine()
}
if(!exec.equals("q")) {
//do something
}
}
When I try this I get this error on the second iteration of the while loop:
Exception in thread "main" java.io.IOException: Stream closed
So what would be the best way to achieve that?
Thanks.

You shouldn't try to reopen System.in, because you shouldn't close it in the first place. You could try something like the following:
def exec
def reader = System.in.newReader()
// create a new version of readLine that accepts a prompt, to remove duplication from the loop
reader.metaClass.readLine = { String prompt -> println prompt; readLine() }
// process lines until finished
while ((exec = reader.readLine("input: ")) != 'q') {
    // do something
}
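If you'd rather skip the metaprogramming, a minimal alternative sketch along the same lines (assuming the same quit-on-"q" convention as above) is to keep a single reader open for the whole run, loop over it, and never close System.in:
def reader = System.in.newReader()
def exec
while (true) {
    println "input: "
    exec = reader.readLine()
    if (exec == 'q') break
    // do something with exec
}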

Related

Kotlin: use a variable assigned in a while loop condition

I want to call a function in the loop condition and reuse its result inside the loop, like this Java code, but in Kotlin:
while ((data = in.readLine()) != null) {
    System.out.println("\r\nMessage from " + clientAddress + ": " + data);
}
I tried copying this code into Android Studio to convert it to Kotlin automatically, and the result looks like this:
while (reader.readLine().also { line = it } != null) {
    Log.d("Line", line)
    lines.add(line)
}
With var line = "" declared before the loop I do manage to get the lines, but the loop doesn't stop when the reader is done receiving the message sent through the socket.
I want to send two messages through the socket, so I try to collect all the lines of the first message and, when the second one arrives, clear my lines variable to read the next message, but I can't get that to work.
Thanks!
You can use the reader.lineSequence() function:
reader.lineSequence().forEach {
    Log.d("Line", it)
    lines.add(it)
}
Or, as suggested in the documentation, you can improve this by using File.useLines(), which automatically closes the file reader so you don't have to do it manually.

How do I stop the execution of a command-line argument in Scala if the program runs for too long?

I am attempting to run a program from the command line that will run indefinitely if the image file passed to the program is corrupted and/or the name is wrong. I can test to make sure the filename is valid, but that won't help me if a rootkit damaged the image. My understanding is that the only way to quit the program is to create a separate thread, but sys.process.!! blocks until execution is completed.
val imageInfo: Option[String] = Some(s"python vol.py -f $memFile imageinfo".!!.trim)
You don't have to let a Process block until its completion.
import scala.sys.process.{Process, ProcessLogger}

var (iiOut, iiErr) = ("", "") // for collecting Process output
val getii = Process(s"python vol.py -f $memFile imageinfo")
  .run(ProcessLogger(iiOut += _, iiErr += _))
// . . .
// do other useful stuff
// or set a timeout alarm and wait for it
// . . .
val imageInfo: Option[String] =
  if (getii.isAlive()) {
    // report failure
    getii.destroy()
    None
  } else if (getii.exitValue() != 0 || iiErr != "") {
    // report failure
    None
  } else
    Some(iiOut.trim)
You can use a Future with Await to put a time limit on a Process that runs too long, like:
import java.util.concurrent.TimeUnit
import scala.concurrent.{Await, Future, TimeoutException}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
import scala.sys.process._

val sb = new StringBuffer         // sb captures the process output
val io = BasicIO(false, sb, None) // creates a ProcessIO that appends the output to the StringBuffer
val p = s"python vol.py -f $memFile imageinfo".run(io)
val f = Future {
  p.exitValue()      // blocks until the process exits and returns the exit code
  Some(sb.toString)  // return the process output
}
try {
  val str: Option[String] = Await.result(f, Duration(1, TimeUnit.SECONDS))
} catch {
  case e: TimeoutException =>
    println("run too long")
    p.destroy() // destroy the process when it runs too long
}

How to make capturing output from an external process thread-safe?

I've written a small method to execute the git command line tool and capture its output:
def git(String command) {
    command = "git ${command}"
    def outputStream = new StringBuilder()
    def errorStream = new StringBuilder()
    def process = command.execute()
    process.waitForProcessOutput(outputStream, errorStream)
    return [process.exitValue(), outputStream, errorStream, command]
}
I'm using it with GPars to clone multiple repositories simultaneously, like:
GParsPool.withPool(10) {
    repos.eachParallel { cloneUrl, cloneDir ->
        (exit, out, err, cmd) = git("clone ${cloneUrl} ${cloneDir}")
        if (exit != 0) {
            println "Error: ${cmd} failed with '${errorStream}'."
        }
    }
}
However, I believe my git method is not thread-safe: for example, a second thread could modify command in the first line of the method before the first thread reaches command.execute() in the fifth line of the method.
I could solve this by making the whole git method synchronized, but that would defeat the purpose of running it in different threads as I want clones to happen in parallel.
So I was thinking to do partial synchronization like
def git(String command) {
    def outputStream
    def errorStream
    def process
    synchronized {
        command = "git ${command}"
        outputStream = new StringBuilder()
        errorStream = new StringBuilder()
        process = command.execute()
    }
    process.waitForProcessOutput(outputStream, errorStream)
    return [process.exitValue(), outputStream, errorStream, command]
}
But I guess that is not safe either: waitForProcessOutput() in thread two might return earlier than in thread one, screwing up the outputStream/errorStream variables.
What is the correct way to get this thread-safe?
Change the assignment statement inside the eachParallel closure argument as follows:
def (exit, out, err, cmd) = git("clone ${cloneUrl} ${cloneDir}")
This will make the variables local to the closure, which in turn will make them thread-safe. The git() method is fine as is.
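As a purely illustrative sketch (hypothetical task values, not your repo code): without def, the multiple assignment writes to script-level binding variables shared by every thread, whereas with def each closure invocation gets its own locals.
import groovyx.gpars.GParsPool

GParsPool.withPool(10) {
    (1..20).eachParallel { n ->
        // 'def' makes exit and out locals of this closure invocation,
        // so no other thread can overwrite them
        def (exit, out) = [n, "output of task ${n}"]
        assert exit == n
    }
}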

Scala: Wait while a List is being filled

Assume a List that stores the results of jobs which are computed in a distributed fashion.
Now I have a main thread that is waiting for all jobs to finish.
I know the size the List needs to have once all jobs are finished.
What is the most elegant way in Scala to let the main thread (a while(true) loop) sleep and wake it up when the jobs are finished?
Thanks for your answers.
EDIT: OK, after trying the concept from @Stefan-Kunze without success (guess I didn't get the point...), here is an example with some code:
The first node:
class PingPlugin extends SmasPlugin
{
    val messages = new ListBuffer[BaseMessage]()
    val sum = 5

    def onStop = true

    def onStart =
    {
        log.info("Ping Plugin created!")
        true
    }

    def handleInit(msg: Init)
    {
        log.info("Init received")
        for (a <- 1 to sum)
        {
            msg.pingTarget ! Ping() // Ping extends BaseMessage
        }
        // block here until all messages are received
        // wait for messages.length == sum
        log.info("handleInit - messages received: %d/%d ".format(messages.length, sum))
    }

    /**
     * This method handles incoming Pong messages
     * @param msg Pong extends BaseMessage
     */
    def handlePong(msg: Pong)
    {
        log.info("Pong received from: " + msg.sender)
        messages += msg
        log.info("handlePong - messages received: %d/%d ".format(messages.length, sum))
    }
}
a second node:
class PongPlugin extends SmasPlugin
{
    def onStop = true

    def onStart =
    {
        log.info("Pong Plugin created!")
        true
    }

    /**
     * This method receives Ping messages and sends a Pong message back after a random time
     * @param msg Ping extends BaseMessage
     */
    def handlePing(msg: Ping)
    {
        log.info("Ping received from: " + msg.sender)
        val sleep: Int = math.round(5000 * Random.nextFloat())
        log.info("sleep: " + sleep)
        Thread.sleep(sleep)
        msg.sender ! Pong()
    }
}
I guess the solution is possible with futures...
Picking up @jilen's approach (this code assumes your results are of type Result):
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

// just like lists, futures can be yielded;
// results.size is the number of results you are expecting
val tasks: Seq[Future[Result]] = for (i <- 1 to results.size) yield Future {
  println("Executing task " + i)
  Thread.sleep(i * 1000L)
  val result = ??? // your code goes here
  result
}
// merge all future results into a future of a sequence of results
val aggregated: Future[Seq[Result]] = Future.sequence(tasks)
// block until all results have been computed
val allResults: Seq[Result] = Await.result(aggregated, Duration.Inf)
println("Results: " + allResults)
It's hard to test the code here, since I don't have the rest of this system, but I'll try. I'm assuming that somewhere underneath all of this is Akka.
First, blocking like this suggests a real design problem. In an actor system, you should send your messages and move on. Your log command should be in handlePong, once the correct number of pongs has come back. Blocking in init hangs the entire actor. You really should never do that.
But, ok, what if you absolutely have to do that? Then a good tool here would be the ask pattern. Something like this (I can't check that this compiles without more of your code):
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
...
implicit val timeout = Timeout(5.seconds)
var pendingPongs = List.empty[Future[Pong]]
for (a <- 1 to sum)
{
    // Ask each target with a Ping; mapTo gives us a typed Future.
    // Append the returned Future to pendingPongs.
    pendingPongs :+= (msg.pingTarget ? Ping()).mapTo[Pong] // Ping extends BaseMessage
}
// pendingPongs is a list of futures. We want a future of a list.
// sequence() does that for us (it needs an ExecutionContext in scope, e.g. the actor's dispatcher).
// We then block using Await until the future completes.
val pongs = Await.result(Future.sequence(pendingPongs), 5.seconds)
log.info(s"handlePong - messages received: ${pongs.length}/$sum")

What is the best way to leave a Groovy script prematurely (except System.exit(0))?

What is the best way to leave a Groovy script prematurely?
A Groovy script reads a row from a given info file and then performs some verification. If the verification fails (inconsistent data), the script needs to leave the flow prematurely. The system will then call the script again to read the next row of the same info file.
Code example :
read a row
try{
//make some verification here
}catch(Exception e){
logger("exception on something occurred "+e,e)
//here need to leave a groovy script prematurely
}
Just use System.exit(0).
try {
    // code
} catch (Exception e) {
    logger("exception on something occurred " + e, e)
    System.exit(0)
}
You could use the exit status code to indicate what line you had problems with.
A zero value would indicate that everything was OK, and a positive value would be the line number. You could then let your groovy script take the start line as an input parameter.
This is a naive implementation with just a silly exception if a line is empty.
file = new File(args[0])
startLine = args[1].toInteger()
file.withReader { reader ->
    reader.eachLine { line, count ->
        try {
            if (count >= startLine) {
                if (line.length() == 0) throw new Exception("error")
                println count + ': ' + line
            }
        } catch (Exception ignore) {
            System.exit(count)
        }
    }
}
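To sketch the calling side (a hedged example: the script above is assumed to be saved as verify.groovy and to read data.txt, both hypothetical names), a wrapper could re-run it starting from the line after the one that failed:
def startLine = 1
while (true) {
    // re-run the verifier, passing the file and the line to start from
    def proc = ["groovy", "verify.groovy", "data.txt", startLine.toString()].execute()
    proc.waitFor()
    int status = proc.exitValue()
    if (status == 0) break   // whole file processed successfully
    startLine = status + 1   // skip the failing line and continue
}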
I'm pretty sure you can just return from a script.
Simply use return:
read a row
try {
    // make some verification here
} catch (Exception e) {
    logger("exception on something occurred " + e, e)
    // here need to leave the groovy script prematurely
    return
}
Use return 0:
read a row
try {
    // make some verification here
} catch (Exception e) {
    logger("exception on something occurred " + e, e)
    return 0
}
