Hello everyone, I'm new here. Today I was playing around with Kotlin Coroutines and Channels. In the official documentation I saw this code snippet for channels:
val channel = Channel<Int>()
launch {
// this might be heavy CPU-consuming computation or async logic, we'll just send five squares
for (x in 1..5) channel.send(x * x)
}
// here we print five received integers:
repeat(5) { println(channel.receive()) }
println("Done!")
Then I ran this snippet in the Kotlin Playground with some modifications, so that I could see how many coroutines were at work.
My code:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun log(msg1: String, msg2: Int) = println("[${Thread.currentThread().name}] $msg1 $msg2")

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 1..5) {
            channel.send(x * x)
            log("sending", x * x)
        }
    }
    repeat(5) { log("receiving", channel.receive()) }
    println("Done!")
}
As expected, I received this output:
[main #coroutine#2] sending 1
[main #coroutine#1] receiving 1
[main #coroutine#1] receiving 4
[main #coroutine#2] sending 4
[main #coroutine#2] sending 9
[main #coroutine#1] receiving 9
[main #coroutine#1] receiving 16
[main #coroutine#2] sending 16
[main #coroutine#2] sending 25
[main #coroutine#1] receiving 25
Done!
But then I tried to launch a separate CoroutineScope, after watching a Kotlin Conference talk on YouTube in which Roman Elizarov illustrated that we can transfer data over a channel between different CoroutineScopes. So I tried this:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun log(msg1: String, msg2: Int) = println("[${Thread.currentThread().name}] $msg1 $msg2")

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 1..5) {
            channel.send(x * x)
            log("sending", x * x)
        }
    }
    CoroutineScope {
        launch { log("receiving", channel.receive()) }
    }
    println("Done!")
}
After running this, I get the error: Type mismatch: inferred type is () -> Job but CoroutineContext was expected
Here's what I tried to fix the error: I added Dispatchers.Default:
CoroutineScope {
    launch(Dispatchers.Default) { log("receiving", channel.receive()) }
}
It throws the same error. Apologies for the naive code.
Can somebody help?
I think you are confusing the CoroutineScope() factory function with the coroutineScope builder function. You are calling CoroutineScope with the { launch { ... } } block as its argument, but CoroutineScope expects a CoroutineContext.
Change CoroutineScope to coroutineScope (the suspending scope builder) and make sure kotlinx.coroutines.coroutineScope is imported; your wildcard import already covers it.
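A minimal corrected sketch of your second program (one assumption on my part: the receiver loops five times, because a single receive() would leave the sender suspended forever on send() and runBlocking would never return):

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun log(msg1: String, msg2: Int) = println("[${Thread.currentThread().name}] $msg1 $msg2")

fun main() = runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 1..5) {
            channel.send(x * x)
            log("sending", x * x)
        }
    }
    // coroutineScope is a suspending builder: it creates a child scope and
    // suspends until every coroutine launched inside it has completed
    coroutineScope {
        launch {
            repeat(5) { log("receiving", channel.receive()) }
        }
    }
    println("Done!")
}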
In my existing Scala code I replaced Thread.sleep(10000) with ZIO.sleep(Duration.fromScala(10.seconds)), with the understanding that it won't block a thread from the thread pool (a performance concern). When the program runs it does not wait at this line (whereas of course in the first case it does). Do I need to add any extra code to make the ZIO method work?
Here is the relevant section from my Play + Scala code:
def sendMultipartEmail = Action.async(parse.multipartFormData) { request =>
.....
//inside this controller below method is called
def retryEmailOnFail(pList: ListBuffer[JsObject], content: String) = {
if (!sendAndGetStatus(pList, content)) {
println("<--- email sending failed - retry once after a delay")
ZIO.sleep(Duration.fromScala(10.seconds))
println("<--- retrying email sending after a delay")
finalStatus = finalStatus && sendAndGetStatus(pList, content)
} else {
finalStatus = finalStatus && true
}
}
.....
}
As you said, ZIO.sleep will only suspend the fiber that is running, not the operating-system thread. But note that ZIO.sleep merely returns a description of a delay: because your code never runs or sequences that value, the line has no effect, which is why the program doesn't wait there.
If you want to start something after sleeping, you should chain it after the sleep:
// value 42 will only be computed after waiting for 10s
val io = ZIO.sleep(Duration.fromScala(10.seconds)).map(_ => 42)
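Transposed to your retryEmailOnFail, a rough sketch (assuming ZIO 1.x; sendAndGetStatus here is a stand-in for your real method, and the resulting effect must still be executed, e.g. by a zio.Runtime or a zio.App, since an unused ZIO value does nothing):

import zio._
import zio.duration.Duration
import scala.concurrent.duration._

object RetryOnce extends zio.App {
  // stand-in for the question's sendAndGetStatus; returns false to force a retry
  def sendAndGetStatus(): Boolean = false

  // the sleep is only a value; sequencing it with *> is what makes the wait happen
  val retryOnce: URIO[zio.clock.Clock, Boolean] =
    ZIO.effectTotal(sendAndGetStatus()).flatMap { ok =>
      if (ok) ZIO.succeed(true)
      else ZIO.sleep(Duration.fromScala(10.seconds)) *>
        ZIO.effectTotal(sendAndGetStatus())
    }

  def run(args: List[String]) =
    retryOnce.map(ok => println(s"final status: $ok")).exitCode
}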
I use an async HTTP client (AsyncHttpClient) in my code to handle GET responses asynchronously; I can run 100 requests simultaneously. I use just one instance of httpClient in the container:
#Bean(destroyMethod = "close")
open fun httpClient() = Dsl.asyncHttpClient()
The code looks like this:
fun method(): CompletableFuture<String> {
return httpClient.prepareGet("someUrl").execute()
.toCompletableFuture()
.thenApply(::getResponseBody)
}
Functionally it works fine. In my testing I use a mock endpoint at the same URL. But I expected all the requests to be handled by a few threads; instead, in the profiler I can see that 16 threads are created for AsyncHttpClient, and they aren't destroyed even when there are no requests to send.
My expectations were that:
- there would be fewer threads for the async client;
- the threads would be destroyed after some configured timeout.
Is there some option to control how many threads asyncHttpClient can create? Am I missing something in my expectations?
UPDATE 1
I read the instructions at https://github.com/AsyncHttpClient/async-http-client/wiki/Connection-pooling, but found no info on the thread pool there.
UPDATE 2
I also created a method that does the same, but with a handler and an additional executor pool. The utility method looks like this:
fun <Value, Result> CompletableFuture<Value>.handleResultAsync(executor: Executor, initResultHandler: ResultHandler<Value, Result>.() -> Unit): CompletableFuture<Result> {
val rh = ResultHandler<Value, Result>()
rh.initResultHandler()
val handler = BiFunction { value: Value?, exception: Throwable? ->
if (exception == null) rh.success?.invoke(value) else rh.fail?.invoke(exception)
}
return handleAsync(handler, executor)
}
The updated method looks like this:
fun method(): CompletableFuture<String> {
return httpClient.prepareGet("someUrl").execute()
.toCompletableFuture()
.handleResultAsync(executor) {
success = {response ->
logger.info("ok")
getResponseBody(response!!)
}
fail = { ex ->
logger.error("Failed to execute request", ex)
throw ex
}
}
}
Now I can see that the result of the GET request is handled on the threads provided by my executor (previously it ran on "AsyncHttpClient-3-x"), but the additional AsyncHttpClient threads are still created and not destroyed.
AHC has two types of threads:
1. Threads for I/O operations. On your screenshot these are the AsyncHttpClient-x-x threads; AHC creates 2 * core_number of them.
2. A thread for timeouts. On your screenshot this is the AsyncHttpClient-timer-1-1 thread; there should be only one.
Source: this issue on GitHub: https://github.com/AsyncHttpClient/async-http-client/issues/1658
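If you want to cap the I/O pool, the AHC config builder exposes an ioThreadsCount option. A minimal sketch (assuming AHC 2.x; the pool name is optional and just makes the threads easy to spot in a profiler):

import org.asynchttpclient.Dsl

// limit AHC's Netty I/O pool to 2 threads instead of the default
// 2 * core_number; the single timer thread is separate and will remain
val httpClient = Dsl.asyncHttpClient(
    Dsl.config()
        .setIoThreadsCount(2)
        .setThreadPoolName("ahc")
)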
I'm new to Groovy and am a bit lost on how to batch up requests so they can be submitted to a server as a batch, instead of individually, as I currently have:
class Handler {
private String jobId
// [...]
void submit() {
// [...]
// client is a single instance of Client used by all Handlers
jobId = client.add(args)
}
}
class Client {
//...
String add(String args) {
response = postJson(args)
return parseIdFromJson(response)
}
}
As it is now, something calls Client.add(), which POSTs to a REST API and returns a parsed result.
The issue I have is that the add() method is called maybe thousands of times in quick succession, and it would be much more efficient to collect all the args passed to add(), wait for a lull in the add() calls, and then POST to the REST API a single time for that batch, sending all the args in one go.
Is this possible? Potentially, add() can return a fake id immediately, as long as the batching occurs, the submit happens, and Client can later map each fake id to the real ID coming from the REST API (which returns IDs in the order corresponding to the args sent to it).
As mentioned in the comments, this might be a good case for gpars which is excellent at these kinds of scenarios.
This really is less about Groovy and more about asynchronous programming in Java and on the JVM in general.
If you want to stick with the Java concurrency idioms, I threw together a code snippet you could use as a potential starting point. It has not been tested and edge cases have not been considered. I wrote it up for fun; since this is asynchronous programming and I haven't spent the appropriate time thinking about it, I suspect there are holes in there big enough to drive a tank through.
That being said, here is some code which makes an attempt at batching up the requests:
import java.util.concurrent.*
import java.util.concurrent.locks.*
// test code
def client = new Client()
client.start()
def futureResponses = []
1000.times {
futureResponses << client.add(it as String)
}
client.stop()
futureResponses.each { futureResponse ->
// resolve future...will wait if the batch has not completed yet
def response = futureResponse.get()
println "received response with index ${response.responseIndex}"
}
// end of test code
class FutureResponse extends CompletableFuture<String> {
String args
}
class Client {
int minMillisLullToSubmitBatch = 100
int maxBatchSizeBeforeSubmit = 100
int millisBetweenChecks = 10
long lastAddTime = Long.MAX_VALUE
def batch = []
def lock = new ReentrantLock()
boolean running = true
def start() {
running = true
Thread.start {
while (running) {
checkForSubmission()
sleep millisBetweenChecks
}
}
}
def stop() {
running = false
// submit whatever is still batched, so no returned future is left
// incomplete (a plain checkForSubmission() could skip a final small batch)
withLock { if (batch) submitBatch() }
}
def withLock(Closure c) {
try {
lock.lock()
c.call()
} finally {
lock.unlock()
}
}
FutureResponse add(String args) {
def future = new FutureResponse(args: args)
withLock {
batch << future
lastAddTime = System.currentTimeMillis()
}
future
}
def checkForSubmission() {
withLock {
if (System.currentTimeMillis() - lastAddTime > minMillisLullToSubmitBatch ||
batch.size() > maxBatchSizeBeforeSubmit) {
submitBatch()
}
}
}
def submitBatch() {
// here you would need to put the combined args on a format
// suitable for the endpoint you are calling. In this
// example we are just creating a list containing the args
def combinedArgs = batch.collect { it.args }
// further there needs to be a way to map one specific set of
// args in the combined args to a specific response. If the
// endpoint responds with the same order as the args we submitted
// were in, then that can be used otherwise something else like
// an id in the response etc would need to be figured out. Here
// we just assume responses are returned in the order args were submitted
List<String> combinedResponses = postJson(combinedArgs)
combinedResponses.indexed().each { index, response ->
// here the FutureResponse gets a value, can be retrieved with
// futureResponse.get()
batch[index].complete(response)
}
// clear the batch
batch = []
}
// bogus method to fake post
def postJson(combinedArgs) {
println "posting json with batch size: ${combinedArgs.size()}"
combinedArgs.collect { [responseIndex: it] }
}
}
A few notes:
something needs to be able to react to the fact that there were no calls to add for a while. This implies a separate monitoring thread and is what the start and stop methods manage.
if we have an infinite sequence of adds without pauses, you might run out of resources. Therefore the code has a max batch size where it will submit the batch even if there is no lull in the calls to add.
the code uses a lock to make sure (or try to, as mentioned above, I have not considered all potential issues here) we stay thread safe during batch submissions etc
assuming the general idea here is sound, you are left with implementing the logic in submitBatch where the main problem is dealing with mapping specific args to specific responses
CompletableFuture is a java 8 class. This can be solved using other constructs in earlier releases, but I happened to be on java 8.
I more or less wrote this without executing or testing, I'm sure there are some mistakes in there.
as can be seen in the printout below, the "maxBatchSizeBeforeSubmit" setting is more a recommendation than an actual max. Since the monitoring thread sleeps for a while and then wakes up to check how we are doing, the threads calling the add method might have accumulated any number of requests in the batch. All we are guaranteed is that every millisBetweenChecks we wake up, check how we are doing, and submit the batch if the criteria for submission have been met.
If you are unfamiliar with Java futures and locks, I would recommend you read up on them.
If you save the above code in a groovy script code.groovy and run it:
~> groovy code.groovy
posting json with batch size: 153
posting json with batch size: 234
posting json with batch size: 243
posting json with batch size: 370
received response with index 0
received response with index 1
received response with index 2
...
received response with index 998
received response with index 999
~>
it should work and print out the "responses" received from our fake json submissions.
I have implemented multiple Groovy methods for sending different notifications, but the duplicated code defeats the point of having functions. So I want to rewrite/combine all the Groovy methods into one single method that I can call wherever I need it.
Success or failure shouldn't matter, and I need to pass the message as a parameter.
static void sendSuccessApplicationNotification(p1, p2, p3, p4) {
    def x = Notify(this)
    x.triggerBuild("SUCCESSFUL, application ${p1}:${p2} started properly", "${p3}")
    x.triggerBuild("SUCCESSFUL, application ${p1}:${p2} started properly", "${p4}")
}
Finally, the above should be converted into one method. I've checked many articles but haven't found an exact example.
You can use the Groovy template engine in your generalized function:
import groovy.text.SimpleTemplateEngine
void triggerBuild(a,b){
println "${a} >>>> ${b}"
}
void sendNotification(code, Map parms, List nodes) {
def templates = [
'appY': 'SUCCESSFUL, application ${app}:${ver} started properly',
'appN': 'FAILED, application ${app}:${ver} failed to start properly',
'depY': 'SUCCESSFUL deployment of ${app}:${ver} to ${node}<br>Executed by ${user}',
'depN': 'FAILED deployment of ${app}:${ver} to ${node}<br>Executed by ${user}'
]
def template = templates[code]
assert template
def message = new SimpleTemplateEngine().createTemplate(template).make(parms).toString()
nodes.each{node->
triggerBuild(message, node)
}
}
sendNotification('appY',[app:'myapp', ver:'v123'],['n1','n2'])
the code above will output:
SUCCESSFUL, application myapp:v123 started properly >>>> n1
SUCCESSFUL, application myapp:v123 started properly >>>> n2
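The other templates work the same way; you only need to pass the bindings they reference. A hypothetical failure-case call (values made up for illustration):

// 'depN' references ${app}, ${ver}, ${node} and ${user}, so all four are passed
sendNotification('depN', [app:'myapp', ver:'v124', node:'prod-7', user:'jenkins'], ['ops'])
// prints: FAILED deployment of myapp:v124 to prod-7<br>Executed by jenkins >>>> ops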
Assume we have a List in which the results of jobs that are computed in a distributed fashion are stored. Now I have a main thread that is waiting for all the jobs to finish. I know the size the List needs to reach for all jobs to be done.
What is the most elegant way in Scala to let the main thread (a while(true) loop) sleep and wake it up when the jobs are finished?
Thanks for your answers.
EDIT: OK, after trying the concept from @Stefan-Kunze without success (I guess I didn't get the point...), here is an example with some code:
The first node:
class PingPlugin extends SmasPlugin
{
val messages = new ListBuffer[BaseMessage]()
val sum = 5
def onStop = true
def onStart =
{
log.info("Ping Plugin created!")
true
}
def handleInit(msg: Init)
{
log.info("Init received")
for( a <- 1 to sum)
{
msg.pingTarget ! Ping() // Ping extends BaseMessage
}
// block here until all messages are received
// wait for messages.length == sum
log.info("handleInit - messages received: %d/%d ".format(messages.length, sum))
}
/**
* This method handles incoming Pong messages
* @param msg Pong extends BaseMessage
*/
def handlePong(msg: Pong)
{
log.info("Pong received from: " + msg.sender)
messages += msg
log.info("handlePong - messages received: %d/%d ".format(messages.length, sum))
}
}
a second node:
class PongPlugin extends SmasPlugin
{
def onStop = true
def onStart =
{
log.info("Pong Plugin created!")
true
}
/**
* This method receives Ping messages and send a Pong message back after a random time
* @param msg Ping extends BaseMessage
*/
def handlePing(msg: Ping)
{
log.info("Ping received from: " + msg.sender)
val sleep: Int = math.round(5000 * Random.nextFloat())
log.info("sleep: " + sleep)
Thread.sleep(sleep)
msg.sender ! Pong()
}
}
I guess the solution is possible with futures...
Picking up @jilen's approach (this code assumes your results are of type Result):
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// just like lists, futures can be yielded in a for-comprehension;
// results.size is the number of results you are expecting
val tasks: Seq[Future[Result]] = for (i <- 1 to results.size) yield Future {
  println("Executing task " + i)
  Thread.sleep(i * 1000L)
  val result: Result = ??? // your code goes here
  result
}
// merge all future results into a future of a sequence of results
val aggregated: Future[Seq[Result]] = Future.sequence(tasks)
// block until all the results have been computed
val allResults: Seq[Result] = Await.result(aggregated, Duration.Inf)
println("Results: " + allResults)
It's hard to test the code here, since I don't have the rest of this system, but I'll try. I'm assuming that somewhere underneath all of this is Akka.
First, blocking like this suggests a real design problem. In an actor system, you should send your messages and move on. Your log command belongs in handlePong, firing once the correct number of pongs has come back. Blocking in handleInit hangs the entire actor; you really should never do that.
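In other words, the fully reactive version needs no waiting at all. A minimal sketch against the question's own handlePong (the final log line is just a placeholder for whatever should happen once everything has arrived):

// sketch: handleInit just sends the pings and returns; the completion
// logic lives where the last Pong arrives
def handlePong(msg: Pong)
{
  log.info("Pong received from: " + msg.sender)
  messages += msg
  log.info("handlePong - messages received: %d/%d ".format(messages.length, sum))
  if (messages.length == sum)
    log.info("all %d pongs received, continue here".format(sum)) // placeholder
}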
But, ok, what if you absolutely have to do that? Then a good tool here would be the ask pattern. Something like this (I can't check that this compiles without more of your code):
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global // or your system's dispatcher
...
implicit val timeout = Timeout(5 seconds)

var pendingPongs = List.empty[Future[Pong]]

for (a <- 1 to sum)
{
  // Ask each target, expecting a Pong back. ask (?) returns a Future[Any],
  // so mapTo narrows it. Append the returned Future to pendingPongs.
  pendingPongs :+= (msg.pingTarget ? Ping()).mapTo[Pong] // Ping extends BaseMessage
}
// pendingPongs is a list of futures. We want a future of a list.
// sequence() does that for us. We then block using Await until the future completes.
val pongs = Await.result(Future.sequence(pendingPongs), 5 seconds)
log.info(s"handleInit - messages received: ${pongs.length}/$sum")