Kotlin program results in error when using BlockHound - multithreading

I have a scenario where the code looks like this:
import kotlinx.coroutines.runBlocking
import org.junit.jupiter.api.assertThrows // assuming JUnit 5's Kotlin assertThrows
import reactor.blockhound.BlockHound

BlockHound.install()

val throwable: Throwable = assertThrows<Throwable> {
    runBlocking {
        // Calling some method
    }
}
This code fails in some scenarios with the message: reactor.blockhound.BlockingOperationError: Blocking call! jdk.internal.misc.Unsafe#park
Can anyone please help me resolve this?

Related

Ktor - Keep POST alive until receiving websocket communication

I am building an API with Kotlin and Ktor that should be able to receive a normal POST request.
Upon receiving it, the server should keep the request alive and establish a series of asynchronous communications with other systems over WebSocket.
Only at the end of these communications, once certain information has been received, will it be able to respond to the POST request.
Needless to say, the request must be kept alive.
I'm not sure how to make this possible.
I have looked into coroutines and threads, but my inexperience prevents me from understanding which would be the best solution.
By default, code inside a coroutine executes sequentially, so you can simply put your WebSocket communication inside a route's handler and send the response at the end. Here is an example:
import io.ktor.client.*
import io.ktor.client.engine.okhttp.*
import io.ktor.client.plugins.websocket.*
import io.ktor.client.plugins.websocket.WebSockets
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.response.*
import io.ktor.server.routing.*
import io.ktor.server.websocket.*
import io.ktor.websocket.*

fun main() {
    val client = HttpClient(OkHttp) {
        install(WebSockets)
    }

    embeddedServer(Netty, port = 12345) {
        routing {
            get("/") {
                client.webSocket("ws://127.0.0.1:5050/ws") {
                    outgoing.send(Frame.Text("Hello"))
                    val frame = incoming.receive()
                    println((frame as Frame.Text).readText())
                    println("Websockets is done")
                }
                call.respondText { "Done" }
            }
        }
    }.start(wait = false)

    embeddedServer(Netty, port = 5050) {
        install(io.ktor.server.websocket.WebSockets)
        routing {
            webSocket("/ws") {
                outgoing.send(Frame.Text("Hello from server"))
            }
        }
    }.start()
}

Swift - how can I wait for dispatch_async finish?

When my app starts for the first time, I perform a task of importing data from disk into Core Data. I do this in a background thread. Then I switch to the main thread and perform the load from Core Data.
The problem is that sometimes the load from Core Data occurs before the import from disk has finished. So I need a way to wait for the import to finish and only then perform the load from the DB.
How can I do this in Swift?
My code looks like this:
func firstTimeLaunch() {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0)) { [unowned self] in
        self.importArticlesListFromDisk()
        self.importArticlesFromDisk()
        dispatch_async(dispatch_get_main_queue()) { [unowned self] in
            self.loadArticlesListFromDb()
            self.loadArticlesFromDb()
        }
    }
}
Perhaps you should try adding a completion handler to importArticlesListFromDisk and importArticlesFromDisk, then loading from the DB in the completion block.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0)) { [unowned self] in
    self.importArticlesAndArticlesListFromDisk() {
        // Completion handler
        dispatch_async(dispatch_get_main_queue()) { [unowned self] in
            self.loadArticlesListFromDb()
            self.loadArticlesFromDb()
        }
    }
}
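For illustration, a combined importArticlesAndArticlesListFromDisk with a completion parameter might look roughly like this (a sketch of my own, not the asker's actual method; it just runs both existing imports and then calls back):
func importArticlesAndArticlesListFromDisk(completion: () -> Void) {
    // Runs on whatever queue the caller dispatched this work to.
    importArticlesListFromDisk()
    importArticlesFromDisk()
    // Both imports have returned by now, so it is safe to load from the DB.
    completion()
}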
I'd recommend using NSOperations. There is a great talk about this from WWDC 2015.
The sample code is also quite interesting for that purpose.
Essentially, you want to create a concurrent operation for each of your imports.
Let's imagine we override the start function of an operation importing your article list from disk:
override func start() {
    // long running import operation, even async...
    // when done: self.finish() // needs KVO overrides
    // finish causes the concurrent operation to terminate
}
A very nice thing you can do with operations is to set dependencies:
let importArticlesFromDiskOp = ...
let importArticlesFromDBOp = ...
importArticlesFromDBOp.addDependency(importArticlesFromDiskOp)
This way your import from DB would only run after the import from disk is done. I personally use this a LOT.
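In case it helps, here is a rough sketch of the KVO plumbing such a concurrent operation needs so that finish() really terminates it (my own illustration in the same pre-Swift-3 style as the question; the ImportArticlesListOperation name is made up):
class ImportArticlesListOperation: NSOperation {

    private var _executing = false
    private var _finished = false

    // Mark the operation as concurrent and expose our own state.
    override var asynchronous: Bool { return true }
    override var executing: Bool { return _executing }
    override var finished: Bool { return _finished }

    override func start() {
        if cancelled { finish(); return }

        willChangeValueForKey("isExecuting")
        _executing = true
        didChangeValueForKey("isExecuting")

        // Long running import, possibly kicked off asynchronously...
        // do the disk import here, and when it is done:
        finish()
    }

    func finish() {
        willChangeValueForKey("isExecuting")
        willChangeValueForKey("isFinished")
        _executing = false
        _finished = true
        didChangeValueForKey("isExecuting")
        didChangeValueForKey("isFinished")
    }
}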
good luck
R

Terminate existing pool when all work is done

Alright, brand new to GPars, so please forgive me if this has an obvious answer.
Here is my scenario. We currently have a piece of our code wrapped in a Thread.start {} block. It does this so it can send messages to a message queue in the background and not block the user request. An issue we have recently run into is that, for large blocks of work, it is possible for the user to perform another action which causes this block to execute again. As it is threaded, the second batch of messages can be sent before the first, corrupting data.
I would like to change this process to work as a queued flow with GPars. I've seen examples of creating pools such as
def pool = GParsPool.createPool()
or
def pool = new ForkJoinPool()
and then using the pool as
GParsPool.withExistingPool(pool) {
    ...
}
This seems like it would cover the case where the user performs the action again: I could reuse the created pool and the actions would not be performed out of order, provided I have a pool size of one.
My question is: is this the best way to do this with GPars? Furthermore, how do I know when the pool has finished all of its work? Does it terminate when all the work is finished? If so, is there a method that can be used to check whether the pool has finished/terminated, so I know I need a new one?
Any help would be appreciated.
No, explicitly created pools do not terminate by themselves. You have to call shutdown() on them explicitly.
Using the withPool() {} command, however, guarantees that the pool is destroyed once the code block finishes.
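A minimal sketch of the difference (my own illustration, assuming the groovyx.gpars.GParsPool API):
import groovyx.gpars.GParsPool

// Explicit pool: you own its lifecycle.
def pool = GParsPool.createPool(1)
GParsPool.withExistingPool(pool) {
    // ... queued work ...
}
pool.shutdown() // must be called, the pool does not terminate on its own

// withPool: the pool is created and destroyed around the block.
GParsPool.withPool(1) {
    // ... queued work ...
}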
Here is the current solution we have for our issue. It should be noted that we followed this route due to our requirements:
Work is grouped by some context
Work within a given context is ordered
Work within a given context is synchronous
Additional work for a context should execute after the preceding work
Work should not block the user request
Contexts are asynchronous between each other
Once work for a context is finished, the context should clean up after itself
Given the above, we've implemented the following:
import groovyx.gpars.actor.Actors

class AsyncService {
    def queueContexts

    def AsyncService() {
        queueContexts = new QueueContexts()
    }

    def queue(contextString, closure) {
        queueContexts.retrieveContextWithWork(contextString, true).send(closure)
    }

    class QueueContexts {
        def contextMap = [:]

        def synchronized retrieveContextWithWork(contextString, incrementWork) {
            def context = contextMap[contextString]
            if (context) {
                if (!context.hasWork(incrementWork)) {
                    contextMap.remove(contextString)
                    context.terminate()
                }
            } else {
                def queueContexts = this
                contextMap[contextString] = new QueueContext({->
                    queueContexts.retrieveContextWithWork(contextString, false)
                })
            }
            contextMap[contextString]
        }

        class QueueContext {
            def workCount
            def actor

            def QueueContext(finishClosure) {
                workCount = 1
                // One actor per context, so queued closures run serially and in order.
                actor = Actors.actor {
                    loop {
                        react { closure ->
                            try {
                                closure()
                            } catch (Throwable th) {
                                log.error("Uncaught exception in async queue context", th)
                            }
                            finishClosure()
                        }
                    }
                }
            }

            def send(closure) {
                actor.send(closure)
            }

            def terminate() {
                actor.terminate()
            }

            def hasWork(incrementWork) {
                workCount += (incrementWork ? 1 : -1)
                workCount > 0
            }
        }
    }
}

Scala client server multithreaded using socket

I can't get my head around this one.
I am a beginner to Scala, only a few weeks in, and I have tried but failed.
I have read about and tried Actors, Futures, etc., but they didn't work for me.
Could you supply code for a client/server example (or at least the server side)?
It is supposed to open a connection using a socket that receives a string (i.e. a file path) from several clients and process each one in a thread.
import java.net.{Socket, ServerSocket}
import java.util.concurrent.{Executors, ExecutorService}
import java.util.Date
import java.io._
import scala.io._
import java.nio._
import java.util._
import scala.util.control.Breaks
import java.security.MessageDigest
import java.security.DigestInputStream
import scala.util.Sorting

class NetworkService(port: Int, poolSize: Int) extends Runnable {
  val serverSocket = new ServerSocket(port)
  val pool: ExecutorService = Executors.newFixedThreadPool(poolSize)

  def run() {
    try {
      var i = 0
      while (true) {
        // This will block until a connection comes in.
        val socket = serverSocket.accept()
        val in = new BufferedReader(new InputStreamReader(socket.getInputStream)).readLine
        /*var f = new FileSplit(in) // FileSplit is another class that i would like each
                                    // client's sent string to be passed as an instance of
        f.move*/
        pool.execute(new Handler(socket))
      }
    } finally {
      pool.shutdown()
    }
  }
}

class Handler(socket: Socket) extends Runnable {
  def message = (Thread.currentThread.getName() + "\n").getBytes

  def run() {
    socket.getOutputStream.write(message)
    socket.getOutputStream.close()
  }
}

object MyServer {
  def main(args: Array[String]) {
    (new NetworkService(2030, 2)).run
  }
}
You have several options available. You could do a same-old Java-style app, basically just using the Java standard libraries with Scala syntax.
Maybe this helps: Scala equivalent of python echo server/client example?
You would just need to write logic that handles each socket (the one you get from accept()) in a new thread.
However, I would not recommend using the plain old Java approach directly. There are great libraries out there that can handle that for you. For example Akka:
http://doc.akka.io/docs/akka/2.3.3/scala/io-tcp.html
I would also urge you to read about futures, as they are super useful for doing things asynchronously.
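For completeness, a minimal sketch of the per-connection-thread idea mentioned above (my own illustration, not from the linked example; names like EchoServer are made up):
import java.net.ServerSocket
import java.io.{BufferedReader, InputStreamReader, PrintWriter}

object EchoServer {
  def main(args: Array[String]): Unit = {
    val serverSocket = new ServerSocket(2030)
    while (true) {
      val socket = serverSocket.accept() // blocks until a client connects
      new Thread(new Runnable {
        def run(): Unit = {
          val in   = new BufferedReader(new InputStreamReader(socket.getInputStream))
          val out  = new PrintWriter(socket.getOutputStream, true)
          val path = in.readLine()          // e.g. the file path sent by the client
          out.println("received: " + path)  // process the path here instead
          socket.close()
        }
      }).start()
    }
  }
}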

How to use futures with Akka for asynchronous results

I am trying to write to multiple files concurrently using the Akka framework. First I created a class called MyWriter that writes to a file; then, using futures, I call the object twice, hoping that two files will be created for me. But when I monitor the execution of the program, it first populates the first file and then the second one (blocking / synchronous).
Q: how can I make the code below run non-blocking / asynchronously?
import akka.actor._
import akka.dispatch._
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.concurrent.Future
import scala.concurrent.{ ExecutionContext, Promise }
import ExecutionContext.Implicits.global

class my_controler {
}

object Main extends App {
  val system = ActorSystem("HelloSystem")
  val myobj = system.actorOf(Props(new MyWriter), name = "myobj")
  implicit val timeout = Timeout(50 seconds)

  val future2 = Future { myobj ! save("lots of conentet") }
  val future1 = Future { myobj ! save("event more lots of conentet") }
}
the MyWriter code:
case class save(startval: String)

class MyWriter extends Actor {
  def receive = {
    case save(startval) => save_to_file(startval) // save_to_file writes the file (not shown in the question)
  }
}
Any ideas why the code does not execute concurrently?
Why are you wrapping the call to ? with an additional Future? Ask (?) already returns a Future, so what you are doing here is wrapping a Future around another Future, and I'm not sure that's what you wanted to do.
The second issue I see is that you are sending two messages to the same actor instance and expecting them to run in parallel. An actor instance processes its mailbox serially. If you want to process concurrently, you will need two instances of your FileWriter actor. If that's all you want to do, then just start up another instance of FileWriter and send it the second message.
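A minimal sketch of that second suggestion, reusing the question's MyWriter and save definitions (the actor names writer1/writer2 are mine):
import akka.actor._

object Main extends App {
  val system = ActorSystem("HelloSystem")

  // Two separate actor instances, so the two writes go into different mailboxes
  // and can be processed in parallel.
  val writer1 = system.actorOf(Props(new MyWriter), name = "writer1")
  val writer2 = system.actorOf(Props(new MyWriter), name = "writer2")

  writer1 ! save("lots of content")            // tell (!) returns immediately
  writer2 ! save("even more lots of content")  // handled concurrently by the second actor
}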
