I have this flow component in Akka Streams:
import akka.NotUsed
import akka.stream.scaladsl.Flow
import scala.concurrent.{ExecutionContext, Future}

object ExecutionFlow {
  def apply(
      execute: In => Future[Out],
      parallelism: Int)(implicit ec: ExecutionContext): Flow[In, Out, NotUsed] =
    Flow[In]
      .async // make the stream run asynchronously; essential for throughput
      .mapAsyncUnordered(parallelism)(execute)
}
Here In and Out are two data types, and execute is a heavyweight, expensive operation. There is a thread pool executor on which these operations run. It's set up like this:
private val threadPool = new ThreadPoolExecutor(10, 10, 300, TimeUnit.SECONDS, new SynchronousQueue[Runnable]())
Here the core pool size is 10, the max pool size is 10, and 300 is the keep-alive time in seconds. Note that this is Akka explicitly handing execution over to another thread pool.
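The question doesn't show the wiring, but presumably the executor is wrapped into the implicit ExecutionContext that ExecutionFlow receives, something like this sketch:

// a sketch of how the executor would become the implicit ec (not shown in the question)
implicit val blockingEc: ExecutionContext =
  scala.concurrent.ExecutionContext.fromExecutor(threadPool)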
Now, when I use this flow, as soon as messages come into the ExecutionFlow they are delegated to this thread pool executor; as new ones arrive, the flow keeps letting messages in and the thread pool's active thread count suddenly spikes. I lowered the parallelism value down to 1 and removed .async too, but the problem is still present.
When I do this, however:
object ExecutionFlow {
  def apply(
      execute: In => Future[Out], timeout: FiniteDuration): Flow[In, Out, NotUsed] =
    Flow[In]
      .map { in => Await.result(execute(in), timeout) }
}
the problem goes away. I presume the blocking Await.result stalls the stage and thereby implicitly slows down the processing here.
I wanted to know what would be a good strategy to apply backpressure effectively here, so that I do not have to deal with active-thread-usage spikes.
I am looking at https://doc.akka.io/docs/akka/current/stream/operators/index.html#backpressure-aware-operators to see if I can come up with some scheme, but I am at a loss.
Related
I am using a Scala Iterator for a waiting loop in a synchronized block:
import scala.util.Try

anObject.synchronized {
  if (Try(anObject.foo()).isFailure) {
    Iterator.continually {
      anObject.wait()
      Try(anObject.foo())
    }.dropWhile(_.isFailure).next()
  }
  anObject.notifyAll()
}
Is it acceptable to use Iterator with concurrency and multithreading? If not, why not? And what should be used instead, and how?
Some details, in case they matter: anObject is a mutable queue, and there are multiple producers and consumers of the queue, so the block above is the code of one such producer or consumer. anObject.foo is a simplified stand-in for a function that either enqueues (for a producer) or dequeues (for a consumer) data to/from the queue.
Iterator is mutable internally, so you have to take that into consideration if you use it in a multi-threaded environment. If you can guarantee that you won't end up in a situation where, e.g.:
2 threads check hasNext()
one of them calls next() - it happens to be the last element
the other calls next() - NoSuchElementException
(or similar) then you should be ok. In your example the Iterator doesn't even leave the scope, so the errors shouldn't come from the Iterator.
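To illustrate that race, here is a small sketch (assuming Scala 2.12+ for the Runnable SAM conversion) where two threads share one iterator over a single element; one of them may throw a NoSuchElementException:

val it = Iterator("last-element")
def consume() = if (it.hasNext) {
  Thread.sleep(10) // widen the window between hasNext and next
  println(it.next())
}
val t1 = new Thread(() => consume())
val t2 = new Thread(() => consume())
t1.start(); t2.start()
t1.join(); t2.join()
// both threads can see hasNext == true, so the second next() fails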
However, in your code I see an issue with having anObject.wait() and anObject.notifyAll() next to each other - if you call .wait, then you won't reach the .notifyAll that would unblock it. You can check in the REPL that this hangs:
@ val anObject = new Object { def foo() = throw new Exception }
anObject: {def foo(): Nothing} = ammonite.$sess.cmd21$$anon$1@126ae0ca
@ anObject.synchronized {
    if (Try(anObject.foo()).isFailure) {
      Iterator.continually {
        anObject.wait()
        Try(anObject.foo())
      }.dropWhile(_.isFailure).next()
    }
    anObject.notifyAll()
  }
// waits indefinitely
I would suggest changing the design to NOT rely on wait and notifyAll. However, from your code it is hard to say what you want to achieve, so I cannot tell whether this is more like a Promise/Future case, monix.Observable, monix.Task, or something else.
If your use case is a queue with producers and consumers, then it sounds like a use case for reactive streams - e.g. FS2 + Monix, but it could be FS2 + IO or something from Akka Streams:
val queue: Queue[Task, Item] // depending on the use case, the queue might need to be bounded

// in one part of the application
queue.enqueue1(item) // Task[Unit]

// in another part of the application
queue
  .dequeue
  .evalMap { item =>
    // ...
    result: Task[Result]
  }
  .compile
  .drain
This approach would require some change in how you think about designing an application, because you would no longer work with threads directly; rather, you would design a flow of data, declaring what is sequential and what can be done in parallel, with threads becoming just an implementation detail.
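For completeness, here is a runnable sketch of wiring such a queue up, assuming fs2 2.x with Monix Task (the Item type and the bound of 1024 are illustrative):

import fs2.concurrent.Queue
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

case class Item(value: String) // stand-in for your payload type

val program: Task[Unit] = for {
  queue <- Queue.bounded[Task, Item](1024) // the bound is what gives you backpressure
  _     <- queue.enqueue1(Item("hello"))   // producer side
  _     <- queue.dequeue                   // consumer side, as a stream
             .take(1)                      // take(1) so the sketch terminates
             .evalMap(item => Task(println(item.value)))
             .compile
             .drain
} yield ()

program.runSyncUnsafe()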
I have the following code:
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

def index = Action {
  Ok(Await.result(callSync, 10.seconds).body)
}

def callSync = {
  WS.url("http://yahoo.jp").get
}
Basically WS.url will return a Future[ws.Response], so in the code above I wanted to monitor the behaviour of this service when invoked in a blocking manner. In my action, I am waiting for the result and then displaying the response body. I am attempting this with 2000 concurrent users with a 20 second ramp. The problem is that the above code creates new threads in massive amounts, and the Play instance shuts down with the error "java.lang.OutOfMemoryError: unable to create new native thread". This is totally unexpected behaviour. I am using the default execution context, so this pool should only have cores + 1 threads. Why is the above creating a massive number of threads?
Await.result wraps the blocking wait for a result with scala.concurrent.blocking, which informs the ExecutionContext that it is blocking. The default ExecutionContext is backed by a fork-join pool, which would otherwise starve quickly since it only has as many threads as there are cores; instead, it spawns a new thread to keep the number of threads available for non-blocking operations the same.
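You can demonstrate this effect directly with a small sketch; run it and watch the thread count climb far past the number of cores:

import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global

for (_ <- 1 to 100) Future {
  // each blocking {} tells the fork-join pool a thread is tied up,
  // so it spawns a compensating thread to preserve parallelism
  blocking { Thread.sleep(10000) }
}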
Do this instead:
import play.api.libs.concurrent.Promise

def index = Action.async {
  Future.firstCompletedOf(List(
    callSync.map(x => Ok(x.body)),
    Promise.timeout(Ok("an error occurred"), 10.seconds)
  ))
}
I have one thread in the thread pool servicing a blocking request.
def sync = Action {
  import Contexts.blockingPool
  Future {
    Thread.sleep(100)
  }
  Ok("Done")
}
Contexts.blockingPool is configured as:
custom-pool {
  fork-join-executor {
    parallelism-min = 1
    parallelism-max = 1
  }
}
In theory, if the above action receives 100 simultaneous requests, the expected behaviour should be: one request sleeps for 100 ms while the other 99 are rejected (or queued until timeout?). However, I observed that extra worker threads were created to service the rest of the requests. I also observed that latency increased (requests got slower to service) as the number of threads in the pool fell below the number of requests received.
What is the expected behaviour when more requests arrive than the configured thread-pool size can handle?
Your test is not correctly structured to test your hypothesis.
If you go over this section in the docs you will see that Play has a few thread pools/execution contexts. The one that is important with regards to your question is the default thread pool and how that relates to the HTTP requests served by your action.
As the doc describes, the default thread pool is where all application code is run by default. I.e. all action code, including all Futures that do not explicitly define their own execution context, will run in this execution context/thread pool. So, using your example:
def sync = Action {
  // *** import Contexts.blockingPool
  // *** Future {
  // ***   Thread.sleep(100)
  // *** }
  Ok("Done")
}
All the code in your action not commented by // *** will run in the default thread pool.
I.e., when a request gets routed to your action:
the Future with the Thread.sleep will be dispatched to your custom execution context
then, without waiting for that Future to complete (because it's running in its own thread pool [Contexts.blockingPool] and therefore not blocking any threads on the default thread pool),
your Ok("Done") statement is evaluated and the client receives the response
Approx. 100 milliseconds after the response has been received, your Future completes
So, to explain your observation: when you send 100 simultaneous requests, Play will gladly accept those requests, route them to your controller action (executing on the default thread pool), dispatch the work to your Future, and then respond to the client.
The default size of the default pool is:
play {
  akka {
    ...
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-factor = 1.0
          parallelism-max = 24
        }
      }
    }
  }
}
to use 1 thread per core up to a max of 24.
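For example, on an 8-core machine this resolves to ceil(8 × 1.0) = 8 threads; the parallelism-max cap of 24 only kicks in on machines with more than 24 cores.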
Given that your action does very little (excluding the Future), you will be able to handle thousands of requests per second without breaking a sweat. Your Future will, however, take much longer to work through the backlog, because you are blocking the only thread in your custom pool (blockingPool).
If you use my slightly adjusted version of your action, you will see the log output confirm the explanation above:
object Threading {
  def sync = Action {
    val defaultThreadPool = Thread.currentThread().getName
    import Contexts.blockingPool
    Future {
      val blockingPool = Thread.currentThread().getName
      Logger.debug(s"""\t>>> Done on thread: $blockingPool""")
      Thread.sleep(100)
    }
    Logger.debug(s"""Done on thread: $defaultThreadPool""")
    Results.Ok
  }
}

object Contexts {
  implicit val blockingPool: ExecutionContext = Akka.system.dispatchers.lookup("blocking-pool-context")
}
All your requests are swiftly dealt with first, and then your Futures complete one by one afterwards.
So in conclusion, if you really want to test how Play will handle many simultaneous requests with only one thread handling requests, then you can use the following config:
play {
  akka {
    loggers = ["akka.event.Logging$DefaultLogger", "akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-min = 1
          parallelism-max = 1
        }
      }
    }
  }
}
You might also want to add a Thread.sleep to your action like this (to slow the default thread pool's lonesome thread down a bit):
  ...
  Thread.sleep(100)
  Logger.debug(s"""<<< Done on thread: $defaultThreadPool""")
  Results.Ok
}
Now you will have 1 thread for requests and 1 thread for your Futures.
If you run this with many concurrent connections, you will notice that the client blocks while Play handles the requests one by one. Which is what you expected to see...
Play uses AkkaForkJoinPool, which extends scala.concurrent.forkjoin.ForkJoinPool.
It has an internal queue of tasks.
You may also find this description interesting with respect to how a fork-join pool handles blocking code: Scala: the global ExecutionContext makes your life easier
As a newbie, I am trying to understand how actors work. From the documentation, I think I understand that actors are objects which get executed in sync mode, and that actor execution can contain blocking/sync method calls, e.g. DB requests.
But what I don't understand is this: if you write an actor that has some blocking invocations inside (like a blocking query execution), it will mess up the whole thread pool (in the sense that CPU utilization will go down, etc.), right? I mean, from my understanding, there is no way for the JVM to know whether it can hand that thread over to someone else if/when the actor makes a blocking call.
So, given the nature of concurrency, shouldn't it be obvious that actors should never do any blocking calls?
If that is the case, what is the recommended way of doing a non-blocking/async call - say, a web service call that fetches something and sends a message to another actor when the request is completed? Should we simply use something like this within the actor:
future map { response => x ! response.body }
Is this the proper way of handling this?
Would appreciate it if you can clarify this for me.
It really depends on the use case. If the queries do not need to be serialized, then you can execute the query in a future and send the results back to the sender as follows:
import scala.concurrent.{Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global
import akka.pattern.pipe

val resFut = Future {
  blocking {
    executeQuery()
  }
}
resFut pipeTo sender
You could also create a dedicated dispatcher exclusively for the DB calls and use a router for actor creation. This way you can also easily limit the number of concurrent DB requests.
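A sketch of that suggestion, assuming Akka 2.3+ for RoundRobinPool; the dispatcher name "db-dispatcher" and the DbActor class are illustrative:

// in application.conf:
// db-dispatcher {
//   type = Dispatcher
//   executor = "thread-pool-executor"
//   thread-pool-executor { fixed-pool-size = 10 } // caps concurrent DB calls
// }

import akka.actor.Props
import akka.routing.RoundRobinPool

// ten routees, all running their blocking queries on the dedicated dispatcher
val dbRouter = system.actorOf(
  RoundRobinPool(10).props(Props[DbActor].withDispatcher("db-dispatcher")),
  "dbRouter")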
A really great intro is "The Neophyte's Guide to Scala Part 14: The Actor Approach to Concurrency": http://danielwestheide.com/blog/2013/02/27/the-neophytes-guide-to-scala-part-14-the-actor-approach-to-concurrency.html
The actor receives a message, wraps the blocking code in a future, and in the future's onSuccess callback sends out the results using other async messages. But beware that the sender variable could change, so close over it (make a local reference for the future to use).
P.S.: The Neophyte's Guide to Scala is a really great book.
Updated: (added sample code)
We have a worker and a manager. The manager sets the work to be done; the worker reports "got it" and starts a long process (sleep 1000). Meanwhile the system pings the manager with "alive" messages, and the manager pings the worker with them. When the work is done, the worker notifies the manager.
NB: the sleep 1000 is executed on the imported "default/global" thread pool executor - you can get thread starvation there.
NB: val commander = sender is needed to "close over" a reference to the original sender, because by the time onSuccess is executed, the current sender within the actor could already be set to some other 'sender' ...
Log:
01:35:12:632 Humming ...
01:35:12:633 manager: flush sent
01:35:12:633 worker: got command
01:35:12:633 manager alive
01:35:12:633 manager alive
01:35:12:633 manager alive
01:35:12:660 worker: started
01:35:12:662 worker: alive
01:35:12:662 manager: resource allocated
01:35:12:662 worker: alive
01:35:12:662 worker: alive
01:35:13:661 worker: done
01:35:13:663 manager: work is done
01:35:17:633 Shutdown!
Code:
import akka.actor.{Props, ActorSystem, ActorRef, Actor}
import com.typesafe.config.ConfigFactory
import java.text.SimpleDateFormat
import java.util.Date
import scala.concurrent._
import ExecutionContext.Implicits.global

object Sample {

  private val fmt = new SimpleDateFormat("HH:mm:ss:SSS")

  def printWithTime(msg: String) = {
    println(fmt.format(new Date()) + " " + msg)
  }

  class WorkerActor extends Actor {
    protected def receive = {
      case "now" =>
        val commander = sender
        printWithTime("worker: got command")
        future {
          printWithTime("worker: started")
          Thread.sleep(1000)
          printWithTime("worker: done")
        }(ExecutionContext.Implicits.global) onSuccess {
          // here commander = original sender who requested the start of the future
          case _ => commander ! "done"
        }
        commander ! "working"
      case "alive?" =>
        printWithTime("worker: alive")
    }
  }

  class ManagerActor(worker: ActorRef) extends Actor {
    protected def receive = {
      case "do" =>
        worker ! "now"
        printWithTime("manager: flush sent")
      case "working" =>
        printWithTime("manager: resource allocated")
      case "done" =>
        printWithTime("manager: work is done")
      case "alive?" =>
        printWithTime("manager alive")
        worker ! "alive?"
    }
  }

  def main(args: Array[String]) {
    val config = ConfigFactory.parseString("" +
      "akka.loglevel=DEBUG\n" +
      "akka.debug.lifecycle=on\n" +
      "akka.debug.receive=on\n" +
      "akka.debug.event-stream=on\n" +
      "akka.debug.unhandled=on\n" +
      "")
    val system = ActorSystem("mine", config)
    val actor1 = system.actorOf(Props[WorkerActor], "worker")
    val actor2 = system.actorOf(Props(new ManagerActor(actor1)), "manager")
    actor2 ! "do"
    actor2 ! "alive?"
    actor2 ! "alive?"
    actor2 ! "alive?"
    printWithTime("Humming ...")
    Thread.sleep(5000)
    printWithTime("Shutdown!")
    system.shutdown()
  }
}
You are right to be thinking about the thread pool if you are considering doing blocking calls in Akka. The more blocking you do, the larger the thread pool you will need. A completely non-blocking system only really needs a pool of threads equal to the number of CPU cores of your machine. The reference configuration uses a pool of 3 times the number of CPU cores on the machine to allow for some blocking:
# The core pool size factor is used to determine thread pool core size
# using the following formula: ceil(available processors * factor).
# Resulting size is then bounded by the core-pool-size-min and
# core-pool-size-max values.
core-pool-size-factor = 3.0
source
But you might want to increase akka.default-dispatcher.fork-join-executor.core-pool-size-factor to a higher number if you do more blocking, or make a dispatcher other than the default specifically for blocking calls, with a higher core-pool-size-factor.
As for the best way to do blocking calls in Akka: I would recommend scaling out by making multiple instances of the actors that do blocking calls and putting a router in front of them, to make them look like a single actor to the rest of your application.
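A sketch of such a dedicated dispatcher in application.conf (the name "blocking-io-dispatcher" and the sizes are illustrative):

# dispatcher reserved for blocking calls; generous core-pool-size-factor
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    core-pool-size-factor = 10.0
    core-pool-size-max = 64
  }
}

Actors are then bound to it with Props[...].withDispatcher("blocking-io-dispatcher").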
In Scala, how can I tell a thread: sleep t seconds, or until you receive a message? I.e. sleep at most t seconds, but wake up if t is not over and you receive a certain message.
The answer depends greatly on what the message is. If you're using actors (either the old variety or the Akka variety) then you can simply state a timeout value on receive. (react isn't really running until it gets a message, so you can't place a timeout on it.)
// Old style (scala.actors)
receiveWithin(1000) {
  case msg: Message => // whatever
  case TIMEOUT => // handle timeout
}

// Akka style
import scala.concurrent.duration._
context.setReceiveTimeout(1.second)
def receive = {
  case msg: Message => // whatever
  case ReceiveTimeout => // handle timeout
}
Otherwise, what exactly do you mean by "message"?
One easy way to send a message is to use the Java concurrent classes made for exactly this kind of thing. For example, you can use a java.util.concurrent.SynchronousQueue to hold the message, and the receiver can call the poll method which takes a timeout:
import java.util.concurrent.{SynchronousQueue, TimeUnit}

// Common variable
val q = new SynchronousQueue[String]

// Waiting thread: returns null if nothing arrives within the timeout
val msg = q.poll(1000, TimeUnit.MILLISECONDS)

// Sending thread will also block until a receiver is ready to take the message
q.offer("salmon", 1000, TimeUnit.MILLISECONDS)
An ArrayBlockingQueue is also useful in these situations (if you want the senders to be able to pack messages in a buffer).
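For example (a sketch; the capacity of 16 is arbitrary):

import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

val buf = new ArrayBlockingQueue[String](16)
buf.offer("salmon", 1000, TimeUnit.MILLISECONDS) // false if the buffer stays full for 1s
val msg = buf.poll(1000, TimeUnit.MILLISECONDS)  // null if nothing arrives within 1s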
Alternatively, you can use condition variables.
val monitor = new AnyRef
var messageReceived: Boolean = false

// The waiting thread...
def waitUntilMessageReceived(timeout: Int): Boolean = {
  monitor synchronized {
    // The time-out handling here is simplified for the purpose
    // of exhibition. The "wait" may wake up spuriously for no
    // apparent reason. So in practice, this would be more complicated,
    // actually.
    while (!messageReceived) monitor.wait(timeout * 1000L)
    messageReceived
  }
}

// The thread, which sends the message...
def sendMessage: Unit = monitor synchronized {
  messageReceived = true
  monitor.notifyAll
}
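The "more complicated" handling the comment alludes to tracks a deadline and re-checks the remaining time after every (possibly spurious) wake-up; a sketch:

def waitUntilMessageReceivedStrict(timeoutMillis: Long): Boolean =
  monitor synchronized {
    val deadline = System.currentTimeMillis() + timeoutMillis
    var remaining = timeoutMillis
    while (!messageReceived && remaining > 0) {
      monitor.wait(remaining)
      // recompute so a spurious wake-up doesn't restart the full timeout
      remaining = deadline - System.currentTimeMillis()
    }
    messageReceived
  }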
Check out Await. If you have some Awaitable objects then that's what you need.
Instead of making it sleep for a given time, make it wake up only on a Timeout() msg, and then you can send this message prematurely if you want it to "wake up".
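With Akka, one sketch of that idea (this lives inside an actor; WakeUp is a hypothetical message and t is the timeout in seconds from the question):

import scala.concurrent.duration._
import context.dispatcher // ExecutionContext for the scheduler

case object WakeUp

// the automatic wake-up after t seconds...
val timer = context.system.scheduler.scheduleOnce(t.seconds, self, WakeUp)
// ...and anyone can wake the actor early by sending WakeUp
// (optionally calling timer.cancel() first)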