How do I hook up scalaz-streams to reactive streams (as in reactive-streams.org) - slick

I wanted to stream the data returned from a slick 3.0.0 query via db.stream(yourquery) through scalaz-stream.
It looks like reactive-streams.org defines an API and a dataflow model that different libraries implement.
How do you do that with the back pressure flowing back from the scalaz-stream process to the slick publisher?

Take a look at https://github.com/krasserm/streamz
Streamz is a resource combinator library for scalaz-stream. It allows Process instances to consume from and produce to:
Apache Camel endpoints
Akka Persistence journals and snapshot stores and
Akka Stream flows (reactive streams) with full back-pressure support

I ended up answering my own question. This works if you are willing to use scalaz-stream queues to queue up the streaming results:
import org.reactivestreams.{Subscriber, Subscription}
import scalaz.concurrent.Task

def getData[T](publisher: slick.backend.DatabasePublisher[T],
               queue: scalaz.stream.async.mutable.Queue[T],
               batchRequest: Int = 1): Task[scala.concurrent.Future[Long]] =
  Task {
    // Completes with the number of elements streamed once the publisher is done.
    val p = scala.concurrent.Promise[Long]()
    var counter: Long = 0
    val s = new Subscriber[T] {
      var sub: Subscription = _

      def onSubscribe(s: Subscription): Unit = {
        sub = s
        sub.request(batchRequest)
      }

      def onComplete(): Unit = {
        sub.cancel()
        p.success(counter)
      }

      def onError(t: Throwable): Unit = p.failure(t)

      def onNext(e: T): Unit = {
        counter += 1
        queue.enqueueOne(e).run
        sub.request(batchRequest)
      }
    }
    publisher.subscribe(s)
    p.future
  }
When you run this (with run) you obtain a Future that completes when the query has finished streaming. You can compose on this Future if you want your computation to wait for all the data to arrive. You could also use an Await inside the Task in getData and then compose your computation on the returned Task object if you need all the data before continuing. For what I do, I compose on the Future's completion and shut down the queue so that my scalaz-stream knows to terminate cleanly.
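For example, here is a minimal usage sketch of that wiring; db, myQuery, Row, processRow and mySink are placeholders for your own Slick database, query, element type and downstream logic:
import scala.concurrent.ExecutionContext.Implicits.global
import scalaz.concurrent.Task
import scalaz.stream.{Process, async}

val q = async.unboundedQueue[Row]          // Row is a placeholder element type
val publisher = db.stream(myQuery.result)  // assumed Slick 3 streaming query

// Start the subscription; when the returned Future completes, close the queue
// so the downstream Process terminates cleanly.
getData(publisher, q).run.onComplete(_ => q.close.run)

val pipeline: Process[Task, Unit] = q.dequeue.map(processRow).to(mySink)
pipeline.run.run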

Here is a slightly different implementation (than the one posted by user1763729) which returns a Process:
import org.reactivestreams.{Subscriber, Subscription}
import scalaz.concurrent.Task
import scalaz.stream.{Process, async}
import slick.backend.DatabasePublisher

def getData[T](publisher: DatabasePublisher[T], batchSize: Long = 1L): Process[Task, T] = {
  val q = async.boundedQueue[T](10)
  val subscribe = Task.delay {
    publisher.subscribe(new Subscriber[T] {
      @volatile var subscription: Subscription = _

      override def onSubscribe(s: Subscription) {
        subscription = s
        subscription.request(batchSize)
      }

      override def onNext(next: T) = {
        q.enqueueOne(next).attemptRun
        subscription.request(batchSize)
      }

      override def onError(t: Throwable) = q.fail(t).attemptRun

      override def onComplete() = q.close.attemptRun
    })
  }
  Process.eval(subscribe).flatMap(_ => q.dequeue)
}
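A hypothetical way to consume it; db, myQuery, MyRow and handleRow are placeholders for your own Slick database, query, row type and per-row logic:
// Hypothetical usage of the Process-returning variant. Back pressure comes from
// the bounded queue: the Subscriber only requests more once enqueueing has run.
val rows: Process[Task, MyRow] = getData(db.stream(myQuery.result))
val job: Task[Unit] = rows.map(handleRow).run   // build the whole pipeline as a Task
job.run                                         // execute it (blocks until the stream ends)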

Related

Batch up requests in Groovy?

I'm new to Groovy and am a bit lost on how to batch up requests so they can be submitted to a server as a batch, instead of individually, as I currently have:
class Handler {
    private String jobId
    // [...]
    void submit() {
        // [...]
        // client is a single instance of Client used by all Handlers
        jobId = client.add(args)
    }
}

class Client {
    //...
    String add(String args) {
        response = postJson(args)
        return parseIdFromJson(response)
    }
}
As it is now, something calls Client.add(), which POSTs to a REST API and returns a parsed result.
The issue I have is that the add() method is called maybe thousands of times in quick succession, and it would be much more efficient to collect all the args passed in to add(), wait until there's a moment when the add() calls stop coming in, and then POST to the REST API a single time for that batch, sending all the args in one go.
Is this possible? Potentially, add() can return a fake id immediately, as long as the batching occurs, the submit happens, and Client can later map each fake id to the real ID coming from the REST API (which will return IDs in the order corresponding to the args sent to it).
As mentioned in the comments, this might be a good case for GPars, which is excellent at these kinds of scenarios.
This really is less about Groovy and more about asynchronous programming in Java and on the JVM in general.
If you want to stick with the Java concurrency idioms, I threw together a code snippet you could use as a potential starting point. It has not been tested and edge cases have not been considered. I wrote this up for fun, and since this is asynchronous programming and I haven't spent the appropriate time thinking about it, I suspect there are holes in there big enough to drive a tank through.
That being said, here is some code which makes an attempt at batching up the requests:
import java.util.concurrent.*
import java.util.concurrent.locks.*

// test code
def client = new Client()
client.start()

def futureResponses = []
1000.times {
    futureResponses << client.add(it as String)
}

client.stop()

futureResponses.each { futureResponse ->
    // resolve future...will wait if the batch has not completed yet
    def response = futureResponse.get()
    println "received response with index ${response.responseIndex}"
}
// end of test code

class FutureResponse extends CompletableFuture<String> {
    String args
}

class Client {
    int minMillisLullToSubmitBatch = 100
    int maxBatchSizeBeforeSubmit = 100
    int millisBetweenChecks = 10

    long lastAddTime = Long.MAX_VALUE
    def batch = []
    def lock = new ReentrantLock()
    boolean running = true

    def start() {
        running = true
        Thread.start {
            while (running) {
                checkForSubmission()
                sleep millisBetweenChecks
            }
        }
    }

    def stop() {
        running = false
        checkForSubmission()
    }

    def withLock(Closure c) {
        try {
            lock.lock()
            c.call()
        } finally {
            lock.unlock()
        }
    }

    FutureResponse add(String args) {
        def future = new FutureResponse(args: args)
        withLock {
            batch << future
            lastAddTime = System.currentTimeMillis()
        }
        future
    }

    def checkForSubmission() {
        withLock {
            if (System.currentTimeMillis() - lastAddTime > minMillisLullToSubmitBatch ||
                batch.size() > maxBatchSizeBeforeSubmit) {
                submitBatch()
            }
        }
    }

    def submitBatch() {
        // here you would need to put the combined args on a format
        // suitable for the endpoint you are calling. In this
        // example we are just creating a list containing the args
        def combinedArgs = batch.collect { it.args }

        // further there needs to be a way to map one specific set of
        // args in the combined args to a specific response. If the
        // endpoint responds with the same order as the args we submitted
        // were in, then that can be used otherwise something else like
        // an id in the response etc would need to be figured out. Here
        // we just assume responses are returned in the order args were submitted
        List<String> combinedResponses = postJson(combinedArgs)
        combinedResponses.indexed().each { index, response ->
            // here the FutureResponse gets a value, can be retrieved with
            // futureResponse.get()
            batch[index].complete(response)
        }

        // clear the batch
        batch = []
    }

    // bogus method to fake post
    def postJson(combinedArgs) {
        println "posting json with batch size: ${combinedArgs.size()}"
        combinedArgs.collect { [responseIndex: it] }
    }
}
A few notes:
something needs to be able to react to the fact that there were no calls to add for a while. This implies a separate monitoring thread and is what the start and stop methods manage.
if we have an infinite sequence of adds without pauses, you might run out of resources. Therefore the code has a max batch size where it will submit the batch even if there is no lull in the calls to add.
the code uses a lock to make sure (or tries to — as mentioned above, I have not considered all potential issues here) that we stay thread safe during batch submissions, etc.
assuming the general idea here is sound, you are left with implementing the logic in submitBatch, where the main problem is dealing with mapping specific args to specific responses
CompletableFuture is a Java 8 class. This can be solved using other constructs in earlier releases, but I happened to be on Java 8.
I more or less wrote this without executing or testing it, so I'm sure there are some mistakes in there.
as can be seen in the printout below, the "maxBatchSizeBeforeSubmit" setting is more a recommendation than an actual max. Since the monitoring thread sleeps for a while and then wakes up to check how we are doing, the threads calling the add method might have accumulated any number of requests in the batch in the meantime. All we are guaranteed is that every millisBetweenChecks we will wake up, check how we are doing, and submit the batch if the criteria for submitting one have been reached.
If you are unfamiliar with Java Futures and locks, I would recommend you read up on them.
If you save the above code in a groovy script code.groovy and run it:
~> groovy code.groovy
posting json with batch size: 153
posting json with batch size: 234
posting json with batch size: 243
posting json with batch size: 370
received response with index 0
received response with index 1
received response with index 2
...
received response with index 998
received response with index 999
~>
it should work and print out the "responses" received from our fake json submissions.

How to get multiple async results within a given timeout with GPars?

I'd like to retrieve multiple "costly" results using parallel processing but within a specific timeout.
I'm using GPars Dataflow.task but it looks like I'm missing something, as the process returns only when all dataflow variables are bound.
def timeout = 500
def mapResults = [:]

GParsPool.withPool(3) {
    def taskWeb1 = Dataflow.task {
        mapResults.web1 = new URL('http://web1.com').getText()
    }.join(timeout, TimeUnit.MILLISECONDS)
    def taskWeb2 = Dataflow.task {
        mapResults.web2 = new URL('http://web2.com').getText()
    }.join(timeout, TimeUnit.MILLISECONDS)
    def taskWeb3 = Dataflow.task {
        mapResults.web3 = new URL('http://web3.com').getText()
    }.join(timeout, TimeUnit.MILLISECONDS)
}
I did see in the GPars Timeouts doc a way to use Select to get the fastest result within the timeout.
But I'm looking for a way to retrieve as many results as possible within the given time frame.
Is there a better "GPars" way to achieve this?
Or with Java 8 Future/Callable ?
Since you're interested in Java 8 based solutions too, here's a way to do it:
int timeout = 250;
ExecutorService executorService = Executors.newFixedThreadPool(3);
try {
    Map<String, CompletableFuture<String>> map =
        Stream.of("http://google.com", "http://yahoo.com", "http://bing.com")
            .collect(
                Collectors.toMap(
                    // the key will be the URL
                    Function.identity(),
                    // the value will be the CompletableFuture text fetched from the url
                    (url) -> CompletableFuture.supplyAsync(
                        () -> readUrl(url, timeout),
                        executorService
                    )
                )
            );

    executorService.awaitTermination(timeout, TimeUnit.MILLISECONDS);

    // print the resulting map, cutting the text at 100 chars
    map.entrySet().stream().forEach(entry -> {
        CompletableFuture<String> future = entry.getValue();
        boolean completed = future.isDone()
            && !future.isCompletedExceptionally()
            && !future.isCancelled();
        System.out.printf("url %s completed: %s, error: %s, result: %.100s\n",
            entry.getKey(),
            completed,
            future.isCompletedExceptionally(),
            completed ? future.getNow(null) : null);
    });
} catch (InterruptedException e) {
    // rethrow
} finally {
    executorService.shutdownNow();
}
This will give you as many Futures as URLs you have, but gives you an opportunity to see if any of the tasks failed with an exception. The code could be simplified if you're not interested in these exceptions, only the contents of successful retrievals:
int timeout = 250;
ExecutorService executorService = Executors.newFixedThreadPool(3);
try {
    Map<String, String> map = Collections.synchronizedMap(new HashMap<>());

    Stream.of("http://google.com", "http://yahoo.com", "http://bing.com")
        .forEach(url -> {
            CompletableFuture
                .supplyAsync(
                    () -> readUrl(url, timeout),
                    executorService
                ).thenAccept(content -> map.put(url, content));
        });

    executorService.awaitTermination(timeout, TimeUnit.MILLISECONDS);

    // print the resulting map, cutting the text at 100 chars
    map.entrySet().stream().forEach(entry -> {
        System.out.printf("url %s completed, result: %.100s\n",
            entry.getKey(), entry.getValue());
    });
} catch (InterruptedException e) {
    // rethrow
} finally {
    executorService.shutdownNow();
}
Both versions of the code will wait for about 250 milliseconds (it will take only a tiny bit longer because of the submission of the tasks to the executor service) before printing the results. I found that about 250 milliseconds is the threshold on my network where some of these URLs can be fetched, but not necessarily all of them. Feel free to adjust the timeout to experiment.
For the readUrl(url, timeout) method you could use a utility library like Apache Commons IO. The tasks submitted to the executor service will get an interrupt signal even if you don't explicitly take the timeout parameter into account. I could provide an implementation for that, but I believe it's out of scope for the main issue in your question.

Events are not recovering in Akka 2.4.0 Persistence & Cassandra Journal Plugin 0.6

I am trying to write an app that uses Akka (version 2.4.0) Persistence and the Cassandra journal plugin (version 0.6, https://github.com/krasserm/akka-persistence-cassandra) to recover from failures.
The events are being stored in Cassandra with no issues; however, once I kill an actor so that the supervisor restarts it, the events are not received by receiveRecover.
It seems that the issue is with the plugin itself, because if I use a shared LevelDB journal instead of Cassandra, the events are received during the recovery step.
Here is the implementation of my persistent actor:
class SimplePersistentActor extends PersistentActor with ActorLogging {
  def persistenceId: String = context.self.path.name

  override def preRestart(cause: Throwable, msg: Option[Any]) = {
    log.debug(s"Restarting ${getClass.getSimpleName}")
    super.preRestart(cause, msg)
  }

  override def postStop() = {
    log.debug(s"Stopping ${getClass.getSimpleName}")
    super.postStop()
  }

  var transactionData: Either[UninitializedData, RunningTransactionData] = Left(UninitializedData())

  def receiveCommand = {
    case msg @ TransactionStart(transactionId) =>
      persist(msg) { _ => }
      log.debug(s"Starting a transaction with id $transactionId")
      transactionData = Right(RunningTransactionData(transactionId, List()))
      /* Send a reply */
      sender() ! transactionId

    case msg @ TransactionData(data) =>
      persist(msg) { _ => }
      transactionData match {
        case Right(t: RunningTransactionData) =>
          val updatedTransaction = t.copy(data = t.data ::: List(data))
          log.debug(s"There are ${updatedTransaction.data.size} data items within a transaction ${t.transactionId}")
          transactionData = Right(updatedTransaction)
          /* Send a reply */
          sender() ! t.transactionId
        case _ => log.error("Actor's transaction data is not initialized")
      }

    case TransactionEnd(transactionId) =>
      transactionData match {
        case Right(t: RunningTransactionData) =>
          log.debug(s"Ending a transaction with id ${t.transactionId}")
          transactionData = Left(UninitializedData())
          /* Send a reply */
          sender() ! t.transactionId
        case _ => log.error("Actor's transaction data is not initialized")
      }

    case other =>
      log.debug(s"Unexpected event received: $other")
  }

  def receiveRecover = {
    case message =>
      log.debug(s"Recovery Step. Message $message received")
  }
}
In both cases that I describe above, the code doesn't change.
Has anyone seen this issue before?
I've run into the same thing and found a fix that works for me. I know your question is pretty old but since I was seeing the same problem on the latest version of Akka Cassandra Persistence (0.86) I thought it would be worth mentioning.
The problem I had came from the following config.
cassandra-main-journal = ${cassandra-journal} {
    contact-points = ["localhost"]
    keyspace-autocreate = true
    tables-autocreate = true
    keyspace = "main_akka_journal"
}
So, take the default cassandra-journal config and override the keyspace, then override journalPluginId in the Akka persistent actor to point to this config.
What happens if you do this is that all writes from the actor go to the main_akka_journal keyspace. On restarting the actor you then get a RecoveryCompleted message, but you don't see any of the messages you had written. However, when you receive RecoveryCompleted, lastSequenceNr will be correct.
What's interesting is that if you have keyspace-autocreate=true you'll see that two keyspaces were created: main_akka_journal and akka.
So the problem is that the persistent actor writes to the main_akka_journal keyspace, but on restart it reads events from the akka keyspace (which is empty) and then reads lastSequenceNr from the main_akka_journal keyspace (which is correct).
The solution for me was this config:
cassandra-main-journal = ${cassandra-journal} {
    contact-points = ["localhost"]
    keyspace-autocreate = true
    tables-autocreate = true
    keyspace = "main_akka_journal"
    query-plugin = "cassandra-main-query-plugin"
}

cassandra-main-query-plugin = ${cassandra-query-journal} {
    write-plugin = "cassandra-main-journal"
}
Otherwise by default the write-plugin points to cassandra-journal and the akka keyspace.
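For reference, this is roughly what pointing an actor at that journal config looks like on the Scala side; a minimal sketch only, with journalPluginId being the standard Akka Persistence override and the rest of the actor's logic elided:
import akka.actor.ActorLogging
import akka.persistence.PersistentActor

// Minimal sketch: route this actor's writes (and recovery) through the
// "cassandra-main-journal" config block defined above.
class MainJournalActor extends PersistentActor with ActorLogging {
  override def persistenceId: String = context.self.path.name
  override def journalPluginId: String = "cassandra-main-journal"

  def receiveCommand = {
    case evt => persist(evt) { e => log.debug(s"persisted $e") }
  }

  def receiveRecover = {
    case msg => log.debug(s"Recovery step. Message $msg received")
  }
}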

Terminate existing pool when all work is done

Alright, brand new to gpars so please forgive me if this has an obvious answer.
Here is my scenario. We currently have a piece of our code wrapped in a Thread.start {} block. It does this so it can send messages to a message queue in the background and not block the user request. An issue we have recently run into with this is that, for large blocks of work, it is possible for the user to perform another action which causes this block to execute again. Since it is threaded, it is possible for the second batch of messages to be sent before the first, causing corrupted data.
I would like to change this process to work as a queue flow with gpars. I've seen examples of creating pools such as
def pool = GParsPool.createPool()
or
def pool = new ForkJoinPool()
and then using the pool as
GParsPool.withExistingPool(pool) {
...
}
This seems like it would account for the case that if the user performs an action again, I could reuse the created pool and the actions would not be performed out of order, provided I have a pool size of one.
My question is: is this the best way to do this with gpars? And furthermore, how do I know when the pool has finished all of its work? Does it terminate when all the work is finished? If so, is there a method that can be used to check whether the pool has finished/terminated, so I know I need a new one?
Any help would be appreciated.
No, explicitly created pools do not terminate by themselves; you have to call shutdown() on them explicitly.
Using the withPool() {} method, however, guarantees that the pool is destroyed once the code block has finished.
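This is not GPars-specific; the lifecycle is the same as for any explicitly created JDK executor. A small Scala sketch of the shutdown/termination calls involved (a single-thread executor is used here because it also gives the FIFO ordering you mention wanting with a pool size of one):
import java.util.concurrent.{Executors, TimeUnit}

// A plain JDK single-thread executor: tasks run one at a time, in submission order.
val pool = Executors.newSingleThreadExecutor()

pool.execute(new Runnable { def run(): Unit = println("first batch of messages") })
pool.execute(new Runnable { def run(): Unit = println("second batch of messages") })

// An explicitly created pool never terminates on its own:
pool.shutdown()                              // stop accepting new work
pool.awaitTermination(10, TimeUnit.SECONDS)  // block until queued work drains (or the timeout hits)
println(s"finished: ${pool.isTerminated}")   // true once all submitted work has completed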
Here is the current solution we have to our issue. It should be noted that we followed this route due to our requirements:
Work is grouped by some context
Work within a given context is ordered
Work within a given context is synchronous
Additional work for a context should execute after the preceding work
Work should not block the user request
Contexts are asynchronous between each other
Once work for a context is finished, the context should clean up after itself
Given the above, we've implemented the following:
class AsyncService {
    def queueContexts

    def AsyncService() {
        queueContexts = new QueueContexts()
    }

    def queue(contextString, closure) {
        queueContexts.retrieveContextWithWork(contextString, true).send(closure)
    }

    class QueueContexts {
        def contextMap = [:]

        def synchronized retrieveContextWithWork(contextString, incrementWork) {
            def context = contextMap[contextString]

            if (context) {
                if (!context.hasWork(incrementWork)) {
                    contextMap.remove(contextString)
                    context.terminate()
                }
            } else {
                def queueContexts = this
                contextMap[contextString] = new QueueContext({->
                    queueContexts.retrieveContextWithWork(contextString, false)
                })
            }

            contextMap[contextString]
        }

        class QueueContext {
            def workCount
            def actor

            def QueueContext(finishClosure) {
                workCount = 1
                actor = Actors.actor {
                    loop {
                        react { closure ->
                            try {
                                closure()
                            } catch (Throwable th) {
                                log.error("Uncaught exception in async queue context", th)
                            }
                            finishClosure()
                        }
                    }
                }
            }

            def send(closure) {
                actor.send(closure)
            }

            def terminate() {
                actor.terminate()
            }

            def hasWork(incrementWork) {
                workCount += (incrementWork ? 1 : -1)
                workCount > 0
            }
        }
    }
}

Scala: Wait while List is being filled

Assume we have a List in which the results of jobs that are computed in a distributed fashion are stored.
Now I have a main thread that is waiting for all jobs to finish.
I know the size the List needs to have once all jobs are finished.
What is the most elegant way in Scala to let the main thread (currently a while(true) loop) sleep and wake it up when the jobs are finished?
Thanks for your answers.
EDIT: OK, after trying the concept from @Stefan-Kunze without success (I guess I didn't get the point...), I'll give an example with some code:
The first node:
class PingPlugin extends SmasPlugin
{
  val messages = new ListBuffer[BaseMessage]()
  val sum = 5

  def onStop = true

  def onStart =
  {
    log.info("Ping Plugin created!")
    true
  }

  def handleInit(msg: Init)
  {
    log.info("Init received")
    for( a <- 1 to sum)
    {
      msg.pingTarget ! Ping() // Ping extends BaseMessage
    }

    // block here until all messages are received
    // wait for messages.length == sum

    log.info("handleInit - messages received: %d/%d ".format(messages.length, sum))
  }

  /**
   * This method handles incoming Pong messages
   * @param msg Pong extends BaseMessage
   */
  def handlePong(msg: Pong)
  {
    log.info("Pong received from: " + msg.sender)
    messages += msg
    log.info("handlePong - messages received: %d/%d ".format(messages.length, sum))
  }
}
A second node:
class PongPlugin extends SmasPlugin
{
  def onStop = true

  def onStart =
  {
    log.info("Pong Plugin created!")
    true
  }

  /**
   * This method receives Ping messages and sends a Pong message back after a random time
   * @param msg Ping extends BaseMessage
   */
  def handlePing(msg: Ping)
  {
    log.info("Ping received from: " + msg.sender)
    val sleep: Int = math.round(5000 * Random.nextFloat())
    log.info("sleep: " + sleep)
    Thread.sleep(sleep)
    msg.sender ! Pong()
  }
}
I guess the solution is possible with futures...
Picking up @jilen's approach (this code assumes your results are of a type Result):
import scala.concurrent._
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// just like lists, futures can be yielded
// results.size is the number of results you are expecting
val tasks: Seq[Future[Result]] = for (i <- 1 to results.size) yield future {
  println("Executing task " + i)
  Thread.sleep(i * 1000L)
  val result = ??? // your code goes here
  result
}

// merge all future results into a future of a sequence of results
val aggregated: Future[Seq[Result]] = Future.sequence(tasks)

// wait for your results to be computed
val allResults: Seq[Result] = Await.result(aggregated, Duration.Inf)
println("Results: " + allResults)
It's hard to test the code here, since I don't have the rest of your system, but I'll try. I'm assuming that somewhere underneath all of this is Akka.
First, blocking like this suggests a real design problem. In an actor system, you should send your messages and move on. Your log statement should live in handlePong, firing once the correct number of pongs have come back. Blocking in handleInit hangs the entire actor. You really should never do that.
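For instance, staying non-blocking, the check can live in handlePong itself; a sketch built on the fields already defined in PingPlugin above (messages and sum):
// Non-blocking alternative: react when the last Pong arrives instead of
// blocking handleInit. messages and sum are the PingPlugin fields above.
def handlePong(msg: Pong)
{
  messages += msg
  log.info("handlePong - messages received: %d/%d ".format(messages.length, sum))
  if (messages.length == sum)
    log.info("all pongs received, continue processing here")
}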
But, ok, what if you absolutely have to do that? Then a good tool here would be the ask pattern. Something like this (I can't check that this compiles without more of your code):
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
...
implicit val timeout = Timeout(5 seconds)
var pendingPongs = List.empty[Future[Pong]]

for( a <- 1 to sum)
{
  // Ask each target for Ping. Append the returned Future (cast to Pong) to pendingPongs.
  pendingPongs :+= (msg.pingTarget ? Ping()).mapTo[Pong] // Ping extends BaseMessage
}

// pendingPongs is a list of futures. We want a future of a list.
// sequence() does that for us. We then block using Await until the future completes.
val pongs = Await.result(Future.sequence(pendingPongs), 5 seconds)
log.info(s"handlePong - messages received: ${pongs.length}/$sum")
