I think I got this wrong from the docs.
I have two actors, XMLActor and HttpActor. XMLActor reads XML files and then sends a message to HttpActor to process. XMLActor will finish way sooner than HttpActor.
My main class calls join on both actors. I was expecting the main thread to terminate only after both actors were done. But what actually happens is that as soon as all messages have been processed by XMLActor, the system terminates and a lot of messages are never processed by HttpActor.
I could use a latch or even an AtomicInteger to wait for all messages to be consumed, but I was wondering if there's a more elegant way to do it.
final HttpActor httpActor = new HttpActor().start()
final XMLActor xmlActor = new XMLActor(httpActor: httpActor).start()

Actors.actor {
    file.eachLine { line ->
        def chunks = line.split(",")
        def id = chunks[0].replaceAll("\\\"", "").trim()
        def name = chunks[1].replaceAll("\\\"", "").trim()
        xmlActor << new FileToRead(basePath: args[1], id: id, name: name, fileCounter: counter)
    }
}

[httpActor, xmlActor]*.join()

// inside XMLActor
countries.each { country ->
    httpActor << new AlbumPriceMessage(id: message.id, country: country)
}
The join() method will certainly wait for both actors to finish. I don't see how you stop the two actors, so I can't really comment on that. Do you send some sort of poison message? Or call stop() on the actors?
For example, the following simulation of your case stops correctly:
import groovyx.gpars.actor.*

def httpActor = Actors.staticMessageHandler {
    println "Http actor processing " + it
}

def xmlActor = Actors.staticMessageHandler {
    println "XML Actor processing " + it
    httpActor << it
}

xmlActor.metaClass.afterStop = {
    httpActor.stop()
}

100.times {
    xmlActor << "File$it"
}

xmlActor.stop()
[xmlActor, httpActor]*.join()
println "done"
I was writing a Kotlin application that needs to retrieve data online.
I am using async(Dispatchers.IO) to get the result from the server:
val variable1 = async(Dispatchers.IO) {
    delay(10000)
    "I am the guy who comes 10 secs later\nDid you miss me?"
}
and variable1.join() to wait for the result, as shown below:
@ExperimentalCoroutinesApi
fun btn(view: android.view.View) {
    binding.firstText.text = ""
    runBlocking {
        launch(Dispatchers.IO) {
            //runOnUiThread { pop = popUp() }
            val variable1 = async(Dispatchers.IO) {
                delay(10000)
                "I am the guy who comes 10 secs later\nDid you miss me?"
            }
            variable1.join()
            val a = variable1.await()
            Log.d(TAG, "btn: ******************************************************* $a")
            runOnUiThread {
                //binding.firstText.text = a
            }
        }
    }
}
I have an issue getting the result asynchronously: variable1 keeps blocking the UI thread.
To my understanding, .join() waits for the result before continuing. But the problem is that it blocks the UI thread even when it's not run on the main thread.
How should I have done this task better? Thanks.
Since I see no evidence of any blocking operations, this is all you need:
fun btn(view: android.view.View) {
    binding.firstText.text = ""
    viewModelScope.launch {
        delay(10_000)
        val a = "I am the guy who comes 10 secs later\nDid you miss me?"
        Log.d(TAG, "btn: $a")
        binding.firstText.text = a
    }
}
If you do intend to perform blocking operations instead of that delay(10_000), then you can do this:
fun btn(view: android.view.View) {
    binding.firstText.text = ""
    viewModelScope.launch {
        val a = withContext(Dispatchers.IO) {
            blockingOperation()
            "I am the guy who comes 10 secs later\nDid you miss me?"
        }
        Log.d(TAG, "btn: $a")
        binding.firstText.text = a
    }
}
Note the viewModelScope: this won't work unless you're inside a ViewModel class. You can use GlobalScope instead to try things out, but that is not a production-worthy solution, as it leads to memory leaks at runtime whenever you trigger many such actions while the previous ones are still in progress (and they will be, because nothing cancels them).
I am trying to implement a thread-safe queue of integers using a Semaphore. It is not thread-safe at the moment. What would I have to add in terms of synchronization to the queue to make it thread-safe?
I've tried using synchronized blocks on the queue, so that only one thread is allowed in the queue at a time, but this does not seem to work, or I am misusing them. What should I be synchronizing on? I have a separate class with a maintainer thread that is constantly appending and removing.
import java.util.concurrent.Semaphore

class ThreadSafeQueue {
  var queue = List[Int]()
  val semaphore = new Semaphore(0)

  def append(num: Int): Unit = {
    queue = queue ::: List(num)
    semaphore.release()
  }

  def dequeue(): Int = {
    semaphore.acquire()
    val n = queue.head
    queue = queue.tail
    n
  }
}
To be thread-safe, you should place code that accesses the queue in synchronized blocks, as shown below.
import java.util.concurrent.Semaphore

class ThreadSafeQueue {
  var queue = List[Int]()
  val semaphore = new Semaphore(0)

  def append(num: Int): Unit = {
    synchronized {
      queue = queue ::: List(num)
    }
    semaphore.release()
  }

  def dequeue(): Int = {
    semaphore.acquire()
    synchronized {
      val n = queue.head
      queue = queue.tail
      n
    }
  }
}
A few notes:
With the Semaphore permits value set to 0, all acquire() calls will block until there is a release().
In case the Semaphore permits value is > 0, method dequeue would be better revised to return an Option[Int], to cover attempts to dequeue an empty queue.
In case there is only a single queue in your application, consider defining ThreadSafeQueue as object ThreadSafeQueue.
There is an arguably more efficient approach to thread-safety: atomic update using AtomicReference. See this SO link for the differences between the two approaches; a rough sketch of the idea follows below.
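For illustration only, here is a minimal sketch of the AtomicReference idea (my own code, not from the linked answer); it skips the blocking-on-empty behaviour and returns an Option[Int] instead:

import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

class AtomicQueue {
  private val ref = new AtomicReference(List.empty[Int])

  @tailrec
  final def append(num: Int): Unit = {
    val current = ref.get()
    // Retry if another thread changed the list between get and compareAndSet
    if (!ref.compareAndSet(current, current :+ num)) append(num)
  }

  @tailrec
  final def dequeue(): Option[Int] = {
    val current = ref.get()
    current match {
      case Nil => None
      case head :: tail =>
        if (ref.compareAndSet(current, tail)) Some(head)
        else dequeue()
    }
  }
}

Under contention this spins and retries instead of blocking, which is the trade-off against the Semaphore-based version.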
It is easy enough in D to create a Queue type using std.container.dlist.
I would like to have multiple threads, but have them communicate with a queue, not with message passing (https://tour.dlang.org/tour/en/multithreading/message-passing). As I understand it, the messages are designed to always receive data at particular points in the code; the receiving thread will block until the expected data is received.
(EDIT: I was informed about receiveTimeout, but having no timeout and just a check is really more appropriate in this case (maybe a timeout of 0?). Also, I am not sure what the message API will do if multiple messages are sent before any are received. I will have to play with that.)
import std.concurrency;
import std.stdio;

void main() {
    spawn(&worker, thisTid);
    // This line will block until the expected message is received.
    receive(
        (string message) {
            writeln("Received the message: ", message);
        }
    );
}
What I need is to simply receive data if there is some. Something like this:
void main() {
    Queue!string queue; // custom `Queue` type based on DList
    spawn(&worker, queue);
    while (true) {
        // Go through any messages (while consuming `queue`)
        foreach (string message; queue) {
            writeln("Received a message: ", message);
        }
        // Do other stuff
    }
}
I have tried using shared variables (https://tour.dlang.org/tour/en/multithreading/synchronization-sharing), but DMD complains with "Aliases to mutable thread-local data not allowed." or other errors, depending.
How would this be done in D? Or is there a way to use messages to do this kind of communication?
This doesn't answer the specific question, but it does clear up what I think is a misunderstanding of the message passing API...
Just call receiveTimeout instead of plain receive:
http://dpldocs.info/experimental-docs/std.concurrency.receiveTimeout.html
I use this:
shared class Queue(T) {
    private T[] queue;

    synchronized void opOpAssign(string op)(T object) if (op == "~") {
        queue ~= object;
    }

    synchronized size_t length() {
        return queue.length;
    }

    synchronized T pop() {
        assert(queue.length, "Please check queue length, is 0");
        auto first = queue[0];
        queue = queue[1..$];
        return first;
    }

    synchronized shared(T[]) consume() {
        auto copy = queue;
        queue = [];
        return copy;
    }
}
I have gotten the answer I need.
Simply put, use core.thread rather than std.concurrency. std.concurrency manages messages for you and does not allow you to manage them yourself. core.thread is what std.concurrency uses internally.
The longer answer: here is how I implemented it in full.
I have created a Queue type that is based on a singly linked list but maintains a pointer to the last element. The Queue also uses the standard components inputRange and outputRange (or at least I think it does) per Walter Bright's vision (https://www.youtube.com/watch?v=cQkBOCo8UrE).
The Queue is also built to allow one thread to write and another to read with very little mutexing internally, so it should be fast.
I shared the Queue here: https://pastebin.com/ddyPpLrp
A simple implementation to have a second thread read input:
Queue!string inputQueue = new Queue!string;
ThreadInput threadInput = new ThreadInput(inputQueue);
threadInput.start;

while (true) {
    foreach (string value; inputQueue) {
        writeln(value);
    }
}
With ThreadInput defined thus:
class ThreadInput : Thread {
    private Queue!string queue;

    this(Queue!string queue) {
        super(&run);
        this.queue = queue;
    }

    private void run() {
        while (true) {
            queue.put(readln);
        }
    }
}
The code https://pastebin.com/w5jwRVrL
The Queue again https://pastebin.com/ddyPpLrp
I want to implement something like the producer-consumer problem (with only one piece of information transferred at a time), but I want the producer to wait for someone to take its message before moving on.
Here is an example that doesn't block the producer but works otherwise.
class Channel[T]
{
  private var _msg : Option[T] = None

  def put(msg : T) : Unit =
  {
    this.synchronized
    {
      waitFor(_msg == None)
      _msg = Some(msg)
      notifyAll
    }
  }

  def get() : T =
  {
    this.synchronized
    {
      waitFor(_msg != None)
      val ret = _msg.get
      _msg = None
      notifyAll
      return ret
    }
  }

  private def waitFor(b : => Boolean) =
    while(!b) wait
}
How can I change it so the producer gets blocked (as the consumer is)?
I tried to add another waitFor at the end of put, but sometimes my producer doesn't get released.
For instance, if I have put ; get || get ; put, most of the time it works, but sometimes the first put does not terminate and the left thread never even runs the get method (I print something once the put call terminates, and in this case it never gets printed).
This is why you should use a standard class, in this case SynchronousQueue.
If you really want to work through your problematic code, start by giving us a failing test case or a stack trace from when the put is blocking.
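For illustration only, a minimal sketch (not the original poster's code) of how the whole Channel collapses onto java.util.concurrent.SynchronousQueue, whose put blocks until another thread takes the element:

import java.util.concurrent.SynchronousQueue

class Channel[T] {
  private val q = new SynchronousQueue[T]()

  // Blocks the producer until a consumer has taken the message
  def put(msg: T): Unit = q.put(msg)

  // Blocks the consumer until a producer hands over a message
  def get(): T = q.take()
}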
You can do this by means of a BlockingQueue descendant whose producer put() method creates a semaphore/event object that is queued up with the passed message; the producer thread then waits on it.
The consumer get() method extracts a message from the queue and signals its semaphore, allowing its original producer to run on.
This gives a 'synchronous queue' with actual queueing functionality, should that be what you want.
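A rough sketch of that idea, with made-up names and a LinkedBlockingQueue standing in for the BlockingQueue descendant (my assumption, not the answerer's code):

import java.util.concurrent.{LinkedBlockingQueue, Semaphore}

class HandshakeQueue[T] {
  // Each queued entry carries its own zero-permit semaphore for the producer to wait on
  private case class Entry(msg: T, done: Semaphore)
  private val queue = new LinkedBlockingQueue[Entry]()

  def put(msg: T): Unit = {
    val done = new Semaphore(0)
    queue.put(Entry(msg, done))
    // Block until a consumer has taken and acknowledged this message
    done.acquire()
  }

  def get(): T = {
    val entry = queue.take()
    // Wake the producer that queued this entry, then hand the message over
    entry.done.release()
    entry.msg
  }
}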
I came up with something that appears to be working.
class Channel[T]
{
  class Transfer[A]
  {
    protected var _msg : Option[A] = None

    def msg_=(a : A) = _msg = Some(a)

    def msg : A =
    {
      // Reading the message destroys it
      val ret = _msg.get
      _msg = None
      return ret
    }

    def isEmpty = _msg == None
    def notEmpty = !isEmpty
  }

  object Transfer {
    def apply[A](msg : A) : Transfer[A] =
    {
      var t = new Transfer[A]()
      t.msg = msg
      return t
    }
  }

  // Hacky but Transfer has to be invariant
  object Idle extends Transfer[T]

  protected var offer : Transfer[T] = Idle
  protected var request : Transfer[T] = Idle

  def put(msg : T) : Unit =
  {
    this.synchronized
    {
      // push an offer as soon as possible
      waitFor(offer == Idle)
      offer = Transfer(msg)
      // request the transfer
      requestTransfer
      // wait for the transfer to go (ie the msg to be absorbed)
      waitFor(offer isEmpty)
      // delete the completed offer
      offer = Idle
      notifyAll
    }
  }

  def get() : T =
  {
    this.synchronized
    {
      // push a request as soon as possible
      waitFor(request == Idle)
      request = new Transfer()
      // request the transfer
      requestTransfer
      // wait for the transfer to go (ie the msg to be delivered)
      waitFor(request notEmpty)
      val ret = request.msg
      // delete the completed request
      request = Idle
      notifyAll
      return ret
    }
  }

  protected def requestTransfer()
  {
    this.synchronized
    {
      if(offer != Idle && request != Idle)
      {
        request.msg = offer.msg
        notifyAll
      }
    }
  }

  protected def waitFor(b : => Boolean) =
    while(!b) wait
}
It has the advantage of respecting symmetry between producer and consumer but it is a bit longer than what I had before.
Thanks for your help.
Edit: It is better but still not safe…
I need to call a number of methods in parallel and wait for results. Each relies on different resources, so they may return at different times. I need to wait until I receive all results or time out after a certain amount of time.
I could just spawn threads with a reference to a shared object via a method call, but is there a better, more groovy way to do this?
Current Implementation:
import java.util.concurrent.*

def callables = []
ExecutorService exec = Executors.newFixedThreadPool(10)

for (obj in objects) {
    def method = {
        def result = new ResultObject(a: obj, b: obj.callSomeMethod())
        result
    } as Callable<ResultObject>
    callables << method
}

List<Future<ResultObject>> results = exec.invokeAll(callables)

for (result in results) {
    try {
        def searchResult = result.get()
        println 'result retrieved'
    } catch (Exception e) {
        println 'exception'
        e.printStackTrace()
    }
}
A Groovier solution is to use GPars - a concurrency library written in Groovy.
import static groovyx.gpars.GParsExecutorsPool.withPool

withPool {
    def callable = { obj -> new ResultObject(a: obj, b: obj.callSomeMethod()) }.async()
    List<ResultObject> results = objects.collect(callable)*.get()
}
AbstractExecutorService.invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit) covers the timeout requirement.
The Groovy part would be using closures as Callables.