Scala: wake up sleeping thread - multithreading

In Scala, how can I tell a thread to sleep for t seconds, or until it receives a message? That is, it should sleep at most t seconds, but wake up early if t is not over and a certain message arrives.

The answer depends greatly on what the message is. If you're using Actors (either the old variety or the Akka variety) then you can simply specify a timeout value on receive. (react isn't really running until it gets a message, so you can't place a timeout on it.)
// Old style
receiveWithin(1000) {
  case msg: Message => // whatever
  case TIMEOUT => // handle timeout
}

// Akka style
context.setReceiveTimeout(1 second)
def receive = {
  case msg: Message => // whatever
  case ReceiveTimeout => // handle timeout
}
Otherwise, what exactly do you mean by "message"?
One easy way to send a message is to use the Java concurrent classes made for exactly this kind of thing. For example, you can use a java.util.concurrent.SynchronousQueue to hold the message, and the receiver can call the poll method which takes a timeout:
// Common variable
val q = new java.util.concurrent.SynchronousQueue[String]

// Waiting thread: blocks for at most one second
val msg = q.poll(1000, java.util.concurrent.TimeUnit.MILLISECONDS)

// Sending thread will also block until the receiver is ready to take it
q.offer("salmon", 1000, java.util.concurrent.TimeUnit.MILLISECONDS)
An ArrayBlockingQueue is also useful in these situations (if you want the senders to be able to pack messages in a buffer).
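For instance, here is a minimal sketch (mine, not from the answer) of the buffered variant, where senders can queue up messages without waiting for the receiver:

import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

// Buffer up to 16 messages
val q = new ArrayBlockingQueue[String](16)

// Sending thread: returns false instead of blocking if the buffer is full
q.offer("salmon")

// Waiting thread: blocks for at most one second, returns null on timeout
val msg = q.poll(1000, TimeUnit.MILLISECONDS)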

Alternatively, you can use condition variables.
val monitor = new AnyRef
var messageReceived: Boolean = false

// The waiting thread...
def waitUntilMessageReceived(timeout: Int): Boolean = {
  monitor synchronized {
    // The time-out handling here is simplified for the purposes of
    // illustration: "wait" may wake up spuriously for no apparent
    // reason, so in practice this would be a bit more involved.
    while (!messageReceived) monitor.wait(timeout * 1000L)
    messageReceived
  }
}

// The thread which sends the message...
def sendMessage(): Unit = monitor synchronized {
  messageReceived = true
  monitor.notifyAll()
}
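As the comment in waitUntilMessageReceived notes, wait can wake up spuriously. A sketch (mine, not part of the original answer) of the more careful version tracks an absolute deadline, so spurious wake-ups do not extend the wait:

def waitUntilMessageReceivedStrict(timeoutMillis: Long): Boolean = monitor synchronized {
  val deadline = System.currentTimeMillis() + timeoutMillis
  var remaining = timeoutMillis
  while (!messageReceived && remaining > 0) {
    monitor.wait(remaining)
    remaining = deadline - System.currentTimeMillis()
  }
  messageReceived
}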

Check out scala.concurrent.Await. If you have an Awaitable object (a Future, for example), then Await.ready or Await.result with a timeout is what you need.
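A minimal sketch of that idea (my own, not from the answer), using a Promise as the "message" so the waiter sleeps at most 5 seconds but returns as soon as the promise is completed:

import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

val wakeUp = Promise[Unit]()

// Waiting thread: blocks for at most 5 seconds
try Await.ready(wakeUp.future, 5.seconds)
catch { case _: java.util.concurrent.TimeoutException => /* timed out, nobody woke us */ }

// "Sending" thread: wakes the waiter early
wakeUp.trySuccess(())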

Instead of making it sleep for a given time, make it wake up only on a Timeout() msg; then you can send that message prematurely if you want it to "wake up" early.
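For example, here is a hedged sketch of that approach with classic Akka actors (the WakeUp message and Sleeper actor are my own invention, just to illustrate the idea): schedule the timeout message to yourself, and cancel it if the real message arrives first.

import akka.actor.{Actor, Cancellable}
import scala.concurrent.duration._

case object WakeUp // the "Timeout()" message

class Sleeper extends Actor {
  import context.dispatcher
  // schedule the wake-up t seconds from now
  private val alarm: Cancellable =
    context.system.scheduler.scheduleOnce(5.seconds, self, WakeUp)

  def receive = {
    case WakeUp =>
      // t elapsed with no message: do the timeout work
    case msg: String =>
      alarm.cancel() // a real message arrived first, so cancel the alarm
      // handle msg
  }
}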

Related

dataTaskWithURL for dummies

I keep learning iOS development, but I still can't deal with HTTP requests.
It may sound crazy, but nobody I talk to about synchronous requests understands me. Okay, it's really important to keep as much as possible on a background queue to provide a smooth UI. But in my case I load JSON data from the server and I need to use this data immediately.
The only way I've achieved that is with semaphores. Is that okay, or do I have to use something else? I tried NSOperation, but in fact I have too many little requests, and creating a class for each of them doesn't make for easy-to-read code.
func getUserInfo(userID: Int) -> User {
    var user = User()
    let linkURL = URL(string: "https://server.com")!
    let session = URLSession.shared
    let semaphore = DispatchSemaphore(value: 0)
    let dataRequest = session.dataTask(with: linkURL) { (data, response, error) in
        let json = JSON(data: data!)
        user.userName = json["first_name"].stringValue
        user.userSurname = json["last_name"].stringValue
        semaphore.signal()
    }
    dataRequest.resume()
    semaphore.wait(timeout: DispatchTime.distantFuture)
    return user
}
You wrote that people don't understand you, but on the other hand your question reveals that you don't understand how asynchronous network requests work.
For example, imagine you are setting an alarm for a specific time.
Now you have two options for spending the intervening time.
Do nothing but sit in front of the alarm clock and wait until the alarm goes off. Have you ever done that? Certainly not, but this is exactly what you have in mind regarding the network request.
Do several useful things and ignore the alarm clock until it rings. That is how asynchronous tasks work.
In terms of a programming language you need a completion handler which is called by the network request when the data has been loaded. In Swift you are using a closure for that purpose.
For convenience, declare an enum with associated values for the success and failure cases and use it as the value passed to the completion handler:
enum RequestResult {
    case Success(User), Failure(Error)
}
Add a completion handler to your function that includes the error case. It is highly recommended to always handle the error parameter of an asynchronous task. When the data task returns, it calls the completion closure, passing either the user or the error depending on the situation.
func getUserInfo(userID: Int, completion: @escaping (RequestResult) -> ()) {
    let linkURL = URL(string: "https://server.com")!
    let session = URLSession.shared
    let dataRequest = session.dataTask(with: linkURL) { (data, response, error) in
        if error != nil {
            completion(.Failure(error!))
        } else {
            let json = JSON(data: data!)
            var user = User()
            user.userName = json["first_name"].stringValue
            user.userSurname = json["last_name"].stringValue
            completion(.Success(user))
        }
    }
    dataRequest.resume()
}
Now you can call the function with this code:
getUserInfo(userID: 12) { result in
    switch result {
    case .Success(let user):
        print(user)
        // do something with the user
    case .Failure(let error):
        print(error)
        // handle the error
    }
}
In practice, the point in time right after your semaphore wait and the switch result line in the completion block are exactly the same.
Never use semaphores as an excuse to avoid dealing with asynchronous patterns.
I hope the alarm clock example clarifies how asynchronous data processing works and why it is much more efficient to get notified (active) rather than waiting (passive).
Don't try to force network connections to work synchronously. It invariably leads to problems. Whatever code is making the above call could potentially be blocked for up to 90 seconds (30 second DNS timeout + 60 second request timeout) waiting for that request to complete or fail. That's an eternity. And if that code is running on your main thread on iOS, the operating system will kill your app outright long before you reach the 90 second mark.
Instead, design your code to handle responses asynchronously. Basically:
Create data structures to hold the results of various requests, such as obtaining info from the user.
Kick off those requests.
When each request comes back, check to see if you have all the data you need to do something, and then do it.
For a really simple example, if you have a method that updates the UI with the logged in user's name, instead of:
[self updateUIWithUserInfo:[self getUserInfoForUser:user]];
you would redesign this as:
[self getUserInfoFromServerAndRun:^(NSDictionary *userInfo) {
    [self updateUIWithUserInfo:userInfo];
}];
so that when the response to the request arrives, it performs the UI update action, rather than trying to start a UI update action and having it block waiting for data from the server.
If you need two things—say the userInfo and a list of books that the user has read, you could do:
[self getUserInfoFromServerAndRun:^(NSDictionary *userInfo) {
    self.userInfo = userInfo;
    [self updateUI];
}];

[self getBookListFromServerAndRun:^(NSDictionary *bookList) {
    self.bookList = bookList;
    [self updateUI];
}];

...

- (void)updateUI
{
    if (!self.bookList) return;
    if (!self.userInfo) return;
    ...
}
or whatever. Blocks are your friend here. :-)
Yes, it's a pain to rethink your code to work asynchronously, but the end result is much, much more reliable and yields a much better user experience.

Implementing the concept of "events" (with notifiers/receivers) in Golang?

I'm wondering what is the proper way to handle the concept of "events" (with notifiers/receivers) in Golang. I suppose I need to use channels but not sure about the best way.
Specifically, I have the program below with two workers. Under certain conditions, "worker1" goes in and out of a "fast mode" and notifies this via channels. "worker2" can then receive this event. This works fine, but then the two workers are tightly coupled. In particular, if worker2 is not running, worker1 gets stuck waiting when writing to the channel.
What would be the best way in Golang to implement this logic? Basically, one worker does something and notifies any other workers that it has done so. Whether other workers listen to this event or not must not block worker1. Ideally, any number of workers could listen to this event.
Any suggestion?
package main

import "fmt"

var fastModeEnabled = make(chan bool)
var fastModeDisabled = make(chan bool)

func worker1() {
    mode := "normal"
    _ = mode // mode is illustrative; referenced so this stripped-down example compiles

    for {
        // under some conditions:
        mode = "fast"
        fastModeEnabled <- true

        // later, under different conditions:
        mode = "normal"
        fastModeDisabled <- true
    }
}

func worker2() {
    for {
        select {
        case <-fastModeEnabled:
            fmt.Println("Fast mode started")
        case <-fastModeDisabled:
            fmt.Println("Fast mode ended")
        }
    }
}

func main() {
    go worker2()
    go worker1()
    select {} // block forever without spinning
}
Use a non-blocking write to the channel. This way, if anyone is listening, they receive the event; if no one is listening, it doesn't block the sender, although the event is lost.
You could use a buffered channel so that at least some events are buffered, if you need that.
You implement a non-blocking send by using a select statement with a default case. The default case makes it non-blocking; without a default case, a select blocks until one of its channels becomes usable.
Code snippet:
select {
case ch <- event:
    sent = true
default:
    sent = false
}

Repeat asynchronous requests to the server

What I want to do is repeat GET requests to a server asynchronously, over and over again, so that I can synchronize the local data with the remote data. I'd like to do this using Futures without involving Akka, because I just want to understand the basic idea of how to do it at a lower level. Preferably no async and await either, because they are high-level functions for Futures and Promises, and I'd like to use Futures and Promises themselves.
So these are my functions:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Success, Failure}

def sendHttpRequestToServer(): String = { ... }

def send(): Unit = {
  val f = Future { sendHttpRequestToServer() }
  f onComplete {
    case Success(x) =>
      processResult(x) // do something with result "x"
      send()           // delay if needed and send the request again
    case Failure(e) =>
      logException(e)
      send()           // send the request again
  }
}
That's what I think it might be. How could I change it? Is there any mistake in the algorithm? Your thoughts?
UPDATE:
As I understand it, futures are not designed for recurring tasks, only for one-off ones. Therefore, they can't be used here. What do I use then?
Your code has some issues with exception handling. If an exception is thrown in processResult or logException the send will no longer occur, breaking your loop. This exception will also not be logged. A better way is:
f.map(processResult).onFailure { case e => logException(e) }
f.onComplete(x => send())
This way the send still happens despite exceptions in processResult or logException, exceptions from processResult are logged, and the next send can begin immediately while the results are still being processed. If you want to wait until processing is complete, you could do:
val f2 = f.map(processResult).recover { case e => logException(e) }
f2.onComplete(x => send())
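Regarding the UPDATE: each future is indeed one-shot, but nothing stops you from starting a fresh one after the previous one completes. Here is a sketch (mine, not from the answer) that adds a delay between iterations with a plain ScheduledExecutorService, so no Akka is involved; sendHttpRequestToServer, processResult and logException are the question's own placeholders.

import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val scheduler = Executors.newSingleThreadScheduledExecutor()

def send(): Unit = {
  val f = Future { sendHttpRequestToServer() }
  f.map(processResult).recover { case e => logException(e) }.onComplete { _ =>
    // schedule the next iteration after a fixed delay instead of recursing immediately
    scheduler.schedule(new Runnable { def run(): Unit = send() }, 5, TimeUnit.SECONDS)
  }
}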

idiomatic timeouts for long processes in multi-threaded scala

So, I see a few questions on stackoverflow asking in one way or another how to "kill" a future, a la the deprecated Thread.stop(). I see answers explaining why it's impossible, but not an alternative mechanism to solve similar problems.
For example: Practical use of futures? Ie, how to kill them?
I realize a future can't be "killed."
I know how I could do this the Java way: break up the task into smaller sleeps, and have some "volatile boolean isStillRunning" in a thread class which is periodically checked. If I've cancelled the thread by updating this value, the thread exits. This involves "shared state" (the isStillRunning var), and if I were to do the same thing in Scala it wouldn't seem very "functional."
What's the correct way to solve this sort of problem in idiomatic, functional Scala? Is there a reasonably concise way to do it? Should I revert to "normal" threading and volatile flags? Should I use @volatile in the same way as the Java volatile keyword?
Yes, it looks the same as in Java.
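For instance, a minimal sketch (mine, not the answerer's) of that flag-based cooperative cancellation in Scala, the direct analogue of the Java volatile boolean:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

class CancellableWork {
  @volatile private var cancelled = false

  def cancel(): Unit = cancelled = true

  def start(): Future[Unit] = Future {
    while (!cancelled) {
      // do one small chunk of work, then re-check the flag
      Thread.sleep(100)
    }
  }
}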
For a test rig, where a test can hang or run too long, I use a promise for failing the test (for any reason). For instance, a timeout monitor can "cancel" the test runner (interrupt the thread and compareAndSet a flag) and then complete the promise with failure. Or test prep can fail a badly configured test early. Or, the test runs and produces a result. At a higher level, the test rig just sees the future and its value.
What is different from Java are your options for composing futures.
val all = Future.traverse(tests)(test => {
  val toKeep = promise[Result] // promise to keep, or fail by monitor
  val f = for (w <- feed(prepare(test, toKeep))) yield {
    monitored(w, toKeep) {
      listener.start(w)
      w.runTest()
    }
  }
  f completing consume _
  // strip context
  val g = f map (r => r.copy(context = null))
  (toKeep completeWith g).future
})
I think I've found a better solution to my own problem. Instead of using a volatile variable to let an operation know when to die, I can send a higher-priority exit message to the actor. It looks something like this:
val a = new Actor() {
  def act(): Unit = {
    loop { react {
      case "Exit" => exit(); return
      case MyMessage => {
        //this check makes "Exit" a high priority message, if "Exit"
        //is on the queue, it will be handled before proceeding to
        //handle the real message.
        receiveWithin(0) {
          case "Exit"  => exit(); return
          case TIMEOUT => //don't do anything.
        }
        sender ! "hi!" //reply to sender
      }
    }}
  }
}
a.start()

val f = a !! MyMessage
while (!f.isSet && a.getState != Actor.State.Terminated) {
  //will probably return almost immediately unless the Actor was terminated
  //after I checked.
  Futures.awaitAll(100, f)
}
if (a.getState != Actor.State.Terminated) {
  f() // the future should evaluate to "hi!"
}

a ! "Exit" //stops the actor from processing anymore messages,
           //regardless of if any are still on the queue.
a.getState // terminated
There's probably a cleaner way to write this, but that's approximately what I did in my application.
The receiveWithin(0) is an immediate no-op unless there is an "Exit" message on the queue. The queued "Exit" message replaces the volatile boolean I would have put in a threaded Java application.

"window procedure" of a newly created thread without window

I want to create a thread for some DB writes that should not block the UI in case the DB is not there. For synchronizing with the main thread, I'd like to use Windows messages. The main thread sends the data to be written to the writer thread.
Sending is no problem, since CreateThread returns the handle of the newly created thread. I thought about creating a standard windows event loop for processing the messages. But how do I get a window procedure as a target for DispatchMessage without a window?
Standard windows event loop (from MSDN):
while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
{
    if (bRet == -1)
    {
        // handle the error and possibly exit
    }
    else
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
Why Windows messages? Because they are fast (Windows relies on them) and thread-safe. This case is also special in that there is no need for the second thread to read any shared data; it just has to receive data, write it to the DB, and then wait for the next data to arrive. But that's just what the standard event loop does: GetMessage waits for the data, then the data is processed, and everything starts again. There's even a defined, well-understood signal for terminating the thread - WM_QUIT.
Other synchronizing constructs block one of the threads every now and then (critical section, semaphore, mutex). As for the events mentioned in the comment - I don't know them.
It might seem contrary to common sense, but for messages that aren't associated with a window, it's actually better to create a hidden window with your window proc than to manually filter the results of GetMessage() in a message pump.
The fact that you have an HWND means that as long as the right thread has a message pump going, the message is going to get routed somewhere. Consider that many functions, even internal Win32 ones, have their own message pumps (for example MessageBox()). And the code for MessageBox() isn't going to know to invoke your custom code after its GetMessage(), unless there's a window handle and window proc that DispatchMessage() will know about.
By creating a hidden window, you're covered by any message pump running in your thread, even if it isn't written by you.
EDIT: but don't just take my word for it, check these articles from Microsoft's Raymond Chen.
Thread messages are eaten by modal loops
Why do messages posted by PostThreadMessage disappear?
Why isn't there a SendThreadMessage function?
NOTE: Use this approach only when the thread doesn't need any UI-related or COM-related code. Outside of such corner cases it works correctly, and it is especially good for a pure computation-bound worker thread.
DispatchMessage and TranslateMessage are not necessary if the thread has no window, so you can simply omit them. An HWND has nothing to do with your scenario; you don't actually need to create any window at all. Note that those two *Message functions are only needed to handle Windows-UI-related messages such as WM_KEYDOWN and WM_PAINT.
I also prefer Windows messages for synchronizing and communicating between threads, using PostThreadMessage and GetMessage, or PeekMessage. I wanted to cut and paste from my code, but I'll just briefly sketch the idea.
#define WM_MY_THREAD_MESSAGE_X (WM_USER + 100)
#define WM_MY_THREAD_MESSAGE_Y (WM_USER + 101)

// Worker thread: no window in this thread
unsigned int CALLBACK WorkerThread(void* data)
{
    // Get the master thread's ID
    DWORD master_tid = ...;

    while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
    {
        if (bRet == -1)
        {
            // handle the error and possibly exit
        }
        else
        {
            if (msg.message == WM_MY_THREAD_MESSAGE_X)
            {
                // Do your task
                // If you want to respond:
                PostThreadMessage(master_tid, WM_MY_THREAD_MESSAGE_X, ... ...);
            }
            //...
            if (msg.message == WM_QUIT)
                break;
        }
    }
    return 0;
}

// In the master thread
//
// Spawn the worker thread
CreateThread( ... WorkerThread ... &worker_tid);

// Send a message to the worker thread
PostThreadMessage(worker_tid, WM_MY_THREAD_MESSAGE_X, ... ...);

// If you want the worker thread to quit
// (PostQuitMessage only affects the calling thread, so post WM_QUIT explicitly)
PostThreadMessage(worker_tid, WM_QUIT, 0, 0);

// If you want to receive messages from the worker thread, it's simple:
// you just need to write a message handler for WM_MY_THREAD_MESSAGE_X
LRESULT OnMyThreadMessage(WPARAM, LPARAM)
{
    ...
}
I'm not entirely sure this is what you wanted, but the code, I think, is very easy to understand. In general, a thread is created without a message queue; but once a window-message-related function is called, the message queue for the thread is initialized. Please note again that no window is necessary to post or receive window messages.
You don't need a window procedure in your thread unless the thread has actual windows to manage. Once the thread has called Peek/GetMessage(), it already has the same message that a window procedure would receive, and thus can act on it immediately. Dispatching the message is only necessary when actual windows are involved. It is a good idea to dispatch any messages that you do not care about, in case other objects used by your thread have their own windows internally (ActiveX/COM does, for instance). For example:
while( (bRet = GetMessage(&msg, NULL, 0, 0)) != 0 )
{
    if (bRet == -1)
    {
        // handle the error and possibly exit
    }
    else
    {
        switch( msg.message )
        {
            case ...: // process a message
                ...
                break;
            case ...: // process a message
                ...
                break;
            default: // everything else
                TranslateMessage(&msg);
                DispatchMessage(&msg);
                break;
        }
    }
}
