The OpenSL reference document does not answer whether it is possible to block within the callback specified by the call to SLBufferQueueItf::RegisterCallback().
I am thinking of doing this when the player signals that it is about to starve and the data is not ready yet.
According to the OpenSL ES programming notes for Android, callbacks should never block or perform excessive work; see the "Callbacks and Threads" section.
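So rather than blocking in the callback while waiting for data, a common approach is to enqueue something the player can consume immediately, such as a pre-filled silence buffer, and let another thread refill the real buffers. A minimal sketch, with illustrative names (gSilence, gNextBuffer, gNextBufferReady are not from any API):

/* Sketch only: instead of blocking in the buffer-queue callback, enqueue a
   pre-filled silence buffer when the next audio buffer is not ready yet. */
#include <SLES/OpenSLES.h>

#define BUF_SAMPLES 2048
static short gSilence[BUF_SAMPLES];        /* zero-filled at startup */
static short gNextBuffer[BUF_SAMPLES];     /* filled by a producer thread */
static volatile int gNextBufferReady = 0;  /* set by the producer thread */

static void bqCallback(SLBufferQueueItf bq, void *context)
{
    if (gNextBufferReady) {
        gNextBufferReady = 0;
        (*bq)->Enqueue(bq, gNextBuffer, sizeof(gNextBuffer));
    } else {
        /* Data not ready: do NOT block or sleep here. Queue silence so the
           player keeps running and let the producer catch up. A real player
           would rotate through more than one data buffer. */
        (*bq)->Enqueue(bq, gSilence, sizeof(gSilence));
    }
}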
Many articles online present Node.js as an example of the reactor pattern. Isn't it rather a proactor?
As far as I understand, the difference between the two is:
a reactor handles events in a single thread (synchronously),
a proactor handles events in multiple threads (asynchronously) with completion callbacks.
For example in this article:
Reactor Pattern is an idea of non-blocking I/O operations in Node.js. This pattern provides a handler(in case of Node.js, a callback function) that is associated with each I/O operation. When an I/O request is generated, it is submitted to a demultiplexer.
Isn't that actually the definition of a proactor?
I wasn't familiar with the Proactor design pattern. After reading a bit about it I think I understand your confusion.
Many articles online present Node.js as an example of the reactor pattern
This is true.
Isn't that actually the definition of a proactor?
This is also true.
The difference is your point of view.
Internally, node's event loop is a blocking call (ironically). That's just the most efficient way to use non-blocking I/O. Different OSes have different functions to request the OS to wake your process up if something you are interested in happens. Due to POSIX requirements there is a cross-platform API that all modern OSes support: select(). Node.js actually uses libuv which automatically picks the right API at compile time depending on the target platform. But for the purposes of this answer we're going to focus on select(). So lets look at select():
numberOfEvents = select(numberOfWaits, read, write, err, timeout);
The select() function blocks for up to timeout, or until something happens on one of the read, write or err files/sockets. With just this single function the OS provides enough functionality to implement most of node.js, from timers like setTimeout() and setInterval() to listening on network sockets. Using select(), the event loop looks something like this:
// Pseudocode:
while (1) {
    evaluateJavascript();
    timeout = calculateTimers();
    events = select(n, read, write, err, timeout);
    if (events > 0 || timersActive()) {
        getCallbacks(events, read, write, err, timers());
    }
}
This is basically a Reactor design pattern.
However, node hides this away in its implementation. What it exposes to Javascript programmers is a set of APIs that register callbacks and call those callbacks when an event happens. This is partly historical (the browser APIs were designed that way) and partly practical (it's a much more flexible architecture - almost all GUI frameworks, from GTK to wxWindows to .NET, work this way).
You may recognise that this sounds a lot like a Proactor design pattern. And in fact it is.
So node.js itself is an example of Reactor design pattern.
Javascript programs written in node.js are examples of Proactor design pattern.
The distinction has nothing to do with multithreading. It is as follows:
Reactor - I want to read from a socket, so I subscribe to a data-is-available kind of event, and when it fires I react to it by synchronously reading the available amount.
Proactor - I want to read from a socket, so I initiate a read operation (one that proactively reads the data, without waiting for me to react to its availability) and subscribe to some kind of read-is-complete notification, at which point the read data is immediately available to me.
Node has both kinds of APIs, e.g. stream.ReadableStream#readable/stream.ReadableStream#read are a Reactor interface, while fs.readFile is a Proactor.
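A small sketch of the two styles side by side (the file path is just a placeholder):

const fs = require('fs');

// Reactor style: subscribe to "data is available", then pull it yourself.
const stream = fs.createReadStream('/tmp/example.txt');
stream.on('readable', () => {
  let chunk;
  while ((chunk = stream.read()) !== null) {  // synchronously read what is buffered
    console.log('got %d bytes', chunk.length);
  }
});

// Proactor style: start the whole operation and get told when it has completed.
fs.readFile('/tmp/example.txt', (err, data) => {
  if (err) throw err;
  console.log('read %d bytes', data.length);  // data is already fully read here
});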
Can someone explain to me what this error I'm seeing is?
Current thread must be set to single thread apartment (STA) mode before OLE calls can be made.
Specifically, I'm trying to open the SaveFileDialog/OpenFileDialog within C++/CLI on a form.
SaveFileDialog^ saveFileDialog1 = gcnew SaveFileDialog;
if (saveFileDialog1->ShowDialog() == System::Windows::Forms::DialogResult::OK)
{
    System::IO::Stream^ s = saveFileDialog1->OpenFile();
    s->Close();
}
The error that is thrown is
An unhandled exception of type 'System.Threading.ThreadStateException' occurred in System.Windows.Forms.dll
Additional information: Current thread must be set to single thread apartment (STA) mode before OLE calls can be made. Ensure that your Main function has STAThreadAttribute marked on it. This exception is only raised if a debugger is attached to the process.
I'm not really familiar with what this error is saying. I know just a bit about threading, but I'm not sure how threading would be an issue here. I've seen some people reference things like STAThread without providing a clear explanation as to what it does, and Microsoft's documentation makes no mention of having this exception thrown when calling SaveFileDialog/OpenFileDialog, or how to handle it.
Thanks!
When you use OpenFileDialog then a lot of code gets loaded into your process. Not just the operating system component that implements the dialog but also shell extensions. Plugins that programmers write to add functionality to Windows Explorer. They work in that dialog as well. There are many, one you are surely familiar with is the extension that makes a .zip file look like a folder.
One thing Microsoft did when they designed the plug-in interface was to not force an extension to be thread-safe, because that is very hard to do and often a major source of bugs. Instead they made the promise that the thread that creates the plugin instance is also the thread on which any call to the plugin is made, thus ensuring that the plugin is always used in a thread-safe manner.
That, however, requires a little help from you. You have to make a promise that your thread, the one that calls OpenFileDialog::ShowDialog(), observes the requirements of a single-threaded apartment, STA for short. You make the promise with the [STAThread] attribute on your program's Main() entry point, or, if it is a thread that you created yourself, by calling Thread::SetApartmentState() before you start it.
That's just a promise, however; you also have to implement what you promised. It takes two things: you promise to never block the thread, and you promise to pump a message loop, Application::Run() in a .NET program. The never-block promise ensures that you won't cause deadlock, and the message-loop promise says that you implement a solution to the producer-consumer problem.
This should never be a problem, so it is very unclear how this got fumbled in your project. Another implicit requirement for a dialog is that it must have an owner, another window it can be on top of. If it doesn't have one then there are very high odds that the user never sees the dialog: covered by another program's window, the user can only ever find it back by accident. When you create windows you must also call Application::Run() so the windows can respond to user input. Use the boilerplate code of a C++/CLI WinForms app so this is done correctly.
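That boilerplate looks roughly like this in a C++/CLI WinForms project (MyApp::MyForm is a placeholder for whatever your project calls its form):

using namespace System;
using namespace System::Windows::Forms;

[STAThreadAttribute]   // the STA promise for the thread that will show dialogs
int main(array<String^>^ args)
{
    Application::EnableVisualStyles();
    Application::SetCompatibleTextRenderingDefault(false);
    // Pumps the message loop and gives your dialogs an owner window.
    Application::Run(gcnew MyApp::MyForm());
    return 0;
}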
I've been doing a fair amount of work with Node lately, trying to build a system which has certain characteristics, one of which is non-blocking / parallelism - a Node strong suit, as I understand it.
What I don't fully understand is when a separate thread is spun off to handle some processing. I'm pretty sure this happens on a function call/callback, but certainly not all of them.
In my specific case, it's an Express-based app. At app start-up it does several things, including instantiating a RabbitMQ-based "bus", an object with a method which will write to the bus (objA), and an object which will subscribe to the bus and process messages coming across it (objB).
objA will write to the bus inside an Express callback:
app.put((req, res) => {
    objA.methodWhichWritesToBus();
});
I believe at this point, that objA.methodWhichWritesToBus is executed in a background/worker thread - whatever you call it, not on the main event loop.
Is that the only point at which this sort of thing happens? methodWhichWritesToBus is I/O intensive (it calls an Elasticsearch service on another box and brings back tens to hundreds of thousands of records), with lots of chained promises etc., but none of that gets split off, does it?
How about the fact that the obj on which the method is called is instantiated outside the Express callback - does that affect the parallelism?
Finally, are there ways to force a method etc. to "run in the background"?
I've been noodling on this and testing it for a while now, but all on one machine, so it's difficult to tell what's going on.
Who can clarify this for me?
Pre-answer: this is a topic best learned by going and reading, doing coding exercises to solidify your understanding, and working with the technology in a significant way. You're not going to "get it" based on a Q&A format. That said...
What I don't fully understand is when a separate thread is spun off to handle some processing.
Never, sort of. "Processing", as in the computation that happens in your JavaScript program, happens in the main event loop thread. End of story. However, waiting on I/O to come back from the OS is not considered "processing", so there are various queues managed by node and the OS to track pending I/O requests and invoke callbacks when data is ready. There are a handful of threads node uses internally to manage this with the OS, but from your program's perspective those threads are irrelevant. Your program can ask node to do some I/O, then your program keeps running in parallel, and when the I/O is done node will eventually invoke the callback in the main event loop and you can process the results.
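For example (the path is just a placeholder), the read below is handed off to node, your code keeps going, and the callback runs later on the main thread:

const fs = require('fs');

fs.readFile('/tmp/big-file.dat', (err, data) => {
  // Runs later, back on the main event loop thread, once the OS has delivered the data.
  if (err) throw err;
  console.log('finished reading %d bytes', data.length);
});

console.log('this prints first; the read is pending in the background');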
I believe at this point, that objA.methodWhichWritesToBus is executed in a background/worker thread - whatever you call it, not on the main event loop.
You call it "asynchronously" and it happens whenever you do IO, including filesystem calls, networking, or child processes. Which is to say, quite a lot.
How about the fact that the obj on which the method is called is instantiated outside the Express callback - does that affect the parallel-ism?
Nope.
Finally, are the ways to effect/force a method etc to "run in the background"?
Generally I/O is done asynchronously by default, so no you don't normally need to force anything to run in the background. It's baked into the node design by way of the node core APIs themselves. However, there are ways to delay synchronous processing to a future event loop using setImmediate, setTimeout, or process.nextTick. I explain these in some detail in my blog post setTimeout and friends.
More precisely, all networking is asynchronous. End of story. Specifically, the APIs in node core that are available are all asynchronous, and there's simply no synchronous API available in node. For filesystem IO and child processes, there are both synchronous and asynchronous APIs, but the synchronous APIs must only be used under special limited circumstances, and if you don't know confidently that it's OK in this specific case to make a synchronous IO API call, you should use the asynchronous API so you don't break the lynchpin that makes node perform as it does.
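For example, fs has both flavours; the synchronous call blocks the entire event loop while it runs, which is why it's only appropriate in those limited circumstances (the path is a placeholder):

const fs = require('fs');

// Asynchronous: the event loop keeps servicing other callbacks while the file is read.
fs.readFile('/etc/hosts', 'utf8', (err, data) => {
  if (err) throw err;
  console.log('async read got %d chars', data.length);
});

// Synchronous: nothing else runs until this returns. Fine in a start-up script, bad in a server.
const text = fs.readFileSync('/etc/hosts', 'utf8');
console.log('sync read got %d chars', text.length);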
I am writing a Lua script that uses a library to access a hardware device with buttons. I register a callback function to handle the button presses. The code looks like:
globalvar = {}

function buttonCallback(buttonId)
    -- ...accessing globalvar
end

device.RegisterButtonCallback("buttonCallback")

while true do
end
This works.
Now I want to update globalvar not only at button presses but also at 1-minute intervals. Since I will need to access a network resource anyway, I plan on using the socket.select call to get the 1-minute interval.
#require "socket"
globalvar = {}
function buttonCallback(buttonId)
...access globalvar
end
device.RegisterButtonCallback("buttonCallback")
while true do
socket.select(nil, nil, 60) -- wait 60 seconds
...access network
...access globalvar
end
Now I am concerned about concurrent access to globalvar. How can I prevent race conditions here? Most sources on multithreading in Lua advise using coroutines and cooperative scheduling, but I don't see how that could be applied in my case.
Assuming the library you're using is creating another thread behind the scenes, and your only concern is accessing globalvar from within the callback, you could avoid it by writing to a pipe in the callback and reading from it in your select loop. In other words, use a standard POSIX-style pipe to communicate the callback's data back to the main thread. This is a fairly common technique when dealing with e.g. POSIX signals.
Lua is not thread-safe within a particular lua_State instance. You cannot modify a global variable from one thread while another thread is doing something else with that Lua instance. You most certainly cannot be executing two separate scripts on the same instance.
Thread safety is something you have to do outside of Lua. You cannot have the C/C++ thread that detects the button press actually call Lua code directly. It must send that data to the main thread via some thread-safe mechanism, where it will call the Lua script for them.
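A rough sketch of that arrangement, assuming the button callback really does arrive on a separate thread created by the C library (the pipe and function names below are illustrative, not part of the device library): the callback thread only writes the button id into a pipe, and the main thread, which owns the lua_State, reads from the pipe and calls the Lua function itself.

#include <unistd.h>
#include <lua.h>

static int button_pipe[2];   /* created once at startup with pipe(button_pipe) */

/* Runs on the library's internal thread: never touches the lua_State. */
static void on_button_press(int buttonId)
{
    unsigned char id = (unsigned char)buttonId;
    write(button_pipe[1], &id, 1);
}

/* Runs on the main thread, e.g. when select()/poll() says the pipe is readable. */
static void dispatch_button(lua_State *L)
{
    unsigned char id;
    if (read(button_pipe[0], &id, 1) == 1) {
        lua_getglobal(L, "buttonCallback");   /* the Lua function from the question */
        lua_pushinteger(L, id);
        if (lua_pcall(L, 1, 0, 0) != 0) {
            /* the error message is on top of the stack; log it, then pop it */
            lua_pop(L, 1);
        }
    }
}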
So I took a deep dive into the Lua books and online documentation, and contacted the author of the device driver. As the answers already indicated, it takes much more than anticipated to handle the button callbacks safely.
My approach now is to write the device driver myself and use sockets as communication channel between the device and the Lua script.
My initial approach was to use coroutines, as this is advocated as the Lua "replacement" for multithreading, but when I read the Programming in Lua book it turned out that, in order to prevent busy waits, it uses socket.select (!). This reinforced my feeling that a socket-based approach is good, especially since I also need sockets for internet access in my script.
I've successfully loaded a Spotify track from a playlist (verified by tracing the track name out to the screen) and passed it to be played using sp_session_player_load and sp_session_player_play(sess, 1).
However, my music_delivery callback is never called (I have some trace in there to show when it is). The libspotify FAQ seems to imply that it will be invoked by an internal thread inside the API and that I do not need to invoke sp_session_process_events to start the streaming.
My application is single-threaded, so I'm assuming there is no locking issue in my code.
Sources:
libspotify Haskell binding:
https://github.com/mrehayden1/libspotify
(You will need libspotify installed to get this to compile: https://developer.spotify.com/technologies/libspotify/#download)
The application code:
https://github.com/mrehayden1/harmony
A few ideas:
I do not need to invoke sp_session_process_events to start the streaming.
This is somewhat correct; however, you must call sp_session_process_events when you get a notify_main_thread callback. That callback comes in on another thread, so you need to correctly delegate it back to your main thread to make the call.
Since you mention you only have a single thread, also make sure you're not spinning in a tight loop somewhere, for example while (!sp_track_is_loaded(track)) {}. A lot of work in libspotify happens on the thread you make your calls on, so a busy loop like that will prevent libspotify from doing any work and everything will grind to a halt.
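The usual shape of this, similar to what the jukebox example does (the variable names here are mine, not from libspotify), is to have notify_main_thread do nothing but wake the main loop, and to make that loop the only place that calls sp_session_process_events:

#include <pthread.h>
#include <libspotify/api.h>

static pthread_mutex_t notify_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notify_cond  = PTHREAD_COND_INITIALIZER;
static int notify_events = 0;

/* Session callback: fires on a libspotify-internal thread.
   Do not call back into libspotify here; just wake the main thread. */
static void notify_main_thread(sp_session *session)
{
    pthread_mutex_lock(&notify_mutex);
    notify_events = 1;
    pthread_cond_signal(&notify_cond);
    pthread_mutex_unlock(&notify_mutex);
}

/* Main loop: the only thread that talks to libspotify. */
static void main_loop(sp_session *session)
{
    int next_timeout = 0;
    for (;;) {
        pthread_mutex_lock(&notify_mutex);
        /* A full implementation would use pthread_cond_timedwait so it also
           wakes after next_timeout milliseconds, as the jukebox example does. */
        while (!notify_events)
            pthread_cond_wait(&notify_cond, &notify_mutex);
        notify_events = 0;
        pthread_mutex_unlock(&notify_mutex);

        do {
            sp_session_process_events(session, &next_timeout);
        } while (next_timeout == 0);
    }
}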
passed it to be played using sp_session_player_load and sp_session_player_play(sess, 1).
What are the results of these calls? Loading metadata isn't the same as loading for playback, so you might be getting SP_ERROR_IS_LOADING back from the play call. In addition, the track might not be playable for some other reason, so the error is important.
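For example, logging both return codes (reusing the sess and track from your description) will tell you immediately whether the load is still pending or the track is unplayable for some reason:

sp_error err = sp_session_player_load(sess, track);
if (err != SP_ERROR_OK)
    fprintf(stderr, "player_load failed: %s\n", sp_error_message(err));

err = sp_session_player_play(sess, 1);
if (err != SP_ERROR_OK)
    fprintf(stderr, "player_play failed: %s\n", sp_error_message(err));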
If you're still having trouble, the problem may be in the bindings or elsewhere in your code. Check the jukebox example that comes with libspotify for an example C implementation of playback.